I have a service that simulates an IoT hardware component which periodically sends data to a SOAP endpoint:
private static final int DEFAULT_CHUNK_SIZE_IN_BYTES = 100 * 1024;
public void simulate(final int samplingFrequencyMs, final int uploadFrequencyMs) {
// ...
This method generates a JSON object every time samplingFrequencyMs elapses, calls a SOAP endpoint every time uploadFrequencyMs elapses, and splits the generated data into chunks of DEFAULT_CHUNK_SIZE_IN_BYTES bytes.
For example, if samplingFrequencyMs is 500 and uploadFrequencyMs is 1000, the method sends 2 generated JSON strings to the SOAP endpoint every second, and if those 2 JSON strings add up to 200 KB it sends them as 2 chunks.
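In other words, the number of chunks per upload is just a ceiling division of the accumulated payload size by the chunk size (an illustrative calculation only, not code from the simulator itself):

int chunkSize = DEFAULT_CHUNK_SIZE_IN_BYTES;                      // 100 * 1024 bytes
int uploadPayloadSize = 200 * 1024;                               // the 2 sampled JSON strings combined
int chunkCount = (uploadPayloadSize + chunkSize - 1) / chunkSize; // ceiling division -> 2 chunks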
There is also a method which stops the simulation:
public void stopSimulation() {
// ...
I've written a test for this simulator:
@Test
public void shouldSendProperAmountOfChunksWhenInvokingSimulate() throws InterruptedException
{
// ...
// when
underTest.simulate(100, 500);
Thread.sleep(750);
underTest.stopSimulation();
Thread.sleep(500);
// then
verify(requestServiceMock, times(EXPECTED_REQUEST_CALL_TIMES)).doRequest(uploadBinary, header);
}
but this test fails 1-2 times out of 100 because the service was not invoked EXPECTED_REQUEST_CALL_TIMES times, so it is not stable. How can I test code like this, where I have to verify how many times a method was called but timing is involved? I used 750 because it is between 500 and 1000 and thought that would do the trick, but there must be some testing best practice for this kind of situation.
What you need to do is abstract away the concept of system time from your CUT (class under test). A typical approach is to inject a Clock object that you can control. Your real implementation will delegate to the system clock, but within the test you have exact control over what's going on.
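For example, here is a minimal sketch of a controllable clock (MutableClock and a Simulator constructor that accepts a Clock are assumptions about how you might refactor, not code from your project):

import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;

// Test-only clock: time moves only when the test advances it explicitly.
class MutableClock extends Clock {
    private Instant now;

    MutableClock(Instant start) {
        this.now = start;
    }

    void advance(Duration amount) {
        now = now.plus(amount);
    }

    @Override
    public Instant instant() {
        return now;
    }

    @Override
    public ZoneId getZone() {
        return ZoneId.systemDefault();
    }

    @Override
    public Clock withZone(ZoneId zone) {
        return this;
    }
}

In production you construct the simulator with Clock.systemUTC(); in the test you pass a MutableClock, advance it by exactly one uploadFrequencyMs, trigger the simulator's periodic work, and then verify the mock without any Thread.sleep, so the expected call count becomes deterministic. If your simulator instead relies on a background ScheduledExecutorService, apply the same idea by injecting a deterministic executor that the test can drive.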