I use JMeter for HTTP-based performance testing in Kubernetes. Each JMeter instance has more than 80 GB of memory.
Everything works fine, but every 10 minutes the request rate drops from ~101 requests/second to ~95 requests/second:
summary + 3116 in 00:00:30 = 103.9/s Avg: 20 Min: 0 Max: 293 Err: 0 (0.00%) Active: 20 Started: 20 Finished: 0
summary = 58661 in 00:10:13 = 95.6/s Avg: 29 Min: 0 Max: 2625 Err: 23 (0.04%)
summary + 2883 in 00:00:30 = 96.0/s Avg: 24 Min: 0 Max: 2330 Err: 0 (0.00%) Active: 20 Started: 20 Finished: 0
summary = 61544 in 00:10:43 = 95.6/s Avg: 28 Min: 0 Max: 2625 Err: 23 (0.04%)
summary + 3097 in 00:00:30 = 103.2/s Avg: 20 Min: 0 Max: 319 Err: 0 (0.00%) Active: 20 Started: 20 Finished: 0
I have no idea why the request rate is decreasing. I also measure the latency between JMeter and the SUT, but I can't see any increase there. The SUT always responds with HTTP 200. Memory usage is ~30 GB, so each instance has more than enough headroom; CPU usage is ~80% per instance.
Any ideas?
It's hard to say what's wrong without seeing resource consumption in Kubernetes, JMeter, your application, etc., so you need to set up proper monitoring of everything, including JMeter's JVM metrics. If you don't have a monitoring toolchain yet, you can consider using the JMeter PerfMon plugin for this.
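For a quick look at the JMeter JVM itself, a minimal sketch would be to poll GC and heap statistics with jstat. This assumes JDK tools (jps/jstat) are available inside the JMeter container; the pod name below is a placeholder for yours:

kubectl exec jmeter-0 -- jps -l                        # note the PID of the ApacheJMeter process
kubectl exec jmeter-0 -- jstat -gcutil <PID> 5000 120  # GC/heap utilisation every 5 s for ~10 minutes

If old-generation occupancy climbs steadily and full GC counts jump around the same time the throughput dips, that points at the JVM rather than the SUT.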
If you need a guess: it might be caused by garbage collection, as described in Concurrent, High Throughput Performance Testing with JMeter.
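One way to confirm or rule that out is to enable GC logging for the JMeter JVM. A rough sketch, assuming JMeter picks up the JVM_ARGS environment variable (the standard jmeter startup script does) and a Java 11+ JVM with unified logging; the test plan and file paths are placeholders:

# For Java 8 use instead: -verbose:gc -XX:+PrintGCDetails -Xloggc:/tmp/gc.log
JVM_ARGS="-Xlog:gc*:file=/tmp/gc.log:time,uptime" ./jmeter -n -t test.jmx -l results.jtl

Afterwards, check /tmp/gc.log for long or clustered pauses that line up with the ~10-minute marks where the request rate drops.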