I am having some issues with G1GC.
2400.241: [GC concurrent-root-region-scan-start]
2400.241: [Full GC (Metadata GC Threshold) 2400.252: [GC concurrent-root-region-scan-end, 0.0101404 secs]
2400.252: [GC concurrent-mark-start]
1151M->603M(4356M), 2.6980537 secs]
[Eden: 0.0B(2558.0M)->0.0B(2613.0M) Survivors: 55.0M->0.0B Heap: 1151.7M(4356.0M)->603.6M(4356.0M)], [Metaspace: 259187K->92248K(1034240K)]
[Times: user=3.92 sys=0.00, real=2.70 secs]
This is taking a long time, and every 20-30 minutes a full GC is triggered by metaspace. I configured it this way:
"-XX:MaxMetaspaceSize=768M",
"-XX:MetaspaceSize=256M"
But every time it reaches ~256M it triggers a full GC. When it reaches this first high-water mark, shouldn't it grow the metaspace next time, up to the max size? Also, does a full GC on metaspace trigger a full GC on the old generation? I read that somewhere but I am not sure about it. This is pushing the p99 response time higher than I expected.
According to Triggering of gc on Metaspace memory in java 8, the full GC is needed in order to reduce metaspace usage.
My understanding is that metaspace is not garbage collected per se. Instead, you have objects in the ordinary heap that hold special references to metaspace objects. When those objects are collected by the GC, the corresponding metaspace objects are freed. (Conceptually it is like finalization, where the finalizer is freeing the metaspace objects.)
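A minimal sketch of that lifecycle, using a throwaway class loader as the heap object whose collection allows its class metadata to be freed. (The loader here defines no classes of its own, so it only illustrates the reachability mechanics; real metaspace reclamation also requires that no classes defined by the loader remain reachable.)

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderReclaim {
    public static void main(String[] args) throws Exception {
        // A throwaway class loader: in a real application, the classes it
        // defines occupy metaspace for as long as the loader is reachable.
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        WeakReference<ClassLoader> ref = new WeakReference<>(loader);

        loader = null;  // drop the last strong reference to the loader
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();       // request a collection
            Thread.sleep(50);  // give the GC a moment to clear the weak ref
        }
        // Once the loader object is collected, the metaspace holding its
        // classes' metadata becomes eligible to be freed.
        System.out.println("loader collected: " + (ref.get() == null));
    }
}
```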
When it reaches this first high-water mark, shouldn't it grow the metaspace next time, up to the max size?
Apparently not. The normal strategy for HotSpot collectors is this: when more space is needed, run a collection first, and only expand if the collection does not reclaim enough space. It seems that the same strategy is used here: the full GC is causing enough metaspace to be reclaimed that the JVM decides it doesn't need to expand the metaspace.
A band-aid for this would be to try setting -XX:MetaspaceSize and -XX:MaxMetaspaceSize to the same value, but that will just make the full GCs less frequent.
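For example, reusing the 768M maximum from the question, the band-aid configuration would look like this (the metaspace then starts at its ceiling, so the high-water mark is never raised mid-run):

```
"-XX:MetaspaceSize=768M",
"-XX:MaxMetaspaceSize=768M"
```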
A real solution would be to figure out what is consuming the metaspace, and fix it.
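As a starting point for that investigation, the standard JDK tools can show what is occupying metaspace; `<pid>` is a placeholder for the running JVM's process id:

```shell
# Per-class-loader statistics: loader count and bytes of class metadata each holds
jmap -clstats <pid>

# Histogram of loaded classes and live instances
jcmd <pid> GC.class_histogram
```

A steadily growing number of class loaders (often from redeployment, reflection, or bytecode generation) is the usual culprit.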