Are there any special things that must be considered to avoid memory leaks in Mule Applications?
How can we avoid memory leaks in Mule Applications?
For example, do we actually have to remove flow variables? What must be done explicitly by the developers of the Mule Applications, and what is done automatically by the Mule Runtime and the JVM garbage collector?
A good way to get to the memory-leak suspects is to take a heap dump (of all the nodes) right after you start seeing a decline in the amount of memory reclaimed after a major GC. There are multiple tools, such as Eclipse Memory Analyzer (MAT), that help analyze heap dumps for leaks.
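For example, a heap dump can be captured with the JDK's jmap tool (a minimal sketch; <pid> stands for the process id of the Mule JVM, and the file name is arbitrary):

jmap -dump:live,format=b,file=mule-heap.hprof <pid>

The live option triggers a full GC first, so only reachable objects end up in the dump; that is what you want when hunting for leaks, since leaked objects are the ones still being referenced.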
There is a great blog post on the topic. It summarizes several memory-leak related issues, including the following findings:
Finding: The pooled memory manager typically grabs about 10% of the JVM heap and holds on to it without releasing it. Fix: Switch the Grizzly memory manager implementation to HeapMemoryManager. Note that HeapMemoryManager is the default implementation and is recommended by Grizzly for performance; Mule, however, treats the PooledMemoryManager implementation as the default.
wrapper.conf changes (replace <XX> with the next unused wrapper.java.additional index):
wrapper.java.additional.<XX>=-Dorg.glassfish.grizzly.DEFAULT_MEMORY_MANAGER=org.glassfish.grizzly.memory.HeapMemoryManager
Finding: Async logging was used widely, and the associated Log4j RingBuffer was observed to be holding a lot of JVM memory. The default setting of 256*1024 slots was apparently too high. Since this RingBuffer never grows or shrinks, a high fixed size, with each slot allocated as a separate object (RingBufferLogEvent) holding a log event, can occupy a considerable amount of memory.
Fix: Reduce the Log4j RingBuffer size to 128, either in wrapper.conf or in log4j2.xml:
wrapper.java.additional.<XX>=-DAsyncLoggerConfig.RingBufferSize=128
Or, in log4j2.xml:
<AsyncLogger name="DebugLog" level="info" includeLocation="true" ringBufferSize="128">
    <AppenderRef ref="YourAppender"/> <!-- appender name is illustrative -->
</AsyncLogger>
Memory leak due to the default Hazelcast object store implementation used by aggregator components (splitter-aggregator pattern).
Finding: Heap analysis pointed to memory being held by the default Hazelcast object store implementation used by the splitter-aggregator components in specific flows. It appeared as if the store entries were not being expired appropriately.
Fix: A custom object store implementation (a subclass of PartitionedInMemoryObjectStore) was written, with the TTL (TimeToLive) for entries explicitly defined. A sketch of the override (the class name is illustrative, and the imports assume Mule 3.x packages):
import java.io.Serializable;
import org.mule.api.store.ObjectStoreException;
import org.mule.util.store.PartitionedInMemoryObjectStore;

// Class name is illustrative; the expire() override is taken from the blog post.
public class ExpiringPartitionedObjectStore<T extends Serializable> extends PartitionedInMemoryObjectStore<T> {
    @Override
    public void expire(int entryTTL, int maxEntries, String partitionName) throws ObjectStoreException {
        // Let the parent evict entries older than entryTTL first.
        super.expire(entryTTL, maxEntries, partitionName);
        // Once all entries in a partition have expired, dispose of the partition
        // itself so it no longer lingers on the heap.
        if (getPrivatePartitionSize(partitionName) == 0) {
            disposePartition(partitionName);
        }
    }
}
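If you go this route, the custom store can be registered as a Spring bean in the Mule application. The bean id and the fully qualified class name below are hypothetical (they assume the sketch above lives in com.example), and how the store gets wired into the aggregator depends on your Mule version and component configuration:

<spring:bean id="expiringAggregatorStore" class="com.example.ExpiringPartitionedObjectStore"/>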
Reference: https://dzone.com/articles/enduring-black-fridays-with-mulesoft-apis