Tags: hadoop, memory, mapreduce, jobs, hadoop-yarn

How to clean up Hadoop MapReduce memory usage?


I want to ask: say I have 10 MB of free memory on each node after running start-all.sh, which starts the NameNode, DataNode, Secondary NameNode, etc. After a Hadoop MapReduce job finishes, the free memory drops to, say, 5 MB, even though the job is done.

How can I get the 10 MB of free memory back? Thanks, all.


Solution

  • The "missing" memory is most likely the Linux page cache: the kernel keeps recently read and written file data (such as HDFS blocks) cached in otherwise-idle RAM and releases it automatically when applications need it, so this is usually harmless. If you still want to reclaim it immediately, you can try the Linux drop-caches command (as root):

    sync; echo 3 > /proc/sys/vm/drop_caches
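
    For illustration, here is a minimal shell sketch of checking memory before and after dropping the caches; it assumes a node where you have sudo rights (the sh -c wrapper is needed because the redirection itself must run as root):

    # Show memory usage; "buff/cache" is mostly the reclaimable page cache,
    # and "available" already counts it as usable memory
    free -h

    # Flush dirty pages to disk first so no unwritten data is lost
    sync

    # Drop the page cache, dentries, and inodes (1 = page cache only,
    # 2 = dentries/inodes, 3 = both)
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'

    # "buff/cache" should now shrink and "free" should grow
    free -h

    Note that this only clears caches; it does not make Hadoop jobs any faster, and the kernel will simply rebuild the cache as the daemons keep running.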