Tags: javascript, node.js, heroku, heap-memory, dyno

Heroku node.js app 'allocation failure scavenge might not succeed' with memory still available


I'm getting heap memory errors that crash my app. But my app has memory available.

 app/web.1 [4:0x4ea2840]    27490 ms: Mark-sweep 505.7 (522.4) -> 502.2 (523.1) MB, 440.3 / 0.0 ms  (average mu = 0.280, current mu = 0.148) allocation failure scavenge might not succeed 
app/web.1 FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory 

The error happens at around 512 MB, but my Standard-2X dyno has 1 GB.

The Node version is 16.17.0. As far as I understand, the heap limit for Node versions 12 and above is based on available memory:

"Versions of Node that are >= 12 may not need to use the --optimize_for_size and --max_old_space_size flags because the JavaScript heap limit will be based on available memory." (source)
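
(A quick way to check what limit V8 actually picked for the current process is to print heap_size_limit from the built-in v8 module; this is essentially what I ended up doing in UPDATE 2 below.)

const v8 = require('v8');

// heap_size_limit is reported in bytes; convert to MB for readability
console.log(v8.getHeapStatistics().heap_size_limit / 1024 / 1024);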

I also considered my one worker dyno, which has 512 MB. But would a worker dyno running out of memory cause the whole app to crash? And if that were the case, wouldn't the error come from app/worker.1 rather than app/web.1, as shown in the logs?

UPDATE

I figured out how to recreate the "Reached heap limit Allocation failed" error, which gave me some more clues.

I get the error at around 256 MB on a 512 MB hobby dyno.

I get the error at around 512 MB on a 1024 MB (1 GB) Standard-2X dyno.

So the limit is always about half the available memory. I'm not sure if this is some sort of setting on Heroku.
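
For reference, the way I recreated it was just a route that holds on to its allocations. A minimal sketch of that kind of leaky endpoint (an Express app here, which is an assumption; my real route is different) looks like this:

const express = require('express');
const app = express();

// Keep references in module scope so the garbage collector can never free them
const leaked = [];

app.get('/leak', (req, res) => {
  // Each array of a million numbers is roughly 8 MB of V8 heap
  leaked.push(new Array(1_000_000).fill(Math.random()));
  res.send(`leaked arrays so far: ${leaked.length}`);
});

app.listen(process.env.PORT || 3000);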

UPDATE 2

Using the v8 module (built into Node) I could get more data:

const v8 = require('v8');

// Added the code below to a route that caused a memory leak
const heapStats = v8.getHeapStatistics();
const heapStatsMB = {};
for (const key in heapStats) {
  // Values are in bytes; convert to MB and format to two decimal places
  heapStatsMB[key] = `${(heapStats[key] / 1024 / 1024).toFixed(2)} MB`;
}

console.table(heapStatsMB);

I could see the heap_size_limit was 259 MB on a hobby dyno and 515 MB on a Standard-2X dyno.

   ┌─────────────────────────────┬─────────────┐
   │           (index)           │   Values    │
   ├─────────────────────────────┼─────────────┤
   │       total_heap_size       │ '268.08 MB' │
   │ total_heap_size_executable  │  '2.25 MB'  │
   │     total_physical_size     │ '266.73 MB' │
   │    total_available_size     │  '3.31 MB'  │
   │       used_heap_size        │ '263.69 MB' │
   │       heap_size_limit       │ '259.00 MB' │
   │       malloced_memory       │  '1.01 MB'  │
   │    peak_malloced_memory     │  '3.03 MB'  │
   │      does_zap_garbage       │  '0.00 MB'  │
   │  number_of_native_contexts  │  '0.00 MB'  │
   │ number_of_detached_contexts │  '0.00 MB'  │
   │  total_global_handles_size  │  '0.16 MB'  │
   │  used_global_handles_size   │  '0.16 MB'  │
   │       external_memory       │ '18.53 MB'  │
   └─────────────────────────────┴─────────────┘
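
If you just want to watch headroom over time rather than dump the whole table, a lighter variation (same v8 module; the 30-second interval is arbitrary) is:

const v8 = require('v8');

const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

// Log used heap vs. the limit every 30 seconds
setInterval(() => {
  const { used_heap_size, heap_size_limit } = v8.getHeapStatistics();
  console.log(`heap: ${toMB(used_heap_size)} / ${toMB(heap_size_limit)} MB`);
}, 30 * 1000);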

Solution

  • TLDR:

    --max-old-space-size=1024 goes in the Procfile:

    web: node --max-old-space-size=1024 bin/www
    
    

    or

    Add an environment variable with key NODE_OPTIONS and value --max-old-space-size=1024 (the Heroku CLI equivalent is shown at the end of this answer).

    Details

    Originally my Procfile was

    web: node --optimize_for_size bin/www
    
    

    And I had an environment variable with key --max-old-space-size and value 1024.

    That was not the right way to set the environment variable: the key should be NODE_OPTIONS and the value --max-old-space-size=1024. I haven't tried this myself yet.

    Instead, I changed my Procfile to:

    web: node --optimize_for_size --max-old-space-size=1024 bin/www
    
    

    which also works.

    I'm not sure --optimize_for_size does anything, because node bin/www and node --optimize_for_size bin/www both ended up with roughly the same heap_size_limit for a given dyno size. So I later took --optimize_for_size out.
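
    For completeness: the NODE_OPTIONS variable can also be set from the Heroku CLI instead of the dashboard. I haven't run this exact command on my app, but it should be equivalent to setting the variable in the dashboard:

    heroku config:set NODE_OPTIONS="--max-old-space-size=1024"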