We are running an experiment in which we need to measure the memory usage of a set of functions. Our initial approach was to read process.memoryUsage().heapUsed before and after each function call and take the difference as our measure. However, the garbage collector makes this approach unreliable: we sometimes get negative values, which is expected because a collection can run between the two readings, but the spread of values we get from the subtraction is so large that we cannot trust the results.
// Code example of our initial approach
const memoryBefore = process.memoryUsage().heapUsed
const result = await f1()
const memoryAfter = process.memoryUsage().heapUsed
// A GC run between the two readings can make this difference negative
console.log('MEMORY USAGE:', memoryAfter - memoryBefore)
How can we change our approach to better capture and calculate memory usage? We have looked at setting the --expose-gc and --trace-gc flags so that we can invoke the garbage collector explicitly and calculate the memory usage from the values the trace outputs (an approach found in this question).
The problem with gc() is that the garbage collector is not always activated by the gc() call, so the results are inconsistent.
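One variation we have tried, sketched below: Node's documentation for v8.setFlagsFromString() shows that a callable gc handle can be obtained through the vm module after setting --expose_gc, which at least guarantees that gc is a real function even when global.gc is undefined (whether the collection it requests runs fully and synchronously still seems to be version-dependent):

```javascript
// Sketch: obtain a callable gc() without launching node with --expose-gc.
// Follows the pattern shown in Node's v8.setFlagsFromString() documentation.
import { setFlagsFromString } from 'v8'
import { runInNewContext } from 'vm'

setFlagsFromString('--expose_gc')
// A new vm context picks up the flag, so 'gc' resolves to the real function
const gc = runInNewContext('gc')

gc() // request a collection before taking the heapUsed baseline
const before = process.memoryUsage().heapUsed
```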
// Child process
import { setFlagsFromString } from 'v8'

setFlagsFromString('--expose-gc --trace-gc')

function f1() {
  for (let i = 0; i < 1000; i++) new Array(1000)
  console.log('Ran f1')
}

process.on('message', (msg) => {
  const start = process.memoryUsage().heapUsed
  global.gc?.()
  console.log('>>> START')
  f1()
  console.log('>>> END\n')
  const end = process.memoryUsage().heapUsed
  global.gc?.()
})
Running the code example above produces the trace below:
[29840:0000020ED48B8010] 870 ms: Scavenge 100.5 (120.4) -> 95.0 (124.9) MB, 2.87 / 0.00 ms (average mu = 1.000, current mu = 1.000) task;
[29840:0000020ED48B8010] 891 ms: Mark-Compact 95.0 (124.9) -> 92.2 (128.9) MB, 1.91 / 0.00 ms (+ 17.6 ms in 259 steps since start of marking, biggest step 0.1 ms, walltime since start of marking 20 ms) (average mu = 0.973, current mu = 0.973) finalize incremental marking via task; GC in old space requested
>>> START
Ran f1
>>> END
>>> START
[29840:0000020ED48B8010] 2004 ms: Scavenge 108.0 (128.9) -> 92.3 (128.9) MB, 0.49 / 0.00 ms (average mu = 0.973, current mu = 0.973) allocation failure;
Ran f1
>>> END
>>> START
Ran f1
>>> END
>>> START
Ran f1
>>> END
>>> START
Ran f1
>>> END
>>> START
Ran f1
>>> END
>>> START
Ran f1
>>> END
There seem to be few related questions similar to our problem, so any help is appreciated.
When the GC will actually free allocations is unpredictable to a large degree. This makes any method that keeps tabs on overall memory usage, and measures based on it, unreliable.
Here is a method you can use on Linux:

1. Prepare two apps: one with the intended function and another that is empty.
2. Run the apps one at a time.
3. While each app is running, get the PID of the process and run the following command to get the peak memory usage of the process.
4. The difference between the two peaks is the memory usage of the function.
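The exact command is not shown above; as an assumption about what was intended, one way to read a process's peak resident set size on Linux is the VmHWM ("high water mark") field the kernel records in /proc:

```shell
# Assumed command: read the peak resident set size (VmHWM) of a process.
pid=12345   # hypothetical placeholder: replace with the PID of the running app
grep VmHWM "/proc/$pid/status"
```

VmHWM only ever grows, so it survives any GC activity inside the process, which is what makes the two-app difference usable here.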