Destroying a Memory Leak in a Node HTTP Proxy

My friends and I have a microservice ecosystem (let’s call it Superapps). When it first runs, everything seems fine and works as expected — until I realize something.
My server is bleeding memory. Each of my app’s services initially runs at around 80MB, but after a month of continuous operation, memory usage climbs to 1.5GB EACH. This causes my server to crash, forcing us to restart everything. Restarting the services is simple, but it doesn’t solve the root problem.
Node.js uses V8’s garbage collector, which automatically frees memory by removing objects that are no longer reachable. But the GC can only collect what is unreachable: if a stray reference (a forgotten event listener, a module-level cache) keeps an object alive, it is never freed, and that is exactly what a memory leak looks like.
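As a contrived illustration (not from my codebase), everything below stays reachable from a module-level array, so V8 can never collect it, no matter how well the GC works:

const express = require("express");
const app = express();
app.use(express.json());

const requests = []; // module-level: lives as long as the process does

app.post("/login", (req, res) => {
  requests.push(req.body); // every body stays reachable forever: a leak the GC cannot fix
  res.json({ ok: true });
});

app.listen(3000);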
At first, I didn’t overthink it and just assumed it was a memory leak. Why? Because even at midnight, when there’s no traffic, memory usage remains at 1.5GB per service.
For two weeks, I tried to track down which part of our code was causing the memory leak. Before I could find the issue, we decided to set up a cronjob to restart our services every day at 00:00. It fixes the problem temporarily, but deep down, I know it doesn’t actually solve anything.
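For reference, the nightly restart is just a crontab entry along these lines (the path and the docker compose invocation are my assumptions, not our exact entry):

# crontab: restart every service at midnight
0 0 * * * cd /path/to/superapps && docker compose restart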
I’m still trying to find the leak. I’ve tried every possible method, and now I’ve found my final weapon.
Heap dump method + stress test using k6
How do I use it?
PREPARATION
First, I prepare a simple stress test script using k6. The concept is simple — I just need to remove the rate limiter, hit the login endpoint, and spam it as much as possible.
Here’s my k6 script:
import http from "k6/http";
import { check, sleep } from "k6";

const API_KEY = "<MY API KEY>";

export let options = {
  vus: 10, // Number of virtual users
  duration: "30s", // Test duration
};

export default function () {
  let unixTime = "1234";
  let gatewayKey = API_KEY;
  let headers = {
    gateway_key: gatewayKey,
    unixtime: unixTime,
    "Content-Type": "application/json",
  };
  let payload = JSON.stringify({
    email: "<my-email-to-login>",
    password: "<my-password-to-login>",
  });
  let res = http.post("http://<MY-IP-OR-LOCALHOST>/auth/login", payload, {
    headers: headers,
  });
  check(res, {
    "status is 200": (r) => r.status === 200,
    "response data": (r) => r.json() !== undefined,
  });
  sleep(1);
}
Then, I start my app’s services — not all of them, just the ones required for the login endpoint (api-gateway, service-manager, and auth-service).
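Since everything runs under Docker Compose, starting only those three looks roughly like this (assuming the compose service names match the names above):

docker compose up -d api-gateway service-manager auth-service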
FIRST HEAP DUMP
Before running my k6 script, I need to take a heap dump of my api-gateway.
To do this, I must ensure my app (api-gateway) starts with the following custom flag:
pm2-runtime bin/www --node-args="--inspect=0.0.0.0"
By default, this will use port 9229.
Since each of my services runs in Docker, I need to expose port 9229 to my local network. To do that, I add this option to my docker-compose.yml:
ports:
  - "9229:9229"
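To confirm the inspector is reachable, the inspector protocol’s /json endpoint lists the debug targets along with their WebSocket URLs:

curl http://127.0.0.1:9229/json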
Then I follow this guide.
- I make a script.ts file by copy-pasting amirilovic’s script (a rough sketch of what that script does follows these steps).
- Prepare the package.json file:
{
  "name": "heap-snapshot",
  "version": "1.0.0",
  "description": "A script to take a heap snapshot using WebSocket",
  "main": "script.ts",
  "scripts": {
    "start": "npx tsx script.ts"
  },
  "dependencies": {
    "ws": "^8.17.0"
  },
  "devDependencies": {
    "tsx": "^4.7.0",
    "@types/node": "^20.11.0",
    "typescript": "^5.3.3"
  }
}
- Create a Makefile to simplify this script:
snap:
	docker run --rm -v "./:/app" -w /app node:20-alpine sh -c "npm install && npm start"
- Run it:
make snap
It will create a profile-<timestamp>.heapsnapshot file. This will be used as the initial heap snapshot.
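For the curious, here is a minimal sketch of what a script like amirilovic’s does: it asks the inspector for its WebSocket debugger URL, sends a HeapProfiler.takeHeapSnapshot command over the Chrome DevTools Protocol, and writes the streamed chunks to a file. This is my simplified reconstruction in plain JavaScript (the inspector address is an assumption), not the guide’s exact code:

const fs = require("fs");
const WebSocket = require("ws");

async function takeSnapshot() {
  // The inspector's /json endpoint lists debug targets and their WebSocket URLs
  const targets = await fetch("http://127.0.0.1:9229/json").then((r) => r.json());
  const ws = new WebSocket(targets[0].webSocketDebuggerUrl);
  const file = fs.createWriteStream(`profile-${Date.now()}.heapsnapshot`);

  ws.on("open", () => {
    // Ask V8 to stream a heap snapshot over the DevTools Protocol
    ws.send(JSON.stringify({ id: 1, method: "HeapProfiler.takeHeapSnapshot" }));
  });

  ws.on("message", (data) => {
    const msg = JSON.parse(data);
    if (msg.method === "HeapProfiler.addHeapSnapshotChunk") {
      file.write(msg.params.chunk); // the snapshot arrives as JSON chunks
    } else if (msg.id === 1) {
      file.end(); // the command's response means all chunks have been sent
      ws.close();
    }
  });
}

takeSnapshot();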
Then I’m ready to go to the next step.
STRESS TEST
As I mentioned earlier, each service initially runs at around 80MB. Then, I run my k6 script:
k6 run script.js
At first, it runs with 10 VUs (virtual users) for 30 seconds. Since each VU sleeps one second between requests, that’s roughly 10 login attempts per second.
When I check my app’s memory usage, it shows:
- api-gateway: 96MB
- service-manager: 102MB
- auth-service: 104MB
I NEED MORE LEAKKKK!
So, I increase the load. I use 100 VUs and run the test for 600 seconds (10 minutes).
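The only change from the first run is the options block; the rest of the script stays the same:

export let options = {
  vus: 100, // Number of virtual users
  duration: "600s", // Test duration
};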
Now, the memory usage looks like this:
- api-gateway: 246MB
- service-manager: 217MB
- auth-service: 218MB
Then, I let it rest for about 2 minutes, and I notice something interesting:
- api-gateway: 246MB
- service-manager: 108MB
- auth-service: 110MB
The service-manager and auth-service return to normal, but api-gateway does not. This confirms that the main issue is in api-gateway.
SECOND HEAP DUMP
I run the heap snapshot command again and obtain a second .heapsnapshot file.
Following Amirilovic’s guide, I proceed with the analysis:
- Open a Chromium browser.
- Go to chrome://inspect, then click "Open dedicated DevTools for Node".
- Navigate to the Memory tab and load the two .heapsnapshot files.
Initially, I planned to compare both snapshots, but instead, I decide to focus only on the latest one — and that’s when I find something interesting…

HttpProxyMiddleware uses 130MB!!!

Now I know the main problem — my proxy isn’t closing connections properly. But how do I fix it?
This is my previous code:
const {
  createProxyMiddleware,
  fixRequestBody,
} = require("http-proxy-middleware");

exports.serviceProvider = async (req, res) => {
  let longPath = req.url;
  let endPoint = req.appURL + longPath;

  // Escape regex special characters so the path can be used as a pathRewrite key
  const escapedLongPath = longPath.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

  // Forward the request using http-proxy-middleware
  return createProxyMiddleware({
    target: endPoint,
    changeOrigin: true,
    pathRewrite: {
      [`${escapedLongPath}`]: "",
    },
    onProxyReq: fixRequestBody,
  })(req, res);
};
After an hour of trial and error — searching the internet, copying and pasting everything — I finally found this GitHub issue. Then, I applied the solution to my code:
const {
  createProxyMiddleware,
  fixRequestBody,
} = require("http-proxy-middleware");

exports.serviceProvider = async (req, res) => {
  let longPath = req.url;
  let endPoint = req.appURL + longPath;

  // Escape regex special characters so the path can be used as a pathRewrite key
  const escapedLongPath = longPath.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

  // Forward the request using http-proxy-middleware
  return createProxyMiddleware({
    target: endPoint,
    changeOrigin: true,
    pathRewrite: {
      [`${escapedLongPath}`]: "",
    },
    onProxyReq: fixRequestBody,
    onProxyRes: (proxyRes, req, res) => {
      const cleanup = (err) => {
        // Remove event listeners so the streams can be garbage collected
        proxyRes.removeListener("error", cleanup);
        proxyRes.removeListener("close", cleanup);
        res.removeListener("error", cleanup);
        res.removeListener("close", cleanup);

        // Destroy all source streams to propagate the caught event backward
        req.destroy(err);
        proxyRes.destroy(err);
      };
      proxyRes.once("error", cleanup);
      proxyRes.once("close", cleanup);
      res.once("error", cleanup);
      res.once("close", cleanup);
    },
  })(req, res);
};
Edit:
I don’t want to hide my pride and stupidity: after extended testing, I realized that the memory leak had improved slightly but still existed. So I decided to switch from http-proxy-middleware to express-http-proxy, and that solved everything.
const proxy = require("express-http-proxy");

exports.serviceProvider = (req, res, next) => {
  const isMultipart = req.headers["content-type"]?.includes(
    "multipart/form-data"
  );

  proxy(req.appURL, {
    parseReqBody: !isMultipart, // Disable parsing for multipart, enable for everything else
    proxyReqPathResolver: (req) => {
      let longPath = req.url;
      console.log(`Proxying request to: ${req.appURL + longPath}`);
      return longPath;
    },
    proxyErrorHandler: (err, res, next) => {
      console.error("Proxy error:", err);
      next(err);
    },
  })(req, res, next);
};
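For context, this handler plugs into Express like any other middleware. Here is a hypothetical wiring (the route mapping, service URLs, and port are my assumptions; in the real gateway, req.appURL is resolved through service-manager):

const express = require("express");
const { serviceProvider } = require("./serviceProvider"); // hypothetical path to the handler above

const app = express();

// Hypothetical resolver: map the first path segment to a service base URL
const SERVICES = { auth: "http://auth-service:3000" };

app.use((req, res, next) => {
  const name = req.url.split("/")[1];
  req.appURL = SERVICES[name];
  if (!req.appURL) return res.status(404).json({ error: "unknown service" });
  next();
});

app.use(serviceProvider);
app.listen(80); // the gateway port is an assumption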
Then, I ran my k6 script without changing anything (yes, for 10 minutes). I even ran it twice. And finally…
(Graph: memory usage after the switch; the line stays flat and the GC reclaims memory between test runs.)
As you can see, the GC works perfectly, unlike the previous graph, which climbed more reliably than my financial investments.
Edit:
After switching to express-http-proxy, you can see that under the same stress-test scenario, http-proxy-middleware keeps climbing while express-http-proxy remains stable; the GC even reclaims memory while the process is still running.
IT’S SOLVED!