
8:41 PM · February 1, 2025 · Daniel Tompkins


Self-Hosting Next.js

Separating Frontend and Backend

Running a Next.js application with a separate backend is not a simple task. Next is a fullstack framework, and life is much easier when you avoid mixing in another backend technology. Avoid shooting yourself in the foot.

If you're stubborn and you still feel the masochistic urge to go this route, read on. It's what I did, and after countless debugging nightmares I feel a lot of trepidation about that decision. If I had to start over, I'd seriously consider going 100% Next or using a different frontend altogether.

Self-hosting adds even more mines to the minefield of challenges in setting up a production Next application. On this page, I'll share my experience and the pitfalls of using Next without Vercel's platform (and its costs) on a generic virtual private server (VPS).

Memory Requirements

The predecessor of this application used PHP. Like the current application, it was Dockerized (though with fewer containers). I never had issues with memory requirements on its 2GB/1vCPU DigitalOcean VPS at $12/mo.

The updated application replaces PHP with a Node frontend and a Python backend. It's also broken out into six Docker containers. Some containers have minuscule memory requirements, less than 30MB in some cases. However, the frontend and backend containers regularly hover between 100 and 900MB depending on traffic.

While setting up the Next application on a VPS with similar specs, I was immediately stopped by the inability to even install my Node dependencies. The memory needed to run npm install would crash my frontend container. Through iterative testing, I bumped up the VPS memory and found that dependencies would install with 4GB of RAM.

Unfortunately, even with this relatively generous amount of memory, the Next production build would still fail. Running next build would snag, and the container would get caught in a restart loop.

Debugging this issue was quite difficult. For a long time, I thought the build was crashing because it couldn't connect to fonts.googleapis.com. The build repeatedly killed the container right as it started to fetch Google Fonts:

request to https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@100..800&display=swap failed, reason: Retrying 1/3...

As it turns out, this was just a coincidence. The combined memory requirements of the containers (together with the operating system's normal memory usage) were still too great for a 4GB VPS!

Luckily, DigitalOcean bills by the hour, so I was able to run a final test with double the RAM and finally got a successful build on an 8GB DigitalOcean "Droplet" (their name for a VPS). At $48/mo, however, this was not sustainable.

There was no way I could justify paying $576/year for a hobby site. I could barely justify it at $144/year! At this point, I was feeling quite disappointed. Then I discovered Hetzner.

Hetzner

After some research (Googling and Reddit), I found that Hetzner provides ARM-architecture VPSs with 8GB of RAM for $7/mo! Not only was this a major upgrade from my existing DigitalOcean server, with 4x the memory; I was also paying $5 less per month.

Migrating to Hetzner was the next step. Overall, the process was fairly painless. I still have a soft spot in my heart for DigitalOcean: they are extremely developer-friendly, with a wealth of documentation and resources that I return to over and over. I don't regret starting with them as a novice web developer.

Self-Hosting and fetch

While self-hosting behind an Nginx web server, the Node.js fetch implementation has been the bane of my existence. Async programming is not a simple thing, and it can be very difficult to understand how a long chain of requests resolves.

To make matters worse, the undici package (which supplies Node's fetch implementation) has several outstanding and poorly documented issues. It's painful enough that I might recommend that anyone reading this ditch it and use axios for GET/POST requests.
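
For example, here is a minimal sketch of what swapping fetch for axios could look like in a server-side data helper. The backend URL, endpoint, and response shape below are assumptions for illustration, not my actual setup:

lib/getPosts.ts
import axios from "axios";

type Post = { id: number; title: string };

export async function getPosts(): Promise<Post[]> {
  try {
    // axios throws on non-2xx responses, so all error handling lives in the catch
    const res = await axios.get<Post[]>("http://backend:8000/api/posts", {
      timeout: 5000, // fail fast instead of hanging on a dead socket
    });
    return res.data;
  } catch (error) {
    console.error("getPosts failed:", error);
    return []; // fall back to an empty list rather than crashing the render
  }
}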

504 Gateway Timeout with next/image

The first fetch-related issue I observed manifested in the blur mechanism implemented by the next/image component. The Next image component displays a low-quality image placeholder (LQIP), which is swapped out for the optimized image shortly after.
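
For reference, this is roughly how that blur behavior gets wired up. The component and image file below are hypothetical; placeholder="blur" with a statically imported image is what triggers the LQIP-then-optimized-image swap:

components/Hero.tsx
import Image from "next/image";
import heroPic from "./hero.jpg"; // hypothetical local image; a static import lets Next generate the blurDataURL at build time

export default function Hero() {
  return (
    <Image
      src={heroPic}
      alt="A hero image for the landing page"
      placeholder="blur" // show the LQIP until /_next/image returns the optimized file
      priority
    />
  );
}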

After a production build (next build and next start on the VPS), I consistently got a gateway timeout (504 status) on the optimized image. When loading a page with a lot of images, the LQIPs would be replaced on most of them. However, some images would not load, and their LQIPs would be replaced with the image's alt text.

Strangely, the same broken images would usually remain broken on future reloads, but every time the service was restarted, a different set of images would exhibit the problem. In the logs, these events showed up as a 499 HTTP status code followed by a 504 gateway timeout after about 30 seconds.

The 499 HTTP status code "indicates that a client closed a connection before the server could respond." In my experience, this has generally meant a race condition: the LQIP replacements were happening farther along a broken chain of promises.

Obviously this was not ideal. After a lot of digging through the Next.js issues (at the time, ~2.8k open issues), I finally came across a suggestion on Stack Overflow to set the Nginx proxy_ignore_client_abort directive to on:

nginx.conf
proxy_ignore_client_abort on;

The issue seemed to be that the client closed the connection before it could receive a response from the API (in my case, a Docker container running a RESTful service). After applying that configuration option and restarting Nginx and Next, all images load as expected.

Undici (UND_ERR_SOCKET)

Another hard problem I've run into while self-hosting Next.js is a random socket termination error:

TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at async invokeRequest (/app/node_modules/next/dist/server/lib/server-ipc/invoke-request.js:17:12)
    at async invokeRender (/app/node_modules/next/dist/server/lib/router-server.js:254:29)
    at async handleRequest (/app/node_modules/next/dist/server/lib/router-server.js:447:24)
    at async requestHandler (/app/node_modules/next/dist/server/lib/router-server.js:464:13)
    at async Server.<anonymous> (/app/node_modules/next/dist/server/lib/start-server.js:117:13) {
  cause: SocketError: other side closed
      at Socket.onSocketEnd (/app/node_modules/next/dist/compiled/undici/index.js:1:63301)
      at Socket.emit (node:events:526:35)
      at endReadableNT (node:internal/streams/readable:1376:12)
      at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
    code: 'UND_ERR_SOCKET',
    socket: {
      localAddress: '::1',
      localPort: 43952,
      remoteAddress: undefined,
      remotePort: undefined,
      remoteFamily: undefined,
      timeout: undefined,
      bytesWritten: 999,
      bytesRead: 542
    }
  }
}

These errors only appear when rapidly reloading or quickly navigating between pages. It has been incredibly difficult to pinpoint the exact cause, but I can say with some certainty that the issue always ties back to how data fetches are processed.

If an async fetch operation is unable to consume a response before the connection is closed, Next will throw up the dreaded WSOD (White Screen of Death), along with a generic error message:

"Ap­pli­ca­tion error: a server-side ex­cep­tion has oc­curred (see the server logs for more in­for­ma­tion)."

If you inspect the browser console or your application logs, you may get a more descriptive error message. Still, the problem can be very elusive, which is why I can't give a definitive solution. I have a strong suspicion, though, that this issue always traces back to a bad or broken response from a RESTful API endpoint.

To start debugging, take a closer look at any async data-fetching operations and promises that could end up interrupted. Review your server logs to find requests that never received a 200 response. I also recommend wrapping all await fetch() calls and their response handling in a try { ... } catch (error) { ... } statement, which seems to handle fetch errors more gracefully.
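
Here is a minimal sketch of that pattern. The endpoint, types, and helper name are assumptions for illustration:

lib/getProjects.ts
type Project = { id: number; name: string };

export async function getProjects(): Promise<Project[]> {
  try {
    const res = await fetch("http://backend:8000/api/projects", {
      cache: "no-store", // always hit the backend instead of the fetch cache
    });
    if (!res.ok) {
      throw new Error(`Backend responded with ${res.status}`);
    }
    return (await res.json()) as Project[];
  } catch (error) {
    // UND_ERR_SOCKET and other interrupted-response errors land here
    // instead of bubbling up as a server-side exception (and a WSOD).
    console.error("getProjects failed:", error);
    return [];
  }
}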

Why Next.js?

At times it can be furiously frustrating to use Next.js. The change from the pages router to the app router was dramatic and took a lot of refactoring. Among the 2.5k+ open GitHub issues, some have gone unanswered by Vercel for three years or more.

Next.js is open-source software, but it's still under the umbrella of Vercel, and I'm sure they're making a hefty sum from their Next.js cloud deployment platform. In my experience, they move faster and break more things than Meta has with React.

So why the hell would a developer keep using Next?

1. React with batteries included

The hardest part about learning React, for me personally, was understanding how to configure my build tools and development environment. Next.js bundles these build and linting tools together with sane defaults.

Another common pattern I kept needing in web development was a simple file-based routing mechanism for serving markdown. Next supports this out of the box.
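
As a rough sketch of what that can look like with the app router (the route and content directory are hypothetical, and a real site would parse the markdown with something like remark rather than dumping it into a <pre> tag):

app/notes/[slug]/page.tsx
import { readFile } from "node:fs/promises";
import path from "node:path";

// Assumes Next 13/14-style route params; /notes/hello maps to content/hello.md
export default async function NotePage({ params }: { params: { slug: string } }) {
  const filePath = path.join(process.cwd(), "content", `${params.slug}.md`);
  const markdown = await readFile(filePath, "utf8");
  return <pre>{markdown}</pre>;
}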

Next also forced me to understand client- versus server-side code execution in Node. The "use client" directive might be confusing at first, but for me it helped build a better understanding of how client-side interactions in the browser differ from server-executed code. It also gave me a better grasp of React hooks like useEffect.
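
A minimal sketch of that distinction: everything below the "use client" directive ships to the browser, so hooks like useEffect are allowed, while the API route it calls is assumed here purely for illustration:

components/ViewCounter.tsx
"use client";

import { useEffect, useState } from "react";

export default function ViewCounter() {
  const [views, setViews] = useState<number | null>(null);

  useEffect(() => {
    // Runs only in the browser, after hydration; never on the server.
    fetch("/api/views") // hypothetical endpoint
      .then((res) => res.json())
      .then((data) => setViews(data.count))
      .catch(() => setViews(null));
  }, []);

  return <span>{views ?? "…"} views</span>;
}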

2. Impeccable Lighthouse scores

Perhaps the real reason I keep using Next.js is how effortless it felt to get a perfect Lighthouse score. When I was still using the classic LAMP stack for web applications, it never felt this easy: I would be running gulp tasks to minify and mangle JS and optimize images, while figuring out clever ways to reduce my bundle sizes.

Screenshot of Lighthouse confetti over four perfect 100s!