How to reduce memory usage of a docusaurus build (v2)? - javascript

I'm using docusaurus v2.0.0-alpha.39 to generate documentation, and Bitbucket Pipelines to build everything and push it to AWS.
The problem is I'm hitting Container 'Build' exceeded memory limit. when executing the yarn build command.
In bitbucket-pipelines.yml, I've already bumped the memory up to the maximum I can (7680 MB for build and 512 MB for Docker):
deploy: &deploy
  size: 2x
  name: Deploy
  caches:
    - node
  script:
    - yarn
    - yarn build
I've also tried to limit Node's memory usage by setting --max-old-space-size=7168 in various ways:
{
  "scripts": {
    "build": "cross-env NODE_OPTIONS=--max-old-space-size=7168 node node_modules/@docusaurus/core/bin/docusaurus build"
  }
}
{
  "scripts": {
    "build": "NODE_OPTIONS=--max-old-space-size=7168 node node_modules/@docusaurus/core/bin/docusaurus build"
  }
}
{
  "scripts": {
    "build": "node --max-old-space-size=7168 node_modules/@docusaurus/core/bin/docusaurus build"
  }
}
But no matter what I set as the max value, my node process climbs and uses up to 14 GB of memory!
And if I set a "small" --max-old-space-size, I get:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
I have pretty huge docs (6.3 MB and 1,493 items per version), I have to admit, but I can't even version them using Docusaurus; each version adds 4-5 GB of memory usage...
Do you have any ideas for potential solutions to this?

Related

Gitlab CI - server gets 'killed' before Cypress tests can run

I am running a CI pipeline in GitLab which runs some Cypress integration tests as part of the testing stage. The tests work absolutely fine on my machine locally, but when I try to run them in GitLab CI it appears that the GitLab runner is killing my local server before I can run my Cypress tests against it. Here is my GitLab config:
variables:
  API_BASE_URL: https://t.local.um.io/api
  CYPRESS_API_BASE_URL: https://t.local.um.io/api
  npm_config_cache: '$CI_PROJECT_DIR/.npm'
  CYPRESS_CACHE_FOLDER: '$CI_PROJECT_DIR/cache/Cypress'
cache:
  paths:
    - node_modules/
    - cache/Cypress
stages:
  - install
  - build
  - tests
install:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: install
  cache:
    key: 'e2eDeps'
    paths:
      - node_modules/
      - cache/Cypress/
  script:
    - npm ci
build:
  stage: build
  dependencies:
    - install
  script:
    - npm run build
  artifacts:
    expire_in: 1 days
    when: on_success
tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  script:
    - npm ci
    - npm run test:ci
And here are the relevant package.json scripts that the above config runs in CI:
"scripts": {
"build": "webpack --config webpack.prod.js",
"dev": "webpack serve --config webpack.dev.js",
"start:ci": "export NODE_OPTIONS=--max_old_space_size=4096 serve dist --no-clipboard --listen ${PORT:-3000}",
"test": "cross-env NODE_ENV=test && npm run test:cypress && npm run test:jest",
"test:ci": "cross-env NODE_ENV=test && start-server-and-test start:ci http-get://localhost:3000 test",
"test:cypress": "cypress run --headless --browser chrome",
"test:jest": "jest",
},
It is the final tests stage that is currently failing. In the console output from the GitLab runner, where it says 'Killed' followed by 'err no 137', it appears that it just stops the start:ci process, which is what runs my local server so the integration tests can run against it.
Finally, here is a small snippet of my test; I use the cy.visit command, which never responds as the server is killed:
describe('Code entry page - API responses are managed correctly', () => {
  beforeEach(() => {
    cy.visit(routes.APP.HOME); // this just times out
  });
  ...
EDIT
I have tried running the test:ci script inside the exact same Docker container it uses (cypress/browsers:node14.15.0-chrome86-ff82) locally (not in GitLab CI) and it works with no problem. Surely the issue must lie with GitLab?
Error 137 typically means that your Docker container was killed due to not having sufficient resources. As I mentioned in the comments, your current container is running with 4 GB of memory. Since you define no tag keys in your CI/CD, you are likely running on GitLab's shared Linux runner cloud, which uses an n1-standard-1 instance on GCP, limited to 3.75 GB of RAM. Essentially, as soon as your test container starts, it instantly consumes all the memory available on the runner and your container is killed.
To get around the memory limitation, you must run your own gitlab-runner. There is no way to get more memory on the shared runner cloud. You can test this fairly easily by spinning up a gitlab-runner on your local machine (see the instructions here for installing a GitLab runner). Once you've installed your runner, register it with a high-memory tag, then update your CI/CD to use that tag with the following syntax on that last job:
tests:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: tests
  tags:
    - high-memory
  script:
    - npm ci
    - npm run test:ci
Your jobs are allowed to use as much memory as your runner's host has. If your machine has 8 GB of memory, jobs can use up to 8 GB.
If your machine doesn't have enough memory by itself, you could always temporarily spin up a cloud instance with sufficient memory. For example, you could try a DigitalOcean droplet with 16 GB of memory for about $0.11/hour. That would let you run an instance for a couple of hours to test the solution before deciding what's viable long-term.
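For reference, registering such a runner looks roughly like this (a sketch; the URL, token, and executor are placeholders for your own setup):
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<YOUR_PROJECT_TOKEN>" \
  --executor docker \
  --docker-image cypress/browsers:node14.15.0-chrome86-ff82 \
  --tag-list high-memory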

npm stuck at "Starting the development server..."

I know this topic has been created before, but nothing I tried could fix it. The problem is precisely the following: I have a React app on an AWS EC2 server that I want to start automatically whenever the instance boots. For this purpose, the following script is executed at the start of the AWS server:
#!/usr/bin/python3
import time
import shlex, subprocess
args = shlex.split('sudo su ubuntu -c "/usr/bin/npm start --prefix /home/ubuntu/my-app > /home/ubuntu/output.txt 2>&1"')
subprocess.Popen(args)
When I run the script manually, everything works just fine. But whenever it is run during the server start, I get the following log:
> my-app@0.1.0 start /home/ubuntu/my-app
> react-scripts start

ℹ 「wds」: Project is running at http://172.31.14.57/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /home/ubuntu/my-app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
That's all; nothing happens. Does anybody have an idea how to fix this? I thought it had something to do with the fact that it is started from root, so I tried to fix that by using sudo su ubuntu -c, but it doesn't help either.
My guess is that it's the same problem as this issue. When you call npm start, by default it calls the start script in package.json, which points to:
"start": "react-scripts start",
There was a change in react-scripts that checks for a non-interactive shell when the CI variable is not set (see here).
But when you start it using sudo su ubuntu -c, it starts a non-interactive shell.
What could work is setting the CI variable to true, like this:
export CI=true
sudo su ubuntu -c "/usr/bin/npm start ....."
You can also create a new script inside package.json:
"scripts": {
  "start": "react-scripts start",
  "ec2-dev": "CI=true;export CI; react-scripts start",
  .....
}
and run :
/usr/bin/npm run ec2-dev
instead of npm start
Starting a development server is only useful if you mount the src folder to a local directory via NFS or another file-sharing mechanism, in order to use react-scripts' nodemon-like ability to instantly restart your server on changes during development.
If it's for other purposes, you need to build the app using:
npm run build
then provision the build directory artifacts and serve them using a web server.
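For example (a sketch; serve is just one common static file server, and the output folder name depends on your build setup):
npm run build
npx serve -s build --listen 3000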

Angular 5.2 : Getting error while building application using VSTS build server : CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

Suddenly, builds started failing with the following error:
2019-01-03T12:57:22.2223175Z EXEC : FATAL error : CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
error MSB3073: The command "node node_modules/webpack/bin/webpack.js --env.prod" exited with code 3.
I have tried all the available solutions, like:
1) Updating the virtual memory of Windows
2) Updating the NPM and Node versions
3) Adding a command to increase --max_old_space_size
I am still facing the same issue while publishing the Angular app. It works locally but fails on the build server while publishing.
Locally, I get the following error:
<--- Last few GCs --->
[2212:000002BC74FB20D0] 152613 ms: Mark-sweep 1411.4 (1466.9) -> 1411.4 (1466.9) MB, 2117.6 / 0.0 ms last resort GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0000032354625EE1 <JSObject>
1: bindContainer [node_modules\typescript\lib\typescript.js:~22960] [pc=000003AD4C9DBFB9](this=000000063100BE21 <JSGlobal Object>,node=000001B7FE6A7E61 <NodeObject map = 0000020A3EA721A1>,containerFlags=45)
2: visitNodeArray [node_modules\typescript\lib\typescript.js:~15947] [pc=000003AD4C9A32A5](this=000000063100BE21 <JSGloba...
This error occurs when the memory the application needs during execution exceeds the memory allocated to it; by default, Node allocates a limited heap size.
You can increase this size for every build by changing your package.json, so that both locally and on the server the application prepares the publish with an adequate memory allocation:
"build": "node --max-old-space-size=4096 ./node_modules/@angular/cli/bin/ng build --prod",
Another option is to edit ng.cmd inside your npm folder so it always increases the memory size:
@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" --max_old_space_size=8192 "%~dp0\..\@angular\cli\bin\ng" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node --max_old_space_size=8192 "%~dp0\..\@angular\cli\bin\ng" %*
)
And a less elegant solution is using a dependency that handles this problem.
Run from the root location of your project:
npm install -g increase-memory-limit
increase-memory-limit
On the server, you will need to script these steps before the publish.
More details about the package here: https://www.npmjs.com/package/increase-memory-limit

Handle background process while running `npm test`

I need to perform a set of actions before and after running Mocha tests in Node.js:
1. Run a script that creates the user and bot accounts on a server. This script uses an API call that opens a connection to the server, and the process does not terminate after the accounts are created.
2. Run the server with the account credentials created above.
3. Run the Mocha tests.
4. Delete the accounts created on the server.
In package.json:
"scripts": {
  "start": "node index.js",
  "test": "node createAccounts.js && (sleep 10 && mocha -t 60000 ./*spec.js); node deleteAccounts.js"
}
The problem when running npm test is that after the tests pass, the process does not terminate. A workaround I came up with was to use node createAccounts.js & (sleep 10 && mocha -t 60000 ./*spec.js); node deleteAccounts.js & so that the non-terminating processes run in the background and the tests exit properly.
I finally settled on the test script below, which runs the non-terminating processes in the background and saves the exit code in $s. There is just one problem: the background processes are not being terminated :(
"test": "node createAccounts.js & (sleep 10 && mocha -t 60000 ./*spec.js --exit) && s=0 || s=$? ; node deleteAccounts.js & sleep 5 && exit $s"

Node.js heap out of memory

Today I ran my filesystem-indexing script to refresh the RAID files index, and after 4 hours it crashed with the following error:
[md5:] 241613/241627 97.5%
[md5:] 241614/241627 97.5%
[md5:] 241625/241627 98.1%
Creating missing list... (79570 files missing)
Creating new files list... (241627 new files)
<--- Last few GCs --->
11629672 ms: Mark-sweep 1174.6 (1426.5) -> 1172.4 (1418.3) MB, 659.9 / 0 ms [allocation failure] [GC in old space requested].
11630371 ms: Mark-sweep 1172.4 (1418.3) -> 1172.4 (1411.3) MB, 698.9 / 0 ms [allocation failure] [GC in old space requested].
11631105 ms: Mark-sweep 1172.4 (1411.3) -> 1172.4 (1389.3) MB, 733.5 / 0 ms [last resort gc].
11631778 ms: Mark-sweep 1172.4 (1389.3) -> 1172.4 (1368.3) MB, 673.6 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3d1d329c9e59 <JS Object>
1: SparseJoinWithSeparatorJS(aka SparseJoinWithSeparatorJS) [native array.js:~84] [pc=0x3629ef689ad0] (this=0x3d1d32904189 <undefined>,w=0x2b690ce91071 <JS Array[241627]>,L=241627,M=0x3d1d329b4a11 <JS Function ConvertToString (SharedFunctionInfo 0x3d1d3294ef79)>,N=0x7c953bf4d49 <String[4]\: ,\n >)
2: Join(aka Join) [native array.js:143] [pc=0x3629ef616696] (this=0x3d1d32904189 <undefin...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/usr/bin/node]
2: 0xe2c5fc [/usr/bin/node]
3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/bin/node]
4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/bin/node]
6: v8::internal::Runtime_SparseJoinWithSeparator(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
7: 0x3629ef50961b
The server is equipped with 16 GB RAM and 24 GB SSD swap. I highly doubt my script exceeded 36 GB of memory; at least it shouldn't have.
The script creates an index of files stored as an array of objects with file metadata (modification dates, permissions, etc.; no big data).
Here's the full script code:
http://pastebin.com/mjaD76c3
I've already experienced weird Node issues with this script in the past, which forced me, e.g., to split the index into multiple files, as Node was glitching when working on such big files as a string. Is there any way to improve Node.js memory management with huge datasets?
If I remember correctly, V8 has a strict standard limit on memory usage of around 1.7 GB if you do not increase it manually.
In one of our products, we followed this solution in our deploy script:
node --max-old-space-size=4096 yourFile.js
There is also a new-space flag, but as I read in a-tour-of-v8-garbage-collection, new space only holds newly created short-term data, while old space contains all referenced data structures, which should be the best option in your case.
If you want to increase the memory usage of node globally, not just for a single script, you can export an environment variable, like this:
export NODE_OPTIONS=--max_old_space_size=4096
Then you do not need to play with files when running builds like npm run build.
Just in case anyone runs into this in an environment where they cannot set node properties directly (in my case a build tool):
NODE_OPTIONS="--max-old-space-size=4096" node ...
You can set the node options using an environment variable if you cannot pass them on the command line.
Here are some flag values with additional info on how to allow more memory when you start up your node server.
1 GB - 8 GB:
#increase to 1gb
node --max-old-space-size=1024 index.js
#increase to 2gb
node --max-old-space-size=2048 index.js
#increase to 3gb
node --max-old-space-size=3072 index.js
#increase to 4gb
node --max-old-space-size=4096 index.js
#increase to 5gb
node --max-old-space-size=5120 index.js
#increase to 6gb
node --max-old-space-size=6144 index.js
#increase to 7gb
node --max-old-space-size=7168 index.js
#increase to 8gb
node --max-old-space-size=8192 index.js
I just faced the same problem with my EC2 t2.micro instance, which has 1 GB of memory.
I resolved the problem by creating a swap file using this url and setting the following environment variable:
export NODE_OPTIONS=--max_old_space_size=4096
Finally, the problem was gone.
I hope this will be helpful to others in the future.
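For reference, creating the swap file typically looks like this (a sketch; the 1 GB size is illustrative):
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile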
I was struggling with this even after setting --max-old-space-size.
Then I realised I needed to put the --max-old-space-size option before the karma script.
It's also best to specify both syntaxes, --max-old-space-size and --max_old_space_size. My script for karma:
node --max-old-space-size=8192 --optimize-for-size --max-executable-size=8192 --max_old_space_size=8192 --optimize_for_size --max_executable_size=8192 node_modules/karma/bin/karma start --single-run --max_new_space_size=8192 --prod --aot
reference https://github.com/angular/angular-cli/issues/1652
I encountered this issue when trying to debug with VSCode, so I just wanted to add how you can pass the argument in your debug setup.
You can add it to the runtimeArgs property of your config in launch.json.
See example below.
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceRoot}\\server.js"
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Training Script",
      "program": "${workspaceRoot}\\training-script.js",
      "runtimeArgs": [
        "--max-old-space-size=4096"
      ]
    }
  ]
}
I had a similar issue while doing an AOT Angular build. The following commands helped me:
npm install -g increase-memory-limit
increase-memory-limit
Source: https://geeklearning.io/angular-aot-webpack-memory-trick/
I just want to add that on some systems, even increasing the node memory limit with --max-old-space-size is not enough, and an OS error like this appears:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
In this case, it is probably because you have reached the maximum number of memory maps (mmap) per process.
You can check the max_map_count by running
sysctl vm.max_map_count
and increase it by running
sysctl -w vm.max_map_count=655300
and fix it to not be reset after a reboot by adding this line
vm.max_map_count=655300
to the /etc/sysctl.conf file.
Check here for more info.
A good way to analyse the error is to run the process with strace:
strace node --max-old-space-size=128000 my_memory_consuming_process.js
I faced this same problem recently and came across this thread, but my problem was with a React app. The below change to the node start command solved my issue.
Syntax
node --max-old-space-size=<size> path-to/fileName.js
Example
node --max-old-space-size=16000 scripts/build.js
Why is the size 16000 in max-old-space-size?
Basically, it varies depending on the memory allocated to that thread and your node settings.
How to verify and choose the right size?
This basically lives in the V8 engine. The code below helps you understand the heap size of your local node V8 engine:
const v8 = require('v8');
const totalHeapSize = v8.getHeapStatistics().total_available_size;
const totalHeapSizeGb = (totalHeapSize / 1024 / 1024 / 1024).toFixed(2);
console.log('totalHeapSizeGb: ', totalHeapSizeGb);
Steps to fix this issue (on Windows):
Open a command prompt, type %appdata%, and press Enter.
Navigate to the %appdata% > npm folder.
Open or edit ng.cmd in your favorite editor.
Add --max_old_space_size=8192 to the IF and ELSE blocks.
Your ng.cmd file looks like this after the change:
@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
)
Recently, in one of my projects, I ran into the same problem. I tried a couple of things, which anyone can try as debugging steps to identify the root cause:
As everyone suggested, increase the memory limit in node by adding this command:
{
  "scripts": {
    "server": "node --max-old-space-size={size-value} server/index.js"
  }
}
Here, the size-value I defined for my application was 1536 (as my Kubernetes pod had a 2 GB memory limit and a 1.5 GB request; see the sketch below).
So always define the size-value based on your frontend infrastructure/architecture limit (a little less than the limit).
One strict callout on the above command: use --max-old-space-size after the node command, not after the filename server/index.js.
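For context, those pod numbers correspond to a Kubernetes resources block roughly like this (a sketch; adjust to your own cluster):
resources:
  requests:
    memory: "1536Mi"
  limits:
    memory: "2Gi"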
If you have an nginx config file, check the following (see the sketch after this list):
worker_connections: 16384 (for heavy frontend applications) [nginx's default is 512 connections per worker, which is too low for modern applications]
use: epoll (an efficient method) [nginx supports a variety of connection-processing methods]
http: add the following directives to free your workers from being kept busy with unwanted tasks (client_body_timeout, reset_timeout_connection, client_header_timeout, keepalive_timeout, send_timeout).
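A sketch of those directives in an nginx config (values are illustrative; note that the directive nginx actually ships is spelled reset_timedout_connection):
events {
    worker_connections 16384;  # default is 512
    use epoll;                 # efficient connection processing on Linux
}
http {
    client_body_timeout       12s;
    client_header_timeout     12s;
    keepalive_timeout         15s;
    send_timeout              10s;
    reset_timedout_connection on;  # release memory held by timed-out connections
}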
Remove or turn off all logging/tracking middleware such as APM, Kafka, UTM tracking, Prerender (SEO), etc.
Now for code-level debugging: in your main server file, remove unwanted console.log calls that just print a message.
Now check every server route (app.get(), app.post(), ...) for the scenarios below:
data => if (data) res.send(data) // do you really need to wait for data, or does that API return something in the response that you have to wait for? If not, modify it like this:
data => res.send(data) // this will not block your thread; apply it everywhere it's needed
The else part: if no error is coming, then simply return res.send({}); NO console.log here.
The error part: some people name it error and others err, which creates confusion and mistakes, like this:
`error => { next(err) } // here err is undefined`
`err => { next(error) } // here error is undefined`
`app.get(API, (req, res) => {
  error => next(error) // here next is not defined
})`
Remove winston, elastic-apm-node, and other unused libraries using the npx depcheck command.
In the axios service file, check whether the methods and logging are handled properly, e.g.:
if(successCB) console.log("success") successCB(response.data) // this is a wrong statement: on success you just log, while successCB is called outside the if block, so it is invoked in the failure case as well.
Avoid using stringify, parse, etc. on excessively large datasets (which I can see in the logs you've shown above).
Last but not least, every time your application crashes or your pods restart, check the logs. In the log, look specifically for the section Security context.
It will tell you why, where, and what is the culprit behind the crash.
I will mention two types of solutions.
My solution: in my case, I added this to my environment variables:
export NODE_OPTIONS=--max_old_space_size=20480
But even after restarting my computer, it still did not work. My project folder was on the d:\ disk; once I moved my project to the c:\ disk, it worked.
My teammate's solution: this package.json configuration worked as well:
"start": "rimraf ./build && react-scripts --expose-gc --max_old_space_size=4096 start",
For other beginners like me who didn't find a suitable solution for this error: check the installed node build (x32, x64, x86). I have a 64-bit CPU but had installed the x86 node version, which caused the CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory error.
If you want to change the memory globally for node (Windows), go to Advanced System Settings -> Environment Variables -> New user variable:
variable name = NODE_OPTIONS
variable value = --max-old-space-size=4096
You can also change Windows' environment variables with:
$env:NODE_OPTIONS="--max-old-space-size=8192"
Unix (Mac OS)
Open a terminal and open our .zshrc file using nano like so (this will create one, if one doesn't exist):
nano ~/.zshrc
Update our NODE_OPTIONS environment variable by adding the following line into our currently open .zshrc file:
export NODE_OPTIONS=--max-old-space-size=8192 # increase node memory limit
Please note that we can set the number of megabytes passed in to whatever we like, provided our system has enough memory (here we are passing in 8192 megabytes which is roughly 8 GB).
Save and exit nano by pressing: ctrl + x, then y to agree and finally enter to save the changes.
Close and reopen the terminal to make sure our changes have been recognised.
We can print out the contents of our .zshrc file to see if our changes were saved like so: cat ~/.zshrc.
Linux (Ubuntu)
Open a terminal and open the .bashrc file using nano like so:
nano ~/.bashrc
The remaining steps are similar to the Mac steps above, except we would most likely be using ~/.bashrc by default (as opposed to ~/.zshrc). So these values would need to be substituted!
Link to Nodejs Docs
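To confirm the new limit took effect after reopening the terminal, you can print V8's heap limit in megabytes (a quick check; the reported number is typically the flag value plus a small overhead):
node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"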
Use the option --optimize-for-size. It will focus on using less RAM.
I had this error on AWS Elastic Beanstalk; upgrading the instance type from t3.micro (free tier) to t3.small fixed it.
In my case, I upgraded the Node.js version to the latest (12.8.0) and it worked like a charm.
Upgrade node to the latest version. I was on node 6.6 with this error, upgraded to 8.9.4, and the problem went away.
For Angular, this is how I fixed it.
In package.json, inside the scripts section, add this:
"scripts": {
  "build-prod": "node --max_old_space_size=5048 ./node_modules/@angular/cli/bin/ng build --prod"
},
Now, in the terminal/cmd, instead of using ng build --prod, just use:
npm run build-prod
If you want to use this configuration for build only, just remove --prod from all 3 places.
I experienced the same problem today. For me, the problem was that I was trying to import a lot of data into the database in my NextJS project.
So what I did was install the win-node-env package like this:
yarn add win-node-env
Because my development machine was Windows, I installed it locally rather than globally. You can also install it globally like this: yarn global add win-node-env
Then, in the package.json file of my NextJS project, I added another startup script like this:
"dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev"
Here I am passing the node option, i.e., setting 8 GB as the limit.
So my package.json file somewhat looks like this:
{
  "name": "my_project_name_here",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev",
    "build": "next build",
    "lint": "next lint"
  },
  ......
}
And then I run it like this:
yarn dev_more_mem
For me, the issue occurred only on my development machine (because I was importing large amounts of data), hence this solution. I thought I'd share it as it might come in handy for others.
I had the same issue on a Windows machine, and I noticed that for some reason it didn't work in Git Bash but did work in PowerShell.
Just in case it may help people having this issue while using Node.js apps that produce heavy logging: a colleague solved it by piping the standard output(s) to a file.
If you are trying to launch not node itself but some other software, for example webpack, you can use the environment variable and the cross-env package:
$ cross-env NODE_OPTIONS='--max-old-space-size=4096' \
webpack --progress --config build/webpack.config.dev.js
For Angular project bundling, I've added the below line to my package.json file in the scripts section:
"build-prod": "node --max_old_space_size=5120 ./node_modules/@angular/cli/bin/ng build --prod --base-href /"
Now, to bundle my code, I use npm run build-prod instead of ng build --requiredFlagsHere.
Hope this helps!
If none of the given answers work for you, check whether your installed node is compatible with your system (i.e., 32-bit or 64-bit). Usually this type of error occurs because of incompatible node and OS versions; the terminal/system will not tell you, but will keep giving you the out-of-memory error.
None of these answers worked for me (though I didn't try updating npm).
Here's what did: my program used two arrays, one parsed from JSON and another generated from the data in the first. Just before the second loop, I simply had to reset the first JSON-parsed array back to [].
That way a lot of memory is freed, allowing the program to continue execution without failing memory allocation at some point.
Cheers!
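In code, the pattern is essentially this (a sketch with hypothetical names, not the original program):
let parsed = JSON.parse(hugeJsonString);   // first array, parsed from JSON
const derived = buildDerivedData(parsed);  // second array, generated from the first
// Reset the only reference to the parsed array so the GC can reclaim it
// before the next memory-heavy loop runs over `derived`.
parsed = [];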
You can fix a "heap out of memory" error in Node.js by below approaches.
Increase the amount of memory allocated to the Node.js process by using the --max-old-space-size flag when starting the application. For example, you can increase the limit to 4GB by running node --max-old-space-size=4096 index.js.
Use a memory leak detection tool, such as the Node.js heap dump module, to identify and fix memory leaks in your application. You can also use the node inspector and use chrome://inspect to check memory usage.
Optimize your code to reduce the amount of memory needed. This might involve reducing the size of data structures, reusing objects instead of creating new ones, or using more efficient algorithms.
Use a garbage collector (GC) algorithm to manage memory automatically. Node.js uses the V8 engine's garbage collector by default, but you can also use other GC algorithms such as the Garbage Collection in Node.js
Use a containerization technology like Docker which limits the amount of memory available to the container.
Use a process manager like pm2 which allows to automatically restart the node application if it goes out of memory.
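For the leak-detection point, a minimal sketch using Node's built-in v8 module (available since Node 11.13) to write a heap snapshot you can open in Chrome DevTools' Memory tab:
const v8 = require('v8');

// Writes a .heapsnapshot file to the current working directory and
// returns its filename; open it in Chrome DevTools to inspect what
// is retaining memory.
const snapshotPath = v8.writeHeapSnapshot();
console.log(`Heap snapshot written to ${snapshotPath}`);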
