Today I ran my filesystem indexing script to refresh the RAID file index, and after 4 hours it crashed with the following error:

[md5:]  241613/241627 97.5%  
[md5:]  241614/241627 97.5%  
[md5:]  241625/241627 98.1%
Creating missing list... (79570 files missing)
Creating new files list... (241627 new files)

<--- Last few GCs --->

11629672 ms: Mark-sweep 1174.6 (1426.5) -> 1172.4 (1418.3) MB, 659.9 / 0 ms [allocation failure] [GC in old space requested].
11630371 ms: Mark-sweep 1172.4 (1418.3) -> 1172.4 (1411.3) MB, 698.9 / 0 ms [allocation failure] [GC in old space requested].
11631105 ms: Mark-sweep 1172.4 (1411.3) -> 1172.4 (1389.3) MB, 733.5 / 0 ms [last resort gc].
11631778 ms: Mark-sweep 1172.4 (1389.3) -> 1172.4 (1368.3) MB, 673.6 / 0 ms [last resort gc].


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x3d1d329c9e59 <JS Object>
1: SparseJoinWithSeparatorJS(aka SparseJoinWithSeparatorJS) [native array.js:~84] [pc=0x3629ef689ad0] (this=0x3d1d32904189 <undefined>,w=0x2b690ce91071 <JS Array[241627]>,L=241627,M=0x3d1d329b4a11 <JS Function ConvertToString (SharedFunctionInfo 0x3d1d3294ef79)>,N=0x7c953bf4d49 <String[4]\: ,\n  >)
2: Join(aka Join) [native array.js:143] [pc=0x3629ef616696] (this=0x3d1d32904189 <undefin...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/bin/node]
 2: 0xe2c5fc [/usr/bin/node]
 3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/bin/node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
 5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/bin/node]
 6: v8::internal::Runtime_SparseJoinWithSeparator(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
 7: 0x3629ef50961b

The server has 16 GB of RAM and 24 GB of SSD swap. I very much doubt my script exceeded 36 GB of memory; at the very least it shouldn't have.

The script builds a file index stored as an array of objects containing file metadata (modification date, permissions, etc.; no large payloads).
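
The actual script is only available via the pastebin link below; the following is just an illustrative sketch of what one such index entry might look like, with field names assumed rather than taken from the real code:

// Illustrative only: assumed shape of a single index entry built from fs.stat metadata.
const fs = require('fs');
const path = require('path');

function indexEntry(filePath) {
  const stats = fs.statSync(filePath);
  return {
    path: filePath,
    name: path.basename(filePath),
    size: stats.size,                  // bytes
    mtime: stats.mtime.toISOString(),  // modification date
    mode: stats.mode,                  // permission bits
  };
}

// The index described above would then be an array of such objects:
// const index = allFiles.map(indexEntry);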

The full script code is here: http://pastebin.com/mjaD76c3

I have run into odd Node issues with this script in the past that forced me to, e.g., split the index into multiple files, since Node would choke when working with such big strings. Is there any way to improve Node.js memory management for huge datasets?
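
The "split the index into multiple files" workaround mentioned above boils down to never materializing the whole index as a single string; notably, the stack trace above dies inside Array.join (SparseJoinWithSeparator), i.e. during serialization. A rough sketch of that idea, writing the index as newline-delimited JSON so no single string ever has to hold all ~241k records at once (assuming an array of entry objects like those described earlier; this is not code from the linked script):

// Sketch: stream the index to disk one JSON line per entry (NDJSON)
// instead of JSON.stringify-ing the whole array at once.
const fs = require('fs');

function writeIndex(entries, outFile) {
  return new Promise((resolve, reject) => {
    const out = fs.createWriteStream(outFile);
    out.on('error', reject);
    out.on('finish', resolve);
    for (const entry of entries) {
      // Each write only allocates a small per-entry string.
      // (Backpressure is ignored here for brevity.)
      out.write(JSON.stringify(entry) + '\n');
    }
    out.end();
  });
}

// Usage: writeIndex(index, 'files.ndjson').then(() => console.log('index written'));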


Current Answer

I ran into the same problem today. In my case, the issue was that I was trying to import a large amount of data into the database of my NextJS project.

So what I did was install the win-node-env package like this:

yarn add win-node-env

because my development machine runs Windows. I installed it locally, not globally. You can also install it globally like this: yarn global add win-node-env

Then, in the package.json file of my NextJS project, I added another start script like this:

"dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev"

Here I'm passing NODE_OPTIONS to set 8 GB as the upper limit. My package.json file looks like this:

{
  "name": "my_project_name_here",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev",
    "build": "next build",
    "lint": "next lint"
  },
  ......
}

Then I run it like this:

yarn dev_more_mem

I only faced this issue on my development machine (because I was importing a lot of data), hence this solution. I'm sharing it because it might be useful to others.

Other Answers

You can fix the "heap out of memory" error in Node.js in the following ways:

- Increase the amount of memory allocated to the Node.js process by using the --max-old-space-size flag when starting the application. For example, you can raise the limit to 4 GB by running node --max-old-space-size=4096 index.js.
- Use a memory leak detection tool, such as the Node.js heapdump module, to identify and fix memory leaks in your application. You can also attach the Node inspector and use chrome://inspect to examine memory usage (a minimal sketch of the built-in tooling follows after this list).
- Optimize your code to reduce the amount of memory needed. This might involve shrinking data structures, reusing objects instead of creating new ones, or using more efficient algorithms.
- Tune garbage collection. Node.js uses the V8 engine's garbage collector by default, and its behavior can be adjusted with V8 flags if necessary.
- Use a containerization technology like Docker to limit the amount of memory available to the container.
- Use a process manager like pm2, which can automatically restart the Node application if it runs out of memory.
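
As a concrete starting point for the profiling suggestions above, here is a minimal sketch that uses only Node's built-in process and v8 APIs (v8.writeHeapSnapshot needs Node.js 11.13 or newer); where you place the calls is up to you, and the labels here are purely illustrative:

// Log current heap usage and dump a heap snapshot using built-in APIs.
// The resulting .heapsnapshot file can be loaded in Chrome DevTools
// (chrome://inspect -> Memory -> Load) to look for leaks.
const v8 = require('v8');

function logHeapUsage(label) {
  const { heapUsed, heapTotal } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  console.log(`[${label}] heap ${mb(heapUsed)} MB used of ${mb(heapTotal)} MB`);
}

logHeapUsage('before');
// ... run the memory-heavy part of your code here ...
logHeapUsage('after');

// Writes a .heapsnapshot file into the current working directory.
const snapshotPath = v8.writeHeapSnapshot();
console.log('Heap snapshot written to', snapshotPath);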

I just want to add that on some systems, even raising the Node memory limit with --max-old-space-size is not enough, and the OS can abort the process with an error like this:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

In that case, it is probably because you have hit the per-process limit on memory mappings (mmap).

You can check the current max_map_count by running

sysctl vm.max_map_count

and increase it by running

sysctl -w vm.max_map_count=655300

and make the change persist across reboots by adding the line

vm.max_map_count=655300

to the /etc/sysctl.conf file.

See here for more information.

A good way to analyze the error is to run the process under strace:

strace node --max-old-space-size=128000 my_memory_consuming_process.js

I ran into a similar problem while doing an Angular AOT build. The following commands helped me:

npm install -g increase-memory-limit
increase-memory-limit

来源:https://geeklearning.io/angular-aot-webpack-memory-trick/

For Angular, this is how I fixed it.

In package.json, add this inside the scripts section:

"scripts": {
  "build-prod": "node --max_old_space_size=5048 ./node_modules/@angular/cli/bin/ng build --prod",
},

Now, in the terminal/cmd, instead of using ng build --prod, just use

npm run build-prod

If you want to use this configuration for a plain build, just remove --prod from all three places.

Upgrade Node to the latest version. I got this error on Node 6.6, and after upgrading to 8.9.4 the problem went away.