
【Hive】Error: Container is running beyond physical memory limits. 4.0 GB of 4 GB physical memory used


I recently hit a job that failed with the following error:

Container [pid=105939,containerID=container_e03_1599034722009_11568264_01_000200] is running beyond physical memory limits. 
Current usage: 4.0 GB of 4 GB physical memory used; 6.0 GB of 8.4 GB virtual memory used. 
Killing container.

The parameters were originally set as:

set mapreduce.map.memory.mb=8192;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx8192m;
set mapreduce.reduce.java.opts=-Xmx4096m;

These settings are the problem. Put simply, the Java heap (-Xmx in *.java.opts) must be smaller than the Container size (mapreduce.*.memory.mb): the Container also has to hold the JVM's off-heap memory, and YARN kills the Container once its total physical memory reaches the limit.

Adjusting the parameters as follows resolved the error:

set mapreduce.map.memory.mb=8192;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx6144m;
set mapreduce.reduce.java.opts=-Xmx3072m;
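
Both adjusted heap sizes follow the common rule of thumb of keeping -Xmx at roughly 75–80% of the Container allocation, leaving headroom for off-heap JVM memory (metaspace, thread stacks, native buffers): 6144m = 0.75 × 8192 MB for map tasks and 3072m = 0.75 × 4096 MB for reduce tasks. The exact ratio is a convention, not a hard requirement.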

The reasoning is spelled out in the StackOverflow answer referenced at the end, which walks through an example cluster:

Each machine in our cluster has 48 GB of RAM. Some of this RAM should be reserved for Operating System usage. On each node, we'll assign 40 GB RAM for YARN to use and keep 8 GB for the Operating System.

For our example cluster, we have the minimum RAM for a Container (yarn.scheduler.minimum-allocation-mb) = 2 GB. We'll thus assign 4 GB for Map task Containers, and 8 GB for Reduce task Containers.

In mapred-site.xml:

mapreduce.map.memory.mb: 4096

mapreduce.reduce.memory.mb: 8192
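
For reference, these values would appear in mapred-site.xml as standard Hadoop <property> entries, roughly like this (a sketch using the example values above):

<!-- Container sizes YARN allocates for Map and Reduce tasks -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>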

Each Container will run JVMs for the Map and Reduce tasks. The JVM heap size should be set to lower than the Map and Reduce memory defined above, so that they are within the bounds of the Container memory allocated by YARN.

In mapred-site.xml:

mapreduce.map.java.opts: -Xmx3072m

mapreduce.reduce.java.opts: -Xmx6144m
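
Again as a sketch, the corresponding mapred-site.xml entries would look roughly like:

<!-- JVM heap for the task JVMs, kept below the Container sizes defined above -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6144m</value>
</property>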

Together, these settings bound the physical RAM that Map and Reduce tasks will use: the memory.mb values are what YARN enforces on each Container, and the -Xmx values cap the heap of the JVM running inside it.

To sum it up:

  1. In YARN, you should use the mapreduce.* configs, not the old mapred.* ones.
  2. What you configure with mapreduce.*.memory.mb is how much memory you request for the Container, not a hard cap on what the task may allocate.
  3. The hard limits are set with the java.opts (-Xmx) settings listed above.
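
As a quick sanity check (a usage sketch, not part of the original post), a Hive session will print the value it is actually using when you run set with just the property name:

-- each statement prints property=value for the current session
set mapreduce.map.memory.mb;
set mapreduce.map.java.opts;
set mapreduce.reduce.memory.mb;
set mapreduce.reduce.java.opts;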


[Reference] https://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
