The required MAP capability is more than the supported max container capability in the cluster. Killing the Job

A YARN memory configuration problem

The following error appeared while running a Hive query:

Ended Job = job_1544003470555_0007 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

To isolate the problem, run a YARN test job:

hadoop jar hadoop-mapreduce-examples-3.0.0-cdh6.0.0.jar pi 2 10

It fails with:

18/12/11 17:58:56 INFO mapreduce.Job: Job job_1544003470555_0008 failed with state KILLED due to: The required MAP capability is more than the supported max container capability in the cluster. Killing the Job. mapResourceRequest: <memory:2048, vCores:2> maxContainerCapability:<memory:1024, vCores:1>

Solution

Adjust the following parameters to increase the memory available to tasks; tune the exact values to your own cluster:

mapreduce.map.memory.mb=2048
mapreduce.reduce.memory.mb=2048
yarn.nodemanager.vmem-pmem-ratio=3
(See also the YARN platform parameter reference linked in the original post.)
With these settings, a map task's virtual-memory ceiling at runtime is 2048 * 3 = 6144 MB.
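As a quick sanity check, the virtual-memory ceiling implied by the two settings above can be computed directly (a sketch; 2048 and 3 are simply the values configured above):

```shell
# Virtual-memory ceiling for one map task:
# mapreduce.map.memory.mb * yarn.nodemanager.vmem-pmem-ratio
map_mb=2048
vmem_ratio=3
vmem_limit=$(( map_mb * vmem_ratio ))
echo "${vmem_limit} MB"   # 6144 MB
```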

A similar constraint applies to container sizing. The memory the ResourceManager allocates to a container must be at least

yarn.scheduler.minimum-allocation-mb=2G

and must not exceed

yarn.scheduler.maximum-allocation-mb=8G
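The scheduler also rounds each memory request up before checking it against the cap. A minimal sketch of that check, assuming the 2G/8G values above and assuming the common round-up-to-a-multiple-of-the-minimum behavior:

```shell
# Sketch of scheduler request normalization (assumption: requests are
# rounded up to a multiple of minimum-allocation-mb, then rejected if
# they exceed maximum-allocation-mb).
min_mb=2048   # yarn.scheduler.minimum-allocation-mb (2G)
max_mb=8192   # yarn.scheduler.maximum-allocation-mb (8G)

normalize() {
  local req=$1
  # Round the request up to the next multiple of min_mb.
  local rounded=$(( (req + min_mb - 1) / min_mb * min_mb ))
  if [ "$rounded" -gt "$max_mb" ]; then
    echo "REJECTED"
  else
    echo "$rounded"
  fi
}

normalize 1500    # -> 2048 (rounded up to the minimum)
normalize 9000    # -> REJECTED (above the maximum)
```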

The CPU the ResourceManager allocates to a container must likewise fall between a minimum and a maximum, set via:

yarn.scheduler.minimum-allocation-vcores=2
yarn.scheduler.maximum-allocation-vcores=8
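Putting the memory and vCore caps together, the check that killed the job above can be sketched as follows (the caps are taken from the error message and the settings above; this is an illustration, not the scheduler's actual code):

```shell
# Does a container request fit under maxContainerCapability?
# Mirrors the "required MAP capability is more than the supported max
# container capability" check reported in the job log.
fits() {
  local req_mb=$1 req_vcores=$2 max_mb=$3 max_vcores=$4
  if [ "$req_mb" -le "$max_mb" ] && [ "$req_vcores" -le "$max_vcores" ]; then
    echo "FITS"
  else
    echo "KILLED"
  fi
}

fits 2048 2 1024 1   # -> KILLED: the original caps, as in the failed job
fits 2048 2 8192 8   # -> FITS: after raising the scheduler maximums
```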
