Previously, because of java.lang.OutOfMemoryError: unable to create new native thread, the Xss parameter was set; see http://zouqingyun.blog.51cto.com/782246/1879975
The NodeManager still throws this exception, and tasks of a map-reduce job are now hitting it as well.
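For reference, a minimal sketch of that kind of stack-size tuning applied to the map-task JVMs; mapreduce.map.java.opts is the real Hadoop property, but the -Xmx/-Xss values here are illustrative assumptions, not the settings from the linked post:

import org.apache.hadoop.conf.Configuration;

public class MapTaskXssSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Illustrative values only: a smaller per-thread native stack lets
        // more threads fit inside the same container memory budget.
        conf.set("mapreduce.map.java.opts", "-Xmx512m -Xss512k");
    }
}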
2. Symptoms
A map-reduce job was run that processed nothing but small files and ended up generating more than 20,000 map tasks. Many of the tasks in this job failed with java.lang.OutOfMemoryError: unable to create new native thread. Watching some of these tasks showed their thread count climbing steadily to more than 7,000 threads, at which point the error appeared. Since each map task is allocated 800 MB and ThreadStackSize is the default 1024 KB, those 7,000 threads alone account for roughly 7 GB of stack space, so memory is exhausted. The tasks' thread dumps kept printing entries like the following:
"Thread-3689" daemon prio=10 tid=0x00007fb6bf364000 nid=0x2331 in Object.wait() [0x00007fb5b9b94000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:638) - locked <0x00000000f89800d0> (a java.util.LinkedList) "Thread-3688" daemon prio=10 tid=0x00007fb6bf362000 nid=0x10a9 in Object.wait() [0x00007fb5b9c95000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:638) - locked <0x00000000f89701c0> (a java.util.LinkedList) "Thread-3687" daemon prio=10 tid=0x00007fb6bf35a800 nid=0xf23 in Object.wait() [0x00007fb5b9d96000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:638) - locked <0x00000000f89681c0> (a java.util.LinkedList) "Thread-3686" daemon prio=10 tid=0x00007fb6bf358800 nid=0xde9 in Object.wait() [0x00007fb5b9e97000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:638)
3. Hypotheses
1. The NodeManager exception is probably related to this job. When all of its tasks land on one machine (roughly 40 containers) and the task in every container spawns 7,000 threads (from generating lots of small files?), that is about 40 × 7,000 = 280,000 threads, which exhausts max user processes (262144). When the NodeManager then needs to create a new thread, it fails with java.lang.OutOfMemoryError: unable to create new native thread. (PS: this job was indeed running on its schedule yesterday.)
2. It may also be a memory problem somewhere inside hadoop/yarn itself. See a similar issue: https://issues.apache.org/jira/browse/YARN-4581
4. Postscript
When hadoop has to process a large number of small files, use org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat and set mapreduce.input.fileinputformat.split.maxsize = 5147483648, so that many small files are combined into a single split instead of each file getting its own map task.
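A minimal job-driver sketch of that fix; the input format class and the configuration key are the real Hadoop ones, while the class name, job name, and identity map-only body are illustrative assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallFilesJobSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Cap each combined split at 5147483648 bytes (~4.8 GB), the value
        // from the postscript above.
        conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 5147483648L);

        Job job = Job.getInstance(conf, "combine-small-files");
        job.setJarByClass(SmallFilesJobSketch.class);
        // CombineTextInputFormat packs many small files into each split,
        // instead of one map task per file (20,000+ map tasks above).
        job.setInputFormatClass(CombineTextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}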