Linux/Python Learning Forum - 京峰教育 (Jingfeng Education)

Hadoop集群启动NodeManager的时候NodeManager does not satisfy minimum al...

Posted on 2015-3-18 11:13:11
Original post: http://tonylixu.blogspot.hk/2014 ... roubleshooting.html


MapReduce with Yarn - Troubleshooting
Here are some of the errors I encountered while configuring MapReduce on YARN; I hope this helps:


Errors:
1. NodeManager doesn't satisfy minimum allocations:
2014-02-26 16:44:01,191 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from  hadoop3 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
        <description>minimum limit of memory to allocate to each container request
            at the Resource Manager
        </description>
    </property>


Make sure the value of "yarn.scheduler.minimum-allocation-mb" in yarn-site.xml is consistent across all nodes, and that each NodeManager offers at least that much memory: the ResourceManager sends the SHUTDOWN signal when a NodeManager registers with less memory than the scheduler's minimum allocation.
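As a sketch (the sizes are illustrative placeholders, not values from the original post), a yarn-site.xml that avoids this shutdown keeps the NodeManager's advertised memory at or above the scheduler minimum:

```xml
<!-- yarn-site.xml (illustrative values) -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
</property>
<property>
    <!-- Total memory this NodeManager offers to containers;
         must be >= yarn.scheduler.minimum-allocation-mb,
         otherwise the ResourceManager rejects the node at registration. -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
</property>
```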


2. Repeating "Sending out status for container" messages in the nodemanager log file:


2014-02-27 14:45:18,521 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 2 cluster_timestamp: 1393529557445 } attemptId: 1 } id: 1 } state: C_RUNNING diagnostics: "" exit_status: -1000

2014-02-27 14:45:19,526 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 2 cluster_timestamp: 1393529557445 } attemptId: 1 } id: 1 } state: C_RUNNING diagnostics: "" exit_status: -1000

2014-02-27 14:45:19,806 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 21223 for container-id container_1393529557445_0002_01_000001: 106.0 MB of 128 MB physical memory used; 770.7 MB of 2.5 GB virtual memory used

2014-02-27 14:45:20,532 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 2 cluster_timestamp: 1393529557445 } attemptId: 1 } id: 1 } state: C_RUNNING diagnostics: "" exit_status: -1000


Make sure you have defined "yarn.resourcemanager.scheduler.address", "yarn.resourcemanager.resource-tracker.address", "yarn.resourcemanager.admin.address" and "yarn.resourcemanager.webapp.address" in the yarn-site.xml file on all nodes.
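A minimal sketch of those four properties (the hostname `resourcemanager-host` is an illustrative assumption; the ports shown are the Hadoop 2.x defaults). The same yarn-site.xml should be distributed to every node in the cluster:

```xml
<!-- yarn-site.xml (hostname is a placeholder) -->
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>resourcemanager-host:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>resourcemanager-host:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>resourcemanager-host:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>resourcemanager-host:8088</value>
</property>
```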




3. Java heap space error:
INFO mapreduce.Job: Task Id : attempt_1393530502372_0001_m_000000_0, Status : FAILED
Error: Java heap space
14/02/27 14:49:26 INFO mapreduce.Job: Task Id : attempt_1393530502372_0001_m_000000_1, Status : FAILED
Error: Java heap space
14/02/27 14:49:33 INFO mapreduce.Job: Task Id : attempt_1393530502372_0001_m_000000_2, Status : FAILED
Error: Java heap space




Try increasing the "mapreduce.map.java.opts" and "mapreduce.reduce.java.opts" values in the mapred-site.xml file. Make sure "mapreduce.map.memory.mb" and "mapreduce.reduce.memory.mb" remain larger than the corresponding heap sizes, since each container must hold the JVM heap plus off-heap overhead.
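An illustrative sketch of that relationship (the sizes are placeholders, not values from the original post); a common rule of thumb is to set the heap to roughly 80% of the container size:

```xml
<!-- mapred-site.xml (illustrative values) -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>   <!-- container size for map tasks -->
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx820m</value>   <!-- JVM heap, ~80% of the container -->
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>   <!-- container size for reduce tasks -->
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1638m</value>
</property>
```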


4. Input path does not exist error:


ERROR security.UserGroupInformation: PriviledgedActionException as:lxu (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://hadoop1:8020/user/lxu/grep-temp-1989143918

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://hadoop1:8020/user/lxu/grep-temp-1989143918
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.apache.hadoop.examples.Grep.run(Grep.java:92)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.Grep.main(Grep.java:101)




You probably set "mapreduce.jobhistory.intermediate-done-dir" and "mapreduce.jobhistory.done-dir" to custom values in your mapred-site.xml file; this can confuse the MapReduce job. Check your mapred-site.xml: if you do have those two properties set, comment them out and restart your history server. It will then use the default paths:


drwxrwxrwx   - hdfs   hadoop          0 2014-02-27 16:16 /tmp/hadoop-yarn/staging/history/done
drwxrwxrwt   - mapred hadoop          0 2014-02-27 14:33 /tmp/hadoop-yarn/staging/history/done_intermediate
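If the two jobhistory properties are present, they can simply be commented out in mapred-site.xml (the custom paths shown are hypothetical), so the history server falls back to the defaults listed above:

```xml
<!-- Commented out so the history server uses the default paths -->
<!--
<property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/custom/history/done_intermediate</value>
</property>
<property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/custom/history/done</value>
</property>
-->
```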
