Linux/Python Learning Forum - Jingfeng Education

Setting Up a Multi-Node Hadoop 2.0 Cluster

Posted on 2015-3-18 11:21:30
The machine plan is as follows. ZooKeeper is not configured, so there is no automatic failover:
master1, Active NameNode
master2, Standby NameNode
slaver1, JournalNode, DataNode
slaver2, JournalNode, DataNode
slaver3, JournalNode, DataNode




Configure local hostname resolution. Once this step is done, the rest of my steps can simply be copied and pasted to build a Hadoop 2.0 cluster, provided a compiled Hadoop package and Java are ready.
[root@master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.206.101 master1
192.168.206.102 master2
192.168.206.103 slaver1
192.168.206.104 slaver2
192.168.206.105 slaver3
[root@master1 ~]#
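The five address lines above follow one pattern, so as a small sketch of my own (not part of the original post) a loop can generate them for pasting into /etc/hosts:

```shell
# Build the cluster's /etc/hosts entries from the machine plan above:
# five consecutive addresses in the 192.168.206 subnet.
base=192.168.206
n=101
for host in master1 master2 slaver1 slaver2 slaver3; do
    printf '%s.%s %s\n' "$base" "$n" "$host"
    n=$((n + 1))
done
```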




Stop the firewall on every node
[root@master1 ~]# for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh $i 'service iptables stop;chkconfig iptables off';done
iptables: Setting chains to policy ACCEPT: filter nat [  OK  ]
iptables: Flushing firewall rules: [  OK  ]
iptables: Unloading modules: [  OK  ]
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Flushing firewall rules: [  OK  ]
iptables: Unloading modules: [  OK  ]
iptables: Flushing firewall rules: [  OK  ]
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Unloading modules: [  OK  ]
iptables: Flushing firewall rules: [  OK  ]
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Unloading modules: [  OK  ]
iptables: Flushing firewall rules: [  OK  ]
iptables: Setting chains to policy ACCEPT: filter [  OK  ]
iptables: Unloading modules: [  OK  ]
[root@master1 ~]#




Put SELinux into permissive mode (setenforce 0 lasts only until reboot; edit /etc/selinux/config to make it permanent)
[root@master1 ~]# for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh $i 'setenforce 0';done
[root@master1 ~]#




Synchronize the clocks
[root@master1 ~]# for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh $i 'ntpdate 0.rhel.pool.ntp.org';done
20 Aug 11:43:54 ntpdate[4391]: adjust time server 202.112.31.197 offset 0.000049 sec
20 Aug 11:44:00 ntpdate[2875]: adjust time server 202.112.31.197 offset -0.261408 sec
20 Aug 11:44:05 ntpdate[2809]: adjust time server 202.112.31.197 offset -0.401874 sec
20 Aug 11:44:10 ntpdate[2857]: adjust time server 202.112.31.197 offset -0.276382 sec
20 Aug 11:44:15 ntpdate[2788]: adjust time server 202.112.29.82 offset 0.220171 sec
[root@master1 ~]#




Create the cc user on every node
[root@master1 ~]# for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh $i "useradd cc";done
[root@master1 ~]#




Set its password
[root@master1 ~]# for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh $i "echo 'cc' | passwd cc --stdin";done
Changing password for user cc.
passwd: all authentication tokens updated successfully.
Changing password for user cc.
passwd: all authentication tokens updated successfully.
Changing password for user cc.
passwd: all authentication tokens updated successfully.
Changing password for user cc.
passwd: all authentication tokens updated successfully.
Changing password for user cc.
passwd: all authentication tokens updated successfully.
[root@master1 ~]#




Set up passwordless SSH login as the cc user
[cc@master1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cc/.ssh/id_rsa):
Created directory '/home/cc/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cc/.ssh/id_rsa.
Your public key has been saved in /home/cc/.ssh/id_rsa.pub.
The key fingerprint is:
ce:11:df:74:07:76:b0:d7:af:f3:99:73:a7:f3:9e:d9 cc@master1
The key's randomart image is:
+--[ RSA 2048]----+
|              +..|
|             . +.|
|        .   . o +|
|         o o . o.|
|        S . .   .|
|       o .     . |
|        o     o  |
|              .+O|
|              .XE|
+-----------------+
[cc@master1 ~]$ for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh-copy-id $i;done
The authenticity of host 'master1 (192.168.206.101)' can't be established.
RSA key fingerprint is 23:27:20:48:3e:64:77:50:c3:d8:ad:31:2a:8d:9c:4f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master1,192.168.206.101' (RSA) to the list of known hosts.
cc@master1's password:
Now try logging into the machine, with "ssh 'master1'", and check in:


  .ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


The authenticity of host 'master2 (192.168.206.102)' can't be established.
RSA key fingerprint is 23:27:20:48:3e:64:77:50:c3:d8:ad:31:2a:8d:9c:4f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master2,192.168.206.102' (RSA) to the list of known hosts.
cc@master2's password:
Now try logging into the machine, with "ssh 'master2'", and check in:


  .ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


The authenticity of host 'slaver1 (192.168.206.103)' can't be established.
RSA key fingerprint is 23:27:20:48:3e:64:77:50:c3:d8:ad:31:2a:8d:9c:4f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slaver1,192.168.206.103' (RSA) to the list of known hosts.
cc@slaver1's password:
Now try logging into the machine, with "ssh 'slaver1'", and check in:


  .ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


The authenticity of host 'slaver2 (192.168.206.104)' can't be established.
RSA key fingerprint is 23:27:20:48:3e:64:77:50:c3:d8:ad:31:2a:8d:9c:4f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slaver2,192.168.206.104' (RSA) to the list of known hosts.
cc@slaver2's password:
Now try logging into the machine, with "ssh 'slaver2'", and check in:


  .ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


The authenticity of host 'slaver3 (192.168.206.105)' can't be established.
RSA key fingerprint is 23:27:20:48:3e:64:77:50:c3:d8:ad:31:2a:8d:9c:4f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slaver3,192.168.206.105' (RSA) to the list of known hosts.
cc@slaver3's password:
Now try logging into the machine, with "ssh 'slaver3'", and check in:


  .ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


[cc@master1 ~]$
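Before going further it is worth confirming that every hop really is passwordless; a quick check of my own (not in the original post) is to force non-interactive mode, which makes ssh fail instead of prompting. The commands are echoed so the sketch runs anywhere; drop the echo to execute them on the cluster:

```shell
# BatchMode=yes makes ssh error out rather than ask for a password,
# so any host where key auth is broken shows up immediately.
for i in master1 master2 slaver1 slaver2 slaver3; do
    echo ssh -o BatchMode=yes "$i" hostname
done
```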




Verify the pre-installed Java (at /opt/java on every node)
[cc@master1 ~]$ for i in `echo master1 master2 slaver1 slaver2 slaver3`;do ssh $i "/opt/java/bin/java -version";done
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
[cc@master1 ~]$




Extract the Hadoop package
[cc@master1 ~]$ ls
hadoop-2.3.0-cdh5.0.0.tar.gz
[cc@master1 ~]$ tar xf hadoop-2.3.0-cdh5.0.0.tar.gz
[cc@master1 ~]$ ls
hadoop-2.3.0-cdh5.0.0  hadoop-2.3.0-cdh5.0.0.tar.gz
[cc@master1 ~]$




Set the Hadoop environment variables
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/java
[cc@master1 ~]$
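The same edit can be made without opening vim. This is a sketch of my own that assumes the stock hadoop-env.sh derives JAVA_HOME from the environment (which is what the placeholder line below stands in for); it is demonstrated on a temporary file rather than the real hadoop-env.sh:

```shell
# Rewrite the JAVA_HOME line in place with sed instead of editing by
# hand. The temp file stands in for etc/hadoop/hadoop-env.sh.
f=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$f"
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/java|' "$f"
cat "$f"
rm -f "$f"
```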




Configure core-site.xml with the Active NameNode's hostname and port, here master1 (with no ZooKeeper and no automatic failover, the hostname and port are written in directly)
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master1:8020</value>
        </property>
</configuration>
[cc@master1 ~]$




Configure mapred-site.xml. The jobhistory settings are for browsing completed jobs and are optional.
[cc@master1 ~]$ cp hadoop-2.3.0-cdh5.0.0/etc/hadoop/mapred-site.xml.template hadoop-2.3.0-cdh5.0.0/etc/hadoop/mapred-site.xml
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master2:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master2:19888</value>
        </property>
</configuration>
[cc@master1 ~]$




Configure HDFS
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/hdfs-site.xml
<configuration>
        <property>
                <name>dfs.nameservices</name>
                <value>cc-cluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.cc-cluster</name>
                <value>NameNode1,NameNode2</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.cc-cluster.NameNode1</name>
                <value>master1:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.cc-cluster.NameNode2</name>
                <value>master2:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.cc-cluster.NameNode1</name>
                <value>master1:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.cc-cluster.NameNode2</name>
                <value>master2:50070</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/cc/dfs.data/namenode.data</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://slaver1:8485;slaver2:8485;slaver3:8485/cc-cluster-journal</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:///home/cc/dfs.data/datanode.data</value>
        </property>
        <property>
                <!-- No ZooKeeper and no automatic failover, so set to false -->
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.journalnode.edits.dir</name>
                <value>file:///home/cc/dfs.data/journal.data</value>
        </property>
</configuration>
[cc@master1 ~]$




Configure YARN
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/yarn-site.xml
<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>master1</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>${yarn.resourcemanager.hostname}:8032</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>${yarn.resourcemanager.hostname}:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>${yarn.resourcemanager.hostname}:8088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.https.address</name>
                <value>${yarn.resourcemanager.hostname}:8090</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>${yarn.resourcemanager.hostname}:8031</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>${yarn.resourcemanager.hostname}:8033</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        </property>
        <property>
                <name>yarn.scheduler.fair.allocation.file</name>
                <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
        </property>
        <property>
                <name>yarn.nodemanager.local-dirs</name>
                <value>/home/cc/yarn/local</value>
        </property>
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.nodemanager.remote-app-log-dir</name>
                <value>/tmp/logs</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>2048</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.cpu-vcores</name>
                <value>2</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>
[cc@master1 ~]$




Configure the slaves file
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/slaves
slaver1
slaver2
slaver3
[cc@master1 ~]$




Configure the fair scheduler. Here all resources go into a single queue, and only the root and cc users may submit jobs to it. You can add more queues to control resource allocation, weights, permissions, and so on.
[cc@master1 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/fairscheduler.xml
<?xml version="1.0"?>
<allocations>
        <queue name="infrastructure">
                <minResources>512 mb, 1 vcores </minResources>
                <maxResources>1024 mb, 2 vcores </maxResources>
                <maxRunningApps>2</maxRunningApps>
                <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
                <weight>1.0</weight>
                <aclSubmitApps>root,cc</aclSubmitApps>
        </queue>
</allocations>
[cc@master1 ~]$




Copy the Hadoop directory to every other machine
[cc@master1 ~]$ for i in `echo master2 slaver1 slaver2 slaver3`;do scp -r hadoop-2.3.0-cdh5.0.0 $i:;done
[cc@master1 ~]$
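scp -r gives no integrity feedback, so a spot-check of my own (not in the original post) is to compare one config file's checksum on every node; all five sums should match. Echoed here as a dry run; drop the echo to execute:

```shell
# Compare a key config file's md5 across the cluster to confirm the
# recursive copy landed intact everywhere.
for i in master1 master2 slaver1 slaver2 slaver3; do
    echo ssh "$i" md5sum hadoop-2.3.0-cdh5.0.0/etc/hadoop/hdfs-site.xml
done
```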




Change into the Hadoop directory on every machine:
[cc@master1 ~]$ cd hadoop-2.3.0-cdh5.0.0
[cc@master1 hadoop-2.3.0-cdh5.0.0]$

[cc@master2 ~]$ cd hadoop-2.3.0-cdh5.0.0
[cc@master2 hadoop-2.3.0-cdh5.0.0]$

[cc@slaver1 ~]$ cd hadoop-2.3.0-cdh5.0.0
[cc@slaver1 hadoop-2.3.0-cdh5.0.0]$

[cc@slaver2 ~]$ cd hadoop-2.3.0-cdh5.0.0
[cc@slaver2 hadoop-2.3.0-cdh5.0.0]$

[cc@slaver3 ~]$ cd hadoop-2.3.0-cdh5.0.0
[cc@slaver3 hadoop-2.3.0-cdh5.0.0]$




Start the journalnode service on the slaver machines:
[cc@slaver1 hadoop-2.3.0-cdh5.0.0]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-journalnode-slaver1.out
[cc@slaver1 hadoop-2.3.0-cdh5.0.0]$

[cc@slaver2 hadoop-2.3.0-cdh5.0.0]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-journalnode-slaver2.out
[cc@slaver2 hadoop-2.3.0-cdh5.0.0]$

[cc@slaver3 hadoop-2.3.0-cdh5.0.0]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-journalnode-slaver3.out
[cc@slaver3 hadoop-2.3.0-cdh5.0.0]$




Each slaver node should now have a JournalNode process running
[cc@slaver1 hadoop-2.3.0-cdh5.0.0]$ /opt/java/bin/jps
3068 JournalNode
3153 Jps
[cc@slaver1 hadoop-2.3.0-cdh5.0.0]$




Format HDFS on NameNode1 (that is, master1, per hdfs-site.xml)
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ bin/hdfs namenode -format
14/08/20 12:04:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master1/192.168.206.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.3.0-cdh5.0.0
STARTUP_MSG:   classpath = /home/cc/hadoop-2.3.0-cdh5.0.0/etc/hadoop:... (remainder of classpath trimmed)
STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on 2014-03-28T04:29Z
STARTUP_MSG:   java = 1.7.0_55
************************************************************/
14/08/20 12:04:02 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/20 12:04:02 INFO namenode.NameNode: createNameNode [-format]
14/08/20 12:04:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-a7a72730-502f-400d-a90f-0edf48efa993
14/08/20 12:04:05 INFO namenode.FSNamesystem: fsLock is fair:true
14/08/20 12:04:05 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/08/20 12:04:05 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/08/20 12:04:05 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/20 12:04:05 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/08/20 12:04:05 INFO util.GSet: Computing capacity for map BlocksMap
14/08/20 12:04:05 INFO util.GSet: VM type       = 64-bit
14/08/20 12:04:05 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/08/20 12:04:05 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/08/20 12:04:05 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/20 12:04:05 INFO blockmanagement.BlockManager: defaultReplication         = 3
14/08/20 12:04:05 INFO blockmanagement.BlockManager: maxReplication             = 512
14/08/20 12:04:05 INFO blockmanagement.BlockManager: minReplication             = 1
14/08/20 12:04:05 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/08/20 12:04:05 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/08/20 12:04:05 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/20 12:04:05 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/08/20 12:04:05 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
14/08/20 12:04:05 INFO namenode.FSNamesystem: fsOwner             = cc (auth:SIMPLE)
14/08/20 12:04:05 INFO namenode.FSNamesystem: supergroup          = supergroup
14/08/20 12:04:05 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/20 12:04:05 INFO namenode.FSNamesystem: Determined nameservice ID: cc-cluster
14/08/20 12:04:05 INFO namenode.FSNamesystem: HA Enabled: true
14/08/20 12:04:05 INFO namenode.FSNamesystem: Append Enabled: true
14/08/20 12:04:06 INFO util.GSet: Computing capacity for map INodeMap
14/08/20 12:04:06 INFO util.GSet: VM type       = 64-bit
14/08/20 12:04:06 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/08/20 12:04:06 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/08/20 12:04:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/20 12:04:06 INFO util.GSet: Computing capacity for map cachedBlocks
14/08/20 12:04:06 INFO util.GSet: VM type       = 64-bit
14/08/20 12:04:06 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/08/20 12:04:06 INFO util.GSet: capacity      = 2^18 = 262144 entries
14/08/20 12:04:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/20 12:04:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/20 12:04:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/08/20 12:04:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/20 12:04:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/20 12:04:06 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/08/20 12:04:06 INFO util.GSet: VM type       = 64-bit
14/08/20 12:04:06 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/08/20 12:04:06 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/08/20 12:04:06 INFO namenode.AclConfigFlag: ACLs enabled? false
14/08/20 12:04:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1309254799-192.168.206.101-1408507448987
14/08/20 12:04:09 INFO common.Storage: Storage directory /home/cc/dfs.data/namenode.data has been successfully formatted.
14/08/20 12:04:10 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/08/20 12:04:10 INFO util.ExitUtil: Exiting with status 0
14/08/20 12:04:10 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master1/192.168.206.101
************************************************************/
[cc@master1 hadoop-2.3.0-cdh5.0.0]$




Immediately start the namenode on master1
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-namenode-master1.out
[cc@master1 hadoop-2.3.0-cdh5.0.0]$




A NameNode process should now be running
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ /opt/java/bin/jps
4711 Jps
4639 NameNode
[cc@master1 hadoop-2.3.0-cdh5.0.0]$




On NameNode2 (that is, master2, per hdfs-site.xml), pull in the metadata from NameNode1
[cc@master2 hadoop-2.3.0-cdh5.0.0]$ bin/hdfs namenode -bootstrapStandby
14/08/20 12:11:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master2/192.168.206.102
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.3.0-cdh5.0.0
STARTUP_MSG:   classpath = /home/cc/hadoop-2.3.0-cdh5.0.0/etc/hadoop:... (remainder of classpath trimmed)
STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on 2014-03-28T04:29Z
STARTUP_MSG:   java = 1.7.0_55
************************************************************/
14/08/20 12:11:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/20 12:11:29 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
14/08/20 12:11:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
=====================================================
About to bootstrap Standby ID NameNode2 from:
           Nameservice ID: cc-cluster
        Other Namenode ID: NameNode1
  Other NN's HTTP address: http://master1:50070
  Other NN's IPC  address: master1/192.168.206.101:8020
             Namespace ID: 2017173258
            Block pool ID: BP-1309254799-192.168.206.101-1408507448987
               Cluster ID: CID-a7a72730-502f-400d-a90f-0edf48efa993
           Layout version: -55
=====================================================
14/08/20 12:11:33 INFO common.Storage: Storage directory /home/cc/dfs.data/namenode.data has been successfully formatted.
14/08/20 12:11:34 INFO namenode.TransferFsImage: Opening connection to http://master1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-55:2017173258:0:CID-a7a72730-502f-400d-a90f-0edf48efa993
14/08/20 12:11:34 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
14/08/20 12:11:35 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
14/08/20 12:11:35 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 349 bytes.
14/08/20 12:11:35 INFO util.ExitUtil: Exiting with status 0
14/08/20 12:11:35 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master2/192.168.206.102
************************************************************/
[cc@master2 hadoop-2.3.0-cdh5.0.0]$




Start the namenode process on master2
[cc@master2 hadoop-2.3.0-cdh5.0.0]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-namenode-master2.out
[cc@master2 hadoop-2.3.0-cdh5.0.0]$




master2 should now also show a NameNode process
[cc@master2 hadoop-2.3.0-cdh5.0.0]$ /opt/java/bin/jps
3135 NameNode
3206 Jps
[cc@master2 hadoop-2.3.0-cdh5.0.0]$




At this point both namenodes are in standby state and cannot serve requests.


Run the following on either namenode to switch NameNode1 to active:
[cc@master2 hadoop-2.3.0-cdh5.0.0]$ bin/hdfs haadmin -transitionToActive NameNode1
14/08/20 12:17:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[cc@master2 hadoop-2.3.0-cdh5.0.0]$ ▊
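To confirm the switch took effect, `hdfs haadmin -getServiceState` reports each NameNode's HA state. A minimal sketch, assuming the IDs `NameNode1`/`NameNode2` from this walkthrough; the helper function just encodes the check, since the haadmin calls need a live cluster:

```shell
# On the cluster, each command prints "active" or "standby":
#   bin/hdfs haadmin -getServiceState NameNode1   # expect: active
#   bin/hdfs haadmin -getServiceState NameNode2   # expect: standby
check_state() {   # check_state <reported-state>: succeed only for "active"
  [ "$1" = "active" ]
}
check_state "active" && echo "NameNode1 is active"
```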




The web UI now shows it as active, but the configured capacity is 0 B, because no DataNodes have registered yet.

Start all DataNodes from master1, which has passwordless SSH to the other machines (startup fails if a DataNode's hostname does not match its entry in the hosts file):
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ sbin/hadoop-daemons.sh start datanode
slaver3: starting datanode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-datanode-slaver3.out
slaver1: starting datanode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-datanode-slaver1.out
slaver2: starting datanode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-datanode-slaver2.out
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




Once the DataNodes are up, the web UI shows a few dozen GB of configured capacity; these are virtual machines, so it is small.
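The same check works from the command line: `hdfs dfsadmin -report` prints cluster capacity and one section per live DataNode. A sketch of parsing the capacity line; the sample report line below is made up for illustration, since the real command needs a running cluster:

```shell
# On the cluster: bin/hdfs dfsadmin -report | grep 'Configured Capacity'
# A report line looks like this (sample value, for illustration only):
report_line='Configured Capacity: 56486142720 (52.61 GB)'
# Split on runs of ':', ' ', '(' -- the byte count is the third field.
capacity=$(echo "$report_line" | awk -F'[: (]+' '{print $3}')
echo "capacity bytes: $capacity"
[ "$capacity" -gt 0 ] && echo "DataNodes registered"
```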

Test creating a directory:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ bin/hadoop fs -mkdir /cc
14/08/20 12:27:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




Upload a file:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ bin/hadoop fs -put /etc/services /cc
14/08/20 12:29:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊
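A stronger check than listing the file is a round trip: fetch it back out of HDFS and compare checksums. The cluster half is shown as comments since it needs a live HDFS; the checksum comparison is demonstrated on a temp file standing in for /etc/services:

```shell
# On the cluster:
#   bin/hadoop fs -get /cc/services /tmp/services.copy
#   md5sum /etc/services /tmp/services.copy   # the two sums should match
tmp=$(mktemp)
printf 'some file contents\n' > "$tmp"        # stand-in for /etc/services
before=$(md5sum "$tmp" | awk '{print $1}')
cp "$tmp" "$tmp.copy"                          # stand-in for the fs -get
after=$(md5sum "$tmp.copy" | awk '{print $1}')
[ "$before" = "$after" ] && echo "checksums match"
rm -f "$tmp" "$tmp.copy"
```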




The file now shows up in the web UI's file browser.

Start YARN:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/yarn-cc-resourcemanager-master1.out
slaver2: starting nodemanager, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/yarn-cc-nodemanager-slaver2.out
slaver1: starting nodemanager, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/yarn-cc-nodemanager-slaver1.out
slaver3: starting nodemanager, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/yarn-cc-nodemanager-slaver3.out
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




YARN's status can be viewed in the web UI, served by the ResourceManager on master1 (port 8088 by default).
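From the shell, `yarn node -list` shows the registered NodeManagers (slaver1..3 should appear as RUNNING). A guarded sketch so it degrades to a dry run off the cluster; the HADOOP_HOME default is the install path used throughout this guide:

```shell
# Use the Hadoop yarn binary explicitly to avoid any other `yarn` on PATH.
YARN=${HADOOP_HOME:-/home/cc/hadoop-2.3.0-cdh5.0.0}/bin/yarn
if [ -x "$YARN" ]; then
  msg=$("$YARN" node -list 2>&1)   # expect slaver1..3 in RUNNING state
else
  msg="yarn not found at $YARN (dry run); on the cluster, open http://master1:8088"
fi
echo "$msg"
```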



The processes on each node now look like this:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ for i in `echo master1 master2 slaver1 slaver2 slaver3`;do echo $i;ssh $i "/opt/java/bin/jps";done
master1
5720 ResourceManager
4639 NameNode
6041 Jps
master2
3135 NameNode
3879 Jps
slaver1
3572 NodeManager
3068 JournalNode
3323 DataNode
3720 Jps
slaver2
3219 DataNode
3464 NodeManager
3600 Jps
2978 JournalNode
slaver3
3553 Jps
3416 NodeManager
2924 JournalNode
3170 DataNode
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




Run a test job:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.0.0.jar pi 3 100
Number of Maps  = 3
Samples per Map = 100
14/08/20 13:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
14/08/20 13:21:37 INFO client.RMProxy: Connecting to ResourceManager at master1/192.168.206.101:8032
14/08/20 13:21:38 INFO input.FileInputFormat: Total input paths to process : 3
14/08/20 13:21:38 INFO mapreduce.JobSubmitter: number of splits:3
14/08/20 13:21:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1408511753479_0001
14/08/20 13:21:40 INFO impl.YarnClientImpl: Submitted application application_1408511753479_0001
14/08/20 13:21:40 INFO mapreduce.Job: The url to track the job: http://master1:8088/proxy/application_1408511753479_0001/
14/08/20 13:21:40 INFO mapreduce.Job: Running job: job_1408511753479_0001
14/08/20 13:21:54 INFO mapreduce.Job: Job job_1408511753479_0001 running in uber mode : false
14/08/20 13:21:54 INFO mapreduce.Job:  map 0% reduce 0%
14/08/20 13:22:10 INFO mapreduce.Job:  map 33% reduce 0%





The job is running, and progress also shows in the web UI.



A short while later the same run finishes and prints the result (continuing the output above):
[cc@master1 hadoop-2.3.0-cdh5.0.0]$
14/08/20 13:22:17 INFO mapreduce.Job:  map 100% reduce 0%
14/08/20 13:22:20 INFO mapreduce.Job:  map 100% reduce 100%
14/08/20 13:22:21 INFO mapreduce.Job: Job job_1408511753479_0001 completed successfully
14/08/20 13:22:21 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=72
                FILE: Number of bytes written=358741
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=780
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=15
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=3
                Launched reduce tasks=1
                Data-local map tasks=3
                Total time spent by all maps in occupied slots (ms)=55402
                Total time spent by all reduces in occupied slots (ms)=7339
                Total time spent by all map tasks (ms)=55402
                Total time spent by all reduce tasks (ms)=7339
                Total vcore-seconds taken by all map tasks=55402
                Total vcore-seconds taken by all reduce tasks=7339
                Total megabyte-seconds taken by all map tasks=56731648
                Total megabyte-seconds taken by all reduce tasks=7515136
        Map-Reduce Framework
                Map input records=3
                Map output records=6
                Map output bytes=54
                Map output materialized bytes=84
                Input split bytes=426
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=84
                Reduce input records=6
                Reduce output records=0
                Spilled Records=12
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=1109
                CPU time spent (ms)=4270
                Physical memory (bytes) snapshot=676921344
                Virtual memory (bytes) snapshot=3354505216
                Total committed heap usage (bytes)=367910912
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=354
        File Output Format Counters
                Bytes Written=97
Job Finished in 44.207 seconds
Estimated value of Pi is 3.16000000000000000000
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




Stop YARN:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
slaver3: stopping nodemanager
slaver2: stopping nodemanager
slaver1: stopping nodemanager
no proxyserver to stop
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




Stop DFS:
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ sbin/stop-dfs.sh
14/08/20 13:24:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [master2 master1]
master2: stopping namenode
master1: stopping namenode
slaver2: stopping datanode
slaver3: stopping datanode
slaver1: stopping datanode
Stopping journal nodes [slaver1 slaver2 slaver3]
slaver2: stopping journalnode
slaver3: stopping journalnode
slaver1: stopping journalnode
14/08/20 13:24:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[cc@master1 hadoop-2.3.0-cdh5.0.0]$ ▊




To start the cluster again, there is no need to re-run the format or re-sync the metadata; just bring the services back up.
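The restart sequence can be sketched as below. Paths and the `NameNode1` ID follow this walkthrough; the helper prints each step instead of running it when the binaries are absent, so the sketch is safe to dry-run anywhere:

```shell
HADOOP_HOME=${HADOOP_HOME:-/home/cc/hadoop-2.3.0-cdh5.0.0}
run_step() {   # execute a step if its binary exists, otherwise just print it
  if [ -x "$HADOOP_HOME/${1%% *}" ]; then
    ( cd "$HADOOP_HOME" && $1 )
  else
    echo "would run: $1"
  fi
}
run_step "sbin/start-dfs.sh"                                # journalnodes, namenodes, datanodes
run_step "bin/hdfs haadmin -transitionToActive NameNode1"   # both NNs come up standby
run_step "sbin/start-yarn.sh"                               # resourcemanager + nodemanagers
```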
