Setting Up Single-Node Hadoop 2.0

Add a local hosts entry for the machine name (plain localhost works too):
[root@centos7 ~]# tail -n 1 /etc/hosts
127.0.0.1 centos7
[root@centos7 ~]#
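
A quick sanity check that the name actually resolves (expected output shown, given the entry above):
[root@centos7 ~]# getent hosts centos7
127.0.0.1       centos7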




Switch to the regular user. Its home directory already holds the Hadoop tarball plus a java directory, which is the JDK home:
[root@centos7 ~]# su - cc
Last login: Tue Aug 19 18:13:25 HKT 2014 on pts/14
[cc@centos7 ~]$ ls
hadoop-2.3.0-cdh5.0.0.tar.gz java
[cc@centos7 ~]$ ls java
bin  COPYRIGHT  db  include  jre  lib  LICENSE  man  README.html  release  src.zip  THIRDPARTYLICENSEREADME-JAVAFX.txt  THIRDPARTYLICENSEREADME.txt
[cc@centos7 ~]$




Check the Java version:
[cc@centos7 ~]$ java/bin/java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
[cc@centos7 ~]$




Set up passwordless SSH login to the local machine:
[cc@centos7 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cc/.ssh/id_rsa): Created directory '/home/cc/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cc/.ssh/id_rsa.
Your public key has been saved in /home/cc/.ssh/id_rsa.pub.
The key fingerprint is:
c7:66:71:0c:8d:ed:ca:76:b1:a5:68:65:46:b3:64:6c cc@centos7
The key's randomart image is:
+--[ RSA 2048]----+
|          .=     |
|          .oE    |
|          .*oo   |
|         . oB .  |
|        S.=* =   |
|         +* +    |
|         o .     |
|                 |
|                 |
+-----------------+
[cc@centos7 ~]$ cp .ssh/id_rsa.pub .ssh/authorized_keys
[cc@centos7 ~]$ chmod 600 .ssh/authorized_keys
[cc@centos7 ~]$
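
A quick way to verify passwordless login works (the first connection asks the usual one-time host-key question, the same one start-yarn.sh runs into further down):
[cc@centos7 ~]$ ssh centos7 hostname
centos7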




Unpack the Hadoop tarball:
[cc@centos7 ~]$ tar xf hadoop-2.3.0-cdh5.0.0.tar.gz
[cc@centos7 ~]$
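
Optional: exporting HADOOP_HOME and putting bin/ and sbin/ on PATH shortens the long relative paths used below. This post keeps the full paths throughout, so treat this as a convenience sketch only:
[cc@centos7 ~]$ cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME=$HOME/hadoop-2.3.0-cdh5.0.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF
[cc@centos7 ~]$ source ~/.bashrc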




Set the environment variable in hadoop-env.sh; JAVA_HOME is the only thing that has to be set here for Hadoop to start:
[cc@centos7 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/cc/java
[cc@centos7 ~]$




Configure mapred-site.xml, then core-site.xml:
[cc@centos7 ~]$ cp hadoop-2.3.0-cdh5.0.0/etc/hadoop/mapred-site.xml.template hadoop-2.3.0-cdh5.0.0/etc/hadoop/mapred-site.xml
[cc@centos7 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>
[cc@centos7 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/core-site.xml
<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://centos7:8020</value>
        </property>
</configuration>
[cc@centos7 ~]$
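
One note on core-site.xml: fs.default.name still works but is deprecated in Hadoop 2.x in favor of fs.defaultFS, so an equivalent modern form would be:
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://centos7:8020</value>
        </property>
</configuration>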




Configure the HDFS parameters: the block replication count (on a single machine anything other than 1 causes errors), plus the NameNode and DataNode data directories. The directories do not need to exist beforehand:
[cc@centos7 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/hdfs-site.xml
<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/cc/dfs.data/namenode.data</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/cc/dfs.data/datanode.data</value>
        </property>
</configuration>
[cc@centos7 ~]$
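
The format step below will warn that these paths "should be specified as a URI". The warning is harmless, but it goes away if the directories are written in file:// form:
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:///home/cc/dfs.data/namenode.data</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:///home/cc/dfs.data/datanode.data</value>
        </property>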




Configure yarn-site.xml:
[cc@centos7 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/yarn-site.xml
<configuration>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>
[cc@centos7 ~]$




List the slave machine(s) in the slaves file:
[cc@centos7 ~]$ vim hadoop-2.3.0-cdh5.0.0/etc/hadoop/slaves
centos7
[cc@centos7 ~]$




Format (initialize) the NameNode:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hdfs namenode -format
14/08/19 18:30:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = centos7/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.3.0-cdh5.0.0
STARTUP_MSG:   classpath = /home/cc/hadoop-2.3.0-cdh5.0.0/etc/hadoop:... [long jar classpath truncated]
STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on 2014-03-28T04:29Z
STARTUP_MSG:   java = 1.7.0_55
************************************************************/
14/08/19 18:30:45 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/19 18:30:45 INFO namenode.NameNode: createNameNode [-format]
14/08/19 18:30:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/19 18:30:46 WARN common.Util: Path /home/cc/dfs.data/namenode.data should be specified as a URI in configuration files. Please update hdfs configuration.
14/08/19 18:30:46 WARN common.Util: Path /home/cc/dfs.data/namenode.data should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-1a176c80-a4e5-41ab-baf2-26de35d57ebf
14/08/19 18:30:46 INFO namenode.FSNamesystem: fsLock is fair:true
14/08/19 18:30:46 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/08/19 18:30:46 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/08/19 18:30:46 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/19 18:30:46 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/08/19 18:30:46 INFO util.GSet: Computing capacity for map BlocksMap
14/08/19 18:30:46 INFO util.GSet: VM type       = 64-bit
14/08/19 18:30:46 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
14/08/19 18:30:46 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/08/19 18:30:46 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/19 18:30:46 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/08/19 18:30:46 INFO blockmanagement.BlockManager: maxReplication             = 512
14/08/19 18:30:46 INFO blockmanagement.BlockManager: minReplication             = 1
14/08/19 18:30:46 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/08/19 18:30:46 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/08/19 18:30:46 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/19 18:30:46 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/08/19 18:30:46 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
14/08/19 18:30:46 INFO namenode.FSNamesystem: fsOwner             = cc (auth:SIMPLE)
14/08/19 18:30:46 INFO namenode.FSNamesystem: supergroup          = supergroup
14/08/19 18:30:46 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/19 18:30:46 INFO namenode.FSNamesystem: HA Enabled: false
14/08/19 18:30:46 INFO namenode.FSNamesystem: Append Enabled: true
14/08/19 18:30:46 INFO util.GSet: Computing capacity for map INodeMap
14/08/19 18:30:46 INFO util.GSet: VM type       = 64-bit
14/08/19 18:30:46 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
14/08/19 18:30:46 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/08/19 18:30:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/19 18:30:46 INFO util.GSet: Computing capacity for map cachedBlocks
14/08/19 18:30:46 INFO util.GSet: VM type       = 64-bit
14/08/19 18:30:46 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
14/08/19 18:30:46 INFO util.GSet: capacity      = 2^18 = 262144 entries
14/08/19 18:30:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/19 18:30:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/19 18:30:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/08/19 18:30:46 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/19 18:30:46 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/19 18:30:46 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/08/19 18:30:46 INFO util.GSet: VM type       = 64-bit
14/08/19 18:30:46 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
14/08/19 18:30:46 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/08/19 18:30:46 INFO namenode.AclConfigFlag: ACLs enabled? false
14/08/19 18:30:46 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1600560430-127.0.0.1-1408444246578
14/08/19 18:30:46 INFO common.Storage: Storage directory /home/cc/dfs.data/namenode.data has been successfully formatted.
14/08/19 18:30:46 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/08/19 18:30:46 INFO util.ExitUtil: Exiting with status 0
14/08/19 18:30:46 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at centos7/127.0.0.1
************************************************************/
[cc@centos7 ~]$




The NameNode data directory has been created and populated with a few files; the DataNode directory will be created automatically when the DataNode starts:
[cc@centos7 ~]$ ls dfs.data/namenode.data/current/
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION
[cc@centos7 ~]$
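
The VERSION file in the same directory records the clusterID and blockpoolID seen in the format output. Worth knowing: re-running namenode -format against an existing DataNode directory leaves mismatched clusterIDs and the DataNode will refuse to start, so wipe dfs.data first if you ever reformat:
[cc@centos7 ~]$ cat dfs.data/namenode.data/current/VERSION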




Start the NameNode:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-namenode-centos7.out
[cc@centos7 ~]$




The NameNode process shows up in jps:
[cc@centos7 ~]$ java/bin/jps
25140 Jps
25060 NameNode
[cc@centos7 ~]$




This is the HDFS NameNode log; if startup fails, the reason will be in here. Java stack traces are verbose, so expect to dig through a lot of noise:
[cc@centos7 ~]$ tail hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-namenode-centos7.log
2014-08-19 18:32:15,986 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 18 msec
2014-08-19 18:32:16,013 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2014-08-19 18:32:16,014 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-19 18:32:16,032 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: centos7/127.0.0.1:8020
2014-08-19 18:32:16,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2014-08-19 18:32:16,035 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2014-08-19 18:32:16,036 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning because of pending operations
2014-08-19 18:32:16,036 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2014-08-19 18:32:46,036 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-19 18:32:46,036 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
[cc@centos7 ~]$




Start the DataNode:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/hadoop-cc-datanode-centos7.out
[cc@centos7 ~]$




Both the NameNode and DataNode processes are now running:
[cc@centos7 ~]$ java/bin/jps
25303 Jps
25217 DataNode
25060 NameNode
[cc@centos7 ~]$
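
jps only proves the JVMs are alive; to confirm the DataNode has actually registered with the NameNode, ask HDFS for a report (it should list one live DataNode):
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hdfs dfsadmin -report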




Open the NameNode web UI, which shows some overview information. (screenshot)



The connected DataNodes. (screenshot)
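
In this version the NameNode web UI listens on port 50070 by default; without a browser handy, a quick curl confirms it is serving (200 expected):
[cc@centos7 ~]$ curl -s -o /dev/null -w '%{http_code}\n' http://centos7:50070/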



Test by creating a directory. The native-library warning can be safely ignored:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop fs -mkdir /home
14/08/19 18:36:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[cc@centos7 ~]$
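
Listing the HDFS root confirms the directory exists; the output will look roughly like:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop fs -ls /
Found 1 items
drwxr-xr-x   - cc supergroup          0 2014-08-19 18:36 /home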




Upload a file:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop fs -put /etc/services /home
14/08/19 18:37:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[cc@centos7 ~]$
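
The file can be listed and read straight back out of HDFS:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop fs -ls /home
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop fs -cat /home/services | head -n 3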




The uploaded file is visible in the web UI. (screenshot)

Start YARN. The script SSHes into the local machine (each slave) to start the NodeManager:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/yarn-cc-resourcemanager-centos7.out
The authenticity of host 'centos7 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 9e:45:b9:22:dc:c1:26:01:89:5e:01:31:2e:fc:d0:7f.
Are you sure you want to continue connecting (yes/no)? yes
centos7: Warning: Permanently added 'centos7' (ECDSA) to the list of known hosts.
centos7: starting nodemanager, logging to /home/cc/hadoop-2.3.0-cdh5.0.0/logs/yarn-cc-nodemanager-centos7.out
[cc@centos7 ~]$




Check the processes: ResourceManager and NodeManager have been started:
[cc@centos7 ~]$ java/bin/jps
26220 NodeManager
25931 ResourceManager
26400 Jps
25217 DataNode
25060 NameNode
[cc@centos7 ~]$




Open the YARN web UI. (screenshot)
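
The ResourceManager web UI listens on port 8088 by default (the job-tracking URL in the output below points there too). The registered NodeManagers can also be listed from the shell:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/yarn node -list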



Run a bundled example job as a test:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop jar hadoop-2.3.0-cdh5.0.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.0.0.jar pi 2 100
Number of Maps  = 2
Samples per Map = 100
14/08/19 18:46:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
14/08/19 18:46:32 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/08/19 18:46:33 INFO input.FileInputFormat: Total input paths to process : 2
14/08/19 18:46:34 INFO mapreduce.JobSubmitter: number of splits:2
14/08/19 18:46:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1408444938105_0001
14/08/19 18:46:34 INFO impl.YarnClientImpl: Submitted application application_1408444938105_0001
14/08/19 18:46:34 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1408444938105_0001/
14/08/19 18:46:34 INFO mapreduce.Job: Running job: job_1408444938105_0001
14/08/19 18:46:39 INFO mapreduce.Job: Job job_1408444938105_0001 running in uber mode : false
14/08/19 18:46:39 INFO mapreduce.Job:  map 0% reduce 0%
14/08/19 18:46:45 INFO mapreduce.Job:  map 50% reduce 0%
14/08/19 18:46:46 INFO mapreduce.Job:  map 100% reduce 0%
14/08/19 18:46:51 INFO mapreduce.Job:  map 100% reduce 100%
14/08/19 18:46:52 INFO mapreduce.Job: Job job_1408444938105_0001 completed successfully
14/08/19 18:46:52 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=50
                FILE: Number of bytes written=264720
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=520
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=11
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=5877
                Total time spent by all reduces in occupied slots (ms)=3032
                Total time spent by all map tasks (ms)=5877
                Total time spent by all reduce tasks (ms)=3032
                Total vcore-seconds taken by all map tasks=5877
                Total vcore-seconds taken by all reduce tasks=3032
                Total megabyte-seconds taken by all map tasks=6018048
                Total megabyte-seconds taken by all reduce tasks=3104768
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=36
                Map output materialized bytes=56
                Input split bytes=284
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=56
                Reduce input records=4
                Reduce output records=0
                Spilled Records=8
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=77
                CPU time spent (ms)=1320
                Physical memory (bytes) snapshot=655237120
                Virtual memory (bytes) snapshot=2693619712
                Total committed heap usage (bytes)=511705088
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=236
        File Output Format Counters
                Bytes Written=97
Job Finished in 19.43 seconds
Estimated value of Pi is 3.12000000000000000000
[cc@centos7 ~]$
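
As a second smoke test, the same examples jar includes wordcount, which can be pointed at the services file uploaded earlier (the output directory here is illustrative and must not already exist):
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop jar hadoop-2.3.0-cdh5.0.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.0.0.jar wordcount /home/services /home/services.wc
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/bin/hadoop fs -cat /home/services.wc/part-r-00000 | head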




Stop YARN:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
centos7: stopping nodemanager
centos7: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[cc@centos7 ~]$




Stop HDFS:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[cc@centos7 ~]$
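
Instead of stopping the daemons one by one, sbin also ships stop-dfs.sh (and a matching start-dfs.sh), which handle the NameNode and all DataNodes over SSH in one go, mirroring the start-yarn.sh/stop-yarn.sh pair used above:
[cc@centos7 ~]$ hadoop-2.3.0-cdh5.0.0/sbin/stop-dfs.sh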




Check the processes; everything is down:
[cc@centos7 ~]$ java/bin/jps
28135 Jps
[cc@centos7 ~]$
