[Spark] 00 - Install Hadoop & Spark

Hadoop Installation


Environment Setup - Single Node Cluster

1. JDK Setup

Ref: How to install hadoop 2.7.3 single node cluster on ubuntu 16.04

Ubuntu 18 + Hadoop 2.7.3 + Java 8

$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk
$ java -version

If the version is not right, you can switch it.

$ update-alternatives --config java
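
To confirm which JDK is active and find the install path that JAVA_HOME will later point to (on Ubuntu the openjdk-8 package normally lives under /usr/lib/jvm/java-8-openjdk-amd64):

$ readlink -f $(which java)
$ ls /usr/lib/jvm/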

 

2. SSH Setup

Now we are logged in as 'hduser'.
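
If the 'hduser' account and 'hadoop' group do not exist yet, they can be created and switched to first (a sketch following the usual tutorial convention; the names are assumptions):

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo      # optional: let hduser use sudo
$ su - hduser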

$ ssh-keygen -t rsa
NOTE: Leave the file name and passphrase blank (just press Enter).
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
$ ssh localhost

 

3. Install Hadoop

In essence, this just moves everything to a new location under /usr/local/.

$ wget http://www-us.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
$ tar xvzf hadoop-2.7.3.tar.gz
$ sudo mkdir -p /usr/local/hadoop
$ cd hadoop-2.7.3/
$ sudo mv * /usr/local/hadoop
$ sudo chown -R hduser:hadoop /usr/local/hadoop

 

4. Configure Hadoop

4.1 ~/.bashrc

4.2 hadoop-env.sh

4.3 core-site.xml

4.4 mapred-site.xml

4.5 hdfs-site.xml

4.6 yarn-site.xml

 

4.1 ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
#HADOOP VARIABLES END
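
After editing ~/.bashrc, reload it and confirm the Hadoop binaries are on the PATH:

$ source ~/.bashrc
$ hadoop version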

 

4.2 Set the Java environment

Set this in /usr/local/hadoop/etc/hadoop/hadoop-env.sh.

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
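
The same edit can be made non-interactively, for example with sed (a sketch assuming the file still contains its default "export JAVA_HOME" line):

$ sudo sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh
$ grep JAVA_HOME /usr/local/hadoop/etc/hadoop/hadoop-env.sh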

 

4.3 Configure core-site

Configure this in /usr/local/hadoop/etc/hadoop/core-site.xml.

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
 

Reference: https://blog.csdn.net/Mr_LeeHY/article/details/77049800

<configuration>
    <!-- NameNode address -->
    <property>
              <name>fs.defaultFS</name>
              <value>hdfs://master:9000</value>
    </property>
    <!-- Directory where files generated by Hadoop are stored -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///usr/hadoop/hadoop-2.6.0/tmp</value>
    </property>
    <!-- Interval (seconds) between checkpoints of the edit log -->
    <property>
        <name>fs.checkpoint.period</name>
        <value>3600</value>
    </property>
</configuration>

Therefore, create the corresponding directory to hold these files.

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp

 

4.4 Configure mapred-site

Configure this in /usr/local/hadoop/etc/hadoop/mapred-site.xml.

A template is provided in the same directory; copy it first.

$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

This tells Hadoop that MapReduce (MR) jobs will run on YARN.

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

 

4.5 Configure hdfs-site

Configure this in /usr/local/hadoop/etc/hadoop/hdfs-site.xml.

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>

Reference: https://blog.csdn.net/Mr_LeeHY/article/details/77049800

<configuration>
    <!-- Number of replicas HDFS keeps for each block -->
    <property>
            <name>dfs.replication</name>
            <value>2</value>
    </property>
    <!-- NameNode storage location in HDFS -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/hadoop-2.6.0/tmp/dfs/name</value>
    </property>
    <!-- DataNode storage location in HDFS -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/hadoop-2.6.0/tmp/dfs/data</value>
    </property>
</configuration>

 

4.6 Configure yarn-site

Configure this in /usr/local/hadoop/etc/hadoop/yarn-site.xml.

<configuration>
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
</configuration>

Reference: https://blog.csdn.net/Mr_LeeHY/article/details/77049800

<configuration>
    <!-- The NodeManager fetches data via the mapreduce_shuffle service -->
    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the YARN ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <!-- Enable YARN log aggregation (job logs) -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
</configuration>

 

5. Format the filesystem and start the daemons

Daemons here means the Hadoop daemons.

$ hadoop namenode -format

$  cd /usr/local/hadoop/sbin
$ start-all.sh

Check which daemons have been started.

hadoop@ThinkPad:~$ jps
8593 ResourceManager
8407 SecondaryNameNode
9096 Jps
8941 NodeManager
7967 NameNode

 

6. Hadoop test example

Make sure the DataNode has started; this is a single node cluster.

Delete the DataNode contents, restart, and then run the test again.

This involves a jar package, i.e. programming against the Java MapReduce API.

$ sudo rm -r /usr/local/hadoop_store/hdfs/datanode/current
$ hadoop namenode -format
$ start-all.sh
$ jps
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 5
hadoop@unsw-ThinkPad-T490:/usr/local/hadoop$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 5
Number of Maps  = 2
Samples per Map = 5
19/10/21 11:06:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
19/10/21 11:06:31 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/10/21 11:06:31 INFO input.FileInputFormat: Total input paths to process : 2
19/10/21 11:06:31 INFO mapreduce.JobSubmitter: number of splits:2
19/10/21 11:06:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1571615960327_0001
19/10/21 11:06:31 INFO impl.YarnClientImpl: Submitted application application_1571615960327_0001
19/10/21 11:06:31 INFO mapreduce.Job: The url to track the job: http://unsw-ThinkPad-T490:8088/proxy/application_1571615960327_0001/
19/10/21 11:06:31 INFO mapreduce.Job: Running job: job_1571615960327_0001
19/10/21 11:06:37 INFO mapreduce.Job: Job job_1571615960327_0001 running in uber mode : false
19/10/21 11:06:37 INFO mapreduce.Job:  map 0% reduce 0%
19/10/21 11:06:42 INFO mapreduce.Job:  map 100% reduce 0%
19/10/21 11:06:47 INFO mapreduce.Job:  map 100% reduce 100%
19/10/21 11:06:47 INFO mapreduce.Job: Job job_1571615960327_0001 completed successfully
19/10/21 11:06:47 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=50
        FILE: Number of bytes written=357033
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=534
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=11
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters 
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=4693
        Total time spent by all reduces in occupied slots (ms)=2032
        Total time spent by all map tasks (ms)=4693
        Total time spent by all reduce tasks (ms)=2032
        Total vcore-milliseconds taken by all map tasks=4693
        Total vcore-milliseconds taken by all reduce tasks=2032
        Total megabyte-milliseconds taken by all map tasks=4805632
        Total megabyte-milliseconds taken by all reduce tasks=2080768
    Map-Reduce Framework
        Map input records=2
        Map output records=4
        Map output bytes=36
        Map output materialized bytes=56
        Input split bytes=298
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=56
        Reduce input records=4
        Reduce output records=0
        Spilled Records=8
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=176
        CPU time spent (ms)=1870
        Physical memory (bytes) snapshot=717053952
        Virtual memory (bytes) snapshot=6000562176
        Total committed heap usage (bytes)=552075264
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=236
    File Output Format Counters 
        Bytes Written=97
Job Finished in 15.972 seconds
Estimated value of Pi is 3.60000000000000000000

 

  

 

Distributed Setup

1. Preface

Pseudo-distributed setup

Installation course: installation and configuration [Xiamen University course video]

Configuration manual: Hadoop installation tutorial - standalone/pseudo-distributed setup - Hadoop 2.6.0/Ubuntu 14.04 [Xiamen University course notes]

"Hands-on" environment setup: Master big data analysis! Spark 2.X + Python essentials hands-on course (free) [really just environment setup]

/* Skipped; we care more about the fully distributed setup */

 

Fully distributed setup

Local VM experiment: 1.3 Building a local big-data cluster with advanced VirtualBox features

Three separate cloud machines: the complete process of a fully distributed Hadoop installation

 

2. Virtual machines

(1) After setting up VirtualBox (Secure Boot needs to be disabled), configure the IP address; a fixed (static) address is best.

Goto: 1.3 Building a local big-data cluster with advanced VirtualBox features [server setup only, Hadoop not configured]

You only need to modify the part below, giving each slave a different IP address.

Configure it in /etc/network/interfaces:

# (ignore the commented-out content) Add the static Host-only IP settings. enp0s8 is the NIC name under the topology-based naming scheme (the old scheme was eth0, eth1).
# You can check with ```ls /sys/class/net``` whether the interface is indeed enp0s8.

auto enp0s8
iface enp0s8 inet static
address 192.168.56.106
netmask 255.255.255.0
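
After saving the file, bring the interface up again so the new address takes effect (a sketch; ifdown/ifup apply only when /etc/network/interfaces is in use, and newer Ubuntu releases that use netplan differ):

$ sudo ifdown enp0s8 && sudo ifup enp0s8
# or simply reboot
$ sudo reboot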

 

(2) Then install some tools:

# install network tools
sudo apt install net-tools
# check the local network
ifconfig

 

(3) Log in to the slave machine via SSH to verify.

$ ssh -p 22 hadoop@192.168.56.101
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
ECDSA key fingerprint is SHA256:IPf76acROSwMC7BQO3hBAThLZCovARuoty765MfTps0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.101' (ECDSA) to the list of known hosts.
hadoop@192.168.56.101's password:
hadoop@node2-VirtualBox:~$ hostname
node2-VirtualBox
hadoop@node2-VirtualBox:~$ sudo hostname worker1-VirtualBox
[sudo] password for hadoop:
hadoop@node2-VirtualBox:~$ hostname
worker1-VirtualBox

 

(4) To change the hostname permanently, note that two files need to be modified.

Edit /etc/hostname using the nano or vi text editor:
sudo nano /etc/hostname
Delete the old name and set the new name. Next, edit the /etc/hosts file:
sudo nano /etc/hosts
Replace every occurrence of the existing computer name with the new one. Reboot the system for the changes to take effect:
sudo reboot
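
On systemd-based Ubuntu releases the same change can also be made with hostnamectl (a sketch; worker1-VirtualBox is just the example name used above):

$ sudo hostnamectl set-hostname worker1-VirtualBox
# remember to update /etc/hosts as well
$ hostnamectl status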

 

(5) Shut down X Window

Ref: 2.2 Hadoop 3.1.0 fully distributed cluster configuration and deployment

Uninstalling X Window: Remove packages to transform Desktop to Server?

The memory ratio is roughly 140 MB : 900 MB.

If you only stop X Window instead of uninstalling it, little changes: it stays in memory, merely marked inactive.

Press Ctrl+Alt+F1 to get to the command line, then run:

sudo /etc/init.d/lightdm stop
sudo /etc/init.d/lightdm status

To restart the X server, run: sudo /etc/init.d/lightdm restart

 

(6) Passwordless SSH login to the slaves

The master must be able to log in to the workers without a password; the following is an example.

ssh-copy-id -i ~/.ssh/id_rsa.pub master
ssh-copy-id -i ~/.ssh/id_rsa.pub worker1
ssh-copy-id -i ~/.ssh/id_rsa.pub worker2
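
After copying the keys, verify that each worker can be reached without a password prompt:

$ ssh worker1 hostname
$ ssh worker2 hostname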

 

(7) Setting up the cluster with Docker

Xiamen University manual: Building a distributed Hadoop cluster with Docker

 

3. Distributed slave host configuration

Preliminary notes

You can edit the configuration files locally and then copy the Hadoop directory to the cluster servers (a sketch follows below).

The single-node-cluster approach turned out to be somewhat problematic and too fiddly to configure, so here everything is configured again from scratch.
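
One way to push the locally edited configuration to each node is a simple scp loop like the following (a sketch; the worker hostnames and the target path are assumptions for illustration):

for host in worker1 worker2; do
    scp -r /usr/local/hadoop/etc/hadoop/* hadoop@$host:/usr/local/hadoop/etc/hadoop/
done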

 

Ref: Part 1: How To install a 3-Node Hadoop Cluster on Ubuntu 16

Ref: How to Install and Set Up a 3-Node Hadoop Cluster [a solid, working guide]

hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ hdfs namenode -format
2019-10-24 16:10:48,131 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node-master/192.168.56.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.2
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.13.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-2.7.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang3-3.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.18.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-3.1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.j
ar:/usr/local/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-3.1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.13.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.13.0.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.1.2.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-3.1.2-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-kms-3.1.2.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.13.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/local/hadoop/share/
hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-compress-1.18.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-framework-2.13.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/curator-client-2.13.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/
share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.2-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.2-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.2-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.2-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn:/usr/local/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/local/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/local/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/usr
/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.1.2.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r 1019dde65bcf12e05ef48ac71e84550d589e5d9a; compiled by 'sunilg' on 2019-01-29T01:39Z
STARTUP_MSG:   java = 1.8.0_222
************************************************************/
2019-10-24 16:10:48,148 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-10-24 16:10:48,243 INFO namenode.NameNode: createNameNode [-format]
2019-10-24 16:10:48,354 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-10-24 16:10:48,607 INFO common.Util: Assuming 'file' scheme for path /usr/local/hadoop/data/nameNode in configuration.
2019-10-24 16:10:48,607 INFO common.Util: Assuming 'file' scheme for path /usr/local/hadoop/data/nameNode in configuration.
Formatting using clusterid: CID-cadb861e-e62d-42e6-b62b-f834bbf05bca
2019-10-24 16:10:48,637 INFO namenode.FSEditLog: Edit logging is async:true
2019-10-24 16:10:48,653 INFO namenode.FSNamesystem: KeyProvider: null
2019-10-24 16:10:48,654 INFO namenode.FSNamesystem: fsLock is fair: true
2019-10-24 16:10:48,657 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-10-24 16:10:48,666 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2019-10-24 16:10:48,666 INFO namenode.FSNamesystem: supergroup          = supergroup
2019-10-24 16:10:48,667 INFO namenode.FSNamesystem: isPermissionEnabled = true
2019-10-24 16:10:48,667 INFO namenode.FSNamesystem: HA Enabled: false
2019-10-24 16:10:48,708 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-10-24 16:10:48,717 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-10-24 16:10:48,718 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-10-24 16:10:48,721 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-10-24 16:10:48,721 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Oct 24 16:10:48
2019-10-24 16:10:48,723 INFO util.GSet: Computing capacity for map BlocksMap
2019-10-24 16:10:48,723 INFO util.GSet: VM type       = 64-bit
2019-10-24 16:10:48,725 INFO util.GSet: 2.0% max memory 443 MB = 8.9 MB
2019-10-24 16:10:48,725 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2019-10-24 16:10:48,731 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-10-24 16:10:48,736 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-10-24 16:10:48,737 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-10-24 16:10:48,737 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-10-24 16:10:48,737 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-10-24 16:10:48,737 INFO blockmanagement.BlockManager: defaultReplication         = 1
2019-10-24 16:10:48,737 INFO blockmanagement.BlockManager: maxReplication             = 512
2019-10-24 16:10:48,738 INFO blockmanagement.BlockManager: minReplication             = 1
2019-10-24 16:10:48,738 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2019-10-24 16:10:48,738 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2019-10-24 16:10:48,738 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2019-10-24 16:10:48,738 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2019-10-24 16:10:48,756 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2019-10-24 16:10:48,767 INFO util.GSet: Computing capacity for map INodeMap
2019-10-24 16:10:48,767 INFO util.GSet: VM type       = 64-bit
2019-10-24 16:10:48,768 INFO util.GSet: 1.0% max memory 443 MB = 4.4 MB
2019-10-24 16:10:48,768 INFO util.GSet: capacity      = 2^19 = 524288 entries
2019-10-24 16:10:48,768 INFO namenode.FSDirectory: ACLs enabled? false
2019-10-24 16:10:48,769 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-10-24 16:10:48,769 INFO namenode.FSDirectory: XAttrs enabled? true
2019-10-24 16:10:48,769 INFO namenode.NameNode: Caching file names occurring more than 10 times
2019-10-24 16:10:48,773 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-10-24 16:10:48,775 INFO snapshot.SnapshotManager: SkipList is disabled
2019-10-24 16:10:48,778 INFO util.GSet: Computing capacity for map cachedBlocks
2019-10-24 16:10:48,778 INFO util.GSet: VM type       = 64-bit
2019-10-24 16:10:48,779 INFO util.GSet: 0.25% max memory 443 MB = 1.1 MB
2019-10-24 16:10:48,779 INFO util.GSet: capacity      = 2^17 = 131072 entries
2019-10-24 16:10:48,784 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-10-24 16:10:48,784 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-10-24 16:10:48,784 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-10-24 16:10:48,787 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2019-10-24 16:10:48,787 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-10-24 16:10:48,789 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2019-10-24 16:10:48,789 INFO util.GSet: VM type       = 64-bit
2019-10-24 16:10:48,789 INFO util.GSet: 0.029999999329447746% max memory 443 MB = 136.1 KB
2019-10-24 16:10:48,789 INFO util.GSet: capacity      = 2^14 = 16384 entries
2019-10-24 16:10:48,814 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1068893594-192.168.56.2-1571893848809
2019-10-24 16:10:48,832 INFO common.Storage: Storage directory /usr/local/hadoop/data/nameNode has been successfully formatted.
2019-10-24 16:10:48,838 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/data/nameNode/current/fsimage.ckpt_0000000000000000000 using no compression
2019-10-24 16:10:48,910 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/data/nameNode/current/fsimage.ckpt_0000000000000000000 of size 393 bytes saved in 0 seconds .
2019-10-24 16:10:48,918 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-10-24 16:10:48,924 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node-master/192.168.56.2
************************************************************/
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ jps
2128 Jps
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ start-dfs.sh
Starting namenodes on [node-master]
Starting datanodes
Starting secondary namenodes [node-master]
2019-10-24 16:11:40,842 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ jps
2739 Jps
2342 NameNode
2617 SecondaryNameNode
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ stop-dfs.sh
Stopping namenodes on [node-master]
Stopping datanodes
Stopping secondary namenodes [node-master]
2019-10-24 16:12:18,740 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ hdfs dfsadmin -report
2019-10-24 16:12:41,062 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
report: Call From node-master/192.168.56.2 to node-master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ start-dfs.sh
Starting namenodes on [node-master]
Starting datanodes
Starting secondary namenodes [node-master]
2019-10-24 16:13:16,921 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ hdfs dfsadmin -report
2019-10-24 16:13:21,162 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 20014161920 (18.64 GB)
Present Capacity: 2642968576 (2.46 GB)
DFS Remaining: 2642919424 (2.46 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Replicated Blocks:
	Under replicated blocks: 0
	Blocks with corrupt replicas: 0
	Missing blocks: 0
	Missing blocks (with replication factor 1): 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0
Erasure Coded Block Groups: 
	Low redundancy block groups: 0
	Block groups with corrupt internal blocks: 0
	Missing block groups: 0
	Low redundancy blocks with highest priority to recover: 0
	Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.56.101:9866 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 10007080960 (9.32 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 7705030656 (7.18 GB)
DFS Remaining: 1773494272 (1.65 GB)
DFS Used%: 0.00%
DFS Remaining%: 17.72%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Oct 24 16:13:18 AEDT 2019
Last Block Report: Thu Oct 24 16:13:12 AEDT 2019
Num of Blocks: 0


Name: 192.168.56.102:9866 (node2)
Hostname: node2
Decommission Status : Normal
Configured Capacity: 10007080960 (9.32 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 8609099776 (8.02 GB)
DFS Remaining: 869425152 (829.15 MB)
DFS Used%: 0.00%
DFS Remaining%: 8.69%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Oct 24 16:13:18 AEDT 2019
Last Block Report: Thu Oct 24 16:13:12 AEDT 2019
Num of Blocks: 0


hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ jps
3795 SecondaryNameNode
3983 Jps
3519 NameNode
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -mkdir -p /user/hadoop
2019-10-24 16:16:17,898 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ hdfs dfs -mkdir books
2019-10-24 16:16:24,789 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ 
hadoop@node-master:/usr/local/hadoop/etc/hadoop$ cd /home/hadoop
hadoop@node-master:~$ wget -O alice.txt https://www.gutenberg.org/files/11/11-0.txt
--2019-10-24 16:16:42--  https://www.gutenberg.org/files/11/11-0.txt
Resolving www.gutenberg.org (www.gutenberg.org)... 152.19.134.47, 2610:28:3090:3000:0:bad:cafe:47
Connecting to www.gutenberg.org (www.gutenberg.org)|152.19.134.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 173595 (170K) [text/plain]
Saving to: ‘alice.txt’

alice.txt                                          100%[=============================================================================================================>] 169.53K  51.4KB/s    in 3.3s    

2019-10-24 16:16:51 (51.4 KB/s) - ‘alice.txt’ saved [173595/173595]

hadoop@node-master:~$ 
hadoop@node-master:~$ 
hadoop@node-master:~$ wget -O holmes.txt https://www.gutenberg.org/files/1661/1661-0.txt
--2019-10-24 16:16:56--  https://www.gutenberg.org/files/1661/1661-0.txt
Resolving www.gutenberg.org (www.gutenberg.org)... 152.19.134.47, 2610:28:3090:3000:0:bad:cafe:47
Connecting to www.gutenberg.org (www.gutenberg.org)|152.19.134.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 607788 (594K) [text/plain]
Saving to: ‘holmes.txt’

holmes.txt                                         100%[=============================================================================================================>] 593.54K   138KB/s    in 4.3s    

2019-10-24 16:17:03 (138 KB/s) - ‘holmes.txt’ saved [607788/607788]

hadoop@node-master:~$ 
hadoop@node-master:~$ 
hadoop@node-master:~$ wget -O frankenstein.txt https://www.gutenberg.org/files/84/84-0.txt
--2019-10-24 16:17:07--  https://www.gutenberg.org/files/84/84-0.txt
Resolving www.gutenberg.org (www.gutenberg.org)... 152.19.134.47, 2610:28:3090:3000:0:bad:cafe:47
Connecting to www.gutenberg.org (www.gutenberg.org)|152.19.134.47|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 450783 (440K) [text/plain]
Saving to: ‘frankenstein.txt’

frankenstein.txt                                   100%[=============================================================================================================>] 440.22K   124KB/s    in 3.6s    

2019-10-24 16:17:14 (124 KB/s) - ‘frankenstein.txt’ saved [450783/450783]

hadoop@node-master:~$ 
hadoop@node-master:~$ 
hadoop@node-master:~$ hdfs dfs -put alice.txt holmes.txt frankenstein.txt books
2019-10-24 16:17:21,244 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@node-master:~$ 
hadoop@node-master:~$ 
hadoop@node-master:~$ hdfs dfs -ls books
2019-10-24 16:17:29,413 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r--   1 hadoop supergroup     173595 2019-10-24 16:17 books/alice.txt
-rw-r--r--   1 hadoop supergroup     450783 2019-10-24 16:17 books/frankenstein.txt
-rw-r--r--   1 hadoop supergroup     607788 2019-10-24 16:17 books/holmes.txt
hadoop@node-master:~$ 
hadoop@node-master:~$ 
hadoop@node-master:~$ hdfs dfs -get books/alice.txt
2019-10-24 16:17:35,328 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
get: `alice.txt': File exists
hadoop@node-master:~$

 

 

(1) Monitor your HDFS Cluster

The main step is configuring the slaves file; from version 3.0 onwards it has been renamed to workers.

If it fails to run, shut everything down, re-format, and start it again, then re-run the command below.
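
As a concrete sketch (the workers file location follows the layout used in this cluster; the hostnames are the ones shown in the report below):

$ cat /usr/local/hadoop/etc/hadoop/workers
worker1-VirtualBox
worker2-VirtualBox

# if startup keeps failing, stop everything, re-format, and start again
$ stop-all.sh
$ hdfs namenode -format
$ start-all.sh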

/usr/local/hadoop$ hdfs dfsadmin -report
19/10/22 17:24:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 20014161920 (18.64 GB)
Present Capacity: 7294656512 (6.79 GB)
DFS Remaining: 7294607360 (6.79 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.56.102:50010 (worker2-VirtualBox)
Hostname: worker2-VirtualBox
Decommission Status : Normal
Configured Capacity: 10007080960 (9.32 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6359736320 (5.92 GB)
DFS Remaining: 3647320064 (3.40 GB)
DFS Used%: 0.00%
DFS Remaining%: 36.45%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Oct 22 17:24:06 AEDT 2019

Name: 192.168.56.101:50010 (worker1-VirtualBox)
Hostname: worker1-VirtualBox
Decommission Status : Normal
Configured Capacity: 10007080960 (9.32 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6359769088 (5.92 GB)
DFS Remaining: 3647287296 (3.40 GB)
DFS Used%: 0.00%
DFS Remaining%: 36.45%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Oct 22 17:24:06 AEDT 2019

 

(2) Graphical monitoring

Goto: Yarn http://192.168.56.1:8088/
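
Besides the web UI, node and application status can also be checked from the command line while YARN is running:

$ yarn node -list
$ yarn application -list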

 

Goto: http://192.168.56.1:50070

 

 

 

 

Spark Installation


1. Installation method

Goto: Install, Configure, and Run Spark on Top of a Hadoop YARN Cluster

Goto: https://anaconda.org/conda-forge/pyspark

    • hadoop-3.1.2.tar.gz
    • scala-2.12.10.deb
    • spark-2.4.4-bin-without-hadoop.tgz
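
A rough install outline using these three artifacts (paths and version numbers as listed above; treat this as a sketch rather than a verified recipe):

$ sudo dpkg -i scala-2.12.10.deb
$ tar xzf spark-2.4.4-bin-without-hadoop.tgz
$ sudo mv spark-2.4.4-bin-without-hadoop /usr/local/spark
$ echo 'export SPARK_HOME=/usr/local/spark' >> ~/.bashrc
$ echo 'export PATH=$PATH:$SPARK_HOME/bin' >> ~/.bashrc
$ source ~/.bashrc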

 

2. Some possible issues

Ref: 6.2.2 Spark configuration and installation, experiment 2: cluster edition

Ref: Spark multinode environment setup on yarn

Ref: SBT Error: "Failed to construct terminal; falling back to unsupported…" [add the relevant setting to .bashrc]

Ref: Getting "cat: /release: No such file or directory" when running scala [use a newer Scala version, 2.12.2+]

Ref: Using Spark's "Hadoop Free" Build [the location of the installed Hadoop has to be specified; see the sketch below]
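
For the "Hadoop free" build, Spark must be told where the installed Hadoop lives. The documented way is to set SPARK_DIST_CLASSPATH in $SPARK_HOME/conf/spark-env.sh (the Hadoop path below assumes the /usr/local/hadoop install used above):

# $SPARK_HOME/conf/spark-env.sh
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)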

 

3. Testing

 

4. Remote Notebook

Goto: Accessing a Jupyter notebook on a remote server (learned-by-doing edition)
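
A minimal sketch for driving PySpark from a remotely accessible notebook (the port, the 0.0.0.0 binding, and the PYSPARK_DRIVER_PYTHON variables are assumptions; adjust and secure them for the real environment):

$ export PYSPARK_DRIVER_PYTHON=jupyter
$ export PYSPARK_DRIVER_PYTHON_OPTS="notebook --no-browser --ip=0.0.0.0 --port=8888"
$ pyspark --master yarn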

 

End.
