NSD ARCHITECTURE DAY06
- Case 1: Installation and deployment
- Case 2: Hadoop word count
- Case 3: Node management
- Case 4: NFS configuration
1 Case 1: Installation and Deployment
1.1 Problem
This case requires you to:
- Configure the mapred and yarn files
- Verify access to Hadoop
1.2 Solution
Building on the environment prepared on day05, add the ResourceManager role to the master host (nn01), and add the NodeManager role on node1, node2, and node3, as shown in Table-1:
Table-1
1.3 Steps
Follow the steps below to implement this case.
Step 1: Install and deploy Hadoop
1) Configure mapred-site (on nn01)
- [root@nn01 ~]# cd /usr/local/hadoop/etc/hadoop/
- [root@nn01 hadoop]# mv mapred-site.xml.template mapred-site.xml
- [root@nn01 hadoop]# vim mapred-site.xml
- <configuration>
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
- </configuration>
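After editing, it is worth sanity-checking that the property actually landed in the file. The sketch below extracts a property value with sed from a sample mapred-site.xml written to a temp file; it assumes the one-tag-per-line layout shown above (a real XML parser such as xmllint would be more robust, this is only a quick check):

```shell
# Write a sample mapred-site.xml to a temp file, then pull one property
# value out of it with sed (assumes <name> and <value> on separate lines).
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

get_prop() {   # get_prop <file> <property-name>
    sed -n "/<name>$2<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}" "$1"
}

val=$(get_prop "$conf" mapreduce.framework.name)
echo "mapreduce.framework.name = $val"
```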
2) Configure yarn-site (on nn01)
- [root@nn01 hadoop]# vim yarn-site.xml
- <configuration>
- <!-- Site specific YARN configuration properties -->
- <property>
- <name>yarn.resourcemanager.hostname</name>
- <value>nn01</value>
- </property>
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
- </configuration>
3) Sync the configuration (on nn01)
- [root@nn01 hadoop]# for i in {22..24}; do rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh' & done
- [1] 712
- [2] 713
- [3] 714
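The rsync jobs above are backgrounded with `&`, so the shell returns immediately; before starting the daemons you should wait for all of them and check that none failed. A minimal local sketch of that fan-out-and-wait pattern, with a stand-in function in place of the real rsync (the 192.168.1.2X addresses are this lab's):

```shell
# sync_one is a stand-in for the real command:
#   rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$1:/usr/local/hadoop/ -e ssh
sync_one() {
    echo "synced 192.168.1.$1"
}

pids=()
for i in 22 23 24; do
    sync_one "$i" &    # one background job per node, as in the loop above
    pids+=($!)
done

fail=0
for p in "${pids[@]}"; do
    wait "$p" || fail=1   # catch any node whose sync returned non-zero
done
echo "fail=$fail"
```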
4) Verify the configuration (on nn01)
- [root@nn01 hadoop]# cd /usr/local/hadoop
- [root@nn01 hadoop]# ./sbin/start-dfs.sh
- Starting namenodes on [nn01]
- nn01: namenode running as process 23408. Stop it first.
- node1: datanode running as process 22409. Stop it first.
- node2: datanode running as process 22367. Stop it first.
- node3: datanode running as process 22356. Stop it first.
- Starting secondary namenodes [nn01]
- nn01: secondarynamenode running as process 23591. Stop it first.
- [root@nn01 hadoop]# ./sbin/start-yarn.sh
- starting yarn daemons
- starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-nn01.out
- node2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-node2.out
- node3: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-node3.out
- node1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-node1.out
- [root@nn01 hadoop]# jps
- 23408 NameNode
- 1043 ResourceManager
- 1302 Jps
- 23591 SecondaryNameNode
- [root@nn01 hadoop]# ssh node1 jps
- 25777 Jps
- 22409 DataNode
- 25673 NodeManager
- [root@nn01 hadoop]# ssh node2 jps
- 25729 Jps
- 25625 NodeManager
- 22367 DataNode
- [root@nn01 hadoop]# ssh node3 jps
- 22356 DataNode
- 25620 NodeManager
- 25724 Jps
5) Access Hadoop via the web
- http:
- http:
- http:
- http:
- http:
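The web UIs referenced above live on the stock Hadoop 2.x ports (an assumption: this sketch only holds if the default ports were not overridden in the *-site.xml files). A small map of service to address, with a commented-out curl line for an actual reachability check:

```shell
# Default Hadoop 2.x web UI ports, assuming no overrides in *-site.xml.
declare -A ui=(
    [namenode]="nn01:50070"
    [secondarynamenode]="nn01:50090"
    [resourcemanager]="nn01:8088"
    [datanode]="node1:50075"
    [nodemanager]="node1:8042"
)
for svc in "${!ui[@]}"; do
    echo "$svc -> http://${ui[$svc]}/"
    # On the cluster: curl -sf "http://${ui[$svc]}/" >/dev/null && echo "  reachable"
done
```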
2 Case 2: Hadoop Word Count
2.1 Problem
This case requires you to:
- Create a directory in the cluster file system
- Upload the files to be analyzed into that directory
- Analyze the uploaded files
- Display the results
2.2 Steps
Follow the steps below to implement this case.
Step 1: Word count
- [root@nn01 hadoop]# ./bin/hadoop fs -ls /
- [root@nn01 hadoop]# ./bin/hadoop fs -mkdir /aaa
- [root@nn01 hadoop]# ./bin/hadoop fs -ls /
- Found 1 items
- drwxr-xr-x - root supergroup 0 2018-09-10 09:56 /aaa
- [root@nn01 hadoop]# ./bin/hadoop fs -touchz /fa
- [root@nn01 hadoop]# ./bin/hadoop fs -put *.txt /aaa
- [root@nn01 hadoop]# ./bin/hadoop fs -ls /aaa
- Found 3 items
- -rw-r--r-- 2 root supergroup 86424 2018-09-10 09:58 /aaa/LICENSE.txt
- -rw-r--r-- 2 root supergroup 14978 2018-09-10 09:58 /aaa/NOTICE.txt
- -rw-r--r-- 2 root supergroup 1366 2018-09-10 09:58 /aaa/README.txt
- [root@nn01 hadoop]# ./bin/hadoop fs -get /aaa
- [root@nn01 hadoop]# ./bin/hadoop jar \
- share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount /aaa /bbb
- [root@nn01 hadoop]# ./bin/hadoop fs -cat /bbb/*
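The wordcount job writes one "word<TAB>count" line per distinct word into the /bbb output directory. A local sketch of the same computation on a two-line sample, using plain shell tools, shows the expected shape of the result:

```shell
# Local equivalent of wordcount on a tiny sample: split on spaces,
# count duplicates, then print "word<TAB>count" like the reducer does.
wc_out=$(printf 'hello world\nhello hadoop\n' \
    | tr -s ' ' '\n' \
    | sort | uniq -c \
    | awk '{print $2 "\t" $1}')
printf '%s\n' "$wc_out"
# On the cluster, the reducer output is a file such as /bbb/part-r-00000;
# the most frequent words can be shown with:
#   ./bin/hadoop fs -cat /bbb/part-r-00000 | sort -k2 -nr | head
```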
3 Case 3: Node Management
3.1 Problem
This case requires you to:
- Add a new node to the cluster and sync the configuration to it
- Set the balancer bandwidth
- Decommission a node and migrate its data off
3.2 Solution
Prepare two additional hosts, node4 and nfsgw, to serve as the new node and the gateway; the detailed requirements are listed in Table-2:
Table-2
3.3 Steps
Follow the steps below to implement this case.
Step 1: Add a node
1) Add a new node, node4
- [root@hadoop5 ~]# echo node4 > /etc/hostname
- [root@hadoop5 ~]# hostname node4
- [root@node4 ~]# yum -y install rsync
- [root@node4 ~]# yum -y install java-1.8.0-openjdk-devel
- [root@node4 ~]# mkdir /var/hadoop
- [root@nn01 .ssh]# ssh-copy-id 192.168.1.25
- [root@nn01 .ssh]# vim /etc/hosts
- 192.168.1.21 nn01
- 192.168.1.22 node1
- 192.168.1.23 node2
- 192.168.1.24 node3
- 192.168.1.25 node4
- [root@nn01 .ssh]# scp /etc/hosts 192.168.1.25:/etc/
- [root@nn01 ~]# cd /usr/local/hadoop/
- [root@nn01 hadoop]# vim ./etc/hadoop/slaves
- node1
- node2
- node3
- node4
- [root@nn01 hadoop]# for i in {22..25}; do rsync -aSH --delete /usr/local/hadoop/ \
- 192.168.1.$i:/usr/local/hadoop/ -e 'ssh' & done
- [1] 1841
- [2] 1842
- [3] 1843
- [4] 1844
- [root@node4 hadoop]# ./sbin/hadoop-daemon.sh start datanode
2) Check the status
- [root@node4 hadoop]# jps
- 24439 Jps
- 24351 DataNode
3) Set the balancer bandwidth
- [root@node4 hadoop]# ./bin/hdfs dfsadmin -setBalancerBandwidth 60000000
- Balancer bandwidth is set to 60000000
- [root@node4 hadoop]# ./sbin/start-balancer.sh
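`dfsadmin -setBalancerBandwidth` takes its value in bytes per second, so 60000000 caps each datanode's balancing traffic at roughly 57 MiB/s. A quick conversion check:

```shell
# Convert the balancer bandwidth from bytes/second to MiB/second.
bytes=60000000
mib=$(( bytes / 1024 / 1024 ))
echo "${bytes} B/s is about ${mib} MiB/s"
```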
4) Remove a node
- [root@nn01 hadoop]# vim /usr/local/hadoop/etc/hadoop/slaves
- node1
- node2
- node3
- [root@nn01 hadoop]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
- <property>
- <name>dfs.hosts.exclude</name>
- <value>/usr/local/hadoop/etc/hadoop/exclude</value>
- </property>
- [root@nn01 hadoop]# vim /usr/local/hadoop/etc/hadoop/exclude
- node4
5) Export the data (decommission the node)
- [root@nn01 hadoop]# ./bin/hdfs dfsadmin -refreshNodes
- Refresh nodes successful
- [root@nn01 hadoop]# ./bin/hdfs dfsadmin -report
- Dead datanodes (1):
- Name: 192.168.1.25:50010 (node4)
- Hostname: node4
- Decommission Status : Decommissioned
- Configured Capacity: 17168314368 (15.99 GB)
- DFS Used: 12288 (12 KB)
- Non DFS Used: 1656664064 (1.54 GB)
- DFS Remaining: 15511638016 (14.45 GB)
- DFS Used%: 0.00%
- DFS Remaining%: 90.35%
- Configured Cache Capacity: 0 (0 B)
- Cache Used: 0 (0 B)
- Cache Remaining: 0 (0 B)
- Cache Used%: 100.00%
- Cache Remaining%: 0.00%
- Xceivers: 1
- Last contact: Mon Sep 10 10:59:58 CST 2018
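It is only safe to stop the datanode once its Decommission Status in the report reads "Decommissioned" (meaning all of its blocks have been re-replicated elsewhere). The sketch below extracts that status for one host from `hdfs dfsadmin -report`-style output; the sample report text is inlined here as an assumption so the parsing can be shown standalone:

```shell
# Sample of the relevant report lines (on the real cluster this comes
# from: ./bin/hdfs dfsadmin -report).
report=$(cat <<'EOF'
Name: 192.168.1.25:50010 (node4)
Hostname: node4
Decommission Status : Decommissioned
EOF
)

# Find the node4 entry, then print the word after "Decommission Status :".
status=$(printf '%s\n' "$report" \
    | awk '/^Hostname: node4$/{found=1} found && /Decommission Status/{print $NF; exit}')
echo "node4: $status"
# On the real cluster, poll until the status flips before stopping the daemon:
#   until ./bin/hdfs dfsadmin -report | grep -q 'Decommissioned'; do sleep 10; done
```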
- [root@node4 hadoop]# ./sbin/hadoop-daemon.sh stop datanode
- stopping datanode
- [root@node4 hadoop]# ./sbin/yarn-daemon.sh start nodemanager
- [root@node4 hadoop]# ./sbin/yarn-daemon.sh stop nodemanager
- stopping nodemanager
- [root@node4 hadoop]# ./bin/yarn node -list
- 18/09/10 11:04:50 INFO client.RMProxy: Connecting to ResourceManager at nn01/192.168.1.21:8032
- Total Nodes:4
- Node-Id Node-State Node-Http-Address Number-of-Running-Containers
- node3:34628 RUNNING node3:8042 0
- node2:36300 RUNNING node2:8042 0
- node4:42459 RUNNING node4:8042 0
- node1:39196 RUNNING node1:8042 0
4 Case 4: NFS Configuration
4.1 Problem
This case requires you to:
- Create a proxy user
- Bring up a new system with SELinux and firewalld disabled
- Configure the NFS gateway (NFSGW)
- Start the services
- Mount the NFS share and make it mount automatically at boot
4.2 Steps
Follow the steps below to implement this case.
Step 1: Basic preparation
1) Change the hostname and configure /etc/hosts (edit /etc/hosts on both nn01 and nfsgw)
- [root@localhost ~]# echo nfsgw > /etc/hostname
- [root@localhost ~]# hostname nfsgw
- [root@nn01 hadoop]# vim /etc/hosts
- 192.168.1.21 nn01
- 192.168.1.22 node1
- 192.168.1.23 node2
- 192.168.1.24 node3
- 192.168.1.25 node4
- 192.168.1.26 nfsgw
2) Create the proxy user (on both nn01 and nfsgw; nn01 shown as the example)
- [root@nn01 hadoop]# groupadd -g 200 nfs
- [root@nn01 hadoop]# useradd -u 200 -g nfs nfs
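The proxy user must have identical UID and GID on nn01 and nfsgw, since the gateway forwards requests under that identity. A sketch of the consistency check, with the expected `id` output inlined as an assumption so the parsing can run standalone:

```shell
# Expected `id nfs` output after the groupadd/useradd above (assumption:
# -u 200 -g nfs gives uid/gid 200 on both hosts).
expected="uid=200(nfs) gid=200(nfs) groups=200(nfs)"

# Pull the numeric UID out of an `id` string.
uid=$(printf '%s\n' "$expected" | sed 's/^uid=\([0-9]*\).*/\1/')
echo "nfs uid: $uid"
# On the real hosts, compare directly:
#   [ "$(id nfs)" = "$(ssh nfsgw id nfs)" ] || echo "UID/GID mismatch!"
```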
3) Configure core-site.xml
- [root@nn01 hadoop]# ./sbin/stop-all.sh
- This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
- Stopping namenodes on [nn01]
- nn01: stopping namenode
- node2: stopping datanode
- node4: no datanode to stop
- node3: stopping datanode
- node1: stopping datanode
- Stopping secondary namenodes [nn01]
- nn01: stopping secondarynamenode
- stopping yarn daemons
- stopping resourcemanager
- node2: stopping nodemanager
- node3: stopping nodemanager
- node4: no nodemanager to stop
- node1: stopping nodemanager
- ...
- [root@nn01 hadoop]# cd etc/hadoop
- [root@nn01 hadoop]# >exclude
- [root@nn01 hadoop]# vim core-site.xml
- <property>
- <name>hadoop.proxyuser.nfs.groups</name>
- <value>*</value>
- </property>
- <property>
- <name>hadoop.proxyuser.nfs.hosts</name>
- <value>*</value>
- </property>
4) Sync the configuration to node1, node2, and node3
- [root@nn01 hadoop]# for i in {22..24}; do rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh' & done
- [4] 2722
- [5] 2723
- [6] 2724
5) Start the cluster
- [root@nn01 hadoop]# /usr/local/hadoop/sbin/start-dfs.sh
6) Check the status
- [root@nn01 hadoop]# /usr/local/hadoop/bin/hdfs dfsadmin -report
Step 2: NFSGW configuration
1) Install java-1.8.0-openjdk-devel and rsync
- [root@nfsgw ~]# yum -y install java-1.8.0-openjdk-devel
- [root@nfsgw ~]# yum -y install rsync
- [root@nn01 hadoop]# rsync -avSH --delete \
- /usr/local/hadoop/ 192.168.1.26:/usr/local/hadoop/ -e 'ssh'
2) Create the data root directory /var/hadoop (on the NFSGW host)
- [root@nfsgw ~]# mkdir /var/hadoop
3) Create the dump directory and give user nfs ownership of it
- [root@nfsgw ~]# mkdir /var/nfstmp
- [root@nfsgw ~]# chown nfs:nfs /var/nfstmp
4) Grant user nfs write access to /usr/local/hadoop/logs (on the NFSGW host)
- [root@nfsgw ~]# setfacl -m u:nfs:rwx /usr/local/hadoop/logs
- [root@nfsgw ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
- <property>
- <name>nfs.exports.allowed.hosts</name>
- <value>* rw</value>
- </property>
- <property>
- <name>nfs.dump.dir</name>
- <value>/var/nfstmp</value>
- </property>
5) Verify that user nfs can create and delete files in both directories
- [root@nfsgw ~]# su - nfs
- [nfs@nfsgw ~]$ cd /var/nfstmp/
- [nfs@nfsgw nfstmp]$ touch 1
- [nfs@nfsgw nfstmp]$ ls
- 1
- [nfs@nfsgw nfstmp]$ rm -rf 1
- [nfs@nfsgw nfstmp]$ ls
- [nfs@nfsgw nfstmp]$ cd /usr/local/hadoop/logs/
- [nfs@nfsgw logs]$ touch 1
- [nfs@nfsgw logs]$ ls
- 1 hadoop-root-secondarynamenode-nn01.log yarn-root-resourcemanager-nn01.log
- hadoop-root-namenode-nn01.log hadoop-root-secondarynamenode-nn01.out yarn-root-resourcemanager-nn01.out
- hadoop-root-namenode-nn01.out hadoop-root-secondarynamenode-nn01.out.1
- hadoop-root-namenode-nn01.out.1 SecurityAuth-root.audit
- [nfs@nfsgw logs]$ rm -rf 1
- [nfs@nfsgw logs]$ ls
6) Start the services
- [root@nfsgw ~]# /usr/local/hadoop/sbin/hadoop-daemon.sh --script ./bin/hdfs start portmap
- starting portmap, logging to /usr/local/hadoop/logs/hadoop-root-portmap-nfsgw.out
- [root@nfsgw ~]# jps
- 23714 Jps
- 23670 Portmap
- [root@nfsgw ~]# su - nfs
- Last login: Mon Sep 10 12:31:58 CST 2018 on pts/0
- [nfs@nfsgw ~]$ cd /usr/local/hadoop/
- [nfs@nfsgw hadoop]$ ./sbin/hadoop-daemon.sh --script ./bin/hdfs start nfs3
- starting nfs3, logging to /usr/local/hadoop/logs/hadoop-nfs-nfs3-nfsgw.out
- [nfs@nfsgw hadoop]$ jps
- 1362 Jps
- 1309 Nfs3
- [root@nfsgw hadoop]# jps
- 1216 Portmap
- 1309 Nfs3
- 1374 Jps
7) Mount from a client (the node4 host can be used as the client)
- [root@node4 ~]# rm -rf /usr/local/hadoop
- [root@node4 ~]# yum -y install nfs-utils
- [root@node4 ~]# mount -t nfs -o \
- vers=3,proto=tcp,nolock,noatime,sync,noacl 192.168.1.26:/ /mnt/
- [root@node4 ~]# cd /mnt/
- [root@node4 mnt]# ls
- aaa bbb fa system tmp
- [root@node4 mnt]# touch a
- [root@node4 mnt]# ls
- a aaa bbb fa system tmp
- [root@node4 mnt]# rm -rf a
- [root@node4 mnt]# ls
- aaa bbb fa system tmp
8) Mount automatically at boot
- [root@node4 ~]# vim /etc/fstab
- 192.168.1.26:/ /mnt/ nfs vers=3,proto=tcp,nolock,noatime,sync,noacl,_netdev 0 0
- [root@node4 ~]# mount -a
- [root@node4 ~]# df -h
- 192.168.1.26:/ 64G 6.2G 58G 10% /mnt
- [root@node4 ~]# rpcinfo -p 192.168.1.26
- program vers proto port service
- 100005 3 udp 4242 mountd
- 100005 1 tcp 4242 mountd
- 100000 2 udp 111 portmapper
- 100000 2 tcp 111 portmapper
- 100005 3 tcp 4242 mountd
- 100005 2 tcp 4242 mountd
- 100003 3 tcp 2049 nfs
- 100005 2 udp 4242 mountd
- 100005 1 udp 4242 mountd