Week 27 Assignment (Micro-Credential Program)

1. Give an overview of Docker container virtualization technology, then complete the following exercises:
(1) Build a CentOS-based httpd image whose document root is /web/htdocs, with a home page present, running as the apache user and exposing port 80;
(2) Going further, serve the page files from a volume on the host;
(3) Going further, have httpd support parsing PHP pages;
(4) Build a CentOS-based MariaDB image so that the containers can communicate with each other;
(5) Deploy WordPress on httpd.
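Before walking through the interactive build below, note that exercise (1) can also be expressed declaratively in a Dockerfile. The following is only a sketch under the stated requirements: it installs httpd from the CentOS repositories rather than building from source as below, and the index.html content is a placeholder.

```dockerfile
# Sketch for exercise (1): CentOS-based httpd, document root /web/htdocs,
# home page present, port 80 exposed. An assumption-laden outline, not a
# verified build.
FROM centos:latest
RUN yum -y install httpd && yum clean all && \
    mkdir -p /web/htdocs && \
    echo '<h1>It works!</h1>' > /web/htdocs/index.html && \
    # Point DocumentRoot (and its <Directory> block) at /web/htdocs.
    # httpd's stock config already runs its worker processes as the
    # apache user via the "User apache" directive.
    sed -i 's@/var/www/html@/web/htdocs@g' /etc/httpd/conf/httpd.conf
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```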
1) Build a CentOS Docker image with the Apache service.
Base images:
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
sshd-centos latest 64136bdc0cc8 22 hours ago 261.8 MB
centos latest 0f73ae75014f 5 weeks ago 172.3 MB
2) Create a new container from the sshd-centos image, mapping the container's SSH port 22 to port 10022 on the host:
docker run -p 10022:22 -d sshd-centos /usr/sbin/sshd -D
3) Check the container's status:
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 22 hours ago Up 12 seconds 0.0.0.0:10022->22/tcp trusting_morse
4) Log in to the container over SSH from the host:
ssh localhost -p 10022
5) If the ssh command is not found, install openssh-clients:
yum install -y openssh-clients
6) Download the Apache source tarball and build it from source
1. Install wget: yum install -y wget
2. Download the source tarball: cd /usr/local/src
wget http://apache.fayea.com/httpd/httpd-2.4.17.tar.gz

3. Extract the source tarball: tar -zxvf httpd-2.4.17.tar.gz
cd httpd-2.4.17
4. Install the gcc and make toolchain plus Apache's dependencies
Because the Docker image we pulled is a stripped-down one, it does not even ship gcc and make, so we have to install them ourselves, along with Apache's dependencies apr and pcre:
yum install -y gcc make apr-devel apr apr-util apr-util-devel pcre-devel
5. Build: ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most --enable-so
make
make install
6. Edit the Apache configuration file:
sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' /usr/local/apache2/conf/httpd.conf
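The sed substitution in step 6 can be dry-run on a scratch file first, which is a cheap way to confirm the pattern matches before touching the real httpd.conf (the file below is a throwaway temp file, not the actual config):

```shell
# Exercise the ServerName substitution on a scratch copy first.
conf=$(mktemp)
printf '#ServerName www.example.com:80\n' > "$conf"
sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' "$conf"
cat "$conf"    # now reads: ServerName localhost:80
rm -f "$conf"
```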
7. Start the Apache service: /usr/local/apache2/bin/httpd
8. Check that it started: ps aux
9. Write a script that starts both the sshd and Apache services
cd /usr/local/sbin
vi run.sh

------------------------------------------------------
#!/bin/bash

/usr/sbin/sshd &
/usr/local/apache2/bin/httpd -D FOREGROUND

Make the script executable: chmod 755 run.sh

10. Create an image with both the Apache and SSH services
   1) Find the current container's Container ID:
   [root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND               CREATED                   STATUS                      PORTS                   NAMES
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 23 hours ago Up 45 minutes 0.0.0.0:10022->22/tcp trusting_morse
2) Create a new image from the container's CONTAINER ID: docker commit 66b4ab8dbdeb apache:centos
3) View the newly created image:
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
apache centos 31668185b8f1 About a minute ago 433.4 MB
sshd-centos latest 64136bdc0cc8 23 hours ago 261.8 MB
centos latest 0f73ae75014f 5 weeks ago 172.3 MB
11. Create a container from the new image
Map the container's ports 22 and 80 to ports 2222 and 8000 on the host, respectively:
docker run -d -p 2222:22 -p 8000:80 apache:centos /usr/local/sbin/run.sh
View the resulting container:
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a9021c9b510 apache:centos "/usr/local/sbin/run 4 minutes ago Up 4 minutes 0.0.0.0:2222->22/tcp, 0.0.0.0:8000->80/tcp tender_payne
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 23 hours ago Up 57 minutes 0.0.0.0:10022->22/tcp trusting_morse
6c40d0d2d8be centos "/bin/bash" 23 hours ago Exited (137) 23 hours ago centos-ssh
12. Test the Apache service: [root@localhost ~]# curl localhost:8000
<html><body><h1>It works!</h1></body></html>
13. Test the SSH service
[root@localhost ~]#ssh localhost -p 2222
root@localhost's password:
Last login: Sat Nov 13 14:20:41 2017 from 172.17.42.1
[root@7a9021c9b510 ~]#
Test passed!
14. Map a host directory into the container
Map the host's /www directory to the container's /usr/local/apache2/htdocs directory.
1) Create the directory on the host and add a home page file:
mkdir /www
cd /www
vi index.html
The content is as follows:
<html><body><h1>It's test!</h1></body></html>
To distinguish it from the default home page of the earlier container on port 8000, I changed "It works!" to "It's test!".
2) Create a new container:
docker run -d -p 2223:22 -p 8001:80 -v /www:/usr/local/apache2/htdocs:ro apache:centos /usr/local/sbin/run.sh
This maps the container's ports 22 and 80 to ports 2223 and 8001 on the host, respectively.
The -v option maps the host's /www onto /usr/local/apache2/htdocs; the ro flag makes the mount read-only, for security and isolation.
3) View the resulting containers:
[root@localhost www]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd8335195b44 apache:centos "/usr/local/sbin/run 9 minutes ago Up 9 minutes 0.0.0.0:2223->22/tcp, 0.0.0.0:8001->80/tcp cranky_nobel
7a9021c9b510 apache:centos "/usr/local/sbin/run 21 minutes ago Up 21 minutes 0.0.0.0:2222->22/tcp, 0.0.0.0:8000->80/tcp tender_payne
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 23 hours ago Up About an hour 0.0.0.0:10022->22/tcp trusting_morse
6c40d0d2d8be centos "/bin/bash" 24 hours ago Exited (137) 23 hours ago centos-ssh
4) Test:
[root@localhost www]# curl localhost:8001
<html><body><h1>It's test!</h1></body></html>

[root@localhost www]# curl localhost:8000
<html><body><h1>It works!</h1></body></html>

Example 2:
Exporting and importing containers:
docker export
docker import

Saving and loading images:
docker save -o /PATH/TO/SOMEFILE.TAR NAME[:TAG]

docker load -i /PATH/FROM/SOMEFILE.TAR

(Note the difference: export/import flatten a container's filesystem into a single layer and drop image metadata, while save/load preserve an image's layers, tags, and history.)

Review:
Dockerfile instructions:
FROM,MAINTAINER
COPY,ADD
WORKDIR, ENV
USER
VOLUME
EXPOSE
RUN
CMD,ENTRYPOINT
ONBUILD

Dockerfile (2)
Example 2: httpd

FROM centos:latest
MAINTAINER MageEdu "<mage@magedu.com>"

RUN sed -i -e 's@^mirrorlist.*repo=os.*$@baseurl=http://mirrors.163.com/centos/$releasever/@g' -e '/^mirrorlist.*repo=updates/a enabled=0' -e '/^mirrorlist.*repo=extras/a enabled=0' /etc/yum.repos.d/CentOS-Base.repo && \
yum -y install httpd php php-mysql php-mbstring && \
yum clean all && \
echo -e '<?php\n\tphpinfo();\n?>' > /var/www/html/info.php

EXPOSE 80/tcp

CMD ["/usr/sbin/httpd","-f","/etc/httpd/conf/httpd.conf","-DFOREGROUND"]

2. Set up a Hadoop cluster.
1. First, download a Hadoop tarball from the official site. The version I used is hadoop-2.7.1.tar.gz. Because I installed the latest release, which differs greatly from earlier Hadoop versions, many of the tutorials online no longer apply; this is where most of the problems during installation came from. Download link: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz

2. Once downloaded (the tarball is fairly large, about 201 MB, so the download is slow; be patient), place it somewhere on the Linux machine. My system is CentOS release 6.5 (Final), and I put it at /usr/local/jiang/hadoop-2.7.1.tar.gz, then extracted it with: tar zxvf hadoop-2.7.1.tar.gz. (All of these operations are performed on the cluster's master machine, i.e. the Hadoop master.)

3. Configure the hosts file

Edit /etc/hosts to map host names to IP addresses. Every machine in the cluster needs this. Here logsrv03 is the master and the other two machines are the slaves:

[root@logsrv03 /]# vi /etc/hosts
172.17.6.142 logsrv02
172.17.6.149 logsrv04
172.17.6.148 logsrv03

4. Install the JDK (my machines already have it, so I did not need to install it again)
The JDK I use is jdk1.7.0_71. If you do not have one, download a JDK, extract it to some directory, configure the environment variables in /etc/profile, and then run java -version to verify the installation.
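This step can be sanity-checked with a small script. The helper below is hypothetical (not part of the original write-up), and /usr/local/jdk1.7.0_71 is simply the path this article configures later in hadoop-env.sh:

```shell
# Hypothetical sanity check: does a candidate JAVA_HOME contain an
# executable bin/java?
check_java_home() {
    [ -n "$1" ] && [ -x "$1/bin/java" ]
}

if check_java_home "${JAVA_HOME:-/usr/local/jdk1.7.0_71}"; then
    echo "JAVA_HOME looks usable"
else
    echo "JAVA_HOME is unset or has no executable bin/java" >&2
fi
```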

5. Configure passwordless SSH login

Passwordless login here is relative to the master: the master and slaves need to communicate with each other, and once this is configured, SSH logins between master and slaves will no longer prompt for a password.

If ssh is not installed on the system, install it first, then run:
[root@logsrv03 ~]# ssh-keygen -t rsa
This generates the private key id_rsa and the public key id_rsa.pub under ~/.ssh:

[root@logsrv03 /]# cd ~  
[root@logsrv03 ~]# cd .ssh  
[root@logsrv03 .ssh]# ll  
total 20  
-rw-------  1 root root 1185 Nov 10 14:41 authorized_keys  
-rw-------  1 root root 1675 Nov  2 15:57 id_rsa  
-rw-r--r--  1 root root  395 Nov  2 15:57 id_rsa.pub

Then copy this public key into the .ssh directory on each of the slaves, and append the public key (id_rsa.pub) to the authorized keys:

cat id_rsa.pub >> authorized_keys

Then fix the permissions:

[root@logsrv04 .ssh]# chmod 600 authorized_keys   
[root@logsrv04 .ssh]# chmod 700 -R .ssh
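The append-and-lock-down pattern from the last two steps can be rehearsed in a scratch directory before touching the real ~/.ssh; the key material below is a fake placeholder and the paths are temporary:

```shell
# Rehearse the authorized_keys setup in a scratch dir (placeholder key,
# not a real public key).
tmp=$(mktemp -d)
mkdir -p "$tmp/.ssh"
echo 'ssh-rsa AAAAFAKEKEY root@logsrv03' > "$tmp/id_rsa.pub"
cat "$tmp/id_rsa.pub" >> "$tmp/.ssh/authorized_keys"
chmod 600 "$tmp/.ssh/authorized_keys"   # key file: owner read/write only
chmod 700 "$tmp/.ssh"                   # directory: owner access only
stat -c '%a' "$tmp/.ssh/authorized_keys"   # prints 600 on GNU stat
rm -rf "$tmp"
```

sshd is strict about these modes: if authorized_keys or .ssh is group- or world-writable, it silently falls back to password authentication.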

Copy the generated public key to the .ssh directory on the slaves:

[root@logsrv03 .ssh]# scp -r id_rsa.pub root@logsrv02:~/.ssh/  
[root@logsrv03 .ssh]# scp -r id_rsa.pub root@logsrv04:~/.ssh/

Then restart the SSH service on every machine:

[root@logsrv03 .ssh]# service sshd restart  
[root@logsrv02 .ssh]# service sshd restart  
[root@logsrv04 .ssh]# service sshd restart

Then verify that passwordless login works, here from the master:

[root@logsrv03 .ssh]# ssh logsrv02  
[root@logsrv03 .ssh]# ssh logsrv04

If logging in to the slaves no longer prompts for a password, passwordless login is set up successfully.

6. Install Hadoop, and configure the Hadoop environment variables in /etc/profile (required on every machine):

export HADOOP_HOME=/usr/local/jiang/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin
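These profile lines can be checked in a throwaway shell. Appending $HADOOP_HOME/sbin as well is my own suggestion, not part of the original setup (start-dfs.sh and start-yarn.sh live there):

```shell
# Verify that the profile lines put the Hadoop tools on PATH.
export HADOOP_HOME=/usr/local/jiang/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
    *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
    *) echo "hadoop bin missing from PATH" >&2 ;;
esac
```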

7. Edit the configuration files:

(1) Edit hadoop-2.7.1/etc/hadoop/hadoop-env.sh

[root@logsrv03 /]# cd usr/local/jiang/hadoop-2.7.1
[root@logsrv03 hadoop-2.7.1]# cd etc/hadoop/
[root@logsrv03 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_71

(2) Edit hadoop-2.7.1/etc/hadoop/slaves

[root@logsrv03 hadoop]# vi slaves   
logsrv02  
logsrv04

(3) Edit hadoop-2.7.1/etc/hadoop/core-site.xml

<configuration>  
<property>  
                <name>fs.defaultFS</name>  
                <value>hdfs://logsrv03:8020</value>  
        </property>  
        <property>  
                <name>io.file.buffer.size</name>  
                <value>131072</value>  
        </property>  
        <property>  
                <name>hadoop.tmp.dir</name>  
                <value>file:/opt/hadoop/tmp</value>  
        </property>  
        <property>  
                <name>fs.hdfs.impl</name>  
                <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>  
                <description>The FileSystem for hdfs: uris.</description>  
        </property>  
        <property>  
                <name>fs.file.impl</name>  
                <value>org.apache.hadoop.fs.LocalFileSystem</value>  
                <description>The FileSystem for file: uris.</description>  
    </property>  
</configuration>

(4) Edit hadoop-2.7.1/etc/hadoop/hdfs-site.xml

<configuration>  
<property>  
                <name>dfs.namenode.name.dir</name>  
                <value>file:/opt/hadoop/dfs/name</value>  
        </property>  
        <property>  
                <name>dfs.datanode.data.dir</name>  
                <value>file:/opt/hadoop/dfs/data</value>  
        </property>  
        <property>  
                <name>dfs.replication</name>      
                <value>2</value>   
        </property>  
</configuration>

(5) Edit hadoop-2.7.1/etc/hadoop/yarn-site.xml

<configuration>  

<!-- Site specific YARN configuration properties -->  
<property>  
                <name>yarn.resourcemanager.address</name>  
                <value>logsrv03:8032</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.scheduler.address</name>  
                <value>logsrv03:8030</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.resource-tracker.address</name>  
                <value>logsrv03:8031</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.admin.address</name>  
                <value>logsrv03:8033</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.webapp.address</name>  
                <value>logsrv03:8088</value>  
        </property>  
        <property>  
                <name>yarn.nodemanager.aux-services</name>  
                <value>mapreduce_shuffle</value>  
        </property>  
        <property>  
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>  
        </property>  
</configuration>

(6) Edit hadoop-2.7.1/etc/hadoop/mapred-site.xml

<configuration>  
<property>  
                <name>mapreduce.framework.name</name>  
                <value>yarn</value>  
        </property>  
        <property>  
                <name>mapreduce.jobhistory.address</name>  
                <value>logsrv03:10020</value>  
        </property>  
        <property>  
                <name>mapreduce.jobhistory.webapp.address</name>  
                <value>logsrv03:19888</value>  
        </property>  
</configuration>

8. After finishing these configuration files, copy the entire hadoop-2.7.1 directory to each slave; it is best to keep the destination path identical to the master's:

[root@logsrv03 hadoop-2.7.1]# scp -r hadoop-2.7.1 root@logsrv02:/usr/local/jiang/  
[root@logsrv03 hadoop-2.7.1]# scp -r hadoop-2.7.1 root@logsrv04:/usr/local/jiang/

9. Everything is now configured, so start Hadoop. First, format HDFS:
[root@logsrv03 hadoop-2.7.1]# bin/hdfs namenode -format
If the output contains "successfully formatted", formatting succeeded.
10. Then start HDFS:
[root@logsrv03 hadoop-2.7.1]# sbin/start-dfs.sh
At this point you can check the processes that have started.
On the master, logsrv03:
[root@logsrv03 hadoop-2.7.1]# jps
29637 NameNode
29834 SecondaryNameNode

On the slaves logsrv02 and logsrv04:
[root@logsrv02 hadoop-2.7.1]# jps
10774 DataNode

[root@logsrv04 hadoop-2.7.1]# jps
20360 DataNode

11. Start YARN
[root@logsrv03 hadoop-2.7.1]# sbin/start-yarn.sh

At this point, the running processes are:
On the master, logsrv03:

[root@logsrv03 hadoop-2.7.1]# jps   
29637 NameNode  
29834 SecondaryNameNode  
30013 ResourceManager

On the slaves logsrv02 and logsrv04:

[root@logsrv02 hadoop-2.7.1]# jps
10774 DataNode
10880 NodeManager

[root@logsrv04 hadoop-2.7.1]# jps
20360 DataNode
20483 NodeManager

At this point, congratulations: the whole cluster is configured. You can view the Hadoop cluster overview at http://logsrv03:8088/cluster

You can browse HDFS at http://logsrv03:50070
