Concept:
Puppet is an open-source automated deployment tool based on Ruby.
It is testable and delivers a strongly consistent deployment experience.
Its rich set of resource definitions covers the full life cycle of system configuration.
A built-in, full-featured programming model handles custom tasks.
Resources are identified by name and wired together through a notification mechanism.
The Puppet server defines the resource types each client node needs. After a client joins the deployment
environment, it requests its resources from the server over HTTPS, identifying itself by hostname; the server
compiles a catalog and returns the result to the client, which uses that information to define and apply the
system resources, then reports the outcome back to the server.
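The resource identification and notification mechanism is easiest to see in a tiny manifest. A hypothetical sketch (the ntp package and ntpd service are illustrative only, not part of this deployment):

# every resource is identified as Type["title"] and can be referenced that way
package { "ntp":
  ensure => present,
}
file { "/etc/ntp.conf":
  ensure  => file,
  require => Package["ntp"],    # ordering: install the package first
  notify  => Service["ntpd"],   # a change here sends a refresh event to the service
}
service { "ntpd":
  ensure => running,
  enable => true,
}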
Plan:
172.16.43.200  master.king.com  (puppet server pre-installed)
172.16.43.1    slave1.king.com  (puppet agent pre-installed, nothing else)
Following the architecture from the earlier posts, this walkthrough demonstrates the automated installation of slave1.king.com; the installation and configuration of the remaining nodes is uploaded as an attachment.
What gets deployed on slave1.king.com:
-> puppet agent software (pre-installed by cobbler)
-> hosts file (automated by puppet)
-> bind software and configuration files (automated by puppet)
-> haproxy software and configuration files (automated by puppet)
-> keepalived software and configuration files
Local testing phase (on master.king.com)
Write the modules on the master node; after they pass testing, proceed to the master/agent configuration.

# install puppet
yum -y install puppet-2.7.23-1.el6.noarch.rpm
1. Create the module directory tree under /etc/puppet
mkdir -pv /etc/puppet/modules/{haproxy,hosts,keepalived,named}/{files,manifests,templates}
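For orientation, the layout this command produces looks like the tree below; files placed under a module's files/ directory are what the puppet:///modules/<module>/<name> source URLs used later resolve to.

/etc/puppet/modules/
├── haproxy/
├── hosts/
│   ├── files/        # static payloads, served as puppet:///modules/hosts/<file>
│   ├── manifests/    # init.pp holding the class definition
│   └── templates/    # ERB templates (created here but unused in this walkthrough)
├── keepalived/
└── named/            # each module has the same three subdirectories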
2. Edit /etc/puppet/modules/hosts/files/hosts (the hosts file to be distributed)
# To reproduce the architecture from the previous four posts, every node must be
# automated, so each configuration file and its corresponding software need to be installed and configured.
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.43.1     slave1.king.com
172.16.43.2     slave2.king.com
172.16.43.3     slave3.king.com
172.16.43.4     slave4.king.com
172.16.43.3     imgs1.king.com
172.16.43.3     imgs2.king.com
172.16.43.3     text1.king.com
172.16.43.3     text2.king.com
172.16.43.3     dynamic1.king.com
172.16.43.4     dynamic2.king.com
172.16.43.200   master.king.com
172.16.43.6     server.king.com
172.16.43.5     proxy.king.com
Edit /etc/puppet/modules/hosts/manifests/init.pp — this is the module definition for hosts
# hosts module init.pp
# defines a module named hosts
# hosts installs no software; it only copies the hosts file from the
# master to a path on the agent, so the only resource type declared is file
class hosts {
  file { "hosts":
    ensure => file,
    source => "puppet:///modules/hosts/hosts",
    path   => "/etc/hosts",
  }
}
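Step 1 also created templates/ directories that this walkthrough never uses. For reference, a module can render host-specific content through an ERB template instead of shipping a static file — a hypothetical variant of the hosts class, not part of the original setup (hosts.erb is an assumed template name):

class hosts {
  file { "hosts":
    ensure  => file,
    path    => "/etc/hosts",
    # renders /etc/puppet/modules/hosts/templates/hosts.erb on the master
    content => template("hosts/hosts.erb"),
  }
}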
3. The remaining modules follow the same pattern as (2), each with its configuration files plus a module definition file. First, named:
// named.conf
options {
        // listen-on port 53 { 127.0.0.1; };
        // listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        // memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { 172.16.0.0/16; };
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
        type hint;
        file "named.ca";
};
include "/etc/named.rfc1912.zones";
// named.rfc1912.zones:
zone "localhost.localdomain" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};
zone "localhost" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};
zone "1.0.0.127.in-addr.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};
zone "0.in-addr.arpa" IN {
        type master;
        file "named.empty";
        allow-update { none; };
};
zone "king.com" IN {
        type master;
        file "king.com.zone";
};
# king.com.zone
$TTL 600
@       IN SOA  dns.king.com. adminmail.king.com. (
                2014050401
                1H
                5M
                3D
                12H )
        IN NS   dns
dns     IN A    172.16.43.1
www     IN A    172.16.43.88
www     IN A    172.16.43.188
# named module init.pp
# defines a module named "named"; the DNS server's package name is bind,
# so the package resource installs bind from the yum repository
class named {
  package { "bind":
    ensure   => present,
    name     => "bind",
    provider => yum,
  }
  # named needs its zone files configured before it can run,
  # with owner and group adjusted
  file { "/etc/named.conf":
    ensure  => file,
    require => Package["bind"],
    source  => "puppet:///modules/named/named.conf",
    owner   => named,
    group   => named,
  }
  file { "/etc/named.rfc1912.zones":
    ensure  => file,
    require => Package["bind"],
    source  => "puppet:///modules/named/named.rfc1912.zones",
    owner   => named,
    group   => named,
  }
  file { "/var/named/king.com.zone":
    ensure  => file,
    require => Package["bind"],
    source  => "puppet:///modules/named/king.com.zone",
    owner   => named,
    group   => named,
  }
  # once the package has installed bind, start the service and enable it at boot
  service { "named":
    ensure  => true,
    require => Package["bind"],
    enable  => true,
  }
}
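One caveat with the class above: the service is started once but never told about later changes to its zone files, so an updated king.com.zone would sit unused until a manual restart. A hedged refinement (only the service resource shown; the same pattern applies to the haproxy class below) is to subscribe the service to its configuration files:

service { "named":
  ensure    => running,
  enable    => true,
  require   => Package["bind"],
  # restart named whenever any of its managed configuration files change
  subscribe => File["/etc/named.conf", "/etc/named.rfc1912.zones", "/var/named/king.com.zone"],
}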
4. haproxy configuration
# /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 30000

listen stats
    mode http
    bind 0.0.0.0:8080
    stats enable
    stats hide-version
    stats uri     /haproxyadmin?stats
    stats realm   Haproxy\ Statistics
    stats auth    admin:admin
    stats admin if TRUE

frontend http-in
    bind *:80
    mode http
    log global
    option httpclose
    option logasap
    option dontlognull
    capture request header Host len 20
    capture request header Referer len 60
    acl img_static path_beg -i /images /imgs
    acl img_static path_end -i .jpg .jpeg .gif .png
    acl text_static path_beg -i / /static /js /css
    acl text_static path_end -i .html .shtml .js .css
    use_backend img_servers if img_static
    use_backend text_servers if text_static
    default_backend dynamic_servers

backend img_servers
    balance roundrobin
    server imgsrv1 imgs1.king.com:6081 check maxconn 4000
    server imgsrv2 imgs2.king.com:6081 check maxconn 4000

backend text_servers
    balance roundrobin
    server textsrv1 text1.king.com:6081 check maxconn 4000
    server textsrv2 text2.king.com:6081 check maxconn 4000

backend dynamic_servers
    balance roundrobin
    server websrv1 dynamic1.king.com:80 check maxconn 1000
    server websrv2 dynamic2.king.com:80 check maxconn 1000
# haproxy module init.pp
class haproxy {
  package { "haproxy":
    ensure   => present,
    name     => "haproxy",
    provider => yum,
  }
  file { "/etc/haproxy/haproxy.cfg":
    ensure  => file,
    require => Package["haproxy"],
    source  => "puppet:///modules/haproxy/haproxy.cfg",
  }
  service { "haproxy":
    ensure  => true,
    require => Package["haproxy"],
    enable  => true,
  }
}
5. keepalived configuration
# /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepadmin@localhost
    smtp_connect_timeout 3
    smtp_server 127.0.0.1
    router_id LVS_DEVEL_KING
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER          # BACKUP on the peer router
    priority 100          # 99 on the BACKUP
    virtual_router_id 173
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass king1
    }
    track_interface {
        eth0
    }
    virtual_ipaddress {
        172.16.43.88/16 dev eth0
    }
    track_script {
        chk_haproxy
    }
}

vrrp_instance VI_2 {
    interface eth0
    state BACKUP          # MASTER on the peer router
    priority 99           # 100 on the MASTER
    virtual_router_id 174
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass king2
    }
    track_interface {
        eth0
    }
    virtual_ipaddress {
        172.16.43.188/16 dev eth0
    }
}
# keepalived module init.pp
class keepalived {
  package { "keepalived":
    ensure   => present,
    name     => "keepalived",
    provider => yum,
  }
  file { "/etc/keepalived/keepalived.conf":
    ensure  => file,
    require => Package["keepalived"],
    source  => "puppet:///modules/keepalived/keepalived.conf",
    notify  => Exec["reload"],
  }
  service { "keepalived":
    ensure  => true,
    require => Package["keepalived"],
    enable  => true,
  }
  # reload keepalived whenever the configuration file changes;
  # refreshonly keeps the exec from firing on every catalog run
  exec { "reload":
    command     => "/sbin/service keepalived reload",
    path        => "/bin:/sbin:/usr/bin:/usr/sbin",
    subscribe   => File["/etc/keepalived/keepalived.conf"],
    refreshonly => true,
  }
}
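Note that keepalived.conf references /etc/keepalived/chk_haproxy.sh, yet no resource above deploys that script. A minimal sketch of how the class could also manage it — the killall -0 liveness check is an assumption, since the original post never shows the script's contents:

file { "/etc/keepalived/chk_haproxy.sh":
  ensure  => file,
  mode    => "0755",
  require => Package["keepalived"],
  # exit 0 while an haproxy process exists; keepalived then adjusts this
  # node's VRRP priority by the vrrp_script weight based on the exit code
  content => "#!/bin/bash\nkillall -0 haproxy &> /dev/null\n",
}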
6. With everything defined, proceed to testing
# at this point none of the packages are installed
# use puppet apply -e on the master node to test the hosts module and compare /etc/hosts before and after,
# e.g. puppet apply --modulepath=/etc/puppet/modules -e 'include hosts'
# test the remaining modules in turn
# check the keepalived startup result: the VRRP addresses defined in the configuration file should now be present
# check that all service ports are listening
# with local tests clean, wipe the previous state of slave1.king.com and let puppet automatically deploy the software and configuration the slave1.king.com node needs
Master/agent testing
puppet master:
1. Install puppet-server, configure it, and start the service
yum -y install puppet-server-2.7.23-1.el6.noarch.rpm
puppet master --genconfig >> /etc/puppet/puppet.conf
service puppetmaster start
puppet agent:
1. Install the puppet client, configure it, and start the service
yum -y install puppet-2.7.23-1.el6.noarch.rpm
vim /etc/puppet/puppet.conf and add server=master.king.com to the [agent] section, as shown below
service puppet start
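The addition described in step 1 looks like this in /etc/puppet/puppet.conf on the agent:

[agent]
    # the puppet master this agent pulls its catalog from
    server = master.king.com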
# Once the master and agent services are running, the slave1 node sends its request; the master looks up in site.pp which modules slave1.king.com should receive
# /etc/puppet/manifests/site.pp
# this says slave1 will get the named, keepalived, haproxy, and hosts modules
# nodes, like classes in object-oriented programming, can also form
# inheritance relationships (see the sketch below)
node 'slave1.king.com' {
  include named
  include keepalived
  include haproxy
  include hosts
}
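The inheritance mentioned in the comment is ordinary Puppet 2.7 node syntax; a hypothetical sketch (the basenode and slave2 entries are illustrative, not part of this deployment):

node 'basenode' {
  include hosts
}
# slave2 inherits everything declared in basenode and adds haproxy
node 'slave2.king.com' inherits 'basenode' {
  include haproxy
}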
# Once the master and agent services are running, the agent uses the server setting in its configuration file to request its catalog; the master has to sign the agent's certificate request
# so that the transport is secured
Signing certificates:
master:
puppet cert list        # list all pending certificate requests from clients
puppet cert sign --all  # sign every request; the clients and the master can then interact
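As an alternative to signing by hand in a lab like this, the master can pre-authorize agents through /etc/puppet/autosign.conf; a convenience sketch (per-request review as above remains the safer habit in production):

# /etc/puppet/autosign.conf — automatically sign any agent in the king.com domain
*.king.com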
# After a short wait, the modules are deployed automatically on slave1.king.com; the run can of course also be triggered manually
# on slave1.king.com: puppet agent -t
# manually requests the compiled result from the server and applies it
# VRRP state after deployment: since only the slave1 node has been deployed so far, both VRRP IP addresses appear on it — see the keepalived configuration file
# service port status on slave1 after automatic deployment completes