Learning SaltStack (Part 7)

1. SaltStack Overview

Salt is a new approach to infrastructure management: it is easy to deploy, can be up and running in minutes, scales well enough to manage tens of thousands of servers, and is fast, with second-level communication between servers.

Under the hood, Salt uses a dynamic communication bus, which makes it usable for orchestration, remote execution, configuration management, and more.

Comparison of configuration management tools: 
Puppet (written in Ruby; rarely used nowadays) 
Ansible (written in Python; lightweight and agentless, but SSH is slow in large environments and execution is serial) 
SaltStack (written in Python; remote execution, configuration management, event-driven infrastructure; salt-cloud can manage private and public clouds)

Official documentation: https://docs.saltstack.com/en/getstarted/ 
Official yum repository: repo.saltstack.com (you can mirror it into a custom yum repo with cobbler) 
Official installation repo: http://repo.saltstack.com/2016.11.html#rhel 
SaltStack components: 
SaltMaster 
SaltMinion 
Execution Modules

Environment:

Hostname                   IP address       Role           OS
linux-node1.example.com    192.168.56.11    role: master   CentOS 7.4 x86_64
linux-node2.example.com    192.168.56.12    role: minion   CentOS 7.4 x86_64

2. Installing SaltStack

1. Install the specified yum repository

[root@linux-node1 ~]# yum install https://repo.saltstack.com/yum/redhat/salt-repo-2016.11-2.el7.noarch.rpm
[root@linux-node2 ~]# yum install https://repo.saltstack.com/yum/redhat/salt-repo-2016.11-2.el7.noarch.rpm 

2. Install salt-master and salt-minion

[root@linux-node1 ~]# yum install -y salt-master
[root@linux-node1 ~]# yum install -y salt-minion
[root@linux-node2 ~]# yum install -y salt-minion

3. Configure and start the minion

[root@linux-node1 ~]# systemctl start salt-master    # start salt-master
[root@linux-node1 ~]# vim /etc/salt/minion           # configure salt-minion
master: 192.168.56.11      # may be a resolvable hostname; points at the master; note the space after the colon
id:                        # unique identifier; optional, defaults to the hostname
[root@linux-node1 ~]# systemctl start salt-minion    # start salt-minion
[root@linux-node2 salt]# vim minion
master: 192.168.56.11      # may be a resolvable hostname; points at the master; note the space after the colon
id:                        # unique identifier; optional, defaults to the hostname
[root@linux-node2 salt]# systemctl start salt-minion

The minion config has an `id` setting, which defaults to the hostname. If the id and the hostname do not match, the minion cannot communicate with the master. So what do you do when the hostname was changed or is simply wrong?
1. Stop salt-minion
2. On the master, delete the minion's key: salt-key -d <id>
3. On the minion, delete the pki directory
4. On the minion, delete the minion_id file
5. Start the minion again after making the change
# You must stop the minion and delete these files first, otherwise it falls back to the old configuration. (Learned the hard way.)

# Right after installation, minion_id turned out to be www.test123.com; change it to linux-node2.example.com
[root@linux-node2 salt]# cat minion_id 
www.test123.com
[root@linux-node2 salt]# systemctl stop salt-minion
[root@linux-node2 salt]# rm -rf pki
[root@linux-node2 salt]# rm -rf minion_id 
[root@linux-node2 salt]# systemctl start salt-minion
[root@linux-node2 salt]# cat minion_id 
linux-node2.example.com

4. Configuration files

[root@linux-node2 salt]# ll
total 124
-rw-r----- 1 root root  2624 Sep 15 23:19 cloud
drwxr-xr-x 2 root root     6 Sep 16 00:41 cloud.conf.d
drwxr-xr-x 2 root root     6 Sep 16 00:41 cloud.deploy.d
drwxr-xr-x 2 root root     6 Sep 16 00:41 cloud.maps.d
drwxr-xr-x 2 root root     6 Sep 16 00:41 cloud.profiles.d
drwxr-xr-x 2 root root     6 Sep 16 00:41 cloud.providers.d
-rw-r----- 1 root root 46034 Sep 15 23:19 master
drwxr-xr-x 2 root root     6 Sep 16 00:41 master.d
-rw-r----- 1 root root 35101 Jan 16 10:29 minion
drwxr-xr-x 2 root root    27 Jan 16 11:47 minion.d
-rw-r--r-- 1 root root    23 Jan 16 11:45 minion_id
drwxr-xr-x 3 root root    19 Jan 16 11:45 pki
-rw-r----- 1 root root 26984 Sep 15 23:19 proxy
drwxr-xr-x 2 root root     6 Sep 16 00:41 proxy.d
-rw-r----- 1 root root   344 Sep 15 23:19 roster

Notes:
(1) On its first start, salt-minion generates a key pair under /etc/salt/pki/minion
[root@linux-node2 salt]# ll /etc/salt/pki/minion/
total 12
-rw-r--r-- 1 root root  450 Jan 16 11:47 minion_master.pub
-r-------- 1 root root 1674 Jan 16 11:45 minion.pem
-rw-r--r-- 1 root root  450 Jan 16 11:45 minion.pub

(2) The minion's public key is also stored on the salt-master, under /etc/salt/pki/master/minions_pre.
[root@linux-node1 ~]# ll /etc/salt/pki/master/minions_pre/
linux-node1.example.com
linux-node2.example.com

5. Set up communication between salt-master and salt-minion

[root@linux-node1 salt]# salt-key
Accepted Keys:              accepted
Denied Keys:                denied
Unaccepted Keys:            pending acceptance
linux-node1.example.com
linux-node2.example.com
Rejected Keys:

There are three ways to accept keys:

[root@linux-node1 salt]# salt-key -A            # accept all pending keys
[root@linux-node1 salt]# salt-key -a <id>       # accept a specific id
[root@linux-node1 salt]# salt-key -a 'linux*'   # -a also supports wildcards
[root@linux-node1 master]# salt-key -a linux*
The following keys are going to be accepted:
Unaccepted Keys:
linux-node1.example.com
linux-node2.example.com
Proceed? [n/Y] Y
Key for minion linux-node1.example.com accepted.
Key for minion linux-node2.example.com accepted.

salt-key command options:
-L  list all keys
-d  delete the specified key (wildcards supported)
-D  delete all keys
-A  accept all keys
-a  accept the specified key

Files generated after acceptance:
pki/
├── master
│   ├── master.pem
│   ├── master.pub
│   ├── minions
│   │   ├── linux-node1.example.com
│   │   └── linux-node2.example.com
│   ├── minions_autosign
│   ├── minions_denied
│   ├── minions_pre
│   └── minions_rejected
└── minion
    ├── minion_master.pub  # the master's public key, sent to the minion after acceptance
    ├── minion.pem
    └── minion.pub

##############################################################

1. Remote Execution

The first command:
[root@linux-node1 master]# salt '*' test.ping
linux-node2.example.com:
    True
linux-node1.example.com:
    True

Explanation:
salt: the command
*: the target, matched here with a wildcard
test.ping: module.function
# This ping is not an ICMP ping: the master sends a packet to the minion, and the minion returns True when it receives it

[root@linux-node1 ~]# salt '*' cmd.run 'uptime'
linux-node1.example.com:
     11:51:47 up 21 days,  5:57,  2 users,  load average: 0.04, 0.03, 0.05
linux-node2.example.com:
     11:51:47 up 12 days,  6:26,  2 users,  load average: 0.00, 0.03, 0.05
[root@linux-node1 ~]# salt '*' cmd.run 'w'
linux-node1.example.com:
     11:52:11 up 21 days,  5:58,  2 users,  load average: 0.03, 0.02, 0.05
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
    root     pts/2    192.168.56.1     06Jan18  6:51   3.27s  3.27s -bash
    root     pts/3    192.168.56.1     06Jan18  3.00s  6:17   0.46s /usr/bin/python /usr/bin/salt * cmd.run w
linux-node2.example.com:
     11:52:11 up 12 days,  6:26,  2 users,  load average: 0.00, 0.03, 0.05
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
    root     pts/1    192.168.56.1     Mon10   21:59m  0.28s  0.28s -bash
    root     pts/3    192.168.56.1     06Jan18  6:59   4.82s  0.02s -bash
[root@linux-node1 ~]# salt '*' cmd.run 'df -h'
linux-node2.example.com:
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   18G   17G  1.1G  95% /
    devtmpfs                 905M     0  905M   0% /dev
    tmpfs                    916M   12K  916M   1% /dev/shm
    tmpfs                    916M   41M  876M   5% /run
    tmpfs                    916M     0  916M   0% /sys/fs/cgroup
    /dev/sda1                497M  171M  326M  35% /boot
    tmpfs                    184M     0  184M   0% /run/user/0
    /dev/loop0               4.1G  4.1G     0 100% /mnt
linux-node1.example.com:
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   18G   11G  7.2G  60% /
    devtmpfs                 905M     0  905M   0% /dev
    tmpfs                    916M   28K  916M   1% /dev/shm
    tmpfs                    916M   57M  860M   7% /run
    tmpfs                    916M     0  916M   0% /sys/fs/cgroup
    /dev/sda1                497M  171M  326M  35% /boot
    tmpfs                    184M     0  184M   0% /run/user/0

[root@linux-node1 ~]# netstat -tulnp|grep minion
The minion does not listen on any port; it connects out to the master. The master listens on ports 4505 and 4506.
[root@linux-node1 ~]# netstat -tulnp|grep python
tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      37039/python        
tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      37045/python       

# By default the master and minions communicate in parallel over ZeroMQ, a transport-level message queue
# that works like a publish/subscribe system: everyone subscribed to a topic receives every message
# published to it, the way all students enrolled in the same classroom hear the same lecture.
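The publish/subscribe idea can be sketched in plain Python with per-subscriber queues (a toy illustration of the pattern, not of how Salt actually drives ZeroMQ):

```python
import queue

class PubSubBus:
    """Toy publish/subscribe bus: one queue per subscriber."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, name):
        q = queue.Queue()
        self.subscribers[name] = q
        return q

    def publish(self, message):
        # Fan the message out to every subscriber, like the master
        # publishing a job on port 4505 to all connected minions.
        for q in self.subscribers.values():
            q.put(message)

bus = PubSubBus()
minion1 = bus.subscribe("linux-node1")
minion2 = bus.subscribe("linux-node2")
bus.publish({"fun": "test.ping"})
print(minion1.get())  # both minions receive the same job
print(minion2.get())
```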

[root@linux-node1 ~]# lsof -ni:4505
COMMAND     PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
salt-mast 37039 root   16u  IPv4 3394584      0t0  TCP *:4505 (LISTEN)
salt-mast 37039 root   18u  IPv4 3412804      0t0  TCP 192.168.56.11:4505->192.168.56.12:43126 (ESTABLISHED)
salt-mast 37039 root   19u  IPv4 3412811      0t0  TCP 192.168.56.11:4505->192.168.56.11:38262 (ESTABLISHED)
salt-mini 39623 root   27u  IPv4 3412810      0t0  TCP 192.168.56.11:38262->192.168.56.11:4505 (ESTABLISHED)

Looking at port 4505, we can see that each salt-minion connects from a random local port to the master's port 4505, which the master uses to publish commands to the minions for execution. Port 4506 receives the returned data; it is ZeroMQ's request/response channel.

You can observe Salt's parallel execution with the date command: the results come back simultaneously
[root@linux-node1 ~]# salt '*' cmd.run 'date'
linux-node2.example.com:
    Tue Jan 16 12:01:52 CST 2018
linux-node1.example.com:
    Tue Jan 16 12:01:52 CST 2018

2. Configuration Management

(1) SaltStack uses YAML as the format of its management files. A YAML sample:

YAML sample:
house:
  family:
    name: Doe
    parents:
      - John
      - Jane
    children:
      - Paul
      - Mark
      - Simone
  address:
    number: 34
    street: Main Street
    city: Nowheretown
    zipcode: 12345

(2) YAML rules:

1. Indentation expresses hierarchy; use 2, 4, or 6 spaces (never tabs)
2. A colon is followed by a space; a line ending in a colon takes no trailing value
3. A hyphen introduces a list item and is followed by a space
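Once parsed, the YAML sample above is just a nested structure of mappings and lists. Shown here as the equivalent Python data, which is what a YAML loader would hand back:

```python
# The parsed form of the YAML sample: mappings become dicts,
# "- item" lines become list elements.
house = {
    "family": {
        "name": "Doe",
        "parents": ["John", "Jane"],
        "children": ["Paul", "Mark", "Simone"],
    },
    "address": {
        "number": 34,
        "street": "Main Street",
        "city": "Nowheretown",
        "zipcode": 12345,
    },
}

print(house["family"]["parents"][0])   # → John
print(house["address"]["zipcode"])     # → 12345
```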

(3) Where the YAML (state) files live: Salt has a built-in fileserver, configured via file_roots in the master config

[root@linux-node1 ~]# vim /etc/salt/master     # define where the state files live; the base environment is mandatory
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  test:
    - /srv/salt/test
  prod:
    - /srv/salt/prod
[root@linux-node1 ~]# mkdir -p /srv/salt/{base,dev,test,prod}
[root@linux-node1 ~]# systemctl restart salt-master
[root@linux-node1 ~]# cd /srv/salt/base/
[root@linux-node1 base]# mkdir web
[root@linux-node1 web]# vim apache.sls    # write the YAML file that installs apache
apache-install:
  pkg.installed:       # pkg module, installed function; picks the right package manager for the OS
    - name: httpd      # the package to install

apache-service:        # every ID must be unique
  service.running:     # service state module; running is the function
    - name: httpd      # the service to manage
    - enable: True     # enable the service at boot

[root@linux-node1 ~]# salt 'linux-node2.example.com' state.sls web.apache
#If apache.sls lived under the prod directory, you would append saltenv=prod:
#salt 'linux-node2.example.com' state.sls web.apache saltenv=prod

To automate this across hosts, write a top.sls.
top.sls is the entry point of the state system; in large-scale configuration management it decides which minions apply which state files. A top file is not mandatory: for simple, one-off configuration of a single machine you can apply a state file directly with the state.sls command.
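The way the master resolves a top file can be sketched with shell-style glob matching: each minion ID is tested against every target pattern, and matching entries contribute their state lists (a simplified illustration with a made-up db-* entry; real matching supports many more target types):

```python
import fnmatch

# A base-environment top.sls, as a Python structure (the db-* entry is hypothetical).
top = {
    "base": {
        "linux-node1.example.com": ["web.apache"],
        "linux-node2.example.com": ["web.apache"],
        "db-*": ["mysql"],
    }
}

def states_for(minion_id, env="base"):
    """Collect the state files whose target pattern matches this minion."""
    states = []
    for pattern, sls_list in top[env].items():
        if fnmatch.fnmatch(minion_id, pattern):
            states.extend(sls_list)
    return states

print(states_for("linux-node2.example.com"))  # → ['web.apache']
print(states_for("db-01"))                    # → ['mysql']
```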
[root@linux-node1 base]# pwd
/srv/salt/base
[root@linux-node1 base]# vim top.sls    # must be written in the base environment
base:
  'linux-node1.example.com':
    - web.apache
  'linux-node2.example.com':
    - web.apache
****************************
If a single task should run on every machine, you can also write:
base:
  '*':
    - web.apache
****************************
[root@linux-node1 ~]# salt '*' state.highstate   # reads top.sls; * selects which minions to run on
[root@linux-node1 ~]# salt '*' state.highstate test=True
# test=True performs a dry run, so you can preview changes without touching the running hosts

 ######################################################################

1. What are Grains?

Grains is the SaltStack component that collects information about a salt-minion at startup, so it is also called static data. Think of Grains as recording each minion's common attributes: CPU, memory, disk, network, and so on. You can view all of a minion's grains with grains.items. 
Grains describe a server's hardware and software environment, so when applying Salt sls files you can use them to match and group servers, for example installing different packages on CentOS systems than on Red Hat systems. 
Grains are used for: 1. asset collection 2. information queries 
Official documentation: https://docs.saltstack.com/en/getstarted/overview.html
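grains.get accepts a colon-separated path into the nested grains dictionary, as in ip4_interfaces:eth0. A sketch of that lookup over a trimmed, made-up grains dict:

```python
# Hypothetical sample of a minion's grains (heavily trimmed).
grains = {
    "os": "CentOS",
    "saltversion": "2016.11.8",
    "ip4_interfaces": {"lo": ["127.0.0.1"], "eth0": ["192.168.56.12"]},
}

def grains_get(data, key, default=None):
    """Walk a colon-delimited path, like `salt '*' grains.get a:b:c`."""
    node = data
    for part in key.split(":"):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

print(grains_get(grains, "saltversion"))          # → 2016.11.8
print(grains_get(grains, "ip4_interfaces:eth0"))  # → ['192.168.56.12']
```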

2. Using Grains

(1) Querying information with Grains

[root@linux-node1 ~]# salt '*' grains.items    # list every grain's key and value
[root@linux-node1 ~]# salt '*' grains.get saltversion  # show the salt version
linux-node2.example.com:
    2016.11.8
linux-node1.example.com:
    2016.11.8
[root@linux-node1 ~]# salt '*' grains.get ip4_interfaces    # show IP addresses
[root@linux-node1 ~]# salt '*' grains.get ip4_interfaces:eth0

(2) Matching targets with Grains

Grains can be used for targeting, for example running an operation on every CentOS machine, with salt -G.

#(1) run uptime on machines whose os grain is CentOS:
[root@linux-node1 ~]# salt -G 'os:CentOS' cmd.run 'uptime'  # check the load
linux-node2.example.com:
     14:17:06 up 13 days,  8:51,  2 users,  load average: 0.00, 0.01, 0.05
linux-node1.example.com:
     14:17:06 up 22 days,  8:23,  2 users,  load average: 0.01, 0.02, 0.05

 #(2) check the load on machines whose init grain is systemd:
[root@linux-node1 ~]# salt -G 'init:systemd' cmd.run 'uptime'
linux-node1.example.com:
     14:21:00 up 22 days,  8:27,  2 users,  load average: 0.00, 0.01, 0.05
linux-node2.example.com:
     14:21:00 up 13 days,  8:55,  2 users,  load average: 0.00, 0.01, 0.05

(3) Matching grains in the top file

#In top.sls, apply the web.apache state to servers whose os is CentOS
[root@linux-node1 ~]# vim /srv/salt/base/top.sls 
base:
  'os:CentOS':
    - match: grain
    - web.apache
[root@linux-node1 ~]# salt '*' state.highstate
linux-node2.example.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 14:28:57.612549
    Duration: 2490.712 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 14:29:00.104396
    Duration: 41.901 ms
     Changes:   

Summary for linux-node2.example.com
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time:   2.533 s
linux-node1.example.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 14:29:12.061257
    Duration: 11458.788 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 14:29:23.520720
    Duration: 46.868 ms
     Changes:   

Summary for linux-node1.example.com

(4) Custom Grains

Grains exist in four forms: 
1. Core grains 
2. Custom grains in /etc/salt/grains 
3. Custom grains in /etc/salt/minion 
4. Custom grains in a _grains directory, synced out to the minions

#define a custom grain, the form commonly used in production
[root@linux-node1 ~]# vim /etc/salt/grains 
test-grains: linux-node2   # note the space after the colon
[root@linux-node1 ~]# systemctl restart salt-minion
[root@linux-node1 ~]# salt '*' grains.get test-grains
linux-node1.example.com:
    linux-node2
linux-node2.example.com:
[root@linux-node1 ~]# vim /etc/salt/grains 
test-grains: linux-node2
hehe: haha
[root@linux-node1 ~]# salt '*' saltutil.sync_grains
[root@linux-node1 ~]# salt '*' grains.get hehe
linux-node1.example.com:
    haha
linux-node2.example.com:

3. What is Pillar?

Pillar is one of Salt's most important systems. Like grains it is dictionary-structured, storing data as key/value pairs. By design, Pillar is delivered to each minion over its own encrypted session. It provides an interface for defining data on the master and then consuming it on the minions, typically for sensitive data such as SSH keys and certificates.

Pillar is organized much like states: it consists of sls files tied together by a top.sls entry file. The default path is /srv/pillar; it can be changed via pillar_roots in /etc/salt/master.

So what is Pillar actually for? A short example makes it clear.

Suppose you monitor ten newly racked servers with Zabbix and need to distribute zabbix_agentd.conf to every monitored host, but the Hostname/IP in that file differs per machine. You are not going to write ten copies of the config. So how can the file pick up each host's own IP as it is distributed? This is where rendering comes in. The default renderer is Jinja, which supports loops and conditionals with {% ... %} / {% end... %} blocks; Salt renders the file with Jinja first and then hands the result to YAML.
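The rendering idea, one template filled in per host, can be sketched with Python's string.Template (Salt actually renders with Jinja; the config field and IPs below are invented for illustration):

```python
from string import Template

# A stand-in for zabbix_agentd.conf with one per-host field (hypothetical).
conf_template = Template("Server=192.168.56.11\nHostname=$host_ip\n")

hosts = ["192.168.56.12", "192.168.56.13"]  # hypothetical minion IPs
for ip in hosts:
    # One template, rendered separately for each target host.
    rendered = conf_template.substitute(host_ip=ip)
    print(rendered)
```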

 

4. Using Pillar

(1) How to define Pillar data

a. Defining pillar in the master configuration file: 
The pillar_opts setting controls whether the master's configuration data is added to Pillar and made available to every minion. Toggle it in the master config and restart the service for the change to take effect:

#pillar.items is empty by default; enable it in /etc/salt/master
[root@linux-node1 ~]# salt '*' pillar.items
linux-node1.example.com:
    ----------
linux-node2.example.com:
    ----------
[root@linux-node1 ~]# vim /etc/salt/master
#pillar_opts: False   # enable this option: change it to True
pillar_opts: True
[root@linux-node1 ~]# systemctl restart salt-master
[root@linux-node1 ~]# salt '*' pillar.items

b. Defining Pillar with SLS files 
Pillar uses SLS files much like States do. They live under the directories defined by pillar_roots in the master config. Example:

[root@linux-node1 ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod

#This defines the base environment's Pillar files in /srv/pillar/base and the prod environment's in /srv/pillar/prod.

[root@linux-node1 ~]# mkdir -p /srv/pillar/{base,prod}
[root@linux-node1 ~]# tree /srv/pillar/
/srv/pillar/
├── base
└── prod
[root@linux-node1 ~]# systemctl restart salt-master

#create an apache pillar file for the base environment
[root@linux-node1 ~]# vim /srv/pillar/base/apache.sls
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
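What the Jinja conditional above computes can be expressed directly in Python: the apache pillar key resolves to a different package name depending on the os grain (an illustrative sketch, not Salt code):

```python
def apache_pillar(grains):
    """Mirror of the Jinja if/elif in apache.sls."""
    if grains["os"] == "CentOS":
        return {"apache": "httpd"}
    elif grains["os"] == "Debian":
        return {"apache": "apache2"}
    return {}

print(apache_pillar({"os": "CentOS"}))  # → {'apache': 'httpd'}
print(apache_pillar({"os": "Debian"}))  # → {'apache': 'apache2'}
```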

#Like States, Pillar has a top file and uses the same matching rules to assign data to minions. Example:
[root@linux-node1 ~]# vim /srv/pillar/base/top.sls 
base:
  '*':
    - apache
[root@linux-node1 ~]# salt '*' pillar.items
linux-node1.example.com:
    ----------
    apache:
        httpd
linux-node2.example.com:
    ----------
    apache:
        httpd

#reference the pillar in the base environment
[root@linux-node1 ~]# vim /srv/salt/base/web/apache.sls 
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: True
[root@linux-node1 ~]# salt '*' state.highstate
linux-node2.example.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 15:15:13.424547
    Duration: 940.333 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 15:15:14.366780
    Duration: 55.706 ms
     Changes:   

Summary for linux-node2.example.com
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time: 996.039 ms
linux-node1.example.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 15:15:14.648492
    Duration: 8242.769 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 15:15:22.891907
    Duration: 42.651 ms
     Changes:   

Summary for linux-node1.example.com
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time:   8.285 s

Summary: 
1. Like states, pillar has pillar_roots, configured on the master 
2. Write an apache.sls under the configured directory, here /srv/pillar/base 
3. Pillar data must be assigned in the top file before it can be used; here top.sls assigns the base-environment apache.sls to all minions 
4. Before using it, check that the minions can see the pillar values: salt '*' pillar.items 
5. Change the state file to reference the pillar by name; this is Jinja syntax

5. Grains vs. Pillar

Name      Stored on    Type       Collected                                  Use cases
Grains    minion       static     at minion start; can be refreshed          1. information gathering 2. targeting
Pillar    master       dynamic    defined as needed; takes effect at once    1. targeting 2. sensitive data

 #######################################################################

1. Target 
2. Execution module 
3. Returner

salt     '*'       cmd.run             'uptime'
command  target    execution module    module arguments

1. SaltStack Remote Execution: Targets

Targeting documentation: https://docs.saltstack.com/en/latest/topics/targeting/index.html#advanced-targeting-methods

  • (1) Target matching based on the Minion ID
1. Exact Minion ID match
[root@linux-node1 ~]# salt 'linux-node1.example.com' service.status sshd
linux-node1.example.com:
    True

2. Wildcard match: * ? [1-2]
[root@linux-node1 ~]# salt 'linux*' service.status sshd
linux-node2.example.com:
    True
linux-node1.example.com:
    True
[root@linux-node1 ~]# salt 'linux-node?.example.com' service.status sshd
linux-node2.example.com:
    True
linux-node1.example.com:
    True
[root@linux-node1 ~]# salt 'linux-node[1-2].example.com' service.status sshd
linux-node2.example.com:
    True
linux-node1.example.com:
    True

3. List match
[root@linux-node1 ~]# salt -L 'linux-node1.example.com,linux-node2.example.com' test.ping
linux-node2.example.com:
    True
linux-node1.example.com:
    True

4. Regular-expression match
[root@linux-node1 ~]# salt -E 'linux-(node1|node2)*' test.ping
linux-node2.example.com:
    True
linux-node1.example.com:
    True
  • (2) Matching independent of the Minion ID
    1. Grains match
    [root@linux-node1 ~]# salt -G 'os:CentOS' test.ping
    linux-node2.example.com:
        True
    linux-node1.example.com:
        True
    
    2. Subnet / IP address match
    [root@linux-node1 ~]# salt -S '192.168.56.0/24' test.ping
    linux-node1.example.com:
        True
    linux-node2.example.com:
        True
    
    3. Pillar match
    #the key:value target here was defined in the pillar system
    [root@linux-node1 ~]# salt -I 'apache:httpd' test.ping
    linux-node2.example.com:
        True
    linux-node1.example.com:
        True

     

  • (3) Compound matching (rarely used)
  • (4) Node group matching
    #define nodegroups in the master configuration file
    [root@linux-node1 ~]# vim /etc/salt/master
    nodegroups:
      web-group: 'L@linux-node1.example.com,linux-node2.example.com'
    [root@linux-node1 ~]# systemctl restart salt-master
    [root@linux-node1 ~]# salt -N web-group test.ping
    linux-node2.example.com:
        True
    linux-node1.example.com:
        True
  • (5) Batched execution (batch size)
    #run on one machine at a time, finishing each before starting the next; or run by percentage
    [root@linux-node1 ~]# salt '*' -b 1 test.ping
    
    Executing run on ['linux-node2.example.com']
    
    jid:
        20180117172632455823
    linux-node2.example.com:
        True
    retcode:
        0
    
    Executing run on ['linux-node1.example.com']
    
    jid:
        20180117172632650981
    linux-node1.example.com:
        True
    retcode:
        0
    
    #run on a percentage of the matched minions at a time: when rebooting servers, for example, reboot one half first so the service stays up
    [root@linux-node1 ~]# salt -G 'os:CentOS' --batch-size 50% test.ping
    
    Executing run on ['linux-node2.example.com']
    
    jid:
        20180117172759207757
    linux-node2.example.com:
        True
    retcode:
        0
    
    Executing run on ['linux-node1.example.com']
    
    jid:
        20180117172759402383
    linux-node1.example.com:
        True
    retcode:
        0
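The targeting styles above are, at bottom, different predicates applied to the minion ID (or its metadata), and batching is just slicing the matched list. A simplified Python sketch (the minion data is made up; real Salt also supports grains, pillar, and compound matchers):

```python
import fnmatch
import ipaddress
import math
import re

# Hypothetical inventory: minion id -> IP address.
minions = {
    "linux-node1.example.com": "192.168.56.11",
    "linux-node2.example.com": "192.168.56.12",
}

def match_glob(pattern):                    # salt 'linux-node[1-2].example.com'
    return [m for m in minions if fnmatch.fnmatch(m, pattern)]

def match_list(ids):                        # salt -L 'a,b'
    return [m for m in minions if m in ids]

def match_regex(pattern):                   # salt -E 'linux-(node1|node2)*'
    return [m for m in minions if re.match(pattern, m)]

def match_subnet(cidr):                     # salt -S '192.168.56.0/24'
    net = ipaddress.ip_network(cidr)
    return [m for m, ip in minions.items() if ipaddress.ip_address(ip) in net]

def batches(targets, percent):              # salt ... --batch-size 50%
    size = max(1, math.ceil(len(targets) * percent / 100))
    return [targets[i:i + size] for i in range(0, len(targets), size)]

print(match_glob("linux-node[1-2].example.com"))
print(match_subnet("192.168.56.0/24"))
print(batches(sorted(minions), 50))  # two batches of one minion each
```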

2. SaltStack Remote Execution: Execution Modules

Execution modules: https://docs.saltstack.com/en/latest/ref/modules/all/index.html#all-salt-modules

3. SaltStack Remote Execution: Returners

Returner documentation: https://docs.saltstack.com/en/latest/ref/returners/index.html 
The Return component stores the data minions return after a job, or hands it to another program. It supports many backends, such as MySQL, Redis, ELK, and Zabbix; with returners, every SaltStack operation can be recorded, providing the source data for later log audits. 
The master triggers the job, but each minion then connects directly to the return store and writes its own results to it. 
To return command results straight into MySQL, the minions need the MySQL-python dependency.

  • (1) salt.returners.mysql (minions return results to MySQL)
    1) Install MySQL-python on every minion
    [root@linux-node1 ~]# salt '*' cmd.run 'yum install -y MySQL-python'
    [root@linux-node1 ~]# salt '*' pkg.install MySQL-python    # or install it with the pkg module
    
    (2) Install the MariaDB server
    [root@linux-node1 ~]# yum install -y mariadb-server
    [root@linux-node1 ~]# systemctl start mariadb
    
    (3) Create the salt database, the jids, salt_returns and salt_events tables, and grant privileges
    [root@linux-node1 ~]# mysql -uroot -p
    Enter password: 
    MariaDB [(none)]> CREATE DATABASE  `salt`
        ->   DEFAULT CHARACTER SET utf8
        ->   DEFAULT COLLATE utf8_general_ci;
    Query OK, 1 row affected (0.00 sec)
    
    MariaDB [(none)]> USE `salt`;
    Database changed
    
    MariaDB [salt]> CREATE TABLE `jids` (
        ->   `jid` varchar(255) NOT NULL,
        ->   `load` mediumtext NOT NULL,
        ->   UNIQUE KEY `jid` (`jid`)
        -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    Query OK, 0 rows affected (0.00 sec)
    
    MariaDB [salt]> CREATE TABLE `salt_returns` (
        ->   `fun` varchar(50) NOT NULL,
        ->   `jid` varchar(255) NOT NULL,
        ->   `return` mediumtext NOT NULL,
        ->   `id` varchar(255) NOT NULL,
        ->   `success` varchar(10) NOT NULL,
        ->   `full_ret` mediumtext NOT NULL,
        ->   `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        ->   KEY `id` (`id`),
        ->   KEY `jid` (`jid`),
        ->   KEY `fun` (`fun`)
        -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    Query OK, 0 rows affected (0.03 sec)
    
    MariaDB [salt]> CREATE TABLE `salt_events` (
        -> `id` BIGINT NOT NULL AUTO_INCREMENT,
        -> `tag` varchar(255) NOT NULL,
        -> `data` mediumtext NOT NULL,
        -> `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        -> `master_id` varchar(255) NOT NULL,
        -> PRIMARY KEY (`id`),
        -> KEY `tag` (`tag`)
        -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    Query OK, 0 rows affected (0.02 sec)
    
    MariaDB [salt]> show tables;
    +----------------+
    | Tables_in_salt |
    +----------------+
    | jids           |
    | salt_events    |
    | salt_returns   |
    +----------------+
    3 rows in set (0.00 sec)
    
    MariaDB [salt]> grant all on salt.* to salt@'%' identified by 'salt';
    Query OK, 0 rows affected (0.00 sec)
    
    (4) Configure the MySQL connection in the salt-minion config
    [root@linux-node2 ~]# vim /etc/salt/minion
    ######      Returner  settings        ######
    ############################################
    mysql.host: '192.168.56.11'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306
    [root@linux-node2 ~]# systemctl restart salt-minion
    [root@linux-node1 ~]# vim /etc/salt/minion
    ######      Returner  settings        ######
    ############################################
    mysql.host: '192.168.56.11'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306
    [root@linux-node1 ~]# systemctl restart salt-minion
    
    (5) Test, then check the returned results in the database
    [root@linux-node1 ~]# salt '*' test.ping --return mysql
    linux-node2.example.com:
        True
    linux-node1.example.com:
        True
    MariaDB [salt]> select * from salt_returns;
    +-----------+----------------------+--------+-------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
    | fun       | jid                  | return | id                      | success | full_ret                                                                                                                                            | alter_time          |
    +-----------+----------------------+--------+-------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
    | test.ping | 20180118093222060862 | true   | linux-node2.example.com | 1       | {"fun_args": [], "jid": "20180118093222060862", "return": true, "retcode": 0, "success": true, "fun": "test.ping", "id": "linux-node2.example.com"} | 2018-01-18 09:32:22 |
    | test.ping | 20180118093222060862 | true   | linux-node1.example.com | 1       | {"fun_args": [], "jid": "20180118093222060862", "return": true, "retcode": 0, "success": true, "fun": "test.ping", "id": "linux-node1.example.com"} | 2018-01-18 09:32:24 |
    +-----------+----------------------+--------+-------------------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
    2 rows in set (0.00 sec)
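Since full_ret stores the complete job return as JSON, rows are easy to post-process. A sketch that decodes one row's full_ret value with the standard library (the sample string is taken from the query output above):

```python
import json

# One full_ret value, copied from the salt_returns table.
full_ret = ('{"fun_args": [], "jid": "20180118093222060862", "return": true, '
            '"retcode": 0, "success": true, "fun": "test.ping", '
            '"id": "linux-node2.example.com"}')

record = json.loads(full_ret)
print(record["id"], record["fun"], record["success"])

# Audit-style check: flag any job that did not succeed.
failed = [] if record["success"] else [record["id"]]
print(failed)  # → []
```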

     

  • Use Salt's job_cache mechanism to write every command into MySQL (the common approach)
  • Every executed command is written to MySQL without needing --return: the master's job cache itself is stored in MySQL
    [root@linux-node1 ~]# vim /etc/salt/master
    master_job_cache: mysql
    mysql.host: '192.168.56.11'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306
    [root@linux-node1 ~]# systemctl restart salt-master
    [root@linux-node1 ~]# salt '*' cmd.run 'w'
    [root@linux-node1 ~]# mysql -uroot -p123456 -e "select * from salt.salt_returns;"
    
    #add -v to see the jid; with a jid you can look up the job's results later
    [root@linux-node1 ~]# salt '*' cmd.run 'uptime' -v
    Executing job with jid 20180118095000725560
    -------------------------------------------
    
    linux-node2.example.com:
         09:50:00 up 14 days,  4:24,  2 users,  load average: 0.00, 0.01, 0.05
    linux-node1.example.com:
         09:50:00 up 23 days,  3:56,  2 users,  load average: 0.00, 0.06, 0.18
    [root@linux-node1 ~]# salt-run jobs.lookup_jid 20180118095000725560
    linux-node1.example.com:
         09:50:00 up 23 days,  3:56,  2 users,  load average: 0.00, 0.06, 0.18
    linux-node2.example.com:
         09:50:00 up 14 days,  4:24,  2 users,  load average: 0.00, 0.01, 0.05

     

############################################################################

1. Using salt-ssh

官方文檔:https://docs.saltstack.com/en/2016.11/topics/ssh/index.html

1) Install salt-ssh
[root@linux-node1 ~]# yum install -y salt-ssh

2) Configure salt-ssh
[root@linux-node1 ~]# vim /etc/salt/roster 
linux-node1:
  host: 192.168.56.11
  user: root
  passwd: 123123
linux-node2:
  host: 192.168.56.12
  user: root
  passwd: 123123

3) Run commands over SSH
[root@linux-node1 ~]# salt-ssh '*' -r 'uptime'
linux-node2:
    ----------
    retcode:
        0
    stderr:
    stdout:
        root@192.168.56.12's password: 
         14:07:19 up 14 days,  8:41,  2 users,  load average: 0.04, 0.08, 0.07
linux-node1:
    ----------
    retcode:
        0
    stderr:
    stdout:
        root@192.168.56.11's password: 
         14:07:20 up 23 days,  8:13,  2 users,  load average: 2.86, 0.81, 0.34

2. Configuration Management

(1) What is a state?

States are SaltStack's configuration language; day-to-day configuration management means writing many state files. Say you need to install a package, manage its configuration file, and make sure its service keeps running: you describe and implement that by writing states sls files (files that describe the desired configuration). They use YAML syntax, though states can also be written in Python. 
A state is simply the desired result after the system runs certain operations, described in a YAML file. SLS: SaLt State file. 
For example, installing apache:

[root@linux-node1 ~]# vim /srv/salt/base/web/apache.sls 
apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://apache/files/httpd.conf
    - user: root
    - group: root
    - mode: 644

Explanation:
apache: the ID declaration; globally unique across all environments (base, prod)
pkg: the state module
.: joins the module and the function
installed: the function within the module
:: expresses hierarchy
name: a parameter; its value follows
file.managed: the file management state; it requires source to specify where the file comes from
source: the file's source path; salt:// is the environment's root, here /srv/salt/base/
user, group, mode: the file's owner, group, and permissions
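Internally, Salt compiles an sls file into a list of low-level chunks, one per ID/function pair. The sketch below is a rough illustration of that decomposition for the apache state above (simplified; real low chunks carry more keys, such as ordering and environment):

```python
# The single-ID apache state, as parsed YAML.
apache_sls = {
    "apache": {
        "pkg.installed": [{"name": "httpd"}],
        "service.running": [{"name": "httpd"}],
        "file.managed": [
            {"name": "/etc/httpd/conf/httpd.conf"},
            {"source": "salt://apache/files/httpd.conf"},
            {"user": "root"}, {"group": "root"}, {"mode": 644},
        ],
    }
}

def low_chunks(sls):
    """Flatten {id: {module.function: [args]}} into per-function chunks."""
    chunks = []
    for state_id, funcs in sls.items():
        for dotted, args in funcs.items():
            module, func = dotted.split(".")
            chunk = {"__id__": state_id, "state": module, "fun": func}
            for arg in args:
                chunk.update(arg)
            chunks.append(chunk)
    return chunks

for c in low_chunks(apache_sls):
    print(c["__id__"], c["state"], c["fun"], c["name"])
```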

The file above can also be split into separate IDs:
apache-install:
  pkg.installed:
    - name: httpd

apache-service:
  service.running:
    - name: httpd

apache-config:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://apache/files/httpd.conf
    - user: root
    - group: root
    - mode: 644

When several configuration files need to be managed, you can also write it as follows (when name is not passed as a parameter, the ID itself is used as the name):
/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/files/httpd.conf
    - user: root
    - group: root
    - mode: 644
/etc/httpd/conf/php.conf:
  file.managed:
    - source: salt://apache/files/php.conf
    - user: root
    - group: root
    - mode: 644

(2) Designing and deploying LAMP states

1. Design analysis

Item       Packages                                                                Config files                                 Services
Modules    pkg                                                                     file                                         service
LAMP       httpd, php, mariadb, mariadb-server, php-mysql, php-pdo, php-cli        /etc/httpd/conf/httpd.conf, /etc/php.ini     httpd, mariadb

2. Apache state configuration

[root@linux-node1 prod]# pwd
/srv/salt/prod
[root@linux-node1 prod]# mkdir apache php mysql
[root@linux-node1 prod]# tree 
.
├── apache
├── mysql
└── php

3 directories, 0 files

[root@linux-node1 prod]# cd apache/
[root@linux-node1 apache]# vim apache.sls      # write the apache state
apache-install:
  pkg.installed:
    - name: httpd

apache-config:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://apache/files/httpd.conf    # salt:// is the environment's root path
    - user: root
    - group: root
    - mode: 644

apache-service:
  service.running:
    - name: httpd
    - enable: True
[root@linux-node1 apache]# mkdir files    # create the source directory
[root@linux-node1 apache]# cd files/
[root@linux-node1 files]# cp /etc/httpd/conf/httpd.conf .
[root@linux-node1 apache]# tree 
.
├── apache.sls
└── files
    └── httpd.conf

1 directory, 2 files
[root@linux-node1 apache]# salt 'linux-node1*' state.sls apache.apache saltenv=prod

3. PHP state configuration

[root@linux-node1 prod]# cd php
[root@linux-node1 php]# mkdir files
[root@linux-node1 php]# vim init.sls
php-install:
  pkg.installed:
    - pkgs:
      - php
      - php-pdo
      - php-mysql

php-config:
  file.managed:
    - name: /etc/php.ini
    - source: salt://php/files/php.ini
    - user: root
    - group: root
    - mode: 644
[root@linux-node1 php]# cp /etc/php.ini files/
[root@linux-node1 php]# tree 
.
├── files
│   └── php.ini
└── init.sls

1 directory, 2 files

4. MySQL state configuration

[root@linux-node1 prod]# cd mysql/
[root@linux-node1 mysql]# vim init.sls
mysql-install:
  pkg.installed:
    - pkgs:
      - mariadb
      - mariadb-server

mysql-config:
  file.managed:
    - name: /etc/my.cnf
    - source: salt://mysql/files/my.cnf
    - user: root
    - group: root
    - mode: 644

mysql-service:
  service.running:
    - name: mariadb    # the service unit is mariadb, even though the package is mariadb-server
    - enable: True
[root@linux-node1 mysql]# mkdir files
[root@linux-node1 mysql]# cp /etc/my.cnf files/
[root@linux-node1 prod]# tree 
.
├── apache
│   ├── files
│   │   └── httpd.conf
│   └── init.sls
├── mysql
│   ├── files
│   │   └── my.cnf
│   └── init.sls
└── php
    ├── files
    │   └── php.ini
    └── init.sls
[root@linux-node1 prod]# salt -S '192.168.56.11' state.sls php.init saltenv=prod
linux-node1.example.com:
----------
          ID: php-install
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: php-mysql
              The following packages were already installed: php-pdo, php
     Started: 10:30:14.780998
    Duration: 118711.436 ms
     Changes:   
              ----------
              php-mysql:
                  ----------
                  new:
                      5.4.16-43.el7_4
                  old:
----------
          ID: php-config
    Function: file.managed
        Name: /etc/php.ini
      Result: True
     Comment: File /etc/php.ini is in the correct state
     Started: 10:32:13.556562
    Duration: 51.913 ms
     Changes:   

Summary for linux-node1.example.com
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time: 118.763 s

5. Write the top file and run the highstate

[root@linux-node1 base]# pwd
/srv/salt/base
[root@linux-node1 base]# vim top.sls 
prod:
  'linux-node1.example.com':
    - apache.init
    - php.init
    - mysql.init
[root@linux-node1 base]# salt 'linux-node1*' state.highstate
linux-node1.example.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 10:39:04.214911
    Duration: 762.144 ms
     Changes:   
----------
          ID: apache-config
    Function: file.managed
        Name: /etc/httpd/conf/httpd.conf
      Result: True
     Comment: File /etc/httpd/conf/httpd.conf is in the correct state
     Started: 10:39:04.979376
    Duration: 13.105 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 10:39:04.992962
    Duration: 36.109 ms
     Changes:   
----------
          ID: php-install
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 10:39:05.029241
    Duration: 0.65 ms
     Changes:   
----------
          ID: php-config
    Function: file.managed
        Name: /etc/php.ini
      Result: True
     Comment: File /etc/php.ini is in the correct state
     Started: 10:39:05.029987
    Duration: 10.642 ms
     Changes:   
----------
          ID: mysql-install
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 10:39:05.040793
    Duration: 0.422 ms
     Changes:   
----------
          ID: mysql-config
    Function: file.managed
        Name: /etc/my.cnf
      Result: True
     Comment: File /etc/my.cnf is in the correct state
     Started: 10:39:05.041301
    Duration: 7.869 ms
     Changes:   
----------
          ID: mysql-service
    Function: service.running
        Name: mariadb
      Result: True
     Comment: The service mariadb is already running
     Started: 10:39:05.049284
    Duration: 28.054 ms
     Changes:   

Summary for linux-node1.example.com
------------
Succeeded: 8
Failed:    0
------------
Total states run:     8
Total run time: 858.995 ms   

 #########################################################################

1. Deploying a Redis master/slave

Requirements:

  1. 192.168.56.11 is the master, 192.168.56.12 is the slave
  2. redis listens on its own IP address instead of 0.0.0.0

Breakdown:
linux-node1: install, configure, start
linux-node2: install, configure, start, set up replication

[root@linux-node1 ~]# yum install redis -y
[root@linux-node1 prod]# mkdir redis/files -p
[root@linux-node1 redis]# cp /etc/redis.conf /srv/salt/prod/redis/files/
[root@linux-node1 redis]# tree 
.
├── files
│   └── redis.conf
└── init.sls

1 directory, 2 files
[root@linux-node1 redis]# vim init.sls 
redis-install:
  pkg.installed:
    - name: redis

redis-config:
  file.managed:
    - name: /etc/redis.conf
    - source: salt://redis/files/redis.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - defaults:
      PORT: 6379
      IPADDR: {{ grains['fqdn_ip4'][0] }}

redis-service:
  service.running:
    - name: redis
    - enable: True
    - reload: True
[root@linux-node1 redis]# salt '*' state.sls redis.init saltenv=prod  #測試單一執行sls是否成功
[root@linux-node1 redis]# netstat -tulnp|grep redis-server
tcp        0      0 192.168.56.11:6379      0.0.0.0:*               LISTEN      10186/redis-server  
[root@linux-node2 ~]# netstat -tulnp |grep redis-server
tcp        0      0 192.168.56.12:6379      0.0.0.0:*               LISTEN      17973/redis-server  
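The `template: jinja` / `defaults:` pair above only takes effect if files/redis.conf actually references the variables. A minimal sketch of the lines to change in the copied config (PORT and IPADDR are the defaults declared in init.sls):

```jinja
bind {{ IPADDR }}
port {{ PORT }}
```

With grains['fqdn_ip4'][0] feeding IPADDR, each minion renders its own address, which is what makes redis listen on 192.168.56.11 and 192.168.56.12 respectively rather than 0.0.0.0.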

Master/slave configuration:
[root@linux-node1 redis]# vim master.sls 
include:
  - redis.init
[root@linux-node1 redis]# vim slave.sls 
include:
  - redis.init

slave_config:
  cmd.run:
    - name: redis-cli -h 192.168.56.12 slaveof 192.168.56.11 6379    # point node2 at the master
    - unless: redis-cli -h 192.168.56.12 info |grep role:slave       # skip if node2 is already a slave
    - require:
      - service: redis-service
[root@linux-node1 redis]# vim /srv/salt/base/top.sls    # configure the top file
prod:
  'linux-node1.example.com':
    - lamp
    - redis.master
  'linux-node2.example.com':
    - lamp
    - redis.slave
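The `lamp` state referenced in this top file is not shown in this section; assuming it simply bundles the three prod modules written earlier, a minimal /srv/salt/prod/lamp.sls sketch would be:

```yaml
include:
  - apache.init
  - php.init
  - mysql.init
```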
[root@linux-node1 redis]# salt '*' state.highstate
......
----------
          ID: slave_config
    Function: cmd.run
        Name: redis-cli -h 192.168.56.12 slaveof 192.168.56.11 6379
      Result: True
     Comment: Command "redis-cli -h 192.168.56.12 slaveof 192.168.56.11 6379" run
     Started: 12:08:46.428924
    Duration: 31.328 ms
     Changes:   
              ----------
              pid:
                  18132
              retcode:
                  0
              stderr:
              stdout:
                  OK

Summary for linux-node2.example.com
-------------
Succeeded: 14 (changed=1)
Failed:     0
-------------
Total states run:     14
Total run time:    1.527 s
......
[root@linux-node1 redis]# tree 
.
├── files
│   └── redis.conf
├── init.sls
├── master.sls
└── slave.sls

1 directory, 4 files
[root@linux-node1 redis]# cat slave.sls 
include:
  - redis.init

slave_config:
  cmd.run:
    - name: redis-cli -h 192.168.56.12 slaveof 192.168.56.11 6379
    - unless: redis-cli -h 192.168.56.12 info |grep role:slave
    - require:
      - service: redis-service

TIPS: in production, always dry-run with test=True first, and target a single node before rolling out, so that a mistake cannot affect the running service.

2. SaltStack job management

Official docs: https://docs.saltstack.com/en/2016.11/ref/modules/all/salt.modules.saltutil.html
Every operation executed in SaltStack produces a jid (job id) on the Master. The minion creates a file named after the jid under the proc directory of its cache; that file records the operation and is deleted automatically once the job finishes. The Master keeps the detailed record of every job under the jobs directory of its own cache.

[root@linux-node1 ~]# cd /var/cache/salt/master/jobs/    # job cache directory
[root@linux-node1 jobs]# pwd
/var/cache/salt/master/jobs
[root@linux-node1 jobs]# ls
07  0e  2f  3a  44  4c  53  5c  72  92  ac  b2  bf  e6  f4
0c  0f  34  3f  45  4e  5a  63  8b  93  ad  b9  c1  e9  fb
0d  13  37  43  49  52  5b  64  8c  a5  af  be  c4  f1  fe
[root@linux-node1 linux-node1.example.com]# pwd
/var/cache/salt/master/jobs/07/f8d6ec1380412c95718d931cfb300e793f6b7316d58ad3f34dd57052ca178f/linux-node1.example.com
[root@linux-node1 linux-node1.example.com]# ll
total 8
-rw------- 1 root root   10 Jan 20 09:39 out.p
-rw------- 1 root root 1748 Jan 20 09:39 return.p    # the returned result
[root@linux-node1 ~]# grep "#keep_jobs: 24" /etc/salt/master
#keep_jobs: 24
Job results are cached for 24 hours by default, which can be changed with keep_jobs. Jobs are managed through the saltutil execution module: SALT.MODULES.SALTUTIL.

salt '*' saltutil.clear_cache           # clear the minion cache
salt '*' saltutil.find_job <job id>     # look up a running job by its id
salt '*' saltutil.is_running            # show jobs currently running
salt '*' saltutil.kill_job <job id>     # kill a job
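The on-disk layout shown above (a two-character hash prefix, the job hash, a plain-text jid file, and one directory per minion holding return.p) can be walked with a short Python sketch. This is illustrative code, not part of Salt:

```python
import os

def list_cached_jids(cache_dir="/var/cache/salt/master/jobs"):
    """Return the jids recorded in a master job cache directory.

    Each cached job lives at <cache_dir>/<2-char prefix>/<hash>/ and
    carries a plain-text `jid` file next to the per-minion results.
    """
    jids = []
    for prefix in sorted(os.listdir(cache_dir)):
        prefix_dir = os.path.join(cache_dir, prefix)
        if not os.path.isdir(prefix_dir):
            continue
        for job_hash in sorted(os.listdir(prefix_dir)):
            jid_file = os.path.join(prefix_dir, job_hash, "jid")
            if os.path.isfile(jid_file):
                with open(jid_file) as fh:
                    jids.append(fh.read().strip())
    return jids
```

In practice the `salt-run jobs.*` runners are the supported way to query this cache; the sketch is only meant to make the directory structure concrete.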

 ###############################################################

Lab environment:

Hostname                   IP address       Role
linux-node1.example.com    192.168.56.11    Master, Minion, Haproxy+Keepalived, Nginx+PHP
linux-node2.example.com    192.168.56.12    Minion, Memcached, Haproxy+Keepalived, Nginx+PHP

SaltStack environment layout:
The base environment holds initialization states; the prod environment holds the production configuration management states.

[root@linux-node1 ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  test:
    - /srv/salt/test
  prod:
    - /srv/salt/prod

pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod
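The master will complain about roots that do not exist on disk, so the trees declared above should be created up front. A bash sketch (the optional prefix argument is only there so the layout can be exercised in a scratch directory instead of /):

```shell
#!/bin/bash
# make_salt_roots: create the file_roots and pillar_roots trees
# declared in /etc/salt/master. Pass a prefix to build them
# somewhere other than /, e.g. for a dry run.
make_salt_roots() {
    local root="${1:-}"
    mkdir -p "$root"/srv/salt/{base,dev,test,prod}
    mkdir -p "$root"/srv/pillar/{base,prod}
}

# Production use (as root): make_salt_roots
```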

1. System initialization

After servers are racked and the OS is installed, the same basic setup is applied everywhere, so in a production SaltStack deployment it pays to group the configuration and packages every server needs under the base environment. Here we create an init directory under base and place each system initialization sls inside it; together they form the "init module".

(1) Requirements analysis and module identification

Initialization task        Salt modules used                               File
Disable SELinux            file.managed                                    /etc/selinux/config
Disable firewalld          service.dead
Time synchronization       pkg.installed, cron.present
File descriptor limits     file.managed                                    /etc/security/limits.conf
Kernel tuning              sysctl.present
SSH service tuning         file.managed, service.running
Trim boot-time services    service.dead
DNS resolution             file.managed                                    /etc/resolv.conf
Shell history tuning       file.append                                     /etc/profile
Terminal timeout           file.append                                     /etc/profile
yum repo configuration     file.managed                                    /etc/yum.repos.d/epel.repo
Agent installation         pkg.installed, file.managed, service.running
Base users                 user.present, group.present
Common base packages       pkg.installed (pkgs)
Login banner / PS1 tweak   file.append                                     /etc/profile

 

(2) Implementation

[root@linux-node1 base]# pwd
/srv/salt/base
[root@linux-node1 base]# mkdir init/files -p

1. Disable SELinux
# file.managed keeps /etc/selinux/config in the desired state
[root@linux-node1 init]# vim selinux.sls
selinux-config:
  file.managed:
    - name: /etc/selinux/config
    - source: salt://init/files/selinux-config
    - user: root
    - group: root
    - mode: 0644
[root@linux-node1 init]# cp /etc/selinux/config files/selinux-config

2. Disable firewalld
# service.dead stops firewalld now and disables it at boot
[root@linux-node1 init]# vim firewalld.sls
firewall-stop:
  service.dead:
    - name: firewalld.service
    - enable: False

3. Time synchronization
# install ntpdate via pkg, then add the sync as a cron entry
[root@linux-node1 init]# vim ntp.sls
ntp-install:
  pkg.installed:
    - name: ntpdate

cron-ntpdate:
  cron.present:
    - name: ntpdate time1.aliyun.com
    - user: root
    - minute: 5

4. File descriptor limits
# file.managed again
[root@linux-node1 init]# vim limit.sls
limit-config:
  file.managed:
    - name: /etc/security/limits.conf
    - source: salt://init/files/limits.conf
    - user: root
    - group: root
    - mode: 0644
[root@linux-node1 init]# cp /etc/security/limits.conf files/
[root@linux-node1 init]# echo "*               -       nofile          65535" >> files/limits.conf

5. Kernel tuning
# sysctl.present, a partial sample; with no name parameter the state id doubles as the name
[root@linux-node1 init]# vim sysctl.sls
net.ipv4.tcp_fin_timeout:
  sysctl.present:
    - value: 2

net.ipv4.tcp_tw_reuse:
  sysctl.present:
    - value: 1

net.ipv4.tcp_tw_recycle:
  sysctl.present:
    - value: 1

net.ipv4.tcp_syncookies:
  sysctl.present:
    - value: 1

net.ipv4.tcp_keepalive_time:
  sysctl.present:
    - value: 600

6. SSH service tuning
# file.managed plus service.running with a watch, so sshd reloads whenever its config changes
[root@linux-node1 init]# vim sshd.sls
sshd-config:
  file.managed:
    - name: /etc/ssh/sshd_config
    - source: salt://init/files/sshd_config
    - user: root
    - group: root
    - mode: 0600
  service.running:
    - name: sshd
    - enable: True
    - reload: True
    - watch:
      - file: sshd-config
[root@linux-node1 init]# cp /etc/ssh/sshd_config files/
[root@linux-node1 init]# vim files/sshd_config
Port 8022
UseDNS no
PermitRootLogin no
PermitEmptyPasswords no
GSSAPIAuthentication no

7. Trim boot-time services
# example: stop postfix and disable it at boot
[root@linux-node1 init]# vim thin.sls
postfix:
  service.dead:
    - enable: False

8. DNS resolution
[root@linux-node1 init]# vim dns.sls
dns-config:
  file.managed:
    - name: /etc/resolv.conf
    - source: salt://init/files/resolv.conf
    - user: root
    - group: root
    - mode: 644
[root@linux-node1 init]# cp /etc/resolv.conf files/

9. Shell history tuning
# file.append adds HISTTIMEFORMAT and friends to /etc/profile
[root@linux-node1 init]# vim history.sls
history-config:
  file.append:
    - name: /etc/profile
    - text:
      - export HISTTIMEFORMAT="%F %T `whoami` "
      - export HISTSIZE=5
      - export HISTFILESIZE=5

10. Terminal timeout
# file.append sets the TMOUT environment variable
[root@linux-node1 init]# vim tty-timeout.sls
tty-timeout:
  file.append:
    - name: /etc/profile
    - text:
      - export TMOUT=300

11. Configure the yum repo
# ship the epel repo file
[root@linux-node1 init]# vim yum-repo.sls
/etc/yum.repos.d/epel.repo:
  file.managed:
    - source: salt://init/files/epel.repo
    - user: root
    - group: root
    - mode: 0644

12. Install agents (zabbix-agent as the example)
# a complete install/configure/run of one piece of software, also using a jinja template plus pillar
[root@linux-node1 base]# mkdir zabbix
[root@linux-node1 base]# vim zabbix/zabbix-agent.sls
zabbix-agent:
  pkg.installed:
    - name: zabbix22-agent
  file.managed:
    - name: /etc/zabbix_agentd.conf
    - source: salt://zabbix/files/zabbix_agentd.conf
    - template: jinja
    - defaults:
      ZABBIX-SERVER: {{ pillar['zabbix-agent']['Zabbix_Server'] }}
    - require:
      - pkg: zabbix-agent
  service.running:
    - enable: True
    - watch:
      - pkg: zabbix-agent
      - file: zabbix-agent
zabbix_agent.conf.d:
  file.directory:
    - name: /etc/zabbix_agentd.conf.d
    - watch_in:
      - service: zabbix-agent
    - require:
      - pkg: zabbix-agent
      - file: zabbix-agent
[root@linux-node1 srv]# vim pillar/base/zabbix.sls
zabbix-agent:
  Zabbix_Server: 192.168.56.11
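Pillar data only reaches minions that are assigned it in the pillar top file, so the zabbix.sls above also needs an entry in /srv/pillar/base/top.sls. A minimal sketch (assuming every minion should receive the zabbix data):

```yaml
base:
  '*':
    - zabbix
```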

13. Base users
# add the base management user www with user.present and group.present
[root@linux-node1 init]# vim user-www.sls
www-user-group:
  group.present:
    - name: www
    - gid: 1000

  user.present:
    - name: www
    - fullname: www
    - shell: /bin/bash
    - uid: 1000
    - gid: 1000

14. Common base packages
# the packages depend on the repo, so include the yum repo state and require it from pkg.installed
[root@linux-node1 init]# vim pkg-base.sls
include:
  - init.yum-repo
base-install:
  pkg.installed:
    - pkgs:
      - screen
      - lrzsz
      - tree
      - openssl
      - telnet
      - iftop
      - iotop
      - sysstat
      - wget
      - dos2unix
      - lsof
      - net-tools
      - mtr
      - unzip
      - zip
      - vim
      - bind-utils
    - require:
      - file: /etc/yum.repos.d/epel.repo

15. Login banner and PS1
[root@linux-node1 init]# vim tty-ps1.sls
/etc/bashrc:
  file.append:
    - text:
      - export PS1=' [\u@\h \w]\$ '

16. Write an aggregate state and add it to the top file
# every initialization task above is one sls under init; init-all.sls pulls them all in with include
[root@linux-node1 init]# vim init-all.sls
include:
  - init.dns
  - init.yum-repo
  - init.firewalld
  - init.history
  - init.limit
  - init.ntp
  - init.pkg-base
  - init.selinux
  - init.sshd
  - init.sysctl
  - init.thin
  - init.tty-timeout
  - init.tty-ps1
  - init.user-www

# assign the state to minions in top.sls and run it; test first so you know exactly what SaltStack is about to change
[root@linux-node1 base]# vim top.sls
base:
  '*':
    - init.init-all
[root@linux-node1 base]# salt '*' state.highstate test=True
[root@linux-node1 base]# salt '*' state.highstate

2. MySQL master/slave

1. Requirements analysis:
Setting up MySQL replication involves these steps:
(1) install and initialize MySQL -> mysql-install.sls
(2) set a different server_id for master and slave in my.cnf -> mariadb-server-master.cnf, mariadb-server-slave.cnf
(3) create the replication user -> master.sls
(4) read the bin-log file name and position on the master -> done by a script
(5) on the slave, change master && start slave -> slave.sls

2. Implementation:

(1) Create the modules and mysql directories under the prod environment
[root@linux-node1 prod]# pwd
/srv/salt/prod
[root@linux-node1 prod]# mkdir -p modules/mysql/files

(2) The install and configuration state file, install.sls
[root@linux-node1 mysql]# cat install.sls 
mysql-install:
  pkg.installed:
    - pkgs:
      - mariadb
      - mariadb-server

mysql-config:
  file.managed:
    - name: /etc/my.cnf
    - source: salt://modules/mysql/files/my.cnf
    - user: root
    - group: root
    - mode: 644
[root@linux-node1 mysql]# cp /etc/my.cnf files/

(3) On the master, configure mariadb-server.cnf with its server_id and create the replication user
[root@linux-node1 mysql]# cat master.sls 
include:
  - modules.mysql.install

master-config:
  file.managed:
    - name: /etc/my.cnf.d/mariadb-server.cnf
    - source: salt://modules/mysql/files/mariadb-server-master.cnf
    - user: root
    - group: root
    - mode: 0644

master-grant:
  cmd.run:
    - name: mysql -e "grant replication slave on *.* to repl@'192.168.56.0/255.255.255.0' identified by '123456';flush privileges;"
[root@linux-node1 mysql]# cp /etc/my.cnf.d/mariadb-server.cnf files/mariadb-server-master.cnf 
[root@linux-node1 mysql]# cp /etc/my.cnf.d/mariadb-server.cnf files/mariadb-server-slave.cnf 

# set server_id in both config files and enable log-bin on the master
[root@linux-node1 mysql]# vim files/mariadb-server-master.cnf 
[mysqld]
server_id=1111
log-bin=mysql-bin
[root@linux-node1 mysql]# vim files/mariadb-server-slave.cnf 
[mysqld]
server_id=2222

(4) Shell script to fetch the bin-log name and position
[root@linux-node1 mysql]# cat files/start-slave.sh 
#!/bin/bash
for i in `seq 1 10`
do
    mysql -h 192.168.56.11 -urepl -p123456 -e "exit"
    if [ $? -eq 0 ];then
        Bin_log=`mysql -h 192.168.56.11 -urepl -p123456 -e "show master status;"|awk  'NR==2{print $1}'`
        POS=`mysql -h 192.168.56.11 -urepl -p123456 -e "show master status;"|awk  'NR==2{print $2}'`
    mysql -e "change master to master_host='192.168.56.11', master_user='repl', master_password='123456', master_log_file='$Bin_log', master_log_pos=$POS;start slave;"
    exit;
    else
        sleep 60;
    fi
done

(5) On the slave, apply the config and start replication
[root@linux-node1 mysql]# cat slave.sls 
include:
  - modules.mysql.install

slave-config:
  file.managed:
    - name: /etc/my.cnf.d/mariadb-server.cnf
    - source: salt://modules/mysql/files/mariadb-server-slave.cnf
    - user: root
    - group: root
    - mode: 0644

start-slave:
  file.managed:
    - name: /tmp/start-slave.sh
    - source: salt://modules/mysql/files/start-slave.sh
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: /bin/bash /tmp/start-slave.sh
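One caveat: cmd.run is not idempotent, so start-slave.sh is re-run on every highstate. A hedged refinement, mirroring the unless guard used for the redis slave earlier, is to skip the script once replication is already up (the grep pattern assumes the stock MariaDB `show slave status` output):

```yaml
start-slave:
  file.managed:
    - name: /tmp/start-slave.sh
    - source: salt://modules/mysql/files/start-slave.sh
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: /bin/bash /tmp/start-slave.sh
    - unless: 'mysql -e "show slave status\G" | grep -q "Slave_IO_Running: Yes"'
```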

3. HAproxy + Keepalived

(1) pkg configuration management

[root@linux-node1 modules]# mkdir pkg
[root@linux-node1 pkg]# vim pkg-init.sls 
pkg-init:
  pkg.installed:
    - names:
      - gcc
      - gcc-c++
      - glibc
      - make
      - autoconf
      - openssl
      - openssl-devel
[root@linux-node1 pkg]# salt 'linux-node1*' state.sls modules.pkg.pkg-init saltenv=prod test=True

(2) HAproxy configuration management

[root@linux-node1 modules]# mkdir haproxy/files -p
[root@linux-node1 haproxy]# cat install.sls 
include:
  - modules.pkg.pkg-init

haproxy-install:
  file.managed:
    - name: /usr/local/src/haproxy-1.5.3.tar.gz
    - source: salt://modules/haproxy/files/haproxy-1.5.3.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src && tar -zxvf haproxy-1.5.3.tar.gz && cd haproxy-1.5.3 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy
    - unless: test -d /usr/local/haproxy
    - require:
      - pkg: pkg-init
      - file: haproxy-install

/etc/init.d/haproxy:
  file.managed:
    - source: salt://modules/haproxy/files/haproxy.init
    - user: root
    - group: root
    - mode: 755
    - require:
      - cmd: haproxy-install

net.ipv4.ip_nonlocal_bind:
  sysctl.present:
    - value: 1

haproxy-config-dir:
  file.directory:
    - name: /etc/haproxy
    - mode: 755
    - user: root
    - group: root

haproxy-init:
  cmd.run:
    - name: chkconfig --add haproxy
    - unless: chkconfig --list | grep haproxy
    - require:
      - file: /etc/init.d/haproxy
[root@linux-node1 haproxy]# cp /usr/local/src/haproxy-1.5.3.tar.gz files/
[root@linux-node1 haproxy]# cp /usr/local/src/haproxy-1.5.3/examples/haproxy.init files/
[root@linux-node1 haproxy]# tree 
.
├── files
│   ├── haproxy-1.5.3.tar.gz
│   └── haproxy.init
└── install.sls

(3) Keepalived configuration management

[root@linux-node1 keepalived]# vim install.sls 
include:
  - modules.pkg.pkg-init

keepalived-install:
  file.managed:
    - name: /usr/local/src/keepalived-1.2.17.tar.gz
    - source: salt://modules/keepalived/files/keepalived-1.2.17.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src && tar -zxf keepalived-1.2.17.tar.gz && cd keepalived-1.2.17 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - pkg: pkg-init
      - file: keepalived-install

/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://modules/keepalived/files/keepalived.sysconfig
    - user: root
    - group: root
    - mode: 644

/etc/init.d/keepalived:
  file.managed:
    - source: salt://modules/keepalived/files/keepalived.init
    - user: root
    - group: root
    - mode: 755

keepalive-init:
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list | grep keepalived
    - require:
      - file: /etc/init.d/keepalived

/etc/keepalived:
  file.directory:
    - user: root
    - group: root
[root@linux-node1 keepalived]# cp /usr/local/src/keepalived-1.2.17.tar.gz files/
[root@linux-node1 init.d]# pwd
/usr/local/src/keepalived-1.2.17/keepalived/etc/init.d
[root@linux-node1 init.d]# cp keepalived.init /srv/salt/prod/modules/keepalived/files/
[root@linux-node1 init.d]# cp keepalived.sysconfig /srv/salt/prod/modules/keepalived/files/
[root@linux-node1 keepalived]# tree 
.
├── files
│   ├── keepalived-1.2.17.tar.gz
│   ├── keepalived.init
│   └── keepalived.sysconfig
└── install.sls

4. Nginx + PHP

(1) Nginx configuration management

[root@linux-node1 modules]# mkdir pcre
[root@linux-node1 pcre]# cat init.sls 
pcre-install:
  pkg.installed:
    - names: 
      - pcre
      - pcre-devel
[root@linux-node1 modules]# mkdir user
[root@linux-node1 user]# cat www.sls 
www-user-group:
  group.present:
    - name: www
    - gid: 1000

  user.present:
    - name: www
    - fullname: www
    - shell: /sbin/nologin
    - uid: 1000
    - gid: 1000
[root@linux-node1 modules]# mkdir nginx/files -p
[root@linux-node1 nginx]# cp /usr/local/src/nginx-1.12.2.tar.gz files/
[root@linux-node1 nginx]# tree 
.
├── files
│   └── nginx-1.12.2.tar.gz
└── install.sls
[root@linux-node1 nginx]# cat install.sls 
include:
  - modules.pcre.init
  - modules.user.www
  - modules.pkg.pkg-init

nginx-source-install:
  file.managed:
    - name: /usr/local/src/nginx-1.12.2.tar.gz
    - source: salt://modules/nginx/files/nginx-1.12.2.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src && tar -zxf nginx-1.12.2.tar.gz && cd nginx-1.12.2 && ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_stub_status_module --with-file-aio --with-http_dav_module && make && make install && chown -R www.www /usr/local/nginx
    - unless: test -d /usr/local/nginx
    - require:
      - user: www-user-group
      - file: nginx-source-install
      - pkg: pcre-install
      - pkg: pkg-init
[root@linux-node1 nginx]# salt 'linux-node1*' state.sls modules.nginx.install saltenv=prod test=True

(2) PHP configuration management

[root@linux-node1 modules]# mkdir php/files -p
[root@linux-node1 php]# cp /usr/local/src/php-5.6.9/sapi/fpm/init.d.php-fpm files/
[root@linux-node1 php]# cp /usr/local/php/etc/php-fpm.conf.default files/
[root@linux-node1 php]# cp /usr/local/src/php-5.6.9/php.ini-production files/
[root@linux-node1 php]# cp /usr/local/src/php-5.6.9.tar.gz files/
[root@linux-node1 php]# tree 
.
├── files
│   ├── init.d.php-fpm
│   ├── php-5.6.9.tar.gz
│   ├── php-fpm.conf.default
│   └── php.ini-production
└── install.sls
[root@linux-node1 php]# cat install.sls 
include:
  - modules.user.www

pkg-php:
  pkg.installed:
    - names:
      - mysql-devel
      - openssl-devel
      - swig
      - libjpeg-turbo
      - libjpeg-turbo-devel
      - libpng
      - libpng-devel
      - freetype
      - freetype-devel
      - libxml2
      - libxml2-devel
      - zlib
      - zlib-devel
      - libcurl
      - libcurl-devel

php-source-install:
  file.managed:
    - name: /usr/local/src/php-5.6.9.tar.gz
    - source: salt://modules/php/files/php-5.6.9.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src && tar -zxf php-5.6.9.tar.gz && cd php-5.6.9 && ./configure --prefix=/usr/local/php --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-mysql=mysqlnd --with-jpeg-dir --with-png-dir --with-zlib --enable-xml --with-libxml-dir --with-curl --enable-bcmath --enable-shmop --enable-sysvsem --enable-inline-optimization --enable-mbregex --with-openssl --enable-mbstring --with-gd --enable-gd-native-ttf --with-freetype-dir=/usr/lib64 --with-gettext=/usr/lib64 --enable-sockets --with-xmlrpc --enable-zip --enable-soap --disable-debug --enable-opcache --with-config-file-path=/usr/local/php-fastcgi/etc --enable-fpm --with-fpm-user=www --with-fpm-group=www && make && make install
    - require:
      - file: php-source-install
      - user: www-user-group
    - unless: test -d /usr/local/php

php-ini:
  file.managed:
    - name: /usr/local/php/etc/php.ini
    - source: salt://modules/php/files/php.ini-production
    - user: root
    - group: root
    - mode: 644

php-fpm:
  file.managed:
    - name: /usr/local/php/etc/php-fpm.conf
    - source: salt://modules/php/files/php-fpm.conf.default
    - user: root
    - group: root
    - mode: 644

php-service:
  file.managed:
   - name: /etc/init.d/php-fpm
   - source: salt://modules/php/files/init.d.php-fpm
   - user: root
   - group: root
   - mode: 755
  cmd.run:
    - name: chkconfig --add php-fpm
    - unless: chkconfig --list | grep php-fpm
    - require:
      - file: php-service
  service.running:
    - name: php-fpm
    - enable: True
    - reload: True
    - require:
      - file: php-ini
      - file: php-fpm
      - file: php-service
      - cmd: php-service

Anything used everywhere is abstracted into a functional module, i.e. installation and base configuration (nginx.conf, and the files php includes, live in the functional module, while virtual host configs go into a business module).
The remaining configuration and service startup are abstracted into business modules, one configuration file per business.

All services run as the www user with a uniform uid. Only port 8080 is opened for web traffic; on web servers only ssh on 8022 and the web port 8080 are reachable, and every unused port stays closed.

Here nginx and php are both abstracted into modules, with installation and base configuration placed under modules; in the nginx-derived business directory web, a bbs virtual host is created.

[root@linux-node1 base]# vim top.sls 
prod:
  '*':
    - web.bbs
[root@linux-node1 base]# salt '*' state.highstate
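The web.bbs state assigned above is the business layer built on the modules; its contents are not listed in this section. A hypothetical /srv/salt/prod/web/bbs.sls sketch (the vhost path and file names are assumptions, not from the original):

```yaml
include:
  - modules.nginx.install
  - modules.php.install

bbs-nginx-config:
  file.managed:
    - name: /usr/local/nginx/conf/vhost/bbs.conf
    - source: salt://web/files/bbs.conf
    - user: root
    - group: root
    - mode: 644
```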