Command-line tools
1. The click module
click provides the same functionality as argparse but is easier to use. Using click takes two steps: 1) decorate a function with @click.command() to turn it into a command-line interface; 2) decorate the function with @click.option() and friends to add command-line options.
import click

@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name', help='The person to greet')
def hello(count, name):
    """Simple program that greets NAME for a total of COUNT times."""
    for x in range(count):
        click.echo('Hello %s!!!' % name)

if __name__ == '__main__':
    hello()
In this example the function hello takes two parameters, count and name, whose values are read from the command line. It uses click's command, option and echo. Because the prompt option is set, click asks for the name interactively when the --name argument is not supplied.
Use nargs to configure the number of arguments and type to set the argument type.
import click

@click.command()
@click.option('--pos', nargs=2, type=float)
def findme(pos):
    click.echo('%s / %s' % pos)
type=click.Choice(['md5', 'sha1']) restricts the value to one of the two choices.
If you need to read a password from the command line, argparse can only treat it like an ordinary argument. With click you simply set prompt=True; setting hide_input=True hides the input, and setting confirmation_prompt=True asks for the value twice.
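A minimal sketch of such a password option (the command name encrypt is just an illustration; hide_input and confirmation_prompt are click's own option names):

import click

@click.command()
@click.option('--password', prompt=True, hide_input=True, confirmation_prompt=True,
              help='The password is read interactively and never echoed.')
def encrypt(password):
    # Only echo the length so the secret itself is never printed.
    click.echo('Got a password of length %d' % len(password))

if __name__ == '__main__':
    encrypt()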
2. The prompt_toolkit module
prompt_toolkit is an open-source library for building interactive command-line applications, intended as a replacement for readline and curses. Its features include:
1) Syntax highlighting
2) Multi-line editing
3) Code completion
4) Auto-suggestions
5) Moving the cursor with the mouse
6) Emacs and Vi style key bindings
7) Searchable input history
8) Good Unicode support
A simple example:
#!/usr/bin/python
from __future__ import unicode_literals
from prompt_toolkit import prompt

while True:
    user_input = prompt('>')
    print(user_input)
With this, shortcuts such as Ctrl+A and Ctrl+E work while typing. Note: under Python 2 you need to add from __future__ import unicode_literals.
Adding input history:
from __future__ import unicode_literals
from prompt_toolkit import prompt
from prompt_toolkit.history import FileHistory

while True:
    user_input = prompt('>',
                        history=FileHistory('history.txt'),
                        )
    print(user_input)
Auto-suggestions can be added as well:
from __future__ import unicode_literals
from prompt_toolkit import prompt
from prompt_toolkit.history import FileHistory
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory

while True:
    user_input = prompt('>',
                        history=FileHistory('history.txt'),
                        auto_suggest=AutoSuggestFromHistory(),
                        )
    print(user_input)
Finally, auto-completion:
from __future__ import unicode_literals
from prompt_toolkit import prompt
from prompt_toolkit.history import FileHistory
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.contrib.completers import WordCompleter

SQLCompleter = WordCompleter(['select', 'update', 'drop', 'insert', 'delete'],
                             ignore_case=True)

while True:
    user_input = prompt('>',
                        history=FileHistory('history.txt'),
                        auto_suggest=AutoSuggestFromHistory(),
                        completer=SQLCompleter,
                        )
    print(user_input)
3. System administration scripts
Finding files in a directory by pattern
#!/usr/bin/python
import os
import fnmatch

def is_file_match(filename, pattern):
    if fnmatch.fnmatch(filename, pattern):
        return True
    return False

def find_specific_files(dir, patterns=['*'], exclude_dirs=[]):
    for root, dirnames, filenames in os.walk(dir):
        for filename in filenames:
            for pattern in patterns:
                if is_file_match(filename, pattern):
                    yield os.path.join(root, filename)
        for d in exclude_dirs:
            if d in dirnames:
                dirnames.remove(d)

if __name__ == '__main__':
    patterns = ['*.jpg', '*.png', '*.jbg']
    exclude_dirs = ['2']
    for item in find_specific_files(r"D:\python_test", patterns, exclude_dirs):
        print(item)
For example, to find all images in a directory except those under dir2:
patterns = ['*.jpg', '*.png', '*.jbg']
exclude_dirs = ['dir2']
for item in find_specific_files(".", patterns, exclude_dirs):
    print(item)
The filecmp module
Comparing directories and files
The simplest way to compare two files is filecmp.cmp('a.txt', 'b.txt'), which returns True if they are identical and False otherwise.
filecmp's cmpfiles function compares multiple files across two directories at once and returns a 3-tuple containing the files that match, the files that differ, and the files that could not be compared.
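A small sketch of cmpfiles; the directory and file names are placeholders:

import filecmp

# Compare the listed file names as they appear in both directories.
match, mismatch, errors = filecmp.cmpfiles('dir_a', 'dir_b', ['a.txt', 'b.txt', 'c.txt'])
print('identical:', match)
print('different:', mismatch)
print('could not compare:', errors)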
Finding all duplicate files under a directory:
import hashlib
import sys
import os
import fnmatch

CHUNK_SIZE = 8192

def is_file_match(filename, pattern):
    if fnmatch.fnmatch(filename, pattern):
        return True
    return False

def find_specific_files(dir, patterns=['*'], exclude_dirs=[]):
    for root, dirnames, filenames in os.walk(dir):
        for filename in filenames:
            for pattern in patterns:
                if is_file_match(filename, pattern):
                    yield os.path.join(root, filename)
        for d in exclude_dirs:
            if d in dirnames:
                dirnames.remove(d)

def get_chunk(filename):
    # Read in binary mode so the checksum works for any file type.
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            yield chunk

def get_file_checksum(filename):
    h = hashlib.md5()
    for chunk in get_chunk(filename):
        h.update(chunk)
    return h.hexdigest()

def main():
    sys.argv.append("")  # make sure sys.argv[1] exists even without an argument
    directory = sys.argv[1]
    if not os.path.isdir(directory):
        raise SystemExit("{0} is not a directory".format(directory))

    record = {}
    for item in find_specific_files(directory):
        checksum = get_file_checksum(item)
        if checksum in record:
            print("Duplicate files found: {0} vs {1}".format(record[checksum], item))
            print(checksum)
        else:
            record[checksum] = item

if __name__ == '__main__':
    main()
4. The tarfile module
This module handles tar archives; the basics first.
Reading a tar archive:
import tarfile

with tarfile.open('tarfile_add.tar') as f:
    for member_info in f.getmembers():
        print(member_info.name)
tarfile中經常使用函數:getnames獲取tar包中的文件列表 extract提取單個文件 extractall提取全部文件
Creating a tar archive:
import tarfile

with tarfile.open('tarfile_add.tar', mode='w') as out:
    out.add('README.txt')
Reading a gzip-compressed tar archive: with tarfile.open('tarfile_add.tar', mode='r:gz') as f:
Creating a bzip2-compressed tar archive: with tarfile.open('tarfile_add.tar', mode='w:bz2') as out:
Example: back up selected files into a compressed archive:
# -*- coding: utf-8 -*-
import os
import fnmatch
import tarfile
import datetime

def find_specific_files(dir, patterns=['*'], exclude_dirs=[]):
    for root, dirnames, filenames in os.walk(dir):
        for filename in filenames:
            for pattern in patterns:
                if fnmatch.fnmatch(filename, pattern):
                    yield os.path.join(root, filename)
        for d in exclude_dirs:
            if d in dirnames:
                dirnames.remove(d)

def main():
    patterns = ['*.txt', '*.py']
    now = datetime.datetime.now().strftime('%Y-%m-%d_%H_%M_%S')
    filename = "all_file_{0}".format(now)
    with tarfile.open(filename, mode='w:gz') as f:
        for item in find_specific_files(".", patterns):
            f.add(item)

if __name__ == '__main__':
    main()
Brute-forcing a zip archive password:
import zipfile

zf = zipfile.ZipFile('protected.zip')  # 'protected.zip' is a placeholder archive name
with open('password.txt') as f:
    for line in f:
        try:
            zf.extractall(pwd=line.strip().encode('utf-8'))
            print("password is {0}".format(line.strip()))
            break
        except Exception:
            pass
Monitoring a Linux system with Python
Open-source tools:
dstat: combines the functionality of vmstat, iostat, netstat and ifstat on Linux
Usage: dstat -cdngy (or -a) reports CPU, disk I/O, network traffic, paging activity and system statistics (interrupts and context switches)
--fs: report the number of open files and inodes
-t: show the current system time
-l: report load averages
-p: report process statistics
--tcp: show common TCP statistics
--top-mem, --top-io, --top-cpu: show the processes using the most memory, I/O or CPU
--output: write the results to a CSV file
glances
glances is an excellent monitoring tool with a good visual presentation.
Combined with the Bottle web framework it can be accessed from a browser: pip install bottle, then run glances -w
Monitoring disk I/O with Python:
from collections import namedtuple

Disk = namedtuple('Disk', 'major_number minor_number device_name'
                  ' read_count read_merged_count read_sections'
                  ' time_spent_reading write_count write_merged_count'
                  ' write_sections time_spent_write io_requests'
                  ' time_spent_doing_io weighted_time_spent_doing_io')

def get_disk_info(device):
    """Read the I/O statistics of a device from /proc/diskstats."""
    with open("/proc/diskstats") as f:
        for line in f:
            if line.split()[2] == device:
                return Disk(*(line.split()))
    raise RuntimeError("device ({0}) not found!".format(device))

def main():
    disk_info = get_disk_info('sda1')
    print(disk_info)
    print("disk writes completed: {0}".format(disk_info.write_count))
    print("sectors written: {0}".format(disk_info.write_sections))
    print("time spent writing: {0}".format(disk_info.time_spent_write))

if __name__ == '__main__':
    main()
psutil
psutil can be used to implement system monitoring; its common usage is easy to look up, and an example follows:
from __future__ import unicode_literals
import os
import socket
from datetime import datetime

import jinja2
import yagmail
import psutil

EMAIL_USER = 'xxx'
EMAIL_PASSWORD = 'xxx'
RECIPIENTS = ['me@xxx.com']

def render(tpl_path, **kwargs):
    path, filename = os.path.split(tpl_path)
    return jinja2.Environment(
        loader=jinja2.FileSystemLoader(path or './')
    ).get_template(filename).render(**kwargs)

def bytes2human(n):
    symbols = ('B', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y')
    prefix = {'B': 1, 'K': 1024, 'M': 1048576, 'G': 1073741824,
              'T': 1099511627776, 'P': 1125899906842624,
              'E': 1152921504606846976, 'Z': 1180591620717411303424,
              'Y': 1208925819614629174706176}
    for p in reversed(symbols):
        if p != 'B' and n >= prefix[p]:
            value = float(n) / prefix[p]
            return '%.1f%s' % (value, p)
    return '%sB' % n

def get_cpu_info():
    cpu_count = psutil.cpu_count()
    cpu_percent = psutil.cpu_percent(interval=1)
    return dict(cpu_count=cpu_count, cpu_percent=cpu_percent)

def get_memory_info():
    virtual_mem = psutil.virtual_memory()
    mem_total = bytes2human(virtual_mem.total)
    mem_percent = virtual_mem.percent
    mem_free = bytes2human(virtual_mem.free + virtual_mem.buffers + virtual_mem.cached)
    mem_used = bytes2human(virtual_mem.used)
    return dict(mem_total=mem_total, mem_percent=mem_percent,
                mem_free=mem_free, mem_used=mem_used)

def get_disk_info():
    disk_usage = psutil.disk_usage('/')
    disk_total = bytes2human(disk_usage.total)
    disk_percent = disk_usage.percent
    disk_free = bytes2human(disk_usage.free)
    disk_used = bytes2human(disk_usage.used)
    return dict(disk_total=disk_total, disk_percent=disk_percent,
                disk_free=disk_free, disk_used=disk_used)

def get_boot_info():
    boot_time = datetime.fromtimestamp(psutil.boot_time()).strftime("%Y-%m-%d %H:%M:%S")
    return dict(boot_time=boot_time)

def collect_monitor_data():
    data = {}
    data.update(get_boot_info())
    data.update(get_cpu_info())
    data.update(get_memory_info())
    data.update(get_disk_info())
    return data

def main():
    hostname = socket.gethostname()
    data = collect_monitor_data()
    data.update(dict(hostname=hostname))
    content = render('monitor.html', **data)
    # with yagmail.SMTP(user=EMAIL_USER, password=EMAIL_PASSWORD,
    #                   host='smtp.163.com', port=25) as yag:
    #     for recipient in RECIPIENTS:
    #         yag.send(recipient, 'Monitoring report', content)
    print(data)

if __name__ == '__main__':
    main()
Contents of the monitor.html template:
<html>
<head><title>Monitoring report</title></head>
<body>
<table border="1">
    <tr><td>Hostname</td><td>{{hostname}}</td></tr>
    <tr><td>Boot time</td><td>{{boot_time}}</td></tr>
    <tr><td>CPU count</td><td>{{cpu_count}}</td></tr>
    <tr><td>CPU usage</td><td>{{cpu_percent}}</td></tr>
    <tr><td>Total memory</td><td>{{mem_total}}</td></tr>
    <tr><td>Memory usage</td><td>{{mem_percent}}</td></tr>
    <tr><td>Memory used</td><td>{{mem_used}}</td></tr>
    <tr><td>Memory free</td><td>{{mem_free}}</td></tr>
    <tr><td>Total disk space</td><td>{{disk_total}}</td></tr>
    <tr><td>Disk usage</td><td>{{disk_percent}}</td></tr>
    <tr><td>Disk used</td><td>{{disk_used}}</td></tr>
    <tr><td>Disk free</td><td>{{disk_free}}</td></tr>
</table>
</body>
</html>
Python documents and reports
The openpyxl module:
openpyxl models Excel with three classes: Workbook, Worksheet and Cell. Workbook is the abstraction of an Excel workbook, Worksheet of a sheet, and Cell of a single cell.
Load a workbook with wb = openpyxl.load_workbook('1.xlsx'); then the following attributes and methods are available: active returns the active Worksheet; read_only tells whether the workbook was opened read-only; encoding is the document's character encoding; properties holds document metadata such as the title, author and creation date; worksheets returns all Worksheets as a list.
Most Workbook methods deal with Worksheets. Common ones: get_sheet_names returns the names of all sheets; get_sheet_by_name returns a Worksheet by name; get_active_sheet returns the active sheet; remove_sheet deletes a sheet; create_sheet creates an empty sheet; copy_worksheet copies a sheet within the workbook.
經常使用的Worksheet屬性以下:title 表格的標題 dimensions 表格的大小,這裏的大小是指含有數據的表格大小 max_row 表格的最大行 min_row 表格的最小行 max_column 表格的最大列 min_column 表格的最小列
rows 按行獲取單元格(Cell對象) columns 按列獲取單元格(Cell對象) freeze_panes 凍結窗格 values 按行獲取表格的內容(數據) iter_rows 按行獲取全部單元格(Cell對象) iter_columns 按列獲取全部單元格 append 在表格末尾添加數據 merged_cells 合併多個單元格 unmerge_cells 移除合併的單元格
Reading data from a sheet:
import openpyxl

wb = openpyxl.load_workbook('1.xlsx')
ws = wb.get_sheet_by_name('Sheet1')

for row in ws.rows:
    print(*[cell.value for cell in row])

# or equivalently
for i in ws.values:
    print(*i)

# or
for row in ws.iter_rows():
    print(*[cell.value for cell in row])
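Writing goes through the same objects. A short sketch exercising create_sheet and append from the list above (the sheet title and file name are arbitrary examples):

import openpyxl

wb = openpyxl.Workbook()
ws = wb.create_sheet(title='report')  # add a new, empty sheet
ws.append(['host', 'cpu', 'memory'])  # append rows at the end of the sheet
ws.append(['web01', '12%', '48%'])
wb.save('report.xlsx')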
Processing images with Python
The PIL module
Usage: from PIL import Image. Open an image: im = Image.open('1.jpg'); print(im.format, im.size, im.mode). Rotate an image: rot = im.rotate(45); rot.save('2.jpg')
Create thumbnails for all images in a directory:
from PIL import Image
import glob
import os

size = 128, 128

for infile in glob.glob("*.jpg"):
    file, ext = os.path.splitext(infile)
    im = Image.open(infile)
    im.thumbnail(size)
    im.save(file + ".thumbnail", "JPEG")
Reading the EXIF information of a photo:
import sys
import os
from PIL import Image
from PIL.ExifTags import TAGS
from PIL.ExifTags import GPSTAGS

def get_image_meta_info(filename):
    exif_data = {}
    with Image.open(filename) as img:
        data = img._getexif()
        for tag, value in data.items():
            decoded = TAGS.get(tag)
            exif_data[decoded] = value
    if exif_data.get('GPSInfo'):
        gps_data = {}
        for tag, value in exif_data['GPSInfo'].items():
            decoded = GPSTAGS.get(tag)
            gps_data[decoded] = value
        exif_data['GPSInfo'] = gps_data
    return exif_data

def main():
    sys.argv.append("")
    filename = sys.argv[1]
    if not os.path.isfile(filename):
        raise SystemExit("{0} does not exist".format(filename))
    exif_data = get_image_meta_info(filename)
    for key, value in exif_data.items():
        print(key, value, sep=':')

if __name__ == "__main__":
    main()
Sending email
Steps: 1. connect to the SMTP server; 2. send the SMTP "Hello" message; 3. log in to the SMTP server; 4. send the email; 5. close the connection to the SMTP server
1: smtp = smtplib.SMTP('smtp.163.com', 25) creates the SMTP object
2: smtp.ehlo() says "hello" to the SMTP server
3: smtp.starttls() upgrades the current session to an encrypted one
4: smtp.login('xxx@163.com', 'password') — once the session is encrypted, sensitive information can be transmitted; logging in without calling starttls first raises SMTPAuthenticationError
5: smtp.sendmail(sender address, recipient address, mail content) — after logging in, call sendmail to send the email
6: smtp.quit() disconnects from the SMTP server once the mail has been sent
To build a complete email you need the email module, which constructs and parses mail. Building a mail means creating a Message object: a MIMEText object represents a plain-text mail, while a MIMEImage object represents an image attachment. To combine several objects, use a MIMEMultipart object. The email module provides several classes, including Message, MIMEBase, MIMEText, MIMEAudio, MIMEImage and MIMEMultipart.
Sending a plain-text email:
import smtplib
from email.mime.text import MIMEText

SMTP_SERVER = "smtp.163.com"
SMTP_PORT = 25

def send_mail(user, pwd, to, subject, text):
    msg = MIMEText(text)
    msg['From'] = user
    msg['To'] = to
    msg['Subject'] = subject

    smtp_server = smtplib.SMTP(SMTP_SERVER, SMTP_PORT)
    print('Connecting To Mail Server.')
    try:
        smtp_server.ehlo()
        print('Starting Encrypted Session.')
        smtp_server.starttls()
        smtp_server.ehlo()
        print('Logging Into Mail Server.')
        smtp_server.login(user, pwd)
        print('Sending Mail.')
        smtp_server.sendmail(user, to, msg.as_string())
    except Exception as err:
        print('Sending Mail failed: {0}'.format(err))
    finally:
        smtp_server.quit()

def main():
    send_mail(sender, password, recipient, subject, body)  # fill in your own values

if __name__ == '__main__':
    main()
Sending an email with various attachments:
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

username = 'xxx@163.com'
password = 'xxx'
sender = username
receivers = ','.join(['xxx@qq.com'])

# As the name suggests, a Multipart message consists of several parts.
msg = MIMEMultipart()
msg['Subject'] = 'Python mail Test'
msg['From'] = sender
msg['To'] = receivers

# The text part, i.e. the plain-text body.
puretext = MIMEText('I am the plain-text part.')
msg.attach(puretext)

# The attachment parts, one for each file type.
# First an xlsx attachment.
xlsxpart = MIMEApplication(open(r'C:\Users\qinghesh\Desktop\1.xlsx', 'rb').read())
xlsxpart.add_header('Content-Disposition', 'attachment', filename='1.xlsx')
msg.attach(xlsxpart)

# A jpg attachment.
jpgpart = MIMEApplication(open(r'D:\1.jpg', 'rb').read())
jpgpart.add_header('Content-Disposition', 'attachment', filename='1.jpg')
msg.attach(jpgpart)

# An mp3 attachment.
mp3part = MIMEApplication(open(r'C:\Users\Public\Music\Sample Music\Kalimba.mp3', 'rb').read())
mp3part.add_header('Content-Disposition', 'attachment', filename='Kalimba.mp3')
msg.attach(mp3part)

# Now actually send the mail.
try:
    client = smtplib.SMTP()
    client.connect('smtp.163.com')
    client.login(username, password)
    client.sendmail(sender, receivers, msg.as_string())
    client.quit()
    print('Mail with attachments sent successfully!')
except smtplib.SMTPRecipientsRefused:
    print('Recipient refused')
except smtplib.SMTPAuthenticationError:
    print('Auth error')
except smtplib.SMTPSenderRefused:
    print('Sender refused')
except smtplib.SMTPException as e:
    print(e)
The yagmail module:
import yagmail

with yagmail.SMTP(user=user_mail, password=password, host=smtp_host, port=port) as yag:
    yag.send(recipients, subject, content)
Note that both recipients and content can be lists, representing multiple recipients and multiple attachments respectively.
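A small sketch of that, with placeholder credentials, addresses and file names:

import yagmail

with yagmail.SMTP(user='xxx@163.com', password='xxx', host='smtp.163.com', port=25) as yag:
    yag.send(['a@example.com', 'b@example.com'],                  # several recipients
             'daily report',                                      # subject
             ['see the attached files', 'report.xlsx', '1.jpg'])  # body text plus attachments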
The emcli client project:
import os
import sys
try:
    import ConfigParser
except ImportError:
    import configparser as ConfigParser
import argparse
import yagmail

from storage import Storage
from logger import get_logger

logger = get_logger()


def get_argparse():
    parser = argparse.ArgumentParser(description='An email client in terminal')
    parser.add_argument('-s', action='store', dest='subject', required=True,
                        help='specify a subject (must be in quotes if it has spaces)')
    parser.add_argument('-a', action='store', nargs='*', dest='attaches', required=False,
                        help='attach file(s) to the message')
    parser.add_argument('-f', action='store', dest='conf', required=False,
                        help='specify an alternate .emcli.cnf file')
    parser.add_argument('-r', action='store', nargs='*', dest='recipients', required=True,
                        help='recipient who you are sending the email to')
    parser.add_argument('-v', action='version', version='%(prog)s 0.2')
    return parser.parse_args()


def get_config_file(config_file):
    if config_file is None:
        config_file = os.path.expanduser('~/.emcli.cnf')
    return config_file


def get_meta_from_config(config_file):
    config = ConfigParser.SafeConfigParser()

    with open(config_file) as fp:
        config.readfp(fp)

    meta = Storage()
    for key in ['smtp_server', 'smtp_port', 'username', 'password']:
        try:
            val = config.get('DEFAULT', key)
        except (ConfigParser.NoSectionError, ConfigParser.NoOptionError) as err:
            logger.error(err)
            raise SystemExit(err)
        else:
            meta[key] = val

    return meta


def get_email_content():
    return sys.stdin.read()


def send_email(meta):
    content = get_email_content()
    body = [content]
    if meta.attaches:
        body.extend(meta.attaches)

    with yagmail.SMTP(user=meta.username, password=meta.password,
                      host=meta.smtp_server, port=int(meta.smtp_port)) as yag:
        logger.info('ready to send email "{0}" to {1}'.format(meta.subject, meta.recipients))
        yag.send(meta.recipients, meta.subject, body)


def main():
    parser = get_argparse()

    config_file = get_config_file(parser.conf)

    if not os.path.exists(config_file):
        logger.error('{0} does not exist'.format(config_file))
        raise SystemExit()
    else:
        meta = get_meta_from_config(config_file)

    meta.subject = parser.subject
    meta.recipients = parser.recipients
    meta.attaches = parser.attaches or []  # -a is optional, so default to an empty list

    for attach in meta.attaches:
        if not os.path.exists(attach):
            logger.error('{0} does not exist'.format(attach))
            raise SystemExit()

    send_email(meta)


if __name__ == '__main__':
    main()
import logging


def get_logger(log_level=logging.INFO):
    logger = logging.getLogger(__name__)
    logger.setLevel(log_level)

    formatter = logging.Formatter("%(asctime)s [emcli] [%(levelname)s] : %(message)s",
                                  "%Y-%m-%d %H:%M:%S")

    handler = logging.StreamHandler()
    handler.setFormatter(formatter)

    logger.handlers = [handler]

    return logger
class Storage(dict):
    """
    A Storage object is like a dictionary except `obj.foo` can be used
    in addition to `obj['foo']`.
    """
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as k:
            raise AttributeError(k)

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        try:
            del self[key]
        except KeyError as k:
            raise AttributeError(k)

    def __repr__(self):
        return '<Storage ' + dict.__repr__(self) + '>'
from emcli import main
[DEFAULT]
smtp_server = smtp.163.com
smtp_port = 25
username = xxx@163.com
password = 123456
Checking whether hosts are alive with ping:
First approach:
import subprocess
import threading

def is_reacheable(ip):
    # subprocess.call returns 0 when the command succeeds.
    if subprocess.call(["ping", "-c", "2", ip]) == 0:
        print("{0} is alive".format(ip))
    else:
        print("{0} is unreacheable".format(ip))

def main():
    with open('ip.txt') as f:
        lines = f.readlines()
    threads = []
    for line in lines:
        thr = threading.Thread(target=is_reacheable, args=(line.strip(),))
        thr.start()
        threads.append(thr)
    for thr in threads:
        thr.join()

if __name__ == '__main__':
    main()
This approach reads all IPs into memory first. When there are very many machines to scan, a producer-consumer model can be used instead, as follows:
import subprocess
import threading
import queue

def call_ping(ip):
    if subprocess.call(["ping", "-c", "1", ip]) == 0:
        print("{0} is alive".format(ip))
    else:
        print("{0} is unreacheable".format(ip))

def is_reacheable(q):
    try:
        while True:
            ip = q.get_nowait()
            call_ping(ip)
    except queue.Empty:
        pass

def main():
    q = queue.Queue()
    with open('ip.txt') as f:
        for line in f:
            q.put(line.strip())
    threads = []
    for i in range(10):
        thr = threading.Thread(target=is_reacheable, args=(q,))
        thr.start()
        threads.append(thr)
    for thr in threads:
        thr.join()

if __name__ == '__main__':
    main()
nmap
打印IP列表,不進行任何操做:nmap -sL 192.168.0.0/30
指定多個主機:nmap -sL 192.168.1.1 192.168.1.18 也能夠排除某個IP:nmap -sL 192.168.0.* --exclude 192.168.0.100 也能夠將IP保存在文件中,同過-iL選項讀取文件中的IP:nmap -iL ip.list
使用nmap檢查網絡上全部在線主機:nmap -sP 10.166.224.* 或者:nmap -sn 10.166.224.*
IPy
A module for handling IP addresses. It automatically recognises the IP version and address type and makes IP arithmetic easy; see its documentation for the full details. A few important usages:
IP('192.168.1.1').int() converts an IP address to an integer. MySQL has equivalent functions: select inet_aton('192.168.1.1') converts an IP to an integer, and select inet_ntoa('2887432721') converts it back to an IP address.
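A brief IPy sketch using the int() call above plus the library's documented version(), iptype() and network iteration:

from IPy import IP

ip = IP('192.168.1.1')
print(ip.version())   # 4
print(ip.iptype())    # PRIVATE
print(ip.int())       # the address as an integer

net = IP('192.168.1.0/30')
for addr in net:      # iterate over every address in the network
    print(addr)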
dnspython
A module for DNS resolution.
dns.resolver.query('domain name', rdtype=1, rdclass=1, tcp=False, source=None, raise_on_no_answer=True, source_port=9); an example query follows the parameter list below.
rdtype specifies the resource record type. A: address record, returns the IP address the domain points to. NS: name-server record, returns the address of the server that holds the next-level domain information; this record can only be set to a domain name, not an IP address. MX: mail-exchange record, returns the address of the server that accepts mail.
CNAME: canonical-name (alias) record, maps one domain name to another. PTR: reverse-lookup record, the opposite of an A record, converting an IP address back into a host name.
rdclass: the network class
tcp: whether the query uses TCP
source: the source address of the query
source_port: the source port of the query
raise_on_no_answer: whether to raise an exception when the query gets no answer; defaults to True
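A minimal query sketch (the domain is an example; dnspython 1.x exposes this call as dns.resolver.query):

import dns.resolver

answers = dns.resolver.query('example.com', 'A')
for rdata in answers:
    print(rdata.address)   # the IP address of each A record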
polysh
polysh is an ops tool that can be installed directly with pip. It is an interactive command for operating on servers in batch: you log in to several servers at once and run the same commands on all of them at the same time.
polysh --ssh='exec ssh -p 22 -i ~/.ssh/id_rsa' --user=root --hosts-file=hosts.txt
fabric
fabric is a friendlier wrapper around paramiko. It is both a Python library and a command-line tool, fab. The typical way to use fabric is to create a Python file (by default the fab command looks for fabfile.py) containing one or more functions, and then invoke those functions directly with the fab command.
from fabric.api import run, sudo
from fabric.api import env

env.hosts = ["ip1", "ip2"]
env.port = 22
env.user = 'root'
env.password = 'password'

def hostname():
    run('hostname')

def ipaddr():
    run('ifconfig')
Then run fab hostname or fab ipaddr (the function name) to invoke them; fab --list shows the available tasks.
run wraps the execution of a remote command; sudo runs it with sudo privileges; env is the dictionary that holds the configuration.
run and sudo take an important parameter, pty, which controls the pseudo-terminal. If the command starts a long-running service process, set pty=False so that the process does not exit when fabric does. Create a directory: run("mkdir /tmp/1.txt")
result = run("whoami"); check result.failed to see whether it failed. sudo("mkdir /var/www/1.log"); sudo("service httpd restart", pty=False)
local runs a command locally. It is a wrapper around Python's subprocess, making it easy to run shell commands on the local machine; for anything more complex, use subprocess directly: local("pwd")
get downloads a file from the remote server; remote_path says what to download and local_path where to put it: get(remote_path="/tmp/log_extracts.tar.gz", local_path="/logs/new_log.tar.gz")
put uploads a local file to the remote server, with parameters similar to get; in addition, mode sets the permissions of the remote file: put("/local/path/to/app.tar.gz", "/tmp/trunk/app.tar.gz", mode=0755)
reboot restarts the remote server; the wait parameter sets the number of seconds to wait: reboot(wait=30)
prompt interacts with the operator while fabric runs a task, much like raw_input: prompt('Please specify process nice level: ', key='nice', validate=int)
Fabric settings:
fabric keeps all of its configuration in the global env dictionary, which we can modify directly. Sometimes, though, we do not want to change the global configuration, only override part of it temporarily, such as the current working directory or the log level.
1) The shell equivalent of cd /tmp && pwd; in fabric: with cd('/var/log'): run('ls')
2) lcd is like cd, except it changes the local directory
3) path modifies the remote PATH environment variable, only for the current session. path supports several behaviours: append (the default) adds the given path to the end of PATH, prepend adds it to the front, and replace replaces the current PATH.
4) prefix is literally a prefix: each command is run with the prefix command prepended:
with cd('/path/to/app/'):
    with prefix('workon myvenv'):
        run('./manage.py syncdb')
        run('./manage.py loaddata')
which is equivalent to the shell commands:
cd /path/to/app && workon myvenv && ./manage.py syncdb
cd /path/to/app && workon myvenv && ./manage.py loaddata
5) shell_env sets environment variables for the shell:
with shell_env(ZMQ_DIR='/home/user/local'):
    run('pip install pyzmq')
which is equivalent to the shell code:
export ZMQ_DIR='/home/user/local' && pip install pyzmq
6) settings is the generic way to override env variables temporarily:
with settings(user='foo'):
    pass  # do something
7) remote_tunnel sets up a forwarding tunnel via SSH port forwarding:
with remote_tunnel(3306):
    run('mysql -u root -p password')
Output control
hide suppresses the given kinds of output:
def my_task():
    with hide('running', 'stdout', 'stderr'):
        run('ls /var/www')
hide accepts 7 kinds of output:
status: status messages, e.g. the server dropped the connection or the user pressed Ctrl+C; there are none when fabric runs normally
aborts: abort messages; usually disabled when fabric is used as a library
warnings: warning messages
running: output produced while commands are running
stdout: the standard output of shell commands
stderr: the error output of shell commands
user: user output, similar to Python's print function
show is the opposite of hide and enables the given kinds of output. Aliases: output = stdout + stderr; everything = stdout + stderr + warnings + running + user; commands = stdout + running
quiet hides all output and only warns when a command fails
warn_only: by default fabric stops when a command fails; setting warn_only to True lets execution continue after a failure.
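A small sketch combining settings and warn_only, assuming a fabric 1.x fabfile and an nginx service on the remote host:

from fabric.api import run, settings

def ensure_nginx():
    # Do not abort the whole run if the status command exits non-zero.
    with settings(warn_only=True):
        result = run('service nginx status')
    if result.failed:
        run('service nginx start')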
Decorators provided by fabric
The decorators fabric provides neither perform concrete operations nor modify parameters; they control how, and on which servers, those operations are executed.
task: a task is something fabric needs to execute on remote servers; it is an abstract notion. Usage:
from fabric.api import env
from fabric.api import task, run

@task
def hostname():
    run('hostname')

def ipaddr():
    run('ifconfig')
In fabric every callable is a task, but once you define tasks with the task decorator, functions that are not decorated are no longer considered tasks. In the code above only hostname is a callable task; use fab --list to check.
The hosts decorator specifies which hosts the current task runs on: @hosts('host1', 'host2') def hostname(): run('hostname')
env.reject_unknown_hosts controls how unknown hosts are handled, playing a role similar to SSH's StrictHostKeyChecking setting.
Roles in fabric
A role is a way of classifying servers. Roles define server groups so that different operations can be run on different kinds of servers. A role is a logical grouping; once defined, you refer to a class of servers simply by its role name. A role can contain one or more servers. Role definitions are stored in env.roledefs:
from fabric.api import env

env.roledefs['webserver'] = ['www1', 'www2', 'www3']

env.roledefs = {
    'web': ['www1', 'www2', 'www3'],
    'db': ['db01', 'db02', 'db03']
}
Once roles are defined, the roles decorator specifies which roles a task runs on:
from fabric.api import env, roles

env.roledefs = {
    'web': ['www1', 'www2', 'www3'],
    'db': ['db01', 'db02', 'db03']
}

@roles('db')
def migrate():
    pass

@roles('web')
def webt():
    pass
The fabric execution model
Steps:
1. Build the task list: the tasks given as arguments to the fab command; fabric preserves their order
2. For each task, build the list of servers it should run on; the list can come from command-line arguments, from env.hosts, or from the hosts or roles decorators
3. Walk through the task list and run each task on each server in turn
By default execution is serial:
for task in tasks:
    for host in hosts:
        execute(task, host)
Parallel execution
1. Pass the command-line flag -P (--parallel) to tell fabric to run tasks in parallel
2. Set env.parallel to control whether tasks run in parallel
3. Use the parallel decorator to tell fabric to run a task in parallel
The runs_once decorator: run only once, preventing a task from being invoked multiple times:
from fabric.api import execute, env, runs_once, task

@runs_once
def hello():
    print("hello")

@task
def test():
    execute(hello)
    execute(hello)
serial: forces the current task to run serially. With this decorator, even if the user asks for parallel execution with --parallel, a task decorated with serial still runs serially; that way some tasks can be kept serial while others run in parallel.
The execute function wraps tasks. Its benefit is that a big task can be split into several small, independent tasks that do not interfere with each other:
from fabric.api import execute, env, runs_once, task, roles

env.roledefs = {
    'db': ['db01', 'db02'],
    'web': ['wb01', 'wb02']
}

@roles('db')
def migrate():
    pass

@roles('web')
def update():
    pass

def deploy():
    execute(migrate)
    execute(update)
Installing redis on a remote server from source
1) Run make test locally to run redis's unit tests; if they fail, ask the user whether to continue the deployment. After they pass, delete the redis binaries and pack the redis source into a tar archive. The unit tests obviously only need to run once, so decorate that function with runs_once.
2) Upload the redis source to the remote server and run "make install"; after installation, clean up the files on the remote server, using cd to switch to the right directory before deleting.
3) Clean up the local files.
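A rough fabfile sketch of these three steps, assuming fabric 1.x and a redis source tree in the current directory; the host name, archive name and paths are placeholders:

from fabric.api import local, put, run, cd, runs_once, env

env.hosts = ['server1']

@runs_once
def build():
    # Step 1: run the unit tests once locally, then pack the sources.
    local('make test')
    local('make clean')
    local('tar czf /tmp/redis.tar.gz .')

def deploy():
    build()
    # Step 2: upload, unpack and install on the remote server, then clean up there.
    put('/tmp/redis.tar.gz', '/tmp/redis.tar.gz')
    run('mkdir -p /tmp/redis-src')
    with cd('/tmp/redis-src'):
        run('tar xzf /tmp/redis.tar.gz')
        run('make install')
    run('rm -rf /tmp/redis-src /tmp/redis.tar.gz')
    # Step 3: clean up the local archive.
    local('rm -f /tmp/redis.tar.gz')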
Ansible
ansible all -m file -a "dest=/tmp/data.txt mode=500 owner=root group=root" --become — here --become is the equivalent of sudo
A playbook example:
---
- hosts: test
  become: yes
  become_method: sudo
  tasks:
    - name: copy file
      copy: src=/tmp/data.txt dest=/tmp/data.txt
    - name: change mode
      file: dest=/tmp/data.txt mode=500 owner=root group=root
    - name: ensure packages installed
      apt: pkg={{ item }} state=present
      with_items:
        - tmux
        - git
YAML syntax rules:
The first line of the file is '---', indicating a YAML file
YAML fields are case sensitive
Like Python, YAML uses indentation to express hierarchy
Indentation must use spaces, not tabs; the number of spaces does not matter as long as elements at the same level are aligned
'#' starts a comment
YAML supports three kinds of data:
Mappings: collections of key-value pairs, similar to Python dictionaries
Sequences: ordered lists of values, similar to Python lists
Scalars: single, indivisible values such as strings, booleans or numbers
Building a dynamic inventory from a database:
import argparse
import json
from collections import defaultdict
from contextlib import contextmanager

import pymysql

def to_json(in_dict):
    return json.dumps(in_dict, sort_keys=True, indent=2)

@contextmanager
def get_conn(**kwargs):
    conn = pymysql.connect(**kwargs)
    try:
        yield conn
    finally:
        conn.close()

def parse_args():
    parser = argparse.ArgumentParser(description='Openstack Inventory Module')
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--list', action='store_true', help='List active servers')
    group.add_argument('--host', help='List details about the specific host')
    return parser.parse_args()

def list_all_hosts(conn):
    hosts = defaultdict(list)
    with conn as cur:
        cur.execute('select * from hosts')
        rows = cur.fetchall()
        for row in rows:
            no, host, group, user, port = row
            hosts[group].append(host)
    return hosts

def get_host_detail(conn, host):
    details = {}
    with conn as cur:
        cur.execute("select * from hosts where host='{0}'".format(host))
        rows = cur.fetchall()
        if rows:
            no, host, group, user, port = rows[0]
            details.update(ansible_user=user, ansible_port=port)
    return details

def main():
    parser = parse_args()
    with get_conn(host='x.x.x.x', user='root', passwd='123456', db='db_ansible') as conn:
        if parser.list:
            hosts = list_all_hosts(conn)
            print(to_json(hosts))
        else:
            details = get_host_detail(conn, parser.host)
            print(to_json(details))

if __name__ == '__main__':
    main()
The table structure is:
CREATE TABLE `hosts` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `host` varchar(15) DEFAULT NULL,
  `groupname` varchar(15) DEFAULT NULL,
  `username` varchar(15) DEFAULT NULL,
  `port` int(11) DEFAULT '22',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8
#Create a directory
ansible test -m file -a 'path=/tmp/dd state=directory mode=0755'
#Touch a file and set its permissions
ansible test -m file -a 'path=/tmp/dd state=touch mode="u=rw,g=r,o=r"'
#Create a symbolic link
ansible test -m file -a 'src=/tmp/dd dest=/tmp/ddl owner=lmx group=lmx state=link'
#Change the owner of a file
ansible test -m file -a 'path=/tmp/dd owner=root group=root mode=0644' --become
#Create a user
ansible test -m user -a 'name=johd comment="job done" uid=1321 group=root' --become
#Delete a user
ansible test -m user -a 'name=john state=absent' --become
#Create a user and generate an SSH key pair
ansible test -m user -a 'name=joind comment="joned jjj" generate_ssh_key=yes ssh_key_bits=2048' --become
#Create a group
ansible test -m group -a 'name=ansible state=present gid=1234' --become
#Delete a group
ansible test -m group -a 'name=ansible state=absent' --become
#Download a file onto the remote servers
ansible test -m get_url -a 'url=http://www.baidu.com dest=/tmp/baidu.txt'
#Download a file onto the remote servers and set its permissions
ansible test -m get_url -a 'url=http://www.baidu.com dest=/tmp/baidu.txt mode=0777'
#Create a local directory named data, create some files in it, then build two archives from it: data.tar.gz and data.tar.bz2
#Create a directory
ansible test -m file -a 'path=/data state=directory'
#Unpack a local archive onto the remote hosts
ansible test -m unarchive -a 'src=data.tar.gz dest=/data list_files=yes'
#Copy a local archive to the remote hosts
ansible test -m unarchive -a 'src=data.tar.bz2 dest=/data'
#Unpack an archive that already exists on the remote host
ansible test -m unarchive -a 'src=/data/data.tar.bz2 dest=/tmp remote_src=yes'
git
#Clone requests into /tmp/request
ansible test -m git -a 'repo=https://github.com/kennethreitz/requests.git dest=/tmp/request version=HEAD'
#Install requests from source
ansible test -a 'python setup.py install chdir=/tmp/request' --become
#Verify that requests installed correctly
ansible test -a 'python -c "import requests"'
#Get information about a file
ansible test -m stat -a 'path=/etc/passwd'
cron
#Add a cron job: ansible test -m cron -a 'backup=yes name="test cron" minute=*/2 hour=* job="ls /tmp > /dev/null"'
service
#Install httpd
ansible test -m yum -a 'name=httpd state=present' --become
#Stop httpd
ansible test -m service -a 'name=httpd state=stopped'
#Restart httpd
ansible test -m service -a 'name=httpd state=restarted'
sysctl
#Set the overcommit_memory parameter to 1
ansible test -m sysctl -a 'name=vm.overcommit_memory value=1' --become
mount
#Mount /dev/vda on /mnt/data
ansible test -m mount -a 'name=/mnt/data src=/dev/vda fstype=ext4 state=mounted'
---
- hosts: dbserver
  become: yes
  become_method: sudo
  tasks:
    - name: install mongodb
      apt: name=mongodb-server state=present

- hosts: webservers
  tasks:
    - name: copy file
      copy: src=/tmp/data.txt dest=/tmp/data.txt
    - name: change mode
      file: dest=/tmp/data.txt mode=655 owner=root group=root
---
- hosts: dbserver
  become: yes
  become_method: sudo
  tasks:
    - name: install mongodb
      apt: name=mongodb-server state=present

- hosts: webserver
  tasks:
    - name: copy file
      copy: src=/tmp/data.txt dest=/data.txt
    - name: change mode
      file: dest=/data.txt mode=755 owner=lmx group=lmx
A playbook can contain several plays, but for readability and maintainability we usually write one play per playbook. If we split first_playbook.yml into two playbooks, web.yml and db.yml, and want db.yml to run before web.yml, we can write an all.yml:
---
- include: db.yml
- include: web.yml
The include option is provided by ansible to import other playbooks into a playbook; the imported playbooks are executed in order.
ansible-playbook
-T --timeout: timeout for establishing the SSH connection
--key-file --private-key: private key file used for the SSH connection
-i --inventory-file: inventory file to use, default /etc/ansible/hosts
-f --forks: number of parallel processes, default 5
--list-hosts: list the matched hosts
--list-tasks: list the tasks
--step: stop after each task and wait for the user's confirmation
--syntax-check: check the playbook syntax
-C --check: check whether this playbook would change the remote servers, i.e. predict the result of running it
Playbook syntax in detail
When defining a play, only hosts and tasks are required; everything else is added as needed.
By default ansible connects to the remote servers as the current user. The default remote user can also be configured in ansible.cfg. In addition, if different servers require different users, the user can be specified in the play definition, for example:
---
- hosts: webserver
  remote_user: root
The user can also be set per task:
---
- hosts: test
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: root
Similarly to remote_user, a single task can be given administrator privileges:
---
- hosts: test
  remote_user: root
  tasks:
    - service: name=nginx state=restarted
      become: yes
      become_method: sudo
Notifications and handlers
Ansible modules are idempotent. For example, when creating a user on a remote server, if the user already exists ansible does not delete and recreate it; it simply reports success and uses the changed field to indicate whether the remote server was modified.
When we change apache's configuration file with ansible and then restart apache, the restart is unnecessary if the configuration did not actually change. Ansible handles this with the notify and handler mechanism:
---
- hosts: webserver
  remote_user: test
  become: yes
  become_method: sudo
  tasks:
    - name: ensure apache is at the latest version
      yum: name=httpd state=latest
    - name: write the apache config file
      template: src=/srv/httpd.j2 dest=/etc/httpd.conf
      notify:
        - restart apache
    - name: ensure apache is running
      service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
Note that handlers only run after all tasks have finished, and even if a handler is triggered several times it only runs once. Handlers do not run the moment they are triggered but in the order they are defined in the play; they are usually placed at the end of the play so that they run after all tasks.
Variables
Variables can be defined in the inventory. For simple playbooks, the most direct way is to define them under the playbook's vars option:
---
- hosts: webserver
  vars:
    mysql_port: 3307
Variables defined in a playbook can be used when rendering templates:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
port={{ mysql_port }}
When there are many variables, they can be kept in a separate file referenced through the vars_files option:
---
- hosts: webserver
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml
  tasks:
    - name: this is just a placeholder
      command: /bin/echo foo
The variables file is a simple YAML dictionary:
---
somevar: somevalue
password: magic
In the shell, the previous command's return code tells whether it succeeded. In ansible we can save a task's result in a variable and reference it in later tasks. Such variables are captured with register and are called registered variables.
In the example below, the /usr/bin/foo command is run and its result is captured with the register option into foo_result; a later task then references this result:
---
- hosts: webserver
  tasks:
    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True
    - shell: /usr/bin/bar
      when: foo_result.rc == 5
ignore_errors means errors from that task are ignored; the when clause is a conditional, so the second task only runs when the condition is true.
Facts variables
Ansible also has some special variables that can be used directly. Facts variables contain system information that ansible gathers from the remote servers before a deployment runs.
They can be referenced directly in a playbook, as follows:
---
- hosts: test
  tasks:
    - shell: echo {{ ansible_os_family }}
      register: myecho
    - debug: var=myecho.stdout_lines
    - name: install git on CentOS linux
      yum: name=git state=installed
      when: ansible_os_family == "RedHat"
A playbook that accesses a nested variable:
---
- hosts: test
  gather_facts: yes
  tasks:
    - shell: echo {{ ansible_eth0["ipv4"]["address"] }}
      register: myecho
    - debug: var=myecho.stdout_lines
    - shell: echo {{ ansible_eth0.ipv4.address }}
      register: myecho1
    - debug: var=myecho1.stdout_lines
The gather_facts option controls whether ansible collects information from the remote servers; it defaults to yes. If you are sure no facts are needed, set it to no to speed up deployment:
---
- hosts: test
  gather_facts: no
  tasks:
    - name: Install Mysql package
      yum: name={{ item }} state=installed
      with_items:
        - mysql-server
        - MySQL-python
        - libselinux-python
        - libsemanage-python
In a playbook, conditionals are written with the when option, which works like an if in a programming language. Example:
tasks: - name:"shut down Debian flavored systems" command: /sbin/shutdown -t now when: ansible_os_family == "Debian"
when also accepts multiple conditions:
tasks: - name:"shut down Centos 6 systems" command:/sbin/shutdown -t now when: - ansible_distribution == "CentOS" - ansible_distribution_major_version == "6"
More complex conditions can be built with and, or and parentheses:
--- - hosts: test gather_facts: yes tasks: - name: "shut down CentOS 6 and Debian 7 systems" command: /sbin/shutdown -t now when: (ansible_distribution == "CentOS" and ansible_distribution_major_version == "6") or (ansible_distribution == "Debian" and ansible_distribution_major_version == "7")
A when clause can also read the value of a variable, e.g.:
vars:
  epic: true
tasks:
  - shell: echo "This is epic"
    when: epic
when can be combined with a loop to implement filtering:
tasks:
  - command: echo {{ item }}
    with_items: [0, 2, 4, 6, 8, 10]
    when: item > 5
Task execution strategy
In ansible a playbook is executed task by task: by default ansible uses 5 processes to run tasks on the remote servers, runs task1 everywhere, waits until every server has finished task1 and only then starts task2. Since version 2.0 ansible also supports an execution strategy called free, which lets faster servers finish the whole play ahead of the others instead of waiting at each task:
---
- hosts: test
  strategy: free
  tasks:
    ...
Deploying nginx with a playbook
---
- hosts: test
  become: yes
  become_method: sudo
  vars:
    worker_connections: 768
    worker_processes: auto
    max_open_files: 65535
  tasks:
    - name: install nginx server
      yum: name=nginx state=latest
    - name: copy nginx config file
      template: src=/etc/ansible/playbook/nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify: restart nginx server
    - name: copy index
      template: src=/etc/ansible/playbook/index.html.j2 dest=/usr/share/nginx/html/index.html mode=0644
  handlers:
    - name: restart nginx server
      service: name=nginx state=restarted
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user root;
worker_processes {{ worker_processes }};
worker_rlimit_nofile {{ max_open_files }};
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections {{ worker_connections }};
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  172.26.186.194;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            root /opt/piwik;
            index index.php index.html;
        }

        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html/html$fastcgi_script_name;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # Settings for a TLS enabled server.
    #
    # server {
    #     listen       443 ssl http2 default_server;
    #     listen       [::]:443 ssl http2 default_server;
    #     server_name  _;
    #     root         /usr/share/nginx/html;
    #
    #     ssl_certificate "/etc/pki/nginx/server.crt";
    #     ssl_certificate_key "/etc/pki/nginx/private/server.key";
    #     ssl_session_cache shared:SSL:1m;
    #     ssl_session_timeout  10m;
    #     ssl_ciphers HIGH:!aNULL:!MD5;
    #     ssl_prefer_server_ciphers on;
    #
    #     # Load configuration files for the default server block.
    #     include /etc/nginx/default.d/*.conf;
    #
    #     location / {
    #     }
    #
    #     error_page 404 /404.html;
    #     location = /40x.html {
    #     }
    #
    #     error_page 500 502 503 504 /50x.html;
    #     location = /50x.html {
    #     }
    # }
}
<html>
<head>
    <title>Welcome to ansible</title>
</head>
<body>
    <h1>nginx, configured by ansible</h1>
    <p>If you can see this, ansible successfully installed nginx.</p>
    <p>{{ ansible_hostname }}</p>
</body>
</html>
nginx.conf.j2 references the three parameters defined in the playbook above, and index.html.j2 references a facts variable.
Deploying mongodb with a playbook
Advanced playbook syntax
The advanced options include serial, delegate_to, local_action, run_once, with_*, tags, changed_when and failed_when
Rolling server updates
To speed up deployment, ansible updates remote servers concurrently by default; the --forks parameter controls the number of parallel processes, which defaults to 5. When updating a live service we should proceed gradually to minimise the impact: with few servers, update them one at a time; with many servers, roll out incrementally, e.g. one server first, then two more if nothing goes wrong. If something is wrong or non-conformant during the update, an incremental rollout lets us stop in time and keep the impact on production as small as possible.
To implement rolling updates we can use the serial option of an ansible playbook; its value can be a fixed number of hosts or a percentage of the total.