The Road to Python Automation Ops ~ DAY5

Author: Yin Zhengjie

Copyright notice: this is original work; reproduction is declined, and violations will be pursued legally.



一. Module classification

A module is a collection of code that implements some piece of functionality.

By analogy with functional and procedural programming: a function implements one piece of functionality that other code can simply call, which provides code reuse and keeps coupling between pieces of code low. A complex feature may need several functions to complete (and those functions can live in different .py files); a collection of code made up of such .py files is called a module.

For example: os is the module for OS-related operations; file handling has its own file-related modules.

Modules come in three kinds:

  1>. Custom modules: as the name suggests, Python programs you write yourself. We know Python code lives in files ending in ".py"; take the script's file name, strip the suffix, and you have the module name — that is a custom module. For example, say I write a file named yinzhengjie.py (its contents don't matter here); to import it, I simply import the module name yinzhengjie.

  2>. Built-in modules: so are the built-in functions we learned, such as chr() and id(), modules? No! Built-in functions are not built-in modules; they are just features that ship with the Python interpreter. What, then, are built-in modules? Some commonly used ones are covered later in this post.

  3>. Open-source (third-party) modules: this one is easy to explain. The official Python package index (https://pypi.python.org/pypi) lets developers upload their code to a server, and a client only needs to run one install command to use that third-party module from the shell or cmd Python interpreter at will, e.g.: pip install paramiko.
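To make the custom-module case concrete, here is a minimal, self-contained sketch. It writes the yinzhengjie.py file from the example above to a temporary directory and imports it by module name; the hello() function inside it is hypothetical, invented just for this demo:

```python
import importlib
import os
import sys
import tempfile

# Write a tiny custom module to disk. The module name "yinzhengjie" follows
# the example above; hello() is a made-up function for demonstration.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "yinzhengjie.py"), "w") as f:
    f.write("def hello():\n    return 'hello from yinzhengjie'\n")

sys.path.insert(0, tmpdir)                        # make the directory importable
yinzhengjie = importlib.import_module("yinzhengjie")  # import by name, no ".py"
print(yinzhengjie.hello())
```

Note that the import uses the bare module name, exactly as described above — the ".py" suffix is never part of it.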

二. Built-in modules explained

 1. The os module in detail

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import os
#1. Get the current working directory, i.e. the directory this Python script runs in
print(os.getcwd())
#2. Change the current working directory; equivalent to cd in the shell. Note: this returns None!
print(os.chdir(r"D:\python\daima\DAY1"))
#3. The string for the current directory: ('.')
print(os.curdir)
#4. The string for the parent of the current directory: ('..')
print(os.pardir)
#5. Create directories recursively (multiple levels); returns None when done
print(os.makedirs(r"D:\python\daima\DAY10"))
#6. If the directory is empty, delete it, then recurse to the parent; if that is empty too, delete it, and so on (the inverse of the call above)
print(os.removedirs(r"D:\python\daima\DAY10"))
#7. Create a single-level directory; equivalent to mkdir dirname in the shell. Raises an error if the directory already exists!
print(os.mkdir("DAY10"))
#8. Delete a single-level empty directory; raises an error if it is not empty. Equivalent to rmdir dirname in the shell; raises an error if the directory does not exist!
print(os.rmdir("DAY10"))
#9. List all files and subdirectories of the given directory, including hidden files, as a list
print(os.listdir(r"D:\python\daima"))
#10. Delete a file
# os.remove("locked.txt")
#11. Rename a file/directory
# os.rename("oldname","newname")
#12. os.stat('path/filename') gets file/directory metadata
print(os.stat(r"D:\python\daima\DAY4"))
#13. The OS-specific path separator: "\\" on Windows, "/" on Linux
print(os.sep)
#14. The line terminator of the current platform: "\r\n" on Windows, "\n" on Linux
print(os.linesep)
#15. The string used to separate entries in search paths
print(os.pathsep)
#16. A string identifying the current platform: Windows -> 'nt'; Linux -> 'posix'
print(os.name)
#17. Run a shell or Windows command and show its output directly; the return value can be stored in a variable
# print(os.system("dir"))
#18. Return the normalized absolute version of a path
print(os.path.abspath("user_info.txt"))
#19. Split a path into a (directory, filename) 2-tuple
print(os.path.split(r"D:\python\daima\DAY1\user_info.txt"))
#20. Return the directory part of a path; in fact the first element of os.path.split(path)
print(os.path.dirname(r"D:\python\daima\DAY1\user_info.txt"))
#21. os.path.basename(path) returns the final component of the path. If the path ends with / or \, it returns an empty string. I.e. the second element of os.path.split(path)
print(os.path.basename(r"D:\python\daima\DAY1\user_info.txt"))
#22. os.path.exists(path) returns True if path exists, False otherwise
print(os.path.exists(r"D:\python\daima\DAY1\user_info.txt"))
#23. os.path.isabs(path) returns True if path is an absolute path
print(os.path.isabs(r"D:\python\daima\DAY1\user_info.txt"))
#24. os.path.isfile(path) returns True if path is an existing file, False otherwise
print(os.path.isfile(r"D:\python\daima\DAY1\user_info.txt"))
#25. os.path.isdir(path) returns True if path is an existing directory, False otherwise
print(os.path.isdir(r"D:\python\daima\DAY1\user_info.txt"))
#26. os.path.join(path1[, path2[, ...]]) joins paths; components before the last absolute path are discarded
print(os.path.join(r"user_info.txt",r"D:\python\daima\DAY1\user_info.txt"))
#27. os.path.getatime(path) returns the last access time of the file or directory path points to
print(os.path.getatime(r"D:\python\daima\DAY1\user_info.txt"))
#28. os.path.getmtime(path) returns the last modification time of the file or directory path points to
print(os.path.getmtime(r"D:\python\daima\DAY1\user_info.txt"))
'''
For more on the os module see: https://docs.python.org/2/library/os.html?highlight=os#module-os
'''


#The code above prints the following:
D:\python\daima\DAY4
None
.
..
None
None
None
None
['.idea', 'DAY1', 'DAY2', 'DAY3', 'DAY4', 'DAY5', '__pycache__']
os.stat_result(st_mode=16895, st_ino=22799473113577966, st_dev=839182139, st_nlink=1, st_uid=0, st_gid=0, st_size=4096, st_atime=1487743397, st_mtime=1487743397, st_ctime=1486692902)
\


;
nt
D:\python\daima\DAY1\user_info.txt
('D:\\python\\daima\\DAY1', 'user_info.txt')
D:\python\daima\DAY1
user_info.txt
True
True
True
False
D:\python\daima\DAY1\user_info.txt
1483869109.7747889
1483869109.7758367
Common os module methods explained
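As a quick self-contained exercise combining several of the calls above (using a temporary directory rather than the D:\python\daima tree, so it runs anywhere):

```python
import os
import tempfile

# Build a small directory tree in a temp location, then inspect it with os / os.path.
base = tempfile.mkdtemp()
target = os.path.join(base, "DAY10", "logs")   # hypothetical subtree for the demo
os.makedirs(target)                            # recursive creation, as in #5 above

print(os.path.isdir(target))                   # True: the directory now exists
print(os.path.basename(target))                # logs
print(os.listdir(os.path.join(base, "DAY10"))) # ['logs']

os.removedirs(target)                          # removes 'logs', then the now-empty 'DAY10'
print(os.path.exists(os.path.join(base, "DAY10")))   # False
```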

 2. Common sys module methods

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import sys
#1. Get the version of the Python interpreter
print(sys.version)
#2. Return the name of the OS platform
print(sys.platform)
#3. Return the module search path, initialized from the PYTHONPATH environment variable
print(sys.path)
#4. Exit the program; exit(0) is a normal exit, and 0 is the default when no number is given
# print(sys.exit(100))
#5. The list of command-line arguments; the first element is the path of the program itself
# path_info = sys.argv[1]
#6. Show the platform's maximum native integer size
print(sys.maxsize)

'''
For more see: https://docs.python.org/2/library/sys.html?highlight=sys#module-sys
'''


#The code above prints the following:

3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (Intel)]
win32
['D:\\python\\daima\\DAY4', 'D:\\python\\daima', 'C:\\Users\\yzj\\AppData\\Local\\Programs\\Python\\Python35-32\\python35.zip', 'C:\\Users\\yzj\\AppData\\Local\\Programs\\Python\\Python35-32\\DLLs', 'C:\\Users\\yzj\\AppData\\Local\\Programs\\Python\\Python35-32\\lib', 'C:\\Users\\yzj\\AppData\\Local\\Programs\\Python\\Python35-32', 'C:\\Users\\yzj\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages']
2147483647

Common sys module methods explained
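The commented-out sys.argv line above can be fleshed out into a tiny argument-handling sketch. The script name and argument values below are purely illustrative; real arguments start at sys.argv[1], since sys.argv[0] is the program path:

```python
import sys

def greet(argv):
    # Fall back to a default when no argument was passed on the command line.
    name = argv[1] if len(argv) > 1 else "world"
    return "hello, %s" % name

# In a real script you would call greet(sys.argv); here we simulate
# "python greet.py yinzhengjie" by passing the argv list by hand.
print(greet(["greet.py", "yinzhengjie"]))   # hello, yinzhengjie
print(greet(["greet.py"]))                  # hello, world
```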

3. The json and pickle modules in detail

 

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import pickle
'''
pickle:
 1>. Converts between Python-specific types and Python data types.
 2>. The pickle module provides four functions: dumps, dump, loads, load.
 Note: converting data into a special form that only the Python interpreter recognizes is called serialization; converting that Python-recognizable form back into something we can read is called deserialization.
'''
data_info = {"name":"尹正傑","password":"123"}
#1. Convert the data into a form only Python understands and write it to a file
# pickle_str = pickle.dumps(data_info)
# print(pickle_str)
# f = open("test.txt","wb")
# f.write(pickle_str)
#2. The write above can also be done like this, which looks simpler
# with open("test_1.txt","wb") as fb:
#     pickle.dump(data_info,fb)
#Now that we can store data in a file, how do we read it back out?
#Method one:
# f = open("test_1.txt","rb")
# print(pickle.loads(f.read()))
#Method two:
f = open("test_1.txt","rb")
print(pickle.load(f))
Usage of Python's built-in pickle module
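A compact in-memory round trip makes the serialize/deserialize pairing explicit, with no files involved — dumps produces bytes, loads restores the object:

```python
import pickle

data_info = {"name": "尹正傑", "password": "123"}

blob = pickle.dumps(data_info)      # serialize: Python object -> bytes
print(type(blob))                   # <class 'bytes'>

restored = pickle.loads(blob)       # deserialize: bytes -> Python object
print(restored == data_info)        # True: the round trip preserves the dict
```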

 

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import json
'''
Two modules used for serialization:
 1>. json: converts between strings and Python data types.
 2>. pickle: converts between Python-specific types and Python data types.
 The json module provides four functions: dumps, dump, loads, load.
 The pickle module provides four functions: dumps, dump, loads, load.
'''
accounts = {
    "id":521,
    "name":"yinzhengjie",
    "banlance":"9000"
}
#Storing data, way one:
# f = open(r"D:\python\daima\DAY4\test_2.txt","w")
# json_str = json.dumps(accounts)
# f.write(json_str)
#Storing data, way two:
# with open(r"D:\python\daima\DAY4\test_2.txt","w") as fp:
#     json.dump(accounts,fp)
#Reading the data back, method one:
# f = open("test_2.txt","r")
# print(json.loads(f.read()))
#Method two:
f = open("test_2.txt","r")
print(json.load(f))
Usage of Python's built-in json module

Comparing json and pickle:

     1>. Similarity: both are modules for serialization and deserialization.

  2>. Difference: json is a data-interchange format common to virtually all languages, while pickle is a storage format that only Python understands.
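The difference shows up as soon as you serialize a Python-specific type. A tuple, for example, survives pickle unchanged, but json maps it onto its own array type, which loads back as a list (and some types, like sets, json cannot encode at all):

```python
import json
import pickle

point = (1, 2)                                   # a Python tuple

# pickle preserves the exact Python type...
print(type(pickle.loads(pickle.dumps(point))))   # <class 'tuple'>

# ...while json round-trips it through a JSON array, which becomes a list.
print(type(json.loads(json.dumps(point))))       # <class 'list'>

# json output is plain text any language can parse; pickle output is Python-only bytes.
print(json.dumps(point))                         # [1, 2]
```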

4. The time and datetime modules in detail

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import time
#1. Measure processor time, excluding sleep; not stable across platforms and may not work on macOS
print(time.process_time())
#2. Return the offset of the local (DST) timezone from UTC, in seconds
print(time.altzone)
#3. Return the current time in the default string format
print(time.asctime())
#4. Return local time as a struct_time object
print(time.localtime())
#5. Return UTC time as a struct_time object
print(time.gmtime(time.time()-800000))
#6. Return local time as a formatted string
print(time.asctime(time.localtime()))
#7. Same as above
print(time.ctime())
#8. Parse a date string into a struct_time object
string_2_struct = time.strptime("2016/05/22","%Y/%m/%d")
print(string_2_struct)
#9. Convert a struct_time object into a timestamp
struct_2_stamp = time.mktime(string_2_struct)
print(struct_2_stamp)
#10. Convert a UTC timestamp into struct_time format
print(time.gmtime(time.time()-86640))
#11. Format a UTC struct_time into the given string format
print(time.strftime("%Y-%m-%d %H:%M:%S",time.gmtime()))
time module demo

 

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import time,datetime
#1. Print the current system time
print(datetime.datetime.now())
#2. Convert a timestamp directly into a date, e.g. 2017-02-22
print(datetime.date.fromtimestamp(time.time()))
#3. Current time + 3 days
print(datetime.datetime.now() + datetime.timedelta(3))
#4. Current time - 3 days
print(datetime.datetime.now() + datetime.timedelta(-3))
#5. Current time + 3 hours
print(datetime.datetime.now() + datetime.timedelta(hours=3))
#6. Current time + 30 minutes
print(datetime.datetime.now() + datetime.timedelta(minutes=30))
#7. Replacing fields of a time
c_time  = datetime.datetime.now()
print(c_time.replace(minute=3,hour=2))
datetime module usage

 

A conversion flow for the time formats:

The test code is as follows:

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import time
#1. Parse a date string into struct_time format; it prints as a tuple
# struct_time = time.strptime("2017/2/22","%Y/%m/%d") #note: the separators must match; here the separator is "/"
struct_time = time.strptime("2017-2-22 17:29:30","%Y-%m-%d %H:%M:%S")  #note: the separators must match; here the separator is "-"
print(struct_time)
#2. Convert struct_time format into a timestamp
stamp_time = time.mktime(struct_time)
print(stamp_time)
#3. Convert the timestamp back into a date format
date_time  = time.gmtime(stamp_time)
print(date_time)
print(time.strftime("%Y-%m-%d %H:%M:%S",date_time))

 

 

 The format directives are described in detail below:

Directive Meaning Notes
%a Locale’s abbreviated weekday name.  
%A Locale’s full weekday name.  
%b Locale’s abbreviated month name.  
%B Locale’s full month name.  
%c Locale’s appropriate date and time representation.  
%d Day of the month as a decimal number [01,31].  
%H Hour (24-hour clock) as a decimal number [00,23].  
%I Hour (12-hour clock) as a decimal number [01,12].  
%j Day of the year as a decimal number [001,366].  
%m Month as a decimal number [01,12].  
%M Minute as a decimal number [00,59].  
%p Locale’s equivalent of either AM or PM. (1)
%S Second as a decimal number [00,61]. (2)
%U Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0. (3)
%w Weekday as a decimal number [0(Sunday),6].  
%W Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0. (3)
%x Locale’s appropriate date representation.  
%X Locale’s appropriate time representation.  
%y Year without century as a decimal number [00,99].  
%Y Year with century as a decimal number.  
%z Time zone offset indicating a positive or negative time difference from UTC/GMT of the form +HHMM or -HHMM, where H represents decimal hour digits and M represents decimal minute digits [-23:59, +23:59].  
%Z Time zone name (no characters if no time zone exists).  
%% A literal '%' character.
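A few of these directives in action, parsing a date string into struct_time and rendering it back with different directives:

```python
import time

# Parse with %Y/%m/%d, then re-render with other directives from the table.
st = time.strptime("2017/02/22", "%Y/%m/%d")

print(st.tm_year, st.tm_mon, st.tm_mday)   # 2017 2 22
print(time.strftime("%Y-%m-%d", st))       # 2017-02-22
print(time.strftime("%j", st))             # 053: the 53rd day of the year
print(time.strftime("%A", st))             # full weekday name, locale-dependent
```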

5. The random module in detail

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import random
'''
random:
 is used to generate random numbers.
'''
#Examples:
print(random.random())
print(random.randint(1,20))
print(random.randrange(1,10))
'''
What is random useful for?
'''
#Generating a random verification code, version one:
checkcode = ''
for i in range(6):  #the loop runs 6 times, one per character of the code
    current = random.randrange(0,4)
    if current != i:
        temp = chr(random.randint(65,90))
    else:
        temp = random.randint(0,9)
    checkcode += str(temp)
print(checkcode)
#Generating a random verification code, version two
import string
source = string.digits + string.ascii_lowercase
print("".join(random.sample(source,6)))  #change the number to control how many characters are drawn
Common random methods and applications

6. The logging module

   Many programs need to keep logs, and those logs contain not only normal access records but possibly also errors, warnings and other output. Python's logging module provides a standard logging interface through which you can store logs in various formats. logging defines 5 levels — debug(), info(), warning(), error() and critical() — in increasing order of severity. Let's see how to use it.

    First, what these log levels mean:

Level When it’s used
DEBUG Detailed information, typically of interest only when diagnosing problems.
INFO Confirmation that things are working as expected.
WARNING An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.
ERROR Due to a more serious problem, the software has not been able to perform some function.
CRITICAL A serious error, indicating that the program itself may be unable to continue running.

 

  Log record format fields

%(name)s            the name of the Logger
%(levelno)s         the log level as a number
%(levelname)s       the log level as text
%(pathname)s        full path of the module that issued the logging call (may be unavailable)
%(filename)s        file name of the module that issued the logging call
%(module)s          name of the module that issued the logging call
%(funcName)s        name of the function that issued the logging call
%(lineno)d          line number of the statement that issued the logging call
%(created)f         the current time, as a standard UNIX floating-point timestamp
%(relativeCreated)d milliseconds elapsed since the Logger was created, at the time the record was emitted
%(asctime)s         the current time as a string; the default format is "2003-07-08 16:49:45,896", where the part after the comma is milliseconds
%(thread)d          thread ID (may be unavailable)
%(threadName)s      thread name (may be unavailable)
%(process)d         process ID (may be unavailable)
%(message)s         the message supplied by the user
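A minimal sketch of what a few of these fields expand to, built by formatting a log record by hand (the logger name "demo" and the message are arbitrary):

```python
import logging

# Build a Formatter from some of the fields listed above and apply it to a
# hand-made record, which shows exactly what each %(...)s expands to.
formatter = logging.Formatter("%(levelname)s %(name)s: %(message)s")
record = logging.LogRecord(name="demo", level=logging.WARNING,
                           pathname="example.py", lineno=10,
                           msg="disk space low", args=(), exc_info=None)
print(formatter.format(record))   # WARNING demo: disk space low
```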

A. Simple logging demos:

1>. A first look at the logging module:

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import logging
logging.warning("user [尹正傑] attempted wrong password more than 3 times")
logging.critical("server is down")


#The code above prints the following:
WARNING:root:user [尹正傑] attempted wrong password more than 3 times
CRITICAL:root:server is down

2>. Writing logs to a file is just as simple:

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import logging
logging.basicConfig(filename='yinzhengjie.log', level=logging.INFO)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')

'''
Note:
     level=logging.INFO in logging.basicConfig sets the record level to INFO, meaning only messages at INFO level or above are written to the file. In this example the first (debug) message is therefore not recorded; if you want debug messages recorded, change the level to DEBUG.
'''


#The file 'yinzhengjie.log' then contains:
INFO:root:So should this
WARNING:root:And this, too

B. To print logs both to the screen and to a log file at the same time, you need slightly more advanced knowledge:

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
'''
Logging in Python involves four main classes; the official documentation sums them up best:
    1>. Logger provides the interface the application calls directly;
    2>. Handler sends the log records (created by Loggers) to the appropriate output destination;
    3>. Filter provides fine-grained control over which log records are output;
    4>. Formatter decides the final output format of the log records.
'''

 

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
'''
Logging in Python involves four main classes; the official documentation sums them up best:
    1>. Logger provides the interface the application calls directly;
    2>. Handler sends the log records (created by Loggers) to the appropriate output destination;
    3>. Filter provides fine-grained control over which log records are output;
    4>. Formatter decides the final output format of the log records.
'''


#logger
'''
    Every program obtains a Logger before emitting any output. The Logger usually corresponds to the program's module name.
#1>. For example, the GUI module of a chat tool can get its Logger like this:
LOG=logging.getLogger("chat.gui")
#2>. And the core module like this:
LOG=logging.getLogger("chat.kernel")
#3>. Set the minimum log level; records below lvl are ignored. debug is the lowest built-in level, critical the highest
Logger.setLevel(lvl)
#4>. Add or remove a given filter
Logger.addFilter(filt), Logger.removeFilter(filt)
#5>. Add or remove a given handler
Logger.addHandler(hdlr), Logger.removeHandler(hdlr)
#6>. The level methods you can call
Logger.debug(), Logger.info(), Logger.warning(), Logger.error(), Logger.critical()
'''


#handler
'''
        A handler object is responsible for sending records to the chosen destination. Python's logging system ships several Handlers: some write to the console, some write to files, and some send records over the network. If those are not enough, you can write your own Handler. Multiple handlers can be attached with the addHandler() method.
#1>. Set the minimum level handled; records below lvl are ignored
Handler.setLevel(lvl)
#2>. Choose a format for this handler
Handler.setFormatter()
#3>. Add or remove a filter object
Handler.addFilter(filt), Handler.removeFilter(filt)
        Each Logger can have multiple Handlers attached. Some commonly used Handlers:
#1>. logging.StreamHandler
        This Handler writes to any file-like object such as sys.stdout or sys.stderr. Its constructor is StreamHandler([strm]), where the strm parameter is a file object, defaulting to sys.stderr.
#2>. logging.FileHandler
        Like StreamHandler, but writes log records to a file, and FileHandler opens the file for you. Its constructor is FileHandler(filename[,mode]); filename is required; mode is the file open mode (see the built-in open()), defaulting to 'a', i.e. append.
#3>. logging.handlers.RotatingFileHandler
        Like FileHandler, but it manages file size. When the file reaches a given size, it automatically renames the current log file and creates a new one with the original name to keep writing. For example, with log file chat.log: when chat.log reaches the given size, RotatingFileHandler renames it to chat.log.1; if chat.log.1 already exists, chat.log.1 is first renamed to chat.log.2, and so on; finally chat.log is recreated and logging continues. Its constructor is RotatingFileHandler(filename[, mode[, maxBytes[, backupCount]]]), where filename and mode are as in FileHandler. maxBytes is the maximum log file size; if maxBytes is 0, the file may grow without bound and the renaming described above never happens. backupCount is the number of backup files to keep: with backupCount=2, when rotation happens, an existing chat.log.2 is not renamed but deleted.
#4>. logging.handlers.TimedRotatingFileHandler
        Like RotatingFileHandler, except that instead of rotating by file size it creates a new log file at fixed time intervals; rotated files are suffixed with the current time rather than a number. Its constructor is TimedRotatingFileHandler(filename[, when[, interval[, backupCount]]]), where filename and backupCount mean the same as for RotatingFileHandler and interval is the time interval. when is a case-insensitive string giving the unit of the interval: S (seconds), M (minutes), H (hours), D (days), W (each week; interval==0 means Monday), midnight (every day at midnight).
'''
Introduction to the commonly used features
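The size-based rotation described above can be sketched in a few lines. The file name chat.log follows the example in the text, and the tiny maxBytes is deliberately unrealistic so that rotation happens within the demo:

```python
import logging
import os
import tempfile
from logging import handlers

# Rotate whenever the file would exceed 50 bytes, keeping at most 2 backups:
# chat.log -> chat.log.1 -> chat.log.2, with older backups deleted.
log_file = os.path.join(tempfile.mkdtemp(), "chat.log")
fh = handlers.RotatingFileHandler(log_file, maxBytes=50, backupCount=2)

logger = logging.getLogger("rotate-demo")
logger.addHandler(fh)

for i in range(20):
    logger.warning("message number %d", i)

# After 20 messages, only chat.log plus at most 2 numbered backups remain.
print(sorted(os.listdir(os.path.dirname(log_file))))
```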

 

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import logging
# create logger
logger = logging.getLogger('TEST-LOG')  #the logger name that prefixes each record
logger.setLevel(logging.DEBUG)  #the minimum level for both screen and file output; this is the master switch, and the two handlers below filter on top of it
# create console handler and set level to debug
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)  #the minimum level printed to the screen (cannot go below the logger's own level)
# create file handler and set level to warning
fh = logging.FileHandler("access.log")  #the file the log is saved to
fh.setLevel(logging.WARNING)   #the minimum level written to the file (cannot go below the logger's own level)
# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# add formatter to ch and fh
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add ch and fh to logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code: emit one message per level
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')

#Screen output:
2017-02-23 10:08:28,184 - TEST-LOG - DEBUG - debug message
2017-02-23 10:08:28,185 - TEST-LOG - INFO - info message
2017-02-23 10:08:28,185 - TEST-LOG - WARNING - warn message
2017-02-23 10:08:28,185 - TEST-LOG - ERROR - error message
2017-02-23 10:08:28,185 - TEST-LOG - CRITICAL - critical message


#File contents:
2017-02-23 10:08:28,185 - TEST-LOG - WARNING - warn message
2017-02-23 10:08:28,185 - TEST-LOG - ERROR - error message
2017-02-23 10:08:28,185 - TEST-LOG - CRITICAL - critical message
An example of controlling output by log level

 

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import logging
from logging import handlers
logger = logging.getLogger(__name__)
log_file = "timelog.log"
#fh = handlers.RotatingFileHandler(filename=log_file,maxBytes=10,backupCount=3)
fh = handlers.TimedRotatingFileHandler(filename=log_file,when="S",interval=5,backupCount=3) #filename is the file records go to; when sets the unit to S (seconds); interval is the rotation interval in units of when (so here, every 5 s); backupCount is the number of backup files to keep, here 3
formatter = logging.Formatter('%(asctime)s %(module)s:%(lineno)d %(message)s')  #define the output format
fh.setFormatter(formatter) #apply the formatter
logger.addHandler(fh)
logger.warning("test1")
logger.warning("test2")
logger.warning("test3")
logger.warning("test4")
An example of rotating by time and keeping a fixed number of files

 

7. The shutil module

    This module handles high-level operations on files, directories and archives.

1>. Copy the contents of one file object into another; partial copies are possible

def copyfileobj(fsrc, fdst, length=16*1024): #takes a source file object, a destination file object, and a chunk size for each read
    """copy data from file-like object fsrc to file-like object fdst"""
    while 1:
        buf = fsrc.read(length)
        if not buf:
            break
        fdst.write(buf)
Source code of shutil.copyfileobj
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
f_1 = open("file_1","r+",encoding="utf-8")
f_2 = open("file_2","a+",encoding="utf-8") #if this were opened with "r+", the copyfileobj call below would overwrite the file's existing contents
shutil.copyfileobj(f_1,f_2)  #append the contents of file_1 to file_2
How to call it

2>. Copy a file

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def copyfile(src, dst):
    """Copy data from src to dst"""
    if _samefile(src, dst):
        raise Error("`%s` and `%s` are the same file" % (src, dst))

    for fn in [src, dst]:
        try:
            st = os.stat(fn)
        except OSError:
            # File most likely does not exist
            pass
        else:
            # XXX What about other special files? (sockets, devices...)
            if stat.S_ISFIFO(st.st_mode):
                raise SpecialFileError("`%s` is a named pipe" % fn)

    with open(src, 'rb') as fsrc:
        with open(dst, 'wb') as fdst:
            copyfileobj(fsrc, fdst)
Source code of shutil.copyfile
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.copyfile("file_1","file_2") #just name the source and destination files; no need to laboriously open them as above. The destination is created if missing, and its existing contents are overwritten
How to call it

3>. Copy only the permissions; contents, group and owner are unchanged

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def copymode(src, dst):
    """Copy mode bits from src to dst"""
    if hasattr(os, 'chmod'):
        st = os.stat(src)
        mode = stat.S_IMODE(st.st_mode)
        os.chmod(dst, mode)
Source code of shutil.copymode
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.copymode("file_1","file_3") #note: both files must already exist; only the permissions are copied!
How to call it

4>. Copy the stat info, including: mode bits, atime, mtime, flags

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def copystat(src, dst):
    """Copy all stat info (mode bits, atime, mtime, flags) from src to dst"""
    st = os.stat(src)
    mode = stat.S_IMODE(st.st_mode)
    if hasattr(os, 'utime'):
        os.utime(dst, (st.st_atime, st.st_mtime))
    if hasattr(os, 'chmod'):
        os.chmod(dst, mode)
    if hasattr(os, 'chflags') and hasattr(st, 'st_flags'):
        try:
            os.chflags(dst, st.st_flags)
        except OSError as why:
            for err in 'EOPNOTSUPP', 'ENOTSUP':
                if hasattr(errno, err) and why.errno == getattr(errno, err):
                    break
            else:
                raise
Source code of shutil.copystat
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.copystat("file_1","file_3") #copy the stat info of the first file onto the second
How to call it

5>. Copy a file together with its permissions

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def copy(src, dst):
    """Copy data and mode bits ("cp src dst").

    The destination may be a directory.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst)
    copymode(src, dst)
Source code of shutil.copy
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.copy("file_1","file_11") #copies the file's contents and its permissions together
How to call it

6>. Copy a file together with its stat info

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def copy2(src, dst):
    """Copy data and all stat info ("cp -p src dst").

    The destination may be a directory.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst)
    copystat(src, dst)
Source code of shutil.copy2
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.copy2("file_1","file_22") #copies the file's contents and its stat info together
How to call it

 7>. Copy files recursively

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def ignore_patterns(*patterns):
    """Function that can be used as copytree() ignore parameter.

    Patterns is a sequence of glob-style patterns
    that are used to exclude files"""
    def _ignore_patterns(path, names):
        ignored_names = []
        for pattern in patterns:
            ignored_names.extend(fnmatch.filter(names, pattern))
        return set(ignored_names)
    return _ignore_patterns

def copytree(src, dst, symlinks=False, ignore=None):
    """Recursively copy a directory tree using copy2().

    The destination directory must not already exist.
    If exception(s) occur, an Error is raised with a list of reasons.

    If the optional symlinks flag is true, symbolic links in the
    source tree result in symbolic links in the destination tree; if
    it is false, the contents of the files pointed to by symbolic
    links are copied.

    The optional ignore argument is a callable. If given, it
    is called with the `src` parameter, which is the directory
    being visited by copytree(), and `names` which is the list of
    `src` contents, as returned by os.listdir():

        callable(src, names) -> ignored_names

    Since copytree() is called recursively, the callable will be
    called once for each directory that is copied. It returns a
    list of names relative to the `src` directory that should
    not be copied.

    XXX Consider this example code rather than the ultimate tool.

    """
    names = os.listdir(src)
    if ignore is not None:
        ignored_names = ignore(src, names)
    else:
        ignored_names = set()

    os.makedirs(dst)
    errors = []
    for name in names:
        if name in ignored_names:
            continue
        srcname = os.path.join(src, name)
        dstname = os.path.join(dst, name)
        try:
            if symlinks and os.path.islink(srcname):
                linkto = os.readlink(srcname)
                os.symlink(linkto, dstname)
            elif os.path.isdir(srcname):
                copytree(srcname, dstname, symlinks, ignore)
            else:
                # Will raise a SpecialFileError for unsupported file types
                copy2(srcname, dstname)
        # catch the Error from the recursive copytree so that we can
        # continue with other files
        except Error as err:
            errors.extend(err.args[0])
        except EnvironmentError as why:
            errors.append((srcname, dstname, str(why)))
    try:
        copystat(src, dst)
    except OSError as why:
        if WindowsError is not None and isinstance(why, WindowsError):
            # Copying file access times may fail on Windows
            pass
        else:
            errors.append((src, dst, str(why)))
    if errors:
        raise Error(errors)
Source code of shutil.ignore_patterns and shutil.copytree
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.copytree(r"D:\python\daima\DAY5",r"D:\python\daima\DAY6\test",ignore=shutil.ignore_patterns("atm","*.log")) #give the source path and the destination path; the ignore function filters out what should not be copied. Here every directory or file in the source matching "atm" or "*.log" is skipped (in other words, the newly copied tree does not contain the filtered-out entries)
How to call it

8>. Delete files recursively

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def rmtree(path, ignore_errors=False, onerror=None):
    """Recursively delete a directory tree.

    If ignore_errors is set, errors are ignored; otherwise, if onerror
    is set, it is called to handle the error with arguments (func,
    path, exc_info) where func is os.listdir, os.remove, or os.rmdir;
    path is the argument to that function that caused it to fail; and
    exc_info is a tuple returned by sys.exc_info().  If ignore_errors
    is false and onerror is None, an exception is raised.

    """
    if ignore_errors:
        def onerror(*args):
            pass
    elif onerror is None:
        def onerror(*args):
            raise
    try:
        if os.path.islink(path):
            # symlinks to directories are forbidden, see bug #1669
            raise OSError("Cannot call rmtree on a symbolic link")
    except OSError:
        onerror(os.path.islink, path, sys.exc_info())
        # can't continue even if onerror hook returns
        return
    names = []
    try:
        names = os.listdir(path)
    except os.error, err:
        onerror(os.listdir, path, sys.exc_info())
    for name in names:
        fullname = os.path.join(path, name)
        try:
            mode = os.lstat(fullname).st_mode
        except os.error:
            mode = 0
        if stat.S_ISDIR(mode):
            rmtree(fullname, ignore_errors, onerror)
        else:
            try:
                os.remove(fullname)
            except os.error, err:
                onerror(os.remove, fullname, sys.exc_info())
    try:
        os.rmdir(path)
    except os.error:
        onerror(os.rmdir, path, sys.exc_info())
Source code of the shutil.rmtree function
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.rmtree(r"D:\python\daima\DAY6\test")
調用方法展現

9>.Recursively moving files

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
def move(src, dst):
    """Recursively move a file or directory to another location. This is
    similar to the Unix "mv" command.

    If the destination is a directory or a symlink to a directory, the source
    is moved inside the directory. The destination path must not already
    exist.

    If the destination already exists but is not a directory, it may be
    overwritten depending on os.rename() semantics.

    If the destination is on our current filesystem, then rename() is used.
    Otherwise, src is copied to the destination and then removed.
    A lot more could be done here...  A look at a mv.c shows a lot of
    the issues this implementation glosses over.

    """
    real_dst = dst
    if os.path.isdir(dst):
        if _samefile(src, dst):
            # We might be on a case insensitive filesystem,
            # perform the rename anyway.
            os.rename(src, dst)
            return

        real_dst = os.path.join(dst, _basename(src))
        if os.path.exists(real_dst):
            raise Error, "Destination path '%s' already exists" % real_dst
    try:
        os.rename(src, real_dst)
    except OSError:
        if os.path.isdir(src):
            if _destinsrc(src, dst):
                raise Error, "Cannot move a directory '%s' into itself '%s'." % (src, dst)
            copytree(src, real_dst, symlinks=True)
            rmtree(src)
        else:
            copy2(src, real_dst)
            os.unlink(src)
Source code of the shutil.move function
#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.move(r"D:\python\daima\DAY5\atm",r"D:\python\daima\DAY6") #The first argument is the source path, the second is the destination
調用方法展現

10>.Create an archive and return the file path, e.g. zip or tar (this method is essentially built on the zipfile and tarfile modules; both are worth reading up on)

#!/usr/bin/env python
#_*_coding:utf-8_*_
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
#EMAIL:y1053419035@qq.com
import shutil
shutil.make_archive("day5","zip",r"D:\python\daima\DAY5") #The first argument is the name of the resulting archive (an absolute path may be given); the second is the archive type: "tar" means archive without compression, while "zip" compresses (zip is used here); the third is the directory to be archived. Owner, group and so on can also be passed.
'''
Notes:
    base_name: name of the archive, optionally with a path. A bare name saves to the current directory, otherwise to the given path,
    e.g. www                       => saved to the current directory
    e.g. /Users/yinzhengjie/www    => saved to /Users/yinzhengjie/
    format:   archive type: "zip", "tar", "bztar", "gztar"
    root_dir: directory to archive (defaults to the current directory)
    owner:    owner, defaults to the current user
    group:    group, defaults to the current group
    logger:   used for logging, usually a logging.Logger object
'''
Example: archiving a directory
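make_archive can be exercised end to end in a scratch directory; the day5/data.txt names below are illustrative only:

```python
import os
import shutil
import tempfile
import zipfile

# Directory to be archived, with one file in it.
root = tempfile.mkdtemp()
open(os.path.join(root, "data.txt"), "w").close()

# base_name without the extension; make_archive returns the final path.
out = os.path.join(tempfile.mkdtemp(), "day5")
archive = shutil.make_archive(out, "zip", root_dir=root)
print(archive.endswith("day5.zip"))  # True

# The result is an ordinary zip file, so zipfile can inspect it.
print(zipfile.ZipFile(archive).namelist())  # ['data.txt']
```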

11>.More on zipfile usage

  1 #!/usr/bin/env python
  2 #_*_coding:utf-8_*_
  3 #@author :yinzhengjie
  4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
  5 #EMAIL:y1053419035@qq.com
  6 class ZipFile(object):
  7     """ Class with methods to open, read, write, close, list zip files.
  8 
  9     z = ZipFile(file, mode="r", compression=ZIP_STORED, allowZip64=False)
 10 
 11     file: Either the path to the file, or a file-like object.
 12           If it is a path, the file will be opened and closed by ZipFile.
 13     mode: The mode can be either read "r", write "w" or append "a".
 14     compression: ZIP_STORED (no compression) or ZIP_DEFLATED (requires zlib).
 15     allowZip64: if True ZipFile will create files with ZIP64 extensions when
 16                 needed, otherwise it will raise an exception when this would
 17                 be necessary.
 18 
 19     """
 20 
 21     fp = None                   # Set here since __del__ checks it
 22 
 23     def __init__(self, file, mode="r", compression=ZIP_STORED, allowZip64=False):
 24         """Open the ZIP file with mode read "r", write "w" or append "a"."""
 25         if mode not in ("r", "w", "a"):
 26             raise RuntimeError('ZipFile() requires mode "r", "w", or "a"')
 27 
 28         if compression == ZIP_STORED:
 29             pass
 30         elif compression == ZIP_DEFLATED:
 31             if not zlib:
 32                 raise RuntimeError,\
 33                       "Compression requires the (missing) zlib module"
 34         else:
 35             raise RuntimeError, "That compression method is not supported"
 36 
 37         self._allowZip64 = allowZip64
 38         self._didModify = False
 39         self.debug = 0  # Level of printing: 0 through 3
 40         self.NameToInfo = {}    # Find file info given name
 41         self.filelist = []      # List of ZipInfo instances for archive
 42         self.compression = compression  # Method of compression
 43         self.mode = key = mode.replace('b', '')[0]
 44         self.pwd = None
 45         self._comment = ''
 46 
 47         # Check if we were passed a file-like object
 48         if isinstance(file, basestring):
 49             self._filePassed = 0
 50             self.filename = file
 51             modeDict = {'r' : 'rb', 'w': 'wb', 'a' : 'r+b'}
 52             try:
 53                 self.fp = open(file, modeDict[mode])
 54             except IOError:
 55                 if mode == 'a':
 56                     mode = key = 'w'
 57                     self.fp = open(file, modeDict[mode])
 58                 else:
 59                     raise
 60         else:
 61             self._filePassed = 1
 62             self.fp = file
 63             self.filename = getattr(file, 'name', None)
 64 
 65         try:
 66             if key == 'r':
 67                 self._RealGetContents()
 68             elif key == 'w':
 69                 # set the modified flag so central directory gets written
 70                 # even if no files are added to the archive
 71                 self._didModify = True
 72             elif key == 'a':
 73                 try:
 74                     # See if file is a zip file
 75                     self._RealGetContents()
 76                     # seek to start of directory and overwrite
 77                     self.fp.seek(self.start_dir, 0)
 78                 except BadZipfile:
 79                     # file is not a zip file, just append
 80                     self.fp.seek(0, 2)
 81 
 82                     # set the modified flag so central directory gets written
 83                     # even if no files are added to the archive
 84                     self._didModify = True
 85             else:
 86                 raise RuntimeError('Mode must be "r", "w" or "a"')
 87         except:
 88             fp = self.fp
 89             self.fp = None
 90             if not self._filePassed:
 91                 fp.close()
 92             raise
 93 
 94     def __enter__(self):
 95         return self
 96 
 97     def __exit__(self, type, value, traceback):
 98         self.close()
 99 
100     def _RealGetContents(self):
101         """Read in the table of contents for the ZIP file."""
102         fp = self.fp
103         try:
104             endrec = _EndRecData(fp)
105         except IOError:
106             raise BadZipfile("File is not a zip file")
107         if not endrec:
108             raise BadZipfile, "File is not a zip file"
109         if self.debug > 1:
110             print endrec
111         size_cd = endrec[_ECD_SIZE]             # bytes in central directory
112         offset_cd = endrec[_ECD_OFFSET]         # offset of central directory
113         self._comment = endrec[_ECD_COMMENT]    # archive comment
114 
115         # "concat" is zero, unless zip was concatenated to another file
116         concat = endrec[_ECD_LOCATION] - size_cd - offset_cd
117         if endrec[_ECD_SIGNATURE] == stringEndArchive64:
118             # If Zip64 extension structures are present, account for them
119             concat -= (sizeEndCentDir64 + sizeEndCentDir64Locator)
120 
121         if self.debug > 2:
122             inferred = concat + offset_cd
123             print "given, inferred, offset", offset_cd, inferred, concat
124         # self.start_dir:  Position of start of central directory
125         self.start_dir = offset_cd + concat
126         fp.seek(self.start_dir, 0)
127         data = fp.read(size_cd)
128         fp = cStringIO.StringIO(data)
129         total = 0
130         while total < size_cd:
131             centdir = fp.read(sizeCentralDir)
132             if len(centdir) != sizeCentralDir:
133                 raise BadZipfile("Truncated central directory")
134             centdir = struct.unpack(structCentralDir, centdir)
135             if centdir[_CD_SIGNATURE] != stringCentralDir:
136                 raise BadZipfile("Bad magic number for central directory")
137             if self.debug > 2:
138                 print centdir
139             filename = fp.read(centdir[_CD_FILENAME_LENGTH])
140             # Create ZipInfo instance to store file information
141             x = ZipInfo(filename)
142             x.extra = fp.read(centdir[_CD_EXTRA_FIELD_LENGTH])
143             x.comment = fp.read(centdir[_CD_COMMENT_LENGTH])
144             x.header_offset = centdir[_CD_LOCAL_HEADER_OFFSET]
145             (x.create_version, x.create_system, x.extract_version, x.reserved,
146                 x.flag_bits, x.compress_type, t, d,
147                 x.CRC, x.compress_size, x.file_size) = centdir[1:12]
148             x.volume, x.internal_attr, x.external_attr = centdir[15:18]
149             # Convert date/time code to (year, month, day, hour, min, sec)
150             x._raw_time = t
151             x.date_time = ( (d>>9)+1980, (d>>5)&0xF, d&0x1F,
152                                      t>>11, (t>>5)&0x3F, (t&0x1F) * 2 )
153 
154             x._decodeExtra()
155             x.header_offset = x.header_offset + concat
156             x.filename = x._decodeFilename()
157             self.filelist.append(x)
158             self.NameToInfo[x.filename] = x
159 
160             # update total bytes read from central directory
161             total = (total + sizeCentralDir + centdir[_CD_FILENAME_LENGTH]
162                      + centdir[_CD_EXTRA_FIELD_LENGTH]
163                      + centdir[_CD_COMMENT_LENGTH])
164 
165             if self.debug > 2:
166                 print "total", total
167 
168 
169     def namelist(self):
170         """Return a list of file names in the archive."""
171         l = []
172         for data in self.filelist:
173             l.append(data.filename)
174         return l
175 
176     def infolist(self):
177         """Return a list of class ZipInfo instances for files in the
178         archive."""
179         return self.filelist
180 
181     def printdir(self):
182         """Print a table of contents for the zip file."""
183         print "%-46s %19s %12s" % ("File Name", "Modified    ", "Size")
184         for zinfo in self.filelist:
185             date = "%d-%02d-%02d %02d:%02d:%02d" % zinfo.date_time[:6]
186             print "%-46s %s %12d" % (zinfo.filename, date, zinfo.file_size)
187 
188     def testzip(self):
189         """Read all the files and check the CRC."""
190         chunk_size = 2 ** 20
191         for zinfo in self.filelist:
192             try:
193                 # Read by chunks, to avoid an OverflowError or a
194                 # MemoryError with very large embedded files.
195                 with self.open(zinfo.filename, "r") as f:
196                     while f.read(chunk_size):     # Check CRC-32
197                         pass
198             except BadZipfile:
199                 return zinfo.filename
200 
201     def getinfo(self, name):
202         """Return the instance of ZipInfo given 'name'."""
203         info = self.NameToInfo.get(name)
204         if info is None:
205             raise KeyError(
206                 'There is no item named %r in the archive' % name)
207 
208         return info
209 
210     def setpassword(self, pwd):
211         """Set default password for encrypted files."""
212         self.pwd = pwd
213 
214     @property
215     def comment(self):
216         """The comment text associated with the ZIP file."""
217         return self._comment
218 
219     @comment.setter
220     def comment(self, comment):
221         # check for valid comment length
222         if len(comment) > ZIP_MAX_COMMENT:
223             import warnings
224             warnings.warn('Archive comment is too long; truncating to %d bytes'
225                           % ZIP_MAX_COMMENT, stacklevel=2)
226             comment = comment[:ZIP_MAX_COMMENT]
227         self._comment = comment
228         self._didModify = True
229 
230     def read(self, name, pwd=None):
231         """Return file bytes (as a string) for name."""
232         return self.open(name, "r", pwd).read()
233 
234     def open(self, name, mode="r", pwd=None):
235         """Return file-like object for 'name'."""
236         if mode not in ("r", "U", "rU"):
237             raise RuntimeError, 'open() requires mode "r", "U", or "rU"'
238         if not self.fp:
239             raise RuntimeError, \
240                   "Attempt to read ZIP archive that was already closed"
241 
242         # Only open a new file for instances where we were not
243         # given a file object in the constructor
244         if self._filePassed:
245             zef_file = self.fp
246             should_close = False
247         else:
248             zef_file = open(self.filename, 'rb')
249             should_close = True
250 
251         try:
252             # Make sure we have an info object
253             if isinstance(name, ZipInfo):
254                 # 'name' is already an info object
255                 zinfo = name
256             else:
257                 # Get info object for name
258                 zinfo = self.getinfo(name)
259 
260             zef_file.seek(zinfo.header_offset, 0)
261 
262             # Skip the file header:
263             fheader = zef_file.read(sizeFileHeader)
264             if len(fheader) != sizeFileHeader:
265                 raise BadZipfile("Truncated file header")
266             fheader = struct.unpack(structFileHeader, fheader)
267             if fheader[_FH_SIGNATURE] != stringFileHeader:
268                 raise BadZipfile("Bad magic number for file header")
269 
270             fname = zef_file.read(fheader[_FH_FILENAME_LENGTH])
271             if fheader[_FH_EXTRA_FIELD_LENGTH]:
272                 zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH])
273 
274             if fname != zinfo.orig_filename:
275                 raise BadZipfile, \
276                         'File name in directory "%s" and header "%s" differ.' % (
277                             zinfo.orig_filename, fname)
278 
279             # check for encrypted flag & handle password
280             is_encrypted = zinfo.flag_bits & 0x1
281             zd = None
282             if is_encrypted:
283                 if not pwd:
284                     pwd = self.pwd
285                 if not pwd:
286                     raise RuntimeError, "File %s is encrypted, " \
287                         "password required for extraction" % name
288 
289                 zd = _ZipDecrypter(pwd)
290                 # The first 12 bytes in the cypher stream is an encryption header
291                 #  used to strengthen the algorithm. The first 11 bytes are
292                 #  completely random, while the 12th contains the MSB of the CRC,
293                 #  or the MSB of the file time depending on the header type
294                 #  and is used to check the correctness of the password.
295                 bytes = zef_file.read(12)
296                 h = map(zd, bytes[0:12])
297                 if zinfo.flag_bits & 0x8:
298                     # compare against the file type from extended local headers
299                     check_byte = (zinfo._raw_time >> 8) & 0xff
300                 else:
301                     # compare against the CRC otherwise
302                     check_byte = (zinfo.CRC >> 24) & 0xff
303                 if ord(h[11]) != check_byte:
304                     raise RuntimeError("Bad password for file", name)
305 
306             return ZipExtFile(zef_file, mode, zinfo, zd,
307                     close_fileobj=should_close)
308         except:
309             if should_close:
310                 zef_file.close()
311             raise
312 
313     def extract(self, member, path=None, pwd=None):
314         """Extract a member from the archive to the current working directory,
315            using its full name. Its file information is extracted as accurately
316            as possible. `member' may be a filename or a ZipInfo object. You can
317            specify a different directory using `path'.
318         """
319         if not isinstance(member, ZipInfo):
320             member = self.getinfo(member)
321 
322         if path is None:
323             path = os.getcwd()
324 
325         return self._extract_member(member, path, pwd)
326 
327     def extractall(self, path=None, members=None, pwd=None):
328         """Extract all members from the archive to the current working
329            directory. `path' specifies a different directory to extract to.
330            `members' is optional and must be a subset of the list returned
331            by namelist().
332         """
333         if members is None:
334             members = self.namelist()
335 
336         for zipinfo in members:
337             self.extract(zipinfo, path, pwd)
338 
339     def _extract_member(self, member, targetpath, pwd):
340         """Extract the ZipInfo object 'member' to a physical
341            file on the path targetpath.
342         """
343         # build the destination pathname, replacing
344         # forward slashes to platform specific separators.
345         arcname = member.filename.replace('/', os.path.sep)
346 
347         if os.path.altsep:
348             arcname = arcname.replace(os.path.altsep, os.path.sep)
349         # interpret absolute pathname as relative, remove drive letter or
350         # UNC path, redundant separators, "." and ".." components.
351         arcname = os.path.splitdrive(arcname)[1]
352         arcname = os.path.sep.join(x for x in arcname.split(os.path.sep)
353                     if x not in ('', os.path.curdir, os.path.pardir))
354         if os.path.sep == '\\':
355             # filter illegal characters on Windows
356             illegal = ':<>|"?*'
357             if isinstance(arcname, unicode):
358                 table = {ord(c): ord('_') for c in illegal}
359             else:
360                 table = string.maketrans(illegal, '_' * len(illegal))
361             arcname = arcname.translate(table)
362             # remove trailing dots
363             arcname = (x.rstrip('.') for x in arcname.split(os.path.sep))
364             arcname = os.path.sep.join(x for x in arcname if x)
365 
366         targetpath = os.path.join(targetpath, arcname)
367         targetpath = os.path.normpath(targetpath)
368 
369         # Create all upper directories if necessary.
370         upperdirs = os.path.dirname(targetpath)
371         if upperdirs and not os.path.exists(upperdirs):
372             os.makedirs(upperdirs)
373 
374         if member.filename[-1] == '/':
375             if not os.path.isdir(targetpath):
376                 os.mkdir(targetpath)
377             return targetpath
378 
379         with self.open(member, pwd=pwd) as source, \
380              file(targetpath, "wb") as target:
381             shutil.copyfileobj(source, target)
382 
383         return targetpath
384 
385     def _writecheck(self, zinfo):
386         """Check for errors before writing a file to the archive."""
387         if zinfo.filename in self.NameToInfo:
388             import warnings
389             warnings.warn('Duplicate name: %r' % zinfo.filename, stacklevel=3)
390         if self.mode not in ("w", "a"):
391             raise RuntimeError, 'write() requires mode "w" or "a"'
392         if not self.fp:
393             raise RuntimeError, \
394                   "Attempt to write ZIP archive that was already closed"
395         if zinfo.compress_type == ZIP_DEFLATED and not zlib:
396             raise RuntimeError, \
397                   "Compression requires the (missing) zlib module"
398         if zinfo.compress_type not in (ZIP_STORED, ZIP_DEFLATED):
399             raise RuntimeError, \
400                   "That compression method is not supported"
401         if not self._allowZip64:
402             requires_zip64 = None
403             if len(self.filelist) >= ZIP_FILECOUNT_LIMIT:
404                 requires_zip64 = "Files count"
405             elif zinfo.file_size > ZIP64_LIMIT:
406                 requires_zip64 = "Filesize"
407             elif zinfo.header_offset > ZIP64_LIMIT:
408                 requires_zip64 = "Zipfile size"
409             if requires_zip64:
410                 raise LargeZipFile(requires_zip64 +
411                                    " would require ZIP64 extensions")
412 
413     def write(self, filename, arcname=None, compress_type=None):
414         """Put the bytes from filename into the archive under the name
415         arcname."""
416         if not self.fp:
417             raise RuntimeError(
418                   "Attempt to write to ZIP archive that was already closed")
419 
420         st = os.stat(filename)
421         isdir = stat.S_ISDIR(st.st_mode)
422         mtime = time.localtime(st.st_mtime)
423         date_time = mtime[0:6]
424         # Create ZipInfo instance to store file information
425         if arcname is None:
426             arcname = filename
427         arcname = os.path.normpath(os.path.splitdrive(arcname)[1])
428         while arcname[0] in (os.sep, os.altsep):
429             arcname = arcname[1:]
430         if isdir:
431             arcname += '/'
432         zinfo = ZipInfo(arcname, date_time)
433         zinfo.external_attr = (st[0] & 0xFFFF) << 16L      # Unix attributes
434         if compress_type is None:
435             zinfo.compress_type = self.compression
436         else:
437             zinfo.compress_type = compress_type
438 
439         zinfo.file_size = st.st_size
440         zinfo.flag_bits = 0x00
441         zinfo.header_offset = self.fp.tell()    # Start of header bytes
442 
443         self._writecheck(zinfo)
444         self._didModify = True
445 
446         if isdir:
447             zinfo.file_size = 0
448             zinfo.compress_size = 0
449             zinfo.CRC = 0
450             zinfo.external_attr |= 0x10  # MS-DOS directory flag
451             self.filelist.append(zinfo)
452             self.NameToInfo[zinfo.filename] = zinfo
453             self.fp.write(zinfo.FileHeader(False))
454             return
455 
456         with open(filename, "rb") as fp:
457             # Must overwrite CRC and sizes with correct data later
458             zinfo.CRC = CRC = 0
459             zinfo.compress_size = compress_size = 0
460             # Compressed size can be larger than uncompressed size
461             zip64 = self._allowZip64 and \
462                     zinfo.file_size * 1.05 > ZIP64_LIMIT
463             self.fp.write(zinfo.FileHeader(zip64))
464             if zinfo.compress_type == ZIP_DEFLATED:
465                 cmpr = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION,
466                      zlib.DEFLATED, -15)
467             else:
468                 cmpr = None
469             file_size = 0
470             while 1:
471                 buf = fp.read(1024 * 8)
472                 if not buf:
473                     break
474                 file_size = file_size + len(buf)
475                 CRC = crc32(buf, CRC) & 0xffffffff
476                 if cmpr:
477                     buf = cmpr.compress(buf)
478                     compress_size = compress_size + len(buf)
479                 self.fp.write(buf)
480         if cmpr:
481             buf = cmpr.flush()
482             compress_size = compress_size + len(buf)
483             self.fp.write(buf)
484             zinfo.compress_size = compress_size
485         else:
486             zinfo.compress_size = file_size
487         zinfo.CRC = CRC
488         zinfo.file_size = file_size
489         if not zip64 and self._allowZip64:
490             if file_size > ZIP64_LIMIT:
491                 raise RuntimeError('File size has increased during compressing')
492             if compress_size > ZIP64_LIMIT:
493                 raise RuntimeError('Compressed size larger than uncompressed size')
494         # Seek backwards and write file header (which will now include
495         # correct CRC and file sizes)
496         position = self.fp.tell()       # Preserve current position in file
497         self.fp.seek(zinfo.header_offset, 0)
498         self.fp.write(zinfo.FileHeader(zip64))
499         self.fp.seek(position, 0)
500         self.filelist.append(zinfo)
501         self.NameToInfo[zinfo.filename] = zinfo
502 
503     def writestr(self, zinfo_or_arcname, bytes, compress_type=None):
504         """Write a file into the archive.  The contents is the string
505         'bytes'.  'zinfo_or_arcname' is either a ZipInfo instance or
506         the name of the file in the archive."""
507         if not isinstance(zinfo_or_arcname, ZipInfo):
508             zinfo = ZipInfo(filename=zinfo_or_arcname,
509                             date_time=time.localtime(time.time())[:6])
510 
511             zinfo.compress_type = self.compression
512             if zinfo.filename[-1] == '/':
513                 zinfo.external_attr = 0o40775 << 16   # drwxrwxr-x
514                 zinfo.external_attr |= 0x10           # MS-DOS directory flag
515             else:
516                 zinfo.external_attr = 0o600 << 16     # ?rw-------
517         else:
518             zinfo = zinfo_or_arcname
519 
520         if not self.fp:
521             raise RuntimeError(
522                   "Attempt to write to ZIP archive that was already closed")
523 
524         if compress_type is not None:
525             zinfo.compress_type = compress_type
526 
527         zinfo.file_size = len(bytes)            # Uncompressed size
528         zinfo.header_offset = self.fp.tell()    # Start of header bytes
529         self._writecheck(zinfo)
530         self._didModify = True
531         zinfo.CRC = crc32(bytes) & 0xffffffff       # CRC-32 checksum
532         if zinfo.compress_type == ZIP_DEFLATED:
533             co = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION,
534                  zlib.DEFLATED, -15)
535             bytes = co.compress(bytes) + co.flush()
536             zinfo.compress_size = len(bytes)    # Compressed size
537         else:
538             zinfo.compress_size = zinfo.file_size
539         zip64 = zinfo.file_size > ZIP64_LIMIT or \
540                 zinfo.compress_size > ZIP64_LIMIT
541         if zip64 and not self._allowZip64:
542             raise LargeZipFile("Filesize would require ZIP64 extensions")
543         self.fp.write(zinfo.FileHeader(zip64))
544         self.fp.write(bytes)
545         if zinfo.flag_bits & 0x08:
546             # Write CRC and file sizes after the file data
547             fmt = '<LQQ' if zip64 else '<LLL'
548             self.fp.write(struct.pack(fmt, zinfo.CRC, zinfo.compress_size,
549                   zinfo.file_size))
550         self.fp.flush()
551         self.filelist.append(zinfo)
552         self.NameToInfo[zinfo.filename] = zinfo
553 
554     def __del__(self):
555         """Call the "close()" method in case the user forgot."""
556         self.close()
557 
558     def close(self):
559         """Close the file, and for mode "w" and "a" write the ending
560         records."""
561         if self.fp is None:
562             return
563 
564         try:
565             if self.mode in ("w", "a") and self._didModify: # write ending records
566                 pos1 = self.fp.tell()
567                 for zinfo in self.filelist:         # write central directory
568                     dt = zinfo.date_time
569                     dosdate = (dt[0] - 1980) << 9 | dt[1] << 5 | dt[2]
570                     dostime = dt[3] << 11 | dt[4] << 5 | (dt[5] // 2)
571                     extra = []
572                     if zinfo.file_size > ZIP64_LIMIT \
573                             or zinfo.compress_size > ZIP64_LIMIT:
574                         extra.append(zinfo.file_size)
575                         extra.append(zinfo.compress_size)
576                         file_size = 0xffffffff
577                         compress_size = 0xffffffff
578                     else:
579                         file_size = zinfo.file_size
580                         compress_size = zinfo.compress_size
581 
582                     if zinfo.header_offset > ZIP64_LIMIT:
583                         extra.append(zinfo.header_offset)
584                         header_offset = 0xffffffffL
585                     else:
586                         header_offset = zinfo.header_offset
587 
588                     extra_data = zinfo.extra
589                     if extra:
590                         # Append a ZIP64 field to the extra's
591                         extra_data = struct.pack(
592                                 '<HH' + 'Q'*len(extra),
593                                 1, 8*len(extra), *extra) + extra_data
594 
595                         extract_version = max(45, zinfo.extract_version)
596                         create_version = max(45, zinfo.create_version)
597                     else:
598                         extract_version = zinfo.extract_version
599                         create_version = zinfo.create_version
600 
601                     try:
602                         filename, flag_bits = zinfo._encodeFilenameFlags()
603                         centdir = struct.pack(structCentralDir,
604                         stringCentralDir, create_version,
605                         zinfo.create_system, extract_version, zinfo.reserved,
606                         flag_bits, zinfo.compress_type, dostime, dosdate,
607                         zinfo.CRC, compress_size, file_size,
608                         len(filename), len(extra_data), len(zinfo.comment),
609                         0, zinfo.internal_attr, zinfo.external_attr,
610                         header_offset)
611                     except DeprecationWarning:
612                         print >>sys.stderr, (structCentralDir,
613                         stringCentralDir, create_version,
614                         zinfo.create_system, extract_version, zinfo.reserved,
615                         zinfo.flag_bits, zinfo.compress_type, dostime, dosdate,
616                         zinfo.CRC, compress_size, file_size,
617                         len(zinfo.filename), len(extra_data), len(zinfo.comment),
618                         0, zinfo.internal_attr, zinfo.external_attr,
619                         header_offset)
620                         raise
621                     self.fp.write(centdir)
622                     self.fp.write(filename)
623                     self.fp.write(extra_data)
624                     self.fp.write(zinfo.comment)
625 
626                 pos2 = self.fp.tell()
627                 # Write end-of-zip-archive record
628                 centDirCount = len(self.filelist)
629                 centDirSize = pos2 - pos1
630                 centDirOffset = pos1
631                 requires_zip64 = None
632                 if centDirCount > ZIP_FILECOUNT_LIMIT:
633                     requires_zip64 = "Files count"
634                 elif centDirOffset > ZIP64_LIMIT:
635                     requires_zip64 = "Central directory offset"
636                 elif centDirSize > ZIP64_LIMIT:
637                     requires_zip64 = "Central directory size"
638                 if requires_zip64:
639                     # Need to write the ZIP64 end-of-archive records
640                     if not self._allowZip64:
641                         raise LargeZipFile(requires_zip64 +
642                                            " would require ZIP64 extensions")
643                     zip64endrec = struct.pack(
644                             structEndArchive64, stringEndArchive64,
645                             44, 45, 45, 0, 0, centDirCount, centDirCount,
646                             centDirSize, centDirOffset)
647                     self.fp.write(zip64endrec)
648 
649                     zip64locrec = struct.pack(
650                             structEndArchive64Locator,
651                             stringEndArchive64Locator, 0, pos2, 1)
652                     self.fp.write(zip64locrec)
653                     centDirCount = min(centDirCount, 0xFFFF)
654                     centDirSize = min(centDirSize, 0xFFFFFFFF)
655                     centDirOffset = min(centDirOffset, 0xFFFFFFFF)
656 
657                 endrec = struct.pack(structEndArchive, stringEndArchive,
658                                     0, 0, centDirCount, centDirCount,
659                                     centDirSize, centDirOffset, len(self._comment))
660                 self.fp.write(endrec)
661                 self.fp.write(self._comment)
662                 self.fp.flush()
663         finally:
664             fp = self.fp
665             self.fp = None
666             if not self._filePassed:
667                 fp.close()
zipfile source code
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import zipfile
 7 z = zipfile.ZipFile("ziptest.zip","w")  #Create an archive named "ziptest.zip" in write mode
 8 z.write(r"D:\python\daima\DAY5\README",arcname="README") #Add the file to "ziptest.zip". arcname stores the member under just this name; without it, the file's absolute path would be recorded inside the archive as well
 9 z.write(r"D:\python\daima\DAY3\modify.txt",arcname="modify.txt") #Same as above
10 z.write("day5.zip") #Add a file from the current working directory to "ziptest.zip"
11 z.close()  #Close the archive; only at this point is the central directory written and the zip file complete
zipfile compression usage
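One detail worth knowing on top of the example above: `ZipFile` defaults to `ZIP_STORED`, i.e. members are archived *uncompressed* unless you ask for deflation. A minimal sketch (file names and the temp directory are made up for illustration):

```python
import os
import tempfile
import zipfile

# ZipFile stores members uncompressed (ZIP_STORED) by default;
# pass compression=zipfile.ZIP_DEFLATED to actually compress them.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "data.txt")
with open(src, "w") as f:
    f.write("hello " * 1000)          # highly repetitive, so it deflates well

archive = os.path.join(tmpdir, "deflated.zip")
with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as z:
    z.write(src, arcname="data.txt")

# getinfo() returns the member's metadata, including both sizes
info = zipfile.ZipFile(archive).getinfo("data.txt")
print(info.compress_size < info.file_size)   # True - the member really shrank
```

Without the `compression` argument, `compress_size` would equal `file_size` and the "zip" would merely be a container.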
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import zipfile
 7 z = zipfile.ZipFile("ziptest.zip","r")
 8 z.extract("README") #Extract the single member named "README"
 9 z.extractall(path=r"D:\python\daima\DAY5\test_1") #Extract every member of the archive; path is the destination directory
10 z.extractall(members=["modify.txt"])  #Extract only the members in the list (not "everything except" them). Note that names must match exactly; fuzzy matching is not supported
11 z.close()
zipfile extraction usage
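Before calling `extract()`/`extractall()` it is often useful to inspect what an archive contains; `namelist()`, `infolist()` and `read()` cover that without touching the filesystem. A self-contained sketch using an in-memory archive (the member names are invented for the example):

```python
import io
import zipfile

# Build a small in-memory archive so the sketch needs no files on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("README", "first file")
    z.writestr("modify.txt", "second file")

with zipfile.ZipFile(buf, "r") as z:
    print(z.namelist())                  # ['README', 'modify.txt']
    for info in z.infolist():            # per-member metadata (ZipInfo objects)
        print(info.filename, info.file_size)
    print(z.read("README"))              # b'first file' - read a member without extracting
```

`read()` returns the member's bytes directly, which is handy when you only need the content and not a file on disk.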

12>.More on tarfile usage
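Before diving into the `TarFile` source below, here is a minimal usage sketch of the public API it implements: creating a gzip-compressed archive and extracting it again. The file names and temp directory are made up for illustration:

```python
import os
import tarfile
import tempfile

tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "notes.txt")
with open(src, "w") as f:
    f.write("tar me")

# "w:gz" creates a gzip-compressed archive; plain "w" would write an
# uncompressed tar. arcname drops the absolute path, just like in zipfile.
archive = os.path.join(tmpdir, "notes.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(src, arcname="notes.txt")

# "r:*" autodetects the compression when reading.
dest = os.path.join(tmpdir, "out")
with tarfile.open(archive, "r:*") as tar:
    print(tar.getnames())                # ['notes.txt']
    tar.extractall(path=dest)

print(open(os.path.join(dest, "notes.txt")).read())  # tar me
```

The `"mode:compression"` strings accepted by `tarfile.open()` are documented in the `open()` classmethod's docstring in the source that follows.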

  1 #!/usr/bin/env python
  2 #_*_coding:utf-8_*_
  3 #@author :yinzhengjie
  4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
  5 #EMAIL:y1053419035@qq.com
  6 class TarFile(object):
  7     """The TarFile Class provides an interface to tar archives.
  8     """
  9 
 10     debug = 0                   # May be set from 0 (no msgs) to 3 (all msgs)
 11 
 12     dereference = False         # If true, add content of linked file to the
 13                                 # tar file, else the link.
 14 
 15     ignore_zeros = False        # If true, skips empty or invalid blocks and
 16                                 # continues processing.
 17 
 18     errorlevel = 1              # If 0, fatal errors only appear in debug
 19                                 # messages (if debug >= 0). If > 0, errors
 20                                 # are passed to the caller as exceptions.
 21 
 22     format = DEFAULT_FORMAT     # The format to use when creating an archive.
 23 
 24     encoding = ENCODING         # Encoding for 8-bit character strings.
 25 
 26     errors = None               # Error handler for unicode conversion.
 27 
 28     tarinfo = TarInfo           # The default TarInfo class to use.
 29 
 30     fileobject = ExFileObject   # The default ExFileObject class to use.
 31 
 32     def __init__(self, name=None, mode="r", fileobj=None, format=None,
 33             tarinfo=None, dereference=None, ignore_zeros=None, encoding=None,
 34             errors=None, pax_headers=None, debug=None, errorlevel=None):
 35         """Open an (uncompressed) tar archive `name'. `mode' is either 'r' to
 36            read from an existing archive, 'a' to append data to an existing
 37            file or 'w' to create a new file overwriting an existing one. `mode'
 38            defaults to 'r'.
 39            If `fileobj' is given, it is used for reading or writing data. If it
 40            can be determined, `mode' is overridden by `fileobj's mode.
 41            `fileobj' is not closed, when TarFile is closed.
 42         """
 43         modes = {"r": "rb", "a": "r+b", "w": "wb"}
 44         if mode not in modes:
 45             raise ValueError("mode must be 'r', 'a' or 'w'")
 46         self.mode = mode
 47         self._mode = modes[mode]
 48 
 49         if not fileobj:
 50             if self.mode == "a" and not os.path.exists(name):
 51                 # Create nonexistent files in append mode.
 52                 self.mode = "w"
 53                 self._mode = "wb"
 54             fileobj = bltn_open(name, self._mode)
 55             self._extfileobj = False
 56         else:
 57             if name is None and hasattr(fileobj, "name"):
 58                 name = fileobj.name
 59             if hasattr(fileobj, "mode"):
 60                 self._mode = fileobj.mode
 61             self._extfileobj = True
 62         self.name = os.path.abspath(name) if name else None
 63         self.fileobj = fileobj
 64 
 65         # Init attributes.
 66         if format is not None:
 67             self.format = format
 68         if tarinfo is not None:
 69             self.tarinfo = tarinfo
 70         if dereference is not None:
 71             self.dereference = dereference
 72         if ignore_zeros is not None:
 73             self.ignore_zeros = ignore_zeros
 74         if encoding is not None:
 75             self.encoding = encoding
 76 
 77         if errors is not None:
 78             self.errors = errors
 79         elif mode == "r":
 80             self.errors = "utf-8"
 81         else:
 82             self.errors = "strict"
 83 
 84         if pax_headers is not None and self.format == PAX_FORMAT:
 85             self.pax_headers = pax_headers
 86         else:
 87             self.pax_headers = {}
 88 
 89         if debug is not None:
 90             self.debug = debug
 91         if errorlevel is not None:
 92             self.errorlevel = errorlevel
 93 
 94         # Init datastructures.
 95         self.closed = False
 96         self.members = []       # list of members as TarInfo objects
 97         self._loaded = False    # flag if all members have been read
 98         self.offset = self.fileobj.tell()
 99                                 # current position in the archive file
100         self.inodes = {}        # dictionary caching the inodes of
101                                 # archive members already added
102 
103         try:
104             if self.mode == "r":
105                 self.firstmember = None
106                 self.firstmember = self.next()
107 
108             if self.mode == "a":
109                 # Move to the end of the archive,
110                 # before the first empty block.
111                 while True:
112                     self.fileobj.seek(self.offset)
113                     try:
114                         tarinfo = self.tarinfo.fromtarfile(self)
115                         self.members.append(tarinfo)
116                     except EOFHeaderError:
117                         self.fileobj.seek(self.offset)
118                         break
119                     except HeaderError, e:
120                         raise ReadError(str(e))
121 
122             if self.mode in "aw":
123                 self._loaded = True
124 
125                 if self.pax_headers:
126                     buf = self.tarinfo.create_pax_global_header(self.pax_headers.copy())
127                     self.fileobj.write(buf)
128                     self.offset += len(buf)
129         except:
130             if not self._extfileobj:
131                 self.fileobj.close()
132             self.closed = True
133             raise
134 
135     def _getposix(self):
136         return self.format == USTAR_FORMAT
137     def _setposix(self, value):
138         import warnings
139         warnings.warn("use the format attribute instead", DeprecationWarning,
140                       2)
141         if value:
142             self.format = USTAR_FORMAT
143         else:
144             self.format = GNU_FORMAT
145     posix = property(_getposix, _setposix)
146 
147     #--------------------------------------------------------------------------
148     # Below are the classmethods which act as alternate constructors to the
149     # TarFile class. The open() method is the only one that is needed for
150     # public use; it is the "super"-constructor and is able to select an
151     # adequate "sub"-constructor for a particular compression using the mapping
152     # from OPEN_METH.
153     #
154     # This concept allows one to subclass TarFile without losing the comfort of
155     # the super-constructor. A sub-constructor is registered and made available
156     # by adding it to the mapping in OPEN_METH.
157 
158     @classmethod
159     def open(cls, name=None, mode="r", fileobj=None, bufsize=RECORDSIZE, **kwargs):
160         """Open a tar archive for reading, writing or appending. Return
161            an appropriate TarFile class.
162 
163            mode:
164            'r' or 'r:*' open for reading with transparent compression
165            'r:'         open for reading exclusively uncompressed
166            'r:gz'       open for reading with gzip compression
167            'r:bz2'      open for reading with bzip2 compression
168            'a' or 'a:'  open for appending, creating the file if necessary
169            'w' or 'w:'  open for writing without compression
170            'w:gz'       open for writing with gzip compression
171            'w:bz2'      open for writing with bzip2 compression
172 
173            'r|*'        open a stream of tar blocks with transparent compression
174            'r|'         open an uncompressed stream of tar blocks for reading
175            'r|gz'       open a gzip compressed stream of tar blocks
176            'r|bz2'      open a bzip2 compressed stream of tar blocks
177            'w|'         open an uncompressed stream for writing
178            'w|gz'       open a gzip compressed stream for writing
179            'w|bz2'      open a bzip2 compressed stream for writing
180         """
181 
182         if not name and not fileobj:
183             raise ValueError("nothing to open")
184 
185         if mode in ("r", "r:*"):
186             # Find out which *open() is appropriate for opening the file.
187             for comptype in cls.OPEN_METH:
188                 func = getattr(cls, cls.OPEN_METH[comptype])
189                 if fileobj is not None:
190                     saved_pos = fileobj.tell()
191                 try:
192                     return func(name, "r", fileobj, **kwargs)
193                 except (ReadError, CompressionError), e:
194                     if fileobj is not None:
195                         fileobj.seek(saved_pos)
196                     continue
197             raise ReadError("file could not be opened successfully")
198 
199         elif ":" in mode:
200             filemode, comptype = mode.split(":", 1)
201             filemode = filemode or "r"
202             comptype = comptype or "tar"
203 
204             # Select the *open() function according to
205             # given compression.
206             if comptype in cls.OPEN_METH:
207                 func = getattr(cls, cls.OPEN_METH[comptype])
208             else:
209                 raise CompressionError("unknown compression type %r" % comptype)
210             return func(name, filemode, fileobj, **kwargs)
211 
212         elif "|" in mode:
213             filemode, comptype = mode.split("|", 1)
214             filemode = filemode or "r"
215             comptype = comptype or "tar"
216 
217             if filemode not in ("r", "w"):
218                 raise ValueError("mode must be 'r' or 'w'")
219 
220             stream = _Stream(name, filemode, comptype, fileobj, bufsize)
221             try:
222                 t = cls(name, filemode, stream, **kwargs)
223             except:
224                 stream.close()
225                 raise
226             t._extfileobj = False
227             return t
228 
229         elif mode in ("a", "w"):
230             return cls.taropen(name, mode, fileobj, **kwargs)
231 
232         raise ValueError("undiscernible mode")
233 
234     @classmethod
235     def taropen(cls, name, mode="r", fileobj=None, **kwargs):
236         """Open uncompressed tar archive name for reading or writing.
237         """
238         if mode not in ("r", "a", "w"):
239             raise ValueError("mode must be 'r', 'a' or 'w'")
240         return cls(name, mode, fileobj, **kwargs)
241 
242     @classmethod
243     def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs):
244         """Open gzip compressed tar archive name for reading or writing.
245            Appending is not allowed.
246         """
247         if mode not in ("r", "w"):
248             raise ValueError("mode must be 'r' or 'w'")
249 
250         try:
251             import gzip
252             gzip.GzipFile
253         except (ImportError, AttributeError):
254             raise CompressionError("gzip module is not available")
255 
256         try:
257             fileobj = gzip.GzipFile(name, mode, compresslevel, fileobj)
258         except OSError:
259             if fileobj is not None and mode == 'r':
260                 raise ReadError("not a gzip file")
261             raise
262 
263         try:
264             t = cls.taropen(name, mode, fileobj, **kwargs)
265         except IOError:
266             fileobj.close()
267             if mode == 'r':
268                 raise ReadError("not a gzip file")
269             raise
270         except:
271             fileobj.close()
272             raise
273         t._extfileobj = False
274         return t
275 
276     @classmethod
277     def bz2open(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs):
278         """Open bzip2 compressed tar archive name for reading or writing.
279            Appending is not allowed.
280         """
281         if mode not in ("r", "w"):
282             raise ValueError("mode must be 'r' or 'w'.")
283 
284         try:
285             import bz2
286         except ImportError:
287             raise CompressionError("bz2 module is not available")
288 
289         if fileobj is not None:
290             fileobj = _BZ2Proxy(fileobj, mode)
291         else:
292             fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel)
293 
294         try:
295             t = cls.taropen(name, mode, fileobj, **kwargs)
296         except (IOError, EOFError):
297             fileobj.close()
298             if mode == 'r':
299                 raise ReadError("not a bzip2 file")
300             raise
301         except:
302             fileobj.close()
303             raise
304         t._extfileobj = False
305         return t
306 
307     # All *open() methods are registered here.
308     OPEN_METH = {
309         "tar": "taropen",   # uncompressed tar
310         "gz":  "gzopen",    # gzip compressed tar
311         "bz2": "bz2open"    # bzip2 compressed tar
312     }
313 
314     #--------------------------------------------------------------------------
315     # The public methods which TarFile provides:
316 
317     def close(self):
318         """Close the TarFile. In write-mode, two finishing zero blocks are
319            appended to the archive.
320         """
321         if self.closed:
322             return
323 
324         if self.mode in "aw":
325             self.fileobj.write(NUL * (BLOCKSIZE * 2))
326             self.offset += (BLOCKSIZE * 2)
327             # fill up the end with zero-blocks
328             # (like option -b20 for tar does)
329             blocks, remainder = divmod(self.offset, RECORDSIZE)
330             if remainder > 0:
331                 self.fileobj.write(NUL * (RECORDSIZE - remainder))
332 
333         if not self._extfileobj:
334             self.fileobj.close()
335         self.closed = True
336 
337     def getmember(self, name):
338         """Return a TarInfo object for member `name'. If `name' can not be
339            found in the archive, KeyError is raised. If a member occurs more
340            than once in the archive, its last occurrence is assumed to be the
341            most up-to-date version.
342         """
343         tarinfo = self._getmember(name)
344         if tarinfo is None:
345             raise KeyError("filename %r not found" % name)
346         return tarinfo
347 
348     def getmembers(self):
349         """Return the members of the archive as a list of TarInfo objects. The
350            list has the same order as the members in the archive.
351         """
352         self._check()
353         if not self._loaded:    # if we want to obtain a list of
354             self._load()        # all members, we first have to
355                                 # scan the whole archive.
356         return self.members
357 
358     def getnames(self):
359         """Return the members of the archive as a list of their names. It has
360            the same order as the list returned by getmembers().
361         """
362         return [tarinfo.name for tarinfo in self.getmembers()]
363 
364     def gettarinfo(self, name=None, arcname=None, fileobj=None):
365         """Create a TarInfo object for either the file `name' or the file
366            object `fileobj' (using os.fstat on its file descriptor). You can
367            modify some of the TarInfo's attributes before you add it using
368            addfile(). If given, `arcname' specifies an alternative name for the
369            file in the archive.
370         """
371         self._check("aw")
372 
373         # When fileobj is given, replace name by
374         # fileobj's real name.
375         if fileobj is not None:
376             name = fileobj.name
377 
378         # Building the name of the member in the archive.
379         # Backward slashes are converted to forward slashes,
380         # Absolute paths are turned to relative paths.
381         if arcname is None:
382             arcname = name
383         drv, arcname = os.path.splitdrive(arcname)
384         arcname = arcname.replace(os.sep, "/")
385         arcname = arcname.lstrip("/")
386 
387         # Now, fill the TarInfo object with
388         # information specific for the file.
389         tarinfo = self.tarinfo()
390         tarinfo.tarfile = self
391 
392         # Use os.stat or os.lstat, depending on platform
393         # and if symlinks shall be resolved.
394         if fileobj is None:
395             if hasattr(os, "lstat") and not self.dereference:
396                 statres = os.lstat(name)
397             else:
398                 statres = os.stat(name)
399         else:
400             statres = os.fstat(fileobj.fileno())
401         linkname = ""
402 
403         stmd = statres.st_mode
404         if stat.S_ISREG(stmd):
405             inode = (statres.st_ino, statres.st_dev)
406             if not self.dereference and statres.st_nlink > 1 and \
407                     inode in self.inodes and arcname != self.inodes[inode]:
408                 # Is it a hardlink to an already
409                 # archived file?
410                 type = LNKTYPE
411                 linkname = self.inodes[inode]
412             else:
413                 # The inode is added only if its valid.
414                 # For win32 it is always 0.
415                 type = REGTYPE
416                 if inode[0]:
417                     self.inodes[inode] = arcname
418         elif stat.S_ISDIR(stmd):
419             type = DIRTYPE
420         elif stat.S_ISFIFO(stmd):
421             type = FIFOTYPE
422         elif stat.S_ISLNK(stmd):
423             type = SYMTYPE
424             linkname = os.readlink(name)
425         elif stat.S_ISCHR(stmd):
426             type = CHRTYPE
427         elif stat.S_ISBLK(stmd):
428             type = BLKTYPE
429         else:
430             return None
431 
432         # Fill the TarInfo object with all
433         # information we can get.
434         tarinfo.name = arcname
435         tarinfo.mode = stmd
436         tarinfo.uid = statres.st_uid
437         tarinfo.gid = statres.st_gid
438         if type == REGTYPE:
439             tarinfo.size = statres.st_size
440         else:
441             tarinfo.size = 0L
442         tarinfo.mtime = statres.st_mtime
443         tarinfo.type = type
444         tarinfo.linkname = linkname
445         if pwd:
446             try:
447                 tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0]
448             except KeyError:
449                 pass
450         if grp:
451             try:
452                 tarinfo.gname = grp.getgrgid(tarinfo.gid)[0]
453             except KeyError:
454                 pass
455 
456         if type in (CHRTYPE, BLKTYPE):
457             if hasattr(os, "major") and hasattr(os, "minor"):
458                 tarinfo.devmajor = os.major(statres.st_rdev)
459                 tarinfo.devminor = os.minor(statres.st_rdev)
460         return tarinfo
461 
462     def list(self, verbose=True):
463         """Print a table of contents to sys.stdout. If `verbose' is False, only
464            the names of the members are printed. If it is True, an `ls -l'-like
465            output is produced.
466         """
467         self._check()
468 
469         for tarinfo in self:
470             if verbose:
471                 print filemode(tarinfo.mode),
472                 print "%s/%s" % (tarinfo.uname or tarinfo.uid,
473                                  tarinfo.gname or tarinfo.gid),
474                 if tarinfo.ischr() or tarinfo.isblk():
475                     print "%10s" % ("%d,%d" \
476                                     % (tarinfo.devmajor, tarinfo.devminor)),
477                 else:
478                     print "%10d" % tarinfo.size,
479                 print "%d-%02d-%02d %02d:%02d:%02d" \
480                       % time.localtime(tarinfo.mtime)[:6],
481 
482             print tarinfo.name + ("/" if tarinfo.isdir() else ""),
483 
484             if verbose:
485                 if tarinfo.issym():
486                     print "->", tarinfo.linkname,
487                 if tarinfo.islnk():
488                     print "link to", tarinfo.linkname,
489             print
490 
491     def add(self, name, arcname=None, recursive=True, exclude=None, filter=None):
492         """Add the file `name' to the archive. `name' may be any type of file
493            (directory, fifo, symbolic link, etc.). If given, `arcname'
494            specifies an alternative name for the file in the archive.
495            Directories are added recursively by default. This can be avoided by
496            setting `recursive' to False. `exclude' is a function that should
497            return True for each filename to be excluded. `filter' is a function
498            that expects a TarInfo object argument and returns the changed
499            TarInfo object, if it returns None the TarInfo object will be
500            excluded from the archive.
501         """
502         self._check("aw")
503 
504         if arcname is None:
505             arcname = name
506 
507         # Exclude pathnames.
508         if exclude is not None:
509             import warnings
510             warnings.warn("use the filter argument instead",
511                     DeprecationWarning, 2)
512             if exclude(name):
513                 self._dbg(2, "tarfile: Excluded %r" % name)
514                 return
515 
516         # Skip if somebody tries to archive the archive...
517         if self.name is not None and os.path.abspath(name) == self.name:
518             self._dbg(2, "tarfile: Skipped %r" % name)
519             return
520 
521         self._dbg(1, name)
522 
523         # Create a TarInfo object from the file.
524         tarinfo = self.gettarinfo(name, arcname)
525 
526         if tarinfo is None:
527             self._dbg(1, "tarfile: Unsupported type %r" % name)
528             return
529 
530         # Change or exclude the TarInfo object.
531         if filter is not None:
532             tarinfo = filter(tarinfo)
533             if tarinfo is None:
534                 self._dbg(2, "tarfile: Excluded %r" % name)
535                 return
536 
537         # Append the tar header and data to the archive.
538         if tarinfo.isreg():
539             with bltn_open(name, "rb") as f:
540                 self.addfile(tarinfo, f)
541 
542         elif tarinfo.isdir():
543             self.addfile(tarinfo)
544             if recursive:
545                 for f in os.listdir(name):
546                     self.add(os.path.join(name, f), os.path.join(arcname, f),
547                             recursive, exclude, filter)
548 
549         else:
550             self.addfile(tarinfo)
551 
552     def addfile(self, tarinfo, fileobj=None):
553         """Add the TarInfo object `tarinfo' to the archive. If `fileobj' is
554            given, tarinfo.size bytes are read from it and added to the archive.
555            You can create TarInfo objects using gettarinfo().
556            On Windows platforms, `fileobj' should always be opened with mode
557            'rb' to avoid irritation about the file size.
558         """
559         self._check("aw")
560 
561         tarinfo = copy.copy(tarinfo)
562 
563         buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
564         self.fileobj.write(buf)
565         self.offset += len(buf)
566 
567         # If there's data to follow, append it.
568         if fileobj is not None:
569             copyfileobj(fileobj, self.fileobj, tarinfo.size)
570             blocks, remainder = divmod(tarinfo.size, BLOCKSIZE)
571             if remainder > 0:
572                 self.fileobj.write(NUL * (BLOCKSIZE - remainder))
573                 blocks += 1
574             self.offset += blocks * BLOCKSIZE
575 
576         self.members.append(tarinfo)
577 
578     def extractall(self, path=".", members=None):
579         """Extract all members from the archive to the current working
580            directory and set owner, modification time and permissions on
581            directories afterwards. `path' specifies a different directory
582            to extract to. `members' is optional and must be a subset of the
583            list returned by getmembers().
584         """
585         directories = []
586 
587         if members is None:
588             members = self
589 
590         for tarinfo in members:
591             if tarinfo.isdir():
592                 # Extract directories with a safe mode.
593                 directories.append(tarinfo)
594                 tarinfo = copy.copy(tarinfo)
595                 tarinfo.mode = 0700
596             self.extract(tarinfo, path)
597 
598         # Reverse sort directories.
599         directories.sort(key=operator.attrgetter('name'))
600         directories.reverse()
601 
602         # Set correct owner, mtime and filemode on directories.
603         for tarinfo in directories:
604             dirpath = os.path.join(path, tarinfo.name)
605             try:
606                 self.chown(tarinfo, dirpath)
607                 self.utime(tarinfo, dirpath)
608                 self.chmod(tarinfo, dirpath)
609             except ExtractError, e:
610                 if self.errorlevel > 1:
611                     raise
612                 else:
613                     self._dbg(1, "tarfile: %s" % e)
614 
615     def extract(self, member, path=""):
616         """Extract a member from the archive to the current working directory,
617            using its full name. Its file information is extracted as accurately
618            as possible. `member' may be a filename or a TarInfo object. You can
619            specify a different directory using `path'.
620         """
621         self._check("r")
622 
623         if isinstance(member, basestring):
624             tarinfo = self.getmember(member)
625         else:
626             tarinfo = member
627 
628         # Prepare the link target for makelink().
629         if tarinfo.islnk():
630             tarinfo._link_target = os.path.join(path, tarinfo.linkname)
631 
632         try:
633             self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
634         except EnvironmentError, e:
635             if self.errorlevel > 0:
636                 raise
637             else:
638                 if e.filename is None:
639                     self._dbg(1, "tarfile: %s" % e.strerror)
640                 else:
641                     self._dbg(1, "tarfile: %s %r" % (e.strerror, e.filename))
642         except ExtractError, e:
643             if self.errorlevel > 1:
644                 raise
645             else:
646                 self._dbg(1, "tarfile: %s" % e)
647 
648     def extractfile(self, member):
649         """Extract a member from the archive as a file object. `member' may be
650            a filename or a TarInfo object. If `member' is a regular file, a
651            file-like object is returned. If `member' is a link, a file-like
652            object is constructed from the link's target. If `member' is none of
653            the above, None is returned.
654            The file-like object is read-only and provides the following
655            methods: read(), readline(), readlines(), seek() and tell()
656         """
657         self._check("r")
658 
659         if isinstance(member, basestring):
660             tarinfo = self.getmember(member)
661         else:
662             tarinfo = member
663 
664         if tarinfo.isreg():
665             return self.fileobject(self, tarinfo)
666 
667         elif tarinfo.type not in SUPPORTED_TYPES:
668             # If a member's type is unknown, it is treated as a
669             # regular file.
670             return self.fileobject(self, tarinfo)
671 
672         elif tarinfo.islnk() or tarinfo.issym():
673             if isinstance(self.fileobj, _Stream):
674                 # A small but ugly workaround for the case that someone tries
675                 # to extract a (sym)link as a file-object from a non-seekable
676                 # stream of tar blocks.
677                 raise StreamError("cannot extract (sym)link as file object")
678             else:
679                 # A (sym)link's file object is its target's file object.
680                 return self.extractfile(self._find_link_target(tarinfo))
681         else:
682             # If there's no data associated with the member (directory, chrdev,
683             # blkdev, etc.), return None instead of a file object.
684             return None
685 
686     def _extract_member(self, tarinfo, targetpath):
687         """Extract the TarInfo object tarinfo to a physical
688            file called targetpath.
689         """
690         # Fetch the TarInfo object for the given name
691         # and build the destination pathname, replacing
692         # forward slashes to platform specific separators.
693         targetpath = targetpath.rstrip("/")
694         targetpath = targetpath.replace("/", os.sep)
695 
696         # Create all upper directories.
697         upperdirs = os.path.dirname(targetpath)
698         if upperdirs and not os.path.exists(upperdirs):
699             # Create directories that are not part of the archive with
700             # default permissions.
701             os.makedirs(upperdirs)
702 
703         if tarinfo.islnk() or tarinfo.issym():
704             self._dbg(1, "%s -> %s" % (tarinfo.name, tarinfo.linkname))
705         else:
706             self._dbg(1, tarinfo.name)
707 
708         if tarinfo.isreg():
709             self.makefile(tarinfo, targetpath)
710         elif tarinfo.isdir():
711             self.makedir(tarinfo, targetpath)
712         elif tarinfo.isfifo():
713             self.makefifo(tarinfo, targetpath)
714         elif tarinfo.ischr() or tarinfo.isblk():
715             self.makedev(tarinfo, targetpath)
716         elif tarinfo.islnk() or tarinfo.issym():
717             self.makelink(tarinfo, targetpath)
718         elif tarinfo.type not in SUPPORTED_TYPES:
719             self.makeunknown(tarinfo, targetpath)
720         else:
721             self.makefile(tarinfo, targetpath)
722 
723         self.chown(tarinfo, targetpath)
724         if not tarinfo.issym():
725             self.chmod(tarinfo, targetpath)
726             self.utime(tarinfo, targetpath)
727 
728     #--------------------------------------------------------------------------
729     # Below are the different file methods. They are called via
730     # _extract_member() when extract() is called. They can be replaced in a
731     # subclass to implement other functionality.
732 
733     def makedir(self, tarinfo, targetpath):
734         """Make a directory called targetpath.
735         """
736         try:
737             # Use a safe mode for the directory, the real mode is set
738             # later in _extract_member().
739             os.mkdir(targetpath, 0700)
740         except EnvironmentError, e:
741             if e.errno != errno.EEXIST:
742                 raise
743 
744     def makefile(self, tarinfo, targetpath):
745         """Make a file called targetpath.
746         """
747         source = self.extractfile(tarinfo)
748         try:
749             with bltn_open(targetpath, "wb") as target:
750                 copyfileobj(source, target)
751         finally:
752             source.close()
753 
754     def makeunknown(self, tarinfo, targetpath):
755         """Make a file from a TarInfo object with an unknown type
756            at targetpath.
757         """
758         self.makefile(tarinfo, targetpath)
759         self._dbg(1, "tarfile: Unknown file type %r, " \
760                      "extracted as regular file." % tarinfo.type)
761 
762     def makefifo(self, tarinfo, targetpath):
763         """Make a fifo called targetpath.
764         """
765         if hasattr(os, "mkfifo"):
766             os.mkfifo(targetpath)
767         else:
768             raise ExtractError("fifo not supported by system")
769 
770     def makedev(self, tarinfo, targetpath):
771         """Make a character or block device called targetpath.
772         """
773         if not hasattr(os, "mknod") or not hasattr(os, "makedev"):
774             raise ExtractError("special devices not supported by system")
775 
776         mode = tarinfo.mode
777         if tarinfo.isblk():
778             mode |= stat.S_IFBLK
779         else:
780             mode |= stat.S_IFCHR
781 
782         os.mknod(targetpath, mode,
783                  os.makedev(tarinfo.devmajor, tarinfo.devminor))
784 
785     def makelink(self, tarinfo, targetpath):
786         """Make a (symbolic) link called targetpath. If it cannot be created
787           (platform limitation), we try to make a copy of the referenced file
788           instead of a link.
789         """
790         if hasattr(os, "symlink") and hasattr(os, "link"):
791             # For systems that support symbolic and hard links.
792             if tarinfo.issym():
793                 if os.path.lexists(targetpath):
794                     os.unlink(targetpath)
795                 os.symlink(tarinfo.linkname, targetpath)
796             else:
797                 # See extract().
798                 if os.path.exists(tarinfo._link_target):
799                     if os.path.lexists(targetpath):
800                         os.unlink(targetpath)
801                     os.link(tarinfo._link_target, targetpath)
802                 else:
803                     self._extract_member(self._find_link_target(tarinfo), targetpath)
804         else:
805             try:
806                 self._extract_member(self._find_link_target(tarinfo), targetpath)
807             except KeyError:
808                 raise ExtractError("unable to resolve link inside archive")
809 
810     def chown(self, tarinfo, targetpath):
811         """Set owner of targetpath according to tarinfo.
812         """
813         if pwd and hasattr(os, "geteuid") and os.geteuid() == 0:
814             # We have to be root to do so.
815             try:
816                 g = grp.getgrnam(tarinfo.gname)[2]
817             except KeyError:
818                 g = tarinfo.gid
819             try:
820                 u = pwd.getpwnam(tarinfo.uname)[2]
821             except KeyError:
822                 u = tarinfo.uid
823             try:
824                 if tarinfo.issym() and hasattr(os, "lchown"):
825                     os.lchown(targetpath, u, g)
826                 else:
827                     if sys.platform != "os2emx":
828                         os.chown(targetpath, u, g)
829             except EnvironmentError, e:
830                 raise ExtractError("could not change owner")
831 
832     def chmod(self, tarinfo, targetpath):
833         """Set file permissions of targetpath according to tarinfo.
834         """
835         if hasattr(os, 'chmod'):
836             try:
837                 os.chmod(targetpath, tarinfo.mode)
838             except EnvironmentError, e:
839                 raise ExtractError("could not change mode")
840 
841     def utime(self, tarinfo, targetpath):
842         """Set modification time of targetpath according to tarinfo.
843         """
844         if not hasattr(os, 'utime'):
845             return
846         try:
847             os.utime(targetpath, (tarinfo.mtime, tarinfo.mtime))
848         except EnvironmentError, e:
849             raise ExtractError("could not change modification time")
850 
851     #--------------------------------------------------------------------------
852     def next(self):
853         """Return the next member of the archive as a TarInfo object, when
854            TarFile is opened for reading. Return None if there is no more
855            available.
856         """
857         self._check("ra")
858         if self.firstmember is not None:
859             m = self.firstmember
860             self.firstmember = None
861             return m
862 
863         # Read the next block.
864         self.fileobj.seek(self.offset)
865         tarinfo = None
866         while True:
867             try:
868                 tarinfo = self.tarinfo.fromtarfile(self)
869             except EOFHeaderError, e:
870                 if self.ignore_zeros:
871                     self._dbg(2, "0x%X: %s" % (self.offset, e))
872                     self.offset += BLOCKSIZE
873                     continue
874             except InvalidHeaderError, e:
875                 if self.ignore_zeros:
876                     self._dbg(2, "0x%X: %s" % (self.offset, e))
877                     self.offset += BLOCKSIZE
878                     continue
879                 elif self.offset == 0:
880                     raise ReadError(str(e))
881             except EmptyHeaderError:
882                 if self.offset == 0:
883                     raise ReadError("empty file")
884             except TruncatedHeaderError, e:
885                 if self.offset == 0:
886                     raise ReadError(str(e))
887             except SubsequentHeaderError, e:
888                 raise ReadError(str(e))
889             break
890 
891         if tarinfo is not None:
892             self.members.append(tarinfo)
893         else:
894             self._loaded = True
895 
896         return tarinfo
897 
898     #--------------------------------------------------------------------------
899     # Little helper methods:
900 
901     def _getmember(self, name, tarinfo=None, normalize=False):
902         """Find an archive member by name from bottom to top.
903            If tarinfo is given, it is used as the starting point.
904         """
905         # Ensure that all members have been loaded.
906         members = self.getmembers()
907 
908         # Limit the member search list up to tarinfo.
909         if tarinfo is not None:
910             members = members[:members.index(tarinfo)]
911 
912         if normalize:
913             name = os.path.normpath(name)
914 
915         for member in reversed(members):
916             if normalize:
917                 member_name = os.path.normpath(member.name)
918             else:
919                 member_name = member.name
920 
921             if name == member_name:
922                 return member
923 
924     def _load(self):
925         """Read through the entire archive file and look for readable
926            members.
927         """
928         while True:
929             tarinfo = self.next()
930             if tarinfo is None:
931                 break
932         self._loaded = True
933 
934     def _check(self, mode=None):
935         """Check if TarFile is still open, and if the operation's mode
936            corresponds to TarFile's mode.
937         """
938         if self.closed:
939             raise IOError("%s is closed" % self.__class__.__name__)
940         if mode is not None and self.mode not in mode:
941             raise IOError("bad operation for mode %r" % self.mode)
942 
943     def _find_link_target(self, tarinfo):
944         """Find the target member of a symlink or hardlink member in the
945            archive.
946         """
947         if tarinfo.issym():
948             # Always search the entire archive.
949             linkname = "/".join(filter(None, (os.path.dirname(tarinfo.name), tarinfo.linkname)))
950             limit = None
951         else:
952             # Search the archive before the link, because a hard link is
953             # just a reference to an already archived file.
954             linkname = tarinfo.linkname
955             limit = tarinfo
956 
957         member = self._getmember(linkname, tarinfo=limit, normalize=True)
958         if member is None:
959             raise KeyError("linkname %r not found" % linkname)
960         return member
961 
962     def __iter__(self):
963         """Provide an iterator object.
964         """
965         if self._loaded:
966             return iter(self.members)
967         else:
968             return TarIter(self)
969 
970     def _dbg(self, level, msg):
971         """Write debugging output to sys.stderr.
972         """
973         if level <= self.debug:
974             print >> sys.stderr, msg
975 
976     def __enter__(self):
977         self._check()
978         return self
979 
980     def __exit__(self, type, value, traceback):
981         if type is None:
982             self.close()
983         else:
984             # An exception occurred. We must not call close() because
985             # it would try to write end-of-archive blocks and padding.
986             if not self._extfileobj:
987                 self.fileobj.close()
988             self.closed = True
989 # class TarFile
990 
991 TarFile
tarfile source code (Python 2 standard library)
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import tarfile
 7 tar = tarfile.open("yinzhengjie.tar","w")
 8 tar.add(r"D:\python\daima\DAY5\day5.zip",arcname="day5.zip")
 9 tar.add(r"D:\python\daima\DAY3\學生信息.xlsx",arcname="test_11") #arcname stores "學生信息.xlsx" under the name "test_11" inside the archive "yinzhengjie.tar"
10 tar.close()
tarfile compression usage
1 #!/usr/bin/env python
2 #_*_coding:utf-8_*_
3 #@author :yinzhengjie
4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
5 #EMAIL:y1053419035@qq.com
6 import tarfile
7 tar = tarfile.open("yinzhengjie.tar","r") #note: when extracting, the archive must be opened in "r" mode, not "w"!
8 tar.extractall(path=r"D:\python\daima\DAY6\test")   #path sets the directory the archive is extracted into
9 tar.close()
tarfile extraction usage
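Before extracting, it is often useful to inspect what an archive contains. A small self-contained sketch (the file name and contents are made up for the demo, and everything happens in a temporary directory):

```python
import os
import tarfile
import tempfile

# Build a tiny archive in a temporary directory so the demo is self-contained.
workdir = tempfile.mkdtemp()
member_path = os.path.join(workdir, "hello.txt")
with open(member_path, "w") as f:
    f.write("hello tarfile")

archive_path = os.path.join(workdir, "demo.tar")
with tarfile.open(archive_path, "w") as tar:
    tar.add(member_path, arcname="hello.txt")

# Inspect the archive without extracting anything.
with tarfile.open(archive_path, "r") as tar:
    print(tar.getnames())              # member names, e.g. ['hello.txt']
    info = tar.getmember("hello.txt")  # TarInfo object carrying the metadata
    print(info.size)                   # member size in bytes
```

getnames() and getmember() only read the archive's headers, so they are cheap even on large archives.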

 

8.shelve module

  shelve is a thin wrapper on top of pickle that is simpler to use. It persists in-memory data to a file as simple key/value pairs and can store any data type that pickle supports.

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import shelve
 7 f = shelve.open('shelve_test')  # open (or create) a shelf file
 8 class Test(object):
 9     def __init__(self, n):
10         self.n = n
11 def func():
12     print("my name is yinzhengjie!")
13 t = Test(123)
14 name = ["yinzhengjie", "lijing", "test"]
15 f["first"] = name  # serialize a list
16 f["second"] = t  # serialize an instance; only a reference to the class is stored, so the class definition must be importable wherever the shelf is loaded
17 f["third"] = func #serialize a function; likewise only the name is stored, so the function must be importable when loaded
18 f.close()
Serializing data
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import shelve
 7 f = shelve.open('shelve_test')
 8 for i in f.keys(): 
 9     print(i)
10 print(f.get("first")) #fetches the stored value (like pickle.load); to read back a class or function, the module that defines it must be importable
11 
12 
13 #the code above prints:
14 first
15 second
16 third
17 ['yinzhengjie', 'lijing', 'test']
Deserializing data
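One pitfall worth knowing: a shelf does not notice in-place mutation of a stored object unless it is opened with writeback=True. A minimal sketch (the file name and data are made up; everything is written to a temporary directory):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shelve_demo")

with shelve.open(path) as db:
    db["names"] = ["yinzhengjie"]
    db["names"].append("lost")      # mutates a temporary copy; not persisted!

with shelve.open(path) as db:
    first = list(db["names"])       # the append above is gone

with shelve.open(path, writeback=True) as db:
    db["names"].append("kept")      # cached, written back when the shelf closes

with shelve.open(path) as db:
    second = list(db["names"])

print(first)   # ['yinzhengjie']
print(second)  # ['yinzhengjie', 'kept']
```

writeback=True trades memory (every accessed entry stays cached until close) for the convenience of in-place mutation.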

 

9.xml module

  XML is a protocol for exchanging data between different languages and programs, much like JSON, though JSON is simpler to use. Before JSON existed XML was the main option, and many legacy systems, such as interfaces in the finance industry, still rely mainly on XML today.

  The XML format looks like the following; the data structure is delimited by <> nodes:

 1 <?xml version="1.0"?>
 2 <data>
 3     <country name="Liechtenstein">
 4         <rank updated="yes">2</rank>
 5         <year>2008</year>
 6         <gdppc>141100</gdppc>
 7         <neighbor name="Austria" direction="E"/>
 8         <neighbor name="Switzerland" direction="W"/>
 9         <hotspots>
10             <test1>維也納</test1>
11             <test2>薩爾斯堡</test2>
12         </hotspots>
13     </country>
14     <country name="Singapore">
15         <rank updated="yes">5</rank>
16         <year>2011</year>
17         <gdppc>59900</gdppc>
18         <neighbor name="Malaysia" direction="N"/>
19     </country>
20     <country name="Panama">
21         <rank updated="yes">69</rank>
22         <year>2011</year>
23         <gdppc>13600</gdppc>
24         <neighbor name="Costa Rica" direction="W"/>
25         <neighbor name="Colombia" direction="E"/>
26     </country>
27 </data>
xmltest.xml
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import xml.etree.ElementTree as ET
 7 tree = ET.parse("xmltest.xml") #parse the file (think of it as opening it)
 8 root = tree.getroot() #get the root element
 9 print(root.tag)  #print the root element's tag name
10 # walk the whole xml document
11 print("*"*50,"我是分割線","*"*50)
12 for child in root:
13     print(child.tag, child.attrib) #print each child's tag name ("country") and its attributes
14     for i in child:
15         print(i.tag, i.text,i.attrib) #print the grandchild's tag, its text, and finally its attributes
16 # iterate over the year nodes only
17 print("*" * 50, "我是分割線", "*" * 50)
18 for node in root.iter('year'):
19     print(node.tag, node.text)
20 
21 # iterate over the country nodes only
22 print("*" * 50, "我是分割線", "*" * 50)
23 for node in root.iter('country'):
24     print(node.tag, node.text,node.attrib)
25 
26 print("*" * 50, "我是分割線", "*" * 50)
27 for node in root.iter('test1'):
28     print(node.tag, node.text,node.attrib)
29 
30 
31 #the code above prints:
32 data
33 ************************************************** 我是分割線 **************************************************
34 country {'name': 'Liechtenstein'}
35 rank 2 {'updated': 'yes'}
36 year 2008 {}
37 gdppc 141100 {}
38 neighbor None {'direction': 'E', 'name': 'Austria'}
39 neighbor None {'direction': 'W', 'name': 'Switzerland'}
40 hotspots 
41              {}
42 country {'name': 'Singapore'}
43 rank 5 {'updated': 'yes'}
44 year 2011 {}
45 gdppc 59900 {}
46 neighbor None {'direction': 'N', 'name': 'Malaysia'}
47 country {'name': 'Panama'}
48 rank 69 {'updated': 'yes'}
49 year 2011 {}
50 gdppc 13600 {}
51 neighbor None {'direction': 'W', 'name': 'Costa Rica'}
52 neighbor None {'direction': 'E', 'name': 'Colombia'}
53 ************************************************** 我是分割線 **************************************************
54 year 2008
55 year 2011
56 year 2011
57 ************************************************** 我是分割線 **************************************************
58 country 
59          {'name': 'Liechtenstein'}
60 country 
61          {'name': 'Singapore'}
62 country 
63          {'name': 'Panama'}
64 ************************************************** 我是分割線 **************************************************
65 test1 維也納 {}
Reading an xml file
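When the xml arrives as a string (say, from an API response) instead of a file, ET.fromstring() returns the root element directly. A short sketch with an inline document (the data is a trimmed-down version of the example above):

```python
import xml.etree.ElementTree as ET

# fromstring() parses a string and returns the root element itself;
# there is no ElementTree object, so no getroot() call is needed.
doc = """
<data>
    <country name="Liechtenstein"><rank>2</rank></country>
    <country name="Singapore"><rank>5</rank></country>
</data>
"""
root = ET.fromstring(doc)
for country in root.findall("country"):
    # get() reads an attribute; find().text reads a child element's text
    print(country.get("name"), country.find("rank").text)
```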
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import xml.etree.ElementTree as ET
 7 tree = ET.parse("xmltest.xml")
 8 root = tree.getroot()
 9 # modify
10 for node in root.iter('year'): #operate on every "year" element
11     new_year = int(node.text) + 1  #add 1 to the year
12     node.text = str(new_year)  #convert the number back to a string
13     node.set("updated", "yes")  #set() modifies attributes; this adds updated="yes"
14 tree.write("xmltest2.xml",encoding="utf-8") #write the modified content back to an xml file
Modifying attributes in an xml file
 1 <data>
 2     <country name="Liechtenstein">
 3         <rank updated="yes">2</rank>
 4         <year updated="yes">2009</year>
 5         <gdppc>141100</gdppc>
 6         <neighbor direction="E" name="Austria" />
 7         <neighbor direction="W" name="Switzerland" />
 8         <hotspots>
 9             <test1>維也納</test1>
10             <test2>薩爾斯堡</test2>
11         </hotspots>
12     </country>
13     <country name="Singapore">
14         <rank updated="yes">5</rank>
15         <year updated="yes">2012</year>
16         <gdppc>59900</gdppc>
17         <neighbor direction="N" name="Malaysia" />
18     </country>
19     <country name="Panama">
20         <rank updated="yes">69</rank>
21         <year updated="yes">2012</year>
22         <gdppc>13600</gdppc>
23         <neighbor direction="W" name="Costa Rica" />
24         <neighbor direction="E" name="Colombia" />
25     </country>
26 </data>
xmltest2.xml
 
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import xml.etree.ElementTree as ET
 7 tree = ET.parse("xmltest.xml")
 8 root = tree.getroot()
 9 # delete nodes
10 for country in root.findall('country'):
11     rank = int(country.find('rank').text)
12     if rank > 50: #remove every country whose rank is greater than 50
13         root.remove(country)
14 tree.write('output.xml',encoding="utf-8")
Deleting nodes from an xml file
 1 <data>
 2     <country name="Liechtenstein">
 3         <rank updated="yes">2</rank>
 4         <year>2008</year>
 5         <gdppc>141100</gdppc>
 6         <neighbor direction="E" name="Austria" />
 7         <neighbor direction="W" name="Switzerland" />
 8         <hotspots>
 9             <test1>維也納</test1>
10             <test2>薩爾斯堡</test2>
11         </hotspots>
12     </country>
13     <country name="Singapore">
14         <rank updated="yes">5</rank>
15         <year>2011</year>
16         <gdppc>59900</gdppc>
17         <neighbor direction="N" name="Malaysia" />
18     </country>
19     </data>
output.xml

 

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import xml.etree.ElementTree as ET
 7 new_xml = ET.Element("namelist")  #create the root element
 8 name = ET.SubElement(new_xml, "name", attrib={"enrolled": "yes"})  #SubElement creates a child element named "name" with the attributes {"enrolled": "yes"}
 9 age = ET.SubElement(name, "age", attrib={"checked": "no"}) #create another child element under name
10 sex = ET.SubElement(name, "sex")
11 sex.text = '33'
12 
13 name2 = ET.SubElement(new_xml, "name", attrib={"enrolled": "no"})
14 age = ET.SubElement(name2, "age")
15 age.text = '19'
16 et = ET.ElementTree(new_xml)  # build the document object
17 et.write("test.xml", encoding="utf-8", xml_declaration=True)
18 # ET.dump(new_xml)  # print the generated document
Creating an xml file
1 <?xml version='1.0' encoding='utf-8'?>
2 <namelist>
3     <name enrolled="yes">
4         <age checked="no" />
5         <sex>33</sex></name>
6     <name enrolled="no">
7         <age>19</age>
8     </name>
9 </namelist>
test.xml

 

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import xml.etree.ElementTree as ET
 7 tree = ET.parse("xmltest.xml")
 8 root = tree.getroot()
 9 for country in root.findall('country'):
10    rank = int(country.find('rank').text)
11    if rank > 50:
12      root.remove(country)
13    else:
14        sub_ele = ET.SubElement(country,"population",attrib={"enrolled":"yes"})  #create a new "population" element
15        sub_ele.text = str(100000000000)
16 tree.write('output2.xml')
Adding new content to an existing xml file
 1 <data>
 2     <country name="Liechtenstein">
 3         <rank updated="yes">2</rank>
 4         <year>2008</year>
 5         <gdppc>141100</gdppc>
 6         <neighbor direction="E" name="Austria" />
 7         <neighbor direction="W" name="Switzerland" />
 8         <hotspots>
 9             <test1>&#32500;&#20063;&#32435;</test1>
10             <test2>&#33832;&#23572;&#26031;&#22561;</test2>
11         </hotspots>
12     <population enrolled="yes">100000000000</population></country>
13     <country name="Singapore">
14         <rank updated="yes">5</rank>
15         <year>2011</year>
16         <gdppc>59900</gdppc>
17         <neighbor direction="N" name="Malaysia" />
18     <population enrolled="yes">100000000000</population></country>
19     </data>
output2.xml

 

10.PyYAML module

See the official documentation: http://pyyaml.org/wiki/PyYAMLDocumentation

 The configuration files of the SaltStack automation tool are written in this YAML format.
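For reference, a minimal SaltStack-style YAML fragment (the state and package names are invented for illustration); PyYAML's yaml.safe_load() turns a document like this into nested dicts and lists:

```yaml
# Illustrative SaltStack-style state file (names are made up):
install_base_packages:
  pkg.installed:
    - pkgs:
      - vim
      - ntp
```

Loading it with yaml.safe_load() would yield {'install_base_packages': {'pkg.installed': [{'pkgs': ['vim', 'ntp']}]}}.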

11.ConfigParser module

   Used to generate and modify common configuration files; in Python 3.x the module is named configparser. INI-style configuration files, similar to MySQL's my.cnf, can be generated and parsed with this module.

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import configparser
 7 config = configparser.ConfigParser()  #create a parser instance
 8 config["DEFAULT"] = {'ServerAliveInterval': '45',
 9                      'Compression': 'yes',
10                      'CompressionLevel': '9'}
11 config['bitbucket.org'] = {}  #create an empty section
12 config['bitbucket.org']['User'] = 'yinzhengjie'  #add a key to that section
13 config['topsecret.server.com'] = {}  #create another empty section named "topsecret.server.com"
14 topsecret = config['topsecret.server.com']  #bind the section to a variable so keys can be assigned through it
15 topsecret['Host Port'] = '50022'  # mutates the parser
16 topsecret['ForwardX11'] = 'no'  # same here
17 config['DEFAULT']['ForwardX11'] = 'yes' #assign a key in the "DEFAULT" section
18 with open('example.ini', 'w') as configfile: #finally write the data to "example.ini"
19     config.write(configfile)
Generating a configparser file
 1 [DEFAULT]
 2 compressionlevel = 9
 3 serveraliveinterval = 45
 4 compression = yes
 5 forwardx11 = yes
 6 
 7 [bitbucket.org]
 8 user = yinzhengjie
 9 
10 [topsecret.server.com]
11 host port = 50022
12 forwardx11 = no
example.ini
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import configparser
 7 config = configparser.ConfigParser() #create a parser instance
 8 print(config.sections())  #print the sections; the list is empty before any file is read
 9 print("*"*50,"我是分割線","*"*50)
10 print(config.read('example.ini')) #read the file
11 print("*"*50,"我是分割線","*"*50)
12 print(config.sections())  #after read() the sections are visible; note that the "[DEFAULT]" section is never listed
13 print("*"*50,"我是分割線","*"*50)
14 print('bitbucket.org' in config)  #boolean test: is bitbucket.org a section of this parser?
15 print('bytebong.com' in config)
16 print("*"*50,"我是分割線","*"*50)
17 print(config['bitbucket.org']['User'])  #fetch the value of 'User' in the 'bitbucket.org' section
18 print("*"*50,"我是分割線","*"*50)
19 print(config['DEFAULT']['Compression'])
20 topsecret = config['topsecret.server.com']
21 print(topsecret['ForwardX11'])
22 print("*"*50,"我是分割線","*"*50)
23 for key in config['bitbucket.org']: #iterate over the keys of the 'bitbucket.org' section (keys from DEFAULT are included)
24     print(key)
25 print(config['bitbucket.org']['ForwardX11'])
configparser query examples

 

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import configparser
 7 config = configparser.ConfigParser() #create a parser instance
 8 config.read('example.ini')  #read the file first; without this step there is nothing to modify or query!
 9 config.set("bitbucket.org","name","Yinzhengjie")  #add the key "name" with the value "Yinzhengjie" to the bitbucket.org section
10 config.set("bitbucket.org","user","尹正傑")   #change the value of "user" in the bitbucket.org section to "尹正傑"
11 config.write(open("test_1.cfg","w",encoding="utf-8"))
configparser modify and add examples
 1 [DEFAULT]
 2 compressionlevel = 9
 3 serveraliveinterval = 45
 4 compression = yes
 5 forwardx11 = yes
 6 
 7 [bitbucket.org]
 8 user = 尹正傑
 9 name = Yinzhengjie
10 
11 [topsecret.server.com]
12 host port = 50022
13 forwardx11 = no
test_1.cfg

 

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import configparser
 7 config = configparser.ConfigParser() #create a parser instance
 8 config.read('example.ini')
 9 config.remove_section("bitbucket.org")  #remove an entire section!
10 config.remove_option("DEFAULT","forwardx11")  #remove a single option from a section
11 config.write(open("test_rm.ini","w",encoding="utf-8")) #an explicit encoding is needed for Chinese characters to be written correctly
configparser delete examples
1 [DEFAULT]
2 compressionlevel = 9
3 serveraliveinterval = 45
4 compression = yes
5 
6 [topsecret.server.com]
7 host port = 50022
8 forwardx11 = no
test_rm.ini
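configparser stores every value as a string; the typed getters getint()/getfloat()/getboolean() convert on the way out, and keys from [DEFAULT] are inherited by every section. A sketch using read_string() so it does not depend on a file on disk (the section and key names mirror the example above):

```python
import configparser

config = configparser.ConfigParser()
# read_string() parses configuration from a literal instead of a file.
config.read_string("""
[DEFAULT]
compressionlevel = 9

[topsecret.server.com]
host port = 50022
forwardx11 = no
""")

section = config["topsecret.server.com"]
print(section.getint("host port"))        # 50022 as an int, not a str
print(section.getboolean("forwardx11"))   # False ("no"/"off"/"0" all count)
print(section.getint("compressionlevel")) # 9, inherited from [DEFAULT]
```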

 

12.hashlib module

  Used for hash-related operations. In Python 3.x it replaces the old md5 and sha modules and provides the SHA1, SHA224, SHA256, SHA384, SHA512 and MD5 algorithms.

1>.MD5 algorithm in detail:

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.md5()
 8 m.update(b"hello") #input must be bytes
 9 print(m.hexdigest()) #print the md5 digest as a hex string; the same input always produces the same digest
10 m.update(b"my name is yinzhengjie")
11 print(m.hexdigest())
12 
13 #note: update() appends to the previous input, so the digest now covers the concatenated data; a new hash object over a combined string gives yet another digest, because the exact bytes differ
14 m2 = hashlib.md5()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.hexdigest())
17 
18 '''
19 Note:
20     An MD5 digest cannot be reversed. Online "decryption" services simply keep huge databases of precomputed digests and look up the plaintext matching the digest you submit.
21 '''
22 
23 
24 #the code above prints:
25 5d41402abc4b2a76b9719d911017c592
26 1c7bdaafeb36ea7e3236d01afeee39cf
27 1d19d8f2d5037b0f3e9a2d020930ba91
Hex md5 example
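A common real-world pattern is hashing a large file in fixed-size chunks; because update() accumulates across calls, there is no need to read the whole file into memory. A sketch (the file is generated in a temporary directory just for the demo):

```python
import hashlib
import os
import tempfile

# Create a throwaway file to hash.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
payload = b"hello" * 100000
with open(path, "wb") as f:
    f.write(payload)

# Feed the file to md5 in 8 KB chunks; update() accumulates state.
m = hashlib.md5()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        m.update(chunk)

# Same digest as hashing the whole payload in one call.
whole = hashlib.md5(payload).hexdigest()
print(m.hexdigest() == whole)  # True
```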
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.md5()
 8 m.update(b"hello") #input must be bytes
 9 print(m.digest()) #digest() returns the raw bytes of the md5 value; the same input always produces the same digest
10 m.update(b"my name is yinzhengjie")
11 print(m.digest())
12 
13 #note: update() appends to the previous input, so the digest now covers the concatenated data and changes accordingly
14 m2 = hashlib.md5()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.digest())
17 
18 '''
19 Note:
20     An MD5 digest cannot be reversed; online lookup services rely on databases of precomputed digests.
21 '''
22 
23 #the code above prints:
24 b']A@*\xbcK*v\xb9q\x9d\x91\x10\x17\xc5\x92'
25 b'\x1c{\xda\xaf\xeb6\xea~26\xd0\x1a\xfe\xee9\xcf'
26 b'\x1d\x19\xd8\xf2\xd5\x03{\x0f>\x9a-\x02\t0\xba\x91'
Binary md5 example

 

 

2>.sha1 algorithm in detail:

  Google has demonstrated a practical collision against SHA-1 (the SHAttered attack), so the algorithm is considered broken and is rarely chosen for new systems!

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.sha1()
 8 m.update(b"hello") #input must be bytes
 9 print(m.digest()) #digest() returns the raw bytes of the sha1 value; the same input always produces the same digest
10 m.update(b"my name is yinzhengjie")
11 print(m.digest())
12 
13 #note: update() appends to the previous input, so the digest now covers the concatenated data and changes accordingly
14 m2 = hashlib.sha1()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.digest())
17 
18 '''
19 Note:
20     A sha1 digest cannot be reversed either; online lookup services rely on databases of precomputed digests.
21 '''
22 
23 #the code above prints:
24 b'\xaa\xf4\xc6\x1d\xdc\xc5\xe8\xa2\xda\xbe\xde\x0f;H,\xd9\xae\xa9CM'
25 b'p\xff\xe5<\x08\xb9D?\xabJ\xcdC2f\x84\xa07\xd6\xc2c'
26 b'\xad\x06\x8b\x91)\x1c \x99\x82*6D^\xb2DA\x12_3\xa6'
Binary sha1 example
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.sha1()
 8 m.update(b"hello") #input must be bytes
 9 print(m.hexdigest()) #print the sha1 digest as a hex string; the same input always produces the same digest
10 m.update(b"my name is yinzhengjie")
11 print(m.hexdigest())
12 
13 #note: update() appends to the previous input, so the digest now covers the concatenated data and changes accordingly
14 m2 = hashlib.sha1()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.hexdigest())
17 
18 '''
19 Note:
20     A sha1 digest cannot be reversed either; online lookup services rely on databases of precomputed digests.
21 '''
22 
23 
24 #the code above prints:
25 aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
26 70ffe53c08b9443fab4acd43326684a037d6c263
27 ad068b91291c2099822a36445eb24441125f33a6
Hex sha1 example

 

3>.sha256 algorithm in detail:

  No practical break of SHA-256 is known; the collision Google demonstrated applies only to SHA-1. You can also see that the digest becomes noticeably longer.

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.sha256()
 8 m.update(b"hello") #字節格式輸入
 9 print(m.digest()) #用十六進制輸出一段md5值,注意,只要輸入的值不變,這個值就不會變的!
10 m.update(b"my name is yinzhengjie")
11 print(m.digest())
12 
13 #注意,將上面兩個字段拼接起來,其中的MD5值也是會發生變化的
14 m2 = hashlib.sha256()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.digest())
17 
18 
19 #以上代碼執行結果以下:
20 b',\xf2M\xba_\xb0\xa3\x0e&\xe8;*\xc5\xb9\xe2\x9e\x1b\x16\x1e\\\x1f\xa7B^s\x043b\x93\x8b\x98$'
21 b'\xec\xf6\x8e\x01\x17\xac!:\xb9<\xe4\xab\xee\x13\x03\xcc\xe4r\xb0\xdc\xfb\xcbm\xd4\xec\xa2\xc9P\x02\xfdi\xb7'
22 b'\xf4\x9d\xe7o\xe3\x01A\xf28\xd0\xc1b4\xa0\xbf\x01\x88\xbf\x9a4\xb4\xe8\xdd\xb6\\P\x8c&\xd5\xb1\xaf\x06'
Binary sha256 digest example
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.sha256()
 8 m.update(b"hello") #input is given as bytes
 9 print(m.hexdigest()) #print the sha256 digest as a hex string; as long as the input stays the same, the digest never changes!
10 m.update(b"my name is yinzhengjie")
11 print(m.hexdigest())
12 
13 #Note: hashing the two pieces joined into a single string yields a different sha256 value
14 m2 = hashlib.sha256()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.hexdigest())
17 
18 
19 #The code above prints the following:
20 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
21 ecf68e0117ac213ab93ce4abee1303cce472b0dcfbcb6dd4eca2c95002fd69b7
22 f49de76fe30141f238d0c16234a0bf0188bf9a34b4e8ddb65c508c26d5b1af06
Hexadecimal sha256 digest example

 

4>.The sha512 algorithm:

  This one has not been broken either; the obvious difference is that the digest is even longer than sha256's!

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.sha512()
 8 m.update(b"hello") #input is given as bytes
 9 print(m.digest()) #print the raw sha512 digest as bytes; as long as the input stays the same, the digest never changes!
10 m.update(b"my name is yinzhengjie")
11 print(m.digest())
12 
13 #Note: hashing the two pieces joined into a single string yields a different sha512 value
14 m2 = hashlib.sha512()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.digest())
17 
18 
19 #The code above prints the following:
20 b'\x9bq\xd2$\xbdb\xf3x]\x96\xd4j\xd3\xea=s1\x9b\xfb\xc2\x89\x0c\xaa\xda\xe2\xdf\xf7%\x19g<\xa7##\xc3\xd9\x9b\xa5\xc1\x1d|z\xccn\x14\xb8\xc5\xda\x0cFcG\\.\\:\xde\xf4os\xbc\xde\xc0C'
21 b"7\x8fb\xe6'\x11\xcc\xa8I\x9b\x89=\xcf\xac\x06\xdc\xbc\xb7GyG\x96\xd9=\xfc\xa7r\xc6\xba\x9ep\x96\xd7X\x05\x82\xbd\x87\xae\x94\x90UD\xdd\xdf\x94-\xa5\xcd\xf9o\x89\xdc\xcf\x85pr\x9ekvE\x12\xcc\x0f"
22 b'\xea\x1b\xda\xce3r>\x83\x98\x94\xd7\x7fp\xad}\x84w\xb3o\xd2\xf4ZMB\xb6\xb9c|t]\xa5\xf7]*\xb2v\xf10\xa8&\x19\xeb\xc7\xe5;\x9d0\x92o\x9b\xa8\x91v\xc5\x03\xd4\x82Z\xb3;\xea[\x01h'
Binary sha512 digest example
 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hashlib
 7 m = hashlib.sha512()
 8 m.update(b"hello") #input is given as bytes
 9 print(m.hexdigest()) #print the sha512 digest as a hex string; as long as the input stays the same, the digest never changes!
10 m.update(b"my name is yinzhengjie")
11 print(m.hexdigest())
12 
13 #Note: hashing the two pieces joined into a single string yields a different sha512 value
14 m2 = hashlib.sha512()
15 m2.update(b"hello  my name is yinzhengjie")
16 print(m2.hexdigest())
17 
18 
19 #The code above prints the following:
20 9b71d224bd62f3785d96d46ad3ea3d73319bfbc2890caadae2dff72519673ca72323c3d99ba5c11d7c7acc6e14b8c5da0c4663475c2e5c3adef46f73bcdec043
21 378f62e62711cca8499b893dcfac06dcbcb747794796d93dfca772c6ba9e7096d7580582bd87ae94905544dddf942da5cdf96f89dccf8570729e6b764512cc0f
22 ea1bdace33723e839894d77f70ad7d8477b36fd2f45a4d42b6b9637c745da5f75d2ab276f130a82619ebc7e53b9d30926f9ba89176c503d4825ab33bea5b0168
Hexadecimal sha512 digest example

 

5>.The hmac module

  If you still feel the algorithms above are not secure enough--wow, your sense of security really is low; you must have been hurt badly, definitely someone with a story, haha~ For people like you there is an algorithm custom-made: hmac. And once you grow into a development master you can always write your own algorithm, since after all you only trust yourself~

  HMAC (Hash-based Message Authentication Code) is an authentication mechanism built on message authentication codes (MAC). With HMAC, the two communicating parties verify a message by means of a shared authentication key K added to it; it is typically used to authenticate messages in network communication. The precondition is that both sides first agree on a key, like a pre-arranged signal. The sender hashes the message with the key; the receiver hashes the key plus the received plaintext and compares the result with the digest it was sent. If they are equal, the message is authentic and the sender legitimate. (Internally, hmac processes the key and the content we supply before hashing.)

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import hmac
 7 h = hmac.new("我本有心向明月".encode("utf-8"), "奈何明月照溝渠".encode("utf-8"),) #"我本有心向明月" plays the role of the key the two parties (A and B) agreed on beforehand (much like the first time you ssh into a Linux server, where you must answer "yes" before you may type the password). Receiver A gets the plaintext "奈何明月照溝渠" together with the digest "489f9932949514ab24894559150088c0"; A then hashes the plaintext with the agreed key, and if the result is "489f9932949514ab24894559150088c0" the message really was sent by B. This only authenticates the source of the message: if a middleman intercepts the plaintext and modifies it, the digests no longer match and the tampering is discovered!
 8 print(h.hexdigest())
 9 
10 
11 #The code above prints the following:
12 489f9932949514ab24894559150088c0
hmac example
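The verification flow described above can be sketched as follows. Two assumptions worth flagging: newer Python versions require an explicit digestmod (the example above relies on the old MD5 default), and hmac.compare_digest is the constant-time comparison the standard library provides for matching digests:

```python
import hmac

key = "我本有心向明月".encode("utf-8")  # the key both sides agreed on beforehand

# Sender B computes a digest over the shared key and the plaintext message.
msg = "奈何明月照溝渠".encode("utf-8")
sent_digest = hmac.new(key, msg, digestmod="md5").hexdigest()

# Receiver A recomputes the digest from the plaintext it received and
# compares; equal digests mean the message really came from B.
recomputed = hmac.new(key, msg, digestmod="md5").hexdigest()
print(hmac.compare_digest(sent_digest, recomputed))  # True

# A tampered plaintext no longer matches, so the forgery is detected.
forged = hmac.new(key, b"tampered", digestmod="md5").hexdigest()
print(hmac.compare_digest(sent_digest, forged))  # False
```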

 

 

13.The subprocess module

  subprocess essentially exists to replace os.system and the os.spawn* functions.
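The replacement is mostly one-for-one: where os.system only gives you an exit status (the command's output goes straight to the terminal), subprocess.call does the same job, with the richer family of functions below around it. A quick sketch using the trivial command true:

```python
import os
import subprocess

# os.system runs the command in a shell and returns its exit status;
# the command's output goes directly to the terminal.
print(os.system("true"))                    # 0

# subprocess.call is the direct replacement with the same semantics.
print(subprocess.call("true", shell=True))  # 0
```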

 

1>.subprocess.run invokes a shell command; it only keeps the status of the run, not the command's output!

 1 #Run a Linux command without arguments
 2 >>> a = subprocess.run("df")                               
 3 Filesystem     1K-blocks    Used Available Use% Mounted on
 4 /dev/sda2        8854456 4170968   4227040  50% /
 5 tmpfs             502172     228    501944   1% /dev/shm
 6 /dev/sda1         289293   28463    245470  11% /boot
 7 
 8 #Run a Linux command with arguments
 9 >>> a = subprocess.run(["df","-h"])
10 Filesystem      Size  Used Avail Use% Mounted on
11 /dev/sda2       8.5G  3.8G  4.3G  48% /
12 tmpfs           491M  228K  491M   1% /dev/shm
13 /dev/sda1       283M   28M  240M  11% /boot
14 >>> 
15 
16 
17 
18 #To run a compound command, add "shell=True", which executes the quoted string in a terminal. Note that this saves only the result of running the command, not its output: a non-zero returncode generally means the command failed, "0" means it succeeded, but the output itself cannot be recovered--keep that in mind!
19 >>> a = subprocess.run("df -h | grep /dev/sda1",shell=True)
20 /dev/sda1       283M   28M  240M  11% /boot
21 >>> a.returncode
22 0
subprocess.run examples
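One caveat to the claim that subprocess.run cannot save the output: that is only true when the child inherits the terminal. Passing stdout=subprocess.PIPE (or capture_output=True on Python 3.7+) captures the output on the returned object alongside the exit status; a small sketch:

```python
import subprocess

# With stdout=subprocess.PIPE the output is captured on the result object
# instead of going to the terminal; returncode still holds the exit status.
r = subprocess.run("echo hello", shell=True, stdout=subprocess.PIPE)
print(r.returncode)  # 0
print(r.stdout)      # b'hello\n'
```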

2>.Execute a command and return its exit status, "0" or non-"0"

 1 #Execute a command and return its exit status, 0 or non-0
 2 >>> retcode = subprocess.call(["ls", "-l"])
 3 total 96
 4 -rw-------. 1 root root  3321 Oct 13 10:26 anaconda-ks.cfg
 5 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Desktop
 6 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Documents
 7 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Downloads
 8 -rw-r--r--. 1 root root 41433 Oct 13 10:26 install.log
 9 -rw-r--r--. 1 root root  9154 Oct 13 10:24 install.log.syslog
10 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Music
11 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Pictures
12 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Public
13 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Templates
14 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Videos
15 >>>
16 
17 
18 #Execute a command; return normally if the exit status is 0, otherwise raise an exception
19 >>> subprocess.check_call(["ls", "-l"])    
20 total 96
21 -rw-------. 1 root root  3321 Oct 13 10:26 anaconda-ks.cfg
22 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Desktop
23 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Documents
24 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Downloads
25 -rw-r--r--. 1 root root 41433 Oct 13 10:26 install.log
26 -rw-r--r--. 1 root root  9154 Oct 13 10:24 install.log.syslog
27 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Music
28 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Pictures
29 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Public
30 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Templates
31 drwxr-xr-x. 2 root root  4096 Oct 13 22:03 Videos
32 0
33 >>>
subprocess.call and subprocess.check_call examples

3>.Takes a command string and returns a tuple: the first element is the exit status, the second the command output

1 >>> subprocess.getstatusoutput('ls /bin/pwd')
2 (0, '/bin/pwd')
3 >>> 
subprocess.getstatusoutput example

4>.Takes a command string and returns the output

1 >>> subprocess.getoutput('ifconfig | grep eth0')
2 'eth0      Link encap:Ethernet  HWaddr 00:0C:29:D4:DB:87  '
3 >>> 
subprocess.getoutput example

5>.Execute a command and return its output; note the output is returned, not printed--below it is assigned to res

1 >>> res=subprocess.check_output(['pwd'])    
2 >>> res
3 b'/root\n'
4 >>> 
subprocess.check_output example

 6>.All of the functions above are wrappers around subprocess.Popen

 1 >>> p = subprocess.Popen("df -h|grep /dev/sda1",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
 2 >>> p.stdout.read()
 3 b'/dev/sda1       283M   28M  240M  11% /boot\n'
 4 >>> 
 5 
 6 '''
 7 Note:
 8 a walk-through of the first line:
 9 subprocess.Popen opens a terminal (it merely starts a process); stdin=subprocess.PIPE sends the child's input through a pipe, stdout=subprocess.PIPE sends its output through a pipe, and stderr=subprocess.PIPE does the same for errors.
10 '''
subprocess.Popen usage
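Reading p.stdout directly works for small outputs, but the standard, deadlock-safe way to drain a Popen with pipes is communicate(), which reads both streams to the end and waits for the process:

```python
import subprocess

p = subprocess.Popen("echo hello; echo oops >&2", shell=True,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
# communicate() returns an (stdout, stderr) tuple and reaps the child,
# avoiding the deadlock that can occur when a pipe buffer fills up.
out, err = p.communicate()
print(out)           # b'hello\n'
print(err)           # b'oops\n'
print(p.returncode)  # 0
```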

7>.Check whether a command has finished

 1 >>> p = subprocess.Popen("top -bn 5",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
 2 >>> p.poll()
 3 >>> p.poll()
 4 >>> p.poll()
 5 >>> p.poll()
 6 >>> p.poll()
 7 >>> p.poll()
 8 0
 9 >>> p.poll()
10 0
11 >>> 
12 '''
13 poll()
14 Check if child process has terminated. Returns returncode
15 '''
poll() example (returns immediately, no waiting)
1 >>> p = subprocess.Popen("top -bn 5",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
2 >>> p.wait()
3 0
4 >>> 
5 
6 '''
7 wait()
8 Wait for child process to terminate. Returns returncode attribute.
9 '''
wait() example (blocks until the child exits); returns the exit status
1 >>> p = subprocess.Popen("top -bn 5",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True) 
2 >>> p.poll()
3 >>> p.terminate()
4 >>> p.poll()     
5 143
6 
7 '''
8 terminate() kills the started process; p.poll() then returns a non-"0" code, because the process did not finish normally--it was killed before completing.
9 '''
terminate() example: kill the started process directly
 1 >>> p = subprocess.Popen("df -h;sleep 100",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
 2 >>> p.poll()       
 3 >>> p.poll()
 4 >>> p.communicate(timeout=2)
 5 Traceback (most recent call last):
 6   File "<stdin>", line 1, in <module>
 7   File "/usr/local/lib/python3.5/subprocess.py", line 1068, in communicate
 8     stdout, stderr = self._communicate(input, endtime, timeout)
 9   File "/usr/local/lib/python3.5/subprocess.py", line 1699, in _communicate
10     self._check_timeout(endtime, orig_timeout)
11   File "/usr/local/lib/python3.5/subprocess.py", line 1094, in _check_timeout
12     raise TimeoutExpired(self.args, orig_timeout)
13 subprocess.TimeoutExpired: Command 'df -h;sleep 100' timed out after 2 seconds
14 >>> 
15 
16 '''
17 communicate() waits for the task to finish. You can pass it a timeout argument, in seconds by default; if the command runs longer than the given time a "TimeoutExpired" exception is raised, which you can deal with through ordinary exception handling!
18 '''
communicate() example
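As the note above says, TimeoutExpired is handled with ordinary exception handling; the usual pattern is to kill the child and then call communicate() once more to reap it and drain the pipes:

```python
import subprocess

p = subprocess.Popen("sleep 100", shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    out, err = p.communicate(timeout=1)
except subprocess.TimeoutExpired:
    p.kill()                    # stop the runaway child
    out, err = p.communicate()  # reap it and drain the pipes
    print("command timed out")
```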
 1 >>> def name():
 2 ...     print("my name is yinzhengjie!")
 3 ... 
 4 >>> p = subprocess.Popen("pwd",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, preexec_fn=name)
 5 >>> p.stdout.read()
 6 b'my name is yinzhengjie!\n/root\n'
 7 >>> 
 8 
 9 '''
10 preexec_fn: effective only on Unix; it specifies a callable object that is invoked in the child process just before the child's program runs--see the output above.
11 '''
preexec_fn example
1 >>> p = subprocess.Popen("pwd",cwd="/usr/local",stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)  
2 >>> p.stdout.read()
3 b'/usr/local\n'
4 >>> 
5 
6 '''
7 cwd: sets the working directory of the child process
8 '''
cwd example
1 >>> p = subprocess.Popen("echo $name_str",cwd="/usr/local",shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE,env={"name_str":"yinzhengjie"})
2 >>> p.stdout.read()
3 b'yinzhengjie\n'
4 >>> 
5 '''
6 Note:
7 env: specifies the environment variables of the child process. If env = None, the child inherits the environment of the parent process.
8 '''
env example

 

14.The re module

  Regular expressions are used for fuzzy matching. Python's regex API is fairly simple; it is the pattern rules that take some effort, and you build different patterns to fit your needs

Common regular expression symbols:

1 '\A'    matches only at the start of the string; re.search("\Ain","yinzhengjie") finds nothing
2 '\Z'    matches at the end of the string, same as $
3 '\d'    matches a digit 0-9
4 '\D'    matches a non-digit
5 '\w'    matches [A-Za-z0-9]
6 '\W'    matches anything outside [A-Za-z0-9]
7 '\s'    matches whitespace, including \t, \n, \r ; re.search("\s+","ab\tc1\n3").group() gives '\t'
1 re.match matches from the beginning of the string
2 re.search matches anywhere in the string
3 re.findall returns every match as the elements of a list
4 re.split splits the string, using the matched text as the delimiter
5 re.sub      matches and replaces
The most commonly used matching functions

1>.'.'     by default matches any single character except \n; with flag DOTALL it matches any character, newlines included

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.match("yin","yinzhengjie"))  #exact match: the pattern comes first, the target string second
 8 print(re.match(".","yinzhengjie"))  #fuzzy-match a single character
 9 print(re.match("...","yinzhengjie"))
10 print(re.match("....","yin\nzhengjie"))  #fuzzy-match 4 characters from the start; '.' cannot match the newline "\n", so this fails
11 
12 
13 #The code above prints the following:
14 <_sre.SRE_Match object; span=(0, 3), match='yin'>
15 <_sre.SRE_Match object; span=(0, 1), match='y'>
16 <_sre.SRE_Match object; span=(0, 3), match='yin'>
17 None
'.' usage example

2>.'^'     matches at the start of the string; with flags MULTILINE it also matches after each newline, so (r"^a","\nabc\neee",flags=re.MULTILINE) matches too

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.match("^Y","Yinzhengjie"))
 8 print(re.match("jie","Yinzhengjie")) #match only from the beginning of the string
 9 print(re.search("jie","yinzhengjie"))  #search anywhere in the string
10 
11 
12 
13 #The code above prints the following:
14 <_sre.SRE_Match object; span=(0, 1), match='Y'>
15 None
16 <_sre.SRE_Match object; span=(8, 11), match='jie'>
'^' usage example

3>.'$'     matches at the end of the string; re.search("foo$","bfoo\nsdfsf",flags=re.MULTILINE).group() also works

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("jie$","Yinzhengjie")) #search matches anywhere in the string, not only from the start
 8 
 9 
10 #The code above prints the following:
11 <_sre.SRE_Match object; span=(8, 11), match='jie'>
'$' usage example

4>.'+'     matches the preceding character 1 or more times; re.findall("ab+","ab+cd+abb+bba") gives ['ab', 'abb']

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.match(".+","Yinzhengjie")) #match from the beginning of the string
 8 print(re.match("^.+","Yinzhengjie"))
 9 print(re.match(".+","Yinzheng\njie")) #'.' cannot match "\n", so the match stops at "Yinzheng"
10 
11 
12 
13 #The code above prints the following:
14 <_sre.SRE_Match object; span=(0, 11), match='Yinzhengjie'>
15 <_sre.SRE_Match object; span=(0, 11), match='Yinzhengjie'>
16 <_sre.SRE_Match object; span=(0, 8), match='Yinzheng'>
'+' usage example

5>.'*'     matches the preceding character 0 or more times; re.findall("ab*","cabb3abcbbac") gives ['abb', 'ab', 'a']

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("Y*","Yinzhengjie"))
 8 print(re.search("jie*","Yinzhengjie"))
 9 print(re.search("e*","Yinzhengjie"))
10 
11 
12 #The code above prints the following:
13 <_sre.SRE_Match object; span=(0, 1), match='Y'>
14 <_sre.SRE_Match object; span=(8, 11), match='jie'>
15 <_sre.SRE_Match object; span=(0, 0), match=''>
'*' usage example

6>.'?'     matches the preceding character 0 or 1 times

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("Y?","Yinzhengjie"))
 8 print(re.search("jie?","Yinzhengjie"))
 9 print(re.search("e?","Yinzhengjieee"))
10 
11 
12 #The code above prints the following:
13 <_sre.SRE_Match object; span=(0, 1), match='Y'>
14 <_sre.SRE_Match object; span=(8, 11), match='jie'>
15 <_sre.SRE_Match object; span=(0, 0), match=''>
'?' usage example

 7>.'{m}'   matches the preceding character exactly m times

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("jie{1}","Yinzhengjiejie"))
 8 print(re.search("(jie){2}","Yinzhengjiejie"))
 9 print(re.search("e{3}","Yinzhengjieee"))
10 
11 #The code above prints the following:
12 <_sre.SRE_Match object; span=(8, 11), match='jie'>
13 <_sre.SRE_Match object; span=(8, 14), match='jiejie'>
14 <_sre.SRE_Match object; span=(10, 13), match='eee'>
'{m}' usage example

8>.'{n,m}' matches the preceding character n to m times; re.findall("ab{1,3}","abb abc abbcbbb") gives ['abb', 'ab', 'abb']

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("e{1,3}","Yinzhengjieee"))
 8 print(re.search("e{2,3}","Yinzhengjieee"))
 9 print(re.search("e{3,10}","Yinzhengjieee"))
10 
11 
12 #The code above prints the following:
13 <_sre.SRE_Match object; span=(5, 6), match='e'>
14 <_sre.SRE_Match object; span=(10, 13), match='eee'>
15 <_sre.SRE_Match object; span=(10, 13), match='eee'>
'{n,m}' usage example

9>.'|'     matches what is on either side of the |; re.search("abc|ABC","ABCBabcCD").group() gives 'ABC'

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("e|E","YinzhengjiE"))
 8 print(re.search("e|E","YinzhEngjie"))
 9 
10 #The code above prints the following:
11 <_sre.SRE_Match object; span=(5, 6), match='e'>
12 <_sre.SRE_Match object; span=(5, 6), match='E'>
'|' usage example

10>.'(...)' group matching; re.search("(abc){2}a(123|456)c", "abcabca456c").group() gives 'abcabca456c'

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("\d","612401199607237057")) #match a single digit
 8 print(re.search("\d+","612401199607237057"))  #match digits one or more times; matching is greedy, so it returns the longest run of digits it can find
 9 print(re.search("\d...","612401199607237057")) #match one digit plus 3 arbitrary characters
10 print(re.search("\d{4}","612401199607237057"))  #match 4 digits
11 print(re.search("[0-9]{4}","612401199607237057")) #[0-9] is simply equivalent to \d
12 print(re.search("(\d{3})(\d{3})(\d...(3))","612401199607237057"))  #this one is a bit twisty--it took me a while to work out, so let me share how to read it. The first two "(\d{3})" groups each match three digits, six digits in total; "(\d...(3))" then means: match one digit plus three arbitrary characters, where the final character must be the digit "3".
13 print(re.search("(\d{3})(\d{3})(\d(7))","612401199607237057")) #matches seven digits followed by the digit "7" [(\d{3})(\d{3}) is 6 digits, and the extra \d makes 7; the trailing (7) just means the match must end with "7". If you understood the previous example, this one is no trouble at all!]
14 print(re.search("(\d{3})(\d{3})(\d...(3))","612401199607237057").group()) #the matched text itself
15 print(re.search("(\d{3})(\d{3})(\d...(3))","612401199607237057").groups())  #the same match split into its groups, which makes the example above much easier to see
16 number = re.search("(\d{3})(\d{3})(\d...(3))","612401199607237057").groups()  #the groups can also be assigned to a variable for convenient indexing, like this:
17 print(number[0])
18 print(number[3])
19 
20 
21 #The code above prints the following:
22 <_sre.SRE_Match object; span=(0, 1), match='6'>
23 <_sre.SRE_Match object; span=(0, 18), match='612401199607237057'>
24 <_sre.SRE_Match object; span=(0, 4), match='6124'>
25 <_sre.SRE_Match object; span=(0, 4), match='6124'>
26 <_sre.SRE_Match object; span=(0, 4), match='6124'>
27 <_sre.SRE_Match object; span=(3, 14), match='40119960723'>
28 <_sre.SRE_Match object; span=(4, 12), match='01199607'>
29 40119960723
30 ('401', '199', '60723', '3')
31 401
32 3
'(...)' group matching examples

 11>.'\d','\D','\w','\W' usage:

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("\D","!Yinzhengjie612401199607237057jie!!!!")) #match a single non-digit
 8 print(re.search("\D+","Yinzhengjie612401199607237057jie!!!!"))  #match one or more non-digits
 9 print(re.search("\D+\d+","Yinzhengjie612401199607237057jie!!!!")) #match non-digits followed by digits (note that the trailing "jie!!!!" is not matched, because it is not digits)
10 print(re.search("\D+\d+\D+","Yinzhengjie612401199607237057jie!!!!")) #now the whole string is matched
11 print(re.search("\w+","Yinzhengjie612401199607237057jie!!!!")) #match [A-Za-z0-9]
12 print(re.search("\W+","Yinzhengjie612401199607237057jie!!!!"))  #match everything outside [A-Za-z0-9], i.e. the special characters
13 
14 
15 #The code above prints the following:
16 <_sre.SRE_Match object; span=(0, 1), match='!'>
17 <_sre.SRE_Match object; span=(0, 11), match='Yinzhengjie'>
18 <_sre.SRE_Match object; span=(0, 29), match='Yinzhengjie612401199607237057'>
19 <_sre.SRE_Match object; span=(0, 36), match='Yinzhengjie612401199607237057jie!!!!'>
20 <_sre.SRE_Match object; span=(0, 32), match='Yinzhengjie612401199607237057jie'>
21 <_sre.SRE_Match object; span=(32, 36), match='!!!!'>
用法展現

12>.'s' 與'(?P<name>...)' 分組匹配用法:

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("\s+","!Yinzhengjie612401199607237057    !!!!"))        #matches the spaces
 8 print(re.search("\s+","!Yinzhengjie612401199607237057   \t !!!!"))      #matches \t
 9 print(re.search("\s+","!Yinzhengjie612401199607237057   \n !!!!"))      #matches \n
10 print(re.search("\s+","!Yinzhengjie612401199607237057   \r !!!!"))      #matches \r
11 print(re.search("\s+","!Yinzhengjie612401199607237057   \r\n !!!!"))    #matches \r\n
12 print(re.search("(?P<province>[0-9]{4})(?P<city>[0-9]{2})(?P<birthday>[0-9]{4})","612401199907237057").groupdict() ) #this looks complicated but is actually simpler than plain '(...)' grouping [which puts the results in a tuple]: here the results become a dict. Look at the first part, (?P<province>[0-9]{4}): "province" becomes the key and the text matched by "[0-9]{4}" the value; the following (?P<city>[0-9]{2}) and (?P<birthday>[0-9]{4}) work exactly the same way
13 a = re.search("(?P<province>[0-9]{4})(?P<city>[0-9]{2})(?P<birthday>[0-9]{4})","612401199907237057").groupdict()  #likewise, after grouping we can take values out of the dict:
14 print(a.get("birthday"))
15 print(a.get("province"))
'(?P<name>...)' group matching example

 13>.findall usage:

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.findall("[a-z]","Yinzhengjie612401199607237057jie!!!!\n6666\r@@@\t")) #match only lowercase letters and return the results as a list
 8 print(re.findall("[A-z]","Yinzhengjie612401199607237057jie!!!!\n6666\r@@@\t"))  #match uppercase and lowercase letters only
 9 print(re.findall("[0-9]","Yinzhengjie612401199607237057jie!!!!\n6666\r@@@\t"))  #match digits only
10 print(re.findall("\w","Yinzhengjie612401199607237057jie!!!!\n6666\r@@@\t"))  #match digits and letters only
11 print(re.findall("\W","Yinzhengjie612401199607237057jie!!!!\n6666\r@@@\t"))  #match special characters only
12 
13 
14 
15 #The code above prints the following:
16 ['i', 'n', 'z', 'h', 'e', 'n', 'g', 'j', 'i', 'e', 'j', 'i', 'e']
17 ['Y', 'i', 'n', 'z', 'h', 'e', 'n', 'g', 'j', 'i', 'e', 'j', 'i', 'e']
18 ['6', '1', '2', '4', '0', '1', '1', '9', '9', '6', '0', '7', '2', '3', '7', '0', '5', '7', '6', '6', '6', '6']
19 ['Y', 'i', 'n', 'z', 'h', 'e', 'n', 'g', 'j', 'i', 'e', '6', '1', '2', '4', '0', '1', '1', '9', '9', '6', '0', '7', '2', '3', '7', '0', '5', '7', 'j', 'i', 'e', '6', '6', '6', '6']
20 ['!', '!', '!', '!', '\n', '\r', '@', '@', '@', '\t']
re.findall usage example

14>.split usage

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.split("\W","172.30.1.2")) #split the string on the special characters
 8 print(re.split("\W","192.168@2!24")) #same as above
 9 
10 
11 
12 #The code above prints the following:
13 ['172', '30', '1', '2']
14 ['192', '168', '2', '24']
re.split usage example

15>.sub usage:

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.sub("\d{4}","2017","I was born in 1991-05-19,alex was born in 1937-9-15",count=1)) #match "\d{4}" (the first run of 4 digits) in the string and replace the match with "2017"; note that count=1 means replace only 1 match--without count, every run of 4 consecutive digits is replaced!
 8 print(re.sub("\d{4}","2017","I was born in 1991-05-19,alex was born in 1937-9-15",count=2)) #replace the first 2 matches
 9 print(re.sub("\d{4}","2017","I was born in 1991-05-19,alex was born in 1937-9-15",))  #replace every match
10 
11 
12 #The code above prints the following:
13 I was born in 2017-05-19,alex was born in 1937-9-15
14 I was born in 2017-05-19,alex was born in 2017-9-15
15 I was born in 2017-05-19,alex was born in 2017-9-15
re.sub usage example
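Besides a fixed replacement string, re.sub also accepts a function, which is called once per match object and returns the replacement text; a small sketch (the bump_year helper is just an illustration) that increments every 4-digit year:

```python
import re

def bump_year(m):
    # m is a match object; m.group() is the matched text
    return str(int(m.group()) + 1)

s = "I was born in 1991, alex was born in 1937"
print(re.sub(r"\d{4}", bump_year, s))
# I was born in 1992, alex was born in 1938
```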

 16>.The backslash headache

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 '''
 8         Like most programming languages, regular expressions use "\" as the escape character, which can lead to backslash trouble. Say you need to match the literal character "\" in some text: the regex written in a programming language needs four backslashes "\\\\"--the language first escapes each pair down to one backslash, and the regex engine then escapes the remaining two down to one. Python's raw strings solve this nicely: the regex in this example can simply be written r"\\". Similarly, the "\\d" that matches a digit can be written r"\d". With raw strings you no longer have to worry about a missing backslash, and the expression is much easier to read.
 9 '''
10 
11 print(re.search("\\d","\database"))   #matches a digit
12 print(re.search(r"\\d","\database"))  #matches a literal backslash followed by "d" (this is how you match out special sequences such as "\D", "\Z" and so on), option one
13 print(re.search("\\\\d","\database")) #matches a literal backslash followed by "d", option two
14 
15 
16 
17 #The code above prints the following:
18 None
19 <_sre.SRE_Match object; span=(0, 2), match='\\d'>
20 <_sre.SRE_Match object; span=(0, 2), match='\\d'>
Turning special sequences into literal characters
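A quick check that the two spellings above really are the same pattern: r"\\d" and "\\\\d" are the identical two characters, a backslash followed by d (the target string is written with an explicit double backslash here to avoid the invalid-escape warning newer Pythons emit for "\d"):

```python
import re

# The raw-string form and the doubled-backslash form are the same pattern.
print(r"\\d" == "\\\\d")  # True

# Both match a literal backslash followed by "d".
print(re.search(r"\\d", "\\database").group())  # \d
```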

17>.A few matching flags worth knowing

 1 #!/usr/bin/env python
 2 #_*_coding:utf-8_*_
 3 #@author :yinzhengjie
 4 #blog:http://www.cnblogs.com/yinzhengjie/tag/python%E8%87%AA%E5%8A%A8%E5%8C%96%E8%BF%90%E7%BB%B4%E4%B9%8B%E8%B7%AF/
 5 #EMAIL:y1053419035@qq.com
 6 import re
 7 print(re.search("[a-z]{2}","My Name Is Yinzhengjie",flags=re.I)) #ignore case
 8 print(re.search(".+","My Name Is\r\n Yinzhengjie",flags=re.S))  #dot-matches-all mode: changes the behaviour of '.' so it also matches "\n", "\r", "\r\n" and so on
 9 print(re.search("^M","\nMy Name Is Yinzhengjie",flags=re.I))
10 print(re.search("^M","\nMy Name Is Yinzhengjie",flags=re.M))  #match an "M" at the start of a line: "flags=re.M" lets "^" match right after the newline as well
11 
12 
13 #The code above prints the following:
14 <_sre.SRE_Match object; span=(0, 2), match='My'>
15 <_sre.SRE_Match object; span=(0, 24), match='My Name Is\r\n Yinzhengjie'>
16 None
17 <_sre.SRE_Match object; span=(1, 2), match='M'>
Extra tips