There are quite a few Python interpreters, but the one with roughly 99.9% market share is CPython, which is written in C. It is this interpreter that defines the GIL.

In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have grown to depend on the guarantees that it enforces.)

In other words: no matter how many CPUs you have, Python calmly allows only one thread to run at any given moment. (A process can start multiple threads, but only one of them executes at a time.)

Here is an example:
def add():
    sum = 0
    for i in range(10000000):
        sum += i
    print("sum", sum)

def mul():
    sum2 = 1
    for i in range(1, 100000):
        sum2 *= i
    print("sum2", sum2)

import threading, time

start = time.time()
t1 = threading.Thread(target=add)
t2 = threading.Thread(target=mul)
l = []
l.append(t1)
l.append(t2)

print(time.ctime())
add()    # the threads above are created but never started here:
mul()    # add() and mul() run one after the other, serially
print("cost time %s" % (time.ctime()))
Result:

Sat Apr 14 20:10:39 2018
sum 49999995000000
sum2 282422940796034787429342157802453551847
cost time Sat Apr 14 20:10:50 2018

sum2 is huge, so only part of it is shown. You can see that serial execution takes about 11 seconds. (I also had some other software open that was eating a lot of memory.)
Now run the two functions in separate threads:

def add():
    sum = 0
    for i in range(10000000):
        sum += i
    print("sum", sum)

def mul():
    sum2 = 1
    for i in range(1, 100000):
        sum2 *= i
    print("sum2", sum2)

import threading, time

start = time.time()
t1 = threading.Thread(target=add)
t2 = threading.Thread(target=mul)
l = []
l.append(t1)
l.append(t2)

print(time.ctime())
for t in l:    # start both threads
    t.start()
for t in l:    # wait for both threads to finish
    t.join()
print("cost time %s" % (time.ctime()))
Result:

Sat Apr 14 20:14:51 2018
sum 49999995000000
sum2 28242294079603478742934215780245355
cost time Sat Apr 14 20:15:01 2018

You can see that execution got faster, but the time saved is small. Why is that?
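Before answering, it is easier to compare the two approaches if we measure elapsed seconds directly. The snippets above create start = time.time() but never use it; the following is a minimal sketch (not from the original post, and timings will vary by machine) that times both variants with time.time():

# Minimal timing sketch: measure elapsed seconds for the serial and
# the threaded variant instead of reading time.ctime() output by eye.
import threading, time

def add():
    s = 0
    for i in range(10000000):
        s += i

def mul():
    s2 = 1
    for i in range(1, 100000):
        s2 *= i

start = time.time()
add()
mul()
print("serial cost: %.2f s" % (time.time() - start))

start = time.time()
t1 = threading.Thread(target=add)
t2 = threading.Thread(target=mul)
t1.start(); t2.start()
t1.join(); t2.join()
print("threaded cost: %.2f s" % (time.time() - start))

Because both functions are CPU-bound, the GIL lets only one thread execute bytecode at a time, so the threaded timing stays close to the serial one.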
First, we need to know that tasks come in two types: CPU-bound and I/O-bound.

The example above is CPU-bound: it needs a lot of computation.

The other type needs frequent input/output (as soon as a thread hits I/O, the interpreter switches to another thread).

Next, an I/O-bound example:
import threading
import time

def music():
    print("begin to listen %s" % time.ctime())
    time.sleep(3)
    print("stop to listen %s" % time.ctime())

def game():
    print("begin to play game %s" % time.ctime())
    time.sleep(5)
    print("stop to play game %s" % time.ctime())

if __name__ == '__main__':
    t1 = threading.Thread(target=music)
    t2 = threading.Thread(target=game)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("ending")
Output:

begin to listen Sat Apr 14 20:23:03 2018
begin to play game Sat Apr 14 20:23:03 2018
stop to listen Sat Apr 14 20:23:06 2018
stop to play game Sat Apr 14 20:23:08 2018
ending

Process finished with exit code 0
For comparison, call the two functions serially:

import threading
import time

def music():
    print("begin to listen %s" % time.ctime())
    time.sleep(3)
    print("stop to listen %s" % time.ctime())

def game():
    print("begin to play game %s" % time.ctime())
    time.sleep(5)
    print("stop to play game %s" % time.ctime())

if __name__ == '__main__':
    music()
    game()
    # t1 = threading.Thread(target=music)
    # t2 = threading.Thread(target=game)
    # t1.start()
    # t2.start()
    # t1.join()
    # t2.join()
    print("ending")
Output:

begin to listen Sat Apr 14 20:24:44 2018
stop to listen Sat Apr 14 20:24:47 2018
begin to play game Sat Apr 14 20:24:47 2018
stop to play game Sat Apr 14 20:24:52 2018
ending

Process finished with exit code 0
Clearly, for I/O-bound tasks the advantage of multithreading is significant.
Synchronization lock (Lock) |
When multiple threads operate on the same shared resource at the same time, the resource gets corrupted. What can we do? (Using join would serialize execution and defeat the purpose of multithreading.)
import time
import threading

def addNum():
    global num            # every thread reads this global variable
    temp = num
    time.sleep(0.0001)
    num = temp - 1        # subtract 1 from the shared variable

num = 100                 # a shared variable
thread_list = []
for i in range(100):
    t = threading.Thread(target=addNum)
    t.start()
    thread_list.append(t)
for t in thread_list:     # wait for all threads to finish
    t.join()

print('final num:', num)

Result:

final num: 87

Process finished with exit code 0
Threads compete for the resource, and whichever thread grabs it gets to run.

We can solve this problem with a synchronization lock (a complete runnable version is sketched after the snippet):

R = threading.Lock()

def sub():
    global num
    R.acquire()
    temp = num - 1
    time.sleep(0.1)
    num = temp
    R.release()
    # the locked section now runs serially; that is simply how the language works
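Here is a minimal runnable sketch (based on the addNum example above, not code from the original post) that applies a Lock; the with statement acquires the lock on entry and releases it on exit, so the read-modify-write of num is no longer interrupted and the final result is reliably 0:

# The earlier addNum example with a Lock protecting the shared variable.
import time
import threading

num = 100
lock = threading.Lock()

def addNum():
    global num
    with lock:              # only one thread at a time may enter this block
        temp = num
        time.sleep(0.0001)
        num = temp - 1

thread_list = []
for i in range(100):
    t = threading.Thread(target=addNum)
    t.start()
    thread_list.append(t)
for t in thread_list:
    t.join()

print('final num:', num)    # now reliably prints 0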
Thread deadlock and recursive locks (RLock) |
import threading, time

class myThread(threading.Thread):
    def doA(self):
        lockA.acquire()
        print(self.name, "gotlockA", time.ctime())
        time.sleep(3)
        lockB.acquire()
        print(self.name, "gotlockB", time.ctime())
        lockB.release()
        lockA.release()

    def doB(self):
        lockB.acquire()
        print(self.name, "gotlockB", time.ctime())
        time.sleep(2)
        lockA.acquire()
        print(self.name, "gotlockA", time.ctime())
        lockA.release()
        lockB.release()

    def run(self):
        self.doA()
        self.doB()

if __name__ == "__main__":
    lockA = threading.Lock()
    lockB = threading.Lock()
    threads = []
    for i in range(5):
        threads.append(myThread())
    for t in threads:
        t.start()
The program hangs at this point: one thread is in doB holding lockB while waiting for lockA, and another thread is in doA holding lockA while waiting for lockB. Neither can get the lock it needs, so we have a deadlock.
Once a thread has acquired a lock, no other thread can acquire it again. Hence the recursive lock, RLock: internally it maintains a counter that records the number of acquire() calls, so the thread that holds it can acquire the same resource multiple times. Only after all of that thread's acquires have been released can other threads obtain the resource.
import threading, time

class myThread(threading.Thread):
    def doA(self):
        R_lock.acquire()
        print(self.name, "gotlockA", time.ctime())
        time.sleep(3)
        R_lock.acquire()
        print(self.name, "gotlockB", time.ctime())
        R_lock.release()
        R_lock.release()

    def doB(self):
        R_lock.acquire()
        print(self.name, "gotlockB", time.ctime())
        time.sleep(2)
        R_lock.acquire()
        print(self.name, "gotlockA", time.ctime())
        R_lock.release()
        R_lock.release()

    def run(self):
        self.doA()
        self.doB()

if __name__ == "__main__":
    R_lock = threading.RLock()
    threads = []
    for i in range(5):
        threads.append(myThread())
    for t in threads:
        t.start()
Result: the deadlock is gone and all of the threads run to completion.
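What RLock adds over a plain Lock is re-entrancy within a single thread. Here is a minimal sketch (not from the original post) of the counter behaviour described above:

# The same thread may acquire an RLock repeatedly; the internal counter
# must drop back to zero (every acquire matched by a release) before
# another thread can take the lock.
import threading

r = threading.RLock()

r.acquire()      # counter = 1
r.acquire()      # counter = 2 -- same thread, so this does NOT block
print("re-acquired the RLock in the same thread")
r.release()      # counter = 1
r.release()      # counter = 0 -- now other threads may acquire it

# A plain Lock would hang here: a second acquire() by the same thread
# blocks forever because Lock is not re-entrant.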
Synchronization condition (Event) |
An event is a simple synchronization object; the event represents an internal flag, and threads can wait for the flag to be set, or set or clear the flag themselves.
event = threading.Event()

# a client thread can wait for the flag to be set
event.wait()

# a server thread can set or reset it
event.set()
event.clear()
If the flag is set, the wait method doesn't do anything. If the flag is cleared, wait will block until it becomes set again. Any number of threads may wait for the same event.

import threading, time

class Boss(threading.Thread):
    def run(self):
        print("BOSS: everyone has to work overtime until 22:00 tonight.")
        print(event.isSet())
        event.set()
        time.sleep(5)
        print("BOSS: <22:00> you can go home now.")
        print(event.isSet())
        event.set()

class Worker(threading.Thread):
    def run(self):
        event.wait()
        print("Worker: sigh... what a hard life!")
        time.sleep(1)
        event.clear()
        event.wait()
        print("Worker: Oh yeah!")

if __name__ == "__main__":
    event = threading.Event()
    threads = []
    for i in range(5):
        threads.append(Worker())
    threads.append(Boss())
    for t in threads:
        t.start()
    for t in threads:
        t.join()
Semaphore |
A semaphore is used to limit the number of threads running concurrently. BoundedSemaphore and Semaphore manage an internal counter that is decremented by every acquire() call and incremented by every release() call.

The counter can never drop below 0. When it is 0, acquire() blocks the calling thread until some other thread calls release(). (Think of it as a fixed number of parking spaces.)

The only difference between BoundedSemaphore and Semaphore is that the former checks, on release(), whether the counter would exceed its initial value, and raises an exception if it would.
import threading, time

class myThread(threading.Thread):
    def run(self):
        if semaphore.acquire():
            print(self.name)
            time.sleep(5)
            semaphore.release()

if __name__ == "__main__":
    semaphore = threading.Semaphore(5)
    thrs = []
    for i in range(100):
        thrs.append(myThread())
    for t in thrs:
        t.start()
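To illustrate the difference mentioned above, here is a minimal sketch (not from the original post): an extra release() is silently accepted by Semaphore, while BoundedSemaphore raises a ValueError because the counter would exceed its initial value.

# Semaphore vs BoundedSemaphore on an unmatched release().
import threading

s = threading.Semaphore(2)
s.release()                      # counter becomes 3 -- silently accepted

b = threading.BoundedSemaphore(2)
try:
    b.release()                  # would push the counter past its initial value
except ValueError as e:
    print("BoundedSemaphore refused the extra release:", e)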