Based on a Unix environment (Linux, macOS).
The main (parent) process has to wait for its child processes to end before it can exit itself.
The main process keeps monitoring the running state of its children; some time after a child process ends, the main process reaps it.
Why doesn't the main process reap a child the moment it ends?
Unix provides a mechanism for exactly this.
When a child process ends, it immediately releases its open file handles and most of its memory, but it keeps a little information behind: its process ID, end time and exit status, waiting for the main process to check on it and reap it.
Zombie process: every child process, after it ends and before it is reaped by the main process, is in the zombie state.
Are zombie processes harmful???
If the parent process never reaps its zombies (with wait/waitpid) and a large number of zombie processes pile up, they take up memory and use up process PIDs.
Orphan process:
The parent process ends for some reason while its child processes are still running, so those children become orphan processes. Once the parent has exited, all of its orphans are adopted by the init process: init becomes their new parent and reaps them.
How are zombie processes dealt with???
If a parent process spawns lots of children but never reaps them, a large number of zombies build up. The fix is to kill the parent process outright; all of the zombies then become orphan processes and are reaped by init.
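A minimal sketch of the reaping described above (this example is not from the original notes; it assumes a Unix system, since os.fork is Unix-only):

# Minimal sketch (Unix only): the child exits first and stays a zombie
# until the parent calls os.waitpid() to reap it.
import os
import time

pid = os.fork()
if pid == 0:
    # child process: exit immediately and become a zombie
    os._exit(0)
else:
    # parent process: while it sleeps, the child shows up as <defunct> in `ps`
    time.sleep(5)
    reaped_pid, status = os.waitpid(pid, 0)  # reap the zombie child
    print(f'reaped child {reaped_pid}, exit status {status}')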
A mutex lock guarantees that the child processes run serially, while still letting their execution order be random (first come, first served) and keeping the data safe.
# Three colleagues print on the same printer at the same time.
# Three processes simulate the three colleagues; standard output simulates the printer.
Version one:

from multiprocessing import Process
import time
import random
import os

def task1():
    print(f'{os.getpid()} started printing')
    time.sleep(random.randint(1,3))
    print(f'{os.getpid()} finished printing')

def task2():
    print(f'{os.getpid()} started printing')
    time.sleep(random.randint(1,3))
    print(f'{os.getpid()} finished printing')

def task3():
    print(f'{os.getpid()} started printing')
    time.sleep(random.randint(1,3))
    print(f'{os.getpid()} finished printing')

if __name__ == '__main__':
    p1 = Process(target=task1)
    p2 = Process(target=task2)
    p3 = Process(target=task3)
    p1.start()
    p2.start()
    p3.start()
    # Right now all three processes grab the printer concurrently.
    # Concurrency puts efficiency first, but the requirement here is order first.
    # When several processes compete for one resource, keep the order: serial execution, one at a time.
Version two:

from multiprocessing import Process
import time
import random
import os

def task1(p):
    print(f'{p} started printing')
    time.sleep(random.randint(1,3))
    print(f'{p} finished printing')

def task2(p):
    print(f'{p} started printing')
    time.sleep(random.randint(1,3))
    print(f'{p} finished printing')

def task3(p):
    print(f'{p} started printing')
    time.sleep(random.randint(1,3))
    print(f'{p} finished printing')

if __name__ == '__main__':
    p1 = Process(target=task1, args=('p1',))
    p2 = Process(target=task2, args=('p2',))
    p3 = Process(target=task3, args=('p3',))
    p2.start()
    p2.join()
    p1.start()
    p1.join()
    p3.start()
    p3.join()
    # join solves the serialization problem and guarantees an order,
    # but who goes first is fixed in the code, which is unreasonable:
    # when competing for the same resource it should be first come, first served, to keep it fair.
Version three:

from multiprocessing import Process
from multiprocessing import Lock
import time
import random
import os

def task1(p, lock):
    '''
    A lock cannot be acquired twice in a row without being released first:
    lock.acquire()
    lock.acquire()
    lock.release()
    lock.release()
    '''
    lock.acquire()
    print(f'{p} started printing')
    time.sleep(random.randint(1,3))
    print(f'{p} finished printing')
    lock.release()

def task2(p, lock):
    lock.acquire()
    print(f'{p} started printing')
    time.sleep(random.randint(1,3))
    print(f'{p} finished printing')
    lock.release()

def task3(p, lock):
    lock.acquire()
    print(f'{p} started printing')
    time.sleep(random.randint(1,3))
    print(f'{p} finished printing')
    lock.release()

if __name__ == '__main__':
    mutex = Lock()
    p1 = Process(target=task1, args=('p1', mutex))
    p2 = Process(target=task2, args=('p2', mutex))
    p3 = Process(target=task3, args=('p3', mutex))
    p2.start()
    p1.start()
    p3.start()
Version four:

from multiprocessing import Process
from multiprocessing import Lock
import time
import random

def task(name, lock):
    lock.acquire()
    print(f"{name} is running")
    time.sleep(random.randint(1,4))
    print(f"{name} is gone")
    lock.release()

if __name__ == '__main__':
    mutex = Lock()
    for i in range(3):
        p = Process(target=task, args=(f"p{i}", mutex))
        p.start()
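As a side note (not in the original notes): a multiprocessing Lock also works as a context manager, which guarantees the release even if the task raises. A sketch of version four written that way:

from multiprocessing import Process
from multiprocessing import Lock
import time
import random

def task(name, lock):
    with lock:                       # acquire on entry, release on exit, even on error
        print(f"{name} is running")
        time.sleep(random.randint(1,4))
        print(f"{name} is gone")

if __name__ == '__main__':
    mutex = Lock()
    for i in range(3):
        Process(target=task, args=(f"p{i}", mutex)).start()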
The difference between lock and join.
What they have in common: both can turn concurrency into serial execution and guarantee an order.
How they differ: join fixes the order by hand, while lock lets the processes compete for the order, which keeps things fair.
# When many processes compete for one resource (piece of data) and you need to guarantee order (and data safety), they must run serially.
# A mutex lock guarantees both fairness of order and data safety.
# Inter-process communication based on files:
#   low efficiency;
#   you have to manage the lock yourself, which is tedious and easily leads to deadlock.
Processes are isolated from each other at the memory level, but files live on disk, so every process can read and write the same file.
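The ticket-grabbing examples below read and write a small ticket.json file on disk. A minimal way to create it (the starting count of 3 is an assumed value, not specified in the original notes):

# Create the ticket.json file used by the examples below.
# A starting count of 3 tickets is assumed here.
import json

with open('ticket.json', mode='w', encoding='utf-8') as f:
    json.dump({'count': 3}, f)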
# File-based ticket grabbing, version without a lock:

from multiprocessing import Process
import json
import time
import os
import random

def search():
    time.sleep(random.randint(1,3))  # simulate network latency (query step)
    with open('ticket.json', encoding='utf-8') as f1:
        dic = json.load(f1)
        print(f'{os.getpid()} checked the tickets, {dic["count"]} left')

def paid():
    with open('ticket.json', encoding='utf-8') as f1:
        dic = json.load(f1)
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(random.randint(1,3))  # simulate network latency (purchase step)
        with open('ticket.json', encoding='utf-8', mode='w') as f1:
            json.dump(dic, f1)
        print(f'{os.getpid()} purchase succeeded')

def task():
    search()
    paid()

if __name__ == '__main__':
    for i in range(6):
        p = Process(target=task)
        p.start()
    # With no lock, several processes can read the same count before any of them
    # writes it back, so more tickets get "sold" than actually exist.
# The same example, with a mutex around the purchase step so only one process buys at a time:

from multiprocessing import Process
from multiprocessing import Lock
import json
import time
import os
import random

def search():
    time.sleep(random.randint(1,3))  # simulate network latency (query step)
    with open('ticket.json', encoding='utf-8') as f1:
        dic = json.load(f1)
        print(f'{os.getpid()} checked the tickets, {dic["count"]} left')

def paid():
    with open('ticket.json', encoding='utf-8') as f1:
        dic = json.load(f1)
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(random.randint(1,3))  # simulate network latency (purchase step)
        with open('ticket.json', encoding='utf-8', mode='w') as f1:
            json.dump(dic, f1)
        print(f'{os.getpid()} purchase succeeded')

def task(lock):
    search()          # everyone may query concurrently
    lock.acquire()
    paid()            # only one process at a time may buy
    lock.release()

if __name__ == '__main__':
    mutex = Lock()
    for i in range(6):
        p = Process(target=task, args=(mutex,))
        p.start()
# The same ticket-grabbing example based on a shared Queue: each item in the queue is one ticket.

from multiprocessing import Process
from multiprocessing import Queue
import random
import time
import os

def check(q):
    time.sleep(random.randint(1,3))
    num = q.qsize()
    print(f"{os.getpid()} checked the tickets, {num} left")

def paid(q):
    time.sleep(random.randint(1,3))
    try:
        q.get(block=False)   # take one ticket; raises immediately if the queue is empty
        print(f"{os.getpid()} purchase succeeded, {q.qsize()} left")
    except Exception:
        print('no tickets left')

def task(q):
    check(q)
    paid(q)

if __name__ == '__main__':
    q = Queue()
    for i in range(3):
        q.put(1)             # put 3 tickets into the queue
    for i in range(10):
        p = Process(target=task, args=(q,))
        p.start()
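A quick note on why no explicit Lock is needed here: multiprocessing.Queue is already process-safe, and its put/get calls block when the queue is full or empty. A small sketch of that behaviour (the maxsize of 2 and the sample values are assumptions for illustration):

from multiprocessing import Queue

q = Queue(maxsize=2)     # bounded queue that can be shared between processes
q.put('a')
q.put('b')               # queue is now full; another put() with no timeout would block
print(q.get())           # -> 'a'
print(q.get())           # -> 'b'; another blocking get() would wait until something is put
# q.get(block=False) raises queue.Empty right away when the queue is empty,
# which is what the ticket example above relies on.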