day09 Concurrent Programming

一. Table of Contents

  1. The process concept and two ways to create processes

  2. A multi-process crawler

  3. Daemon processes

  4. Process queues

  5. A simple process-queue application (data sharing)

  6. Two ways to create threads

  7. Efficiency comparison: threads vs. processes

  8. Threads share the data of their process

  9. Deadlock

  10. Three kinds of thread queues

  11. Running CPU-bound tasks with multiple threads

  12. Thread pools and process pools

  13. Callback functions

  14. Daemon threads

  15. Coroutines

  16. The GIL (Global Interpreter Lock)

二. Contents

1. The process concept and two ways to create processes

    In technical terms:

The two main jobs of an operating system:
1. Hide the ugly, complicated hardware interfaces and expose a clean interface to applications.
2. Manage and schedule processes, and make their competition for the hardware orderly.

Multiprogramming:
1. Background: to achieve concurrency on a single CPU.
2. It has two parts:
    1. Multiplexing in space (requires hardware-level memory isolation).
    2. Multiplexing in time (the CPU's time slices are shared).
When does a switch happen?
1. The running task hits a blocking (I/O) operation.
2. The running task has run too long (decided by the OS).

Process: a running instance of a program, a task; scheduled by the OS and executed by the CPU.
Program: the code a programmer writes.
Concurrency: pseudo-parallelism; single core + multiprogramming.
Parallelism: true simultaneous execution, only possible with multiple cores.
Synchronous: while one process runs a task, another must wait for it to finish before moving on.
Asynchronous: while one process runs a task, another does not wait; it moves on immediately.

How processes get created:
1. System initialization
2. User interaction
3. A running process spawning another via a system call
4. Batch jobs

The underlying system calls:
1. Linux: fork
2. Windows: CreateProcess

Differences between Linux and Windows processes:
1. Linux processes have a parent-child relationship and form a tree; Windows has no such relationship.
2. On Linux a new process starts as a copy of its parent's address space; on Windows the two processes differ from the very start.
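The raw fork call named above can be tried directly from Python. A minimal sketch, POSIX-only (Linux/macOS; Windows has no fork):

```python
import os

pid = os.fork()  # clone the current process; returns 0 in the child
if pid == 0:
    # child: begins life as a copy of the parent's address space
    print("child %s, parent %s" % (os.getpid(), os.getppid()))
    os._exit(0)
else:
    # parent: reap the child so it does not linger as a zombie
    os.waitpid(pid, 0)
    print("parent %s forked child %s" % (os.getpid(), pid))
```

The `multiprocessing` module used throughout this page wraps exactly this call on Linux (and `CreateProcess` on Windows).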

     Process overview: a process is a running instance of a program and the basic unit the OS schedules dynamically. A process is an entity: each one has its own address space, generally consisting of a text region (the Python file's code), a data region (the variables in the Python file), and a stack. The text region holds the code being executed; the data region holds variables and memory allocated dynamically during execution; the stack region holds the instructions and local variables of active procedure calls.

 How a process terminates:

   1. Normal exit

   2. Exit on error

   3. Fatal error

   4. Killed by another process

On Windows there is only the concept of a handle.

The three process states: ready, running, blocked.

How process concurrency is implemented: the process table records the state a program was in the last time it ran, so the next run can pick up where it left off.

 Way one to start a child process:

import os
import time
import random
from multiprocessing import Process

print(os.cpu_count())  # how many CPUs there are

def func():
    print("func function")
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    f = Process(target=func, name="p2")  # give the process a name
    f.start()  # ask the OS to create a child process
    print("f name is %s" % f.name)  # defaults to Process-1, Process-2, ...; can be set explicitly
    print("main process")
    # The parent must wait for its children to finish; otherwise they become zombie processes.

Way two: subclass Process

from multiprocessing import Process
import os

class Myprocess(Process):
    def __init__(self, func):
        super().__init__()
        self.func = func

    def run(self):
        self.func()

def func1():
    print("child process 1")
    print("child 1 pid", os.getpid())

def func2():
    print("child process 2")
    print("child 2 pid", os.getpid())

if __name__ == '__main__':
    p1 = Myprocess(func1)
    p2 = Myprocess(func2)
    p1.start()  # start() arranges for the child's run() method to be called
    p2.start()
    print("main pid", os.getpid())
join(): blocks the parent until the child finishes, then lets the parent continue.

import time
from multiprocessing import Process

def func(name):
    time.sleep(3)
    print("%s is writing" % name)

if __name__ == '__main__':
    p1 = Process(target=func, args=("ivy",))
    p2 = Process(target=func, args=("zoe",))
    p3 = Process(target=func, args=("zoe",))
    # p1.start()
    # p2.start()  # the parent asks the OS to create the children
    # p1.join()   # blocks only the parent; the children keep running in the background
    # p2.join()

    p_1 = [p1, p2, p3]
    for p in p_1:
        p.start()

    for p in p_1:
        p.join()
    print("main process")

Common Process methods:

import time
import os
from multiprocessing import Process

def func(name):
    time.sleep(3)
    print("%s is writing" % name)

if __name__ == '__main__':
    p1 = Process(target=func, args=("ivy",))
    p1.daemon = True  # daemon children are reclaimed as soon as the parent finishes
    p1.start()
    print(p1.name)        # the process name
    print(os.getpid())    # current process id
    print(os.getppid())   # parent process id
    p1.terminate()        # kill the process
    print(p1.is_alive())  # is the process still alive?
    print("main process")

Socket communication built on multiple processes
Server:

import socket
from multiprocessing import Process

server = socket.socket(socket.AF_INET, type=socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(5)

def talk(conn, addr):
    while True:
        try:
            msg = conn.recv(1024)
            if not msg: break
            conn.send(msg.upper())
        except Exception:
            break

if __name__ == '__main__':
    while True:
        conn, addr = server.accept()
        p = Process(target=talk, args=(conn, addr))
        p.start()

Client:

import socket
client = socket.socket()
client.connect(("127.0.0.1", 8080))
while True:
    msg = input("client says: ")
    client.send(msg.encode("utf-8"))
    msg_server = client.recv(1024)
    print(msg_server.decode("utf-8"))

2. A multi-process crawler

import requests
import time
import os
from multiprocessing import Process

urls = ["http://p1.music.126.net/EAJfo8I22hDJErMR7WyOUQ==/109951162860207008.jpg",
        "http://p0.qhimgs4.com/t01ba9168ef323dfc7a.jpg",
        "http://m.iqiyipic.com/u7/image/20181107/b3/98/uv_20036427021_m_601_720_405.jpg"]

def download(url, i):
    time.sleep(1)
    response = requests.get(url)
    with open("image%s.jpg" % i, mode="wb") as f:
        f.write(response.content)
    print(os.getpid())

if __name__ == '__main__':
    start_time = time.time()
    p_l = []
    for i, url in enumerate(urls):
        p = Process(target=download, args=(url, i + 1))
        p_l.append(p)
        p.start()
    [p.join() for p in p_l]
    print("main process")
    end_time = time.time()
    print("elapsed", end_time - start_time)

3. Daemon processes

  Setting a process as a daemon ties it to its parent: when the parent ends, the daemon ends with it.

  Below, f1 is a daemon. It sleeps for one second, by which time the main process has already finished, so f1 is terminated along with it.

  A daemon process may not create children of its own.

import time
from multiprocessing import Process

def func1():
    time.sleep(1)
    print("I am func1")

def func2():
    print("I am func2")

if __name__ == '__main__':
    f1 = Process(target=func1)
    f2 = Process(target=func2)
    f1.daemon = True  # must be set before start()
    f1.start()
    f2.start()
    f2.join()
    print("I am the main process")

4. Process queues

      Communication between processes requires an IPC mechanism. There are generally two: pipes and queues, and a queue is built on a pipe plus a lock. The downside of the lock is that the processes effectively run serially, lowering throughput; the upside is that the data never gets corrupted. A queue is first in, first out.

Common queue methods:

from multiprocessing import Queue

q = Queue(5)  # maxsize; defaults to unbounded. Queue: first in, first out; stack: first in, last out
q.put("hello")
q.put("world")
q.put("hello world")
# q.put("d", False)   # raise instead of blocking when full; same as put_nowait
q.put("d", timeout=2)  # wait at most two seconds
# Note: you can put objects too, not just strings.
print(q.get())
print(q.get())
print(q.get())
print(q.get(block=False))  # same idea as with put
print(q.full())   # is the queue full?
print(q.empty())  # is the queue empty?
print(q.qsize())  # current size

5. Three ways processes communicate

     1. An IPC queue (simple application: the producer-consumer model for data sharing)

     2. Shared files

     3. The Manager module

     Although processes are isolated from each other, they do share one operating system and one file system:

from multiprocessing import Process

def work(filename, msg):
    with open(filename, mode="a", encoding="utf-8") as f:
        f.write(msg)
        f.write("\n")

if __name__ == '__main__':
    for i in range(5):
        p = Process(target=work, args=("a.txt", "process %s" % str(i)))
        p.start()

The first way: the Queue IPC module, with the producer-consumer model
Example 1:
# The producer-consumer model decouples producers from consumers
# so the two processes do not block or interfere with each other.

import time
import random
from multiprocessing import Process, Queue

def consumer(q, name):
    while True:
        time.sleep(random.randint(1, 3))
        ret = q.get()
        print("\033[41mconsumer %s got %s\033[0m" % (name, ret))

def producer(seq, q, name):
    for item in seq:
        time.sleep(random.randint(1, 3))
        q.put(item)
        print('\033[42mproducer %s produced %s\033[0m' % (name, item))

if __name__ == '__main__':
    q = Queue()
    c = Process(target=consumer, args=(q, "ivy"))
    c.start()

    seq = ["bun %s" % i for i in range(10)]
    producer(seq, q, "chef 1")  # the main process acts as the producer
    print("main process")

 Example 2: producer and consumer in separate child processes; the consumer exits when the producer signals that the queue is finished.

import time
import random
from multiprocessing import Process, Queue

def consumer(q, name):
    while True:
        time.sleep(random.randint(1, 3))
        ret = q.get()
        if ret is None: break  # sentinel: the producer is done
        print("\033[41mconsumer %s got %s\033[0m" % (name, ret))

def producer(seq, q, name):
    for item in seq:
        time.sleep(random.randint(1, 3))
        q.put(item)
        print('\033[42mproducer %s produced %s\033[0m' % (name, item))
    q.put(None)  # tell the consumer nothing more is coming

if __name__ == '__main__':
    q = Queue()
    c = Process(target=consumer, args=(q, "ivy"))
    c.start()

    seq = ["bun %s" % i for i in range(10)]
    p = Process(target=producer, args=(seq, q, "chef 1"))
    p.start()

    print("main process")

 Example 3: the JoinableQueue module plus a daemon process, so items are consumed as they are produced.

import time
import random
from multiprocessing import Process, JoinableQueue

def consumer(q, name):
    while True:
        time.sleep(random.randint(1, 3))
        ret = q.get()
        q.task_done()  # tell the queue this item has been handled
        print("\033[41mconsumer %s got %s\033[0m" % (name, ret))

def producer(seq, q, name):
    for item in seq:
        time.sleep(random.randint(1, 3))
        q.put(item)
        print('\033[42mproducer %s produced %s\033[0m' % (name, item))
    q.join()  # block until every item has been marked task_done()
    print("+++++++++++++++>>>")

if __name__ == '__main__':
    q = JoinableQueue()
    c = Process(target=consumer, args=(q, "ivy"))
    c.daemon = True  # daemon: c ends when the main process ends
    c.start()

    seq = ["bun %s" % i for i in range(10)]
    p = Process(target=producer, args=(seq, q, "chef 1"))
    p.start()
    p.join()  # the main process waits for p; p waits for c to drain the queue.
    # Once the queue is drained, p.join() stops blocking, the main process ends,
    # and the daemon c is reclaimed -- it has nothing left to do anyway.

    print("main process")

The second way: the Manager module
Example 1: several processes modifying shared data structures together

from multiprocessing import Manager, Process
import os

def work(d, lst):
    lst.append(os.getpid())
    d[os.getpid()] = os.getpid()

if __name__ == '__main__':
    m = Manager()
    lst = m.list(["init"])
    d = m.dict({"name": "Ivy"})

    p_1 = []
    for i in range(5):
        p = Process(target=work, args=(d, lst))
        p_1.append(p)
        p.start()
    [p.join() for p in p_1]
    print(d)
    print(lst)

Data sharing with Manager plus a lock:

from multiprocessing import Process, Manager, Lock

def work(d, lock):
    with lock:  # without the lock, the concurrent -= 1 operations would race
        d["count"] -= 1

if __name__ == '__main__':
    lock = Lock()
    m = Manager()
    d = m.dict({'count': 100})
    p_l = []
    for i in range(100):
        p = Process(target=work, args=(d, lock))
        p_l.append(p)
        p.start()
    [p.join() for p in p_l]
    print('main process', d)

6. Two ways to create threads

   The thread concept: every process contains at least one thread of control. The thread is the unit the CPU executes; the process is merely a bundle of resources. What actually gets scheduled onto the CPU is a thread inside some process. Multithreading means one process containing several threads.

Why use threads: creating a process means allocating an address space, which takes time. When several tasks share one set of resources, threads are the better fit: a thread is lighter than a process, uses its process's existing resources, and is quick to create. The advantage of multithreading shows for I/O-bound work; for CPU-bound work it does not help.

Python threads cannot use multiple cores (see the GIL section).

A thread is one assembly line; an assembly line must belong to a workshop, and the running workshop is the process.
A process contains at least one thread: the process is the resource unit, the thread is the CPU's execution unit.

Multithreading: one workshop with several assembly lines, all sharing the workshop's resources (the threads share their process's resources).
The overhead of a thread is far smaller than that of a process.

Why use multiple threads:
1. They share resources.
2. They are cheap to create.

Creation way one:

from threading import Thread

def work(name):
    print("%s say hello" % name)

if __name__ == '__main__':
    t = Thread(target=work, args=("Ivy",))
    t.start()
    print("main thread")

Creation way two:

from threading import Thread

class Work(Thread):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        print("%s say hello" % self.name)

if __name__ == '__main__':
    t = Work("Ivy")
    t.start()

A socket server built on threads
Server:

from socket import *
from threading import Thread

def server(ip, port):
    s = socket(AF_INET, SOCK_STREAM)
    s.bind((ip, port))
    s.listen(5)
    while True:
        conn, addr = s.accept()
        print("client", addr)
        t = Thread(target=talk, args=(conn, addr))
        t.start()

def talk(conn, addr):
    try:
        while True:
            res = conn.recv(1024)
            if not res: break
            print("client %s:%s msg:%s" % (addr[0], addr[1], res))
            conn.send(res.upper())
    except Exception:
        pass
    finally:
        conn.close()

if __name__ == '__main__':
    server("127.0.0.1", 8080)

Client:

from socket import *
c = socket()
c.connect(("127.0.0.1", 8080))
while True:
    msg = input(">>: ").strip()
    if not msg: continue
    c.send(msg.encode("utf-8"))
    res = c.recv(1024)
    print("from server msg:", res.decode("utf-8"))

Common thread methods:

import time
import threading
from threading import Thread

def work():
    time.sleep(2)
    print("%s say hello" % threading.current_thread().getName())

if __name__ == '__main__':
    t = Thread(target=work)
    # t.daemon = True
    # t.setDaemon(True)
    t.start()
    print(threading.enumerate())     # list of currently alive Thread objects
    print(threading.active_count())  # number of alive threads
    print("main thread", threading.current_thread().getName())
Formatting and saving user input to a file with three threads:

from threading import Thread

msg_l = []
format_l = []

def talk():
    while True:
        msg = input(">>: ").strip()
        if not msg: continue
        msg_l.append(msg)

def format():
    while True:
        if msg_l:
            res = msg_l.pop()
            res = res.upper()
            format_l.append(res)

def save():
    while True:
        if format_l:
            res = format_l.pop()
            with open("db.txt", "a", encoding="utf-8") as f:
                f.write("%s\n" % res)

if __name__ == '__main__':
    t1 = Thread(target=talk)
    t2 = Thread(target=format)
    t3 = Thread(target=save)
    t1.start()
    t2.start()
    t3.start()

7. Efficiency comparison: threads vs. processes

Differences between threads and processes

   Threads share the address space of the process that created them, so they can access its data directly and talk to the other threads in that process; processes must use IPC to communicate. Threads are cheap to create, and the main thread can directly control its sibling threads, whereas a process can only control its children, and changes in a child do not affect the parent.

The Python interpreter's threads map directly onto the operating system's native threads; they are kernel-level threads.

8. Threads share the data of their process

  Data sharing can be coordinated with an Event:

from threading import Event, Thread
import threading
import time

def conn_mysql():
    print("%s waiting....." % threading.current_thread().getName())
    e.wait()  # block until the event is set
    print("%s start to connect mysql...." % threading.current_thread().getName())
    time.sleep(2)

def check_mysql():
    print("%s checking....." % threading.current_thread().getName())
    time.sleep(4)
    e.set()  # wake up every thread waiting on the event

if __name__ == '__main__':
    e = Event()
    c1 = Thread(target=conn_mysql)
    c2 = Thread(target=conn_mysql)
    c3 = Thread(target=conn_mysql)
    c4 = Thread(target=check_mysql)
    c1.start()
    c2.start()
    c3.start()
    c4.start()

9. Locks and resolving deadlock (mutexes and recursive locks)

A deadlock example:

from threading import Thread, Lock
import time

class MyThread(Thread):
    def run(self):
        self.f1()
        self.f2()

    def f1(self):
        mutaxA.acquire()
        print("\033[46m%s got lock A\033[0m" % self.name)
        mutaxB.acquire()
        print("\033[43m%s got lock B\033[0m" % self.name)
        mutaxB.release()
        mutaxA.release()

    def f2(self):
        mutaxB.acquire()
        time.sleep(1)
        print("\033[43m%s got lock B\033[0m" % self.name)
        mutaxA.acquire()
        print("\033[42m%s got lock A\033[0m" % self.name)
        mutaxA.release()
        mutaxB.release()

if __name__ == '__main__':
    mutaxA = Lock()
    mutaxB = Lock()
    # One thread sleeps in f2 holding B while another grabs A in f1;
    # each then waits forever for the lock the other holds.
    for i in range(20):
        t = MyThread()
        t.start()

Fixing it with a recursive lock: an RLock keeps a counter, incremented on each acquire and decremented on each release; other threads can only take the lock once the counter drops back to zero.

from threading import Thread, RLock
import time

class MyThread(Thread):
    def run(self):
        self.f1()
        self.f2()

    def f1(self):
        mutaxA.acquire()
        print("\033[46m%s got lock A\033[0m" % self.name)
        mutaxB.acquire()
        print("\033[43m%s got lock B\033[0m" % self.name)
        mutaxB.release()
        mutaxA.release()

    def f2(self):
        mutaxB.acquire()
        time.sleep(1)
        print("\033[43m%s got lock B\033[0m" % self.name)
        mutaxA.acquire()
        print("\033[42m%s got lock A\033[0m" % self.name)
        mutaxA.release()
        mutaxB.release()

if __name__ == '__main__':
    mutaxA = mutaxB = RLock()  # one recursive lock behind both names
    for i in range(20):
        t = MyThread()
        t.start()

Semaphore: like a lock that a fixed number of threads can hold at the same time.

from threading import Thread, Semaphore
import time

def work(id):
    with sem:  # at most 5 threads inside this block at once
        time.sleep(2)
        print("%s say hello" % id)

if __name__ == '__main__':
    sem = Semaphore(5)
    for i in range(20):
        t = Thread(target=work, args=(i,))
        t.start()

Example 1: ticket grabbing with a lock

from multiprocessing import Process, Lock
import json
import time
import random

def work(dbfile, name, lock):
    lock.acquire()
    with open(dbfile, encoding="utf-8") as f:
        dic = json.loads(f.read())

    if dic["count"] > 0:
        dic["count"] -= 1
        time.sleep(random.randint(1, 3))
        with open(dbfile, "w", encoding="utf-8") as f:
            f.write(json.dumps(dic))
        print("\033[43m%s got a ticket\033[0m" % name)
    else:
        print("\033[45m%s failed to get a ticket\033[0m" % name)
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    p_l = []
    for i in range(100):
        p = Process(target=work, args=("a.txt", "user %s" % i, lock))
        p_l.append(p)
        p.start()
    [p.join() for p in p_l]
    print("main process")

Example 2: the same lock taken with the with statement (a context manager)

from multiprocessing import Process, Lock
import json
import time
import random

def work(dbfile, name, lock):
    with lock:  # acquire() on entry, release() on exit, even on error
        with open(dbfile, encoding="utf-8") as f:
            dic = json.loads(f.read())

        if dic["count"] > 0:
            dic["count"] -= 1
            time.sleep(random.randint(1, 3))
            with open(dbfile, "w", encoding="utf-8") as f:
                f.write(json.dumps(dic))
            print("\033[43m%s got a ticket\033[0m" % name)
        else:
            print("\033[45m%s failed to get a ticket\033[0m" % name)

if __name__ == '__main__':
    lock = Lock()
    p_l = []
    for i in range(100):
        p = Process(target=work, args=("a.txt", "user %s" % i, lock))
        p_l.append(p)
        p.start()
    [p.join() for p in p_l]

If acquiring the same lock again would deadlock, swapping Lock for RLock fixes it.

10. Three kinds of thread queues

   The first: first in, first out

import queue
q = queue.Queue(5)
q.put("hello")
q.put("world")
q.put("hello world")
q.put_nowait("hey")
print(q.qsize())

print(q.get())
print(q.get())
print(q.get())
print(q.empty())
print(q.full())
print(q.get_nowait())

The second: last in, first out

import queue
q = queue.LifoQueue(5)
q.put("a")
q.put("b")
q.put_nowait("c")

print(q.get())
print(q.get())
print(q.get())

The third: with priorities
Items must be tuples (or lists); the smaller the number, the higher the priority.

import queue
q = queue.PriorityQueue(5)
q.put((1, "c"))
q.put((2, "a"))
q.put((3, "b"))

print(q.get())
print(q.get())
print(q.get())

11. Running CPU-bound tasks with multiple threads

  For I/O-bound work, multiple processes bring no advantage; for CPU-bound work, multiple processes win.

       A CPU can only run one thread of a given process at a time; the reason is the GIL.

12. Thread pools and process pools

   Process pools: the number of worker processes usually tracks the number of CPU cores.

import time
import random
from concurrent.futures import ProcessPoolExecutor

def func(n):
    time.sleep(random.randint(1, 3))
    return n * n

if __name__ == '__main__':
    pool = ProcessPoolExecutor(max_workers=5)
    p_lst = []
    for i in range(10):
        ret = pool.submit(func, i)  # submit asynchronously: func is the callable, i its argument
        p_lst.append(ret)
    # pool.shutdown()  # stop accepting new tasks; rarely needed explicitly
    # [i.result() for i in p_lst]
    for i in p_lst:
        print(i.result())  # blocks until the result is ready, like join()

13. Callback functions

Callback way 1: the Pool module

from multiprocessing import Pool

def work(n):
    return n * n

if __name__ == '__main__':
    pool = Pool(5)
    res_l = []
    for i in range(6):
        res = pool.apply_async(work, args=(i,))
        res_l.append(res)

    for res in res_l:
        print(res.get())  # get() blocks until that task's result is ready

Callback way 2: the concurrent.futures module
# Hand one task's result to another function; a typical use case is a crawler.

from concurrent.futures import ProcessPoolExecutor

def func1(x, y):
    return x + y

def func2(n):
    print(n)           # n is a Future object
    print(n.result())  # 15

if __name__ == '__main__':
    pool = ProcessPoolExecutor(max_workers=5)
    pool.submit(func1, 5, 10).add_done_callback(func2)

Note: to use threads instead, replace ProcessPoolExecutor with ThreadPoolExecutor when creating the pool.

14. Daemon threads

import time
from threading import Thread

def work():
    time.sleep(2)
    print("say hello")

if __name__ == '__main__':
    t = Thread(target=work)
    # t.daemon = True
    t.setDaemon(True)  # must be set before start()
    t.start()
    print("main thread")

15. Coroutines

Concurrency within a single thread: a coroutine is a lightweight user-space thread.
Python's threads are kernel-level.
Coroutines give single-threaded concurrency: when one coroutine hits I/O it switches to another. Everything runs in a single thread, so no locks are needed; execution is essentially serial, just with very fast switching.

To implement coroutines, the user's own code controls the switching and saves the state.
An example of switching quickly between two routines with yield:

import time

def consumer():
    x = 2222222222222
    y = 33333333333333
    a = "aaaaaaaaaaaaaaaaaaaa"
    b = "ccccccccccccccc"
    while True:
        item = yield  # suspend here; receive the next item via send()

def producer(target, seq):
    for item in seq:
        target.send(item)  # resume the consumer with item

g = consumer()
next(g)  # prime the generator so it is paused at the yield

start_time = time.time()
producer(g, range(100000))
stop_time = time.time()
print("elapsed", stop_time - start_time)
Switching with the greenlet module's switch method:

from greenlet import greenlet

def test1():
    print("test1,first")
    gr2.switch()
    print("test1,second")
    gr2.switch()

def test2():
    print("test2,first")
    gr1.switch()
    print("test2,second")

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()
Coroutines with gevent:

import gevent

def eat(name):
    print("%s eat food first" % name)
    gevent.sleep(5)
    print("%s eat food second" % name)

def play(name):
    print("%s play phone 1" % name)
    gevent.sleep(10)
    print("%s play phone 2" % name)

g1 = gevent.spawn(eat, "ivy")
g2 = gevent.spawn(play, "zoe")
g1.join()
g2.join()
print("main")

The full version of gevent:

This works by monkey-patching; without the patch, gevent would not recognize time.sleep as a point to switch at.

from gevent import monkey; monkey.patch_all()
import gevent
import time

def eat(name):
    print("%s eat food first" % name)
    time.sleep(5)
    print("%s eat food second" % name)

def play(name):
    print("%s play phone 1" % name)
    time.sleep(10)
    print("%s play phone 2" % name)

g1 = gevent.spawn(eat, "ivy")
g2 = gevent.spawn(play, "zoe")
g1.join()
g2.join()
print("main")

A crawler built on gevent:

from gevent import monkey; monkey.patch_all()
import requests
import time
import gevent

def get_page(url):
    print("get page:%s" % url)
    response = requests.get(url)
    if response.status_code == 200:
        print(response.text)

start_time = time.time()
g1 = gevent.spawn(get_page, url="https://www.python.org")
g2 = gevent.spawn(get_page, url="https://yahoo.com")
g3 = gevent.spawn(get_page, url="https://github.com")
gevent.joinall([g1, g2, g3])
stop_time = time.time()
print("elapsed", stop_time - start_time)

16. The GIL (Global Interpreter Lock)

Only CPython has it; CPython's thread handling is not thread-safe without it.
In Python, the threads of one process can only be executed by one CPU at a time.
The GIL protects the interpreter's own data;
different application data must be protected by its own locks.
The Python interpreter uses the operating system's native threads; whichever thread grabs the GIL first runs first.
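How often CPython forces a GIL hand-off between threads is visible (and tunable) through sys. A small illustration, assuming CPython's default settings:

```python
import sys

# CPython re-arbitrates the GIL roughly every "switch interval"
print(sys.getswitchinterval())  # 0.005 (5 ms) by default

# it can be tuned, e.g. to make CPU-bound threads switch less often
sys.setswitchinterval(0.01)
print(sys.getswitchinterval())
```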

Extra: running a task on a timer

from threading import Timer

def hello(name):
    print("%s say hello" % name)

t = Timer(3, hello, args=("Ivy",))  # run hello after 3 seconds
t.start()