Python (Coroutines)


A coroutine, also known as a micro-thread or fiber (English: coroutine). In one sentence: a coroutine is a lightweight, user-mode thread.

A coroutine has its own register context and stack. When the scheduler switches away from a coroutine, its register context and stack are saved elsewhere; when it switches back, the previously saved register context and stack are restored. Therefore:

A coroutine preserves the state of its previous invocation (that is, a specific combination of all of its local state). Each time the procedure is re-entered, it resumes in the state of the previous call; put differently, it picks up at exactly the point in the logical flow where it last left off.

 

Reviewing yield

 1 def f():
 2     print('ok1')
 3     count = yield 5
 4     print('count:', count)
 5     print('ok2')
 6     yield 67
 7 
 8 gen = f()  # generator object
 9 
10 ret1 = next(gen)  # returns 5; equivalent to ret1 = gen.send(None)
11 print(ret1)
12 # execution resumes from count = yield
13 ret2 = gen.send(10)  # returns 67
14 print(ret2)

Line 10 executes the function body up to yield 5, which hands 5 back to the caller.

The next send resumes execution from the count = yield assignment.

ok1
5
count: 10
ok2
67

Process finished with exit code 0
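One pitfall is worth a quick sketch of my own (not from the original post): a brand-new generator must first be "primed" with next(gen) or gen.send(None) before it can receive a value, and sending a non-None value to a just-started generator raises TypeError.

def f():
    count = yield 5
    yield count * 2

gen = f()
try:
    gen.send(10)           # not primed yet: TypeError
except TypeError as e:
    print(e)               # can't send non-None value to a just-started generator

print(gen.send(None))      # priming, same as next(gen); prints 5
print(gen.send(10))        # resumes at count = yield; prints 20

This is exactly why producer() in the next example calls next(con) before the first con.send(n).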

 

Coroutine-style concurrency with yield

 

 1 def consumer(name):
 2     print("--->starting eating baozi...")
 3     while True:
 4         new_baozi = yield
 5         print("[%s] is eating baozi %s" % (name, new_baozi))
 6         # time.sleep(1)
 7 
 8 def producer():
 9     next(con)
10     next(con2)
11     n = 0
12     while n < 5:
13         n += 1
14         con.send(n)
15         con2.send(n)
16         print("\033[32;1m[producer]\033[0m is making baozi %s" % n)
17 
18 if __name__ == '__main__':
19     con = consumer("c1")  # create a generator object (the body contains yield)
20     con2 = consumer("c2")
21     p = producer()

 

You can set a breakpoint at line 19 and step through the program in a debugger.

Taking con (of con and con2) as the example, let's trace the program's execution.

next(con) enters the consumer function and runs until new_baozi = yield, where it suspends. The assignment to new_baozi has not happened yet, and the bare yield produces no value.

When the while loop in producer reaches con.send(n) (line 14), the generator resumes: new_baozi is set to n and the print on line 5 executes.

Because the consumer body is a while True loop, it cycles back to new_baozi = yield and suspends again, waiting for the next con.send(n) from the producer's loop.

Each time producer sends, the consumer prints immediately; this interleaving is what coroutines look like.
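As a side note, a variation of my own (not in the original): yield can carry data in both directions, so the consumer could also hand a status back to the producer on every send.

def consumer(name):
    result = None
    while True:
        new_baozi = yield result                   # yield the last status back to send()
        result = '[%s] ate baozi %s' % (name, new_baozi)

con = consumer('c1')
next(con)                                           # prime: run to the first yield
print(con.send(1))                                  # [c1] ate baozi 1
print(con.send(2))                                  # [c1] ate baozi 2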

 

Let's first give coroutines a working definition, i.e., the conditions something must satisfy to be called a coroutine:

  1. Concurrency must be achieved within a single thread
  2. Shared data can be modified without locking
  3. The user program itself keeps the context stack of each control flow
  4. A coroutine automatically switches to another coroutine when it hits an IO operation

Measured against these four criteria, the version we just built with yield does not quite qualify as a coroutine, because one capability is missing. Which one? Point 4: it cannot switch automatically on IO; every switch is driven by an explicit send. The greenlet and gevent sections below address exactly this.

 

Execution result:

--->starting eating baozi...
--->starting eating baozi...
[c1] is eating baozi 1
[c2] is eating baozi 1
[producer] is making baozi 1
[c1] is eating baozi 2
[c2] is eating baozi 2
[producer] is making baozi 2
[c1] is eating baozi 3
[c2] is eating baozi 3
[producer] is making baozi 3
[c1] is eating baozi 4
[c2] is eating baozi 4
[producer] is making baozi 4
[c1] is eating baozi 5
[c2] is eating baozi 5
[producer] is making baozi 5

Process finished with exit code 0

 

greenlet

greenlet switches explicitly between different tasks.

 1 # -*- coding:utf-8 -*-
 2 from greenlet import greenlet
 3 
 4 def test1():
 5     print(12)
 6     gr2.switch()
 7     print(34)
 8     gr2.switch()
 9 
10 def test2():
11     print(56)
12     gr1.switch()
13     print(78)
14 
15 gr1 = greenlet(test1)
16 gr2 = greenlet(test2)
17 gr1.switch()  # run test1 first; at gr2.switch(), jump to test2, each side resuming where it last stopped
18 # switching between tasks is fully explicit

Execution result:

12
56
34
78

Process finished with exit code 0
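switch() can also carry values between greenlets. A minimal sketch of my own (the names main, child and the messages are illustrative, not from the original):

from greenlet import greenlet

def child_func(msg):
    print('child received:', msg)           # 'ping'
    reply = main.switch('pong')             # hand 'pong' back to the parent's switch() call
    print('child resumed with:', reply)     # 'done'

main = greenlet.getcurrent()                # the main greenlet we are running in
child = greenlet(child_func)
print('parent received:', child.switch('ping'))  # the first switch passes args to child_func
child.switch('done')                             # resume the child one last time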

 

gevent

 1 import gevent
 2 import time
 3 
 4 def func1():
 5     print('\033[31;1m a\033[0m', time.ctime())
 6     gevent.sleep(2)  # simulate blocking IO; while blocked, control switches to func2
 7     print('\033[31;1m b\033[0m', time.ctime())
 8 
 9 
10 def func2():
11     print('\033[32;1m c\033[0m', time.ctime())
12     gevent.sleep(1)  # note: gevent.sleep(), not time.sleep()
13     print('\033[32;1m d\033[0m', time.ctime())
14 
15 
16 gevent.joinall([
17     gevent.spawn(func1),
18     gevent.spawn(func2),
19     # gevent.spawn(func3),
20 ])

func1 runs first. At gevent.sleep(2), which simulates blocking IO, control immediately switches to func2. When func2's one-second sleep ends, func1 is still blocked, so func2 continues to completion.

Execution result:

 a Tue Nov  6 16:37:04 2018
 c Tue Nov  6 16:37:04 2018
 d Tue Nov  6 16:37:05 2018
 b Tue Nov  6 16:37:06 2018

Process finished with exit code 0
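gevent.spawn also accepts arguments and keeps each greenlet's return value, which is handy for fan-out work. A small sketch of my own (double and the 0.1-second sleep are illustrative):

import gevent

def double(n):
    gevent.sleep(0.1)      # pretend IO; the other greenlets run meanwhile
    return n * 2

jobs = [gevent.spawn(double, i) for i in range(5)]
gevent.joinall(jobs)
print([job.value for job in jobs])   # [0, 2, 4, 6, 8]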

 

Coroutines in practice

 

 1 from gevent import monkey
 2 monkey.patch_all()  # patch blocking stdlib calls so gevent can detect them and switch
 3 
 4 import gevent
 5 from urllib.request import urlopen
 6 import time
 7 
 8 def f(url):
 9     print('GET: %s' % url)
10     resp = urlopen(url)
11     data = resp.read()
12 
13     with open('xiaohua.html', 'wb') as f:  # note: all three greenlets overwrite this same file
14         f.write(data)
15 
16     print('%d bytes received from %s.' % (len(data), url))
17 
18 # f('http://www.xiaohuar.com/')
19 a = time.time()
20 gevent.joinall([
21     gevent.spawn(f, 'https://www.python.org/'),
22     gevent.spawn(f, 'https://www.yahoo.com/'),
23     gevent.spawn(f, 'https://github.com/'),
24 ])
25 
26 b = time.time()
27 
28 # l = ['https://www.python.org/', 'https://www.yahoo.com/', 'https://github.com/']
29 # for url in l:
30 #     f(url)
31 # c = time.time()
32 
33 # print('time taken by the plain sequential version:', c - b)  # ~10s
34 
35 print('time taken with coroutines:', b - a)  # ~2s

 

Using coroutines in a crawler makes fetching pages faster: whenever one task blocks on the network, gevent switches to another.

And monkey.patch_all() makes the blocking calls themselves (socket IO, urlopen, time.sleep, and so on) visible to gevent, so the switch happens automatically.

Execution result:

GET: https://www.python.org/
GET: https://www.yahoo.com/
GET: https://github.com/
65038 bytes received from https://github.com/.
48925 bytes received from https://www.python.org/.
517942 bytes received from https://www.yahoo.com/.
time taken with coroutines: 5.216813802719116

Process finished with exit code 0
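To see what monkey.patch_all() actually buys you, here is a small sketch of my own: after patching, even plain time.sleep() becomes cooperative, so two one-second tasks finish in about one second instead of two. Without the patch_all() call, they would run back to back.

from gevent import monkey
monkey.patch_all()        # replaces time.sleep() and socket IO with cooperative versions

import gevent
import time

def task(name):
    print(name, 'start', time.ctime())
    time.sleep(1)         # patched: now behaves like gevent.sleep(1)
    print(name, 'end', time.ctime())

start = time.time()
gevent.joinall([gevent.spawn(task, 't1'), gevent.spawn(task, 't2')])
print('elapsed: %.1f s' % (time.time() - start))   # ~1.0, not ~2.0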