Because of its Global Interpreter Lock (GIL), Python cannot achieve true parallel computation with threads. We won't expand on that claim here, but there is a pair of concepts worth spelling out first: IO-bound vs. CPU-bound.
IO-bound: frequent reads from files and network sockets.
CPU-bound: heavy math and logic work that keeps the CPU busy, which is the parallel computation we are talking about here.
The concurrent.futures module, however, can use multiprocessing to achieve true parallel computation.
The core idea: concurrent.futures runs several Python interpreters in parallel as child processes, which lets a Python program use multiple CPU cores to speed up execution. Because each child process is separate from the main interpreter, each has its own independent GIL, and each child process can fully occupy one CPU core.
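A minimal sketch of that idea, assuming a deliberately CPU-heavy helper (busy() and the numbers below are invented for illustration, not from the original text): ProcessPoolExecutor hands each call to a separate child process, so the loops can run on different cores.

# coding=utf-8
# Sketch only: ProcessPoolExecutor runs each call in a separate child process,
# so CPU-bound work can use several cores at once.
# busy() and the numbers below are invented for illustration.
from concurrent.futures import ProcessPoolExecutor
import time

def busy(n):
    # deliberately CPU-heavy pure-Python loop
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == '__main__':   # needed on platforms that spawn child processes
    start = time.time()
    with ProcessPoolExecutor() as pool:
        for result in pool.map(busy, [5_000_000] * 4):
            print(result)
    print("elapsed", time.time() - start)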
Future summary
1. Built into Python 3; has to be installed separately on Python 2.
2. The Executor object
   It is an abstract class that provides methods for asynchronous execution. It cannot be used directly, only through its subclasses ThreadPoolExecutor and ProcessPoolExecutor.
   2.1 Executor.submit(fn, *args, **kwargs)
       fn: the function to execute asynchronously
       *args, **kwargs: the arguments fn takes
       This method schedules the callable as a task and returns a Future object.
   2.2 map(fn, *iterables, timeout=None, chunksize=1)
       map(task, URLS)  # returns an iterator; the results it yields come back in input order
3. The Future object
   A future can be thought of as an operation that will complete in the future; it is the foundation of asynchronous programming.
   Normally, when we hit an IO operation we block and the CPU cannot do anything else; introducing futures lets us get other work done during that waiting time.
   3.1 done(): returns True if the call was successfully cancelled or finished.
   3.2 cancel(): returns False if the call is currently running and cannot be cancelled; otherwise cancels the call and returns True. (A small sketch follows the first example below.)
   3.3 running(): returns True if the call is currently being executed.
   3.4 result(): returns the value returned by the call; if the call has not completed yet, this method waits.
       If the wait times out, concurrent.futures.TimeoutError is raised.
       If no timeout is given, it waits with no time limit.
       If the Future is cancelled before it completes, CancelledError is raised.
4. as_completed(): returns an iterator over the given Future instances that yields each future as it completes.
   Any duplicate futures in fs are returned only once.
   Everything it yields is a Future that has already finished.
5. wait(): returns a tuple with two elements:
   1. the set of completed futures
   2. the set of futures that have not completed yet
# coding=utf-8
from concurrent import futures
from concurrent.futures import Future
import time

def return_future(msg):
    time.sleep(3)
    return msg

pool = futures.ThreadPoolExecutor(max_workers=2)

t1 = pool.submit(return_future, 'hello')
t2 = pool.submit(return_future, 'world')

time.sleep(3)
print(t1.done())    # True if the call finished successfully
time.sleep(3)
print(t2.done())

print(t1.result())  # get the future's return value
time.sleep(3)
print(t2.result())

print("main thread")
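To go with the cancel() description in the summary above, here is a small sketch (not part of the original example; slow() and the sleeps are invented) showing that a future still waiting in the queue can be cancelled, while one that is already running cannot:

# coding=utf-8
# Sketch only: Future.cancel() on a queued task vs. a running task.
# slow() and the sleeps are invented for illustration.
from concurrent.futures import ThreadPoolExecutor
import time

def slow(msg):
    time.sleep(2)
    return msg

pool = ThreadPoolExecutor(max_workers=1)   # a single worker, so the second task has to queue
t1 = pool.submit(slow, 'first')
t2 = pool.submit(slow, 'second')
time.sleep(0.1)                            # give the worker a moment to pick up t1

print(t1.cancel())      # False: t1 is already running and cannot be cancelled
print(t2.cancel())      # True: t2 was still queued, so it is cancelled
print(t2.cancelled())   # True
print(t1.result())      # 'first'
pool.shutdown()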
map
(func, *iterables, timeout=None, chunksize=1)
# coding=utf-8
import time
from concurrent.futures import Future, as_completed
from concurrent.futures import ThreadPoolExecutor as Pool
import requests

URLS = ['http://www.baidu.com', 'http://qq.com', 'http://sina.com']

def task(url, timeout=10):
    return requests.get(url=url, timeout=timeout)

pool = Pool()
result = pool.map(task, URLS)

start_time = time.time()

# results come back in the order of URLS
for res in result:
    print("{} {}".format(res.url, len(res.content)))

# unordered
with Pool(max_workers=3) as executor:
    future_task = [executor.submit(task, url) for url in URLS]
    for f in as_completed(future_task):
        if f.done():
            f_ret = f.result()  # f.result() is the return value of task, a requests Response object
            print('%s, done, result: %s, %s' % (str(f), f_ret.url, len(f_ret.content)))

print("elapsed", time.time() - start_time)
print("main thread")
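The map() signature above also takes timeout and chunksize. As a hedged sketch (slow_task and the delays are made up), the timeout bounds how long the returned iterator will wait for each result; chunksize only has an effect when used with ProcessPoolExecutor.

# coding=utf-8
# Sketch only: Executor.map() with a timeout. slow_task and the delays are invented.
# chunksize is left at its default; it only matters for ProcessPoolExecutor.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def slow_task(delay):
    time.sleep(delay)
    return delay

with ThreadPoolExecutor(max_workers=2) as pool:
    results = pool.map(slow_task, [1, 1, 5], timeout=3)
    try:
        for r in results:
            print("got", r)
    except TimeoutError:
        # the 5-second task is not ready within the 3-second budget
        print("map() timed out")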
A Future can be thought of as an operation that completes in the future.
When we perform an IO operation, we block while waiting for the result to come back
and the CPU cannot do anything else; introducing Future lets us get other work done during that wait.
from concurrent.futures import ThreadPoolExecutor as Pool
from concurrent.futures import as_completed
import requests
import time

URLS = ['http://www.baidu.com', 'http://qq.com', 'http://sina.com']

def task(url, timeout=10):
    return requests.get(url=url, timeout=timeout)

# start_time = time.time()
# for url in URLS:
#     ret = task(url)
#     print("{} {}".format(ret.url, len(ret.content)))
# print("elapsed", time.time() - start_time)

with Pool(max_workers=3) as executor:
    # create the future tasks
    future_task = [executor.submit(task, url) for url in URLS]
    for f in future_task:
        if f.running():
            print("%s is running" % str(f))
    for f in as_completed(future_task):
        try:
            if f.done():
                f_ret = f.result()
                print('%s, done, result: %s, %s' % (str(f), f_ret.url, len(f_ret.content)))
        except Exception as e:
            f.cancel()
            print(e)

"""
The URLs do not come back in order: when the requests run concurrently, waiting for one URL's response does not block the others.
<Future at 0x1c63990e6d8 state=running> is running
<Future at 0x1c639922780 state=running> is running
<Future at 0x1c639922d30 state=running> is running
<Future at 0x1c63990e6d8 state=finished returned Response>, done, result: http://www.baidu.com/, 2381
<Future at 0x1c639922780 state=finished returned Response>, done, result: https://www.qq.com?fromdefault, 243101
<Future at 0x1c639922d30 state=finished returned Response>, done, result: http://sina.com/, 23103
"""
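Item 3.4 of the summary says result() accepts a timeout and raises concurrent.futures.TimeoutError when it expires; a small invented sketch of that behaviour:

# coding=utf-8
# Sketch only: Future.result(timeout=...) raises concurrent.futures.TimeoutError
# when the call has not finished in time. slow() is an invented helper.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def slow():
    time.sleep(5)
    return "done"

with ThreadPoolExecutor(max_workers=1) as pool:
    f = pool.submit(slow)
    try:
        print(f.result(timeout=1))   # wait at most 1 second
    except TimeoutError:
        print("still not finished after 1 second")
    print(f.result())                # no timeout: block until the call completes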
concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)
wait() returns a tuple containing two sets:
1. the set of completed futures
2. the set of futures that have not completed yet
Using wait() gives you more flexibility: its return_when parameter accepts three constants,
FIRST_COMPLETED, FIRST_EXCEPTION and ALL_COMPLETED.
The default is ALL_COMPLETED.
from concurrent.futures import Future
from concurrent.futures import ThreadPoolExecutor as Pool
from concurrent.futures import as_completed, wait, FIRST_COMPLETED
import requests

URLS = ['http://www.baidu.com', 'http://qq.com', 'http://sina.com']

def task(url, timeout=10):
    return requests.get(url=url, timeout=timeout)

with Pool(max_workers=3) as executor:
    future_task = [executor.submit(task, url) for url in URLS]
    for f in future_task:
        if f.running():
            print("%s" % (str(f)))

    # wait also takes timeout and return_when parameters.
    # return_when accepts three constants:
    #   FIRST_COMPLETED  return as soon as any future finishes or is cancelled
    #   FIRST_EXCEPTION  return as soon as any future raises an exception;
    #                    if none raises, equivalent to ALL_COMPLETED
    #   ALL_COMPLETED    return only when all futures have finished
    results = wait(future_task, return_when=FIRST_COMPLETED)
    done = results[0]
    for d in done:
        print(d)
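One detail worth adding (based on the standard library documentation, not stated in the original text): the 2-tuple returned by wait() is a named tuple, so the two sets can also be read as .done and .not_done. A follow-up sketch with return_when=ALL_COMPLETED, reusing the same task/URLS:

# coding=utf-8
# Sketch only: wait() returns a named tuple, so the two sets can be read as
# .done and .not_done. Same task/URLS as the examples above.
from concurrent.futures import ThreadPoolExecutor as Pool
from concurrent.futures import wait, ALL_COMPLETED
import requests

URLS = ['http://www.baidu.com', 'http://qq.com', 'http://sina.com']

def task(url, timeout=10):
    return requests.get(url=url, timeout=timeout)

with Pool(max_workers=3) as executor:
    future_task = [executor.submit(task, url) for url in URLS]
    results = wait(future_task, timeout=15, return_when=ALL_COMPLETED)
    print("done:", len(results.done))
    print("not done:", len(results.not_done))
    for f in results.done:
        resp = f.result()
        print(resp.url, len(resp.content))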
concurrent.futures.as_completed(fs, timeout=None)
Returns an iterator over the Future instances given by fs that yields each future as it completes.
Any duplicate futures returned by fs will be returned only once.
Everything it yields is a Future object that has already finished.
from concurrent.futures import ThreadPoolExecutor as Pool
from concurrent.futures import as_completed
import requests
import time

URLS = ['http://www.baidu.com', 'http://qq.com', 'http://sina.com']

def task(url, timeout=10):
    return requests.get(url=url, timeout=timeout)

with Pool(max_workers=3) as executor:
    # create the future tasks
    future_task = [executor.submit(task, url) for url in URLS]
    for f in future_task:
        if f.running():
            print("%s is running" % str(f))
    for f in as_completed(future_task):
        try:
            if f.done():
                f_ret = f.result()
                print('%s, done, result: %s, %s' % (str(f), f_ret.url, len(f_ret.content)))
        except Exception as e:
            f.cancel()
            print(e)
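as_completed() also accepts the timeout shown in the signature above: if not every future finishes within that many seconds, iterating raises concurrent.futures.TimeoutError. A sketch with invented delays:

# coding=utf-8
# Sketch only: as_completed() with a timeout. slow_task and its delays are invented.
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError
import time

def slow_task(delay):
    time.sleep(delay)
    return delay

with ThreadPoolExecutor(max_workers=3) as pool:
    fs = [pool.submit(slow_task, d) for d in (1, 2, 10)]
    try:
        for f in as_completed(fs, timeout=3):
            print("finished:", f.result())
    except TimeoutError as e:
        # the 10-second task is still running when the 3-second budget runs out
        print("as_completed() timed out:", e)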