When writing a crawler, most of the cost is in IO: in single-process, single-threaded mode, every URL request blocks while waiting for the response, which slows the whole run down. The serial baseline looks like this:
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

url_list = ['http://www.github.com', 'http://www.bing.com']

for url in url_list:
    fetch_async(url)  # each call blocks until its response arrives
The thread pool must not be too large: context switching between threads wastes time and lowers overall throughput.
Each thread blocks after sending its request, sitting idle until the response data comes back.
# ThreadPoolExecutor version: hand each URL to a pool of worker threads
from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

url_list = ['http://www.github.com', 'http://www.bing.com']

pool = ThreadPoolExecutor(5)  # at most 5 worker threads
for url in url_list:
    pool.submit(fetch_async, url)
pool.shutdown(wait=True)
# ThreadPoolExecutor + callback: add_done_callback runs when a future completes
from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

def callback(future):
    # future.result() is the return value of fetch_async
    print(future.result())

url_list = ['http://www.github.com', 'http://www.bing.com']

pool = ThreadPoolExecutor(5)
for url in url_list:
    v = pool.submit(fetch_async, url)
    v.add_done_callback(callback)
pool.shutdown(wait=True)
# ProcessPoolExecutor version
from concurrent.futures import ProcessPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

url_list = ['http://www.github.com', 'http://www.bing.com']

# note: on platforms that spawn worker processes (e.g. Windows), this part
# needs an if __name__ == '__main__' guard
pool = ProcessPoolExecutor(5)
for url in url_list:
    pool.submit(fetch_async, url)
pool.shutdown(wait=True)
# ProcessPoolExecutor + callback
from concurrent.futures import ProcessPoolExecutor
import requests

def fetch_async(url):
    response = requests.get(url)
    return response

def callback(future):
    print(future.result())

url_list = ['http://www.github.com', 'http://www.bing.com']

pool = ProcessPoolExecutor(5)
for url in url_list:
    v = pool.submit(fetch_async, url)
    v.add_done_callback(callback)
pool.shutdown(wait=True)
The difference between multithreading and multiprocessing:
For IO-bound work, use threads: they spend their time waiting on IO, during which Python releases the GIL, rather than using the CPU.
For CPU-bound work, use processes, so the computation can actually run on multiple CPU cores.
Threads share their process's resources, which saves memory.
Processes do not share resources with one another, so they consume more memory. (A quick benchmark sketch follows below.)
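To make the difference concrete, here is a minimal benchmark sketch (the cpu_bound function, pool sizes, and workload are illustrative, not from the original):

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n):
    # pure-Python arithmetic holds the GIL, so threads cannot run it in parallel
    return sum(i * i for i in range(n))

def bench(pool_cls):
    start = time.time()
    with pool_cls(4) as pool:
        list(pool.map(cpu_bound, [2_000_000] * 8))
    return time.time() - start

if __name__ == '__main__':
    # on a multi-core machine the process pool should finish markedly faster
    print('threads:   %.2fs' % bench(ThreadPoolExecutor))
    print('processes: %.2fs' % bench(ProcessPoolExecutor))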
經過上述代碼都可以完成對請求性能的提升,對於多線程和多進程的缺點是在IO阻塞時會形成了線程和進程的浪費,因此異步IO會是首選:網絡
asyncio can perform concurrent IO in a single thread. Used only on the client side its impact is limited, but used on the server side, e.g. in a web server, where every HTTP connection is an IO operation, a single thread plus coroutines can support a high level of concurrent users. asyncio itself implements TCP, UDP, SSL and other protocols; aiohttp is an HTTP framework built on top of asyncio.
import asyncio

@asyncio.coroutine
def func1():
    print('before...func1......')
    yield from asyncio.sleep(5)
    print('end...func1......')

tasks = [func1(), func1()]

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
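Note that the @asyncio.coroutine / yield from style above dates from Python 3.4; the decorator was deprecated in Python 3.8 and removed in 3.11. A sketch of the equivalent in modern async/await syntax:

import asyncio

async def func1():
    print('before...func1......')
    await asyncio.sleep(5)
    print('end...func1......')

async def main():
    # run both coroutines concurrently and wait for them to finish
    await asyncio.gather(func1(), func1())

asyncio.run(main())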
import asyncio

@asyncio.coroutine
def fetch_async(host, url='/'):
    print(host, url)
    reader, writer = yield from asyncio.open_connection(host, 80)

    request_header_content = """GET %s HTTP/1.0\r\nHost: %s\r\n\r\n""" % (url, host,)
    request_header_content = bytes(request_header_content, encoding='utf-8')

    writer.write(request_header_content)
    yield from writer.drain()
    text = yield from reader.read()
    print(host, url, text)
    writer.close()

tasks = [
    fetch_async('www.cnblogs.com', '/wupeiqi/'),
    fetch_async('dig.chouti.com', '/pic/show?nid=4073644713430508&lid=10273091')
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
import aiohttp
import asyncio

@asyncio.coroutine
def fetch_async(url):
    print(url)
    # old-style aiohttp API; current aiohttp versions use aiohttp.ClientSession instead
    response = yield from aiohttp.request('GET', url)
    # data = yield from response.read()
    # print(url, data)
    print(url, response)
    response.close()

tasks = [fetch_async('http://www.google.com/'), fetch_async('http://www.chouti.com/')]

event_loop = asyncio.get_event_loop()
results = event_loop.run_until_complete(asyncio.gather(*tasks))
event_loop.close()
import asyncio
import requests

@asyncio.coroutine
def fetch_async(func, *args):
    loop = asyncio.get_event_loop()
    # requests is blocking, so run it in the loop's default thread pool executor
    future = loop.run_in_executor(None, func, *args)
    response = yield from future
    print(response.url, response.content)

tasks = [
    fetch_async(requests.get, 'http://www.cnblogs.com/wupeiqi/'),
    fetch_async(requests.get, 'http://dig.chouti.com/pic/show?nid=4073644713430508&lid=10273091')
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
Python's built-in sockets block after sending data, waiting to receive; after monkey.patch_all(), every socket used internally is swapped for gevent's asynchronous-IO version.
gevent is a third-party library that implements coroutines via greenlet. greenlet alone can implement coroutines, but you must explicitly switch to the next greenlet to run every time, which is cumbersome (see the sketch below).
When a greenlet hits an IO operation, such as a network access, it automatically switches to another greenlet, then switches back at a suitable point once the IO completes. Since IO is very time-consuming and often leaves the program waiting, gevent's automatic switching guarantees that some greenlet is always running instead of waiting on IO.
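For contrast, a minimal sketch of raw greenlet with manual switching (the task names are illustrative, not from the original):

from greenlet import greenlet

def task_a():
    print('a: step 1')
    gr_b.switch()   # explicitly hand control to task_b
    print('a: step 2')
    gr_b.switch()

def task_b():
    print('b: step 1')
    gr_a.switch()   # and explicitly hand it back
    print('b: step 2')

gr_a = greenlet(task_a)
gr_b = greenlet(task_b)
gr_a.switch()       # prints a:1, b:1, a:2, b:2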
import gevent
from gevent import monkey
monkey.patch_all()  # patch blocking sockets before requests is imported

import requests

def fetch_async(method, url, req_kwargs):
    print(method, url, req_kwargs)
    response = requests.request(method=method, url=url, **req_kwargs)
    print(response.url, response.content)

# ##### send the requests #####
gevent.joinall([
    gevent.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://github.com/', req_kwargs={}),
])

# ##### send the requests (a pool caps the number of concurrent greenlets) #####
# from gevent.pool import Pool
# pool = Pool(5)  # at most 5 greenlets at a time
# gevent.joinall([
#     pool.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.github.com/', req_kwargs={}),
# ])
import grequests

request_list = [
    grequests.get('http://httpbin.org/delay/1', timeout=0.001),
    grequests.get('http://fakedomain/'),
    grequests.get('http://httpbin.org/status/500')
]

# ##### run the requests and collect the list of responses #####
# response_list = grequests.map(request_list)
# print(response_list)

# ##### run the requests and collect the responses, handling exceptions #####
# def exception_handler(request, exception):
#     print(request, exception)
#     print("Request failed")
# response_list = grequests.map(request_list, exception_handler=exception_handler)
# print(response_list)
from twisted.web.client import getPage, defer
from twisted.internet import reactor

def one_done(arg):
    print('finished...')

def all_done(arg):
    reactor.stop()

def callback(contents):
    print(contents)

deferred_list = []  # holds Deferred objects, each wrapping a request that has already been sent
url_list = ['http://www.bing.com', 'http://www.baidu.com', ]
for url in url_list:
    deferred = getPage(bytes(url, encoding='utf8'))  # send the HTTP request
    deferred.addCallback(callback)                   # run the callback when the response arrives
    deferred_list.append(deferred)

dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)  # fires once every Deferred in the list has finished
reactor.run()  # the event loop: it runs until something calls reactor.stop(), here all_done()
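getPage has been deprecated and then removed in modern Twisted releases; a rough equivalent uses the Agent API instead. A sketch for a single URL, assuming a current Twisted:

from twisted.internet import reactor
from twisted.web.client import Agent, readBody

def print_body(body):
    print(body)
    reactor.stop()

agent = Agent(reactor)
d = agent.request(b'GET', b'http://www.bing.com')
d.addCallback(readBody)   # readBody returns a Deferred that fires with the full body
d.addCallback(print_body)
reactor.run()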
from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from tornado import ioloop

COUNT = 0

def handle_response(response):
    """
    Process the response. A counter has to be maintained so we know when to
    stop the IO loop via ioloop.IOLoop.current().stop()
    """
    global COUNT
    COUNT -= 1
    if response.error:
        print("Error:", response.error)
    else:
        print(response.body)
    if COUNT == 0:
        ioloop.IOLoop.current().stop()

def func():
    url_list = [
        'http://www.baidu.com',
        'http://www.bing.com',
    ]
    global COUNT
    COUNT = len(url_list)
    for url in url_list:
        print(url)
        http_client = AsyncHTTPClient()
        http_client.fetch(HTTPRequest(url), handle_response)

ioloop.IOLoop.current().add_callback(func)
ioloop.IOLoop.current().start()  # also an endless loop; the counter is our custom stop condition
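Since Tornado 5 the IOLoop wraps asyncio, and fetch() returns a Future when no callback is passed (the callback argument was removed in Tornado 6), so the manual counter can be replaced with asyncio.gather. A sketch, assuming Tornado 5+:

import asyncio
from tornado.httpclient import AsyncHTTPClient

async def main():
    client = AsyncHTTPClient()
    responses = await asyncio.gather(
        client.fetch('http://www.baidu.com'),
        client.fetch('http://www.bing.com'),
        return_exceptions=True,  # collect failures instead of raising on the first one
    )
    for resp in responses:
        print(resp if isinstance(resp, Exception) else resp.body[:100])

asyncio.run(main())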
from twisted.internet import reactor
from twisted.web.client import getPage
import urllib.parse

def one_done(arg):
    print(arg)
    reactor.stop()

post_data = urllib.parse.urlencode({'check_data': 'adf'})
post_data = bytes(post_data, encoding='utf8')
headers = {b'Content-Type': b'application/x-www-form-urlencoded'}

response = getPage(
    bytes('http://dig.chouti.com/login', encoding='utf8'),
    method=bytes('POST', encoding='utf8'),
    postdata=post_data,
    cookies={},
    headers=headers,
)
response.addBoth(one_done)

reactor.run()
The ready-made asynchronous request options covered above: grequests (gevent+requests) --> Twisted --> Tornado --> asyncio
IO multiplexing: monitor multiple socket objects in a loop and handle whichever one changes state. Many things can be built on this property, for example an asynchronous IO module.
Asynchronous IO: when the program reaches an IO operation (waiting for external data) it does not sit and wait; it comes back to process the data only once it has been received. In essence, this is a callback.
Combining non-blocking sockets with IO multiplexing gives pseudo-concurrency, as the two examples below show:
# Version 1: a minimal event loop built from select + non-blocking sockets
import select
import socket

class HttpRequest(object):
    """Wraps a request's socket and basic metadata"""
    def __init__(self, sock, host, callback):
        self.sock = sock
        self.callback = callback
        self.host = host

    def fileno(self):
        """File descriptor of the request's socket, used by select for monitoring"""
        return self.sock.fileno()

class HttpResponse:
    def __init__(self, recv_data):
        self.recv_data = recv_data
        self.header_dict = {}
        self.body = None
        self.initialize()

    def initialize(self):
        # split the raw response into headers and body
        headers, body = self.recv_data.split(b'\r\n\r\n', 1)
        self.body = body
        header_list = headers.split(b'\r\n')
        for head in header_list:
            head = str(head, encoding='utf-8')
            v = head.split(':', 1)
            if len(v) == 2:
                self.header_dict[v[0]] = v[1]
            elif len(v) == 1:
                self.header_dict['method'] = v[0]  # the status line, e.g. HTTP/1.0 200 OK

class AsyncRequest(object):
    def __init__(self):
        self.conn = []         # sockets watched for returned data
        self.connections = []  # sockets watched for connect() completion

    def add_request(self, host, callback):
        """Create one request"""
        try:
            sk = socket.socket()
            sk.setblocking(False)
            sk.connect((host, 80))
        except BlockingIOError as e:
            pass  # the non-blocking connect to the remote host has been initiated
        req = HttpRequest(sk, host, callback)
        self.connections.append(req)
        self.conn.append(req)

    def run(self):
        """Event loop: detect which request sockets are ready and act on them"""
        while True:
            rlist, wlist, elist = select.select(self.conn, self.connections, self.conn, 0.05)
            for w in wlist:
                # connected to the remote server; send the request data
                print(w.host, 'connected...')
                data = "GET / HTTP/1.0\r\nHost:%s\r\n\r\n" % (w.host,)
                w.sock.sendall(bytes(data, encoding='utf-8'))
                # request sent; stop watching this socket for writability
                self.connections.remove(w)
            for r in rlist:
                sock = r.sock
                recv_data = bytes()
                while True:  # the response may be large, so receive in a loop
                    try:
                        data = sock.recv(8096)
                        if not data:  # remote side closed the connection
                            break
                        recv_data += data
                    except Exception as e:
                        break
                response = HttpResponse(recv_data)
                r.callback(r.host, response)
                sock.close()          # all data received; close the connection
                self.conn.remove(r)   # and stop watching the socket
            # nothing left to watch means every response is in; end the loop
            if len(self.conn) == 0:
                break

if __name__ == '__main__':
    def callback_1(host, response):
        print(host, 'save to file', response.header_dict, response.body)

    def callback_2(host, response):
        print(host, 'save to database', response.header_dict, response.body)

    obj = AsyncRequest()
    url_list = [
        {'host': 'www.cnblogs.com', 'callback': callback_1},
        {'host': 'www.baidu.com', 'callback': callback_2},
        {'host': 'www.zhihu.com', 'callback': callback_2},
    ]
    for item in url_list:
        obj.add_request(**item)
    obj.run()
# Version 2: adds per-request timeouts and a buffered HttpContext
import select
import socket
import time

class AsyncTimeoutException(TimeoutError):
    """Raised when a request times out"""
    def __init__(self, msg):
        self.msg = msg
        super(AsyncTimeoutException, self).__init__(msg)

class HttpContext(object):
    """Wraps a request's socket and basic metadata"""
    def __init__(self, sock, host, port, method, url, data, callback, timeout=5):
        """
        sock: the client socket for this request
        host: host to request
        port: port to connect to
        method: HTTP method
        url: URL path to request
        data: request body
        callback: function to run once the request completes
        timeout: request timeout in seconds
        """
        self.sock = sock
        self.callback = callback
        self.host = host
        self.port = port
        self.method = method
        self.url = url
        self.data = data
        self.timeout = timeout
        self.__start_time = time.time()
        self.__buffer = []

    def is_timeout(self):
        """Has this request already timed out?"""
        current_time = time.time()
        if (self.__start_time + self.timeout) < current_time:
            return True

    def fileno(self):
        """File descriptor of the request's socket, used by select for monitoring"""
        return self.sock.fileno()

    def write(self, data):
        """Append a chunk of response data to the buffer"""
        self.__buffer.append(data)

    def finish(self, exc=None):
        """The response is complete (or failed); run the request's callback"""
        if not exc:
            response = b''.join(self.__buffer)
            self.callback(self, response, exc)
        else:
            self.callback(self, None, exc)

    def send_request_data(self):
        content = """%s %s HTTP/1.0\r\nHost: %s\r\n\r\n%s""" % (
            self.method.upper(), self.url, self.host, self.data,)
        return content.encode(encoding='utf8')

class AsyncRequest(object):
    def __init__(self):
        self.fds = []          # sockets watched for returned data
        self.connections = []  # sockets watched for connect() completion

    def add_request(self, host, port, method, url, data, callback, timeout):
        """Create one request"""
        client = socket.socket()
        client.setblocking(False)
        try:
            client.connect((host, port))
        except BlockingIOError as e:
            pass  # the non-blocking connect to the remote host has been initiated
        req = HttpContext(client, host, port, method, url, data, callback, timeout)
        self.connections.append(req)
        self.fds.append(req)

    def check_conn_timeout(self):
        """Check all pending requests and abort any that have timed out"""
        timeout_list = []
        for context in self.connections:
            if context.is_timeout():
                timeout_list.append(context)
        for context in timeout_list:
            context.finish(AsyncTimeoutException('request timed out'))
            self.fds.remove(context)
            self.connections.remove(context)

    def running(self):
        """Event loop: detect which request sockets are ready and act on them"""
        while True:
            r, w, e = select.select(self.fds, self.connections, self.fds, 0.05)
            if not self.fds:
                return
            for context in r:
                sock = context.sock
                while True:
                    try:
                        data = sock.recv(8096)
                        if not data:  # remote side closed: the response is complete
                            self.fds.remove(context)
                            context.finish()
                            break
                        else:
                            context.write(data)
                    except BlockingIOError as e:
                        break
                    except TimeoutError as e:
                        self.fds.remove(context)
                        self.connections.remove(context)
                        context.finish(e)
                        break
            for context in w:
                # connected to the remote server; send the request data
                if context in self.fds:
                    data = context.send_request_data()
                    context.sock.sendall(data)
                    self.connections.remove(context)
            self.check_conn_timeout()

if __name__ == '__main__':
    def callback_func(context, response, ex):
        """
        :param context: HttpContext object holding the request's metadata
        :param response: the response content
        :param ex: the exception, if one occurred (None on success)
        """
        print(context, response, ex)

    obj = AsyncRequest()
    url_list = [
        {'host': 'www.google.com', 'port': 80, 'method': 'GET',
         'url': '/', 'data': '', 'timeout': 5, 'callback': callback_func},
        {'host': 'www.baidu.com', 'port': 80, 'method': 'GET',
         'url': '/', 'data': '', 'timeout': 5, 'callback': callback_func},
        {'host': 'www.bing.com', 'port': 80, 'method': 'GET',
         'url': '/', 'data': '', 'timeout': 5, 'callback': callback_func},
    ]
    for item in url_list:
        print(item)
        obj.add_request(**item)
    obj.running()