In the previous article we built the crawler and implemented its data-downloading functionality. Several practical issues come up at this point: a site's robots.txt file may list URLs that must not be crawled; the crawler may need to work through a proxy; and some sites deploy anti-crawler countermeasures, which motivates adding a download-throttling feature. This article adds each of these, plus protection against spider traps.
1. Parsing robots.txt
First, we need to parse the robots.txt file so that we avoid downloading URLs we are forbidden to crawl. Python's built-in robotparser module makes short work of this, as the code below shows.
robotparser first loads the robots.txt file, and then its can_fetch() function reports whether a given user agent is allowed to access a page.
import robotparser
import urlparse

def get_robots(url):
    """Initialize robots parser for this domain"""
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp
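As a quick sanity check, here is a minimal sketch of querying the parser directly. It assumes the site's robots.txt disallows a user agent named BadCrawler, the same test user agent used in the final version at the end of this article:

url = 'http://example.webscraping.com'
rp = get_robots(url)
# assumes robots.txt contains rules such as:
#   User-agent: BadCrawler
#   Disallow: /
print rp.can_fetch('BadCrawler', url)   # False if BadCrawler is disallowed
print rp.can_fetch('GoodCrawler', url)  # True if GoodCrawler is allowed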
To integrate this functionality into the crawler, we add the check inside the crawl loop.
while crawl_queue:
    url = crawl_queue.pop()
    # check url passes robots.txt restrictions
    if rp.can_fetch(user_agent, url):
        ...
    else:
        print 'Blocked by robots.txt:', url
2. Supporting proxies
Sometimes we need to access a site through a proxy. Netflix, for example, is blocked in most countries outside the United States.
Supporting proxies with urllib2 is not as easy as you might expect (the friendlier requests module can also do this; see the sketch after the snippet below). Here is the urllib2 code.
proxy = …
opener = urllib2.build_opener()
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener.add_handler(urllib2.ProxyHandler(proxy_params))
response = opener.open(request)
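As noted above, requests makes this much simpler. Here is a minimal sketch of the same proxy logic using requests; the proxy address below is a placeholder, not a real server:

import requests

url = 'http://example.webscraping.com'
proxy = 'http://127.0.0.1:8080'  # placeholder proxy address
# route both http and https traffic through the proxy
response = requests.get(url, proxies={'http': proxy, 'https': proxy})
html = response.text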
Below is a new version of the download function with proxy support integrated.
def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                html = download(url, headers, proxy, num_retries - 1, data)
        else:
            code = None
    return html
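A call to the new download function might look like this; the proxy address is a hypothetical placeholder, and passing None connects directly:

headers = {'User-agent': 'wswp'}
proxy = 'http://127.0.0.1:8080'  # hypothetical proxy; use None for a direct connection
html = download('http://example.webscraping.com', headers, proxy, num_retries=1)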
3. Throttling downloads
If we crawl a site too quickly, we risk being banned or overloading its server. To reduce these risks, we can add a delay between downloads, throttling the crawler. Below is a class that implements this.
class Throttle:
    """Throttle downloading by sleeping between requests to same domain"""
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if have accessed this domain recently"""
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)
        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()
The Throttle class records the last time each domain was accessed. If the time since that last access is shorter than the specified delay, it sleeps for the difference. We call Throttle before each download to rate-limit the crawler.
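For example, here is a minimal sketch of wiring Throttle in front of the download function defined above (the two URLs are just illustrative paths on the example site):

throttle = Throttle(5)  # at least 5 seconds between requests to the same domain
for url in ['http://example.webscraping.com/index', 'http://example.webscraping.com/view']:
    throttle.wait(url)  # sleeps if this domain was hit less than 5 seconds ago
    html = download(url, {'User-agent': 'wswp'}, proxy=None, num_retries=1)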
4. Avoiding spider traps
So far, our crawler follows every link it has not visited before.
However, some sites generate their page content dynamically, so the number of pages is unbounded.
For example, if a site has an online calendar with links to the next month and year, then the page for next month will likewise contain a link to the month after it, and so on without end. This situation is called a spider trap.
A simple way to avoid getting caught in a spider trap is to track how many links were followed to reach the current page, that is, its depth. Once the maximum depth is reached, the crawler stops adding links from that page to the queue.
To implement this, we modify the seen variable: where it previously recorded only which links had been visited, it is now a dictionary that also records the depth at which each page was found.
def link_crawler(…, max_depth=2):
    seen = {}
    …
    depth = seen[url]
    if depth != max_depth:
        for link in links:
            if link not in seen:
                seen[link] = depth + 1
                crawl_queue.append(link)
With this feature in place, we can be confident the crawl will always terminate eventually. To disable it, simply set max_depth to a negative number; the current depth will then never equal it.
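For example, to limit the crawl to one level of links below the seed page (this mirrors the GoodCrawler example in the final version's __main__ below):

# crawl only pages reachable within one link of the seed page;
# max_depth=-1 (the final version's default) disables the limit entirely
link_crawler('http://example.webscraping.com', '/(index|view)', max_depth=1)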
Final version
import re
import urlparse
import urllib2
import time
from datetime import datetime
import robotparser
import Queue
from scrape_callback3 import ScrapeCallback
def link_crawler(seed_url, link_regex=None, delay=5, max_depth=-1, max_urls=-1, headers=None, user_agent='wswp', proxy=None, num_retries=1, scrape_callback=None):
    """Crawl from the given seed URL following links matched by link_regex"""
    # the queue of URLs that still need to be crawled
    crawl_queue = [seed_url]
    # the URLs that have been seen and at what depth
    seen = {seed_url: 0}
    # track how many URLs have been downloaded
    num_urls = 0
    rp = get_robots(seed_url)
    throttle = Throttle(delay)
    headers = headers or {}
    if user_agent:
        headers['User-agent'] = user_agent

    while crawl_queue:
        url = crawl_queue.pop()
        depth = seen[url]
        # check url passes robots.txt restrictions
        if rp.can_fetch(user_agent, url):
            throttle.wait(url)
            html = download(url, headers, proxy=proxy, num_retries=num_retries)
            links = []
            if scrape_callback:
                links.extend(scrape_callback(url, html) or [])
            if depth != max_depth:
                # can still crawl further
                if link_regex:
                    # filter for links matching our regular expression
                    links.extend(link for link in get_links(html) if re.match(link_regex, link))
                for link in links:
                    link = normalize(seed_url, link)
                    # check whether already crawled this link
                    if link not in seen:
                        seen[link] = depth + 1
                        # check link is within same domain
                        if same_domain(seed_url, link):
                            # success! add this new link to queue
                            crawl_queue.append(link)
            # check whether have reached downloaded maximum
            num_urls += 1
            if num_urls == max_urls:
                break
        else:
            print 'Blocked by robots.txt:', url
class Throttle:
    """Throttle downloading by sleeping between requests to same domain"""
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if have accessed this domain recently"""
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)
        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()
def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                html = download(url, headers, proxy, num_retries - 1, data)
        else:
            code = None
    return html
def normalize(seed_url, link):
    """Normalize this URL by removing hash and adding domain"""
    link, _ = urlparse.urldefrag(link)  # remove hash to avoid duplicates
    return urlparse.urljoin(seed_url, link)

def same_domain(url1, url2):
    """Return True if both URLs belong to same domain"""
    return urlparse.urlparse(url1).netloc == urlparse.urlparse(url2).netloc

def get_robots(url):
    """Initialize robots parser for this domain"""
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp

def get_links(html):
    """Return a list of links from html"""
    # a regular expression to extract all links from the webpage
    webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
    # list of all links from the webpage
    return webpage_regex.findall(html)
if __name__ == '__main__':
    # link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, user_agent='BadCrawler')
    # link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, max_depth=1,
    #              user_agent='GoodCrawler')
    link_crawler('http://fund.eastmoney.com', r'/fund.html#os_0;isall_0;ft_;pt_1', max_depth=-1, scrape_callback=ScrapeCallback())