A crawler is a program we use in place of manual work to read and collect information from websites in bulk. Anti-crawling is its opposite: the site does everything it can to block non-human collection of its data. The two sides constantly fight each other, yet so far most websites can still be scraped without much trouble.
The way for a crawler to get around anti-crawling measures is to convince the server, as far as possible, that you are not an automated program. In your code, that means disguising yourself as a browser when visiting the site, which greatly reduces the chance of being blocked. So how do you impersonate a browser?
For example, in Python:
user_agent_list = [
    "Opera/9.80 (X11; Linux i686; U; hu) Presto/2.9.168 Version/11.50",
    "Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (X11; Linux i686; U; es-ES) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/5.0 Opera 11.11",
    "Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.0; U; en) Presto/2.8.99 Version/11.10",
    "Opera/9.80 (Windows NT 5.1; U; zh-tw) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.1; Opera Tablet/15165; U; en) Presto/2.8.149 Version/11.1",
    "Opera/9.80 (X11; Linux x86_64; U; Ubuntu/10.10 (maverick); pl) Presto/2.7.62 Version/11.01",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0",
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
    "Opera/12.80 (Windows NT 5.1; U; en) Presto/2.10.289 Version/12.02",
    "Opera/9.80 (Windows NT 6.1; U; es-ES) Presto/2.9.181 Version/12.00",
    "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00",
    "Opera/12.0(Windows NT 5.2;U;en)Presto/22.9.168 Version/12.00",
    "Opera/12.0(Windows NT 5.1;U;en)Presto/22.9.168 Version/12.00",
    "Mozilla/5.0 (Windows NT 5.1) Gecko/20100101 Firefox/14.0 Opera/12.0",
    "Opera/9.80 (Windows NT 6.1; WOW64; U; pt) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Windows NT 6.0; U; pl) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; de) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Windows NT 5.1; U; en) Presto/2.9.168 Version/11.51",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; de) Opera 11.51",
    "Opera/9.80 (X11; Linux x86_64; U; fr) Presto/2.9.168 Version/11.50",
]

referer_list = ["https://www.test.com/", "https://www.baidu.com/"]
Then draw a random index on every request, so that each fetch uses a randomly chosen User-Agent and Referer. (Note: if you loop over multiple pages, it is best to wait a few seconds after each page before fetching the next, to reduce the load on the server.)
import random
import time

import lxml.html
import requests


def get_randam(data):
    # Return a random valid index into the given sequence.
    return random.randint(0, len(data) - 1)


def crawl(url="https://www.test.com/"):
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Host': 'www.test.com',
        'Referer': 'https://test.com/',
    }
    # Pick a random User-Agent and a random Referer for this request.
    random_index = get_randam(user_agent_list)
    headers['User-Agent'] = user_agent_list[random_index]
    random_index_01 = get_randam(referer_list)
    headers['Referer'] = referer_list[random_index_01]

    session = requests.Session()
    html_data = session.get(url, headers=headers, timeout=180)
    html_data.raise_for_status()
    html_data.encoding = 'utf-8-sig'
    data = html_data.text
    data_doc = lxml.html.document_fromstring(data)
    # ... parse, extract and store the page data here
    time.sleep(random.randint(3, 5))
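To tie this together, here is a minimal usage sketch for collecting several pages in a loop; page_urls is a hypothetical list of targets and not part of the original code, and the pause at the end of crawl() already spaces the requests out:

# Minimal usage sketch (assumption: page_urls is a hypothetical list of pages).
page_urls = [
    "https://www.test.com/page/1",
    "https://www.test.com/page/2",
]

for page_url in page_urls:
    # crawl() pauses a few seconds after each request, easing server load.
    crawl(page_url)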
By how well they conceal the client, proxy IPs can be divided into four categories: transparent proxies, anonymous proxies, distorting proxies, and high-anonymity (elite) proxies.
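As a rough, non-authoritative way to check which category a particular proxy falls into, you can send a request through it to a header-echo service and look at what the target server would actually receive. The sketch below uses httpbin.org/headers purely as an example endpoint, and the proxy address is a placeholder:

import requests

# Sketch: inspect the headers a target server would see when you go through
# a candidate proxy. A transparent proxy typically forwards your real IP in
# X-Forwarded-For, an ordinary anonymous proxy still reveals itself via
# headers such as Via, while a high-anonymity (elite) proxy sends neither.
proxies = {"http": "http://117.30.113.248:9999"}  # placeholder address

r = requests.get("http://httpbin.org/headers", proxies=proxies, timeout=30)
r.raise_for_status()
echoed = r.json()["headers"]
print(echoed)  # look for 'X-Forwarded-For' or 'Via' entries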
Below, I use a free high-anonymity proxy IP for the request:
# Free proxy IPs from: https://www.xicidaili.com/nn
import requests

proxies = {
    "http": "http://117.30.113.248:9999",
    "https": "https://120.83.120.157:9999"
}

r = requests.get("https://www.baidu.com", proxies=proxies)
r.raise_for_status()
r.encoding = 'utf-8-sig'
print(r.text)
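Free proxies like these tend to go offline quickly, so in practice it helps to keep a small pool and fall back to another entry when one fails. The sketch below is only an assumption about how such rotation might look; proxy_pool and fetch_with_proxy are hypothetical names and the addresses are placeholders:

import random
import requests

# Sketch: rotate through a small pool of free proxies and skip dead ones.
proxy_pool = [
    {"http": "http://117.30.113.248:9999", "https": "https://120.83.120.157:9999"},
    # ... more proxy entries
]

def fetch_with_proxy(url):
    # Try the proxies in random order; failures simply move on to the next one.
    for proxies in random.sample(proxy_pool, len(proxy_pool)):
        try:
            r = requests.get(url, proxies=proxies, timeout=15)
            r.raise_for_status()
            return r.text
        except requests.RequestException:
            continue  # proxy unreachable or blocked, try the next one
    raise RuntimeError("no working proxy in the pool")

# html = fetch_with_proxy("https://www.baidu.com")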
Note, a pitfall from my own experience: I once mistakenly used uppercase HTTP/HTTPS as the keys of the proxies dict, which made the requests bypass the proxy entirely. It took me several months to notice the problem, and discovering it made my scalp crawl.
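To make the pitfall concrete: requests looks up the proxy by the lowercased URL scheme, so uppercase keys never match and the request silently goes out without the proxy. A minimal sketch of the wrong and the right form:

import requests

url = "https://www.baidu.com"

# Wrong: the lookup key is the lowercased scheme ("https"), so uppercase keys
# never match and the request goes out directly, exposing the real IP.
bad_proxies = {"HTTP": "http://117.30.113.248:9999",
               "HTTPS": "https://120.83.120.157:9999"}

# Right: keys must be the lowercase scheme names.
good_proxies = {"http": "http://117.30.113.248:9999",
                "https": "https://120.83.120.157:9999"}

requests.get(url, proxies=bad_proxies)   # proxy silently ignored
requests.get(url, proxies=good_proxies)  # request actually routed through the proxy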
I also used to write crawlers for Amazon, but before long they would be identified as bots: Amazon redirects you to a robot check page that asks you to type the characters in an image captcha, purely to verify whether a human is actually visiting the site.
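A simple, hedged way to notice this in code is to check the response for robot-check markers before parsing; the marker strings below are assumptions about what such a page typically contains, not any official interface:

# Sketch: detect a robot-check / captcha page before parsing.
def looks_like_robot_check(html_text):
    markers = ("Robot Check", "Type the characters you see in this image")
    return any(marker in html_text for marker in markers)

# if looks_like_robot_check(data):
#     # back off, switch proxy / User-Agent, and retry later
#     ...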