The main idea behind beating anti-crawling measures is to imitate the browser as closely as possible: whatever the browser does, the code should do too. The browser first requests url1 and stores the cookie locally, then requests url2 carrying that cookie; our code can follow the same flow.
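The cookie flow just described can be sketched with the requests module's Session, which stores cookies from each response and sends them on later requests automatically (the URLs are placeholders; the commented calls show where real requests would go):

```python
import requests

# A Session keeps a cookie jar across requests, mirroring how a browser
# carries the cookie set by url1 into the request for url2.
session = requests.Session()

# session.get("https://example.com/url1")  # server sets a cookie here
# session.get("https://example.com/url2")  # that cookie is sent automatically

# The shared cookie jar is what links the two requests:
session.cookies.set("sessionid", "abc123")
print(session.cookies.get("sessionid"))  # -> abc123
```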
Often a crawler carries many headers fields, cookie fields, URL parameters, and POST parameters, and it is unclear which of them matter. In that situation the only option is to experiment, because every website is different. Before trying things blindly, though, you can consult other people's approaches, and you should also build a testing routine of your own.
If the site detects crawlers via the User-Agent field, it is enough to add a User-Agent before the request. A better approach is a User-Agent pool: either collect a batch of User-Agent strings, or generate them at random:
```python
import random

def get_ua():
    first_num = random.randint(55, 62)
    third_num = random.randint(0, 3200)
    fourth_num = random.randint(0, 140)
    os_type = [
        '(Windows NT 6.1; WOW64)', '(Windows NT 10.0; WOW64)',
        '(X11; Linux x86_64)', '(Macintosh; Intel Mac OS X 10_12_6)'
    ]
    chrome_version = 'Chrome/{}.0.{}.{}'.format(first_num, third_num, fourth_num)
    ua = ' '.join(['Mozilla/5.0', random.choice(os_type), 'AppleWebKit/537.36',
                   '(KHTML, like Gecko)', chrome_version, 'Safari/537.36'])
    return ua
```
For example, Douban's TV-series pages use the Referer field for anti-crawling; we only need to add it to the request headers.
If the target site does not require login: carry the cookie returned by the previous request on every new request, e.g. with the requests module's Session.
If the target site requires login: prepare multiple accounts, use one program to obtain each account's cookies and build a cookie pool, and let the other programs draw cookies from that pool.
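A minimal sketch of such a cookie pool (the account cookies below are made up; a real pool would be filled by a login program and typically stored in something like Redis):

```python
import random

# Hypothetical pool: one program logs in with several accounts and
# stores the resulting cookies; crawler processes draw from this pool.
cookie_pool = [
    {"sessionid": "cookie_of_account_1"},
    {"sessionid": "cookie_of_account_2"},
    {"sessionid": "cookie_of_account_3"},
]

def get_random_cookies():
    """Pick one account's cookies at random to spread requests across accounts."""
    return random.choice(cookie_pool)

# usage: requests.get(url, headers=headers, cookies=get_random_cookies())
```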
When requesting the target site, it may look as though we requested just one page, but before that request succeeds there may be JavaScript-driven redirects in between that we cannot see. Click the Preserve log button in the browser's developer tools to observe these page jumps.
Among these requests, if there are many of them, generally only those whose response carries a cookie field are useful: they mean the server set a cookie on our side through that request.
The corresponding fix is to analyze the JavaScript and work out how the encryption is implemented.
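As a hypothetical example, suppose reading the site's JavaScript reveals that each request is signed as the md5 of the sorted parameters plus a salt; the same logic can then be reproduced in Python so the crawler generates identical sign values (the parameter scheme and salt here are invented for illustration):

```python
import hashlib

def make_sign(params, salt):
    # Join sorted key=value pairs the same way the site's JS is assumed to,
    # then md5 the result together with the salt.
    param_string = "&".join("{}={}".format(k, v) for k, v in sorted(params.items()))
    return hashlib.md5((param_string + salt).encode("utf-8")).hexdigest()

sign = make_sign({"page": "1", "keyword": "test"}, "hypothetical_salt")
print(sign)  # a 32-character hex digest sent along with the request
```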
After the next section, where we learn Selenium, this problem becomes much easier.
Captchas can be recognized via coding (captcha-solving) platforms or machine-learning methods; coding platforms are cheap and easy to use, and are the more recommended option.
When a single IP makes a large number of requests to the same server, it is more likely to be identified as a crawler; the corresponding fix is to buy high-quality proxy IPs.
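A sketch of rotating purchased proxies with requests (the addresses below are placeholders, not working proxies):

```python
import random

# Placeholder proxy addresses; real ones would come from a paid provider.
proxy_pool = [
    "http://203.0.113.10:8888",
    "http://203.0.113.11:8888",
]

def get_proxies():
    """Pick a proxy at random so requests go out from different IPs."""
    proxy = random.choice(proxy_pool)
    return {"http": proxy, "https": proxy}

# usage: requests.get(url, proxies=get_proxies(), timeout=10)
# proxies that start failing or getting blocked should be dropped from the pool
```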
Solution: switch to the mobile version of the site.
Solution: compute the CSS offsets.
Selenium is a web automation testing tool, originally developed for automated website testing. Selenium runs directly in the browser and supports all mainstream browsers (including headless ones such as PhantomJS). It can take commands, make the browser load pages automatically, fetch the data we need, and even take page screenshots.
PhantomJS is a WebKit-based "headless" browser: it loads websites into memory and executes the JavaScript on the page.
Chromedriver lets Selenium drive the Chrome browser; the difference from PhantomJS is that the browser it drives has a visible interface.
The simplest installation is to unpack the archive and move the executable from the bin directory to a directory on your PATH, such as /usr/bin or /usr/local/bin.
Note: Chromedriver versions correspond to specific Chrome versions; it is recommended to use the latest Chromedriver and update the Chrome browser to the latest version.
Key points:
Loading a page: Selenium works by controlling the browser, so the data it retrieves is what appears in the Elements panel.
```python
from selenium import webdriver

driver = webdriver.PhantomJS("c:…/pantomjs.exe")
driver.get("http://www.baidu.com/")
driver.save_screenshot("長城.png")
```
Locating and interacting:
```python
driver.find_element_by_id("kw").send_keys("長城")
driver.find_element_by_id("su").click()
```
Inspecting request information:
```python
driver.page_source
driver.get_cookies()
driver.current_url
```
Quitting:
```python
driver.close()  # close the current page
driver.quit()   # quit the browser
```
Key points:
Element locating syntax:
- find_element_by_id (returns a single element)
- find_elements_by_xpath (returns a list of elements)
- find_elements_by_link_text (get elements by their link text)
- find_elements_by_partial_link_text (get elements by text the link contains)
- find_elements_by_tag_name (get elements by tag name)
- find_elements_by_class_name (get elements by class name)
Note: the difference between find_element and find_elements is that the former returns a single element while the latter returns a list; the difference between by_link_text and by_partial_link_text is matching the full link text versus matching links that contain the given text.
Usage:
Using the Douban homepage as an example: https://www.douban.com/
```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.douban.com/")

ret1 = driver.find_element_by_id("anony-nav")
print(ret1)
# output: <selenium.webdriver.remote.webelement.WebElement (session="ea6f94544ac3a56585b2638d352e97f3", element="0.5335773935305805-1")>

ret2 = driver.find_elements_by_id("anony-nav")
print(ret2)
# output: [<selenium.webdriver.remote.webelement.WebElement (session="ea6f94544ac3a56585b2638d352e97f3", element="0.5335773935305805-1")>]

ret3 = driver.find_elements_by_xpath("//*[@id='anony-nav']/h1/a")
print(len(ret3))  # output: 1

ret4 = driver.find_elements_by_tag_name("h1")
print(len(ret4))  # output: 1

ret5 = driver.find_elements_by_link_text("下載豆瓣 App")
print(len(ret5))  # output: 1

ret6 = driver.find_elements_by_partial_link_text("豆瓣")
print(len(ret6))  # output: 28

driver.close()
```
find_element_by_xpath works the same way. To extract data from a located element, use element.text for its text and element.get_attribute("href") for an attribute value:
```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.douban.com/")

ret4 = driver.find_elements_by_tag_name("h1")
print(ret4[0].text)  # output: 豆瓣

ret5 = driver.find_elements_by_link_text("下載豆瓣 App")
print(ret5[0].get_attribute("href"))
# output: https://www.douban.com/doubanapp/app?channel=nimingye

driver.close()
```
driver.get_cookies() retrieves all cookies.
```python
# convert the cookies into a dict
{cookie['name']: cookie['value'] for cookie in driver.get_cookies()}
# delete one cookie
driver.delete_cookie("CookieName")
# delete all cookies
driver.delete_all_cookies()
```
Why waiting is needed
If the site uses dynamic HTML, the time at which some page elements appear cannot be known in advance. In that case you can set a wait: require the element to appear within a time limit, and raise an error otherwise.
Page-waiting methods: the simplest is time.sleep(10).
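Beyond a fixed time.sleep(10), a more robust pattern is to poll until the element appears or a timeout is hit. Selenium ships this as WebDriverWait; the core idea can be sketched in plain Python:

```python
import time

def wait_until(condition, timeout=10, poll=0.5):
    """Call condition() repeatedly until it returns a truthy value;
    raise TimeoutError if `timeout` seconds pass first. This mirrors
    what WebDriverWait(driver, timeout).until(...) does."""
    end = time.time() + timeout
    while time.time() < end:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %s seconds" % timeout)

# e.g. wait_until(lambda: driver.find_elements_by_id("content"), timeout=10)
```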
Crawl all room information from the Douyu live-streaming platform: https://www.douyu.com/directory/all
```python
# coding=utf-8
from selenium import webdriver
import time


class DouYu:
    def __init__(self):
        self.start_url = "https://www.douyu.com/directory/all"
        self.driver = webdriver.Chrome()

    def get_content_list(self):  # extract data
        li_list = self.driver.find_elements_by_xpath("//ul[@id='live-list-contentbox']/li")
        content_list = []
        for li in li_list:
            item = {}
            item["title"] = li.find_element_by_xpath("./a").get_attribute("title")
            item["anchor"] = li.find_element_by_xpath(".//span[@class='dy-name ellipsis fl']").text
            item["watch_num"] = li.find_element_by_xpath(".//span[@class='dy-num fr']").text
            print(item)
            content_list.append(item)
        # extract the next-page element
        next_url = self.driver.find_elements_by_xpath("//a[@class='shark-pager-next']")
        next_url = next_url[0] if len(next_url) > 0 else None
        return content_list, next_url

    def save_content_list(self, content_list):  # save
        pass

    def run(self):  # main logic
        # 1. start_url
        # 2. send the request, get the response
        self.driver.get(self.start_url)
        # 3. extract the data
        content_list, next_url = self.get_content_list()
        # 4. save it
        self.save_content_list(content_list)
        # 5. extract the next page
        while next_url is not None:
            next_url.click()  # clicking before the page fully loads raises an error
            time.sleep(3)
            content_list, next_url = self.get_content_list()
            self.save_content_list(content_list)


if __name__ == '__main__':
    douyu = DouYu()
    douyu.run()
```
Frames are a common HTML technique: one page embeds another page inside it. Selenium cannot access the content inside a frame by default; the solution is driver.switch_to.frame().
Hands-on: simulate logging in to QQ Mail
When using Selenium to log in to QQ Mail, we find that nothing can be typed into the login input tag. Inspecting the source shows that the login form sits inside a frame, so we need to switch into that frame first.
```python
# coding=utf-8
from selenium import webdriver
import time

driver = webdriver.Chrome()
driver.get("https://mail.qq.com/")
driver.switch_to.frame("login_frame")
driver.find_element_by_id("u").send_keys("hello")
time.sleep(5)
driver.quit()
```
NetEase Cloud Music crawler demo
```python
# coding=utf-8
import requests
from lxml import etree
import re
from selenium import webdriver
from copy import deepcopy


class Music163:
    def __init__(self):
        self.start_url = "http://music.163.com/discover/playlist"
        self.headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36"}

    def parse_url(self, url):
        print(url)
        resp = requests.get(url, headers=self.headers)
        return resp.content.decode()

    def get_category_list(self):  # get the big and small categories
        resp = self.parse_url(self.start_url)
        html = etree.HTML(resp)
        dl_list = html.xpath("//div[@class='bd']/dl")
        category_list = []
        for dl in dl_list:
            b_cate = dl.xpath("./dt/text()")[0] if len(dl.xpath("./dt/text()")) > 0 else None
            a_list = dl.xpath("./dd/a")
            for a in a_list:
                item = {}
                item["b_cate"] = b_cate
                item["s_cate"] = a.xpath("./text()")[0] if len(a.xpath("./text()")) > 0 else None
                item["s_href"] = "http://music.163.com" + a.xpath("./@href")[0] if len(a.xpath("./@href")) > 0 else None
                category_list.append(item)
        return category_list

    def get_playlist_list(self, item, total_playlist_list):  # get the playlists of a small category
        playlist_list = []
        if item["s_href"] is not None:
            scate_resp = self.parse_url(item["s_href"])
            scate_html = etree.HTML(scate_resp)
            li_list = scate_html.xpath("//ul[@id='m-pl-container']/li")
            for li in li_list:
                item["playlist_title"] = li.xpath("./p[@class='dec']/a/@title")[0] if len(li.xpath("./p[@class='dec']/a/@title")) > 0 else None
                print(item["playlist_title"])
                item["playlist_href"] = "http://music.163.com" + li.xpath("./p[@class='dec']/a/@href")[0] if len(li.xpath("./p[@class='dec']/a/@href")) > 0 else None
                item["author_name"] = li.xpath("./p[last()]/a/@title")[0] if len(li.xpath("./p[last()]/a/@title")) > 0 else None
                item["author_href"] = "http://music.163.com" + li.xpath("./p[last()]/a/@href")[0] if len(li.xpath("./p[last()]/a/@href")) > 0 else None
                playlist_list.append(deepcopy(item))
            total_playlist_list.extend(playlist_list)
            next_url = scate_html.xpath("//a[text()='下一頁']/@href")[0] if len(scate_html.xpath("//a[text()='下一頁']/@href")) > 0 else None
            if next_url is not None and next_url != 'javascript:void(0)':
                item["s_href"] = "http://music.163.com" + next_url
                # recurse to fetch the next page's playlists until there is no next page
                return self.get_playlist_list(item, total_playlist_list)
        return total_playlist_list

    def get_playlist_info(self, playlist):  # get the info of a single playlist
        if playlist["playlist_href"] is not None:
            playlist_resp = self.parse_url(playlist["playlist_href"])
            playlist["covers"] = re.findall("\"images\": .*?\[\"(.*?)\"\],", playlist_resp)
            playlist["covers"] = playlist["covers"][0] if len(playlist["covers"]) > 0 else None
            playlist["create_time"] = re.findall("\"pubDate\": \"(.*?)\"", playlist_resp)
            playlist["create_time"] = playlist["create_time"][0] if len(playlist["create_time"]) > 0 else None
            playlist_html = etree.HTML(playlist_resp)
            playlist["favorited_times"] = playlist_html.xpath("//a[@data-res-action='fav']/@data-count")[0] if len(playlist_html.xpath("//a[@data-res-action='fav']/@data-count")) > 0 else None
            playlist["shared_times"] = playlist_html.xpath("//a[@data-res-action='share']/@data-count")[0] if len(playlist_html.xpath("//a[@data-res-action='share']/@data-count")) > 0 else None
            playlist["desc"] = playlist_html.xpath("//p[@id='album-desc-dot']/text()")
            playlist["played_times"] = playlist_html.xpath("//strong[@id='play-count']/text()")[0] if len(playlist_html.xpath("//strong[@id='play-count']/text()")) > 0 else None
            playlist["tracks"] = self.get_playlist_tracks(playlist["playlist_href"])
        return playlist

    def get_playlist_tracks(self, href):  # get the track info of each playlist
        driver = webdriver.Chrome()
        driver.get(href)
        driver.switch_to.frame("g_iframe")
        tr_list = driver.find_elements_by_xpath("//tbody/tr")
        playlist_tracks = []
        for tr in tr_list:
            track = {}
            track["name"] = tr.find_element_by_xpath("./td[2]//b").get_attribute("title")
            track["duration"] = tr.find_element_by_xpath("./td[3]/span").text
            track["singer"] = tr.find_element_by_xpath("./td[4]/div").get_attribute("title")
            track["album_name"] = tr.find_element_by_xpath("./td[5]//a").get_attribute("title")
            playlist_tracks.append(track)
        driver.quit()
        return playlist_tracks

    def run(self):
        categroy_list = self.get_category_list()  # get the categories
        for cate in categroy_list:
            total_playlist_list = self.get_playlist_list(cate, [])  # get every playlist under each category
            print("-" * 100)
            print(total_playlist_list)
            print("-" * 100)
            for playlist in total_playlist_list:
                print(playlist, "*" * 100)
                playlist = self.get_playlist_info(playlist)  # get all track info for each playlist
                print(playlist)


if __name__ == '__main__':
    music_163 = Music163()
    music_163.run()
```
```python
import time

# basic usage
# from selenium import webdriver
# driver = webdriver.Chrome()
# driver.get("http://www.baidu.com")
# driver.save_screenshot('./baidu.png')
# time.sleep(2)
# driver.quit()

# headless usage
# from selenium import webdriver
# from selenium.webdriver.chrome.options import Options
#
# chrome_options = Options()
# chrome_options.add_argument('--headless')
# driver = webdriver.Chrome(chrome_options=chrome_options)
# driver.get("http://www.baidu.com")
# driver.save_screenshot('./baidu.png')
# time.sleep(1)
# driver.quit()

# switching between windows
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
# chrome_options = Options()
# chrome_options.add_argument('--headless')
# driver = webdriver.Chrome(chrome_options=chrome_options)
# driver = webdriver.Chrome()
# get the current window handle; a handle is a unique identifier
# now_handle = driver.current_window_handle
# get all window handles
# all_handles = driver.window_handles
# switch back to the original window
# driver.switch_to_window(now_handle)
# time.sleep(2)

# setting a proxy
# from selenium import webdriver
# chromeOptions = webdriver.ChromeOptions()
# note: no spaces around the =, i.e. not --proxy-server = http://202.20.16.82:10152
# chromeOptions.add_argument("--proxy-server=http://202.20.16.82:10152")
# browser = webdriver.Chrome(chrome_options=chromeOptions)

# setting request headers
# from selenium import webdriver
# options = webdriver.ChromeOptions()
# set the language to Chinese
# options.add_argument('lang=zh_CN.UTF-8')
# replace the User-Agent
# options.add_argument('user-agent="Mozilla/5.0 (iPod; U; CPU iPhone OS 2_1 like Mac OS X; ja-jp) AppleWebKit/525.18.1 (KHTML, like Gecko) Version/3.1.1 Mobile/5F137 Safari/525.20"')
# browser = webdriver.Chrome(chrome_options=options)
# url = "https://httpbin.org/get?show_env=1"
# browser.get(url)
# browser.quit()

# disabling image loading
from selenium import webdriver
# options = webdriver.ChromeOptions()
# prefs = {
#     'profile.default_content_setting_values': {
#         'images': 2
#     }
# }
# options.add_experimental_option('prefs', prefs)
# browser = webdriver.Chrome(chrome_options=options)
# browser = webdriver.Chrome()
# url = "http://image.baidu.com/"
# browser.get(url)
# input("are the images shown?")
# browser.quit()
```
Kaishiba (開始吧) crawling demo
```python
from selenium import webdriver

options = webdriver.ChromeOptions()
prefs = {
    'profile.default_content_setting_values': {
        'images': 2  # 2 disables image loading, speeding things up
    }
}
options.add_experimental_option('prefs', prefs)
options.add_argument('--headless')
driver = webdriver.Chrome(chrome_options=options)
driver.get("https://www.kaishiba.com/project/more")
driver.save_screenshot('./startBa.png')

while True:
    js = "window.scrollTo(0,document.body.scrollHeight)"  # simulate the browser scrolling down
    driver.execute_script(js)
    # time.sleep(random.randint(2,5))
    l_list = driver.find_elements_by_xpath("//li[@class='programCard']")
    print('fetched %d items' % len(l_list))
    if len(l_list) == 1800:
        with open('./start.html', 'w', encoding='utf-8') as f:
            f.write(driver.page_source)
        break

driver.quit()
```
Many websites now use captchas for anti-crawling, so to get at the data reliably you need to know how to use a coding platform to handle the captchas a crawler runs into.
Able to handle general-purpose captcha recognition.
Geetest (極驗) captcha recognition helper: http://jiyandoc.c2567.com/
Able to handle recognition of complex captchas.
The code below is provided by the Yundama (雲打碼) platform, with minor modifications; just pass in response.content to recognize the image.
```python
import requests
import json
import time


class YDMHttp:
    apiurl = 'http://api.yundama.com/api.php'
    username = ''
    password = ''
    appid = ''
    appkey = ''

    def __init__(self, username, password, appid, appkey):
        self.username = username
        self.password = password
        self.appid = str(appid)
        self.appkey = appkey

    def request(self, fields, files=[]):
        response = self.post_url(self.apiurl, fields, files)
        response = json.loads(response)
        return response

    def balance(self):
        data = {'method': 'balance', 'username': self.username,
                'password': self.password, 'appid': self.appid,
                'appkey': self.appkey}
        response = self.request(data)
        if (response):
            if (response['ret'] and response['ret'] < 0):
                return response['ret']
            else:
                return response['balance']
        else:
            return -9001

    def login(self):
        data = {'method': 'login', 'username': self.username,
                'password': self.password, 'appid': self.appid,
                'appkey': self.appkey}
        response = self.request(data)
        if (response):
            if (response['ret'] and response['ret'] < 0):
                return response['ret']
            else:
                return response['uid']
        else:
            return -9001

    def upload(self, filename, codetype, timeout):
        data = {'method': 'upload', 'username': self.username,
                'password': self.password, 'appid': self.appid,
                'appkey': self.appkey, 'codetype': str(codetype),
                'timeout': str(timeout)}
        file = {'file': filename}
        response = self.request(data, file)
        if (response):
            if (response['ret'] and response['ret'] < 0):
                return response['ret']
            else:
                return response['cid']
        else:
            return -9001

    def result(self, cid):
        data = {'method': 'result', 'username': self.username,
                'password': self.password, 'appid': self.appid,
                'appkey': self.appkey, 'cid': str(cid)}
        response = self.request(data)
        return response and response['text'] or ''

    def decode(self, filename, codetype, timeout):
        cid = self.upload(filename, codetype, timeout)
        if (cid > 0):
            for i in range(0, timeout):
                result = self.result(cid)
                if (result != ''):
                    return cid, result
                else:
                    time.sleep(1)
            return -3003, ''
        else:
            return cid, ''

    def post_url(self, url, fields, files=[]):
        # for key in files:
        #     files[key] = open(files[key], 'rb')
        res = requests.post(url, files=files, data=fields)
        return res.text


username = 'whoarewe'  # username
password = '***'  # password
appid = 4283  # appid
appkey = '02074c64f0d0bb9efb2df455537b01c3'  # appkey
filename = 'getimage.jpg'  # file location
codetype = 1004  # captcha type
timeout = 60  # timeout in seconds


def indetify(response_content):
    if (username == 'username'):
        print('set the parameters before testing')
    else:
        # initialize
        yundama = YDMHttp(username, password, appid, appkey)
        # log in to Yundama
        uid = yundama.login()
        print('uid: %s' % uid)
        # check the balance
        balance = yundama.balance()
        print('balance: %s' % balance)
        # start recognition: image, captcha type id, timeout (s), result
        cid, result = yundama.decode(response_content, codetype, timeout)
        print('cid: %s, result: %s' % (cid, result))
        return result


def indetify_by_filepath(file_path):
    if (username == 'username'):
        print('set the parameters before testing')
    else:
        # initialize
        yundama = YDMHttp(username, password, appid, appkey)
        # log in to Yundama
        uid = yundama.login()
        print('uid: %s' % uid)
        # check the balance
        balance = yundama.balance()
        print('balance: %s' % balance)
        # start recognition: image path, captcha type id, timeout (s), result
        cid, result = yundama.decode(file_path, codetype, timeout)
        print('cid: %s, result: %s' % (cid, result))
        return result


if __name__ == '__main__':
    pass
```
This is one of the simplest captcha types: just get the captcha's URL, request it, and recognize it via the coding platform.
This captcha type is much more common. For it, think about the following:
During login, assuming the captcha I typed is correct, how does the server know that the captcha I submitted is the one displayed on my screen, and not some other captcha?
Between fetching the page, requesting the captcha, and submitting the captcha, the server must use some mechanism to verify that the captcha I fetched earlier and the one I finally submit are the same one. What is that mechanism?
Clearly, it is done through cookies. So when requesting the page, requesting the captcha, and submitting the captcha, we must keep the cookies consistent, which requests.Session can handle for us.
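A hedged sketch of that flow (the URLs and form-field names are invented; `identify` stands in for a captcha solver such as the coding-platform client above): the page request, the captcha request, and the login POST all go through one Session so the cookies stay consistent.

```python
import requests

def login_with_captcha(session, identify, base="https://example.com"):
    session.get(base + "/login")                # 1. login page sets cookies
    img = session.get(base + "/captcha.jpg")    # 2. captcha bound to those same cookies
    code = identify(img.content)                # 3. solve it (coding platform / ML)
    data = {"username": "user", "password": "pass", "captcha": code}
    return session.post(base + "/login", data=data)  # 4. cookies sent again on submit

# usage: login_with_captcha(requests.Session(), some_identify_function)
```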