AJAX (Asynchronous JavaScript And XML) exchanges small amounts of data with the server in the background, which lets a page update asynchronously: parts of the page can be refreshed without reloading the whole page. A traditional page (one that does not use AJAX) has to reload the entire page whenever its content changes.

The name AJAX comes from the fact that traditional pages exchanged data in the XML format; nowadays the exchanged data is almost always JSON. Data loaded via AJAX, even once JavaScript has rendered it into the browser, does not show up under right-click -> View Page Source — all you see there is the HTML served for the URL itself. There are two ways to get at such data:
Method 1: analyze the interface that the AJAX call hits, then request that interface directly from code.

Method 2: use Selenium + chromedriver to simulate browser behavior and fetch the rendered data.
Approach | Pros | Cons
---|---|---
Analyze the interface | You request the data directly; no parsing work needed; less code and higher performance. | Analyzing the interface can be complicated, especially for JS-obfuscated endpoints — it takes some JavaScript skill. Also easier to be detected as a crawler. |
Selenium | Simulates real browser behavior, so anything the browser can request, Selenium can request too; the crawler is more stable. | More code; lower performance. |
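As a minimal sketch of method 1: open the browser's DevTools, watch the Network tab (XHR filter) to spot the endpoint the page calls, then request it with `requests`. The endpoint URL and parameters below are hypothetical placeholders, not a real API:

```python
import requests

# Hypothetical AJAX endpoint spotted in the browser's Network tab (XHR filter)
url = "https://example.com/api/items?page=1"
headers = {
    # Many sites check these headers, so mirror what the browser sends
    "User-Agent": "Mozilla/5.0",
    "X-Requested-With": "XMLHttpRequest",
}

response = requests.get(url, headers=headers)
data = response.json()  # the AJAX response is typically JSON, not HTML
print(data)
```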
Selenium is effectively a robot: it can simulate what a human does in a browser and automate those actions — clicking, filling in data, deleting cookies, and so on. chromedriver is the driver program for the Chrome browser; Selenium needs it in order to drive Chrome. Each browser has its own driver: for example, Chrome uses chromedriver, Firefox uses geckodriver, and Safari uses safaridriver.
Installing Selenium: Selenium has bindings for many languages — Java, Ruby, Python, and so on. The Python version is all we need: `pip install selenium`
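To confirm the install worked, you can print the installed version:

```python
import selenium
print(selenium.__version__)  # e.g. a 3.x release at the time this was written
```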
Installing chromedriver: after downloading it, just place it in an English-only path that does not require special permissions. (Note that the chromedriver version must match your installed Chrome version.) Now let's use a simple example — fetching the Baidu homepage — as a quick start for Selenium and chromedriver:
```python
from selenium import webdriver

# Absolute path to chromedriver
driver_path = r'D:\ProgramApp\chromedriver\chromedriver.exe'

# Initialize a driver, pointing it at chromedriver
driver = webdriver.Chrome(executable_path=driver_path)

# Request the page
driver.get("https://www.baidu.com/")

# Get the page source via page_source
print(driver.page_source)
```
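One caveat: in Selenium 4 and later, the `executable_path` argument is deprecated in favor of a `Service` object. On a newer install, the equivalent setup looks roughly like this (same hypothetical path as above):

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Selenium 4+ style: wrap the chromedriver path in a Service object
driver = webdriver.Chrome(service=Service(r'D:\ProgramApp\chromedriver\chromedriver.exe'))
driver.get("https://www.baidu.com/")
print(driver.page_source)
```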
**Common Selenium operations**

The examples below continue from the quick-start setup above, plus one extra import:

```python
from selenium.webdriver.common.by import By
```

**1. Closing pages:**

- `driver.close()`: close the current page.
- `driver.quit()`: quit the entire browser.

**2. Locating elements:**

a) `find_element_by_id`: find an element by its id. Equivalent to:

```python
submitTag = driver.find_element_by_id('su')
submitTag1 = driver.find_element(By.ID, 'su')
```

b) `find_element_by_class_name`: find an element by class name. Equivalent to:

```python
submitTag = driver.find_element_by_class_name('su')
submitTag1 = driver.find_element(By.CLASS_NAME, 'su')
```

c) `find_element_by_name`: find an element by the value of its name attribute. Equivalent to:

```python
submitTag = driver.find_element_by_name('email')
submitTag1 = driver.find_element(By.NAME, 'email')
```

d) `find_element_by_tag_name`: find an element by tag name. Equivalent to:

```python
submitTag = driver.find_element_by_tag_name('div')
submitTag1 = driver.find_element(By.TAG_NAME, 'div')
```

e) `find_element_by_xpath`: find an element using XPath syntax. Equivalent to:

```python
submitTag = driver.find_element_by_xpath('//div')
submitTag1 = driver.find_element(By.XPATH, '//div')
```

f) `find_element_by_css_selector`: find an element using a CSS selector (a CSS selector, not an XPath expression). Equivalent to:

```python
submitTag = driver.find_element_by_css_selector('div')
submitTag1 = driver.find_element(By.CSS_SELECTOR, 'div')
```

Note that `find_element` returns the first element that matches, while `find_elements` returns all matching elements (a short sketch follows at the end of this section).

**3. Working with form elements:**

a) Input boxes: two steps. First, find the element; second, fill it with `send_keys(value)`:

```python
inputTag = driver.find_element_by_id('kw')
inputTag.send_keys('python')
```

Use the `clear` method to empty an input box:

```python
inputTag.clear()
```

b) Checkboxes: on a web page a checkbox is selected with a mouse click, so to tick one, first locate the element and then fire its click event:

```python
rememberTag = driver.find_element_by_name("rememberMe")
rememberTag.click()
```

c) Selects: a select element cannot simply be clicked, because after the click an option still has to be chosen. Selenium therefore provides a dedicated class, `selenium.webdriver.support.ui.Select`. Pass the located element into this class to construct the object, then use the object to make the selection:

```python
from selenium.webdriver.support.ui import Select

# Locate the tag, then wrap it in a Select object
selectTag = Select(driver.find_element_by_name("jumpMenu"))
# Select by index
selectTag.select_by_index(1)
# Select by value
selectTag.select_by_value("http://www.95yueba.com")
# Select by visible text
selectTag.select_by_visible_text("95秀客戶端")
# Deselect all options
selectTag.deselect_all()
```

d) Buttons: buttons can be operated in many ways — single click, right click, double click, and so on. The most common is a plain click, which is just the `click` method:

```python
inputTag = driver.find_element_by_id('su')
inputTag.click()
```

**4. Action chains:**

Sometimes an operation on a page takes many steps; the mouse action-chain class `ActionChains` handles this. For example, to move the mouse over an element and then click it:

```python
from selenium.webdriver.common.action_chains import ActionChains

inputTag = driver.find_element_by_id('kw')
submitTag = driver.find_element_by_id('su')

actions = ActionChains(driver)
actions.move_to_element(inputTag)
actions.send_keys_to_element(inputTag, 'python')
actions.move_to_element(submitTag)
actions.click(submitTag)
actions.perform()
```

More mouse-related operations:

- `click_and_hold(element)`: click without releasing the mouse button.
- `context_click(element)`: right click.
- `double_click(element)`: double click.

For more methods, see: http://selenium-python.readthedocs.io/api.html
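Here is the promised sketch of the `find_element` vs `find_elements` distinction, reusing the `driver` and `By` from the setup above (the page is assumed to be Baidu's homepage, as in the earlier examples):

```python
# find_element returns the first match (or raises NoSuchElementException)
first_link = driver.find_element(By.TAG_NAME, 'a')
print(first_link.get_attribute('href'))

# find_elements returns a list of every match (possibly an empty list)
all_links = driver.find_elements(By.TAG_NAME, 'a')
print('links on the page:', len(all_links))
```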
**5. Cookie operations:**

```python
# Continuing from the quick-start setup, with https://www.baidu.com/ loaded

# a) Get all cookies
for cookie in driver.get_cookies():
    print(cookie)

# b) Get a cookie's value by its key: driver.get_cookie(key)
print(driver.get_cookie('PSTM'))

# c) Delete all cookies
driver.delete_all_cookies()

# d) Delete one cookie: driver.delete_cookie(key)
```

**6. Page waits:**

More and more pages now use AJAX, so a program cannot be sure when a given element has finished loading. If the page takes longer than expected and a DOM element is not there yet, but your code goes ahead and uses that WebElement, an exception is raised (in Python's Selenium, a `NoSuchElementException`). To solve this, Selenium provides two kinds of waits: implicit waits and explicit waits.

a) Implicit wait: call `driver.implicitly_wait(seconds)`. Before giving up on an element that is not yet available, the driver will keep retrying for up to that many seconds:

```python
driver = webdriver.Chrome(executable_path=driver_path)
driver.implicitly_wait(10)
# Request the page
driver.get("https://www.douban.com/")
```

b) Explicit wait: an explicit wait performs the element lookup only once some condition holds, with a maximum wait time; if the condition is not met in time, an exception is thrown. Explicit waits combine the expected conditions in `selenium.webdriver.support.expected_conditions` with `selenium.webdriver.support.ui.WebDriverWait`:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement"))
    )
finally:
    driver.quit()
```

Some other wait conditions:

- `presence_of_element_located`: the element has been loaded.
- `presence_of_all_elements_located`: all elements on the page matching the locator have been loaded.
- `element_to_be_clickable`: the element is clickable.

For more conditions, see: http://selenium-python.readthedocs.io/waits.html
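For instance, `element_to_be_clickable` from the list above can be combined with `WebDriverWait` to wait for a button and click it in one step — a sketch assuming the Baidu search button with id `su` from the earlier examples:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get("https://www.baidu.com/")
# Wait up to 10 seconds for the search button to become clickable, then click it
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, 'su'))
)
button.click()
```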
**7. Switching pages:**

Sometimes a window has many tabs open, and you need to switch among them. Selenium provides `driver.switch_to.window` for this (older code spells it `switch_to_window`); the handle of the page to switch to comes from `driver.window_handles`:

```python
# Open a new page
driver.execute_script("window.open('https://www.douban.com/')")
print(driver.window_handles)

# Switch to the new page
driver.switch_to.window(driver.window_handles[1])
print(driver.current_url)
```

Note: even though the browser window now shows the new page, the driver has not switched to it on its own. To switch in code you must call `driver.switch_to.window` with the handle of the target window, taken from `driver.window_handles` — a list of window handles stored in the order the pages were opened.

**8. Setting a proxy IP:**

When you crawl certain pages too frequently, the server may recognize you as a crawler and ban your IP address. In that case you can switch to a proxy IP. Each browser configures proxies differently; here is how to do it for Chrome:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=http://110.73.2.248:8123")

driver_path = r"D:\ProgramApp\chromedriver\chromedriver.exe"
driver = webdriver.Chrome(executable_path=driver_path, chrome_options=options)
driver.get('http://httpbin.org/ip')
```

**9. WebElement:**

`selenium.webdriver.remote.webelement.WebElement` is the class of every element you locate. Some commonly used members:

- `get_attribute(name)`: the value of one of the element's attributes.
- Screenshots: to capture the current page, call `driver.save_screenshot(filename)` on the driver.

The same `find_element` lookup methods are also available on WebElement objects themselves, searching only within that element's subtree.
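A quick sketch of the WebElement members above, again against Baidu's homepage (the input id `kw` comes from the earlier examples):

```python
inputTag = driver.find_element_by_id('kw')

# get_attribute reads an attribute off the located element
print(inputTag.get_attribute('name'))

# Full-page screenshots are taken on the driver
driver.save_screenshot('baidu.png')
```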
The following example applies method 1 to Lagou (拉勾網): it posts to the site's `positionAjax.json` interface directly, then requests each job's detail page and parses it with lxml:

```python
import requests
from lxml import etree
import time
import re

headers = {
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36",
    "Referer": "https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=",
    "Origin": "https://www.lagou.com",
    "Host": "www.lagou.com",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Cookie": "_ga=GA1.2.1602115737.1553064534; user_trace_token=20190320144853-39b1375a-4adc-11e9-a253-525400f775ce; LGUID=20190320144853-39b13f88-4adc-11e9-a253-525400f775ce; WEBTJ-ID=20190408120043-169fb1afd63488-06179b118ca307-7a1437-2073600-169fb1afd648ed; _gid=GA1.2.1826141825.1554696044; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%22169fb1bb2c41ea-04951e55adc96a-7a1437-2073600-169fb1bb2c58d0%22%2C%22%24device_id%22%3A%22169fb1bb2c41ea-04951e55adc96a-7a1437-2073600-169fb1bb2c58d0%22%7D; sajssdk_2015_cross_new_user=1; _putrc=4C5D2603888320CA; JSESSIONID=ABAAABAAADEAAFIB00F5DDE71D51610901CB9E0031812BA; login=true; unick=%E4%BC%8D%E6%99%93%E4%B8%BD; showExpriedIndex=1; showExpriedCompanyHome=1; showExpriedMyPublish=1; hasDeliver=49; gate_login_token=7b04a40da89145a1fbc90a3d719616d28c8b0a303344ac37; index_location_city=%E6%88%90%E9%83%BD; X_MIDDLE_TOKEN=1221e6b5040722dc86f5ceb557e11965; _gat=1; LGSID=20190408151935-a9976fbf-59ce-11e9-8cc8-5254005c3644; PRE_UTM=m_cf_cpc_baidu_pc; PRE_HOST=www.baidu.com; PRE_SITE=https%3A%2F%2Fwww.baidu.com%2Fbaidu.php%3Fsc.Ks000001qLT2daZnZWIez3ktR_jhHue3tONZubxU9mivhxeuj-Fxrjg6NnVcKTp-GYJ_YRvrc9_yOJ4uV-IEpfnPazPz7ctjve1qlDokCDfHYo9PV0uDfTmN1OunNUcCRU-sJuR8RZz60PAXzfKybAdvuCxUedbt8aWtTjAdCCuO298TwT8zN1-T5EG3kgkOweg0DHGIbvP55IZbr6.DY_NR2Ar5Od663rj6tJQrGvKD7ZZKNfYYmcgpIQC8xxKfYt_U_DY2yP5Qjo4mTT5QX1BsT8rZoG4XL6mEukmryZZjzsLTJplePXO-8zNqrw5Q9tSMj_qTr1x9tqvZul3xg1sSxW9qx-9LdoDkY4QPSl81_4pqO24rM-8dQjPakb3dS5iC0.U1Yk0ZDqs2v4VnL30ZKGm1Yk0Zfqs2v4VnL30A-V5HcsP0KM5gK1n6KdpHdBmy-bIykV0ZKGujYzr0KWpyfqnWcv0AdY5HDsnHIxnH0krNtknjc1g1nsnHNxn1msnfKopHYs0ZFY5HDLn6K-pyfq0AFG5HcsP0KVm1Y3nHDYP1fsrjuxnH0snNtkg1Dsn-ts0Z7spyfqn0Kkmv-b5H00ThIYmyTqn0K9mWYsg100ugFM5H00TZ0qPWm1PHm1rj640A4vTjYsQW0snj0snj0s0AdYTjYs0AwbUL0qn0KzpWYs0Aw-IWdsmsKhIjYs0ZKC5H00ULnqn0KBI1Ykn0K8IjYs0ZPl5fK9TdqGuAnqTZnVUhC0IZN15Hnkn1fknHT4P1DvPHR1PW61P100ThNkIjYkPHRYP10LrHTkPjTY0ZPGujd9rAwBmhuWrj0snjDzrj0Y0AP1UHYsPbm3wWTsrH0srjwarDcz0A7W5HD0TA3qn0KkUgfqn0KkUgnqn0KlIjYs0AdWgvuzUvYqn7tsg1Kxn7ts0Aw9UMNBuNqsUA78pyw15HKxn7tsg1nkrjm4nNts0ZK9I7qhUA7M5H00uAPGujYknjT1P1fkrjcY0ANYpyfqQHD0mgPsmvnqn0KdTA-8mvnqn0KkUymqn0KhmLNY5H00uMGC5H00uh7Y5H00XMK_Ignqn0K9uAu_myTqnfK_uhnqn0KEIjYs0AqzTZfqnanscznsc100mLFW5HRdPj0Y%26word%3D%25E6%258B%2589%25E5%258B%25BE%25E7%25BD%2591%26ck%3D1701.10.72.227.558.354.602.254%26shh%3Dwww.baidu.com%26sht%3D62095104_19_oem_dg%26us%3D1.0.1.0.1.301.0%26bc%3D110101; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2Flp%2Fhtml%2Fcommon.html%3Futm_source%3Dm_cf_cpc_baidu_pc%26m_kw%3Dbaidu_cpc_cd_e110f9_d2162e_%25E6%258B%2589%25E5%258B%25BE%25E7%25BD%2591; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1553064535,1554696044,1554707975; TG-TRACK-CODE=index_search; SEARCH_ID=16b25888bc6f489f981996ef505d6930; X_HTTP_TOKEN=3704e5535eab672a10080745514b2c7fac0430c282; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1554708001; LGRID=20190408152002-b9743b5b-59ce-11e9-9a84-525400f775ce",
    "X-Anit-Forge-Code": "0",
    "X-Anit-Forge-Token": '',
    "X-Requested-With": 'XMLHttpRequest'
}


def get_detail_page_url():
    datas = []
    url = 'https://www.lagou.com/jobs/positionAjax.json'
    form_data = {
        "first": "false",
        "pn": 1,
        "kd": "python"
    }
    params = {
        'city': '成都',
        'needAddtionalResult': 'false'
    }
    # Walk through the result pages, collecting each position's detail page
    for pn in range(1, 14):
        form_data['pn'] = pn
        response = requests.request(method='post', url=url, headers=headers, params=params, data=form_data)
        result = response.json()
        result_list = result['content']['positionResult']['result']
        for position in result_list:
            position_id = position['positionId']
            detail_url = 'https://www.lagou.com/jobs/%s.html' % position_id
            data = parse_detail_page(detail_url)
            datas.append(data)
            time.sleep(2)  # throttle requests to avoid getting banned
    return datas


def parse_detail_page(url):
    response = requests.request(method='get', url=url, headers=headers)
    text = response.text
    html = etree.fromstring(text, parser=etree.HTMLParser())
    position_name = html.xpath('//span[@class="name"]/text()')[0].strip()
    detail_list = html.xpath('//dd[@class="job_request"]//span')
    salary = detail_list[0].xpath('text()')[0].strip()
    city = detail_list[1].xpath('text()')[0].strip()
    city = re.sub(r'[\s/]', '', city)
    work_years = detail_list[2].xpath('text()')[0].strip()
    work_years = re.sub(r'[\s/]', '', work_years)
    education = detail_list[3].xpath('text()')[0].strip()
    education = re.sub(r'[\s/]', '', education)
    job_details = ''.join(html.xpath('//div[@class="job-detail"]//p//text()'))
    data = {
        "position_name": position_name,
        "salary": salary,
        "city": city,
        "work_years": work_years,
        "education": education,
        "job_details": job_details
    }
    return data


def main():
    datas = get_detail_page_url()
    print(datas)


if __name__ == '__main__':
    main()
```
And here is the same crawl implemented with method 2 — Selenium driving Chrome through the job-list pages and opening each posting's detail page in a new tab:

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from lxml import etree
import re
import time


class Lagouspider(object):
    driver_path = r'E:\study\chromedriver\chromedriver.exe'

    def __init__(self):
        self.driver = webdriver.Chrome(executable_path=Lagouspider.driver_path)
        self.url = 'https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput='
        self.positions = []

    def run(self):
        self.driver.get(self.url)
        while True:
            # Wait until the pager at the bottom of the list page has loaded
            WebDriverWait(driver=self.driver, timeout=10).until(
                EC.presence_of_element_located((By.XPATH, '//div[@class="pager_container"]/span[last()]')))
            source = self.driver.page_source
            self.parse_list_page(source)
            next_btn = self.driver.find_element_by_xpath('//div[@class="pager_container"]/span[last()]')
            # Stop once the "next page" button is disabled, i.e. on the last page
            if "pager_next_disabled" in next_btn.get_attribute('class'):
                break
            else:
                next_btn.click()
            time.sleep(1)

    def parse_list_page(self, source):
        html = etree.HTML(source)
        links = html.xpath('//a[@class="position_link"]/@href')
        for link in links:
            self.request_detail_page(link)
            time.sleep(1)

    def request_detail_page(self, url):
        # Open the detail page in a new tab and switch the driver to it
        self.driver.execute_script("window.open('%s')" % url)
        self.driver.switch_to.window(self.driver.window_handles[1])
        WebDriverWait(self.driver, timeout=10).until(
            EC.presence_of_element_located((By.XPATH, '//span[@class="name"]')))
        source = self.driver.page_source
        self.parse_detail_page(source)
        # Close the detail page
        self.driver.close()
        # Switch back to the job-list page
        self.driver.switch_to.window(self.driver.window_handles[0])

    def parse_detail_page(self, source):
        html = etree.HTML(source)
        position_name = html.xpath('//span[@class="name"]/text()')[0].strip()
        detail_list = html.xpath('//dd[@class="job_request"]//span')
        salary = detail_list[0].xpath('text()')[0].strip()
        city = detail_list[1].xpath('text()')[0].strip()
        city = re.sub(r'[\s/]', '', city)
        work_years = detail_list[2].xpath('text()')[0].strip()
        work_years = re.sub(r'[\s/]', '', work_years)
        education = detail_list[3].xpath('text()')[0].strip()
        education = re.sub(r'[\s/]', '', education)
        desc = ''.join(html.xpath('//dd[@class="job_bt"]//text()')).strip()
        data = {
            "name": position_name,
            "salary": salary,
            "city": city,
            "work_years": work_years,
            "education": education,
            "desc": desc
        }
        print(data)
        print('+' * 40)
        self.positions.append(data)


if __name__ == '__main__':
    spider = Lagouspider()
    spider.run()
```
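One practical tweak for a crawler like this: `ChromeOptions` (already used in the proxy section above) can also run Chrome headless, which is usually faster and more convenient on a server. A sketch, reusing the driver path from the spider above:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')      # run Chrome without a visible window
options.add_argument('--disable-gpu')   # commonly paired with --headless on Windows
driver = webdriver.Chrome(executable_path=r'E:\study\chromedriver\chromedriver.exe',
                          chrome_options=options)
```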
To be continued…