Web Scraping --- 07. Full-Site Crawling (Manual), POST Requests, Cookies, Passing Parameters, Middleware, Selenium

I. Full-site data crawling (manual)

- yield scrapy.Request(url, callback): callback names the function used to parse the response
# Crawl the first five pages of the Sunshine Hotline (陽光熱線) site

import scrapy
from sunLinePro.items import SunlineproItem


class SunSpider(scrapy.Spider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    # generic URL template (do not modify it)
    url = 'http://wz.sun0769.com/index.php/question/questionType?type=4&page=%d'
    page = 1

    def parse(self, response):
        print('--------------------------page=', self.page)
        tr_list = response.xpath('//*[@id="morelist"]/div/table[2]//tr/td/table//tr')
        for tr in tr_list:
            title = tr.xpath('./td[2]/a[2]/text()').extract_first()
            status = tr.xpath('./td[3]/span/text()').extract_first()

            item = SunlineproItem()
            item['title'] = title
            item['status'] = status
            yield item

        if self.page < 5:
            # manually send a request for the next page's URL
            count = self.page * 30
            new_url = format(self.url % count)
            self.page += 1
            yield scrapy.Request(url=new_url, callback=self.parse)
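The SunlineproItem imported above is not shown in the original post; a minimal items.py matching the two fields used would look roughly like this:

# sunLinePro/items.py -- a minimal sketch inferred from the fields used above
import scrapy


class SunlineproItem(scrapy.Item):
    title = scrapy.Field()   # complaint title from the list page
    status = scrapy.Field()  # processing status of the complaint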

 

II. How to send POST requests and handle cookies

 

  1. Sending POST requests

    - To send a POST request:
        - override the parent class's start_requests(self) method
        - inside that method, simply yield scrapy.FormRequest(url, callback, formdata)

 

import scrapy


class PostdemoSpider(scrapy.Spider):
    name = 'postDemo'
    # allowed_domains = ['www.xxx.com']
    #https://fanyi.baidu.com/sug
    start_urls = ['https://fanyi.baidu.com/sug']
    # parent-class implementation: sends a GET request for each URL in start_urls
    # def start_requests(self):
    #     for url in self.start_urls:
    #         yield scrapy.Request(url=url,callback=self.parse)

    def start_requests(self):
        for url in self.start_urls:
            data = {
                'kw':'cat'
            }
            # manual POST requests are sent with FormRequest
            yield scrapy.FormRequest(url=url,callback=self.parse,formdata=data)

    def parse(self, response):
        print(response.text)
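When run, response.text should be the JSON suggestion list that Baidu's sug endpoint returns for kw=cat; this interface responds with JSON rather than HTML, which is why printing the raw text is enough here.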

 

  2. Cookie handling

    - Cookie handling: by default, Scrapy handles cookies automatically
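This behavior is controlled by two standard Scrapy settings; a quick sketch in case you need to toggle or inspect it:

# settings.py -- cookie-related Scrapy settings (both are standard options)
COOKIES_ENABLED = True   # the default; Scrapy's cookies middleware tracks session cookies for you
COOKIES_DEBUG = True     # log every Cookie/Set-Cookie header sent and received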

 

 

III. Passing parameters between requests

Passing parameters between requests:
    - When to use it: if the data you need does not all live on the same page, you must pass parameters between requests
    - Workflow:
        - Goal: scrape each movie's name from the index pages and its synopsis from the detail page (full-site crawl)
        - Parse the start URL's response (parse)
            - Extract:
                - the movie's name
                - the detail page's URL
                - manually send a request to the detail page URL (with detail_parse as the callback), passing parameters along with it (meta)
                    meta is handed to the detail_parse callback
                - build a generic URL template for the other page numbers
                - outside the for loop, manually send requests to the other pages' URLs (specifying parse as the callback)
            - Define the detail_parse callback and parse the movie synopsis inside it. Once parsed, the movie's name
                and synopsis must be packed into the same item.
                - Receive the item passed in via meta, store the parsed data in it, and submit the item to the pipeline

 

# -*- coding: utf-8 -*-
import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/frim/index1.html']

    # generic URL template (applies only to pages after the first)
    url = 'https://www.4567tv.tv/frim/index1-%d.html'
    page = 2

    # movie name (index page), synopsis (detail page)
    def parse(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            name = li.xpath('./div/a/@title').extract_first()
            detail_url = 'https://www.4567tv.tv'+li.xpath('./div/a/@href').extract_first()
            item = MovieproItem()
            item['name'] = name

            # send a GET request to the detail page URL
            # parameter passing: the dict given to meta is handed to the request's designated callback
            yield scrapy.Request(url=detail_url,callback=self.detail_parse,meta={'item':item})
        if self.page <= 5:
            new_url = format(self.url%self.page)
            self.page += 1
            yield scrapy.Request(url=new_url,callback=self.parse)
    # parse the detail page
    def detail_parse(self, response):
        # inside the callback, response.meta receives the dict passed via the request's meta argument
        item = response.meta['item']
        desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[2]/text()').extract_first()
        item['desc'] = desc
        yield item
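Again, the imported MovieproItem is not shown in the post; inferred from the fields used above, items.py would be roughly:

# moviePro/items.py -- a minimal sketch inferred from the fields used above
import scrapy


class MovieproItem(scrapy.Item):
    name = scrapy.Field()  # movie name from the index page
    desc = scrapy.Field()  # synopsis from the detail page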

 

IV. Middleware

 

    - Role of the downloader middleware: intercept, in one place, every request and response the project issues
    - Intercepting requests:
        - UA spoofing
        - proxy IPs
    - Intercepting responses:

 

   1. UA pool and proxy pool


UA pool: a pool of User-Agent strings. Purpose: disguise the project's requests as coming from as many different browser identities as possible.


Proxy pool: a pool of proxy IPs.

  - Purpose: spread the project's requests across as many different IP addresses as possible.

 

 ① In middlewares.py

import random

# intercepts all of the project's requests and responses in one place
class MiddlewearproDownloaderMiddleware(object):
    # UA pool
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
    # proxy pool
    PROXY_http = [
        '153.180.102.104:80',
        '195.208.131.189:56055',
    ]
    PROXY_https = [
        '120.83.49.90:9000',
        '95.189.112.214:35508',
    ]

    # intercepts normal requests: request is the intercepted request, spider is the instantiated spider object
    def process_request(self, request, spider):
        print('this is process_request!!!')
        # UA spoofing
        request.headers['User-Agent'] = random.choice(self.user_agent_list)
        return None

    # intercepts all responses
    def process_response(self, request, response, spider):

        return response

    # intercepts request objects that raised an exception
    def process_exception(self, request, exception, spider):
        print('this is process_exception!!!!')
        # set a proxy IP (note: Scrapy expects the proxy value to be a full URL, scheme included)
        if request.url.split(':')[0] == 'http':
            request.meta['proxy'] = 'http://' + random.choice(self.PROXY_http)
        else:
            request.meta['proxy'] = 'https://' + random.choice(self.PROXY_https)

        # resend the corrected request object
        return request

 

 ② In settings.py
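The original post left this step blank. Enabling the middleware follows the standard pattern scaffolded by scrapy startproject; the module path below assumes the project is named middlewearPro (inferred from the middleware class name):

# settings.py -- uncomment/add the DOWNLOADER_MIDDLEWARES entry to activate the middleware
DOWNLOADER_MIDDLEWARES = {
    'middlewearPro.middlewares.MiddlewearproDownloaderMiddleware': 543,
}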

 

 

 

 

   2. Intercepting responses and using selenium

Intercepting responses:
  modify the process_response method in the middleware file



Using selenium inside scrapy (browser automation):
  - define a bro attribute on the spider class
  - override the parent class's closed method in the spider and close bro there
  - write the selenium automation logic in the middleware's process_response
  

 

Example: scraping NetEase News (網易新聞) data

① In the spider file

# -*- coding: utf-8 -*-
import scrapy
from wangyiPro.items import WangyiproItem
from selenium import webdriver
class WangyiSpider(scrapy.Spider):
    name = 'wangyi'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://news.163.com/']
    # the browser is instantiated only once (class attribute)
    bro = webdriver.Chrome(executable_path='chromedriver.exe')

    urls = []  # ends up holding the URLs of the five news sections
    def parse(self, response):
        li_list = response.xpath('//*[@id="index2016_wrap"]/div[1]/div[2]/div[2]/div[2]/div[2]/div/ul/li')
        for index in [3,4,6,7,8]:
            li = li_list[index]
            new_url = li.xpath('./a/@href').extract_first()

            self.urls.append(new_url)
            # send a request to each of the five sections' URLs
            yield scrapy.Request(url=new_url,callback=self.parse_news)

    # parses each section's news data (the news titles)
    def parse_news(self,response):
        div_list = response.xpath('//div[@class="ndi_main"]/div')
        for div in div_list:
            title = div.xpath('./div/div[1]/h3/a/text()').extract_first()
            news_detail_url = div.xpath('./div/div[1]/h3/a/@href').extract_first()

            # instantiate an item object to hold the parsed title and content
            item = WangyiproItem()
            item['title'] = title

            # manually request the detail page URL in order to fetch the news content
            yield scrapy.Request(url=news_detail_url,callback=self.parse_detail,meta={'item':item})
    def parse_detail(self,response):
        item = response.meta['item']
        # parse the news content out of the response
        content = response.xpath('//div[@id="endText"]//text()').extract()
        content = ''.join(content)

        item['content'] = content

        yield item


    def closed(self,spider):
        print('The whole crawl has finished!!!')
        self.bro.quit()
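Design note: because bro is a class attribute, the Chrome instance is created once and shared across the whole crawl; Scrapy calls the overridden closed method once the spider finishes, which is the right place to quit the browser.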

 

 

② In the middleware file

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
from scrapy.http import HtmlResponse
from time import sleep

class WangyiproDownloaderMiddleware(object):


    def process_request(self, request, spider):

        return None

    # intercepts every response object in the project
    def process_response(self, request, response, spider):
        if request.url in spider.urls:
            # this is one of the five section pages, so its response needs special handling

            # grab the browser object defined on the spider class
            bro = spider.bro
            bro.get(request.url)

            bro.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            sleep(1)
            bro.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            sleep(1)

            # grab the page source, which now carries the dynamically loaded news data
            page_text = bro.page_source
            # build a new response object from it
            new_response = HtmlResponse(url=request.url,body=page_text,encoding='utf-8',request=request)
            return new_response
        else:
            return response

    def process_exception(self, request, exception, spider):

        pass

 

 ③ Notes

1. Put the browser driver executable (chromedriver.exe) where the spider can load it

2. Modify settings.py

3. Modify items.py

4. For persistent storage, modify the pipelines file (a sketch of steps 3 and 4 follows below)
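Minimal sketches of steps 3 and 4, with the field names inferred from the spider above (the original files are not shown):

# wangyiPro/items.py -- fields inferred from the spider above
import scrapy


class WangyiproItem(scrapy.Item):
    title = scrapy.Field()    # news title from the section page
    content = scrapy.Field()  # news body text from the detail page


# wangyiPro/pipelines.py -- a trivial pipeline that just prints each item;
# remember to enable it via ITEM_PIPELINES in settings.py
class WangyiproPipeline(object):
    def process_item(self, item, spider):
        print(item['title'])
        return item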