Crawling 映客 (Inke) live-stream user avatars with the Scrapy framework

1. Create the project: scrapy startproject yingke, then cd yingke

2. Create the spider: scrapy genspider live inke.cn

3. Analyze the response of http://www.inke.cn/hotlive_list.html, work out how the data is laid out in the response and where it sits, then extract it with response.xpath() (the scrapy shell sketch after this list shows one way to test the expressions).

4. Clean, filter, and save the data in a pipeline (see the pipeline sketch after the spider code below).

5. Implement pagination by building the next page's URL and requesting it.

6. Run the spider: scrapy crawl live
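
Before wiring the XPath expressions from step 3 into the spider, it is convenient to try them out in an interactive scrapy shell session. The sketch below uses the same selectors as the spider code further down; the class names (list_box, list_pic, list_user_info, list_user_name) are taken from that code and are assumptions about the page structure, so they may need adjusting if the page has changed.

scrapy shell "http://www.inke.cn/hotlive_list.html?page=1"

>>> div_list = response.xpath("//div[@class='list_box']")
>>> len(div_list)  # number of live rooms listed on this page
>>> div = div_list[0]
>>> div.xpath("./div[@class='list_pic']/a/img/@src").extract_first()
>>> div.xpath("./div[@class='list_user_info']/span[@class='list_user_name']/text()").extract_first()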

Note: this program saves the images to the local disk directly inside the spider (in the parse_img callback); normally you would use the yield keyword to hand the item off and save it in a pipeline (see the pipeline sketch after the code below).

# -*- coding: utf-8 -*-
import os
import re

import scrapy


class LiveSpider(scrapy.Spider):
    name = 'live'
    allowed_domains = ['inke.cn']
    start_urls = ['http://www.inke.cn/hotlive_list.html?page=1']

    def parse(self, response):
        div_list = response.xpath("//div[@class='list_box']")

        for div in div_list:
            item = {}
            img_src = div.xpath("./div[@class='list_pic']/a/img/@src").extract_first()
            if img_src is None:  # skip entries without an avatar image
                continue
            item["user_name"] = div.xpath(
                "./div[@class='list_user_info']/span[@class='list_user_name']/text()").extract_first()
            print(item["user_name"])
            yield scrapy.Request(  # request the avatar image itself
                response.urljoin(img_src),  # handles relative or protocol-relative URLs
                callback=self.parse_img,
                meta={"item": item}
            )
        # next page: bump the page number taken from the current URL.
        # only keep paging while the current page actually contained live rooms,
        # otherwise the crawl would request page after page forever
        if div_list:
            now_page = int(re.findall(r"page=(\d+)", response.request.url)[0])
            next_url = "http://www.inke.cn/hotlive_list.html?page={}".format(now_page + 1)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )

    def parse_img(self, response):
        # save the avatar to the local images/ directory, named after the user
        user_name = response.meta["item"]["user_name"]
        os.makedirs("images", exist_ok=True)  # make sure the target directory exists

        with open("images/{}.png".format(user_name), "wb") as f:
            f.write(response.body)
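
As the note above says, the usual pattern is to yield the item from the spider and let a pipeline do the saving instead of writing the file inside parse_img. A minimal sketch of that variant follows; the SaveAvatarPipeline class and the img_body field are hypothetical names introduced here, and the pipeline still has to be registered in ITEM_PIPELINES.

# pipelines.py -- minimal sketch, assuming parse_img is changed to yield the item:
#
#     def parse_img(self, response):
#         item = response.meta["item"]
#         item["img_body"] = response.body  # hypothetical field carrying the raw image bytes
#         yield item
import os


class SaveAvatarPipeline(object):
    """Write each avatar to images/<user_name>.png."""

    def process_item(self, item, spider):
        os.makedirs("images", exist_ok=True)  # make sure the target directory exists
        with open("images/{}.png".format(item["user_name"]), "wb") as f:
            f.write(item["img_body"])
        return item

# settings.py -- enable the pipeline:
# ITEM_PIPELINES = {"yingke.pipelines.SaveAvatarPipeline": 300}

Scrapy also ships with a built-in scrapy.pipelines.images.ImagesPipeline that downloads and stores images itself when items carry an image_urls list, which is another way to realise step 4.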

Run result:
