Scraping 51JOB Job Listings with Scrapy

51JOB's data is quite a bit easier to work with than BOSS直聘's. As before, start by defining the item fields in items.py:

import scrapy


class PositionViewItem(scrapy.Item):
    # define the fields for your item here like:
    
    name: scrapy.Field = scrapy.Field()             # job title
    salary: scrapy.Field = scrapy.Field()           # salary
    education: scrapy.Field = scrapy.Field()        # education requirement
    experience: scrapy.Field = scrapy.Field()       # experience requirement
    jobjd: scrapy.Field = scrapy.Field()            # job ID
    district: scrapy.Field = scrapy.Field()         # district
    category: scrapy.Field = scrapy.Field()         # industry category
    scale: scrapy.Field = scrapy.Field()            # company size
    corporation: scrapy.Field = scrapy.Field()      # company name
    url: scrapy.Field = scrapy.Field()              # job URL
    createtime: scrapy.Field = scrapy.Field()       # posting date
    posistiondemand: scrapy.Field = scrapy.Field()  # job responsibilities
    cortype: scrapy.Field = scrapy.Field()          # company type

Then, as before, use the URL of a nationwide search for data-analysis positions as the start URL, and remember to fake a request header:

from typing import Dict

import scrapy
from scrapy import Request


class Job51Spider(scrapy.Spider):  # class name assumed; the original snippet only shows the class body
    name: str = 'job51Analysis'
    url: str = 'https://search.51job.com/list/000000,000000,0000,00,9,99,%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare='

    headers: Dict = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
        'Referer': 'https://mkt.51job.com/tg/sem/pz_2018.html?from=baidupz'
    }

    def start_requests(self) -> Request:
        yield Request(self.url, headers=self.headers)

Just pass the predefined headers into the Request as an argument and you're done.

To start, I just use the default parse callback (being lazy here; it's only for temporary use, so I didn't define a custom one):

    def parse(self, response):
        if response.status == 200:
            # select every job row; the annotation assumes `from scrapy import selector`
            PositionInfos: selector.SelectorList = response.selector.xpath(r'//div[@class="el"]')

So how do we get the details of each individual job? First use XPath to pull out the list of job rows, then run a second selection against each element of that list:

            for positioninfo in PositionInfos:  # iterate over the SelectorList
                pvi = PositionViewItem()
                pvi['name']: str = ''.join(positioninfo.xpath(r'p[@class="t1 "]/span/a/text()').extract()).strip()
                pvi['salary']: str = ''.join(positioninfo.xpath(r'span[@class="t4"]/text()').extract())
                pvi['createtime']: str = ''.join(positioninfo.xpath(r'span[@class="t5"]/text()').extract())
                pvi['district']: str = ''.join(positioninfo.xpath(r'span[@class="t3"]/text()').extract())
                pvi['corporation']: str = ''.join(positioninfo.xpath(r'span[@class="t2"]/a/text()').extract()).strip()
                pvi['url']: str = ''.join(positioninfo.xpath(r'p[@class="t1 "]/span/a/@href').extract())

The 51JOB search results alone don't show the full job details; you have to click through to the detail page. So grab the detail-page URL here and hand it off to the next level of processing:

                # handle the second-level (detail page) URL
                if len(pvi['url']) > 0:
                    request: Request = Request(pvi['url'], callback=self.positiondetailparse, headers=self.headers)
                    request.meta['positionViewItem'] = pvi
                    yield request

The code above uses a custom callback to handle the second-level page. It also attaches the item via the request's meta attribute, which is how you pass values along with a request (meta is carried on the Request object inside Scrapy, not in the HTTP GET itself, but it's still best not to stuff large amounts of data into it). That way positiondetailparse can pick up the item instance that was passed through.

    def positiondetailparse(self, response) -> PositionViewItem:
        if response.status == 200:
            pvi: PositionViewItem = response.meta['positionViewItem']
            pvi['posistiondemand']: str = ''.join(response.selector.xpath(r'//div[@class="bmsg job_msg inbox"]//p/text()').extract()).strip()
            pvi['cortype']: str = ''.join(response.selector.xpath(r'//div[@class="com_tag"]/p[@class="at"][1]/@title').extract()).strip()  # note: XPath indices start at 1
            pvi['scale']: str = ''.join(response.selector.xpath(r'//div[@class="com_tag"]/p[@class="at"][2]/@title').extract()).strip()
            pvi['category']: str = ''.join(response.selector.xpath(r'//div[@class="com_tag"]/p[@class="at"][3]/@title').extract())
            pvi['education']: str = ''.join(response.selector.xpath(r'//p[@class="msg ltype"]/text()[3]').extract()).strip()
            yield pvi
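
As an aside, newer Scrapy releases (1.7 and up) also support cb_kwargs, which passes values to the callback as ordinary keyword arguments instead of going through meta. A toy spider sketching that approach (the names and URLs here are made up for illustration, not part of the original project):

import scrapy
from scrapy import Request


class CbKwargsDemoSpider(scrapy.Spider):
    # minimal demo of cb_kwargs; not the 51JOB spider
    name = 'cbkwargs_demo'
    start_urls = ['https://example.com/']

    def parse(self, response):
        item = {'listing_url': response.url}
        # cb_kwargs hands `item` to the callback as a keyword argument
        yield Request('https://example.com/detail', callback=self.parse_detail,
                      cb_kwargs={'item': item})

    def parse_detail(self, response, item):
        item['title'] = response.xpath('//title/text()').get()
        yield item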

When parsing the detail page, note that element indices in XPath selectors start at 1, not 0.
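
If the 1-based indexing looks surprising, here is a quick check using parsel, the selector library underneath Scrapy (the HTML is made up for the example):

from parsel import Selector

# two <p> elements; [1] selects the first one, not the second
sel = Selector(text='<div><p>first</p><p>second</p></div>')
print(sel.xpath('//p[1]/text()').get())  # prints 'first'
print(sel.xpath('//p[2]/text()').get())  # prints 'second'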

Once all the fields are filled in, yield the pvi to the pipeline for processing and storage.

 

After the jobs on one page have been scraped, we naturally also want the next page. In parse:

            nexturl = ''.join(response.selector.xpath(r'//li[@class="bk"][2]/a/@href').extract())
            print(nexturl)
            if nexturl:
                # nexturl = urljoin(self.url, ''.join(nexturl))
                print(nexturl)
                yield Request(nexturl, headers=self.headers)

If you don't pass a callback argument, Scrapy falls back to the default parse method, which is exactly what we need to parse the next page.
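
If the extracted href turns out to be relative (the commented-out urljoin suggests this was a concern), response.follow is a handy alternative, assuming Scrapy 1.4 or later: it resolves the URL against the current page and still falls back to parse when no callback is given. A sketch of the same next-page step:

            # variant of the block above using response.follow, which handles
            # relative hrefs without an explicit urljoin
            nexturl = response.selector.xpath(r'//li[@class="bk"][2]/a/@href').get()
            if nexturl:
                yield response.follow(nexturl, headers=self.headers)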

Finally, add the item-handling code in pipelines.py. Here I chose to store the data in a CSV file.

import os
import csv

class LearningPipeline(object):

    def __init__(self):
        # open the output file once when the pipeline is instantiated
        self.file = open('51job.csv', 'a+', encoding='utf-8', newline='')
        self.writer = csv.writer(self.file, dialect='excel')

    def process_item(self, item, spider):
        # called once for every item the spider yields
        if item['name']:
            self.writer.writerow([item['name'], item['salary'], item['district'], item['createtime'],
                                  item['education'], item['posistiondemand'], item['corporation'],
                                  item['cortype'], item['scale'], item['category']])
        return item

    def close_spider(self, spider):
        # called when the spider closes; release the file handle
        self.file.close()

The __init__ method opens the file up front, and process_item is the default item-handling method; it is called once for every item that is returned!

close_spider runs when the spider shuts down; closing the file there is all that's needed.
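
One step the post doesn't show: the pipeline only runs if it is registered in settings.py. A minimal sketch, assuming the Scrapy project package is named Learning to match LearningPipeline above (adjust the path to your own project):

# settings.py
ITEM_PIPELINES = {
    'Learning.pipelines.LearningPipeline': 300,  # the number is the pipeline's priority
}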

Note that the resulting CSV shows garbled Chinese when opened directly in Excel. I opened the file in Notepad and re-saved a copy with ANSI encoding, and the garbling went away.
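
A simpler fix is to write the file with a UTF-8 byte order mark, which Excel uses to detect the encoding, so no manual re-saving is needed; a small standalone check:

import csv

# 'utf-8-sig' prepends a BOM so Excel opens the CSV with the right encoding
with open('bom_test.csv', 'w', encoding='utf-8-sig', newline='') as f:
    csv.writer(f, dialect='excel').writerow(['數據分析', '15-20K'])  # sample row with Chinese text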

 

That's it; the spider is ready to run. It's only a very basic, simple crawler, kept here as a note to myself.
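
For reference, the spider can be started with the usual scrapy crawl job51Analysis command, or from a small runner script like the sketch below (assumes the standard Scrapy project layout):

# run.py -- launch the spider programmatically instead of via the scrapy CLI
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('job51Analysis')  # spider name defined in the spider class
process.start()                 # blocks until the crawl finishes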
