Python 3 crawler: scraping job listings from Zhaopin (智聯招聘)

The code is below; if you spot any problems, feel free to point them out in the comments.

# -*- coding: utf-8 -*-
"""
Created on Tue Aug  7 20:41:09 2018
@author: brave-man
blog: http://www.cnblogs.com/zrmw/
"""

import requests
import json

def getDetails(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0'}
    res = requests.get(url, headers=headers)
    res.encoding = 'utf-8'
    data = json.loads(res.text)  # the endpoint returns JSON, so parse it directly

    # create (or truncate) the output file before results are appended to it
    try:
        with open('jobDetails.txt', 'w') as f:
            print('created {} successfully'.format('jobDetails.txt'))
    except OSError:
        print('failed to create jobDetails.txt')

    for i in data['data']['results']:
        details = {'jobName': i['jobName'],
                   'salary': i['salary'],
                   'company': i['company']['name'],
                   'companyUrl': i['company']['url'],
                   'positionURL': i['positionURL']
                   }
        toFile(details)

def toFile(d):
    dj = json.dumps(d)
    try:
        with open('jobDetails.txt', 'a') as f:
            f.write(dj + '\n')  # one JSON object per line, so the file can be parsed back later
    except OSError:
        print('error writing jobDetails.txt')

def main():
    url = 'https://fe-api.zhaopin.com/c/i/sou?pageSize=60&cityId=635&workExperience=-1&education=-1&companyType=-1&employmentType=-1&jobWelfareTag=-1&kw=python&kt=3&lastUrlQuery={"jl":"635","kw":"python","kt":"3"}'
    getDetails(url)

if __name__ == "__main__":
    main()
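The long hand-built query string in main() can instead be assembled from a dict with the standard library's urlencode, which percent-encodes each value (including the embedded lastUrlQuery JSON) automatically. A minimal sketch using the same parameter names as the URL above:

```python
import json
from urllib.parse import urlencode

# Same query as in main(), expressed as a dict; urlencode handles the
# percent-encoding, including the lastUrlQuery JSON string.
params = {
    'pageSize': 60,
    'cityId': 635,
    'workExperience': -1,
    'education': -1,
    'companyType': -1,
    'employmentType': -1,
    'jobWelfareTag': -1,
    'kw': 'python',
    'kt': 3,
    'lastUrlQuery': json.dumps({'jl': '635', 'kw': 'python', 'kt': '3'}),
}

url = 'https://fe-api.zhaopin.com/c/i/sou?' + urlencode(params)
```

This keeps the search keyword and city in one place, so changing them no longer means editing a long URL string by hand.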

After running the code above, a txt file holding the job information, jobDetails.txt, is created in the same directory as the code.
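Since toFile writes the records as JSON objects one after another, the file cannot be read back with a single json.load. A small helper (hypothetical, not part of the original script) can decode the concatenated objects one at a time with json.JSONDecoder.raw_decode:

```python
import json

def readJobDetails(path='jobDetails.txt'):
    # jobDetails.txt holds JSON objects written back to back, so decode
    # them one at a time instead of calling json.load on the whole file.
    decoder = json.JSONDecoder()
    records = []
    with open(path) as f:
        text = f.read()
    pos = 0
    while pos < len(text):
        # skip whitespace (e.g. newlines) between objects
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        obj, pos = decoder.raw_decode(text, pos)
        records.append(obj)
    return records
```

raw_decode returns the parsed object plus the index where it stopped, so this works whether or not the objects are separated by newlines.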

This code only fetches one page of job listings; a follow-up will add code for building the URLs and fetching every page of listings.
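One way to prepare for that follow-up is to generate the per-page URLs up front. The sketch below assumes the endpoint accepts a start offset parameter alongside pageSize; that parameter name is an assumption and should be verified against the real API before use:

```python
def buildPageUrls(keyword='python', city='635', pageSize=60, pages=5):
    # Assumption: the fe-api endpoint pages results via a `start` offset
    # that advances by pageSize per page (verify before relying on this).
    base = ('https://fe-api.zhaopin.com/c/i/sou'
            '?pageSize={ps}&start={start}&cityId={city}&kw={kw}&kt=3')
    return [base.format(ps=pageSize, start=p * pageSize, city=city, kw=keyword)
            for p in range(pages)]
```

Each URL could then be passed to getDetails in a loop to collect every page.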

The Zhaopin site does have one small gotcha: not every job-detail page uses Zhaopin's own page layout. Clicking some listings redirects you to the recruiting site on the company's own website; I'll deal with that case specifically when it comes up.
