Scraping valid Pansou links with Python

 

Quite a few of the links that Pansou returns have already expired, which slows down the hunt for data, so I decided to use a crawler to filter out the valid ones and get some Python practice along the way.

The target site for this crawl is http://www.pansou.com. First, search for "python", then open the browser's developer tools.

You can see that the JSON data under this request is exactly what we want to scrape. After stripping the redundant parameters,

the remaining URL has the form http://106.15.195.249:8011/search_new?q=python&p=1, where q is the search term and p is the page number.
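Before wiring this into the crawler, it helps to build the query string programmatically rather than by string concatenation, so the search term is URL-encoded correctly (non-ASCII terms in particular). A minimal sketch, assuming the endpoint above stays stable; `build_search_url` is a helper name of my own, not from the site:

```python
from urllib.parse import urlencode

# Endpoint observed in the developer tools (an assumption that it stays stable)
BASE = "http://106.15.195.249:8011/search_new"

def build_search_url(query, page):
    """Build a Pansou search URL: q is the search term, p is the page number."""
    return "%s?%s" % (BASE, urlencode({"q": query, "p": page}))

print(build_search_url("python", 1))
# → http://106.15.195.249:8011/search_new?q=python&p=1
```

`urlencode` also percent-encodes Chinese search terms, which naive `%s` formatting would pass through raw.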

 

The implementation:

import requests
import json
from multiprocessing.dummy import Pool as ThreadPool
from threading import Lock

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"
}
write_lock = Lock()  # serialises CSV writes across worker threads
urls = []  # list of page URLs to crawl

# Build the list of search-result URLs
def get_urls(query):
    # Crawl 50 pages; q is the search term, p is the page number
    for i in range(1, 51):
        url = "http://106.15.195.249:8011/search_new?q=%s&p=%d" % (query, i)
        urls.append(url)

# Fetch one page of results and keep only the links that are still alive
def get_data(url):
    print("Loading, please wait...")
    # Fetch the JSON response and parse it into a dict
    resp = requests.get(url, headers=headers).content.decode("utf-8")
    resp = json.loads(resp)

    # If this page has no results, raise to stop the crawl
    if resp['list']['data'] == []:
        raise Exception("no more results")

    for item in resp['list']['data']:
        link = item['link']    # Baidu cloud share link
        title = item['title']  # result title
        # Visit the share link; a live page contains the text "失效時間:"
        # ("expiry time:"), which dead-link pages lack
        link_content = requests.get(link, headers=headers).content.decode("utf-8")
        if "失效時間:" in link_content:
            # Append to the CSV under a lock so threads do not interleave rows
            with write_lock:
                with open("wangpanziyuan.csv", "a+", encoding="utf-8") as file:
                    file.write(title + "," + link + "\n")
    print("ok")

if __name__ == '__main__':
    # Pass the search term here
    get_urls("python")
    # Thread pool with 3 workers
    pool = ThreadPool(3)
    try:
        results = pool.map(get_data, urls)
    except Exception as e:
        print(e)
    pool.close()
    pool.join()
    print("Done")
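The liveness check at the heart of the script boils down to a substring test on the share page's HTML. Pulled out as a standalone helper it can be exercised offline with stub pages (the stub HTML below is made up for illustration; in the crawler the real page comes from requests.get):

```python
def is_link_alive(page_html):
    """A share link is treated as alive if its page shows "失效時間:"
    ("expiry time:"); pages for cancelled/dead shares lack that text."""
    return "失效時間:" in page_html

# Offline check with made-up stub pages
live_page = "<html>...失效時間: 永久有效...</html>"
dead_page = "<html>分享的文件已經被取消</html>"
print(is_link_alive(live_page), is_link_alive(dead_page))  # → True False
```

Keeping the check in its own function also makes it easy to swap in a stricter test later (for example, checking the HTTP status code as well) without touching the crawl loop.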