The article below is from the Python技術 account, by 派森醬.
Today I came across a Zhihu discussion, "Is looking for a partner on 世紀佳緣 (Jiayuan.com) reliable?" It has 1,903 followers and 1,940,753 views, and most of its 355 answers say it is not. Can scraping Jiayuan's data with Python back that up?
Open the Jiayuan site in a desktop browser and search for women aged 20 to 30, any region.
After paging through a few results, I found a request to search_v2.php. Its response is an irregular JSON string containing the nickname, sex, marital status, match conditions, and so on.
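"Irregular" here means the body is not bare JSON: the payload sits inside wrapper text and contains stray escape characters, so `json.loads` chokes on it directly. A minimal cleaning sketch — the wrapper markers in the demo string are made up; inspect the real response to see what actually surrounds the payload:

```python
import json


def clean_irregular_json(raw: str) -> dict:
    """Keep only the outermost {...} span, drop stray backslashes,
    then parse leniently (strict=False tolerates control characters)."""
    start, end = raw.find('{'), raw.rfind('}')
    payload = raw[start:end + 1].replace('\\', '')
    return json.loads(payload, strict=False)


# hypothetical wrapper text around the JSON body
print(clean_irregular_json('wrapper##{"userInfo": [{"uid": "1"}]}##wrapper'))
```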
Open the Headers tab and scroll to the bottom. Among the request parameters, sex is gender, stc is the age filter, p is the page number, and listStyle means "has photo".
Fetching via GET with url + parameters, I scraped 10,000 pages — 240,116 records in total.
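Those parameters can be assembled with the standard library instead of string concatenation — a sketch, with the values copied from the captured request (the exact way stc encodes the 20-30 age filter is the site's own convention):

```python
from urllib.parse import urlencode

BASE = 'http://search.jiayuan.com/v2/search_v2.php'


def page_url(page: int) -> str:
    # sex=f -> female, stc -> the 20-30 age filter,
    # p -> page number, listStyle=bigPhoto -> "has photo"
    params = {'key': '', 'sex': 'f', 'stc': '23:1,2:20.30',
              'sn': 'default', 'sv': 1, 'p': page,
              'f': 'select', 'listStyle': 'bigPhoto'}
    return BASE + '?' + urlencode(params)


print(page_url(1))
```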
The only extra module to install is openpyxl, used here to filter out illegal characters.
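openpyxl's ILLEGAL_CHARACTERS_RE is just a compiled regex over the control characters Excel rejects in cell values. A rough stdlib-only equivalent, so you can see what it strips (the exact ranges are my approximation of openpyxl's pattern):

```python
import re

# control characters minus \t, \n, \r -- roughly what
# openpyxl's ILLEGAL_CHARACTERS_RE removes from cell values
ILLEGAL_RE = re.compile(r'[\000-\010\013\014\016-\037]')


def strip_illegal(text: str) -> str:
    return ILLEGAL_RE.sub('', text)


print(strip_illegal('nick\x01name\x1f'))
```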
```python
# coding:utf-8
import csv
import json

import requests
from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE

line_index = 0


def fetchURL(url):
    headers = {
        'accept': '*/*',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36',
        'Cookie': 'guider_quick_search=on; accessID=20201021004216238222; PHPSESSID=11117cc60f4dcafd131b69d542987a46; is_searchv2=1; SESSION_HASH=8f93eeb87a87af01198f418aa59bccad9dbe5c13; user_access=1; Qs_lvt_336351=1603457224; Qs_pv_336351=4391272815204901400%2C3043552944961503700'
    }
    r = requests.get(url, headers=headers)
    r.raise_for_status()
    # drop characters that cannot round-trip through gbk
    return r.text.encode('gbk', 'ignore').decode('gbk', 'ignore')


def parseHtml(html):
    html = html.replace('\\', '')
    html = ILLEGAL_CHARACTERS_RE.sub(r'', html)
    s = json.loads(html, strict=False)
    global line_index
    userInfo = []
    for key in s['userInfo']:
        line_index = line_index + 1
        a = (key['uid'], key['nickname'], key['age'], key['work_location'],
             key['height'], key['education'], key['matchCondition'],
             key['marriage'], key['shortnote'].replace('\n', ' '))
        userInfo.append(a)
    with open('sjjy.csv', 'a', newline='') as f:
        writer = csv.writer(f)
        writer.writerows(userInfo)


if __name__ == '__main__':
    for i in range(1, 10001):   # 10,000 pages
        url = ('http://search.jiayuan.com/v2/search_v2.php'
               '?key=&sex=f&stc=23:1,2:20.30&sn=default&sv=1'
               '&p=' + str(i) + '&f=select&listStyle=bigPhoto')
        html = fetchURL(url)
        print(str(i) + ' page ' + str(len(html)) + '*********' * 20)
        parseHtml(html)
```
While deduplicating the data I found a lot of repeats, and at first assumed my code was buggy. After a long bug hunt I finally discovered that beyond roughly page 100 the site itself serves heavily duplicated data. The two screenshots below are pages 110 and 111 — notice the familiar faces.
Page 110 data
Page 111 data
After filtering duplicates, only 1,872 records remain — the data was seriously padded.
```python
import csv


def filterData():
    filter = []
    csv_reader = csv.reader(open('sjjy.csv', encoding='gbk'))
    i = 0
    for row in csv_reader:
        i = i + 1
        print('processing row ' + str(i))
        if row[0] not in filter:   # row[0] is the uid
            filter.append(row[0])
    print(len(filter))
```
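The list-membership test above is quadratic in the number of rows, which gets slow at 240k records; a set does the same count in linear time. A sketch of the same dedup-by-uid idea:

```python
import csv


def count_unique_uids(path: str) -> int:
    # set membership is O(1), so this stays fast even at 240k rows
    seen = set()
    with open(path, encoding='gbk', newline='') as f:
        for row in csv.reader(f):
            if row:                  # skip blank lines
                seen.add(row[0])     # uid is the first column
    return len(seen)
```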