I recently worked on a Python 3 homework assignment involving web crawling, page parsing, Chinese text extraction, index building, and keyword search.
The libraries involved (as used in the code below) are: requests, BeautifulSoup (bs4), re, jieba, json, and time.
I'm posting the code here for quick reference; it implements a small demo.
Design and Implementation of a Search Engine
The assignment input is the following list of URLs:
["http://fiba.qq.com/a/20190420/001968.htm",
"http://sports.qq.com/a/20190424/000181.htm",
"http://sports.qq.com/a/20190423/007933.htm",
"http://new.qq.com/omn/SPO2019042400075107"]
Process: web crawling, page parsing, Chinese text extraction and analysis, and index building. The assignment requires using the third-party libraries from the textbook, keeping all intermediate results in memory, and printing the running time of this stage.
Search: prompt the user to enter a keyword to search for.
Output: the input links, sorted in descending order of the keyword's frequency, together with the word-frequency information and other auxiliary data in JSON format. Links whose documents do not contain the keyword are not printed. Finally, print the search time. For example:
1 "http:xxxxxx.htm" 3
2 "https:xxxx.htm" 2
3 "https:xxxxx.htm" 1
The main steps of the implementation are the crawler function crawler and the helper functions bs4_page_clean, re_chinese, jieba_create_index, and search.

import requests
from bs4 import BeautifulSoup
import json
import re
import jieba
import time
USER_AGENT = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) '
                            'Chrome/20.0.1092.0 Safari/536.6'}
URL_TIMEOUT = 10
SLEEP_TIME = 2
# dict_result format: {"1":
#                          {"url": "xxxxx", "word": {"word1": x, "word2": x, "word3": x}},
#                      "2":
#                          {"url": "xxxxx", "word": {"word1": x, "word2": x, "word3": x}}
#                     }
dict_result = {}
# list_search_result format: [
#     [url, count],
#     [url, count],
# ]
list_search_result = []
def crawler(list_URL):
    for i, url in enumerate(list_URL):
        print("Crawling page:", url, "...")
        page = requests.get(url, headers=USER_AGENT, timeout=URL_TIMEOUT)
        page.encoding = page.apparent_encoding  # guard against wrong charset detection
        result_clean_page = bs4_page_clean(page)
        result_chinese = re_chinese(result_clean_page)
        # print("Chinese content of the page:", result_chinese)
        dict_result[i + 1] = {"url": url, "word": jieba_create_index(result_chinese)}
        print("Crawler sleeping...")
        time.sleep(SLEEP_TIME)
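# Note (my addition, not part of the original assignment): requests.get can
# raise on timeouts or connection errors, which would abort the whole crawl.
# A minimal sketch of a more defensive fetch, assuming failed pages should
# simply be skipped; fetch_page is a hypothetical helper, not used above.
def fetch_page(url):
    try:
        page = requests.get(url, headers=USER_AGENT, timeout=URL_TIMEOUT)
        page.encoding = page.apparent_encoding  # let requests guess the real charset
        return page
    except requests.exceptions.RequestException as err:
        print("Fetch failed:", url, err)
        return None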
def bs4_page_clean(page):
    print("Regex: stripping page tags and other irrelevant content...")
    soup = BeautifulSoup(page.text, "html.parser")
    for script in soup.find_all('script'):
        script.extract()
    for style in soup.find_all('style'):
        style.extract()
    reg1 = re.compile("<[^>]*>")
    content = reg1.sub('', soup.prettify())
    return str(content)
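# Note (my addition): BeautifulSoup can also collect the visible text directly
# with get_text(), which avoids running a regex over prettify() output. A sketch
# of an equivalent cleaner under that assumption; whitespace handling may differ
# slightly from bs4_page_clean above.
def bs4_page_clean_alt(page):
    soup = BeautifulSoup(page.text, "html.parser")
    for tag in soup(["script", "style"]):  # soup(...) is shorthand for find_all
        tag.extract()
    return soup.get_text(separator="\n")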
def re_chinese(content):
    print("Regex: extracting Chinese text...")
    pattern = re.compile(u'[\u1100-\uFFFD]+?')
    result = pattern.findall(content)
    return ''.join(result)
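# Note (my addition): the range \u1100-\uFFFD covers far more than Chinese
# (Hangul, kana, punctuation, and many symbol blocks). If only Chinese
# characters are wanted, a narrower pattern over the CJK Unified Ideographs
# block is a common choice; a sketch, assuming the basic block suffices.
RE_CJK = re.compile(r'[\u4e00-\u9fff]+')

def re_chinese_strict(content):
    return ''.join(RE_CJK.findall(content))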
def jieba_create_index(string):
    list_word = jieba.lcut_for_search(string)
    dict_word_temp = {}
    for word in list_word:
        if word in dict_word_temp:
            dict_word_temp[word] += 1
        else:
            dict_word_temp[word] = 1
    return dict_word_temp
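# Note (my addition): the counting loop above is equivalent to
# collections.Counter over the token list; a one-line sketch.
from collections import Counter

def jieba_create_index_counter(string):
    return dict(Counter(jieba.lcut_for_search(string)))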
def search(string):
    for k, v in dict_result.items():
        if string in v["word"]:
            list_search_result.append([v["url"], v["word"][string]])
    # sort the result list by word frequency
    list_search_result.sort(key=lambda x: x[1], reverse=True)
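# Note (my addition): list_search_result is a module-level global, so repeated
# searches would accumulate hits from earlier queries. A sketch of a
# side-effect-free variant that returns a fresh, sorted list on each call.
def search_local(keyword):
    hits = [[v["url"], v["word"][keyword]]
            for v in dict_result.values()
            if keyword in v["word"]]
    hits.sort(key=lambda x: x[1], reverse=True)
    return hits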
if __name__ == "__main__":
    list_URL_sport = input("Enter the URL list: ")
    list_URL_sport = list_URL_sport.split(",")
    print(list_URL_sport)
    # strip the double quotes around each URL
    for i in range(len(list_URL_sport)):
        list_URL_sport[i] = list_URL_sport[i][1:-1]
    print(list_URL_sport)
    # list_URL_sport = ["http://fiba.qq.com/a/20190420/001968.htm",
    #                   "http://sports.qq.com/a/20190424/000181.htm",
    #                   "http://sports.qq.com/a/20190423/007933.htm",
    #                   "http://new.qq.com/omn/SPO2019042400075107"]
    time_start_crawler = time.time()
    crawler(list_URL_sport)
    time_end_crawler = time.time()
    print("Crawling and analysis time:", time_end_crawler - time_start_crawler)
    word = input("Enter the search keyword: ")
    time_start_search = time.time()
    search(word)
    time_end_search = time.time()
    print("Search time:", time_end_search - time_start_search)
    for i, row in enumerate(list_search_result):
        print(i + 1, row[0], row[1])
    print("Word-frequency information:")
    print(json.dumps(dict_result, ensure_ascii=False))
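As a usage note: judging from the quote-stripping loop in the main block, the URL list should be entered on a single line, comma-separated, with each URL wrapped in double quotes, e.g.:

"http://fiba.qq.com/a/20190420/001968.htm","http://sports.qq.com/a/20190424/000181.htm"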
I'm currently a backend development engineer, focusing mainly on backend development, data security, web crawling, IoT, and edge computing.
WeChat: yangzd1102
Github: @qqxx6661
Personal blog:
If this article helped you, feel free to bookmark it and share it with your friends~