I mainly used two approaches to extract article body text. With the first, parsing news pages with things like XPath, CSS selectors, regular expressions or BeautifulSoup, I kept running into all sorts of odd problems, which was a real headache. The second approach is the one described later in the post, and it is the one I mainly recommend: the newspaper library.
At my supervisor's company I needed to lean heavily on a search engine to get the content I wanted as quickly as possible and then build a corpus from it, so I used Python's BeautifulSoup and urllib to scrape some web pages as training material.
The search query was of the form "person name + company + 說" ("said"). The whole job really takes only three steps: first, search directly on the Baidu home page and collect the links from the result pages; second, follow each of those links and extract the article body; third, save the extracted text and segment it, for instance picking out what a person has said by looking for cues such as 說, 表示, 說到, 曾經說 or the quotation marks 「 」 (a rough sketch of that last step follows below). I won't go into the segmentation any further; this post is mainly about extracting the body text.
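As a quick illustration of that quote-filtering idea, here is a minimal sketch of my own, not the original project's code: it pulls quoted speech out of already-extracted body text using a person name, a small list of reporting verbs, and Chinese quotation marks. The verb list, the regular expression and the sample sentence are all assumptions made for the example.

import re

# Reporting verbs used as cues; an illustrative assumption, not the project's exact list
REPORT_VERBS = ['說', '表示', '說到', '曾經說']

def extract_quotes(name, text):
    # Match: name ... reporting verb ... 「quoted text」 (or “quoted text”)
    pattern = re.compile(
        re.escape(name)
        + r'[^。]*?(?:' + '|'.join(REPORT_VERBS) + r')'
        + r'[^。]*?[「“](.+?)[」”]'
    )
    return [m.group(1) for m in pattern.finditer(text)]

# Made-up usage example
sample = '馬雲在大會上表示:「夢想還是要有的。」'
print(extract_quotes('馬雲', sample))   # ['夢想還是要有的。']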
Extracting the links
Looking at the page source, the hyperlinks we want all sit between <div class="main"> and </div> tags (different sites use different layouts), so BeautifulSoup is the better tool for pulling them out; extracting with urllib alone also picks up other URLs that are hard to tell apart.
The code is as follows:
# encoding=utf-8
import urllib2
from bs4 import BeautifulSoup

# Get the list of <a> tags from one listing page
def get_url_list(purl):
    # Open the connection
    req = urllib2.Request(purl, headers={'User-Agent': "Magic Browser"})
    page = urllib2.urlopen(req)
    soup = BeautifulSoup(page.read())
    # Walk down to the news-list container
    a_div = soup.find('div', {'class': 'main'})
    b_div = a_div.find('div', {'class': 'left'})
    c_div = b_div.find('div', {'class': 'newsList'})
    links4 = []
    # Collect the <a> tags
    for link_aa in c_div:
        for link_bb in link_aa:
            links4.append(link_bb.find('a'))
    links4 = list(set(links4))
    # NavigableString.find() returns -1 and Tag.find() may return None; drop both
    if -1 in links4:
        links4.remove(-1)
    if None in links4:
        links4.remove(None)
    return links4

# Pick the news links we want out of the list
# and work out the url of the next page to visit
def get_url(links):
    url = []
    url2 = ''
    url3 = ''
    url4 = ''
    i = 0
    for link in links:
        if link.contents == [u'後一天']:
            continue
        # The "previous page" / "next page" links are messy to tell apart
        # Skip the url of the "previous page" link (does not seem useful)
        if str(link.contents).find(u'/> ') != -1:
            continue
        # Keep the url of the "next page" link
        if str(link.contents).find(u' <img') != -1:
            url2 = link.get("href")
            i = 1
            continue
        if link.contents == [u'前一天']:
            url3 = link.get("href")
            continue
        url.append(link.get("href"))
    if i == 1:
        url4 = url2
    else:
        url4 = url3
    return url, url4

def main():
    link_url = []
    link_url_all = []
    link_url_all_temp = []
    next_url = ''
    # Starting url
    purl = 'http://news.ifeng.com/listpage/4550/20140903/1/rtlist.shtml'
    link_url = get_url_list(purl)
    link_url_all, next_url = get_url(link_url)
    # Repeat for 100 pages
    for i in range(100):
        link_url = get_url_list(next_url)
        link_url_all_temp, next_url = get_url(link_url)
        link_url_all = link_url_all + link_url_all_temp
    # Save all the urls to a file
    path = 'd:\\url.txt'
    fp = open(path, 'w')
    for link in link_url_all:
        fp.write(str(link) + '\n')
    fp.close()

if __name__ == '__main__':
    main()
But!!! This approach really is not good, not good at all, and it is far too cumbersome.
Instead, we can use the Article class from Python's newspaper library to extract the article content directly, with no need to pick apart the divs and CSS of each site's pages.
from newspaper import Article
url = 'http://news.ifeng.com/a/20180504/58107235_0.shtml'
news = Article(url, language='zh')
news.download()
news.parse()
print(news.text)
print(news.title)
# print(news.html)
# print(news.authors)
# print(news.top_image)
# print(news.movies)
# print(news.keywords)
# print(news.summary)
You can also import the package itself:
import newspaper
news = newspaper.build(url, language='zh')
article = news.articles[0]
article.download()
article.parse()
print(article.text)
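A note on newspaper.build: it is normally pointed at a site's front page rather than a single article URL, and the source object it returns exposes every article it discovers. A minimal sketch under that assumption (the front-page URL and the limit of five articles are just examples):

import newspaper

# Build a source from a news site's front page (URL is only an example)
site = newspaper.build('http://news.ifeng.com', language='zh', memoize_articles=False)
print(site.size())                    # number of articles discovered on the site
for article in site.articles[:5]:     # only process the first few here
    article.download()
    article.parse()
    print(article.title)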
The code I actually used is shown below.
The first block fetches the result URLs from the Baidu search pages and saves them to a txt file.
The second block follows each of the saved links and extracts the article body.
This approach is vastly better than the previous one; the library really is that easy to use!!! Strongly recommended.
import requests
from lxml import etree

def baidu_search(wd, pn_max, save_file_name):
    # Baidu search crawler: given a keyword, a number of result pages and an
    # output file, write the result urls to the file
    url = "https://www.baidu.com/s"
    # url = "https://www.google.com.hk"
    # return_set = set()
    with open(save_file_name, 'a', encoding='utf-8') as out_data:
        for page in range(pn_max):
            pn = page * 10
            querystring = {
                "wd": wd,
                "pn": pn,
                "oq": wd,
                "ie": "utf-8",
                "usm": 2,
            }
            headers = {
                'pragma': "no-cache",
                'accept-encoding': "gzip, deflate, br",
                'accept-language': "zh-CN,zh;q=0.8",
                'upgrade-insecure-requests': "1",
                'user-agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36",
                'accept': "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
                'cache-control': "no-cache",
                'connection': "keep-alive",
            }
            try:
                response = requests.request("GET", url, headers=headers, params=querystring)
                html = etree.HTML(response.text, parser=etree.HTMLParser(encoding='utf-8'))
                # print(response.text)
                titles = []
                titles_tags = html.xpath('//div[@id="content_left"]/div/h3/a')
                for tag in titles_tags:
                    title = tag.xpath('string(.)').strip()
                    titles.append(title)
                urls = html.xpath('//div[@id="content_left"]/div/h3/a/@href')
                print(len(urls))
                for data in zip(titles, urls):
                    # out_data.write(data[0] + ',' + data[1] + '\n')
                    out_data.write(data[1] + '\n')
            except Exception as e:
                print("Failed to load the results page", e)
                continue

# def exc_search_content():
#     url = "https://www.baidu.com/s"
#     response = requests.request("GET", url)
#     responser = requests.request("GET", response.url)
#     html = etree.HTML(responser.content, parser=etree.HTMLParser(encoding='utf-8'))
#     urls = html.xpath

if __name__ == '__main__':
    wd = "馬雲 阿里巴巴 曾經說"
    pn = 4
    save_file_name = "save_url.txt"
    baidu_search(wd, pn, save_file_name)
from newspaper import Article

urlLinks = []
save_urls = 'save_url.txt'
# Read back the urls saved by the previous script
file = open(save_urls, 'r')
for line in file:
    urlLinks.append(line)
file.close()
print(len(urlLinks))
print(urlLinks)

# Extract the body text of every saved url
for link in urlLinks:
    try:
        news = Article(link.strip(), language='zh')
        news.download()
        news.parse()
        print(news.text)
        print('-------------------------------------------------------------------------------------------------------')
    except Exception as e:
        print("Error: failed to download or parse the page", e)
Now a word about installing the library: adding the newspaper package through the interpreter settings (in PyCharm) kept failing, so just use pip.
Open a command-line window and run pip3 install --ignore-installed --upgrade newspaper3k; after a short wait it is installed.
If you do not tell it which language the article uses, newspaper will try to detect the language automatically.
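For instance, a minimal sketch (reusing the ifeng article URL from above) that leaves out the language argument and lets the library guess:

from newspaper import Article

# No language given; newspaper tries to detect it from the page itself
news = Article('http://news.ifeng.com/a/20180504/58107235_0.shtml')
news.download()
news.parse()
print(news.title)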
Alternatively, since the Python 3 version of the library is newspaper3k anyway, you can simply open the terminal at the bottom-left of PyCharm and run pip install newspaper3k.
Author: lijiaqi0612. Source: CSDN. Original post: https://blog.csdn.net/lijiaqi0612/article/details/81707128. Copyright notice: this is the blogger's original article; please include a link to the original when reposting.