Get all the information of one news article

Given the link newsUrl of a news article, get all the information of that article.

Title, author, publishing unit, auditor, source

Publication time: convert it to the datetime type
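
For example, a timestamp string in the format used on the page (assuming the '%Y-%m-%d %H:%M:%S' layout that the code further down parses) can be converted like this:

from datetime import datetime

# minimal sketch: turn an example timestamp string into a datetime object
dt = datetime.strptime('2019-03-28 09:00:00', '%Y-%m-%d %H:%M:%S')
print(dt, type(dt))   # 2019-03-28 09:00:00 <class 'datetime.datetime'>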

Click count (see the sketch right after the list below):

  • newsUrl
  • newsId (extracted with the re regular-expression module)
  • clickUrl(str.format(newsId))
  • requests.get(clickUrl)
  • newsClick (string handling, or a regular expression)
  • int()
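
The same pipeline, written out as a standalone sketch (it assumes, like the code further down, that the counter API wraps the number in jQuery-style .html('...') calls and that the article id is the last run of digits in the URL):

import re
import requests

newsUrl = 'http://news.gzcc.cn/html/2019/xibusudi_0328/11088.html'
newsId = re.findall(r'(\d+)', newsUrl)[-1]    # last run of digits in the URL -> '11088'
clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
resClick = requests.get(clickUrl)
# keep only the number inside the last .html('...') call of the response
newsClick = int(resClick.text.split('.html')[-1].lstrip("('").rstrip("');"))
print(newsClick)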

Wrap the whole process into one simple, clear function.

Try to crawl a web page you are interested in.
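
As a starting point, the same requests + BeautifulSoup pattern works on any page; a minimal sketch (the URL below is only a placeholder, swap in a page you actually care about and adjust the selectors to its HTML):

import requests
from bs4 import BeautifulSoup

url = 'https://example.com/'   # placeholder URL
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
print(soup.title.text)                   # the page's <title>
for a in soup.select('a')[:10]:          # first few links on the page
    print(a.get('href'), a.text.strip())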

Code:

import re
import requests
from bs4 import BeautifulSoup
from datetime import datetime

# Click count
def click(url):
    newsId = re.findall(r'(\d{1,5})', url)[-1]   # last run of digits in the URL is the article id
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    resClick = requests.get(clickUrl)
    # the counter API returns jQuery snippets; keep the number inside the last .html('...') call
    newsClick = int(resClick.text.split('.html')[-1].lstrip("('").rstrip("');"))
    return newsClick

# Publication time
def newsdt(showinfo):
    newsDate = showinfo.split()[0].split(':')[1]   # date part after the '發佈時間:' label
    newsTime = showinfo.split()[1]                 # time part, e.g. '09:00:00'
    newsDT = newsDate + ' ' + newsTime
    dt = datetime.strptime(newsDT, '%Y-%m-%d %H:%M:%S')   # convert to datetime
    return dt

# News information
def news(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsTitle = soup.select('.show-title')[0].text   # title
    showinfo = soup.select('.show-info')[0].text
    author = showinfo.split()[2]                     # author
    auditor = showinfo.split()[3]                    # auditor
    source = showinfo.split()[4]                     # source
    newsDT = newsdt(showinfo)                        # publication time
    newsClick = click(url)                           # click count
    newsInfo = (newsTitle, newsDT, author, auditor, source, newsClick)
    print(*newsInfo)
    return newsInfo

url='http://news.gzcc.cn/html/2019/xibusudi_0328/11088.html'
news(url)

Run result:
