Assignment source: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE1/homework/2881
1. Briefly explain how a web crawler works
A crawler fetches data from web pages by issuing access requests: it sends an HTTP request for a page and extracts the data it needs from the response that comes back.
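As a concrete picture of that answer, here is a minimal sketch of the request-then-extract cycle (assuming requests and beautifulsoup4 are installed; printing the link texts is just an illustration, not part of the assignment):

import requests
from bs4 import BeautifulSoup

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'   # example target page
resp = requests.get(url)                           # 1. send an HTTP request
resp.encoding = 'utf-8'                            # 2. decode the response body
soup = BeautifulSoup(resp.text, 'html.parser')     # 3. parse the returned HTML
for a in soup.select('a'):                         # 4. extract data (here: all link texts)
    print(a.text)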
2. Understand the crawler development process
1). Briefly describe how a browser works;
The URL is parsed and a DNS lookup resolves the domain name to an IP address; the browser opens a network connection and sends an HTTP request; the HTTP message travels to the server; the server receives the request, handles it (for example through an MVC framework), and returns the response data; the client receives the data; the browser then loads and renders the page and finally paints the web page you see.
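To make the DNS and request steps tangible, here is a rough sketch of just the network side (rendering is not shown; the raw HTTP/1.1 request below is an illustrative assumption, not something the assignment requires):

import socket

host = 'news.gzcc.cn'               # URL parsing yields the host name
ip = socket.gethostbyname(host)     # DNS lookup: domain name -> IP address
print('resolved', host, '->', ip)

# Open a TCP connection and send a raw HTTP/1.1 request, roughly what the browser
# does before it starts loading and rendering the returned page.
request = b'GET / HTTP/1.1\r\nHost: news.gzcc.cn\r\nConnection: close\r\n\r\n'
with socket.create_connection((ip, 80), timeout=5) as s:
    s.sendall(request)
    reply = b''
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        reply += chunk
print(reply.split(b'\r\n', 1)[0])   # the status line of the server's response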
2). Use the requests library to fetch website data;
requests.get(url) fetches the HTML of the campus news index page:
import requests

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
html = requests.get(url)    # fetch the campus news index page
html.encoding = 'utf-8'     # decode the response body as UTF-8
print(html.text)            # print the raw HTML
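A slightly more defensive variant of the same fetch (a sketch only; the timeout, status check and encoding guess are additions, not part of the assignment code):

import requests

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
html = requests.get(url, timeout=10)      # fail fast instead of hanging
html.raise_for_status()                   # raise on a 4xx/5xx status code
html.encoding = html.apparent_encoding    # let requests guess the encoding from the body
print(html.text[:500])                    # show only the first 500 characters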
3). Understand web pages
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>做業一</title>
    <style>
    </style>
</head>
<body>
    <form>
        <div id="name" class="content"><label>idddd</label><input type="text"></div>
        <div id="password" class="content"><label>passssword</label><input type="text"></div>
    </form>
    <button>德瑪西亞</button>
    <button>拿錘子的人</button>
    <button>已經作出了選擇</button>
    <button>獵殺陷入黑暗的</button>
</body>
</html>
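To connect the id and class attributes above to the selectors used in the next step, here is a small sketch that parses a shortened copy of this page with Beautiful Soup (the variable names are mine):

from bs4 import BeautifulSoup

page = '''
<form>
  <div id="name" class="content"><label>idddd</label><input type="text"></div>
  <div id="password" class="content"><label>passssword</label><input type="text"></div>
</form>
<button>德瑪西亞</button>
<button>拿錘子的人</button>
'''

soup = BeautifulSoup(page, 'html.parser')
print(soup.select('#name')[0].label.text)         # select by id    -> idddd
print(len(soup.select('.content')))               # select by class -> 2
print([b.text for b in soup.select('button')])    # select by tag name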
3. Extract the title, publication time, publishing unit, author, click count, body text and other information from one campus news article
import requests
from bs4 import BeautifulSoup
from datetime import datetime

url = 'http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0320/11029.html'
html = requests.get(url)
html.encoding = 'utf-8'
soup = BeautifulSoup(html.text, 'html.parser')

# Title and the "show-info" line that carries the time, unit and author fields.
title = soup.select('.show-title')[0].text
info = soup.select('.show-info')[0].text.split()

# Publication time: the first two fields, with the leading label stripped off.
time = ' '.join(info[0:2])[5:]
time = datetime.strptime(time, '%Y-%m-%d %H:%M:%S')

add = str(info[4])      # publishing unit
writer = str(info[2])   # author

# The click count comes from a separate counter API; the id must match the article.
DingUrl = 'http://oa.gzcc.cn/api.php?op=count&id=11029&modelid=80'
ding = int(requests.get(DingUrl).text.split('.html')[-1][2:-3])
Ding = '點擊次數:{}'.format(ding)

Nr = soup.select('#content')[0].text.split()   # article body paragraphs

print(title)
print('發佈時間:{}'.format(time))
print(add)
print(writer)
print(Ding)
print(Nr[0] + '\n' + Nr[2] + '\n' + Nr[4] + '\n' + Nr[6])
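The click count is the only field that needs a second request, so it is a natural candidate for a helper. Here is a sketch of wrapping that step in a reusable function (the function name and the regular expression are mine, and the response format is assumed from the string slicing used above):

import re
import requests

def get_click_count(article_id):
    # Query the OA counter API for one article and pull the number out of the
    # JavaScript snippet it returns (assumed to end with something like .html('2773');).
    api = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(article_id)
    text = requests.get(api).text
    match = re.search(r"\.html\('(\d+)'\)", text)
    return int(match.group(1)) if match else 0

print(get_click_count(11029))   # same article id as the news page above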