I'm new to Python and have only a superficial understanding of web scraping. I happened to need some data for an experiment, so the target of this crawl is the planting-techniques articles under the science-and-technology section of the China Agricultural Information Network (http://www.agri.cn/kj/syjs/zzjs/).
First, analyze the site structure: article titles are displayed as a list, and clicking a title opens the article body, as shown in the figure:
Looking at the page source, the URL pattern shown in Figure 1 is easy to spot: the first page is http://www.agri.cn/kj/syjs/zzjs/index.htm, and subsequent pages are http://www.agri.cn/kj/syjs/zzjs/index_1.htm, http://www.agri.cn/kj/syjs/zzjs/index_2.htm, and so on. A simple loop can therefore generate the URLs for as many pages as needed.
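The paging logic described above can be sketched in a few lines. Note that the page count of 10 is an assumption for illustration; adjust it to however many list pages the site actually has.

```python
# Build the list of page URLs: the first page is index.htm,
# later pages follow the pattern index_1.htm, index_2.htm, ...
base_url = 'http://www.agri.cn/kj/syjs/zzjs/'
page_urls = [base_url + 'index.htm']
page_urls += [base_url + 'index_%d.htm' % i for i in range(1, 10)]

print(page_urls[0])  # http://www.agri.cn/kj/syjs/zzjs/index.htm
print(page_urls[1])  # http://www.agri.cn/kj/syjs/zzjs/index_1.htm
```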
Next, get each article's title and URL. Inspecting the page source shows that all the titles are contained in the following structure:
With this structure known, BeautifulSoup can extract the required a-tag links and titles and store them in a dictionary, using the title as the key and the URL as its value.
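A minimal sketch of this title-to-URL mapping, run on a made-up HTML fragment (the class name `link03` matches the real page; the hrefs and titles here are invented for illustration):

```python
from bs4 import BeautifulSoup

html = '''
<table>
  <tr><td><a class="link03" href="./t20200101_100.htm">Article one</a></td></tr>
  <tr><td><a class="link03" href="./t20200102_101.htm">Article two</a></td></tr>
</table>
'''
base_url = 'http://www.agri.cn/kj/syjs/zzjs'
soup = BeautifulSoup(html, 'html.parser')

articles = {}
for a in soup.find_all('a', class_='link03'):
    # hrefs are relative ('./t...'); drop the leading '.' and
    # prepend the section's base URL to get an absolute link
    articles[a.string] = base_url + a.get('href')[1:]
```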
Finally, iterate over the dictionary, fetch and parse each linked page to extract the article information, and save everything to the database. The code is as follows:
# -*- coding: UTF-8 -*-
from bs4 import BeautifulSoup
import requests
import pymysql
import re

# -------- build the list of page URLs ----------
def set_download_urls():
    downloadUrls = []
    baseUrl = 'http://www.agri.cn/kj/syjs/zzjs/'
    downloadUrls.append('http://www.agri.cn/kj/syjs/zzjs/index.htm')
    for i in range(1, 10):
        url = baseUrl + 'index_' + str(i) + '.htm'
        downloadUrls.append(url)
    return downloadUrls

# -------- fetch each list page and locate the title table ----------
def get_download_tables():
    downloadUrls = set_download_urls()
    tables = []
    for url in downloadUrls:
        req = requests.get(url)
        req.encoding = 'utf-8'
        html = req.text
        table_bf = BeautifulSoup(html, 'html.parser')
        tables.append(table_bf.find('table', width=500, align='center'))
    return tables

# --------- collect the article links from each table ------------
def get_download_url():
    downloadTables = get_download_tables()
    articles = []
    for each in downloadTables:
        articles.append(each.find_all('a', class_='link03'))
    return articles

# --------- map article title -> absolute URL ------------
def read_article_info():
    articles = get_download_url()
    baseUrl = 'http://www.agri.cn/kj/syjs/zzjs'
    article_dict = {}
    for each in articles:
        for item in each:
            # hrefs are relative ('./t...htm'); drop the leading '.'
            article_dict[item.string] = baseUrl + item.get('href')[1:]
    return article_dict

# --------- save one article to MySQL -----------
def save_mysql(title, date, source, content, tech_code, info_code):
    db = pymysql.connect(host='localhost', user='root',
                         password='123456', database='persona')
    cursor = db.cursor()
    # parameterized SQL avoids quoting problems and SQL injection
    sql = ('INSERT INTO information_stock '
           '(title, date, source, content, tech_code, info_code) '
           'VALUES (%s, %s, %s, %s, %s, %s)')
    try:
        cursor.execute(sql, (title, date, source, content, tech_code, info_code))
        db.commit()
        print("write success")
    except Exception as e:
        db.rollback()
        print("write fail")
        print(e)
    finally:
        db.close()

# --------- fetch one article page, parse it, and save ---------------
def get_content(title, url):
    print(title + '---->' + url)
    req = requests.get(url)
    req.encoding = 'utf-8'
    html = req.text
    table_bf = BeautifulSoup(html, 'html.parser')
    article = table_bf.find('table', width=640)
    # ---- article body: collect every <p> in the article table ----
    # alternative selectors that also work on some pages:
    # content = article.find(class_='TRS_Editor').get_text()
    # content = article.find('div', attrs={'id': re.compile("TRS_")}).select("p")
    content = article.select("p")
    # the info line holds the publish date and the source
    info = article.find(class_='hui_12-12').get_text()
    date = info[3:19]
    source = info.split(":")[3]
    text = ""
    for item in content:
        text += item.get_text()
        text += "\n"
    save_mysql(title, date, source, text, 0, 0)

# -------- crawl and save every article -----------
def save_data():
    articles = read_article_info()
    for key, value in articles.items():
        get_content(key, value)

save_data()
The crawl results stored in the database: