I've recently fallen in love with Python and really enjoy using it for practice, so, building on the previous post, I've polished the code into a crawler that collects the full-site data of a WordPress-powered site. This blog also runs WordPress, so I used 吾八哥網 (http://www.5bug.wang/) as the practice target. A quick look at the crawler's approach: start from the homepage, grab the href attributes of its links, and on each sub-page keep looking for more href attributes, so recursion is the natural first thought. A tiny sketch of that idea comes first, then the full code with a few simple comments:
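(A minimal sketch of the recursive idea, just to make the approach concrete; the names crawl, visited and max_pages are illustrative only, and the full program below actually uses a worklist set rather than literal recursion.)

# Illustrative sketch only: the recursive idea, not the code used below.
import re
import urllib.request

import bs4

visited = set()   # pages already seen, to avoid crawling the same URL twice

def crawl(url, pattern, max_pages=50):
    # stop once a page has been visited or the page budget is used up
    if url in visited or len(visited) >= max_pages:
        return
    visited.add(url)
    html = urllib.request.urlopen(url).read().decode('utf8')
    soup = bs4.BeautifulSoup(html, 'html.parser')
    # follow every link whose href matches the article pattern
    for link in soup.find_all('a', href=re.compile(pattern)):
        crawl(link['href'], pattern, max_pages)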
import re
import bs4
import urllib.request

url_home = 'http://www.5bug.wang/'            # the site to crawl
url_pattern = url_home + r'([\s\S]*)\.html'   # regex matching article pages; this could be written more precisely
url_set = set()                               # URLs waiting to be crawled
url_cache = set()                             # URLs already crawled
url_count = 0
url_maxCount = 1000                           # maximum number of pages to collect

# collect href attributes that match the article pattern
def spiderURL(url, pattern):
    html = urllib.request.urlopen(url).read().decode('utf8')
    soup = bs4.BeautifulSoup(html, 'html.parser')
    links = soup.find_all('a', href=re.compile(pattern))
    for link in links:
        if link['href'] not in url_cache:
            url_set.add(link['href'])
    return soup

# the crawling loop; exception handling still needs work, and sites with
# anti-scraping measures also need request headers set; we'll learn that next time
spiderURL(url_home, url_pattern)

while len(url_set) != 0:
    try:
        url = url_set.pop()
        url_cache.add(url)
        soup = spiderURL(url, url_pattern)
        page = soup.find('div', {'class': 'content'})
        title = page.find('h1').get_text()
        autor = page.find('h4').get_text()
        content = page.find('article').get_text()
        print(title, autor, url)
    except Exception as e:
        print(url, e)
        continue
    else:
        url_count += 1
    finally:
        if url_count == url_maxCount:
            break

print('Collected ' + str(url_count) + ' pages in total')
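On the headers mentioned in the comment above: one rough sketch of what that could look like (not something covered in this post, and the User-Agent string is only a placeholder) is to wrap the URL in a urllib.request.Request that carries browser-like headers, and have spiderURL open that instead of the bare URL.

# Hedged sketch: fetch a page with a custom User-Agent header.
# spiderURL could call fetch(url) instead of urllib.request.urlopen(url).
import urllib.request

def fetch(url):
    req = urllib.request.Request(
        url,
        headers={'User-Agent': 'Mozilla/5.0 (compatible; demo-spider)'}  # placeholder UA string
    )
    return urllib.request.urlopen(req).read().decode('utf8')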