This crawler was found on outofmemory. It is only 100 lines long and scrapes Taobao product listings, including the product name, seller id, area, price and other fields, output as JSON; the author says he once collected ten million records with it.

Impressed by what such a small crawler can do, I got curious and analysed it, and found that the idea behind it is remarkably simple. Besides marvelling at how powerful Python is, I want to record what I learned from the analysis as a reference for later.

Whether the crawler still works today is anyone's guess, but that doesn't matter; it serves purely as a learning example.

The code can be fetched from the original page; in case that link goes dead, it is pasted below:
import time
import leveldb
from urllib.parse import quote_plus
import re
import json
import itertools
import sys
import requests
from queue import Queue
from threading import Thread

# Taobao mobile search API: the placeholders are the query keyword and the page number
URL_BASE = 'http://s.m.taobao.com/search?q={}&n=200&m=api4h5&style=list&page={}'

def url_get(url):
    # fetch a URL with browser-like headers and return the response body as text
    # print('GET ' + url)
    header = dict()
    header['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
    header['Accept-Encoding'] = 'gzip,deflate,sdch'
    header['Accept-Language'] = 'en-US,en;q=0.8'
    header['Connection'] = 'keep-alive'
    header['DNT'] = '1'
    #header['User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36'
    header['User-Agent'] = 'Mozilla/12.0 (compatible; MSIE 8.0; Windows NT)'
    return requests.get(url, timeout = 5, headers = header).text

def item_thread(cate_queue, db_cate, db_item):
    # consumer: take a category from the queue, crawl all of its result pages and store the items
    while True:
        try:
            cate = cate_queue.get()
            post_exist = True
            try:
                state = db_cate.Get(cate.encode('utf-8'))
                if state != b'OK':
                    post_exist = False
            except:
                post_exist = False
            if post_exist == True:
                # (the original printed an undefined variable `title` here, which raised a NameError)
                print('cate: {} already exists ... Ignore'.format(cate))
                continue
            db_cate.Put(cate.encode('utf-8'), b'crawling')
            for item_page in itertools.count(1):
                url = URL_BASE.format(quote_plus(cate), item_page)
                # retry the request up to five times before giving up
                for tr in range(5):
                    try:
                        items_obj = json.loads(url_get(url))
                        break
                    except KeyboardInterrupt:
                        quit()
                    except Exception as e:
                        if tr == 4:
                            raise e
                if len(items_obj['listItem']) == 0:
                    break
                for item in items_obj['listItem']:
                    item_obj = dict(
                        _id = int(item['itemNumId']),
                        name = item['name'],
                        price = float(item['price']),
                        query = cate,
                        category = int(item['category']) if item['category'] != '' else 0,
                        nick = item['nick'],
                        area = item['area'])
                    db_item.Put(str(item_obj['_id']).encode('utf-8'),
                                json.dumps(item_obj, ensure_ascii = False).encode('utf-8'))
                print('Get {} items from {}: {}'.format(len(items_obj['listItem']), cate, item_page))
                # any categories found in the page's navigation data are queued for later crawling
                if 'nav' in items_obj:
                    for na in items_obj['nav']['navCatList']:
                        try:
                            db_cate.Get(na['name'].encode('utf-8'))
                        except:
                            db_cate.Put(na['name'].encode('utf-8'), b'waiting')
            db_cate.Put(cate.encode('utf-8'), b'OK')
            print(cate, 'OK')
        except KeyboardInterrupt:
            break
        except Exception as e:
            print('An {} exception occurred'.format(e))

def cate_thread(cate_queue, db_cate):
    # producer: every 10 seconds, push every category that is not yet marked OK into the queue
    while True:
        try:
            for key, value in db_cate.RangeIter():
                if value != b'OK':
                    print('CateThread: put {} into queue'.format(key.decode('utf-8')))
                    cate_queue.put(key.decode('utf-8'))
            time.sleep(10)
        except KeyboardInterrupt:
            break
        except Exception as e:
            print('CateThread: {}'.format(e))

if __name__ == '__main__':
    db_cate = leveldb.LevelDB('./taobao-cate')
    db_item = leveldb.LevelDB('./taobao-item')
    orig_cate = '正裝'
    # seed the category database with the starting category
    try:
        db_cate.Get(orig_cate.encode('utf-8'))
    except:
        db_cate.Put(orig_cate.encode('utf-8'), b'waiting')
    cate_queue = Queue(maxsize = 1000)
    cate_th = Thread(target = cate_thread, args = (cate_queue, db_cate))
    cate_th.start()
    item_th = [Thread(target = item_thread, args = (cate_queue, db_cate, db_item)) for _ in range(5)]
    for item_t in item_th:
        item_t.start()
    cate_th.join()
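To actually run it you need Python 3, the requests library, and a leveldb binding that provides the LevelDB/Get/Put/RangeIter API used above; the py-leveldb style package published on PyPI simply as leveldb matches, so pip install requests leveldb should presumably cover the dependencies, though that is my reading of the imports rather than anything stated in the original post.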
Code that is only a hundred lines long doesn't take much effort to understand, but a few interesting takeaways are still worth sharing.

Opened in vim with the fold feature, the code clearly splits into an import section, three user-defined functions and a main block, so you can start reading straight from main.
main creates two databases (or opens them if they already exist), db_cate and db_item, and defines the product category that crawling starts from, orig_cate. It then tries to look up orig_cate in db_cate; if that category is not there yet, it is inserted with its state set to waiting. leveldb is simply a key-value database and is very convenient to use.
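As a minimal sketch of that key-value workflow, using the same leveldb binding and the same calls the script relies on (the database path here is made up for illustration):

    import leveldb

    db = leveldb.LevelDB('./state-demo')         # opens the database, creating it if needed
    db.Put('正裝'.encode('utf-8'), b'waiting')   # record a category as not yet crawled
    state = db.Get('正裝'.encode('utf-8'))       # read the state back (raises if the key is absent)
    for key, value in db.RangeIter():            # walk every stored key/value pair
        print(key, value)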
Next a queue, cate_queue, is created, then a category thread cate_th, which runs the user-defined function cate_thread with the queue and the category database as its arguments. After that come five item threads item_th, which run the item_thread function; its arguments are the queue and both databases.
First, cate_thread: this function repeatedly iterates over the category database db_cate, pulls out each category name, checks whether that category has already been crawled, and if not, puts it into the category queue cate_queue.
Then item_thread: it takes a category from the category queue cate_queue and checks its state in the category database. If the state is OK, it moves on to the next category; if not, the category is marked crawling, and the category plus a running page number are combined into a URL. Each page is repeatedly fetched until it succeeds, parsed, and the items are saved into db_item; once all pages are done the category is marked OK in db_cate, meaning it is finished. At the same time, any category names found on the page are added to the db_cate database.
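To make that concrete: with the category 正裝 and page 3, the URL that item_thread builds, and the kind of record it stores, look roughly like this (only the field names come from the code; the record values are invented placeholders):

    from urllib.parse import quote_plus

    URL_BASE = 'http://s.m.taobao.com/search?q={}&n=200&m=api4h5&style=list&page={}'
    print(URL_BASE.format(quote_plus('正裝'), 3))
    # http://s.m.taobao.com/search?q=%E6%AD%A3%E8%A3%9D&n=200&m=api4h5&style=list&page=3

    # a stored db_item entry, keyed by the item id, would then look something like:
    # key:   b'41234567890'
    # value: {"_id": 41234567890, "name": "<product title>", "price": 299.0,
    #         "query": "正裝", "category": 0, "nick": "<seller nick>", "area": "<city>"}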
The structure of this crawler is very clear: one database holds the state of each category, another holds the scraped records, and a queue serves as the channel between the threads; storage is key-value, and page fetching is done with requests. With this structure as a reference, plenty of crawlers can be written.
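To illustrate that reusable structure, here is a stripped-down sketch of the same pattern: a state store, a result store, a queue, one producer thread and several worker threads. The in-memory dicts, the lock and all the names are mine, chosen only to keep the sketch self-contained and runnable; the real crawler keeps its state in leveldb as shown above.

    import time
    from queue import Queue
    from threading import Thread, Lock

    # in-memory stand-ins for the two leveldb databases, purely to keep this sketch self-contained
    state_db = {}    # task -> 'waiting' / 'crawling' / 'OK'
    result_db = {}   # task -> scraped data
    db_lock = Lock()

    def producer(queue):
        # like cate_thread: keep feeding every task that is not yet 'OK' into the queue
        while True:
            with db_lock:
                pending = [task for task, state in state_db.items() if state != 'OK']
            for task in pending:
                queue.put(task)
            time.sleep(1)

    def worker(queue):
        # like item_thread: claim a task, do the fetching/parsing, store the result, mark it done
        while True:
            task = queue.get()
            with db_lock:
                if state_db.get(task) == 'OK':
                    continue                    # another worker already finished this task
                state_db[task] = 'crawling'
            result = 'scraped data for {}'.format(task)   # real fetching and parsing would go here
            with db_lock:
                result_db[task] = result
                state_db[task] = 'OK'           # newly discovered tasks would be added as 'waiting' here

    if __name__ == '__main__':
        state_db['seed-task'] = 'waiting'       # the equivalent of orig_cate
        queue = Queue(maxsize = 100)
        Thread(target = producer, args = (queue,), daemon = True).start()
        for _ in range(5):
            Thread(target = worker, args = (queue,), daemon = True).start()
        time.sleep(3)                           # let the demo run for a moment
        print(state_db, result_db)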