Tornado 4.3 Documentation Translation: User's Guide - Example - A Concurrent Web Spider

Translator's Note

Tornado 4.3 was released on November 6, 2015. It officially supports Python 3.5's async/await keywords, while the same codebase still builds and runs on older CPython versions, which is undoubtedly a step forward. It is also the last release to support Python 2.6 and Python 3.2; compatibility with them will be removed in later versions. There is still no Chinese documentation for Tornado 4.3 on the web, so to let more people discover and learn it I started this translation project. I hope interested readers will join the translation; the project is tornado-zh on Github, and the translated documentation can be read directly on Read the Docs. Issues and PRs are welcome.
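As a quick illustration of the async/await support mentioned above, here is a minimal sketch of my own (not part of the translated documentation), assuming Python 3.5 and Tornado 4.3: a native async def coroutine can replace @gen.coroutine, and await can replace yield. The function name fetch_page is hypothetical.

    from tornado import httpclient, ioloop

    # Hypothetical example: a native ``async def`` coroutine awaiting the
    # Future returned by AsyncHTTPClient.fetch (supported since Tornado 4.3).
    async def fetch_page(url):
        response = await httpclient.AsyncHTTPClient().fetch(url)
        return len(response.body)

    if __name__ == '__main__':
        n = ioloop.IOLoop.current().run_sync(
            lambda: fetch_page('http://www.tornadoweb.org/en/stable/'))
        print('fetched %d bytes' % n)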

Example - A Concurrent Web Spider

Tornado's tornado.queues module implements an asynchronous producer/consumer pattern for coroutines, analogous to the pattern the Python standard library's queue module implements for threads.
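A minimal producer/consumer sketch might look like the following (my own illustration, assuming Tornado 4.3; it is not part of the official example further down):

    from tornado import gen, ioloop, queues

    q = queues.Queue(maxsize=2)

    @gen.coroutine
    def producer():
        for item in range(5):
            yield q.put(item)      # pauses whenever the queue is full
            print('produced %s' % item)

    @gen.coroutine
    def consumer():
        while True:
            item = yield q.get()   # pauses whenever the queue is empty
            try:
                print('consumed %s' % item)
            finally:
                q.task_done()

    @gen.coroutine
    def main():
        consumer()                 # start the consumer in the background
        yield producer()           # wait until the producer has put every item
        yield q.join()             # wait until the consumer has processed them all

    if __name__ == '__main__':
        ioloop.IOLoop.current().run_sync(main)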

A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.
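Both operations also accept an optional timeout. The small sketch below (mine, not from the documentation, assuming Tornado 4.3) shows a put on a full queue timing out instead of pausing forever:

    from datetime import timedelta
    from tornado import gen, ioloop, queues

    @gen.coroutine
    def demo():
        q = queues.Queue(maxsize=1)
        yield q.put('a')           # fits; the queue is now full
        try:
            # A second put cannot proceed until someone gets an item; with a
            # timeout it raises gen.TimeoutError instead of pausing indefinitely.
            yield q.put('b', timeout=timedelta(seconds=1))
        except gen.TimeoutError:
            print('queue full, put timed out')

        item = yield q.get()       # -> 'a'; there is room again
        print('got %s' % item)
        yield q.put('b')           # completes immediately

    ioloop.IOLoop.current().run_sync(demo)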

A Queue maintains a count of unfinished tasks, which begins at zero. Queue.put increments the count; Queue.task_done decrements it.
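This counter is what Queue.join waits on. A tiny sketch (again my own illustration, assuming Tornado 4.3) makes the bookkeeping explicit:

    from tornado import gen, ioloop, queues

    @gen.coroutine
    def demo():
        q = queues.Queue()
        yield q.put('a')           # unfinished tasks: 1
        yield q.put('b')           # unfinished tasks: 2

        yield q.get()              # getting an item does NOT decrement the counter
        q.task_done()              # unfinished tasks: 1
        yield q.get()
        q.task_done()              # unfinished tasks: 0

        yield q.join()             # resolves as soon as the counter is back to zero
        print('all work accounted for')

    ioloop.IOLoop.current().run_sync(demo)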

In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page, it parses the links, puts the new ones in the queue, and then calls Queue.task_done to decrement the counter once. Eventually a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue, so that worker's call to Queue.task_done decrements the counter to zero. The main coroutine, which is waiting on Queue.join, is then unpaused and finishes.

    import time
    from datetime import timedelta
    
    try:
        from HTMLParser import HTMLParser
        from urlparse import urljoin, urldefrag
    except ImportError:
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urldefrag
    
    from tornado import httpclient, gen, ioloop, queues
    
    base_url = 'http://www.tornadoweb.org/en/stable/'
    concurrency = 10
    
    
    @gen.coroutine
    def get_links_from_url(url):
        """Download the page at `url` and parse it for links.
    
        Returned links have had the fragment after `#` removed, and have been made
        absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
        'http://www.tornadoweb.org/en/stable/gen.html'.
        """
        try:
            response = yield httpclient.AsyncHTTPClient().fetch(url)
            print('fetched %s' % url)
    
            html = response.body if isinstance(response.body, str) \
                else response.body.decode()
            urls = [urljoin(url, remove_fragment(new_url))
                    for new_url in get_links(html)]
        except Exception as e:
            print('Exception: %s %s' % (e, url))
            raise gen.Return([])
    
        raise gen.Return(urls)
    
    
    def remove_fragment(url):
        pure_url, frag = urldefrag(url)
        return pure_url
    
    
    def get_links(html):
        class URLSeeker(HTMLParser):
            def __init__(self):
                HTMLParser.__init__(self)
                self.urls = []
    
            def handle_starttag(self, tag, attrs):
                href = dict(attrs).get('href')
                if href and tag == 'a':
                    self.urls.append(href)
    
        url_seeker = URLSeeker()
        url_seeker.feed(html)
        return url_seeker.urls
    
    
    @gen.coroutine
    def main():
        q = queues.Queue()
        start = time.time()
        fetching, fetched = set(), set()
    
        @gen.coroutine
        def fetch_url():
            current_url = yield q.get()
            try:
                if current_url in fetching:
                    return
    
                print('fetching %s' % current_url)
                fetching.add(current_url)
                urls = yield get_links_from_url(current_url)
                fetched.add(current_url)
    
                for new_url in urls:
                    # Only follow links beneath the base URL
                    if new_url.startswith(base_url):
                        yield q.put(new_url)
    
            finally:
                q.task_done()
    
        @gen.coroutine
        def worker():
            while True:
                yield fetch_url()
    
        q.put(base_url)
    
        # Start workers, then wait for the work queue to be empty.
        for _ in range(concurrency):
            worker()
        yield q.join(timeout=timedelta(seconds=300))
        assert fetching == fetched
        print('Done in %d seconds, fetched %s URLs.' % (
            time.time() - start, len(fetched)))
    
    
    if __name__ == '__main__':
        import logging
        logging.basicConfig()
        io_loop = ioloop.IOLoop.current()
        io_loop.run_sync(main)