day38 Web Scraping: Scrapy + Flask Framework



Review:
Scrapy
    - Create a project
    - Create a spider
    - Write the spider
        - the spider class
        - start_urls = ['http://www.xxx.com']
        - def parse(self, response):

              yield Item objects
              yield Request objects

    - pipeline
        - process_item
          @classmethod
        - from_crawler
        - open_spider
        - close_spider
        configuration (settings)

    - Request object("url", callback)
    - execution

High-performance basics:
    - multithreading [IO-bound] and multiprocessing [CPU-bound]
    - prefer threads where possible:
        one thread (Gevent), based on coroutines:
            - coroutines, greenlet
            - switch whenever IO is encountered
        one thread (Twisted, Tornado), based on an event loop:
            - IO multiplexing
            - socket, setblocking(False)   (see the sketch below)
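A minimal sketch of the event-loop idea above (not course code; the target hosts are only examples): one thread drives several HTTP requests with non-blocking sockets plus select()-based IO multiplexing.

    import select
    import socket

    hosts = ['www.python.org', 'www.baidu.com']     # example targets
    conns = {}
    for host in hosts:
        s = socket.socket()
        s.setblocking(False)                        # never let connect/recv block this thread
        try:
            s.connect((host, 80))
        except BlockingIOError:                     # expected for a non-blocking connect
            pass
        conns[s] = host

    to_send = set(conns)
    while conns:
        rlist, wlist, _ = select.select(list(conns), list(to_send), [], 5)
        for s in wlist:                             # connected and writable: send the request once
            s.send(('GET / HTTP/1.0\r\nHost: %s\r\n\r\n' % conns[s]).encode())
            to_send.discard(s)
        for s in rlist:                             # data arrived (or the peer closed)
            data = s.recv(8192)
            if not data:                            # peer closed: this response is complete
                print(conns[s], 'done')
                to_send.discard(s)
                s.close()
                del conns[s]
            # a real client would buffer and parse `data` here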

Today's topics:
- Scrapy
    - cookie handling
    - pipeline
    - middleware
    - extensions
    - custom commands
    - others
    - scrapy-redis
- Tornado and Flask
    - basic workflow



Details:
1. Scrapy

    - start_requests
        - may return an iterable
        - or a generator

          Internally the return value is wrapped with iter(); see
              from scrapy.crawler import Crawler
              Crawler.crawl

        def start_requests(self):
            for url in self.start_urls:
                yield Request(url=url, callback=self.parse)
            # return [Request(url=url, callback=self.parse), ]
    - cookie
        from scrapy.http.cookies import CookieJar
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
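
        A sketch of how this is typically used inside a spider (not the course's solution; the spider name, URLs and form fields below are illustrative): harvest the cookies in one callback, then send them back with a follow-up request.

            import scrapy
            from scrapy.http import Request
            from scrapy.http.cookies import CookieJar

            class LoginSpider(scrapy.Spider):
                name = 'login_demo'                        # hypothetical spider
                start_urls = ['http://example.com/']
                cookie_dict = {}

                def parse(self, response):
                    cookie_jar = CookieJar()
                    cookie_jar.extract_cookies(response, response.request)
                    # _cookies is the jar's internal {domain: {path: {name: Cookie}}} mapping
                    for domain, paths in cookie_jar._cookies.items():
                        for path, names in paths.items():
                            for name, cookie in names.items():
                                self.cookie_dict[name] = cookie.value
                    # carry the harvested cookies into the next request
                    yield Request(
                        url='http://example.com/login',    # hypothetical login endpoint
                        method='POST',
                        body='user=foo&pwd=bar',           # hypothetical form data
                        headers={'Content-Type': 'application/x-www-form-urlencoded'},
                        cookies=self.cookie_dict,
                        callback=self.check_login,
                    )

                def check_login(self, response):
                    print(response.text)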

    - pipeline
        - 5 methods
        - process_item
            - return item
            - raise DropItem()
          (a sketch of a full pipeline follows below)
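
        A minimal sketch of a pipeline implementing the five hooks above (the file path, the FILE_PATH settings key and the 'title' field are made up for illustration):

            from scrapy.exceptions import DropItem

            class FilePipeline(object):

                def __init__(self, path):
                    self.path = path
                    self.f = None

                @classmethod
                def from_crawler(cls, crawler):
                    # read custom settings here; FILE_PATH is a hypothetical key
                    path = crawler.settings.get('FILE_PATH', 'items.txt')
                    return cls(path)

                def open_spider(self, spider):
                    # called once when the spider is opened
                    self.f = open(self.path, 'a', encoding='utf-8')

                def close_spider(self, spider):
                    # called once when the spider is closed
                    self.f.close()

                def process_item(self, item, spider):
                    if not item.get('title'):              # 'title' is a hypothetical item field
                        raise DropItem('missing title')    # stops later pipelines from seeing this item
                    self.f.write(str(dict(item)) + '\n')
                    return item                            # hand the item on to the next pipeline

        It would be enabled with something like ITEM_PIPELINES = {'sp2.pipelines.FilePipeline': 300} in settings.py (module path assumed to match the sp2 project used elsewhere in these notes).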

    - Dedup rules
        DUPEFILTER_CLASS = 'sp2.my_filter.MyDupeFilter'

        from scrapy.utils.request import request_fingerprint

        class MyDupeFilter(object):
            def __init__(self):
                self.visited = set()

            @classmethod
            def from_settings(cls, settings):
                return cls()

            def request_seen(self, request):
                fp = request_fingerprint(request)
                if fp in self.visited:
                    return True
                self.visited.add(fp)

            def open(self):  # can return deferred
                pass

            def close(self, reason):  # can return a deferred
                pass

            def log(self, request, spider):  # log that a request has been filtered
                pass

        from scrapy.utils.request import request_fingerprint
        from scrapy.http import Request


        obj1 = Request(url='http://www.baidu.com?a=1&b=2', headers={'Content-Type': 'application/text'}, callback=lambda x: x)
        obj2 = Request(url='http://www.baidu.com?b=2&a=1', headers={'Content-Type': 'application/json'}, callback=lambda x: x)

        v1 = request_fingerprint(obj1, include_headers=['Content-Type'])
        print(v1)

        v2 = request_fingerprint(obj2, include_headers=['Content-Type'])
        print(v2)

    - Custom commands
        - a commands directory containing, e.g.:
            xx.py
                class Foo(ScrapyCommand)
                    a run method

        - settings
            COMMANDS_MODULE = "sp2.<directory>"

        - run it with: scrapy xx
          (a concrete sketch follows below)
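
        As a concrete sketch (names and paths are illustrative; a common use case is a "crawlall" command that starts every spider in the project), the command file could be e.g. sp2/commands/crawlall.py:

            from scrapy.commands import ScrapyCommand

            class Command(ScrapyCommand):
                requires_project = True

                def syntax(self):
                    return '[options]'

                def short_desc(self):
                    return 'Run all spiders in the project'

                def run(self, args, opts):
                    spider_names = self.crawler_process.spider_loader.list()   # every spider in the project
                    for name in spider_names:
                        self.crawler_process.crawl(name, **opts.__dict__)
                    self.crawler_process.start()

        With COMMANDS_MODULE = "sp2.commands" in settings.py, "scrapy crawlall" would then show up next to the built-in commands.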

    - Downloader middleware
        - __init__
        - from_crawler
        - process_request, may return:
            - None
            - a Response
            - a Request
        - process_response
        - process_exception

        Applications:
            - customizing request headers (proxies)
            - HTTPS

        Note:
            Default proxy handling: from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware
            Two ways to set a proxy:
            - environment variables
                os.environ['xxxxxxxxxxx_proxy']
                os.environ['xxxxxxxxxxx_proxy']
                os.environ['xxxxxxxxxxx_proxy']
                os.environ['xxxxxxxxxxx_proxy']
                Set them before the program starts:
                    import os
                    os.environ['xxxxxxxxxxx_proxy'] = "sdfsdfsdfsdfsdf"
            - a middleware (see the sketch below)
                ...
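
            A sketch of the middleware option (the proxy address and the credentials are placeholders):

                import base64

                class ProxyMiddleware(object):

                    def process_request(self, request, spider):
                        # route this request through a proxy
                        request.meta['proxy'] = 'http://192.168.11.11:9999'
                        # if the proxy requires authentication, add a Proxy-Authorization header
                        creds = base64.b64encode(b'username:password').decode('ascii')
                        request.headers['Proxy-Authorization'] = 'Basic ' + creds

            Enable it with DOWNLOADER_MIDDLEWARES = {'sp2.middlewares.ProxyMiddleware': 500} in settings.py (module path assumed).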

    - Spider middleware
        class SpiderMiddleware(object):

            def __init__(self):
                pass

            @classmethod
            def from_crawler(cls, crawler):
                return cls()

            def process_spider_input(self, response, spider):
                """
                Called once the download is finished, before the response is handed to parse.
                :param response:
                :param spider:
                :return:
                """
                pass

            def process_spider_output(self, response, result, spider):
                """
                Called when the spider has finished processing and returns its results.
                :param response:
                :param result:
                :param spider:
                :return: must be an iterable containing Request or Item objects
                """
                return result

            def process_spider_exception(self, response, exception, spider):
                """
                Called when an exception is raised.
                :param response:
                :param exception:
                :param spider:
                :return: None to let later middlewares keep handling the exception;
                         an iterable of Response or Item objects to hand to the scheduler or the pipelines
                """
                return None

            def process_start_requests(self, start_requests, spider):
                """
                Called when the spider starts.
                :param start_requests:
                :param spider:
                :return: an iterable containing Request objects
                """
                return start_requests
                # return [Request(url='http://www.baidu.com'), ]

    - Custom extensions
        from scrapy import signals


        class MyExtension(object):
            def __init__(self):
                pass

            @classmethod
            def from_crawler(cls, crawler):
                obj = cls()

                crawler.signals.connect(obj.xxxxxx, signal=signals.engine_started)
                crawler.signals.connect(obj.rrrrr, signal=signals.spider_closed)

                return obj

            def xxxxxx(self, spider):
                print('open')

            def rrrrr(self, spider):
                print('close')


        Enable it in settings:
        EXTENSIONS = {
            'sp2.extend.MyExtension': 500,
        }


    - HTTPS certificates, custom certificates
        Default:
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

        Custom:
            DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
            DOWNLOADER_CLIENTCONTEXTFACTORY = "sp2.https.MySSLFactory"


            from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
            from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)


            class MySSLFactory(ScrapyClientContextFactory):
                def getCertificateOptions(self):
                    from OpenSSL import crypto
                    v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                    v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                    return CertificateOptions(
                        privateKey=v1,   # a PKey object
                        certificate=v2,  # an X509 object
                        verify=False,
                        method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                    )


    - Others: configuration

    Reference: http://www.cnblogs.com/wupeiqi/articles/6229292.html

2. scrapy-redis (pip3 install scrapy-redis)
    Requirement: 10 spiders sharing one crawl.
    Component: scrapy-redis moves the dedup rules and the scheduler into redis.
    Flow: connect to redis; with the redis scheduler in place, the dedup rule's request_seen method is called for every scheduled request.

    # Connecting to redis
    # REDIS_HOST = 'localhost'                             # hostname
    # REDIS_PORT = 6379                                    # port
    REDIS_URL = 'redis://user:pass@hostname:9001'          # connection URL (takes precedence over the settings above)
    # REDIS_PARAMS = {}                                    # redis connection parameters. Default: {'socket_timeout': 30, 'socket_connect_timeout': 30, 'retry_on_timeout': True, 'encoding': REDIS_ENCODING}
    # REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'  # Python class used for the redis connection. Default: redis.StrictRedis
    # REDIS_ENCODING = "utf-8"                             # redis encoding. Default: 'utf-8'

    # Dedup rules (a redis set)
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"



    # Scheduler
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"

    SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'  # priority queue by default; alternatives: PriorityQueue (sorted set), FifoQueue (list), LifoQueue (list)
    SCHEDULER_QUEUE_KEY = '%(spider)s:requests'            # redis key under which the scheduler stores requests
    SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"     # serializer for data stored in redis; pickle by default
    SCHEDULER_PERSIST = True                               # keep the request queue and dedup records on close. True = keep, False = flush
    SCHEDULER_FLUSH_ON_START = True                        # flush the request queue and dedup records on start. True = flush, False = keep
    SCHEDULER_IDLE_BEFORE_CLOSE = 10                       # max seconds to wait when fetching from an empty scheduler queue before giving up
    SCHEDULER_DUPEFILTER_KEY = '%(spider)s:dupefilter'     # redis key under which the dedup fingerprints are stored


    REDIS_START_URLS_AS_SET = False
    REDIS_START_URLS_KEY = '%(name)s:start_urls'




    Method 1: a regular scrapy.Spider with the redis scheduler

    REDIS_URL = 'redis://user:pass@hostname:9001'          # connection URL (takes precedence over the settings above)
    # REDIS_PARAMS = {}                                    # redis connection parameters. Default: {'socket_timeout': 30, 'socket_connect_timeout': 30, 'retry_on_timeout': True, 'encoding': REDIS_ENCODING}
    # REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'  # Python class used for the redis connection. Default: redis.StrictRedis
    # REDIS_ENCODING = "utf-8"                             # redis encoding. Default: 'utf-8'

    # Dedup rules (a redis set)
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"



    # Scheduler
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"

    SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'  # priority queue by default; alternatives: PriorityQueue (sorted set), FifoQueue (list), LifoQueue (list)
    SCHEDULER_QUEUE_KEY = '%(spider)s:requests'            # redis key under which the scheduler stores requests
    SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"     # serializer for data stored in redis; pickle by default
    SCHEDULER_PERSIST = True                               # keep the request queue and dedup records on close. True = keep, False = flush
    SCHEDULER_FLUSH_ON_START = True                        # flush the request queue and dedup records on start. True = flush, False = keep
    SCHEDULER_IDLE_BEFORE_CLOSE = 10                       # max seconds to wait when fetching from an empty scheduler queue before giving up
    SCHEDULER_DUPEFILTER_KEY = '%(spider)s:dupefilter'     # redis key under which the dedup fingerprints are stored


    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['chouti.com']
        cookies = None
        cookie_dict = {}
        start_urls = ['http://dig.chouti.com/', ]

        def index(self, response):
            print('spider got a result', response, response.url)

    Method 2: inherit from RedisSpider and feed start URLs through redis

    REDIS_START_URLS_AS_SET = False
    REDIS_START_URLS_KEY = '%(name)s:start_urls'



    from scrapy_redis.spiders import RedisSpider

    class ChoutiSpider(RedisSpider):
        name = 'chouti'
        allowed_domains = ['chouti.com']

        def index(self, response):
            print('spider got a result', response, response.url)
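
    With Method 2 the spider blocks until URLs appear under REDIS_START_URLS_KEY, so something has to push them in. A sketch using redis-py (host and port are examples; the key follows the '%(name)s:start_urls' pattern above):

        import redis

        conn = redis.StrictRedis(host='localhost', port=6379)
        # REDIS_START_URLS_AS_SET = False means the key is a redis list, so lpush works
        conn.lpush('chouti:start_urls', 'http://dig.chouti.com/')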



    ********************* Basic usage *********************
    Spider class: inherit from the scrapy_redis spider (RedisSpider).


    Reference: http://www.cnblogs.com/wupeiqi/articles/6912807.html


3. Flask web framework
    - pip3 install flask
    - What a web framework provides:
        - routing
        - views
        - template rendering

    - Flask has no socket layer of its own; it relies on werkzeug, a module that implements the WSGI protocol.
    - Two ways to register a URL (a runnable sketch combining both follows below):
        Method 1:
            @app.route('/xxxxxxx')
            def hello_world():
                return 'Hello World!'
        Method 2:
            def index():
                return "Index"

            app.add_url_rule('/index', view_func=index)
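
        Put together, a minimal runnable app using both registration styles (a sketch; the route names are illustrative):

            from flask import Flask

            app = Flask(__name__)

            @app.route('/hello')                   # style 1: decorator
            def hello_world():
                return 'Hello World!'

            def index():                           # style 2: add_url_rule
                return 'Index'

            app.add_url_rule('/index', view_func=index)

            if __name__ == '__main__':
                app.run(host='127.0.0.1', port=5000)   # then visit /hello or /index
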
    - Routing system:
        - static routes
            @app.route('/x1/')
            def hello_world():
                return 'Hello World!'

        - dynamic routes
            @app.route('/user/<username>')
            @app.route('/post/<int:post_id>')
            @app.route('/post/<float:post_id>')
            @app.route('/post/<path:path>')
            @app.route('/login', methods=['GET', 'POST'])
            @app.route('/xx/<int:nid>')
            def hello_world(nid):
                return 'Hello World!' + str(nid)

        - custom regex converters
            @app.route('/index/<regex("\d+"):nid>')
            def index(nid):
                return 'Index'

    - Views
    - Templates
    - message
    - Middleware
    - Session
        - default: session data kept in a signed cookie
        - third party: Flask-Session
            redis:      RedisSessionInterface
            memcached:  MemcachedSessionInterface
            filesystem: FileSystemSessionInterface
            mongodb:    MongoDBSessionInterface
            sqlalchemy: SqlAlchemySessionInterface
    - Blueprints (splitting the app into folders)
    - Installing third-party components:
        - Session: Flask-Session
        - form validation: WTForms
        - ORM: SQLAlchemy

    Reference: http://www.cnblogs.com/wupeiqi/articles/7552008.html

4. Tornado
    - pip3 install tornado

    Reference: http://www.cnblogs.com/wupeiqi/articles/5702910.html
    Class code: https://github.com/liyongsan/git_class/tree/master/day38
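
    The Tornado part above is only a pointer; for reference, a minimal hello-world app using the standard Tornado API (a sketch, not taken from the class code):

        import tornado.ioloop
        import tornado.web

        class MainHandler(tornado.web.RequestHandler):
            def get(self):
                self.write("Hello, world")

        application = tornado.web.Application([
            (r"/index", MainHandler),
        ])

        if __name__ == "__main__":
            application.listen(8888)                    # listen on port 8888
            tornado.ioloop.IOLoop.current().start()     # start the event loop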
