In Scrapy, the URLs in `start_urls` are processed by the `start_requests` method, whose implementation looks like this:
```python
# This is the Scrapy source code
def start_requests(self):
    cls = self.__class__
    if method_is_overridden(cls, Spider, 'make_requests_from_url'):
        warnings.warn(
            "Spider.make_requests_from_url method is deprecated; it "
            "won't be called in future Scrapy releases. Please "
            "override Spider.start_requests method instead (see %s.%s)." % (
                cls.__module__, cls.__name__
            ),
        )
        for url in self.start_urls:
            yield self.make_requests_from_url(url)
    else:
        for url in self.start_urls:
            yield Request(url, dont_filter=True)
```
Accordingly, if a URL in `start_urls` can only be accessed after logging in, you need to override the `start_requests` method and attach the cookies to the request manually.
Test account: noobpythoner zhoudawei123
```python
import scrapy
import re

class Login1Spider(scrapy.Spider):
    name = 'login1'
    allowed_domains = ['github.com']
    start_urls = ['https://github.com/NoobPythoner']  # this page can only be viewed after logging in

    def start_requests(self):  # override the start_requests method
        # this cookies_str is captured from the browser (packet capture)
        cookies_str = '...'  # obtained via packet capture
        # convert cookies_str into cookies_dict
        cookies_dict = {i.split('=')[0]: i.split('=')[1] for i in cookies_str.split('; ')}
        yield scrapy.Request(
            self.start_urls[0],
            callback=self.parse,
            cookies=cookies_dict
        )

    def parse(self, response):  # verify login by matching the username with a regex
        # the regex matches the GitHub username
        result_list = re.findall(r'noobpythoner|NoobPythoner', response.body.decode())
        print(result_list)
```
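The one-line dict comprehension above breaks if a cookie value itself contains `=` (common with base64-encoded session values). A minimal standalone sketch of the same conversion, using a hypothetical helper name and splitting only on the first `=`:

```python
def cookies_str_to_dict(cookies_str):
    """Convert a raw 'k1=v1; k2=v2' cookie header string into a dict.

    Uses str.partition so values containing '=' are kept intact.
    """
    cookies = {}
    for pair in cookies_str.split('; '):
        key, _, value = pair.partition('=')
        cookies[key] = value
    return cookies

# hypothetical captured cookie string; note the '==' inside the session value
cookies = cookies_str_to_dict('logged_in=yes; _gh_sess=abc==; tz=Asia%2FShanghai')
print(cookies)
```

The resulting dict can be passed straight to `scrapy.Request(..., cookies=cookies)` as in the spider above.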
We know that a POST request can be sent with `scrapy.Request()` by specifying the `method` and `body` parameters, but `scrapy.FormRequest()` is normally used to send POST requests instead.
Note: `scrapy.FormRequest()` can send both form and AJAX requests; for background see www.jb51.net/article/146…
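For intuition about what `FormRequest` does with its `formdata` dict: it URL-encodes the key/value pairs into the request body, the same encoding the standard library produces. A minimal sketch (the field values here are made up for illustration):

```python
from urllib.parse import urlencode

# FormRequest URL-encodes formdata into the POST body; the same body could be
# built by hand and passed to scrapy.Request(method='POST', body=...).
formdata = {"login": "noobpythoner", "password": "***", "commit": "Sign in"}
body = urlencode(formdata)
print(body)
```

Since `FormRequest` handles this encoding (and sets the `Content-Type` header) for you, it is the more convenient choice for form logins.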
1. Find the POST URL: click the login button while capturing packets, and locate the URL as https://github.com/session
2. Work out the pattern of the request body: analyze the POST request body; every parameter it contains appears in the previous response
3. Verify login success: request the profile page and check whether it contains the username
```python
import scrapy
import re

class Login2Spider(scrapy.Spider):
    name = 'login2'
    allowed_domains = ['github.com']
    start_urls = ['https://github.com/login']

    def parse(self, response):
        # extract the hidden form fields from the login page
        authenticity_token = response.xpath("//input[@name='authenticity_token']/@value").extract_first()
        utf8 = response.xpath("//input[@name='utf8']/@value").extract_first()
        commit = response.xpath("//input[@name='commit']/@value").extract_first()
        # build the POST request and hand it to the engine
        yield scrapy.FormRequest(
            "https://github.com/session",
            formdata={
                "authenticity_token": authenticity_token,
                "utf8": utf8,
                "commit": commit,
                "login": "noobpythoner",
                "password": "***"
            },
            callback=self.parse_login
        )

    def parse_login(self, response):
        ret = re.findall(r"noobpythoner|NoobPythoner", response.text)
        print(ret)
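The username check in `parse_login` can be exercised on its own, without running the spider. A small sketch with a made-up HTML snippet standing in for `response.text` after a successful login:

```python
import re

# hypothetical fragment of the profile page returned after login
html = '<span class="user">NoobPythoner</span>'
ret = re.findall(r"noobpythoner|NoobPythoner", html)
print(ret)  # a non-empty list means the username was found, i.e. login succeeded
```

An empty list would mean the username is absent from the response, so the login most likely failed.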
Setting `COOKIES_DEBUG = True` in settings.py lets you watch the cookies being passed between requests in the terminal.
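A minimal settings.py fragment for this:

```python
# settings.py
# Log every Cookie header sent and Set-Cookie header received,
# useful for debugging login sessions.
COOKIES_DEBUG = True
```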