The previous article covered how to extract all the content of a dynamic web page. Now we return to the topic of the first article: how to log in and save the login state, so that later visits can carry the cookies.
import requests
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'https://www.zhihu.com/collection/146079773'
    res = requests.get(url, verify=False)
    resSoup = BeautifulSoup(res.content, 'lxml')
    items = resSoup.select("div > h2 > a")
    print(len(items))
verify=False disables SSL certificate verification. Running this code prints 0. Pasting the URL into a browser that is not logged in to Zhihu redirects to the login page, which shows that login is required.
Verification:
from bs4 import BeautifulSoup
from selenium import webdriver

if __name__ == '__main__':
    url = 'https://www.zhihu.com/collection/146079773'
    # res = requests.get(url, verify=False)
    driver = webdriver.Chrome()
    driver.get(url)
    driver.implicitly_wait(2)
    res = driver.page_source
    resSoup = BeautifulSoup(res, 'lxml')
    items = resSoup.select("div > h2 > a")
    print(len(items))
Running this code opens a browser showing the Zhihu login page, which confirms that accessing the collection requires logging in.
Login trick: use selenium to open the login page, set a delay (for example 60 seconds), type the account and password by hand to log in to Zhihu, and after the delay save the cookies to a local file. Later requests carry the saved cookies to stay logged in. If the cookies expire, simply repeat this step. The detailed steps are below:
import pickle
import ssl
import time

from selenium import webdriver

if __name__ == '__main__':
    ssl._create_default_https_context = ssl._create_unverified_context
    # url = 'https://www.zhihu.com/collection/146079773'
    url = "https://www.zhihu.com/signin"
    # res = requests.get(url, verify=False)
    driver = webdriver.Chrome()
    driver.implicitly_wait(5)
    driver.get(url)
    # log in manually in the browser window during this pause
    time.sleep(40)
    cookies = driver.get_cookies()
    pickle.dump(cookies, open("cookies.pkl", "wb"))
    print("save suc")
Run this code and check whether a cookies.pkl file is generated; if so, the cookies were saved successfully.
Next, use the second piece of code to verify it.
import pickle

from bs4 import BeautifulSoup
from selenium import webdriver

if __name__ == '__main__':
    cookies = pickle.load(open("cookies.pkl", "rb"))
    url = 'https://www.zhihu.com/collection/146079773'
    driver = webdriver.Chrome()
    # a page on the target domain must be loaded before cookies can be added
    driver.get("https://www.zhihu.com/signin")
    for cookie in cookies:
        print(cookie)
        driver.add_cookie(cookie)
    driver.get(url)
    driver.implicitly_wait(2)
    res = driver.page_source
    resSoup = BeautifulSoup(res, 'lxml')
    items = resSoup.select("div > h2 > a")
    print(len(items))
The script opens the browser, loads an arbitrary page, then loads the cookies and opens the given URL. Run the code to verify.
At this point the two hardest problems, dynamic pages and login, are solved. What remains is how to store the scraped data. My plan is to first extract all the questions and question links from the 10 pages that require login and save them as a JSON file for later processing, then extract all the image links under each question; whether to save those links or download the images directly is a matter of personal choice.
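As a minimal sketch of that first step (the helper name and the output file name questions.json are my own assumptions, not from the original code), the extracted titles and links could be dumped like this:

import json

from bs4 import BeautifulSoup

def save_questions(page_source, path="questions.json"):
    """Extract question titles/links from a collection page and dump them to JSON."""
    soup = BeautifulSoup(page_source, 'lxml')
    questions = [
        {"title": a.get_text(strip=True), "href": a['href']}
        for a in soup.select("div > h2 > a")
    ]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(questions, f, ensure_ascii=False, indent=2)
    return questions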
The middleware settings in the settings.py file:

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    # 'zhihu.middlewares.PhantomJSMiddleware': 100,
}
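The commented-out entry points at a custom middleware in zhihu/middlewares.py. The original class body is not shown, so the following is only a sketch of what such a PhantomJSMiddleware might look like, under my own assumptions:

# zhihu/middlewares.py -- a hypothetical sketch, not the original implementation
from scrapy.http import HtmlResponse
from selenium import webdriver


class PhantomJSMiddleware(object):
    def __init__(self):
        # PhantomJS is deprecated/removed in newer selenium releases; headless Chrome also works
        self.driver = webdriver.PhantomJS()

    def process_request(self, request, spider):
        # render the page in the browser so JavaScript-generated content is present,
        # then return a response directly, short-circuiting Scrapy's downloader
        self.driver.get(request.url)
        body = self.driver.page_source
        return HtmlResponse(url=request.url, body=body, encoding='utf-8', request=request)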
Anti-crawling strategy: if you visit too fast, a site will usually block access or ban the IP outright. So for a crawler that needs to log in, limit the request rate, for example one request every 5 seconds, or cap the number of requests per IP per minute. For pages that do not require login, using proxy IPs is the best choice (a small sketch follows after the settings below); lowering the request rate also works. The relevant settings in the settings.py file:
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 2
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16
These options all control the crawl rate. I usually just set DOWNLOAD_DELAY, i.e. one request every two seconds.
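For the proxy case mentioned above, one common approach (sketched here under my own assumptions; the proxy addresses are placeholders) is a small downloader middleware that sets request.meta['proxy'], which Scrapy's built-in HttpProxyMiddleware honours:

# a hypothetical proxy middleware, enabled via DOWNLOADER_MIDDLEWARES in settings.py
import random


class RandomProxyMiddleware(object):
    # placeholder proxies; replace with a real pool
    PROXIES = [
        "http://127.0.0.1:8080",
        "http://127.0.0.1:8081",
    ]

    def process_request(self, request, spider):
        # pick a proxy at random for each outgoing request
        request.meta['proxy'] = random.choice(self.PROXIES)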
The spider code is as follows:
import pickle

import scrapy
from bs4 import BeautifulSoup
from scrapy import signals


class Zhihu(scrapy.Spider):
    name = "zhihu"
    cookies = pickle.load(open("cookies.pkl", "rb"))
    urls = []
    questions_url = set()
    for i in range(1, 11):
        temp_url = "https://www.zhihu.com/collection/146079773?page=" + str(i)
        urls.append(temp_url)

    def start_requests(self):
        for url in self.urls:
            request = scrapy.Request(url=url, callback=self.parse, cookies=self.cookies)
            yield request

    def parse(self, response):
        print(response.url)
        resSoup = BeautifulSoup(response.body, 'lxml')
        items = resSoup.select("div > h2 > a")
        print(len(items))
        for item in items:
            print(item['href'])
            self.questions_url.add(item['href'] + "\n")

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # Using signals: this method is used by Scrapy to create your spiders.
        print("from_crawler")
        s = cls()
        crawler.signals.connect(s.spider_closed, signal=signals.spider_closed)
        return s

    def spider_closed(self, spider):
        print("spider closed, save urls")
        with open("urls.txt", "w") as f:
            for url in self.questions_url:
                f.write(url)
Run the spider from the command line and check the urls.txt file.
As you can see, 44 links were captured. After removing a few invalid links such as people and zhuanlan, the file can later be read back, the links assembled into full URLs, and selenium used as a downloader middleware to extract all the image links, as sketched below.
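A minimal sketch of that follow-up step (the filtering rule and the helper name are my own assumptions about how the cleanup would look):

from urllib.parse import urljoin


def load_question_urls(path="urls.txt"):
    """Read the saved hrefs, keep question links and turn them into absolute URLs."""
    question_urls = []
    with open(path) as f:
        for line in f:
            href = line.strip()
            # skip people/zhuanlan and other non-question links
            if "question" not in href:
                continue
            question_urls.append(urljoin("https://www.zhihu.com", href))
    return question_urls


if __name__ == '__main__':
    for url in load_question_urls():
        print(url)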
Summary: this article showed how to use selenium to log in to a site by hand, save the cookies, and reuse them for later logins (this works for almost every site; just limit the request rate to avoid being banned).
These three articles have covered how to use scrapy to grab what you want; by now you should be able to design and implement your own crawler even without the framework. Saving images and using proxies will be introduced briefly later, and a future post will cover deploying the crawler on a server and using docker to set up a Python environment to run it.
WeChat: youquwen1226
GitHub. Feel free to get in touch to discuss.