Chapter 5 felt like a practice project for Chapter 4; the only real addition is the simulated login.
I didn't split this into subsections, so these are just the knowledge points in order, which may look a bit scattered.
1. Common HTTP status codes: 200 (OK), 301/302 (redirect), 403 (forbidden), 404 (not found), 500 (internal server error), 503 (service unavailable).
2. How do you find the POST parameters?
Open the login page with Firebug active, submit a wrong username and password, and watch the POST request (its URL and form data) to work out which parameters the login endpoint expects.
3. Read a local file to build the cookies.
try:
    import cookielib                      # Python 2
except ImportError:
    import http.cookiejar as cookielib    # Python 3
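The full login script in point 4 shows this in context; on its own, loading and saving cookies through the jar looks roughly like this (a sketch; cookies.txt matches the filename used below):

import requests
try:
    import cookielib                      # Python 2
except ImportError:
    import http.cookiejar as cookielib    # Python 3

session = requests.session()
session.cookies = cookielib.LWPCookieJar(filename="cookies.txt")
try:
    # ignore_discard=True also loads session cookies that carry no expiry date
    session.cookies.load(ignore_discard=True)
except IOError:
    print("cookies.txt not found, starting with an empty jar")

# ... after a successful login, persist the cookies back to disk
session.cookies.save(ignore_discard=True)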
4. Logging in to Zhihu with requests
# -*- coding: utf-8 -*-
__author__ = 'jinxiao'

import requests
try:
    import cookielib                      # Python 2
except ImportError:
    import http.cookiejar as cookielib    # Python 3

import re

session = requests.session()  # instantiate a session; all requests calls below go through it
session.cookies = cookielib.LWPCookieJar(filename="cookies.txt")  # a cookie jar that can be saved to disk
# load previously saved cookies
try:
    session.cookies.load(ignore_discard=True)
except IOError:
    print("cookies could not be loaded")

# Zhihu insists on browser-like headers; other sites may not, but most do
agent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0"
header = {
    "HOST": "www.zhihu.com",
    "Referer": "https://www.zhihu.com",
    "User-Agent": agent
}

def is_login():
    # judge the login state from the status code returned by a page that requires login
    inbox_url = "https://www.zhihu.com/question/56250357/answer/148534773"
    response = session.get(inbox_url, headers=header, allow_redirects=False)  # forbid redirects so the status code reveals the login state
    if response.status_code != 200:
        return False
    else:
        return True

def get_xsrf():
    # fetch the _xsrf code from the home page
    response = session.get("https://www.zhihu.com", headers=header)
    match_obj = re.match('.*name="_xsrf" value="(.*?)"', response.text, re.DOTALL)  # DOTALL lets '.' cross newlines
    if match_obj:
        return match_obj.group(1)
    else:
        return ""


def get_index():
    response = session.get("https://www.zhihu.com", headers=header)
    with open("index_page.html", "wb") as f:
        f.write(response.text.encode("utf-8"))
    print("ok")

def zhihu_login(account, password):
    # log in to Zhihu with a phone number or an email address
    if re.match(r"^1\d{10}", account):
        print("logging in with a phone number")
        post_url = "https://www.zhihu.com/login/phone_num"
        post_data = {
            "_xsrf": get_xsrf(),
            "phone_num": account,
            "password": password
        }
    elif "@" in account:
        # the account looks like an email address
        print("logging in with an email address")
        post_url = "https://www.zhihu.com/login/email"
        post_data = {
            "_xsrf": get_xsrf(),
            "email": account,
            "password": password
        }
    else:
        print("unrecognised account format")
        return

    session.post(post_url, data=post_data, headers=header)
    session.cookies.save()

zhihu_login("18782902568", "admin123")
# get_index()
print(is_login())
5. Setting the User-Agent when debugging in the Scrapy shell
scrapy shell -s USER_AGENT='...' url
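For example, reusing the Firefox User-Agent string from the login script above (the target URL is only an illustration):

scrapy shell -s USER_AGENT='Mozilla/5.0 (Windows NT 6.1; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0' https://www.zhihu.com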
6. The JSONView plugin
It pretty-prints JSON responses in the browser, which makes them much easier to inspect.
7. Writing the response to an HTML file
with open("e:/zhihu.html", "wb") as f:
    f.write(response.text.encode("utf-8"))
8. Understanding yield in Scrapy
yield item: the item is handed to the item pipelines for processing.
yield Request: the request is handed to the downloader to be fetched.
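A minimal sketch of both cases inside one spider (the spider name, URLs, and fields are invented for illustration):

import scrapy

class ZhihuDemoSpider(scrapy.Spider):
    # hypothetical spider, only to show where the two kinds of yield end up
    name = "zhihu_demo"
    start_urls = ["https://www.zhihu.com"]

    def parse(self, response):
        for href in response.css("a::attr(href)").extract():
            # yield Request -> scheduled, sent to the downloader, then back into a callback
            yield scrapy.Request(response.urljoin(href), callback=self.parse_question)

    def parse_question(self, response):
        # yield item (a plain dict also counts as an item) -> sent to the item pipelines
        yield {"url": response.url, "title": response.css("title::text").extract_first()}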
9. How to deduplicate in MySQL: deduplicate on the primary key, which then causes primary-key conflicts on repeated inserts.
Solution: append ON DUPLICATE KEY UPDATE content=VALUES(content) to the INSERT statement, where content is the column that needs to be refreshed.
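A minimal sketch of what that looks like from Python, assuming pymysql as the driver and a made-up table zhihu_answer whose primary key is url_id:

import pymysql

conn = pymysql.connect(host="localhost", user="root", password="root",
                       db="spider", charset="utf8mb4")

insert_sql = """
    INSERT INTO zhihu_answer (url_id, content)
    VALUES (%s, %s)
    ON DUPLICATE KEY UPDATE content=VALUES(content)
"""

with conn.cursor() as cursor:
    # inserting the same url_id twice no longer raises a duplicate-key error,
    # it simply overwrites content with the newly scraped value
    cursor.execute(insert_sql, (56250357, "first version of the answer"))
    cursor.execute(insert_sql, (56250357, "updated version of the answer"))
conn.commit()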
10. Entering the captcha manually (zhihu.login_requests.py)
def get_captcha():
    import time
    t = str(int(time.time() * 1000))
    captcha_url = "https://www.zhihu.com/captcha.gif?r={0}&type=login".format(t)
    response = session.get(captcha_url, headers=header)
    with open("captcha.jpg", "wb") as f:
        f.write(response.content)
    captcha = input("Enter the captcha: ")
    return captcha
# Why does the captcha request use session.get instead of requests.get?
# Because requests would open a brand-new session; the captcha would then belong to a different session from the later login request, so the code you type in would not be the one the server expects.
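To put it to use, the value returned by get_captcha() is added to the login form data, roughly like this (a sketch only; the field name "captcha" is my assumption about what the login endpoint expects):

# inside zhihu_login(), phone-number branch for example
post_data = {
    "_xsrf": get_xsrf(),
    "phone_num": account,
    "password": password,
    "captcha": get_captcha()   # assumed field name, not confirmed in the original notes
}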
Author: 今孝
Source: http://www.cnblogs.com/jinxiao-pu/p/6749332.html
The copyright of this post is shared by the author and cnblogs (博客園). Reposting is welcome, but unless the author agrees otherwise this notice must be kept and a clearly visible link to the original must be given on the page.