Web Scraping: Logging in to the WeChat Official Accounts Platform

I. Analyzing the Login Flow

Step 1: open https://mp.weixin.qq.com/ to reach the login page.

Step 2: enter the account name and password and click the login button.

Step 3: after the page redirects, verify by scanning a QR code.

Step 4: arrive at the main page.

A quick look at the Network tab in Chrome shows that the password is encrypted before it is sent.

II. Background Knowledge

The requests module

Create a new .py file, import requests, and Ctrl+click into its source, which opens with:

Requests is an HTTP library, written in Python, for human beings. Basic GET
usage:

   >>> import requests
   >>> r = requests.get('https://www.python.org')
   >>> r.status_code
   200
   >>> 'Python is a programming language' in r.text
   True

... or POST:

   >>> payload = dict(key1='value1', key2='value2')
   >>> r = requests.post('http://httpbin.org/post', data=payload)
   >>> print(r.text)
   {
     ...
     "form": {
       "key2": "value2",
       "key1": "value1"
     },
     ...
   }

That covers the basic usage of requests.

There are other request helpers as well; reading the source shows that they all delegate to the request method, whose docstring explains many of the parameters you can set:

:param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
        or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
        to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
Parameters of the request method
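
To make several of these parameters concrete, here is a minimal sketch that exercises a few of them together (httpbin.org is used only as a stand-in endpoint for the demo):

import requests

# A GET with query params, custom headers, a (connect, read) timeout,
# and redirects disabled, exercising a few of the parameters above.
r = requests.request(
    method="get",
    url="https://httpbin.org/get",        # stand-in endpoint
    params={"q": "weixin"},               # appended to the query string
    headers={"User-Agent": "demo-agent"}, # sent as HTTP headers
    timeout=(3.05, 10),                   # (connect timeout, read timeout)
    allow_redirects=False,                # do not follow 3xx responses
)
print(r.status_code, r.url)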

BeautifulSoup

BeautifulSoup is a module that takes an HTML or XML string, parses it, and then lets you locate elements quickly with the methods it provides, making it simple to find a given element inside HTML or XML.

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
asdf
    <div class="title">
        <b>The Dormouse's story總共</b>
        <h1>f</h1>
    </div>
<div class="story">Once upon a time there were three little sisters; and their names were
    <a class="sister0" id="link1">Els<span>f</span>ie</a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</div>
ad<br/>sf
<p class="story">...</p>
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")
# Find the first <a> tag
tag1 = soup.find(name='a')
# Find all <a> tags
tag2 = soup.find_all(name='a')
# Find the tag with id="link2"
tag3 = soup.select('#link2')

Installation:

pip3 install beautifulsoup4

Usage example:

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
    ...
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")

1. name: the tag's name

# tag = soup.find('a')
# name = tag.name  # get
# print(name)
# tag.name = 'span'  # set
# print(soup)

2. attrs: tag attributes

# tag = soup.find('a')
# attrs = tag.attrs  # get
# print(attrs)
# tag.attrs = {'ik': 123}  # set (replace all attributes)
# tag.attrs['id'] = 'iiiii'  # set (one attribute)
# print(soup)

3. children: all direct child nodes

# body = soup.find('body')
# v = body.children

4. descendants: all descendant nodes (children, grandchildren, and so on)

# body = soup.find('body')
# v = body.descendants

5. clear: empty out all of a tag's children (the tag itself is kept)

# tag = soup.find('body')
# tag.clear()
# print(soup)

6. decompose: recursively delete the tag and all of its children

# body = soup.find('body')
# body.decompose()
# print(soup)

7. extract: recursively remove the tag and all of its children, returning what was removed

# body = soup.find('body')
# v = body.extract()
# print(soup)

8. decode: serialize to a string, including the current tag; decode_contents excludes the current tag

# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)

9. encode: serialize to bytes, including the current tag; encode_contents excludes the current tag

# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)

10. find: get the first matching tag

# tag = soup.find('a')
# print(tag)
# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)

11. find_all: get all matching tags

# tags = soup.find_all('a')
# print(tags)

# tags = soup.find_all('a', limit=1)
# print(tags)

# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# # tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)


# ####### Lists #######
# v = soup.find_all(name=['a', 'div'])
# print(v)

# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)

# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))


# v = soup.find_all(id=['link1', 'link2'])
# print(v)

# v = soup.find_all(href=['link1', 'link2'])
# print(v)

# ####### Regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)

# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)

# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)

# ####### Filtering with a function #######
# def func(tag):
#     return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)


# ## get: read a tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)

12. has_attr: check whether a tag has a given attribute

# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)

13. get_text: get the text inside a tag

# tag = soup.find('a')
# v = tag.get_text()
# print(v)

14. index: find a tag's position within another tag

# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)

# tag = soup.find('body')
# for i, v in enumerate(tag):
#     print(i, v)

15. is_empty_element: whether the tag is an empty (void) or self-closing element,

     i.e. one of: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'

# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)

16. Related tags of the current tag

# soup.next
# soup.next_element
# soup.next_elements
# soup.next_sibling
# soup.next_siblings
 
#
# tag.previous
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling
# tag.previous_siblings
 
#
# tag.parent
# tag.parents

17. Searching among a tag's related tags

# tag.find_next(...)
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)
 
# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)
 
# tag.find_parent(...)
# tag.find_parents(...)
 
# These take the same parameters as find_all

18. select, select_one: CSS selectors

soup.select("title")

soup.select("p:nth-of-type(3)")

soup.select("body a")

soup.select("html head title")

tag = soup.select("span,a")

soup.select("head > title")

soup.select("p > a")

soup.select("p > a:nth-of-type(2)")

soup.select("p > #link1")

soup.select("body > a")

soup.select("#link1 ~ .sister")

soup.select("#link1 + .sister")

soup.select(".sister")

soup.select("[class~=sister]")

soup.select("#link1")

soup.select("a#link2")

soup.select('a[href]')

soup.select('a[href="http://example.com/elsie"]')

soup.select('a[href^="http://example.com/"]')

soup.select('a[href$="tillie"]')

soup.select('a[href*=".com/el"]')


from bs4.element import Tag

def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child

tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator)
print(type(tags), tags)

from bs4.element import Tag

def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child

tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1)
print(type(tags), tags)

19. Tag contents

# tag = soup.find('span')
# print(tag.string)          # get
# tag.string = 'new content' # set
# print(soup)

# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)

# tag = soup.find('body')
# v = tag.stripped_strings  # recursively collect the text of all inner tags
# print(v)

20. append: append a tag inside the current tag

# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)
#
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am a newcomer'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)

21. insert: insert a tag at the given position inside the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am a newcomer'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)

22. insert_after, insert_before: insert after or before the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am a newcomer'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)

23. replace_with: replace the current tag with the given tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am a newcomer'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)

24. Establishing relationships between tags

# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)

25. wrap: wrap the current tag in the given tag

# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = 'I am a newcomer'
#
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)

# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)

26. unwrap: remove the current tag, keeping what it wrapped

# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)

27. strings and stripped_strings

.string only works when the tag contains no child tags; otherwise it is None.

If a tag contains more than one string, use .strings to iterate over them:

for string in soup.strings: print(repr(string))

The output may contain a lot of extra whitespace and blank lines; .stripped_strings strips the surplus whitespace:

for string in soup.stripped_strings: print(repr(string))

 

III. Designing the Request Sequence

1. Visit the home page through requests.Session()

The advantage of making the requests through a Session is that we no longer have to analyze cookies and carry them back and forth by hand.

r0 = se.get(url="https://mp.weixin.qq.com")

The response is the entire HTML page.
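
For clarity, a minimal sketch of how the Session is set up (assuming only that the site sets cookies on the first response):

import requests

# The Session stores cookies from every response and sends them back
# on later requests automatically, so no manual cookie handling is needed.
se = requests.Session()
r0 = se.get("https://mp.weixin.qq.com")
print(r0.status_code)
# Any further se.get()/se.post() calls reuse the cookies set by r0.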

2. POST the credentials back

Analyzing the login in the browser shows that the login URL is:

https://mp.weixin.qq.com/cgi-bin/bizlogin?action=startlogin

The data payload sent in the request is:

{
"username":'********@qq.com',
"pwd":pwd,
"imgcode":None,
"f":"json",
}

Looking at the returned content:

{"base_resp":{"err_msg":"ok","ret":0},"redirect_url":"/cgi-bin/readtemplate?t=user/validate_wx_tmpl&lang=zh_CN&account=1972124257%40qq.com&appticket=18193cc664f191a1a93e&bindalias=cg2***16&mobile=132******69&wx_protect=1&grey=1"}

A redirect link is returned; concatenating it with "https://mp.weixin.qq.com" yields the URL for the third request, as sketched below.
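
A small sketch of assembling that URL (the full source at the end simply concatenates the two strings; urljoin gives the same result here):

from urllib.parse import urljoin

# Pull redirect_url out of the JSON reply and resolve it against the site
# root. Since redirect_url starts with "/", this equals plain concatenation.
redirect = urljoin("https://mp.weixin.qq.com", r1.json()["redirect_url"])
print(redirect)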

Two things to note:

1. Inspecting the login event shows that the password goes through a simple MD5 step before it is sent. The JS source reads pwd: $.md5(o.substr(0,16)), i.e. the first 16 characters are MD5-hashed (a Python sketch of this follows below).

2. Every subsequent request must carry a Referer header set to the previously visited URL.
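
A hedged Python equivalent of that JS hashing (the function name is mine; the full source at the end uses a similar create_md5 helper):

import hashlib

# Mirror $.md5(o.substr(0,16)): MD5 the first 16 characters of the password.
def hash_password(plain):
    return hashlib.md5(plain[:16].encode("utf-8")).hexdigest()

pwd = hash_password("your-password-here")  # placeholder credential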

3. GET the URL assembled in step 2

Looking at the response:

It is a complete HTML page, namely the QR-code verification page.

Watching its Network tab shows that the page uses JS to hit one URL about once per second: it is asking the server whether the QR code has been scanned yet.

The catch is that the HTML fetched with requests contains none of the JS-rendered content, so the QR code's img tag cannot be recovered by formatting the page with soup:

<a class="qrcode js_qrcode" src=""></a>

To get the QR code we therefore have to mimic the page by other means; here we build the URL ourselves after analyzing how the QR-code URL is generated. (QR verification works on the same principle as a CAPTCHA: we first request an image carrying a key, and the server records that key in our cookie and waits for the phone. The phone decodes the image, recovers the key, and sends it to the server; once verified, the server starts answering our polls with a status of 1. This shows up in the later steps.)

So: the QR code's src extracted in the browser has the following format:

/cgi-bin/loginqrcode?action=getqrcode&param=4300&rd=72

Each image corresponds to a different random rd value, so GETing any URL of this shape is enough (the backend presumably takes the path plus the rd value, binds the QR key to our cookie, and waits for verification):

r3 = se.get(
    "https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=getqrcode&param=4300&rd=154"
)
re_r3 = r3.content
with open("reqqq.jpg","wb") as f:
    f.write(re_r3)

The code above saves the QR image locally so it can be scanned with a phone (the scan is done manually here, since automating it would involve Android), and an input() call blocks the script so the session is not lost.

4. How a normal browser logs itself in after the scan

There are two ways to find out what actually happens here:

1. Reading the source: the once-per-second JS request keeps returning status 0, and only after the code is scanned does it return 1. The page then POSTs to:

https://mp.weixin.qq.com/cgi-bin/bizlogin?action=login&token=&lang=zh_CN

2. Using Chrome's Preserve log option to keep and inspect this one request and its URL.

This POST essentially tells the server that everything is ready except the token. The server replies with the URL of the main page (inside a redirect field):

https://mp.weixin.qq.com/cgi-bin/home?t=home/index&lang=zh_CN&token=54546879
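
Putting the two observations together, a minimal polling sketch might look like this (it assumes the ask endpoint returns JSON with a status field, as described above; the exact JSON shape is an assumption):

import time

# Poll the ask endpoint about once per second, as the page's JS does,
# until the phone-side scan flips status from 0 to 1.
while True:
    r_ask = se.get(
        "https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=ask&token=&lang=zh_CN&f=json&ajax=1",
        headers={"Referer": redirect},
    )
    if r_ask.json().get("status") == 1:  # scanned and confirmed
        break
    time.sleep(1)

# Then tell the server we are ready; the reply carries the token/redirect.
r5 = se.post(
    "https://mp.weixin.qq.com/cgi-bin/bizlogin?action=login&token=&lang=zh_CN",
    headers={"Referer": redirect},
)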

5. One final GET of that URL lands on the main page

r6 = se.get(
        url="https://mp.weixin.qq.com/cgi-bin/home?t=home/index&lang=zh_CN&token=%s"%(token),
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1",
        },
    )

Login done. Next up: parsing pages with BeautifulSoup and sending messages...

Source code

# _*_ coding:utf-8 _*_
# _author:khal_Cgg
# _date:2017/2/10
import hashlib
def create_md5(need_date):
    m = hashlib.md5()
    m.update(bytes(str(need_date), encoding='utf-8'))
    return m.hexdigest()

pwd = create_md5("*******")

import requests
se = requests.Session()
r0 = se.get(
    url="https://mp.weixin.qq.com"
)
print("===================>", r0.text)
r1 = se.post(url="https://mp.weixin.qq.com/cgi-bin/bizlogin?action=startlogin",
             data={
                 "username": '*******@qq.com',
                 "pwd": pwd,
                 "imgcode": None,
                 "f": "json",
             },
             headers={
                 "Referer": "https://mp.weixin.qq.com",
                 "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.87 Safari/537.36"
             },
             )

print("===================>", r1.text)
redirect = r1.json()["redirect_url"]
redirect = "https://mp.weixin.qq.com" + redirect
r2 = se.request(
    method="get",
    url=redirect,
)
print("===================>", r2.text)
r3 = se.get(
    "https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=getqrcode&param=4300&rd=154"
)
re_r3 = r3.content
with open("reqqq.jpg", "wb") as f:
    f.write(re_r3)
from bs4 import BeautifulSoup
import re
# soup = BeautifulSoup(erweima, "html.parser")
# tag_erweima = soup.find_all(name="img", attrs={"class": "qrcode js_qrcode"})
# # print(tag_erweima)
# # print(r2.text)
print(redirect)
# aim_text = r1.text
# print(aim_text)
# soup = BeautifulSoup(aim_text, "html.parser")
# tete = soup.find_all(name="div", attrs={"class": "user_info"})
# print(tete)
# token = re.findall(".*token=(\d+)", aim_text)
# token = "1528871467"

# print(token)
# user_send_form = {
#     "token": token,
#     "lang": "zh_cn",
#     "f": "json",
#     "ajax": "1",
#     "random": "0.7277543939038833",
#     "user_opnid": 'oDm6kwV1TS913EeqE7gxMyTLrBcU'
# }
# r3 = se.post(
#     url="https://mp.weixin.qq.com/cgi-bin/user_tag?action=get_fans_info",
#     data=user_send_form,
#     headers={
#         "Referer": "https://mp.weixin.qq.com",
#     }
# )
# print(
#     r3.text,
# )
yanzheng = input("===>")
if yanzheng == "1":
    r4 = se.get(
        url="https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=ask&token=&lang=zh_CN&token=&lang=zh_CN&f=json&ajax=1&random=0.28636331791065484",
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1"
        },
    )
    # print(r4.text)
    # print(r4.cookies)
    r5 = se.post(
        url="https://mp.weixin.qq.com/cgi-bin/bizlogin?action=login&token=&lang=zh_CN",
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1"
        },
    )
    # print(r4.text)
    end = r5.text
    token = re.findall(".*token=(\d+)", end)[0]
    print(token, type(token))
    r6 = se.get(
        url="https://mp.weixin.qq.com/cgi-bin/home?t=home/index&lang=zh_CN&token=%s" % (token),
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1",
        },
    )
    print(r6.text)

Shortcomings

  • Why doesn't allow_redirects work here? Even with it enabled the redirect is not followed; my understanding of the requests module is not deep enough.
  • The cookies still need proper analysis: which ones matter and which do not.