Python Automation Development - [Day 23] - First Look at Web Crawlers

Today's overview:

  1. Crawl news from Autohome

  2. Crawl GitHub and Chouti

  3. requests and BeautifulSoup

  4. Polling and long polling

  5. Django request.POST and request.body

1. HTTP Basics

  1. An HTTP GET request has no request body; all of its parameters are carried in the URL in the request headers.

  2. An HTTP POST request carries its content in the request body (see the sketch after this list).

  3. HTTP = request headers + request body, response headers + response body.

  4. HTTP is stateless: one request, one response, and the exchange is over.
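A minimal sketch of the difference (httpbin.org is used here only as a stand-in test host): preparing a GET and a POST with requests shows where the parameters end up.

import requests

# GET: the parameters are appended to the URL, the body stays empty
get_req = requests.Request('GET', 'http://httpbin.org/get', params={'p': 1}).prepare()
print(get_req.url)    # http://httpbin.org/get?p=1
print(get_req.body)   # None

# POST: the parameters are form-encoded into the request body
post_req = requests.Request('POST', 'http://httpbin.org/post', data={'name': 'alex', 'age': 18}).prepare()
print(post_req.url)   # http://httpbin.org/post
print(post_req.body)  # name=alex&age=18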

2. Crawling the Autohome News Page

#!/usr/bin/python
# -*- coding:utf-8 -*-

import requests
from bs4 import BeautifulSoup


response = requests.get('http://www.autohome.com.cn/news')
response.encoding = 'gbk' # Autohome pages use GBK encoding for Chinese text
# print(response.text)

soup = BeautifulSoup(response.text,'html.parser')

tag = soup.find(name='div',attrs={'id':'auto-channel-lazyload-article'})

li_list = tag.find_all('li')


for li in li_list:

    if li.find(name='h3'):
        print(li.find(name='h3').text)

 

2.1 Crawling the Hua.com Flower Site

import requests
from bs4 import BeautifulSoup


response = requests.get('http://www.hua.com/aiqingxianhua/')


root = BeautifulSoup(response.text,'html.parser') # instantiate the soup object

# look elements up via attribute access / find methods; every tag is an object
div_list = root.find_all(attrs={"class":"grid-item"})


for div in div_list:
    img_dir = div.find(name='img').get('src')
    title = div.find(name='span',attrs={"class":"product-title"})
    price = div.find(name='span',attrs={"class":"price-num"})

    print(img_dir,title.text,price.text)

'''

//img01.hua.com/uploadpic/newpic/9012247.jpg_220x240.jpg 鮮花/幸福的約定-甦醒玫瑰33枝、紫羅蘭、銀葉菊 339
//img01.hua.com/uploadpic/newpic/9012246.jpg_220x240.jpg 鮮花/鄰家女孩-紅玫瑰33枝、紅色小雛菊 296
//img01.hua.com/uploadpic/newpic/9010011.jpg_220x240.jpg 鮮花/一心一意-玫瑰11枝,粉色勿忘我0.3扎 126
//img01.hua.com/uploadpic/newpic/9012011.jpg_220x240.jpg 鮮花/陽光海岸-19枝香檳玫瑰 218
//img01.hua.com/uploadpic/newpic/9010966.jpg_220x240.jpg 鮮花/一往情深-精品玫瑰禮盒:19枝紅玫瑰,勿忘我適量 235
//img01.hua.com/uploadpic/newpic/9012042.jpg_220x240.jpg 鮮花/熱戀-紅玫瑰50枝 359
//img01.hua.com/uploadpic/newpic/9012041.jpg_220x240.jpg 鮮花/浪漫繽紛-戴安娜粉玫瑰50枝 359
//img01.hua.com/uploadpic/newpic/9012175.jpg_220x240.jpg 鮮花/月光女神-白玫瑰11枝,綠色桔梗5枝,小菊3枝,白色石竹梅4枝 228
//img01.hua.com/uploadpic/newpic/9010947.jpg_220x240.jpg 鮮花/真愛如初-雪山玫瑰11枝、深紫色勿忘我0.3扎 186
//img01.hua.com/uploadpic/newpic/9012177.jpg_220x240.jpg 鮮花/不變的承諾-99枝紅玫瑰 519
'''

  

 

3. Crawling GitHub and Chouti

 GitHub automated login

#!/usr/bin/python
# -*- coding:utf-8 -*-

import requests

from bs4 import BeautifulSoup

r1 = requests.get(url='https://github.com/login')

b1 = BeautifulSoup(r1.text,'html.parser')

auth_token = b1.find(attrs={'name':'authenticity_token'}).get('value')
r1_cookies_data = r1.cookies.get_dict()

print(auth_token)

r2 = requests.post('https://github.com/session', data={
    "commit": "Sign in",
    "utf8": '✓',
    "authenticity_token": auth_token,
    "login": "xxxx",
    "password": "xxxx",
},
                   cookies=r1_cookies_data)


r2_cookies_data  = r2.cookies.get_dict()

print(r1_cookies_data)
print(r2_cookies_data)

all_cookies = {}

all_cookies.update(r1_cookies_data)
all_cookies.update(r2_cookies_data)

# For GitHub, the cookies returned after the token-authenticated login POST are sufficient
r3 = requests.get('https://github.com/settings/emails',cookies=r2_cookies_data)
print(r3.text)

 

Log in to Chouti and upvote automatically

#!/usr/bin/python
# -*- coding:utf-8 -*-

import requests

r1 = requests.get(url='http://dig.chouti.com/')

r1_cookies_data = r1.cookies.get_dict()


r2 = requests.post(
    'http://dig.chouti.com/login',
    data={'phone': 'xxx', "password": "xxx", "oneMonth": 1},
    cookies=r1_cookies_data,
)


r2_cookies_data  = r2.cookies.get_dict()

print(r1_cookies_data)
print(r2_cookies_data)

all_cookies = {}

all_cookies.update(r1_cookies_data)
all_cookies.update(r2_cookies_data)


'''
The session id is issued with the first request:
{'JSESSIONID': 'aaaIZQdBA4siraQ2m0t8v', 'route': '0c5178ac241ad1c9437c2aafd89a0e50', 'gpsd': 'dd55c4cda0a45f6bc3274a79a7e50316'}
{'puid': '417d102e3c72e88cd6003bc984c569b4', 'gpid': '4c91ec17bd8340bdb75116916e19bc20'}


'''

r3 = requests.post('http://dig.chouti.com/link/vote?linksId=14708906',cookies=r1_cookies_data)
print(r3.text)

'''
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_50096919787","likedTime":"1508043437615000","lvCount":"6","nick":"congratula","uvCount":"3","voteTime":"小於1分鐘前"}}}


'''

 Note: some sites do not issue a cookie at login time; you must GET a page first to receive the cookie, and the login request merely authorizes it (as with Chouti above). In that case there is no need to carry the cookies from the login response on later requests.

4. Polling and Long Polling

  1. Polling: the client sends an Ajax request to the server at fixed intervals; the server responds immediately and closes the connection.
    Pros: the server-side code is easy to write.
    Cons: most requests return nothing useful, wasting bandwidth and server resources.
    Use case: suitable for small applications.

  2. Long polling: the client sends an Ajax request; the server holds the connection open and only returns a response (and closes the connection) when a new message arrives; after processing the response, the client immediately sends a new request. The server also sets a timeout: when it fires, the server closes the connection and the client reconnects so the server can hold it again (see the sketch after this list).
    Pros: no frequent requests when there are no messages.
    Cons: holding connections consumes server resources.
    Examples: WebQQ, web-based Hi, Facebook IM.

  There is also a distinction between long connections and socket connections:

    1. Long connection: embed a hidden iframe in the page and point its src at a long-lived request, so the server can keep streaming data to the client.
      Pros: messages arrive instantly and no useless requests are sent.
      Cons: keeping a long connection open adds server overhead.
      Example: Gmail chat
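A minimal client-side sketch of long polling, assuming a hypothetical endpoint http://127.0.0.1:8000/poll/ that holds each request until a message is ready or its own timeout fires:

import requests

POLL_URL = 'http://127.0.0.1:8000/poll/'  # hypothetical long-polling endpoint

while True:
    try:
        # read timeout chosen a bit longer than the server's hold time
        resp = requests.get(POLL_URL, timeout=(3, 35))
    except requests.exceptions.Timeout:
        continue  # the server held longer than expected; just reconnect
    if resp.status_code == 200 and resp.text:
        print('new message:', resp.text)
    # loop around immediately so the server can hold the next request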

5. Using requests

  1. GET requests:

requests.get(url="http://www.oldboyedu.com")
# data="http GET / http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n"

requests.get(url="http://www.oldboyedu.com/index.html?p=1")
# data="http GET /index.html?p=1 http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n"

requests.get(url="http://www.oldboyedu.com/index.html",params={'p':1})
# data="http GET /index.html?p=1 http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n"

  2. POST requests:

requests.post(url="http://www.oldboyedu.com",data={'name':'alex','age':18}) # 默認請求頭:application/x-www-form-urlencoded
data="http POST / http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\nname=alex&age=18"


requests.post(url="http://www.oldboyedu.com",json={'name':'alex','age':18}) # 默認請求頭:application/json
data="http POST / http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n{"name": "alex", "age": 18}"


requests.post(
	url="http://www.oldboyedu.com",
	params={'p':1},
	json={'name':'alex','age':18}
) # 默認請求頭:application/json

data="http POST /?p=1 http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n{"name": "alex", "age": 18}"

 3. More parameters

def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
        or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
        to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

      >>> import requests
      >>> req = requests.request('GET', 'http://httpbin.org/get')
      <Response [200]>
    """


  verify is usually used together with cert, for example:
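A brief sketch of how verify and cert are typically passed together (the URL and certificate paths below are placeholders):

import requests

# Check the server certificate against a custom CA bundle and present a
# client certificate/key pair (mutual TLS); paths are placeholders.
ret = requests.get(
    'https://example.com/api',
    verify='/path/to/ca_bundle.pem',   # or False to skip verification (not recommended)
    cert=('/path/to/client.crt', '/path/to/client.key'),
)
print(ret.status_code)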

def param_method_url():
    # requests.request(method='get', url='http://127.0.0.1:8000/test/')
    # requests.request(method='post', url='http://127.0.0.1:8000/test/')
    pass


def param_param():
    # - can be a dict
    # - can be a string
    # - can be bytes (ASCII only)

    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params={'k1': 'v1', 'k2': '水電費'})

    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params="k1=v1&k2=水電費&k3=v3&k3=vv3")

    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))

    # error: non-ASCII characters in bytes are not allowed
    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params=bytes("k1=v1&k2=水電費&k3=v3&k3=vv3", encoding='utf8'))
    pass


def param_data():
    # can be a dict
    # can be a string
    # can be bytes
    # can be a file object

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data={'k1': 'v1', 'k2': '水電費'})

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data="k1=v1; k2=v2; k3=v3; k3=v4"
    # )

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data="k1=v1;k2=v2;k3=v3;k3=v4",
    # headers={'Content-Type': 'application/x-www-form-urlencoded'}
    # )

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data=open('data_file.py', mode='r', encoding='utf-8'), # file content: k1=v1;k2=v2;k3=v3;k3=v4
    # headers={'Content-Type': 'application/x-www-form-urlencoded'}
    # )
    pass


def param_json():
    # the json argument is serialized into a string with json.dumps(...)
    # and sent in the request body, with the Content-Type header set to application/json
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水電費'})


def param_headers():
    # send custom request headers to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水電費'},
                     headers={'Content-Type': 'application/x-www-form-urlencoded'}
                     )


def param_cookies():
    # send cookies to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies={'cook1': 'value1'},
                     )
    # a CookieJar can also be used (the dict form is a wrapper built on top of it)
    from http.cookiejar import CookieJar
    from http.cookiejar import Cookie

    obj = CookieJar()
    obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None, domain='', path='/', secure=False, expires=None,
                          discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False,
                          port_specified=False, domain_specified=False, domain_initial_dot=False, path_specified=False)
                   )
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies=obj)


def param_files():
    # send a file
    # file_dict = {
    # 'f1': open('readme', 'rb')
    # }
    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # files=file_dict)

    # send a file with a custom filename
    # file_dict = {
    # 'f1': ('test.txt', open('readme', 'rb'))
    # } # a tuple nested inside the dict value
    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # files=file_dict)

    # send a file with a custom filename (content given as a string)
    # file_dict = {
    # 'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")
    # }
    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # files=file_dict)

    # send a file with a custom filename, content type, and extra headers
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf", 'application/text', {'k1': '0'})
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    pass


def param_auth():
    from requests.auth import HTTPBasicAuth, HTTPDigestAuth
    # For sites that use the browser's built-in authentication dialog rather than a form,
    # the credentials are encoded by a fixed algorithm, so this can be used to authenticate.
    # See the source: r.headers['Authorization'] = _basic_auth_str(self.username, self.password)

    ret = requests.get('https://api.github.com/user', auth=HTTPBasicAuth('wupeiqi', 'sdfasdfasdf'))
    print(ret.text)

    # ret = requests.get('http://192.168.1.1',
    # auth=HTTPBasicAuth('admin', 'admin'))
    # ret.encoding = 'gbk'
    # print(ret.text)

    # ret = requests.get('http://httpbin.org/digest-auth/auth/user/pass', auth=HTTPDigestAuth('user', 'pass'))
    # print(ret)
    #


def param_timeout():
    # set the timeout
    # ret = requests.get('http://google.com/', timeout=1)
    # print(ret)
    # set the connect timeout and read timeout separately
    # ret = requests.get('http://google.com/', timeout=(5, 1))
    # print(ret)
    pass


def param_allow_redirects():
    # If a site redirects to another address, that first request still returns a response; allow_redirects controls whether requests follows the redirect and re-issues the request to the new address.
    ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
    print(ret.text)


def param_proxies():
    # set proxies; multiple mappings can be configured
    # proxies = {
    # "http": "61.172.249.96:80",
    # "https": "http://61.185.219.126:3128",
    # }

    # proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}

    # ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
    # print(ret.headers)


    # from requests.auth import HTTPProxyAuth
    # a proxy that requires authentication
    #
    # proxyDict = {
    # 'http': '77.75.105.165',
    # 'https': '77.75.105.165'
    # }
    # auth = HTTPProxyAuth('username', 'mypassword')
    #
    # r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
    # print(r.text)

    pass


def param_stream():
    # stream a large response (e.g. 30 GB) in chunks instead of downloading it all at once
    ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
    print(ret.content)
    ret.close()

    # from contextlib import closing
    # with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
    # # handle the response here
    # for i in r.iter_content():
    # print(i)


def requests_session():
    # A Session keeps cookies and other state across requests, so you don't have to fetch and pass them manually each time (while learning, it may help to do it manually first)
    import requests

    session = requests.Session()

    ### 1. First request any page to obtain the initial cookies

    i1 = session.get(url="http://dig.chouti.com/help/service")

    ### 2. Log in carrying the previous cookies; the backend authorizes the gpsd value inside them
    i2 = session.post(
        url="http://dig.chouti.com/login",
        data={
            'phone': "8615131255089",
            'password': "xxxxxx",
            'oneMonth': ""
        }
    )

    i3 = session.post(
        url="http://dig.chouti.com/link/vote?linksId=8589623",
    )
    print(i3.text)

 6. BeautifulSoup

    BeautifulSoup is a module that takes an HTML or XML string, parses it, and then lets you use its methods to quickly locate specified elements, which makes finding elements in HTML or XML simple.

    soup = BeautifulSoup(html_doc, features="lxml") # uses the lxml parser; lxml must be installed separately and is more efficient than html.parser

Usage example:

from bs4 import BeautifulSoup
 
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
    ...
</body>
</html>
"""
 
soup = BeautifulSoup(html_doc, features="lxml")

   1. name: the tag's name

# tag = soup.find('a')
# name = tag.name # get
# print(name)
# tag.name = 'span' # set
# print(soup)

  2. attrs: tag attributes

# tag = soup.find('a')
# attrs = tag.attrs    # get; the result is a dict
# print(attrs)
# tag.attrs = {'ik':123} # set (replace all attributes)
# tag.attrs['id'] = 'iiiii' # set a single attribute
# print(soup)

  3. children: all direct child nodes

# body = soup.find('body')
# v = body.children

  4. descendants: all descendant nodes (children, grandchildren, and so on)

# body = soup.find('body')
# v = body.descendants

  5. clear: empty out all of a tag's children (the tag itself is kept)

# tag = soup.find('body')
# tag.clear()
# print(soup)

  6. decompose: recursively remove the tag and everything inside it (the tag itself is removed as well)

# body = soup.find('body')
# body.decompose()
# print(soup)

  7. extract: recursively remove the tag and return what was removed (similar to a dict's pop)

# body = soup.find('body')
# v = body.extract()
# print(soup)

  8. decode: convert to a string (including the current tag); decode_contents: convert to a string (excluding the current tag)

# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)

  9. encode: convert to bytes (including the current tag); encode_contents: convert to bytes (excluding the current tag)

# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)

   10. find: get the first matching tag

# tag = soup.find('a')
# print(tag)
# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)

  11. find_all: get all matching tags (class is a Python keyword, so use class_ instead)

# tags = soup.find_all('a')
# print(tags)
 
# tags = soup.find_all('a',limit=1)
# print(tags)
 
# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# # tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)
 
 
# ####### lists #######
# v = soup.find_all(name=['a','div'])
# print(v)
 
# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)
 
# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))
 
 
# v = soup.find_all(id=['link1','link2'])
# print(v)
 
# v = soup.find_all(href=['link1','link2'])
# print(v)
 
# ####### regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)
 
# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)
 
# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)
 
# ####### filter with a function #######
# def func(tag):
# return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)
 
 
# ## get: fetch a tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)

  12. has_attr: check whether the tag has a given attribute

# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)

  13. get_text: get the text inside the tag

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
    <a id='a1'>123</a>
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")

tag = soup.find('a')
v = tag.get_text('id')
print(v)

  14. index: get a tag's index position within another tag

# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)
 
# tag = soup.find('body')
# for i,v in enumerate(tag):
# print(i,v)

  15. is_empty_element: whether the tag is an empty (void) or self-closing element,

    i.e. one of the following tags: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'

# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)

  16. Navigating to related tags

#from bs4.element import  Tag

# soup.next               # next node in document order (descends into the tag's children first)
# soup.next_element
# soup.next_elements      # recursively iterate over all following nodes
# soup.next_sibling
# soup.next_siblings      # only siblings that come after this tag
 
#
# tag.previous            # previous node, moving back/outward
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling    # previous sibling tag
# tag.previous_siblings   # only siblings that come before this tag
 
#
# tag.parent              # the parent tag
# tag.parents

  17. Finding related tags of a given tag

# tag.find_next(...)  # filter conditions can be passed
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)
 
# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)
 
# tag.find_parent(...)
# tag.find_parents(...)
 
# these take the same arguments as find_all

  18. select, select_one: CSS selectors

soup.select("title")
 
soup.select("p nth-of-type(3)")
 
soup.select("body a")
 
soup.select("html head title")
 
tag = soup.select("span,a")
 
soup.select("head > title")
 
soup.select("p > a")
 
soup.select("p > a:nth-of-type(2)")
 
soup.select("p > #link1")
 
soup.select("body > a")
 
soup.select("#link1 ~ .sister")
 
soup.select("#link1 + .sister")
 
soup.select(".sister")
 
soup.select("[class~=sister]")
 
soup.select("#link1")
 
soup.select("a#link2")
 
soup.select('a[href]')
 
soup.select('a[href="http://example.com/elsie"]')
 
soup.select('a[href^="http://example.com/"]')
 
soup.select('a[href$="tillie"]')
 
soup.select('a[href*=".com/el"]')
 
 
from bs4.element import Tag
 
def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child
 
tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator)
print(type(tags), tags)
 
from bs4.element import Tag
def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child
 
tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1)
print(type(tags), tags)

  19. Tag content

# tag = soup.find('span')
# print(tag.string)          # get
# tag.string = 'new content' # set
# print(soup)
 
# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)
 
# tag = soup.find('body')
# v = tag.stripped_strings  # recursively get the text of all inner tags, with whitespace stripped
# print(v)

  Difference between string and text:

    1. string can be assigned to; text cannot.

    2. string is of type <class 'bs4.element.NavigableString'>, while text is of type <class 'str'>.
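A quick sketch of the difference, reusing the soup built from the html_doc in example 13 above:

tag = soup.find('a')
print(type(tag.string), tag.string)   # <class 'bs4.element.NavigableString'> 123
print(type(tag.text), tag.text)       # <class 'str'> 123

tag.string = 'replaced'    # allowed: string can be assigned
# tag.text = 'replaced'    # would raise AttributeError: text is read-only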

  20. append: append a tag at the end of the current tag's contents

# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)
#
# from bs4.element import Tag
# obj = Tag(name='i',attrs={'id': 'it'})
# obj.string = '我是一個新來的'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)

  21. insert: insert a tag at a given position inside the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一個新來的'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)

  22. insert_after, insert_before: insert after or before the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一個新來的'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)

  23. replace_with: replace the current tag with the given tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一個新來的'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)

 

   24. Create relationships between tags (unconventional; the created relationship is not visible in the soup itself)

# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)

 

   25. wrap: wrap the current tag inside the given tag

# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = '我是一個新來的'
#
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)
 
# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)

 

   26. unwrap: remove the current tag, keeping what it wrapped

# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)

 

Crawling Zhihu:
