Python Web Scraping from Beginner to Advanced (Part 12)

In earlier articles we used the re module and the lxml module to write scrapers. In this chapter we look at another option for scraping: the bs4 module.

Like lxml, Beautiful Soup is an HTML/XML parser; its main job is parsing and extracting data from HTML/XML documents.

lxml only traverses the parts of the document it needs, whereas Beautiful Soup works on the full HTML DOM: it loads the whole document and builds the entire DOM tree, so its time and memory overhead are much higher and its performance is lower than lxml's.

BeautifulSoup makes parsing HTML fairly easy and has a very friendly API. It supports CSS selectors, the HTML parser in the Python standard library, and the lxml XML parser.

Beautiful Soup 3 is no longer developed, and new projects should use Beautiful Soup 4. Install it with pip: pip install beautifulsoup4
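
As a quick sanity check after installing, you can import the package and print its version; this is just a minimal sketch, and the exact version string depends on your environment:

import bs4

# bs4 exposes the installed version as a string
print(bs4.__version__)  # e.g. 4.x.x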

Official documentation: http://beautifulsoup.readthedocs.io/zh_CN/v4.4.0

Scraping tool     Speed      Ease of use    Installation difficulty
Regex             Fastest    Hard           None (built in)
BeautifulSoup     Slow       Easiest        Easy
lxml              Fast       Easy           Moderate
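
To make the comparison concrete, here is a minimal sketch that extracts the same link text with all three tools; the tiny HTML snippet is made up purely for illustration:

import re
from bs4 import BeautifulSoup
from lxml import etree

html = '<ul><li><a href="link1.html">first item</a></li></ul>'

# 1. Regular expression: fastest, but fragile and hard to maintain
print(re.findall(r'<a href=".*?">(.*?)</a>', html))  # ['first item']

# 2. BeautifulSoup: slower, but with the friendliest API
soup = BeautifulSoup(html, "lxml")
print([a.string for a in soup.find_all("a")])  # ['first item']

# 3. lxml: fast, queried with XPath
tree = etree.HTML(html)
print(tree.xpath('//a/text()'))  # ['first item']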

First, we must import the bs4 library.

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

# The object can also be created from a local HTML file
# soup = BeautifulSoup(open('index.html'), "lxml")

# Pretty-print the contents of the soup object
print(soup.prettify())

Output:

<html>
 <body>
  <div>
   <ul>
    <li class="item-0">
     <a href="link1.html">
      first item
     </a>
    </li>
    <li class="item-1">
     <a href="link2.html">
      second item
     </a>
    </li>
    <li class="item-inactive">
     <a href="link3.html">
      <span class="bold">
       third item
      </span>
     </a>
    </li>
    <li class="item-1">
     <a href="link4.html">
      fourth item
     </a>
    </li>
    <li class="item-0">
     <a href="link5.html">
      fifth item
     </a>
    </li>
   </ul>
  </div>
 </body>
</html>

The four kinds of objects

Beautiful Soup converts a complex HTML document into a tree of nodes, where every node is a Python object. All of these objects fall into four categories:

  • Tag
  • NavigableString
  • BeautifulSoup
  • Comment

1. Tag

Informally, a Tag is simply one of the tags in the HTML. For example:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li)  # <li class="item-0"><a href="link1.html">first item</a></li>
print(soup.a)  # <a href="link1.html">first item</a>
print(soup.span)  # <span class="bold">third item</span>
print(soup.p)  # None
print(type(soup.li))  # <class 'bs4.element.Tag'>

We can easily fetch these tags by writing soup followed by a tag name; the objects returned are of type bs4.element.Tag. Note, however, that this only finds the first matching tag in the whole document. Searching for all matching tags is covered later.

 

A Tag has two important attributes: name and attrs.
from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.attrs)  # {'class': ['item-0']}
print(soup.li["class"])  # ['item-0']
print(soup.li.get('class'))  # ['item-0']

print(soup.li)  # <li class="item-0"><a href="link1.html">first item</a></li>
soup.li["class"] = "newClass"  # attributes and content can be modified
print(soup.li)  # <li class="newClass"><a href="link1.html">first item</a></li>

del soup.li['class']  # an attribute can also be deleted
print(soup.li)  # <li><a href="link1.html">first item</a></li>
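
The code above only exercises attrs; .name simply returns the tag's name. A minimal sketch, continuing with the same soup:

print(soup.li.name)    # li
print(soup.a.name)     # a
print(soup.li.a.name)  # a (tags can be chained to reach nested tags)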

 

 

2. NavigableString

Now that we can get hold of a tag, how do we get the text inside it? Simple: use .string. For example:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.string)  # first item
print(soup.a.string)  # first item
print(soup.span.string)  # third item
# print(soup.p.string)  # AttributeError: 'NoneType' object has no attribute 'string'
print(type(soup.li.string))  # <class 'bs4.element.NavigableString'>
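
Note that .string only returns a value when a tag has a single string inside it; if a tag contains several children with text, .string is None and .get_text() can be used instead. A minimal sketch, continuing with the same soup:

print(soup.ul.string)      # None, because <ul> has several children
print(soup.li.get_text())  # first item
print(soup.ul.get_text())  # the text of all five items, together with the surrounding whitespace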

3. BeautifulSoup

The BeautifulSoup object represents the content of the whole document. Most of the time it can be treated as a special kind of Tag object. To get a feel for it, we can inspect its name and attributes:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.name)  # [document]
print(soup.attrs)  # {}, the document itself has no attributes
print(type(soup.name))  # <class 'str'>

4. Comment

The Comment object is a special kind of NavigableString whose output does not include the comment markers.

from bs4 import BeautifulSoup

html = """
<div>
   <a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.a)  # <a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>
print(soup.a.string)  # Elsie 
print(type(soup.a.string))  # <class 'bs4.element.Comment'>

The content inside the <a> tag is actually an HTML comment, but when we print it with .string the comment markers have already been stripped.
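
Because a Comment prints just like ordinary text, it is safer to check the type before using the value. A minimal sketch, continuing with the same soup:

from bs4.element import Comment

if isinstance(soup.a.string, Comment):
    print("comment:", soup.a.string)
else:
    print("text:", soup.a.string)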

 

Traversing the document tree

1. Direct children: the .contents and .children attributes

.contents

A tag's .contents attribute returns the tag's children as a list, so we can use list indexing to pick out individual elements:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.contents)  # [<a href="link1.html">first item</a>]
print(soup.li.contents[0])  # <a href="link1.html">first item</a>
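
Keep in mind that .contents includes text nodes as well as tags, so for an element such as the <ul> above, whose children are separated by newlines and indentation, the list also contains whitespace strings. A small sketch, continuing with the same soup:

for child in soup.ul.contents:
    print(repr(child))
# alternates between whitespace strings such as '\n         ' and the five <li> tags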

.children

.children does not return a list, but we can iterate over it to get all of the child nodes.

If we print .children, we can see that it is a list iterator object:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.ul.children)  # <list_iterator object at 0x106388a20>
for child in soup.ul.children:
    print(child)

Output:

<li class="item-0"><a href="link1.html">first item</a></li>


<li class="item-1"><a href="link2.html">second item</a></li>


<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>


<li class="item-1"><a href="link4.html">fourth item</a></li>


<li class="item-0"><a href="link5.html">fifth item</a></li>

2. All descendants: the .descendants attribute

.contents and .children only include a tag's direct children. The .descendants attribute recursively iterates over all descendants of a tag; like .children, it has to be iterated to get at the content.

for child in soup.ul.descendants:
    print(child)

Output:

<li class="item-0"><a href="link1.html">first item</a></li>
<a href="link1.html">first item</a>
first item


<li class="item-1"><a href="link2.html">second item</a></li>
<a href="link2.html">second item</a>
second item


<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<a href="link3.html"><span class="bold">third item</span></a>
<span class="bold">third item</span>
third item


<li class="item-1"><a href="link4.html">fourth item</a></li>
<a href="link4.html">fourth item</a>
fourth item


<li class="item-0"><a href="link5.html">fifth item</a></li>
<a href="link5.html">fifth item</a>
fifth item
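
If only the text of the descendants is needed, the related .strings and .stripped_strings generators can be used; .stripped_strings skips the strings that are pure whitespace. A minimal sketch, continuing with the same soup:

for s in soup.ul.stripped_strings:
    print(s)
# first item
# second item
# third item
# fourth item
# fifth item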

 

Searching the document tree

1. find_all(name, attrs, recursive, text, **kwargs)

1) The name parameter

The name parameter finds every tag whose name matches; string objects (text nodes) are automatically ignored.

A. Passing a string

The simplest filter is a string. Pass a string to a search method and Beautiful Soup looks for content that matches the string exactly. The following example finds all of the <span> tags in the document:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all('span'))  # [<span class="bold">third item</span>]

B. Passing a regular expression

If a regular expression is passed in, Beautiful Soup matches tag names with the expression's match() method. The example below finds every tag whose name starts with s, which means the <span> tag should be found:

from bs4 import BeautifulSoup
import re

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
for tag in soup.find_all(re.compile("^s")):
    print(tag)
# <span class="bold">third item</span>

C. Passing a list

If a list is passed in, Beautiful Soup returns content that matches any element of the list. The code below finds all of the <a> and <span> tags in the document:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(["a", "span"]))
# [<a href="link1.html">first item</a>, <a href="link2.html">second item</a>, <a href="link3.html"><span class="bold">third item</span></a>, <span class="bold">third item</span>, <a href="link4.html">fourth item</a>, <a href="link5.html">fifth item</a>]
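
Besides strings, regular expressions, and lists, the name parameter also accepts True (match every tag) and a function that takes a tag and returns a boolean. This is not covered above, but a minimal sketch with the same soup looks like this:

# True matches every tag in the document (html, body, div, ul, the <li> and <a> tags, and the <span>)
print(len(soup.find_all(True)))

def has_class(tag):
    # keep only tags that carry a class attribute
    return tag.has_attr("class")

print(soup.find_all(has_class))
# the five <li> tags plus <span class="bold">third item</span>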

2) Keyword arguments

Keyword arguments whose names are not built-in parameters are treated as filters on tag attributes; for example, we can search by href:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(href='link1.html'))  # [<a href="link1.html">first item</a>]
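
One caveat: class is a reserved word in Python, so filtering by class uses the trailing-underscore keyword class_ instead. A minimal sketch with the same soup:

print(soup.find_all("li", class_="item-0"))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]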

3) The text parameter

The text parameter searches the document's string content. Like the name parameter, it accepts a string, a regular expression, or a list:

from bs4 import BeautifulSoup
import re

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(text="first item"))  # ['first item']
print(soup.find_all(text=["first item", "second item"]))  # ['first item', 'second item']
print(soup.find_all(text=re.compile("item")))  # ['first item', 'second item', 'third item', 'fourth item', 'fifth item']

CSS selectors

This is another way of searching that gets to the same result as find_all.

  • When writing CSS, tag names are used unadorned, class names are prefixed with a dot (.), and id names are prefixed with a hash (#)


  • Here we can filter elements in a similar way, using the soup.select() method, which returns a list

(1) Search by tag name

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.select('span'))  # [<span class="bold">third item</span>]

(2) Search by class name

print(soup.select('.item-0'))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]

(3) Search by id

print(soup.select('#item-0'))  # [] (the sample HTML has no id attributes, so nothing matches)

(4) Combined search

print(soup.select('li.item-0'))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]
print(soup.select('li.item-0>a'))
# [<a href="link1.html">first item</a>, <a href="link5.html">fifth item</a>]

(5) Search by attribute

print(soup.select('a[href="link1.html"]'))  # [<a href="link1.html">first item</a>]

(6) Getting the text content

for text in soup.select('li'):
    print(text.get_text())
"""
first item
second item
third item
fourth item
fifth item
"""
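
select() returns Tag objects, so the usual Tag API applies to the results. A small sketch, continuing with the same soup, that collects each link's href along with its text:

for a in soup.select('li > a'):
    print(a.get('href'), a.get_text())
# link1.html first item
# link2.html second item
# link3.html third item
# link4.html fourth item
# link5.html fifth item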