BeautifulSoup is a module that takes an HTML or XML string, parses it into a structured form, and then provides methods for quickly locating specific elements, making it simple to find a given element in an HTML or XML document.
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
asdf
<div class="title">
<b>The Dormouse's story總共</b>
<h1>f</h1>
</div>
<div class="story">Once upon a time there were three little sisters; and their names were
<a class="sister0" id="link1">Els<span>f</span>ie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</div>
ad<br/>sf
<p class="story">...</p>
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")

# find the first <a> tag
tag1 = soup.find(name='a')
# find all <a> tags
tag2 = soup.find_all(name='a')
# find the tag with id=link2
tag3 = soup.select('#link2')
Installation:

pip3 install beautifulsoup4
Usage example:

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
...
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")
Part 1: Kinds of objects
(1) Tag
A Tag corresponds to a markup element. The soup object itself is special: its name is [document]; for other internal tags, the output value is the name of the tag itself.
Extract the title: print(soup.title); extract an <a> tag: print(soup.a); extract a <p> tag: print(soup.p)
A Tag lets you not only read its name but also change it:
soup.title.name = 'mytitle'
Attributes:
print(soup.p['class'])
print(soup.p.get('class'))
(2) NavigableString
Gets the text inside a tag:
print(soup.p.string)
Converting to a Unicode string (Python 2's unicode(); use str() in Python 3):
unicode_string = str(soup.p.string)
(3) BeautifulSoup
The BeautifulSoup object represents the full content of a document. Most of the time you can treat it as a special Tag; you can inspect its type, name, and attributes to get a feel for it:
print(type(soup.name))
(4) Comment
The Comment object is a special type of NavigableString; when printed, its content does not include the comment markers.
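As a minimal runnable sketch (the markup and variable names are invented for illustration, and html.parser is used here to avoid the lxml dependency), a comment parses into a bs4.element.Comment, and printing it drops the <!-- --> markers:

```python
from bs4 import BeautifulSoup
from bs4.element import Comment

# A comment inside a tag parses into a Comment object, a subclass of
# NavigableString; its text excludes the <!-- --> markers.
soup = BeautifulSoup("<b><!-- This is a comment --></b>", "html.parser")
text = soup.b.string

print(type(text))                  # <class 'bs4.element.Comment'>
print(isinstance(text, Comment))   # True
print(text)                        # the comment text, markers stripped
```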
Part 3: Traversing the document tree
(1) Child nodes
Key points: the .contents and .children attributes
A tag's .contents attribute returns the tag's child nodes as a list:
print(soup.head.contents)
# [<title>The Dormouse's story</title>]
Since the output is a list, we can use list indexing to get one of its elements:
print(soup.head.contents[0])
# <title>The Dormouse's story</title>
.children
It does not return a list, but we can still get all the child nodes by iterating over it.
If we print .children, we can see that it is an iterator object:
print(soup.head.children)
# <list_iterator object at 0x7f71457f5710>
How do we get at its contents? Simple: just iterate over it. The code and results are as follows:
for child in soup.body.children:
    print(child)
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
1. name, the tag's name
# tag = soup.find('a')
# name = tag.name    # get
# print(name)
# tag.name = 'span'  # set
# print(soup)
2. attrs, the tag's attributes
# tag = soup.find('a')
# attrs = tag.attrs          # get
# print(attrs)
# tag.attrs = {'ik': 123}    # set (replaces all attributes)
# tag.attrs['id'] = 'iiiii'  # set a single attribute
# print(soup)
3. children, all direct children
# body = soup.find('body')
# v = body.children
4. descendants, all descendants (children, grandchildren, and so on)
# body = soup.find('body')
# v = body.descendants
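To make the difference from .children concrete, here is a small sketch (the markup is invented for illustration): .children yields only direct children, while .descendants recurses and also yields text nodes:

```python
from bs4 import BeautifulSoup

# <body> has one direct child (<div>), but three descendants:
# <div>, <a>, and the text node "link".
soup = BeautifulSoup("<body><div><a>link</a></div></body>", "html.parser")
body = soup.body

children = list(body.children)        # direct children only
descendants = list(body.descendants)  # recursive, includes text nodes

print(len(children))     # 1
print(len(descendants))  # 3
```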
5. clear, remove all of a tag's children (the tag itself is kept)
# tag = soup.find('body')
# tag.clear()
# print(soup)
6. decompose, recursively destroy the tag and everything inside it
# body = soup.find('body')
# body.decompose()
# print(soup)
7. extract, recursively remove the tag like decompose, but return the removed tag
# body = soup.find('body')
# v = body.extract()
# print(soup)
8. decode, convert to a string (including the current tag); decode_contents (excluding the current tag)
# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)
9. encode, convert to bytes (including the current tag); encode_contents (excluding the current tag)
# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)
10. find, get the first matching tag
# tag = soup.find('a')
# print(tag)
# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)
11. find_all, get all matching tags
# tags = soup.find_all('a')
# print(tags)

# tags = soup.find_all('a', limit=1)
# print(tags)

# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)

# ####### lists #######
# v = soup.find_all(name=['a', 'div'])
# print(v)

# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)

# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))

# v = soup.find_all(id=['link1', 'link2'])
# print(v)

# v = soup.find_all(href=['link1', 'link2'])
# print(v)

# ####### regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)

# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)

# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)

# ####### filtering with a function #######
# def func(tag):
#     return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)

# ## get, read a tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)
12. has_attr, check whether the tag has a given attribute
# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)
13. get_text, get the text inside a tag
# tag = soup.find('a')
# v = tag.get_text()
# print(v)
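get_text() also accepts a separator string as its first positional argument and a strip keyword; note that the positional argument is a separator, not an attribute name. A small sketch (markup invented for illustration):

```python
from bs4 import BeautifulSoup

# get_text() concatenates all text in the subtree; with a separator and
# strip=True, each piece is stripped and joined by the separator.
soup = BeautifulSoup("<p> Hello <b>world</b> </p>", "html.parser")

plain = soup.p.get_text()
joined = soup.p.get_text(" | ", strip=True)

print(repr(plain))   # ' Hello world '
print(joined)        # Hello | world
```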
14. index, find a tag's index position within another tag
# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)

# tag = soup.find('body')
# for i, v in enumerate(tag):
#     print(i, v)
15. is_empty_element, whether the tag is an empty-element (void) or self-closing tag,
i.e. one of: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'
# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)
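A quick runnable sketch of this property (markup invented for illustration):

```python
from bs4 import BeautifulSoup

# Void tags such as <br> and <img> report is_empty_element True;
# a normal tag with contents reports False.
soup = BeautifulSoup("<div>text<br/><img src='x.png'/></div>", "html.parser")

print(soup.br.is_empty_element)   # True
print(soup.img.is_empty_element)  # True
print(soup.div.is_empty_element)  # False
```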
16. Related tags of the current tag
# soup.next
# soup.next_element
# soup.next_elements
# soup.next_sibling
# soup.next_siblings

# tag.previous
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling
# tag.previous_siblings

# tag.parent
# tag.parents
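A minimal sketch exercising a few of these navigation attributes (markup invented for illustration):

```python
from bs4 import BeautifulSoup

# Two adjacent <a> tags inside a <p>: navigate sideways and upward.
soup = BeautifulSoup("<p><a id='a1'>one</a><a id='a2'>two</a></p>",
                     "html.parser")
first = soup.find('a', id='a1')

print(first.next_sibling['id'])  # a2  (next tag at the same level)
print(first.parent.name)         # p   (enclosing tag)
print(first.next_element)        # one (parse order visits the text first)
```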
17. Searching among a tag's related tags
# tag.find_next(...)
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)

# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)

# tag.find_parent(...)
# tag.find_parents(...)

# These take the same parameters as find_all
18. select, select_one: CSS selectors
soup.select("title")

soup.select("p:nth-of-type(3)")

soup.select("body a")

soup.select("html head title")

tag = soup.select("span,a")

soup.select("head > title")

soup.select("p > a")

soup.select("p > a:nth-of-type(2)")

soup.select("p > #link1")

soup.select("body > a")

soup.select("#link1 ~ .sister")

soup.select("#link1 + .sister")

soup.select(".sister")

soup.select("[class~=sister]")

soup.select("#link1")

soup.select("a#link2")

soup.select('a[href]')

soup.select('a[href="http://example.com/elsie"]')

soup.select('a[href^="http://example.com/"]')

soup.select('a[href$="tillie"]')

soup.select('a[href*=".com/el"]')

from bs4.element import Tag

def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child

tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator)
print(type(tags), tags)

from bs4.element import Tag

def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child

tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1)
print(type(tags), tags)
19. Tag content
# tag = soup.find('span')
# print(tag.string)           # get
# tag.string = 'new content'  # set
# print(soup)

# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)

# tag = soup.find('body')
# v = tag.stripped_strings  # recursively get the text of all tags inside
# print(v)
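stripped_strings in particular returns a generator, not a list; a small sketch (markup invented for illustration):

```python
from bs4 import BeautifulSoup

# stripped_strings yields every text node in the subtree with surrounding
# whitespace stripped; whitespace-only nodes (like the newline) are dropped.
soup = BeautifulSoup("<body><p> Hello </p>\n<p> world </p></body>",
                     "html.parser")

texts = list(soup.body.stripped_strings)
print(texts)  # ['Hello', 'world']
```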
20. append, append a tag inside the current tag
# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)
21. insert, insert a tag at a given position inside the current tag
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)
22. insert_after, insert_before: insert after or before the current tag
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)
23. replace_with, replace the current tag with the given tag
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)
24. setup, create relationships between tags (without changing their positions)
# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)
25. wrap, wrap the current tag inside the given tag
# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = 'I am new here'
#
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)

# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)
26. unwrap, remove the current tag while keeping what it wraps
# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)
Reposted from: