Python: basic usage of BeautifulSoup

  Official documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

  Reference: http://www.javashuo.com/article/p-bhmtmues-cu.html

  

  What is BeautifulSoup?

    BeautifulSoup is an HTML/XML parser written in Python. It handles malformed markup gracefully and builds a parse tree, and it provides simple, commonly used operations for navigating, searching, and modifying that tree.
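
  A typical setup installs the library with pip and imports BeautifulSoup. The snippet below is a minimal sketch of parsing a small fragment; the package names beautifulsoup4 and lxml are the usual ones and are assumed here:

    # Assumed installation: pip install beautifulsoup4 lxml
    from bs4 import BeautifulSoup

    # BeautifulSoup repairs the unclosed tags and builds a parse tree
    soup = BeautifulSoup("<p>Hello <b>world", "lxml")
    print(soup.p.b.string)  # world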

 

  The following test example gives a quick walkthrough of how BeautifulSoup is used.

  

    def beautifulSoup_test(self):
        # Requires the beautifulsoup4 and lxml packages (e.g. pip install beautifulsoup4 lxml)
        from bs4 import BeautifulSoup

        html_doc = """
        <html><head><title>The Dormouse's story</title></head>
        <body>
        <p class="title"><b>The Dormouse's story</b></p>

        <p class="story">Once upon a time there were three little sisters; and their names were
        <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
        <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
        <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
        <div class="text" id="div1">測試</div>
        and they lived at the bottom of a well.</p>

        <p class="story">...</p>

        """
        # soup is the BeautifulSoup object wrapping the parsed document
        soup = BeautifulSoup(html_doc, 'lxml')

        # soup.title gives the <title> tag
        print(soup.title)
        # Output: <title>The Dormouse's story</title>

        # soup.p gives only the first <p> tag in the document; to get all of them, use find_all.
        # find_all returns a list that can be iterated over to get each match in turn.
        print(soup.p)
        print(soup.find_all('p'))


        # find returns the first tag matching the given criteria, here by id
        print(soup.find(id='link3'))

        # get_text returns the text content; it works on any tag produced by BeautifulSoup
        print(soup.get_text())

        aitems = soup.find_all('a')
        # Get the href and id attributes of each <a> tag
        for item in aitems:
            print(item["href"], item["id"])

        # 1. Searching by CSS class
        print(soup.find_all("a", class_="sister"))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
        # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

        # select accepts CSS selector syntax
        print(soup.select("p.title"))
        # Output: [<p class="title"><b>The Dormouse's story</b></p>]

        # 2. Searching by attributes
        print(soup.find_all("a", attrs={"class": "sister"}))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
        # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

        # 3. Searching by text (newer bs4 versions prefer string= over text=)
        print(soup.find_all(text="Elsie"))
        # Output: ['Elsie']

        print(soup.find_all(text=["Tillie", "Elsie", "Lacie"]))
        # Output: ['Elsie', 'Lacie', 'Tillie']


        # 4. Limiting the number of results
        print(soup.find_all("a", limit=2))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

        # Searching by id
        print(soup.find_all(id="link2"))
        # Output: [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

        # id=True matches every tag that has an id attribute
        print(soup.find_all(id=True))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
        # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>,
        # <div class="text" id="div1">測試</div>]
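
  The example above focuses on searching. As noted earlier, BeautifulSoup can also navigate and modify the parse tree; the lines below are a minimal sketch of that, meant to continue at the end of the same method (the renamed text and the extra link are made up for illustration):

        # Navigating: move from a tag to its parent and sibling tags
        link2 = soup.find(id="link2")
        print(link2.parent.name)                       # p
        print(link2.find_previous_sibling("a")["id"])  # link1

        # Modifying: change attributes and text, then append a new tag
        link2["class"] = "sister renamed"              # hypothetical new class value
        link2.string = "Lacie (renamed)"
        new_tag = soup.new_tag("a", href="http://example.com/maud", id="link4")  # hypothetical link
        new_tag.string = "Maud"
        story = soup.find("p", class_="story")
        story.append(new_tag)
        print(story.prettify())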