Web Scraping Basics - 1

What a crawler is
  - a crawler simulates a browser sending requests and receiving the HTML response

The crawling workflow
  - url ---> send a request, get the response ---> extract data ---> save
  - send a request, get the response ---> extract URLs to crawl next

A crawler must go by the response actually returned for the current URL: what the browser shows in the Elements panel can differ from the raw response for that URL (e.g. because of Ajax).

Where does the data on a page live?
  - in the response for the current URL
  - in responses for other URLs
  - for example, in Ajax requests
  - generated by JS
  - sometimes part of the data is in the response
  - sometimes all of it is generated by JS (often from JSON)

How to handle encoding/decoding in requests (sketched below)
  - response.content.decode()
  - response.content.decode("gbk")
  - response.text
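
A minimal sketch of the three options (the URL here is just an example):

import requests

response = requests.get("http://www.baidu.com")

html = response.content.decode()  # decode the raw bytes as utf-8 (the default)
# html = response.content.decode("gbk")  # or with an explicit codec such as gbk
html = response.text  # let requests infer the encoding (it may guess wrong)
print(html[:100])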

Checking whether a request succeeded
  assert response.status_code == 200

Another way to format strings

  "i love python {}".format(1)


Using proxy IPs
  - prepare a batch of IP addresses to form an IP pool, and randomly pick one to use

  - how to pick a proxy IP at random while making less-used addresses more likely to be chosen (see the sketch after this list)
  - {"ip":ip,"times":0}
  - [{},{},{},{},{}], sort this list of IPs by how many times each has been used
  - take the 10 least-used IPs and randomly pick one of them

  - checking whether an IP is usable
  - add a timeout parameter in requests to judge the quality of the IP address
  - use an online proxy-IP quality-checking site
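
A minimal sketch of that selection strategy (the proxy addresses below are placeholders, not real proxies):

import random
import requests

# hypothetical pool; "times" counts how often each proxy has been used
ip_pool = [
    {"ip": "http://1.1.1.1:80", "times": 3},
    {"ip": "http://2.2.2.2:80", "times": 0},
    {"ip": "http://3.3.3.3:80", "times": 1},
]

ip_pool.sort(key=lambda x: x["times"])  # least-used first
proxy = random.choice(ip_pool[:10])  # random pick among the 10 least-used
proxy["times"] += 1

try:
    # a short timeout weeds out slow or dead proxies
    r = requests.get("http://www.baidu.com", proxies={"http": proxy["ip"]}, timeout=3)
    print(proxy["ip"], r.status_code)
except requests.exceptions.RequestException:
    print(proxy["ip"], "unusable")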


Sending requests with cookies
  - carry a batch of cookies when requesting, organized into a cookie pool (a minimal sketch follows)
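
A minimal sketch of drawing from such a pool (the cookie names and values are placeholders):

import random
import requests

# hypothetical cookie pool: each entry holds the cookies of one logged-in account
cookie_pool = [
    {"session_id": "aaa111"},
    {"session_id": "bbb222"},
]

cookies = random.choice(cookie_pool)
r = requests.get("http://example.com/profile", cookies=cookies)
print(r.status_code)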

How to use the session class provided by requests to fetch sites behind a login (sketched below)
  - instantiate a session
  - first use the session to send the login request, so that the cookies are saved in the session
  - then use the session to request pages that are only accessible after login; the session automatically carries the cookies saved when the login succeeded
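
A minimal sketch of that flow, assuming a hypothetical login endpoint and form field names:

import requests

session = requests.Session()

# hypothetical login URL and form fields; adjust to the target site
login_url = "http://example.com/login"
post_data = {"email": "user@example.com", "password": "secret"}

session.post(login_url, data=post_data)  # the login cookies are saved in the session
r = session.get("http://example.com/profile")  # the saved cookies are sent automatically
print(r.status_code)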

Fetching logged-in pages with cookies, without sending a POST request
  - for sites whose cookies expire far in the future
  - all the data has to be fetched before the cookies expire, which is cumbersome
  - can be paired with other programs: a separate program is dedicated to obtaining cookies, while this one only requests pages

Dict comprehensions and list comprehensions
  cookies="anonymid=j3jxk555-nrn0wh; _r01_=1; _ga=GA1.2.1274811859.1497951251; _de=BF09EE3A28DED52E6B65F6A4705D973F1383380866D39FF5; ln_uact=mr_mao_hacker@163.com; depovince=BJ; jebecookies=54f5d0fd-9299-4bb4-801c-eefa4fd3012b|||||; JSESSIONID=abcI6TfWH4N4t_aWJnvdw; ick_login=4be198ce-1f9c-4eab-971d-48abfda70a50; p=0cbee3304bce1ede82a56e901916d0949; first_login_flag=1; ln_hurl=http://hdn.xnimg.cn/photos/hdn421/20171230/1635/main_JQzq_ae7b0000a8791986.jpg; t=79bdd322e760beae79c0b511b8c92a6b9; societyguester=79bdd322e760beae79c0b511b8c92a6b9; id=327550029; xnsid=2ac9a5d8; loginfrom=syshome; ch_id=10016; wp_fold=0"

cookies = {i.split("=", 1)[0]: i.split("=", 1)[1] for i in cookies.split("; ")}  # maxsplit=1 keeps any "=" inside a cookie value intact


[self.url_temp.format(i * 50) for i in range(1000)]  # builds the first 1000 page URLs (pn steps by 50 per page)


Three ways to get a page behind a login
- instantiate a session, use it to send the login POST request, then use it again to fetch the logged-in page
- add a cookie key to headers, with the cookie string as the value
- pass a cookies parameter to the request method, taking the cookies as a dict: each key is a cookie's name and each value is that cookie's value

# coding=utf-8
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36",
}

cookies="anonymid=j3jxk555-nrn0wh; _r01_=1; _ga=GA1.2.1274811859.1497951251; _de=BF09EE3A28DED52E6B65F6A4705D973F1383380866D39FF5; ln_uact=mr_mao_hacker@163.com; depovince=BJ; jebecookies=54f5d0fd-9299-4bb4-801c-eefa4fd3012b|||||; JSESSIONID=abcI6TfWH4N4t_aWJnvdw; ick_login=4be198ce-1f9c-4eab-971d-48abfda70a50; p=0cbee3304bce1ede82a56e901916d0949; first_login_flag=1; ln_hurl=http://hdn.xnimg.cn/photos/hdn421/20171230/1635/main_JQzq_ae7b0000a8791986.jpg; t=79bdd322e760beae79c0b511b8c92a6b9; societyguester=79bdd322e760beae79c0b511b8c92a6b9; id=327550029; xnsid=2ac9a5d8; loginfrom=syshome; ch_id=10016; wp_fold=0"
cookies = {i.split("=", 1)[0]: i.split("=", 1)[1] for i in cookies.split("; ")}  # build a cookie dict from the raw cookie string
print(cookies)

r = requests.get("http://www.renren.com/327550029/profile",headers=headers,cookies=cookies)

# save the page
with open("renren3.html","w",encoding="utf-8") as f:
    f.write(r.content.decode())
# coding=utf-8
import requests


proxies = {"http":"http://163.177.151.23:80"}
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"}

r = requests.get("http://www.baidu.com",proxies=proxies,headers=headers)
print(r.status_code)
# coding=utf-8
import requests


class TiebaSpider:
    def __init__(self, tieba_name):
        self.tieba_name = tieba_name
        self.url_temp = "https://tieba.baidu.com/f?kw=" + tieba_name + "&ie=utf-8&pn={}"
        self.headers = {
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36"}

    def get_url_list(self):  # 1. build the list of page URLs
        # equivalent loop version:
        # url_list = []
        # for i in range(1000):
        #     url_list.append(self.url_temp.format(i*50))
        # return url_list
        return [self.url_temp.format(i * 50) for i in range(1000)]

    def parse_url(self, url):  # send a request and get the response
        print(url)
        response = requests.get(url, headers=self.headers)
        return response.content.decode()

    def save_html(self, html_str, page_num):  # save the html string
        file_path = "{}_page_{}.html".format(self.tieba_name, page_num)
        with open(file_path, "w", encoding="utf-8") as f:  # e.g. "lol_page_4.html"
            f.write(html_str)

    def run(self):  # main logic
        # 1. build the url list
        url_list = self.get_url_list()
        # 2. iterate: send a request and get the response for each url
        for page_num, url in enumerate(url_list, start=1):  # enumerate avoids an O(n) index() lookup per page
            html_str = self.parse_url(url)
            # 3. save, numbering pages from 1
            self.save_html(html_str, page_num)


if __name__ == '__main__':
    tieba_spider = TiebaSpider("lol")
    tieba_spider.run()
# coding=utf-8
import requests
import json
import sys

query_string = sys.argv[1]

headers = {"User-Agent":"Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28E) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Mobile Safari/537.36"}

post_data = {
    "query":query_string,
    "from":"zh",
    "to":"en",
}

post_url = "http://fanyi.baidu.com/basetrans"

r = requests.post(post_url,data=post_data,headers=headers)
# print(r.content.decode())
dict_ret = json.loads(r.content.decode())
ret = dict_ret["trans"][0]["dst"]
print("result is :",ret)