First, a word about crawlers. Crawlers are commonly used to fetch the HTML of specific pages on a site, i.e. they sit on the back-end data-collection side. While crawlers bring a site traffic, some badly designed ones crawl so aggressively that they put a heavy load on the site, and of course some sites simply do not want to be crawled at all — which is why so many anti-crawling techniques have appeared.
1. requests module
Installation:
pip3 install requests
2. beautifulsoup module
Installation:
pip3 install beautifulsoup4 或 pip3 install bs4
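beautifulsoup is what parses the HTML that requests fetches. A minimal sketch of its use on an inline HTML string (no network needed; the markup here is made up for illustration):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <div id="content">
    <a href="/page1">First</a>
    <a href="/page2">Second</a>
  </div>
</body></html>
"""

# Parse with the stdlib parser; lxml can be swapped in if installed.
soup = BeautifulSoup(html, "html.parser")

# Collect the href of every <a> tag.
links = [a["href"] for a in soup.find_all("a")]
print(links)  # ['/page1', '/page2']
```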
The Python standard library provides urllib, urllib2, httplib and other modules for making HTTP requests, but their APIs are clumsy. They were built for another era, another internet, and require an enormous amount of work — even method overriding — to accomplish the simplest tasks.
Requests is an Apache2-licensed HTTP library written in Python. It is a high-level wrapper over the built-in modules that makes network requests from Python far more pleasant: with Requests you can easily perform any operation a browser can.
1. GET requests
# 1. Without parameters
import requests

ret = requests.get('https://github.com/timeline.json')
print(ret.url)
print(ret.text)

# 2. With parameters
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.get("http://httpbin.org/get", params=payload)
print(ret.url)
print(ret.text)
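For reference, the `params` dict is URL-encoded and appended to the URL as a query string. The same transformation can be sketched with the standard library (this mimics what requests produces; it is not its actual internals):

```python
from urllib.parse import urlencode

# Same payload as the example above.
payload = {'key1': 'value1', 'key2': 'value2'}

query = urlencode(payload)               # 'key1=value1&key2=value2'
url = 'http://httpbin.org/get?' + query
print(url)  # http://httpbin.org/get?key1=value1&key2=value2
```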
2. POST requests
# 1. Basic POST
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.post("http://httpbin.org/post", data=payload)
print(ret.text)

# 2. Sending headers and data
import requests
import json

url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}

ret = requests.post(url, data=json.dumps(payload), headers=headers)
print(ret.text)
print(ret.cookies)
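Here `json.dumps` turns the dict into a JSON string before it is placed in the request body. A quick local sketch of that serialization step (no network needed):

```python
import json

payload = {'some': 'data'}
body = json.dumps(payload)   # the string that ends up in the request body

print(body)         # {"some": "data"}
print(type(body))   # <class 'str'>

# The server has to be told what kind of body it is receiving:
headers = {'content-type': 'application/json'}
```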
3. Response attributes
response = requests.get('URL')
response.text                 # response body as a decoded string
response.content              # response body as raw bytes
response.encoding             # get/set the encoding used to decode .text
response.apparent_encoding    # encoding detected from the body itself
response.status_code          # HTTP status code
response.cookies.get_dict()   # cookies as a dict
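The difference between `content` and `text` is just a decode step: `text` is roughly `content` decoded with `encoding` (with `apparent_encoding` as the detected fallback when the server declares nothing useful). A local sketch with a made-up response body:

```python
# Pretend these are the raw bytes of response.content.
raw = '水電費'.encode('utf-8')

# response.text is roughly response.content decoded with response.encoding;
# decoding with the wrong encoding yields mojibake instead of an error.
correct = raw.decode('utf-8')    # right encoding -> readable text
garbled = raw.decode('latin-1')  # wrong encoding -> mojibake

print(correct)  # 水電費
```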
4. Methods and parameters
- Convenience methods
    requests.get(url, params=None, **kwargs)
    requests.post(url, data=None, json=None, **kwargs)
    requests.put(url, data=None, **kwargs)
    requests.head(url, **kwargs)
    requests.delete(url, **kwargs)
    requests.patch(url, data=None, **kwargs)
    requests.options(url, **kwargs)
  All of them are built on top of:
    requests.request(method, url, **kwargs)

- method: HTTP method
- url: target address
- params: parameters passed in the URL (GET)
    requests.request(
        method='GET',
        url='http://www.baidu.com',
        params={'k1': 'v1', 'k2': 'v2'}
    )
    # http://www.baidu.com?k1=v1&k2=v2
- data: data passed in the request body
    requests.request(
        method='POST',
        url='http://www.baidu.com',
        params={'k1': 'v1', 'k2': 'v2'},
        data={'use': 'alex', 'pwd': '123', 'x': [11, 2, 3]}
    )
    Request header: content-type: application/x-www-form-urlencoded
    Request body:   use=alex&pwd=123
- json: data passed in the request body, serialized as JSON
    requests.request(
        method='POST',
        url='http://www.oldboyedu.com',
        params={'k1': 'v1', 'k2': 'v2'},
        json={'use': 'alex', 'pwd': '123'}
    )
    Request header: content-type: application/json
    Request body:   "{'use': 'alex', 'pwd': '123'}"
    PS: use json when the dict contains nested dicts
- headers: request headers
    requests.request(
        method='POST',
        url='http://www.oldboyedu.com',
        params={'k1': 'v1', 'k2': 'v2'},
        json={'use': 'alex', 'pwd': '123'},
        headers={
            'Referer': 'http://dig.chouti.com/',
            'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
        }
    )
- cookies: cookies
- files: file upload
- auth: basic authentication (an encoded username/password added to the headers)
- timeout: timeout for the request and the response
- allow_redirects: whether to follow redirects
- proxies: proxies (e.g. via an nginx reverse proxy)
- verify: whether to verify the TLS certificate
- cert: certificate file
- stream: download the response iteratively as a stream
- session: keeps client state (e.g. cookies) across requests
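As an aside, the basic authentication used by `auth` is nothing more than a base64-encoded `user:password` pair placed in the `Authorization` header. A stdlib sketch of the header requests builds for you (credentials are made up):

```python
import base64

user, password = 'alex', '123'  # example credentials, not real ones

# HTTP Basic auth: base64("user:password") behind the "Basic " prefix.
token = base64.b64encode(f'{user}:{password}'.encode()).decode()
header = {'Authorization': 'Basic ' + token}
print(header)  # {'Authorization': 'Basic YWxleDoxMjM='}
```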
Parameter usage examples:
def param_method_url():
    # requests.request(method='get', url='http://127.0.0.1:8000/test/')
    # requests.request(method='post', url='http://127.0.0.1:8000/test/')
    pass


def param_param():
    # params can be:
    # - a dict
    # - a string
    # - bytes (ASCII only)

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params={'k1': 'v1', 'k2': '水電費'})

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params="k1=v1&k2=水電費&k3=v3&k3=vv3")

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))

    # Wrong: the bytes contain non-ASCII characters
    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params=bytes("k1=v1&k2=水電費&k3=v3&k3=vv3", encoding='utf8'))
    pass


def param_data():
    # data can be a dict, a string, bytes, or a file object

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data={'k1': 'v1', 'k2': '水電費'})

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data="k1=v1; k2=v2; k3=v3; k3=v4")

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data="k1=v1;k2=v2;k3=v3;k3=v4",
    #                  headers={'Content-Type': 'application/x-www-form-urlencoded'})

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data=open('data_file.py', mode='r', encoding='utf-8'),  # file contents: k1=v1;k2=v2;k3=v3;k3=v4
    #                  headers={'Content-Type': 'application/x-www-form-urlencoded'})
    pass


def param_json():
    # Serializes the data with json.dumps(...), sends it in the request body,
    # and sets Content-Type to application/json
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水電費'})


def param_headers():
    # Send request headers to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水電費'},
                     headers={'Content-Type': 'application/x-www-form-urlencoded'})


def param_cookies():
    # Send cookies to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies={'cook1': 'value1'})

    # A CookieJar can also be used (the dict form is a wrapper around it)
    from http.cookiejar import CookieJar
    from http.cookiejar import Cookie

    obj = CookieJar()
    obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None, domain='', path='/',
                          secure=False, expires=None, discard=True, comment=None, comment_url=None,
                          rest={'HttpOnly': None}, rfc2109=False, port_specified=False,
                          domain_specified=False, domain_initial_dot=False, path_specified=False))
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies=obj)


def param_files():
    # Upload a file
    # file_dict = {
    #     'f1': open('readme', 'rb')
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # Upload a file with a custom filename
    # file_dict = {
    #     'f1': ('test.txt', open('readme', 'rb'))
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # Upload a string as the file contents, with a custom filename
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # Upload with an explicit content type and extra headers
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf", 'application/text', {'k1': '0'})
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)
    pass


def param_auth():
    from requests.auth import HTTPBasicAuth, HTTPDigestAuth

    ret = requests.get('https://api.github.com/user', auth=HTTPBasicAuth('wupeiqi', 'sdfasdfasdf'))
    print(ret.text)

    # ret = requests.get('http://192.168.1.1',
    #                    auth=HTTPBasicAuth('admin', 'admin'))
    # ret.encoding = 'gbk'
    # print(ret.text)

    # ret = requests.get('http://httpbin.org/digest-auth/auth/user/pass',
    #                    auth=HTTPDigestAuth('user', 'pass'))
    # print(ret)


def param_timeout():
    # ret = requests.get('http://google.com/', timeout=1)
    # print(ret)

    # ret = requests.get('http://google.com/', timeout=(5, 1))
    # print(ret)
    pass


def param_allow_redirects():
    ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
    print(ret.text)


def param_proxies():
    # proxies = {
    #     "http": "61.172.249.96:80",
    #     "https": "http://61.185.219.126:3128",
    # }
    # proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}
    # ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
    # print(ret.headers)

    # from requests.auth import HTTPProxyAuth
    #
    # proxyDict = {
    #     'http': '77.75.105.165',
    #     'https': '77.75.105.165'
    # }
    # auth = HTTPProxyAuth('username', 'mypassword')
    #
    # r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
    # print(r.text)
    pass


def param_stream():
    ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
    print(ret.content)
    ret.close()

    # from contextlib import closing
    # with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
    #     # Process the response here
    #     for i in r.iter_content():
    #         print(i)


def requests_session():
    import requests

    session = requests.Session()

    # 1. Visit any page first to obtain a cookie
    i1 = session.get(url="http://dig.chouti.com/help/service")

    # 2. Log in, carrying the previous cookie; the backend authorizes the gpsd value in it
    i2 = session.post(
        url="http://dig.chouti.com/login",
        data={
            'phone': "8615131255089",
            'password': "xxxxxx",
            'oneMonth': ""
        }
    )

    # 3. Vote, still carrying the authorized cookie
    i3 = session.post(
        url="http://dig.chouti.com/link/vote?linksId=8589623",
    )
    print(i3.text)
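The Session object works because it stores the cookies from each response in a CookieJar and sends them back on subsequent requests — that is how the login above carries over to the vote request. The jar itself can be exercised locally with the standard library (the cookie name and value here are hypothetical):

```python
from http.cookiejar import Cookie, CookieJar

jar = CookieJar()

# Manually plant a cookie, as if a server had set it on a response.
jar.set_cookie(Cookie(
    version=0, name='gpsd', value='abc123',   # hypothetical session id
    port=None, port_specified=False,
    domain='dig.chouti.com', domain_specified=True, domain_initial_dot=False,
    path='/', path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None,
    rest={'HttpOnly': None}, rfc2109=False,
))

# The jar is iterable; a Session consults it on every request.
cookies = {c.name: c.value for c in jar}
print(cookies)  # {'gpsd': 'abc123'}
```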