Python Web Scraping (1): Static Page Fetching

Getting the response content:

import requests
r=requests.get('http://www.santostang.com/')
print(r.encoding)
print(r.status_code)
print(r.text)

This gets the encoding, the status code (200 = success, 4xx = client error, 5xx = server error), the response text, and so on.
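The status-code ranges above can be captured in a tiny helper (a sketch; the function name `describe_status` is my own, not part of requests):

```python
def describe_status(code):
    """Classify an HTTP status code by its range."""
    if 200 <= code < 300:
        return 'success'
    if 400 <= code < 500:
        return 'client error'
    if 500 <= code < 600:
        return 'server error'
    return 'other'

print(describe_status(200))  # success
print(describe_status(404))  # client error
```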


Customizing the request

Passing URL parameters

key_dict = {'key1':'value1','key2':'value2'}
r=requests.get('http://httpbin.org/get',params=key_dict)
print(r.url)
print(r.text)
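The `params` dictionary is URL-encoded into the query string for you. You can inspect that encoding without a network call by preparing the request locally (a sketch using requests' `PreparedRequest`):

```python
from requests.models import PreparedRequest

req = PreparedRequest()
# prepare_url encodes the params dict into the query string
req.prepare_url('http://httpbin.org/get', {'key1': 'value1', 'key2': 'value2'})
print(req.url)  # http://httpbin.org/get?key1=value1&key2=value2
```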

Customizing request headers

headers={'user-agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0','Host':'www.santostang.com'}
r=requests.get('http://www.santostang.com',headers=headers)
print(r.status_code)

Sending a POST request

A POST request sends form data, so values such as passwords do not appear in the URL; the data dictionary is automatically encoded as a form when sent.

key_dict = {'key1':'value1','key2':'value2'}
r=requests.post('http://httpbin.org/post',data=key_dict)
print(r.url)
print(r.text)

Setting a timeout (an exception is raised if the server does not respond within the given number of seconds)

r=requests.get('http://www.santostang.com/',timeout=0.11)
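In a crawler you usually catch the timeout instead of letting it crash the program. A minimal sketch (the helper name `fetch` and the return-None-on-failure behavior are my own choices):

```python
import requests

def fetch(url, timeout=3):
    """Return the Response, or None if the request fails or times out."""
    try:
        return requests.get(url, timeout=timeout)
    except requests.exceptions.RequestException as e:
        # Timeout is a subclass of RequestException, so this
        # also catches connection errors, DNS failures, etc.
        print('request failed:', e)
        return None

# a .invalid hostname can never resolve, so this prints None
print(fetch('http://nonexistent.invalid/'))
```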


Fetching the Douban Top 250 movie data

 

import requests
from bs4 import BeautifulSoup

def get_movies():
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0',
               'Host': 'movie.douban.com'}
    movie_list = []
    for i in range(10):
        link = 'https://movie.douban.com/top250'
        key_dict = {'start': i * 25, 'filter': ''}
        # pass the custom headers, or Douban may reject the request
        r = requests.get(link, params=key_dict, headers=headers)
        print(r.status_code)
        print(r.url)

        soup = BeautifulSoup(r.text, 'lxml')
        # each movie title sits in <div class="hd"><a><span>title</span>...
        div_list = soup.find_all('div', class_='hd')
        for each in div_list:
            movie = each.a.span.text.strip() + '\n'
            movie_list.append(movie)
    return movie_list

def storFile(data, fileName, method='a'):
    with open(fileName, method, newline='') as f:
        f.write(data)

movie_list = get_movies()
for title in movie_list:  # don't shadow the built-in str
    storFile(title, 'movie top250.txt', 'a')