1. Scraping static HTML pages
This is the simplest case. When you right-click -> View Page Source, everything you want to download already shows up there, so you can just grab the page source directly. The code is as follows:
# Simple open web
import urllib2
print urllib2.urlopen('http://stockrt.github.com').read()

# With password?
import urllib
opener = urllib.FancyURLopener()
print opener.open('http://user:password@stockrt.github.com').read()
2. Content loaded dynamically as you scroll
Some pages don't show everything when first opened; more content is loaded dynamically as you scroll. To crawl pages like these, you need to find the URL that triggers the dynamic loading. The usual method: right-click -> Inspect Element -> Network.
Look for the request fired as you scroll, work out which parameters in its URL change with each scroll, and assemble the corresponding URLs in your code.
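A minimal sketch of that idea, assuming (hypothetically) that the scroll-triggered endpoint just increments a page parameter; the URL pattern below is made up for illustration:

import urllib2

# Hypothetical endpoint spotted in the Network tab; many infinite-scroll
# pages simply bump a page/offset query parameter on each scroll.
base = 'http://example.com/api/items?page=%d'

for page in range(1, 4):
    url = base % page
    print 'fetching', url
    fragment = urllib2.urlopen(url).read()
    # parse the HTML/JSON fragment returned for this page here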
3. Using mechanize to simulate a browser visiting a web page
Sometimes you'll find the methods above don't work: what you download doesn't match what the page shows, and a lot of content is missing. In that case you need to disguise your script as a browser, that is, simulate browser actions by instantiating a browser from the command line or in a Python script. Code examples follow.
Simulating a browser:
import mechanize
import cookielib

# Browser
br = mechanize.Browser()

# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)

# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)

# Follows refresh 0 but does not hang on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

# Want debugging messages?
#br.set_debug_http(True)
#br.set_debug_redirects(True)
#br.set_debug_responses(True)

# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
You now have a browser instance, the br object. With it you can open a page, using code like the following:
# Open some site, let's pick a random one, the first that pops in mind:
r = br.open('http://google.com')
html = r.read()

# Show the source
print html
# or
print br.response().read()

# Show the html title
print br.title()

# Show the response headers
print r.info()
# or
print br.response().info()

# Show the available forms
for f in br.forms():
    print f

# Select the first (index zero) form
br.select_form(nr=0)

# Let's search
br.form['q'] = 'weekend codes'
br.submit()
print br.response().read()

# Looking at some results in link format
for l in br.links(url_regex='stockrt'):
    print l
If the site you're visiting requires authentication (HTTP basic auth), then:
# If the protected site didn't receive the authentication data you would
# end up with a 401 error in your face
br.add_password('http://safe-site.domain', 'username', 'password')
br.open('http://safe-site.domain')
Because we set up the Cookie Jar earlier, you don't need to manage the site's login session yourself, i.e., the case where you have to POST a username and password. In that situation the site typically asks your browser to store a session cookie once you've logged in, so that you aren't asked to log in again, and your cookies end up carrying that field. All of that, storing and resending the session cookie, is handled by the Cookie Jar. Sweet, huh?
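For example, here is a hedged sketch of a form login; the site and the form field names are hypothetical. Once the form is submitted, the LWPCookieJar configured above stores the session cookie and replays it on every later request:

# Hypothetical login page and field names, for illustration only
br.open('http://example.com/login')
br.select_form(nr=0)              # assume the first form is the login form
br.form['username'] = 'joe'
br.form['password'] = 'secret'
br.submit()

# Later requests carry the stored session cookie automatically
print br.open('http://example.com/members').read()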
You can also manage your browser history:
# Testing presence of link (if the link is not found you would have to
# handle a LinkNotFoundError exception)
br.find_link(text='Weekend codes')

# Actually clicking the link
req = br.click_link(text='Weekend codes')
br.open(req)
print br.response().read()
print br.geturl()

# Back
br.back()
print br.response().read()
print br.geturl()
Downloading a file:
# Download
f = br.retrieve('http://www.google.com.br/intl/pt-BR_br/images/logo.gif')[0]
print f
fh = open(f, 'rb')  # the downloaded image is binary, so open it in binary mode
Setting a proxy for HTTP:
# Proxy and user/password
br.set_proxies({"http": "joe:password@myproxy.example.com:3128"})

# Proxy
br.set_proxies({"http": "myproxy.example.com:3128"})

# Proxy password
br.add_proxy_password("joe", "password")