Python urllib urllib2

urllib2 is an extension of urllib.

Similarities and differences:

The most commonly used functions, urllib.urlopen and urllib2.urlopen, are similar, but their parameters differ, for example around timeouts and proxies.

urllib.urlopen accepts only a URL string, while urllib2.urlopen accepts either a URL string or a Request object; a Request object lets you set headers, which urllib cannot do.

urllib has the urlencode method for encoding parameters, which urllib2 lacks, so the two modules are often used together.
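As a quick illustration of urlencode (a minimal sketch; the compatibility import is an addition, since in Python 3 the function moved to urllib.parse):

```python
try:
    from urllib import urlencode        # Python 2
except ImportError:
    from urllib.parse import urlencode  # Python 3 moved it to urllib.parse

values = {'name': 'WHY', 'location': 'SDU', 'language': 'Python'}
query = urlencode(values)
# query is something like 'name=WHY&location=SDU&language=Python'
# (pair ordering may vary on Pythons without ordered dicts)
print(query)
```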

相對來講urllib2功能更多一些,包含了各類handler和opener。ui

There is also the httplib module, which provides the lowest-level HTTP request interface; for example, it can perform GET/POST/PUT and other operations.
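Rough httplib usage looks like this (a sketch only; the actual request lines are commented out to avoid a live network call, and the compatibility import is an addition since Python 3 renamed the module http.client):

```python
try:
    import httplib                 # Python 2
except ImportError:
    import http.client as httplib  # Python 3 renamed the module

# Creating the connection object sends nothing; the socket is opened
# lazily when request() is called.
conn = httplib.HTTPConnection('www.example.com', 80, timeout=5)

# The HTTP method is passed explicitly, so GET/POST/PUT/DELETE all work:
# conn.request('GET', '/')
# resp = conn.getresponse()
# print(resp.status, resp.reason)
# conn.close()
```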

 

Reference: http://blog.csdn.net/column/details/why-bug.html

 

The most basic usage:

import urllib2  
response = urllib2.urlopen('http://www.baidu.com/')  
html = response.read()  
print html

Using a Request object:

import urllib2    
req = urllib2.Request('http://www.baidu.com')    
response = urllib2.urlopen(req)    
the_page = response.read()    
print the_page

Sending form data:

import urllib    
import urllib2    
  
url = 'http://www.someserver.com/register.cgi'    
    
values = {'name' : 'WHY',    
          'location' : 'SDU',    
          'language' : 'Python' }    
  
data = urllib.urlencode(values)  # url-encode the form data  
req = urllib2.Request(url, data)  # a Request carrying data is sent as a POST  
response = urllib2.urlopen(req)  # send the request and receive the response  
the_page = response.read()  # read the response body

The url-encoded values can also be appended to the URL as a query string, which makes a GET request:

import urllib2    
import urllib  
  
data = {}  
  
data['name'] = 'WHY'    
data['location'] = 'SDU'    
data['language'] = 'Python'  
  
url_values = urllib.urlencode(data)    
print url_values  # prints something like name=WHY&location=SDU&language=Python  
  
url = 'http://www.example.com/example.cgi'    
full_url = url + '?' + url_values  
  
response = urllib2.urlopen(full_url)

Setting headers on an HTTP request:

import urllib    
import urllib2    
  
url = 'http://www.someserver.com/cgi-bin/register.cgi'  
  
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'    
values = {'name' : 'WHY',    
          'location' : 'SDU',    
          'language' : 'Python' }    
  
headers = { 'User-Agent' : user_agent }    
data = urllib.urlencode(values)    
req = urllib2.Request(url, data, headers)    
response = urllib2.urlopen(req)    
the_page = response.read()

 

The next examples lead up to openers and handlers. First, checking the real URL with geturl():

from urllib2 import Request, urlopen, URLError, HTTPError  
  
  
old_url = 'http://t.cn/RIxkRnO'  
req = Request(old_url)  
response = urlopen(req)    
print 'Old url :' + old_url  
print 'Real url :' + response.geturl()

Here the URL obtained from response.geturl() differs from old_url because of a redirect.

Viewing page metadata with info():

from urllib2 import Request, urlopen, URLError, HTTPError  
  
old_url = 'http://www.baidu.com'  
req = Request(old_url)  
response = urlopen(req)    
print 'Info():'  
print response.info()

An example of an opener and handler (HTTP basic authentication):

# -*- coding: utf-8 -*-  
import urllib2  
  
# create a password manager  
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()  
  
# add the username and password  
  
top_level_url = "http://example.com/foo/"  
  
# If we knew the realm, we could use it here instead of ``None``.  
# password_mgr.add_password(None, top_level_url, username, password)  
password_mgr.add_password(None, top_level_url, 'why', '1223')  
  
# create a new handler  
handler = urllib2.HTTPBasicAuthHandler(password_mgr)  
  
# create an "opener" (an OpenerDirector instance)  
opener = urllib2.build_opener(handler)  
  
a_url = 'http://www.baidu.com/'  
  
# use the opener to fetch a URL  
opener.open(a_url)  
  
# install the opener;  
# from now on, every call to urllib2.urlopen will use it  
urllib2.install_opener(opener)

Some useful tricks follow.

Proxy settings:

import urllib2  
enable_proxy = True  
proxy_handler = urllib2.ProxyHandler({"http" : 'http://some-proxy.com:8080'})  
null_proxy_handler = urllib2.ProxyHandler({})  
if enable_proxy:  
    opener = urllib2.build_opener(proxy_handler)  
else:  
    opener = urllib2.build_opener(null_proxy_handler)  
urllib2.install_opener(opener)

Timeout settings:

Before Python 2.6:

import urllib2  
import socket  
socket.setdefaulttimeout(10)  # time out after 10 seconds  
urllib2.socket.setdefaulttimeout(10)  # another way to write the same thing

Python 2.6 and later:

import urllib2  
response = urllib2.urlopen('http://www.google.com', timeout=10)

Adding a header to a Request:

import urllib2  
request = urllib2.Request('http://www.baidu.com/')  
request.add_header('User-Agent', 'fake-client')  
response = urllib2.urlopen(request)  
print response.read()

Redirects:

import urllib2  
my_url = 'http://www.google.cn'  
response = urllib2.urlopen(my_url)  
redirected = response.geturl() != my_url  # True if a redirect happened  
print redirected  
  
my_url = 'http://rrurl.cn/b1UZuP'  
response = urllib2.urlopen(my_url)  
redirected = response.geturl() != my_url  
print redirected

To intercept redirects yourself, subclass HTTPRedirectHandler; returning None from these methods stops the redirect from being followed:

import urllib2  
class RedirectHandler(urllib2.HTTPRedirectHandler):  
    def http_error_301(self, req, fp, code, msg, headers):  
        print "301"  
        pass  
    def http_error_302(self, req, fp, code, msg, headers):  
        print "302"  
        pass  
  
opener = urllib2.build_opener(RedirectHandler)  
opener.open('http://rrurl.cn/b1UZuP')

Cookies:

import urllib2  
import cookielib  
cookie = cookielib.CookieJar()  
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))  
response = opener.open('http://www.baidu.com')  
for item in cookie:  
    print 'Name = '+item.name  
    print 'Value = '+item.value

HTTP PUT and DELETE methods:

import urllib2  
request = urllib2.Request(uri, data=data)  # uri and data as in the examples above  
request.get_method = lambda: 'PUT' # or 'DELETE'  
response = urllib2.urlopen(request)

Getting the HTTP status code:

import urllib2  
try:  
    response = urllib2.urlopen('http://bbs.csdn.net/why')  
except urllib2.HTTPError, e:  
    print e.code  # the error status, e.g. 404; for a 2xx response use response.getcode()
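A fuller pattern distinguishes HTTPError (the server answered with an error status) from URLError (the server could not be reached at all). HTTPError is a subclass of URLError, so it must be caught first. A sketch (the compatibility imports and the reserved `.invalid` test host are additions):

```python
try:
    from urllib2 import urlopen, URLError, HTTPError   # Python 2
except ImportError:
    from urllib.request import urlopen                 # Python 3
    from urllib.error import URLError, HTTPError

def fetch(url):
    try:
        return urlopen(url, timeout=5).read()
    except HTTPError as e:   # subclass of URLError: must be caught first
        print('HTTP error code: %s' % e.code)
    except URLError as e:    # DNS failure, host unreachable, ...
        print('Failed to reach the server: %s' % e.reason)

# The .invalid TLD is reserved and never resolves, so this prints a URLError message:
fetch('http://nonexistent.invalid/')
```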

Debug logging:

import urllib2  
httpHandler = urllib2.HTTPHandler(debuglevel=1)  
httpsHandler = urllib2.HTTPSHandler(debuglevel=1)  
opener = urllib2.build_opener(httpHandler, httpsHandler)  
urllib2.install_opener(opener)  
response = urllib2.urlopen('http://www.google.com')