http://blog.sina.com.cn/s/blog_74a7e56e010177l8.html
I had long heard that writing web crawlers in Python is very convenient, and these past few days my workplace happened to have exactly that need: log in to the XX website and download some documents. So I tried it out myself, and the results were quite good.
The site in this example requires a username, password, and verification code to log in. Here Python's urllib2 is used to log in to the site directly and handle its cookies.
How cookies work:
A cookie is generated by the server and sent to the browser, which saves it in a text file under some directory. On the next request to the same site, the browser sends that cookie back, so the server can tell whether the user is legitimate and whether a fresh login is required.
Python provides the basic cookielib library: on the first visit to a page the cookie is saved automatically, and all later visits to other pages then carry the cookie of a properly logged-in session.
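The save-then-resend flow can be sketched with the standard library. The snippet below uses Python 3's http.cookiejar (the successor of the cookielib module the article uses); the response class and cookie value are made up purely to simulate a login response without a network:

```python
import http.cookiejar
import urllib.request
from email.message import Message

# Stand-in for a server response whose headers set a session cookie
# (a real urllib response exposes the same info() method).
class FakeResponse:
    def info(self):
        headers = Message()
        headers['Set-Cookie'] = 'JSESSIONID=abc123; Path=/'
        return headers

jar = http.cookiejar.CookieJar()
login_request = urllib.request.Request('http://www.example.com/login')

# Store the cookie carried by the (simulated) login response ...
jar.extract_cookies(FakeResponse(), login_request)

# ... and attach it to a later request to the same site.
next_request = urllib.request.Request('http://www.example.com/page')
jar.add_cookie_header(next_request)
print(next_request.get_header('Cookie'))  # JSESSIONID=abc123
```

In real use you never call extract_cookies/add_cookie_header yourself: installing an HTTPCookieProcessor on the opener (as the article's code does) makes both steps happen automatically on every request.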
Approach:
(1) Enable cookie support
(2) Defeat anti-leeching checks by disguising the request as a browser
(3) Fetch the verification-code URL and download the captcha image to a local file
(4) Many captcha-recognition schemes can be found online, and Python has image-processing libraries; this example calls the OCR recognition API of the 火车头 collector.
(5) Form handling: a packet-capture tool such as Fiddler can be used to find the parameters that must be submitted
(6) Build the data to submit, then construct and send the HTTP request
(7) Judge from the returned JS page whether the login succeeded
(8) After a successful login, download the remaining pages
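Steps (5) and (6) amount to URL-encoding the captured form fields into a POST body. A minimal sketch (Python 3 shown; the field names mirror the example code below, and the credential and captcha values are made up):

```python
import urllib.parse

# Form fields as captured with a sniffer such as Fiddler; 'login_check'
# carries the recognized captcha text (all values here are illustrative).
values = {
    'login_id': 'user01',
    'opl': 'op_login',
    'login_passwd': 'secret',
    'login_check': '7f3k',
}

# urlencode() builds the application/x-www-form-urlencoded body;
# in Python 3 a POST body must be bytes, hence the encode().
data = urllib.parse.urlencode(values).encode('ascii')
print(data)  # b'login_id=user01&opl=op_login&login_passwd=secret&login_check=7f3k'
```

Passing this body as the data argument of a Request turns it into a POST, which is exactly what the login function below does with the Python 2 urllib.urlencode.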
In this example, multiple accounts take turns logging in, and each account downloads a set number of pages.
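The rotation scheme can be sketched as follows (account names, the per-login quota, and the record-ID range are all illustrative, not the real configuration):

```python
import itertools

# Accounts as they would come from users.conf (made-up credentials).
users = {'user01': 'pass01', 'user02': 'pass02'}
PAGES_PER_LOGIN = 3                     # pages downloaded per login
start_id, end_id = 8593330, 8593341     # record-ID range to fetch

now_id = start_id
schedule = []                           # (account, [record ids]) pairs
accounts = itertools.cycle(users)       # round-robin over the accounts
while now_id <= end_id:
    user = next(accounts)
    batch = list(range(now_id, min(now_id + PAGES_PER_LOGIN, end_id + 1)))
    schedule.append((user, batch))
    now_id += PAGES_PER_LOGIN

print(schedule[0])  # ('user01', [8593330, 8593331, 8593332])
```

Each entry of schedule corresponds to one login: the account signs in, downloads its batch of record IDs, and the cycle moves on to the next account until the range is exhausted.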
Part of the code follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2
import urllib
import cookielib
import xml.etree.ElementTree as ET

#-----------------------------------------------------------------------------
# Log in to www.***.com.cn
def ChinaBiddingLogin(url, username, password):
    # Enable cookie support for urllib2
    cookiejar = cookielib.CookieJar()
    urlopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
    urllib2.install_opener(urlopener)
    urlopener.addheaders.append(('Referer', 'http://www.chinabidding.com.cn/zbw/login/login.jsp'))
    urlopener.addheaders.append(('Accept-Language', 'zh-CN'))
    urlopener.addheaders.append(('Host', 'www.chinabidding.com.cn'))
    urlopener.addheaders.append(('User-Agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)'))
    urlopener.addheaders.append(('Connection', 'Keep-Alive'))

    print 'XXX Login......'

    # Download the captcha image and read the code from the console
    imgurl = r'http://www.*****.com.cn/zbw/login/image.jsp'
    DownloadFile(imgurl, urlopener)
    authcode = raw_input('Please enter the authcode:')
    #authcode = VerifyingCodeRecognization(r"http://192.168.0.106/images/code.jpg")

    # Send login/password to the site and get the session cookie
    values = {'login_id': username, 'opl': 'op_login',
              'login_passwd': password, 'login_check': authcode}
    urlcontent = urlopener.open(urllib2.Request(url, urllib.urlencode(values)))
    page = urlcontent.read(500000)

    # Make sure we are logged in by checking the returned page content:
    # a failed login bounces back to login.jsp
    if page.find('login.jsp') != -1:
        print 'Login failed with username=%s, password=%s and authcode=%s' \
              % (username, password, authcode)
        return False
    else:
        print 'Login succeeded!'
        return True

#-----------------------------------------------------------------------------
# Download fileUrl and save it to a local file
# Note: fileUrl must point to a valid file
def DownloadFile(fileUrl, urlopener):
    isDownOK = False
    try:
        if fileUrl:
            outfile = open(r'/var/www/images/code.jpg', 'wb')
            outfile.write(urlopener.open(urllib2.Request(fileUrl)).read())
            outfile.close()
            isDownOK = True
        else:
            print 'ERROR: fileUrl is NULL!'
    except IOError:
        isDownOK = False
    return isDownOK

#------------------------------------------------------------------------------
# Verifying-code (captcha) recognition via the OCR HTTP API
def VerifyingCodeRecognization(imgurl):
    url = r'http://192.168.0.119:800/api?'
    user = 'admin'
    pwd = 'admin'
    model = 'ocr'
    ocrfile = 'cbi'

    values = {'user': user, 'pwd': pwd, 'model': model,
              'ocrfile': ocrfile, 'imgurl': imgurl}
    data = urllib.urlencode(values)

    try:
        url += data
        urlcontent = urllib2.urlopen(url)
    except IOError:
        print '***ERROR: invalid URL (%s)' % url
        return None
    page = urlcontent.read(500000)

    # Parse the XML reply and extract the verifying code
    root = ET.fromstring(page)
    node_find = root.find('AddField')
    authcode = node_find.attrib['data']
    return authcode

#------------------------------------------------------------------------------
# Read user accounts from the configure file, one "username password" per line
def ReadUsersFromFile(filename):
    users = {}
    for eachLine in open(filename, 'r'):
        info = eachLine.strip().split()
        if len(info) == 2:
            users[info[0]] = info[1]
    return users

#------------------------------------------------------------------------------
def main():
    login_page = r'http://www.***.com.cn/login/login.jsp'
    download_page = r'http://www.***.com.cn/***/***?record_id='

    start_id = 8593330
    end_id = 8595000
    now_id = start_id

    Users = ReadUsersFromFile('users.conf')
    while now_id <= end_id:
        for key in Users:
            if not ChinaBiddingLogin(login_page, key, Users[key]):
                continue
            # Each login downloads a fixed number of pages
            for i in range(3):
                if now_id > end_id:
                    break
                pageUrl = download_page + '%d' % now_id
                urlcontent = urllib2.urlopen(pageUrl)
                filepath = './download/%s.html' % now_id
                f = open(filepath, 'w')
                f.write(urlcontent.read(500000))
                f.close()
                now_id += 1

#------------------------------------------------------------------------------
if __name__ == '__main__':
    main()