Some sites have anti-scraping measures in place. Adding a User-Agent header lets the server identify the request as coming from a browser:
```python
self.user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
# initialize the headers
self.headers = {'User-Agent': self.user_agent}
```
If that is not enough, press F12 in Chrome and inspect the request headers and request body to see what else needs to be sent. For example, some sites check a Referer header (it records which page the request came from), so we can include it with the request. Ctrl + Shift + C lets you locate an element's position in the HTML.
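The headers described above can be attached to a request like this (a minimal Python 3 sketch; the example.com URL and Referer value are placeholders, not from the original crawler):

```python
import urllib.request

# Placeholder target URL; replace with the page you are scraping.
url = 'https://example.com/page'

# Pretend to be a browser, and record where we "came from".
headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
    'Referer': 'https://example.com/',
}

# No request is actually sent here; we only build the Request object.
req = urllib.request.Request(url, headers=headers)

# Request normalizes header names to Title-case internally.
print(req.get_header('User-agent'))
print(req.get_header('Referer'))
```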
```python
html = requests.get(url, headers=headers)  # yes, it really is that simple
```
```python
# -*- coding: utf-8 -*-
# Python 2 code (urllib2); on Python 3 use urllib.request instead
import json
import os
import re
import urllib2

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
# Note: form-data request parameters
params = 'q&viewFlag=A&sortType=default&searchStyle=&searchRegion=city%3A&searchFansNum=&currentPage=1&pageSize=100'


def getHome():
    url = 'https://mm.taobao.com/tstar/search/tstar_model.do?_input_charset=utf-8'
    req = urllib2.Request(url, headers=headers)
    # decode('utf-8'): decode, i.e. convert some encoding into unicode
    # encode('gbk'): encode, i.e. convert unicode into some other encoding
    # "gbk".decode('gbk').encode('utf-8')
    # Analogy: unicode = Chinese, gbk = English, utf-8 = Japanese.
    # To go from English to Japanese you translate via Chinese;
    # unicode acts as the intermediate converter.
    html = urllib2.urlopen(req, data=params).read().decode('gbk').encode('utf-8')
    # parse the JSON string into an object
    peoples = json.loads(html)
    for i in peoples['data']['searchDOList']:
        # visit each model's own page to fetch the details
        getUseInfo(i['userId'], i['realName'])
```
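The JSON handling in `getHome` can be exercised offline. Here is a sketch using a hand-made response body that mimics the `data.searchDOList` structure (the records themselves are invented sample data):

```python
import json

# A made-up response body with the same shape as the
# tstar_model.do reply: data.searchDOList is a list of records.
sample = '''
{"data": {"searchDOList": [
    {"userId": 1001, "realName": "Alice"},
    {"userId": 1002, "realName": "Bob"}
]}}
'''

# json.loads turns the JSON string into nested dicts/lists.
peoples = json.loads(sample)

ids = [i['userId'] for i in peoples['data']['searchDOList']]
names = [i['realName'] for i in peoples['data']['searchDOList']]
print(ids, names)
```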
```python
def getUseInfo(userId, realName):
    url = 'https://mm.taobao.com/self/aiShow.htm?userId=' + str(userId)
    req = urllib2.Request(url)
    html = urllib2.urlopen(req).read().decode('gbk').encode('utf-8')
    pattern = re.compile('<img.*?src=(.*?)/>', re.S)
    items = re.findall(pattern, html)
    x = 0
    for item in items:
        # note the escaped dot: match a literal ".jpg" at the end
        if re.match(r'.*(\.jpg")$', item.strip()):
            tt = 'http:' + re.split('"', item.strip())[1]
            down_image(tt, x, realName)
            x = x + 1
    print('Download finished')
```
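The extraction logic in `getUseInfo` can be checked against a small HTML snippet without touching the network (the fragment below is made up for illustration):

```python
import re

# A tiny made-up HTML fragment with one .jpg image and one .png image.
html = ('<img class="pic" src="//img.example.com/pic1.jpg" />'
        '<img src="//img.example.com/logo.png" />')

# Same pattern as the crawler: capture everything between src= and />.
pattern = re.compile('<img.*?src=(.*?)/>', re.S)
items = re.findall(pattern, html)

jpg_urls = []
for item in items:
    # Each capture still carries its surrounding quotes, e.g.
    # '"//img.example.com/pic1.jpg"'; keep only the .jpg ones.
    if re.match(r'.*(\.jpg")$', item.strip()):
        # re.split('"', ...) strips the quotes; index 1 is the bare URL.
        jpg_urls.append('http:' + re.split('"', item.strip())[1])
print(jpg_urls)
```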
Notes on the regular expression
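In short: `re.S` lets `.` also match newlines, and `.*?` is the non-greedy form of `.*`, stopping at the first possible match instead of the last. A small sketch of the difference (the sample strings are invented):

```python
import re

text = 'src="a.jpg"/> junk src="b.jpg"/>'

# Greedy .* runs on to the LAST "/>", swallowing both tags in one match.
greedy = re.findall('src=(.*)/>', text)

# Non-greedy .*? stops at the FIRST "/>", giving one capture per tag.
lazy = re.findall('src=(.*?)/>', text)

# re.S makes "." match newlines too, so the pattern can span lines.
multiline = re.findall('src=(.*?)/>', 'src="c\n.jpg"/>', re.S)
print(greedy, lazy, multiline)
```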
```python
def down_image(url, filename, realName):
    req = urllib2.Request(url=url)
    folder = 'e:\\images\\%s' % realName
    if not os.path.isdir(folder):
        os.makedirs(folder)
    f = folder + '\\%s.jpg' % filename
    if not os.path.isfile(f):
        print(f)
        binary_data = urllib2.urlopen(req).read()
        with open(f, 'wb') as temp_file:
            temp_file.write(binary_data)
```
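The folder-creation and skip-if-exists logic can be tested without any network. The sketch below uses a hypothetical `save_image` helper that takes the downloaded bytes directly instead of a URL, and a temporary directory instead of the hard-coded `e:\images` Windows path:

```python
import os
import tempfile

def save_image(binary_data, folder, filename):
    """Create the folder if needed and write the file only once."""
    if not os.path.isdir(folder):
        os.makedirs(folder)
    path = os.path.join(folder, '%s.jpg' % filename)
    if not os.path.isfile(path):
        with open(path, 'wb') as temp_file:
            temp_file.write(binary_data)
    return path

base = tempfile.mkdtemp()
folder = os.path.join(base, 'Alice')  # one sub-folder per person

p = save_image(b'\xff\xd8fake-jpeg-bytes', folder, 0)
# A second call with the same name is a no-op: the file already exists.
save_image(b'other-bytes', folder, 0)
print(p, os.path.getsize(p))
```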
GitHub address (it also has crawlers for other sites; stars welcome): https://github.com/peiniwan/Spider2