This article walks through the pitfalls I hit, one by one, while writing a Python crawler, and how I slowly climbed out of them. I'll describe every problem I ran into in detail, so that when you hit the same issues you can quickly pinpoint the cause. Only the core code is posted below; the full code isn't posted yet.
I wanted to crawl all the articles on a certain site, but since I had never written a crawler before (and had no idea which language would be most convenient), I decided to do it in Python (the language is practically named after crawling 😄). The plan: first put everything crawled from the site into ElasticSearch. I chose ElasticSearch because it's fast, with word-segmentation plugins and inverted indexes, so queries will be very efficient when I need the data later (there's a lot to crawl, after all 😄). Then I'll visualize all the data in Kibana, ElasticSearch's other half, and analyze the article content. You can see the expected visualization result first (pictured above); that screenshot is one of the sample dashboards Kibana 6.4 ships with (meaning you can make it look like this, and I want mine to look like this too 😁). Later I'll post a Dockerfile (not done yet 😳).
這些東西能夠去找相應的教程安裝,我這裏只有ElasticSearch的安裝😢點我獲取安裝教程java
Install the dependencies: tomd (converts HTML to Markdown), redis (used below to deduplicate URLs), and scrapy (the crawler framework itself):

```
pip3 install tomd
pip3 install redis
pip3 install scrapy
```
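tomd is the piece that turns the crawled HTML into Markdown (it's used in parseToMarkdown further down). A quick sketch of what it does, with a made-up HTML snippet:

```python
import tomd

# hypothetical input; the spider feeds it the article's HTML instead
html = "<h1>Hello</h1><p>This is <strong>markdown</strong> now.</p>"
print(tomd.Tomd(html).markdown)
```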
If the install fails with a gcc error like `error: command 'gcc' failed with exit status 1`, use yum to install python34-devel. The exact package name depends on your Python version (it may be plain python-devel); replace the 34 with your own version. Mine is 3.4.6:

```
yum install python34-devel
```
Run `scrapy startproject scrapyDemo` to create a crawler project:

```
liaochengdeMacBook-Pro:scrapy liaocheng$ scrapy startproject scrapyDemo
New Scrapy project 'scrapyDemo', using template directory '/usr/local/lib/python3.7/site-packages/scrapy/templates/project', created in:
    /Users/liaocheng/script/scrapy/scrapyDemo

You can start your first spider with:
    cd scrapyDemo
    scrapy genspider example example.com
liaochengdeMacBook-Pro:scrapy liaocheng$
```
Then run `scrapy genspider demo juejin.im` to generate a spider; the URL at the end is the site you want to crawl, and we'll crawl our own home first 😂

```
liaochengdeMacBook-Pro:scrapy liaocheng$ scrapy genspider demo juejin.im
Created spider 'demo' using template 'basic'
liaochengdeMacBook-Pro:scrapy liaocheng$
```
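For reference, the generated project layout looks roughly like this (genspider puts demo.py under the spiders directory):

```
scrapyDemo/
    scrapy.cfg
    scrapyDemo/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            demo.py
```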
The spider looks like this:

```python
# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'  ## name of the spider
    allowed_domains = ['juejin.im']  ## domain filter: only crawl content under this domain
    start_urls = ['https://juejin.im/post/5c790b4b51882545194f84f0']  ## initial URLs

    def parse(self, response):  ## every newly generated spider must implement this method
        pass
```
Alternatively, you can override start_requests and issue the initial requests yourself:

```python
# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'  ## name of the spider
    allowed_domains = ['juejin.im']  ## domain filter: only crawl content under this domain

    def start_requests(self):
        start_urls = ['http://juejin.im/']  ## initial URLs
        for url in start_urls:
            # schedule a request; parse is invoked as the callback
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):  ## every newly generated spider must implement this method
        pass
```
Create the articleItem.py file (an item file is similar to an entity class in Java):

```python
import scrapy


class ArticleItem(scrapy.Item):  ## must subclass scrapy.Item
    # article id
    id = scrapy.Field()
    # article title
    title = scrapy.Field()
    # article content
    content = scrapy.Field()
    # author
    author = scrapy.Field()
    # publish time
    createTime = scrapy.Field()
    # read count
    readNum = scrapy.Field()
    # like count
    praise = scrapy.Field()
    # avatar
    photo = scrapy.Field()
    # comment count
    commentNum = scrapy.Field()
    # article link
    link = scrapy.Field()
```
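Items behave like dicts with a fixed set of keys, which is what makes them feel like entity classes. A tiny illustration, continuing from the class above:

```python
article = ArticleItem()
article['title'] = 'hello'   # assigning a declared field works
print(article['title'])      # -> hello
# article['foo'] = 1         # would raise KeyError: field is not declared
```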
The code for the parse method:

```python
# needed at the top of the spider file (core code only, so imports aren't shown elsewhere):
#   import tomd
#   from scrapyDemo.articleItem import ArticleItem  # module path depends on where you put the file
# self.headers and self.redisConnection are initialised elsewhere in the spider

def parse(self, response):
    # grab every URL on the page
    nextPage = response.css("a::attr(href)").extract()
    # walk every link on the page, O(n) in the number of links
    for i in nextPage:
        if i is not None:
            # join the link with the base URL
            url = response.urljoin(i)
            # only follow links that belong to juejin
            if "juejin.im" in str(url):
                # store it in redis; if it can be stored, the link has not been crawled yet
                if self.insertRedis(url):
                    # dont_filter controls duplicate-URL filtering: True means don't filter,
                    # False means filter. We only crawl one page here rather than the whole
                    # site; a full-site crawl would not be very friendly to Juejin, and this
                    # is just for testing
                    yield scrapy.Request(url=url, callback=self.parse, headers=self.headers, dont_filter=False)

    # we only analyse articles and ignore everything else
    if "/post/" in response.url and "#comment" not in response.url:
        # create the ArticleItem we defined above
        article = ArticleItem()
        # use the article id as the id
        article['id'] = str(response.url).split("/")[-1]
        # title
        article['title'] = response.css("#juejin > div.view-container > main > div > div.main-area.article-area.shadow > article > h1::text").extract_first()
        # content
        parameter = response.css("#juejin > div.view-container > main > div > div.main-area.article-area.shadow > article > div.article-content").extract_first()
        article['content'] = self.parseToMarkdown(parameter)
        # author
        article['author'] = response.css("#juejin > div.view-container > main > div > div.main-area.article-area.shadow > article > div:nth-child(6) > meta:nth-child(1)::attr(content)").extract_first()
        # creation time
        createTime = response.css("#juejin > div.view-container > main > div > div.main-area.article-area.shadow > article > div.author-info-block > div > div > time::text").extract_first()
        createTime = str(createTime).replace("年", "-").replace("月", "-").replace("日", "")
        article['createTime'] = createTime
        # read count
        article['readNum'] = int(str(response.css("#juejin > div.view-container > main > div > div.main-area.article-area.shadow > article > div.author-info-block > div > div > span::text").extract_first()).split(" ")[1])
        # like count (the pipeline reads item['praise'], so store it under that key)
        article['praise'] = response.css("#juejin > div.view-container > main > div > div.article-suspended-panel.article-suspended-panel > div.like-btn.panel-btn.like-adjust.with-badge::attr(badge)").extract_first()
        # comment count
        article['commentNum'] = response.css("#juejin > div.view-container > main > div > div.article-suspended-panel.article-suspended-panel > div.comment-btn.panel-btn.comment-adjust.with-badge::attr(badge)").extract_first()
        # article link
        article['link'] = response.url
        # this yield is very important (a pitfall): without it the pipeline never receives any data
        yield article

# convert the content to markdown
def parseToMarkdown(self, param):
    return tomd.Tomd(str(param)).markdown

# store the URL in redis: if it can be added the link is new, otherwise it already exists
def insertRedis(self, url):
    if self.redis is not None:
        return self.redis.sadd("articleUrlList", url) == 1
    else:
        self.redis = self.redisConnection.getClient()
        return self.insertRedis(url)
```
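The spider above relies on self.redis and self.redisConnection, which are not part of the core code shown here. A minimal sketch of what such a helper could look like, assuming a local redis on the default port (the class and method names are hypothetical, chosen to match the getClient() call above):

```python
import redis


class RedisConnection(object):
    """Hypothetical helper matching the getClient() call used in the spider."""

    def __init__(self, host="localhost", port=6379, db=0):
        # a connection pool so every getClient() call reuses the same sockets
        self.pool = redis.ConnectionPool(host=host, port=port, db=db)

    def getClient(self):
        return redis.Redis(connection_pool=self.pool)
```

sadd returns the number of elements actually added to the set, so `sadd(...) == 1` is a cheap "have I seen this URL before" test.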
Next is the pipeline that writes items into ElasticSearch:

```python
from elasticsearch import Elasticsearch


class ArticlePipelines(object):
    # initialisation
    def __init__(self):
        # elasticsearch index
        self.index = "article"
        # elasticsearch type
        self.type = "type"
        # elasticsearch host and port
        self.es = Elasticsearch(hosts="localhost:9200")

    # required method; handles the data the spider yields
    def process_item(self, item, spider):
        # only handle data coming from the demo spider
        if spider.name != "demo":
            return item

        if self.checkDocumentExists(item):
            self.updateDocument(item)
        else:
            self.createDocument(item)
        # always hand the item on to the next pipeline
        return item

    # create a document
    def createDocument(self, item):
        body = {
            "title": item['title'],
            "content": item['content'],
            "author": item['author'],
            "createTime": item['createTime'],
            "readNum": item['readNum'],
            "praise": item['praise'],
            "link": item['link'],
            "commentNum": item['commentNum']
        }
        try:
            self.es.create(index=self.index, doc_type=self.type, id=item["id"], body=body)
        except Exception:
            pass

    # update a document
    def updateDocument(self, item):
        parm = {
            "doc": {
                "readNum": item['readNum'],
                "praise": item['praise']
            }
        }
        try:
            self.es.update(index=self.index, doc_type=self.type, id=item["id"], body=parm)
        except Exception:
            pass

    # check whether a document already exists
    def checkDocumentExists(self, item):
        try:
            self.es.get(index=self.index, doc_type=self.type, id=item["id"])
            return True
        except Exception:
            return False
```
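One thing the core code doesn't show: Scrapy only feeds items to a pipeline that is enabled in settings.py. A minimal sketch, assuming the class above lives in scrapyDemo/pipelines.py:

```python
# settings.py
ITEM_PIPELINES = {
    'scrapyDemo.pipelines.ArticlePipelines': 300,  # lower numbers run earlier in the chain
}
```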
Use `scrapy list` to see all the local spiders:

```
liaochengdeMacBook-Pro:scrapyDemo liaocheng$ scrapy list
demo
liaochengdeMacBook-Pro:scrapyDemo liaocheng$
```
Use `scrapy crawl demo` to run the spider:

```
scrapy crawl demo
```
Then query ElasticSearch to check the data (for example in Kibana's Dev Tools console):

```
GET /article/_search
{
  "query": {
    "match_all": {}
  }
}
```
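The same query can also be issued from Python with the elasticsearch client the pipeline already uses; a small sketch:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts="localhost:9200")
result = es.search(index="article", body={"query": {"match_all": {}}})
print(result["hits"]["total"])  # number of matching documents
```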
The result looks like this:

```
{
  "took": 7,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": "article2",
        "_type": "type",
        "_id": "5c790b4b51882545194f84f0",
        "_score": 1,
        "_source": {}
      }
    ]
  }
}
```