Aquaman is out, and the word of mouth has exploded. For us, that means one more movie worth scraping and analyzing. Wonderful~
A couple of comments excerpted from the JSON:

Just got out of the midnight screening. James Wan's films have always been solid; Furious 7, Saw, and The Conjuring are all great. The fights and the sound design are beyond reproach, truly stunning. In short, DC claws a point back ( ̄▽ ̄). It's better than Justice League by more than a little (my personal take). Also, Amber Heard is genuinely beautiful; Wan picks a great cast.

Seriously, the first time I've seen a movie this awesome; the scene transitions and effects are off the charts.
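For reference, each entry in the response's "cmts" array looks roughly like the sketch below. This is a reconstruction based on the fields the spider further down actually reads; the values are illustrative and the real payload carries more keys.

# Reconstructed sketch of one entry in the "cmts" array; the field names
# match what the spider extracts, the values here are made up.
comment = {
    "nickName": "some_user",
    "cityName": "北京",
    "content": "Just got out of the midnight screening...",
    "score": 5,
    "startTime": "2018-12-11 09:58:43",
    "approve": 102,    # upvotes
    "reply": 3,        # number of replies
    "avatarurl": "https://img.meituan.net/avatar/example.jpg",
}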
As before, the data we scrape is Maoyan's comment feed. For this part we'll break out the big guns and use scrapy; normally a plain requests call would be enough.

Target URL:
http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=15&startTime=2018-12-11%2009%3A58%3A43
Key parameters:

url: http://m.maoyan.com/mmdb/comments/movie/249342.json
offset: 15
startTime: the starting timestamp, used for paging
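Before writing the spider, you can sanity-check the endpoint with requests. A quick sketch, using the parameter values from the URL above and assuming the endpoint still responds as it did when this was written:

# Quick probe of the comment endpoint with requests -- not part of the
# scrapy project, just a one-off check that the JSON comes back.
import requests

url = "http://m.maoyan.com/mmdb/comments/movie/249342.json"
params = {"_v_": "yes", "offset": 15, "startTime": "2018-12-11 09:58:43"}
headers = {"User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36"}

resp = requests.get(url, params=params, headers=headers)
for cmt in resp.json()["cmts"]:
    print(cmt["nickName"], cmt["content"][:30])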
The scrapy code for crawling Maoyan is quite simple; I just split it across a few .py files.

Haiwang.py
import scrapy
import json

from haiwang.items import HaiwangItem


class HaiwangSpider(scrapy.Spider):
    name = 'Haiwang'
    allowed_domains = ['m.maoyan.com']
    start_urls = ['http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0']

    def parse(self, response):
        print(response.url)
        body_data = response.body_as_unicode()
        js_data = json.loads(body_data)
        item = HaiwangItem()
        # Each page of the feed carries its comments in the "cmts" array.
        for info in js_data["cmts"]:
            item["nickName"] = info["nickName"]
            # cityName is missing from some comments, so guard the lookup.
            item["cityName"] = info["cityName"] if "cityName" in info else ""
            item["content"] = info["content"]
            item["score"] = info["score"]
            item["startTime"] = info["startTime"]
            item["approve"] = info["approve"]
            item["reply"] = info["reply"]
            item["avatarurl"] = info["avatarurl"]
            yield item

        # Page by feeding the last comment's timestamp back in as startTime.
        yield scrapy.Request(
            "http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}".format(item["startTime"]),
            callback=self.parse)
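One detail worth noting: the spider pages by passing the last comment's startTime into the next request, and that timestamp contains a space and colons. Scrapy normalizes URLs for you, but if the server turns out to be picky, a hedged variant percent-encodes the value first with the standard library:

# Hedged variant of building the follow-up URL: percent-encode the
# timestamp before interpolating it, matching the %20/%3A form seen
# in the target URL above.
from urllib.parse import quote

start_time = "2018-12-11 09:58:43"  # e.g. item["startTime"]
next_url = ("http://m.maoyan.com/mmdb/comments/movie/249342.json"
            "?_v_=yes&offset=0&startTime=" + quote(start_time))
print(next_url)  # ...startTime=2018-12-11%2009%3A58%3A43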
settings.py
The settings need to configure request headers:
DEFAULT_REQUEST_HEADERS = {
    "Referer": "http://m.maoyan.com/movie/249342/comments?_v_=yes",
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
    "X-Requested-With": "superagent",
}
You also need to configure a few crawl options:
# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1

# Disable cookies (enabled by default)
COOKIES_ENABLED = False
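The autothrottle comment above refers to Scrapy's AutoThrottle extension, which adapts the delay to server load instead of using a fixed one. A hedged example of turning it on; the values are illustrative, tune them to taste:

# Optional: AutoThrottle settings (all real Scrapy settings), as an
# alternative to relying only on the fixed DOWNLOAD_DELAY above.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10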
And enable the item pipeline:
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'haiwang.pipelines.HaiwangPipeline': 300,
}
items.py

Define the fields you want to capture:
import scrapy


class HaiwangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    nickName = scrapy.Field()
    cityName = scrapy.Field()
    content = scrapy.Field()
    score = scrapy.Field()
    startTime = scrapy.Field()
    approve = scrapy.Field()
    reply = scrapy.Field()
    avatarurl = scrapy.Field()
pipelines.py

Save the data to a CSV file:
import os
import csv


class HaiwangPipeline(object):
    def __init__(self):
        # Write the CSV next to the spiders package, appending across runs.
        store_file = os.path.dirname(__file__) + '/spiders/haiwang.csv'
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        try:
            self.writer.writerow((
                item["nickName"],
                item["cityName"],
                item["content"],
                item["approve"],
                item["reply"],
                item["startTime"],
                item["avatarurl"],
                item["score"]
            ))
        except Exception as e:
            print(e.args)
        return item  # hand the item back so any later pipelines still see it

    def close_spider(self, spider):
        self.file.close()
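The CSV written above has no header row. If you want one, a hedged tweak is to emit it from the open_spider hook, which is a standard pipeline method Scrapy calls once when the spider starts. Something like the method below, added inside HaiwangPipeline:

# Optional addition to HaiwangPipeline: write a header row once per run;
# the column names mirror the writerow() order above. Note that with
# mode "a+" the header will repeat if you re-run the crawl.
def open_spider(self, spider):
    self.writer.writerow((
        "nickName", "cityName", "content", "approve",
        "reply", "startTime", "avatarurl", "score"
    ))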
begin.py

Write a launcher script:
from scrapy import cmdline

cmdline.execute("scrapy crawl Haiwang".split())
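If you'd rather not shell out to the CLI, Scrapy can also run the spider in-process. A sketch using the documented CrawlerProcess API, assuming it's launched from inside the project so get_project_settings can find settings.py:

# Alternative launcher: run the Haiwang spider in-process.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl("Haiwang")  # look up the spider by name in this project
process.start()           # blocks until the crawl finishes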
Done. Now just sit back and wait for the data to roll in.
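And once the CSV has filled up, a hedged starting point for the analysis side. The column names follow the pipeline's writerow order; pandas is assumed to be installed, and the path should point at wherever the pipeline wrote the file:

# Load the scraped comments for a first look; the CSV has no header row,
# so name the columns in the order the pipeline wrote them.
import pandas as pd

cols = ["nickName", "cityName", "content", "approve",
        "reply", "startTime", "avatarurl", "score"]
df = pd.read_csv("spiders/haiwang.csv", names=cols)

print(df["score"].value_counts())              # rating distribution
print(df["cityName"].value_counts().head(10))  # top commenting cities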