1. Overview
This article walks through scraping Maoyan movie user comments and analyzing them; the scraper can collect comments for any number of movies.
Environment: Win10 / Python 3.5.
Analysis tools: jieba, wordcloud, pyecharts, matplotlib.
Basic workflow: download the content ---> parse out the key data ---> save to a local file ---> analyze the local file and build charts
Note: all text, images, and source code in this article are for learning only; please do not use them for anything else, and credit the source when re-posting!
Main reference: https://mp.weixin.qq.com/s/mTxxkwRZPgBiKC3Sv-jo3g
2. Collecting the Data
2.1 Analyzing the data API:
To get a fuller data sample, the data is collected directly from the mobile-site API. The URL is below; the 1208282 segment is the Maoyan movie ID, so changing it scrapes a different movie. The offset parameter is the page size (15 comments per request), and the spider appends a timestamp to startTime to page backwards through time.
API URL: http://m.maoyan.com/mmdb/comments/movie/1208282.json?v=yes&offset=15&startTime=
The API returns JSON like the following; we keep five fields (nickname, city, comment, score, and timestamp), and the user comments live in json['cmts']:
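For reference, here is a trimmed sketch of the response shape; the values below are made up for illustration, and the real payload carries more fields than the five we keep:

{
    "cmts": [
        {
            "nickName": "某觀衆",
            "cityName": "深圳",
            "content": "小人物的悲喜,笑中帶淚。",
            "score": 4.5,
            "startTime": "2018-11-25 21:45:02"
        }
    ]
}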
2.2 Core parts of the scraper (the full source is at the end):
>The script is started with the following arguments (script name + Maoyan movie ID + release date + output filename): .\myMovieComment.py 1208282 2018-11-16 myCmts2.txt
>Downloading the content: download(self, url) fetches the URL with Python's requests module and converts the response to JSON:
def download(self, url):
    """Download the content at the given URL and return it as JSON."""
    print("Downloading URL: " + url)
    response = requests.get(url, headers=self.headers)

    # Convert the response body to JSON
    if response.status_code == 200:
        return response.json()
    else:
        # print(response.status_code)
        print("Download returned no data!")
        return ""
>Next, parse the downloaded content and pull out the data we need:
def parse(self, content):
    """Extract the fields we need from the parsed JSON."""
    comments = []
    try:
        for item in content['cmts']:
            comment = {
                'nickName': item['nickName'],    # nickname
                'cityName': item['cityName'],    # city
                'content': item['content'],      # comment text
                'score': item['score'],          # rating
                'startTime': item['startTime'],  # timestamp
            }
            comments.append(comment)
    except Exception as e:
        print(e)
    finally:
        return comments
>Save the parsed data to a local file for the later analysis steps:
def save(self, data):
    """Append data to the local output file."""
    print("Saving data, writing to file...")
    self.save_file.write(data)
> The spider's control loop and program entry point, which runs the methods above in order. Note how it pages backwards in time: after each batch, start_time becomes the last comment's timestamp minus one second, so the next request fetches older comments:
def start(self):
    """Entry point: run download, parse and save in order."""
    print("Spider started...\r\n")

    start_time = self.start_time
    end_time = self.end_time

    num = 1
    while start_time > end_time:
        print("Request number:", num)
        # 1. Download the JSON content
        content = self.download(self.target_url + str(start_time))

        # 2. Parse out the key data
        comments = []
        if content != "":
            comments = self.parse(content)

        if len(comments) <= 0:
            print("This batch is empty, stopping!\r\n")
            break

        # 3. Write to file
        res = ''
        for cmt in comments:
            res += "%s###%s###%s###%s###%s\n" % (cmt['nickName'], cmt['cityName'], cmt['content'], cmt['score'], cmt['startTime'])
        self.save(res)

        print("Records in this batch: %s\r\n" % len(comments))

        # Take the timestamp of the last comment, minus one second,
        # as the cursor for the next request (pages backwards in time)
        start_time = datetime.strptime(comments[len(comments) - 1]['startTime'], "%Y-%m-%d %H:%M:%S") + timedelta(seconds=-1)

        # Sleep 3 s between requests to be polite to the server
        num += 1
        time.sleep(3)

    self.save_file.close()
    print("Spider finished...")
2.3 Data sample: close to 20,000 records were scraped in the end, with the fields of each record separated by ###:
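As an illustration (the values here are invented, not actual scraped data), one record in the saved file looks like:

某觀衆###深圳###小人物的悲喜,笑中帶淚。###4.5###2018-11-25 21:45:02

Because a comment can itself contain line breaks, a record may span several physical lines; the reading code in section 5.2 keeps appending lines until a record has all five fields.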
3. Visualizing the Data
3.1 Heatmap of viewer locations (pyecharts Geo):
The chart makes it easy to see where viewers are concentrated: mainly in the developed coastal city clusters. The code below builds all three charts, i.e. the heatmap, the scatter map, and the TOP 20 bar chart of section 3.2:
def createCharts(self):
    """Generate the charts."""
    # Read the data, format: [("北京", 10), ("上海", 10), ...]
    data = self.readCityNum()

    # 1. Heatmap
    geo1 = Geo("《無名之輩》觀衆位置分佈熱點圖", "數據來源:貓眼,Fly採集", title_color="#FFF",
               title_pos="center", width="100%", height=600, background_color="#404A59")
    attr1, value1 = geo1.cast(data)
    geo1.add("", attr1, value1, type="heatmap", visual_range=[0, 1000], visual_text_color="#FFF",
             symbol_size=15, is_visualmap=True, is_piecewise=False, visual_split_number=10)
    geo1.render("files/無名之輩-觀衆位置熱點圖.html")

    # 2. Scatter map
    geo2 = Geo("《無名之輩》觀衆位置分佈", "數據來源:貓眼,Fly採集", title_color="#FFF",
               title_pos="center", width="100%", height=600, background_color="#404A59")
    attr2, value2 = geo2.cast(data)
    geo2.add("", attr2, value2, visual_range=[0, 1000], visual_text_color="#FFF", symbol_size=15,
             is_visualmap=True, is_piecewise=False, visual_split_number=10)
    geo2.render("files/無名之輩-觀衆位置圖.html")

    # 3. TOP 20 bar chart
    data_top20 = data[:20]
    bar = Bar("《無名之輩》觀衆來源排行 TOP20", "數據來源:貓眼,Fly採集", title_pos="center", width="100%", height=600)
    attr, value = bar.cast(data_top20)
    bar.add('', attr, value, is_visualmap=True, visual_range=[0, 3500], visual_text_color="#FFF",
            is_more_utils=True, is_label_show=True)
    bar.render("files/無名之輩-觀衆來源top20.html")

    print("Charts generated")
3.2 Bar chart of the TOP 20 cities by viewer count (pyecharts Bar), produced by the createCharts() method above:
3.3 Comment word cloud (jieba, wordcloud):
Core code for generating the word cloud:
def createWordCloud(self):
    """Generate the comment word cloud."""
    comments = self.readAllComments()  # 19,185 comments

    # Segment the text with jieba; join with spaces so WordCloud can split tokens
    comments_split = jieba.cut(str(comments), cut_all=False)
    words = ' '.join(comments_split)

    # Add stop words on top of the default set
    stopwords = STOPWORDS.copy()
    stopwords.add("電影")
    stopwords.add("一部")
    stopwords.add("無名之輩")
    stopwords.add("一個")
    stopwords.add("有點")
    stopwords.add("以爲")

    # Load the background image used as the mask
    bg_image = plt.imread("files/2048_bg.png")

    # Initialize the WordCloud
    wc = WordCloud(width=1200, height=600, background_color='#FFF', mask=bg_image,
                   font_path='C:/Windows/Fonts/STFANGSO.ttf', stopwords=stopwords,
                   max_font_size=400, random_state=50)

    # Generate and display the image
    wc.generate_from_text(words)
    plt.imshow(wc)
    plt.axis('off')
    plt.show()
4. Patching the pyecharts Source
4.1 The abbreviated city names in the sample data don't match the full city names in the dataset:
When building the location heatmap, the city names collected from Maoyan are abbreviations that don't match the full names in pyecharts' built-in dataset, so the source was patched to also match the abbreviations. For example:
黔南 => 黔南布依族苗族自治州
The module's built-in coordinates for major cities and counties live in [Python install path]\Lib\site-packages\pyecharts\datasets\city_coordinates.json.
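You can verify the mismatch by loading that file and checking a short name against its keys. A minimal sketch, assuming the pyecharts 0.5.x package layout:

import json
import os
import pyecharts

# Locate city_coordinates.json inside the installed pyecharts package
path = os.path.join(os.path.dirname(pyecharts.__file__), "datasets", "city_coordinates.json")
with open(path, encoding="utf-8") as f:
    coords = json.load(f)

print("黔南" in coords)                 # False: the short name is not a key
print("黔南布依族苗族自治州" in coords)   # True: only the full name is present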
By default, any city name that doesn't match exactly raises an exception and stops the program, so the source was modified as follows (the method that raises is Geo.add()); the dated comment markers show my changes:
def get_coordinate(self, name, region="中國", raise_exception=False):
    """
    Return the coordinate for a city name.

    :param name: City name or any custom name string.
    :param region: Region used to narrow the fuzzy search.
    :param raise_exception: Whether to raise an exception if the name is not found.
    :return: A list like [longitude, latitude], or None.
    """
    if name in self._coordinates:
        return self._coordinates[name]

    # Module-level lookup (imported from pyecharts.datasets), not this method
    coordinate = get_coordinate(name, region=region)

    # [ added 2018-12-04
    if coordinate is None:
        # If no dictionary key matches exactly, fall back to a fuzzy search
        search_res = search_coordinates_by_region_and_keyword(region, name)
        if search_res:
            coordinate = sorted(search_res.values())[0]
    # added 2018-12-04 ]

    if coordinate is None and raise_exception:
        raise ValueError("No coordinate is specified for {}".format(name))

    return coordinate
Correspondingly, the __add() method needs a matching change as well, sketched below.
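The intent is the same as in get_coordinate(): skip any point whose coordinate still cannot be resolved instead of letting the render abort. The snippet below is only a sketch of that idea; the actual local variable names inside pyecharts' Geo.__add() may differ:

# Sketch only -- inside Geo.__add(), where each (name, value) pair becomes a map point
for name, value in zip(attr, value_lst):
    coordinate = self.get_coordinate(name, raise_exception=False)
    if coordinate is None:
        # 20181204: skip unmatched cities instead of raising ValueError
        print("No coordinate found, skipping: {}".format(name))
        continue
    point_data.append([coordinate[0], coordinate[1], value])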
5. Appendix: Source Code
*Note: the source code is my own work and the data comes from Maoyan. Everything here is for learning only; any other use is prohibited. Please credit the source when re-posting!
5.1 Scraper source
# -*- coding:utf-8 -*-

import os
import sys
import time
from datetime import datetime, timedelta

import requests


class MaoyanFilmReviewSpider:
    """Maoyan film-review spider."""

    def __init__(self, url, end_time, filename):
        # Request headers (mobile User-Agent)
        self.headers = {
            'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1'
        }

        # Target URL
        self.target_url = url

        # Collection window -- start_time: cutoff (today), end_time: release date
        now = datetime.now()

        # Midnight of the current day
        self.start_time = now + timedelta(hours=-now.hour, minutes=-now.minute, seconds=-now.second)
        self.start_time = self.start_time.replace(microsecond=0)
        self.end_time = datetime.strptime(end_time, "%Y-%m-%d %H:%M:%S")

        # Create the output directory and open the output file
        self.save_path = "files/"
        if not os.path.exists(self.save_path):
            os.makedirs(self.save_path)
        self.save_file = open(self.save_path + filename, "a", encoding="utf-8")

    def download(self, url):
        """Download the content at the given URL and return it as JSON."""
        print("Downloading URL: " + url)
        response = requests.get(url, headers=self.headers)

        # Convert the response body to JSON
        if response.status_code == 200:
            return response.json()
        else:
            # print(response.status_code)
            print("Download returned no data!")
            return ""

    def parse(self, content):
        """Extract the fields we need from the parsed JSON."""
        comments = []
        try:
            for item in content['cmts']:
                comment = {
                    'nickName': item['nickName'],    # nickname
                    'cityName': item['cityName'],    # city
                    'content': item['content'],      # comment text
                    'score': item['score'],          # rating
                    'startTime': item['startTime'],  # timestamp
                }
                comments.append(comment)
        except Exception as e:
            print(e)
        finally:
            return comments

    def save(self, data):
        """Append data to the local output file."""
        print("Saving data, writing to file...")
        self.save_file.write(data)

    def start(self):
        """Entry point: run download, parse and save in order."""
        print("Spider started...\r\n")

        start_time = self.start_time
        end_time = self.end_time

        num = 1
        while start_time > end_time:
            print("Request number:", num)
            # 1. Download the JSON content
            content = self.download(self.target_url + str(start_time))

            # 2. Parse out the key data
            comments = []
            if content != "":
                comments = self.parse(content)

            if len(comments) <= 0:
                print("This batch is empty, stopping!\r\n")
                break

            # 3. Write to file
            res = ''
            for cmt in comments:
                res += "%s###%s###%s###%s###%s\n" % (cmt['nickName'], cmt['cityName'], cmt['content'], cmt['score'], cmt['startTime'])
            self.save(res)

            print("Records in this batch: %s\r\n" % len(comments))

            # Take the timestamp of the last comment, minus one second,
            # as the cursor for the next request (pages backwards in time)
            start_time = datetime.strptime(comments[len(comments) - 1]['startTime'], "%Y-%m-%d %H:%M:%S") + timedelta(seconds=-1)

            # Sleep 3 s between requests to be polite to the server
            num += 1
            time.sleep(3)

        self.save_file.close()
        print("Spider finished...")


if __name__ == "__main__":
    # Check the command-line arguments
    if len(sys.argv) != 4:
        print("Usage: xxx.py [movie id] [release date] [output filename], e.g.: xxx.py 42962 2018-11-09 text.txt")
        sys.exit()

    # Maoyan movie ID
    mid = sys.argv[1]  # e.g. "1208282" or "42964"
    # Release date of the movie
    end_time = sys.argv[2]  # e.g. "2018-11-16" or "2018-11-09"
    # Comments fetched per request
    offset = 15
    # Output filename
    filename = sys.argv[3]

    spider = MaoyanFilmReviewSpider(
        url="http://m.maoyan.com/mmdb/comments/movie/%s.json?v=yes&offset=%d&startTime=" % (mid, offset),
        end_time="%s 00:00:00" % end_time,
        filename=filename)
    spider.start()
5.2 Analysis and charting source
# -*- coding:utf-8 -*-
from pyecharts import Geo, Bar
import jieba
from wordcloud import STOPWORDS, WordCloud
import matplotlib.pyplot as plt


class ACoolFishAnalysis:
    """《無名之輩》 --- data analysis."""

    def __init__(self):
        pass

    def readCityNum(self):
        """Read the number of viewers per city."""
        d = {}

        with open("files/myCmts2.txt", "r", encoding="utf-8") as f:
            row = f.readline()

            while row != "":
                arr = row.split('###')

                # A comment may contain line breaks, so keep reading
                # until the record has all 5 fields
                while len(arr) < 5:
                    next_row = f.readline()
                    if next_row == "":  # guard against a truncated record at EOF
                        break
                    row += next_row
                    arr = row.split('###')

                # Count viewers per city
                if len(arr) == 5:
                    if arr[1] in d:
                        d[arr[1]] += 1
                    else:
                        d[arr[1]] = 1  # first time this city appears

                row = f.readline()

        # Convert the dict to a list of (city, count) tuples
        res = []
        for ks in d.keys():
            if ks == "":
                continue
            res.append((ks, d[ks]))

        # Sort by viewer count, descending
        res = sorted(res, key=lambda x: x[1], reverse=True)
        return res

    def readAllComments(self):
        """Read all comment texts."""
        comments = []

        with open("files/myCmts2.txt", "r", encoding="utf-8") as f:
            row = f.readline()

            while row != "":
                arr = row.split('###')

                # Make sure the record has all 5 fields
                while len(arr) < 5:
                    next_row = f.readline()
                    if next_row == "":
                        break
                    row += next_row
                    arr = row.split('###')

                if len(arr) == 5:
                    comments.append(arr[2])

                row = f.readline()

        return comments

    def createCharts(self):
        """Generate the charts."""
        # Read the data, format: [("北京", 10), ("上海", 10), ...]
        data = self.readCityNum()

        # 1. Heatmap
        geo1 = Geo("《無名之輩》觀衆位置分佈熱點圖", "數據來源:貓眼,Fly採集", title_color="#FFF",
                   title_pos="center", width="100%", height=600, background_color="#404A59")
        attr1, value1 = geo1.cast(data)
        geo1.add("", attr1, value1, type="heatmap", visual_range=[0, 1000], visual_text_color="#FFF",
                 symbol_size=15, is_visualmap=True, is_piecewise=False, visual_split_number=10)
        geo1.render("files/無名之輩-觀衆位置熱點圖.html")

        # 2. Scatter map
        geo2 = Geo("《無名之輩》觀衆位置分佈", "數據來源:貓眼,Fly採集", title_color="#FFF",
                   title_pos="center", width="100%", height=600, background_color="#404A59")
        attr2, value2 = geo2.cast(data)
        geo2.add("", attr2, value2, visual_range=[0, 1000], visual_text_color="#FFF", symbol_size=15,
                 is_visualmap=True, is_piecewise=False, visual_split_number=10)
        geo2.render("files/無名之輩-觀衆位置圖.html")

        # 3. TOP 20 bar chart
        data_top20 = data[:20]
        bar = Bar("《無名之輩》觀衆來源排行 TOP20", "數據來源:貓眼,Fly採集", title_pos="center", width="100%", height=600)
        attr, value = bar.cast(data_top20)
        bar.add('', attr, value, is_visualmap=True, visual_range=[0, 3500], visual_text_color="#FFF",
                is_more_utils=True, is_label_show=True)
        bar.render("files/無名之輩-觀衆來源top20.html")

        print("Charts generated")

    def createWordCloud(self):
        """Generate the comment word cloud."""
        comments = self.readAllComments()  # 19,185 comments

        # Segment the text with jieba; join with spaces so WordCloud can split tokens
        comments_split = jieba.cut(str(comments), cut_all=False)
        words = ' '.join(comments_split)

        # Add stop words on top of the default set
        stopwords = STOPWORDS.copy()
        stopwords.add("電影")
        stopwords.add("一部")
        stopwords.add("無名之輩")
        stopwords.add("一個")
        stopwords.add("有點")
        stopwords.add("以爲")

        # Load the background image used as the mask
        bg_image = plt.imread("files/2048_bg.png")

        # Initialize the WordCloud
        wc = WordCloud(width=1200, height=600, background_color='#FFF', mask=bg_image,
                       font_path='C:/Windows/Fonts/STFANGSO.ttf', stopwords=stopwords,
                       max_font_size=400, random_state=50)

        # Generate and display the image
        wc.generate_from_text(words)
        plt.imshow(wc)
        plt.axis('off')
        plt.show()


if __name__ == "__main__":
    demo = ACoolFishAnalysis()
    demo.createCharts()
    demo.createWordCloud()