My analysis breaks down into three core steps:
Step 1: scrape the product rankings and detail-page links. Required fields: rank, product name, detail-page link.
Step 2: scrape the product details. Required information:
Store: these are the competitors. Analyze their hit products and keep the store links for targeted follow-up analysis.
Price: analyze the price range of hit products, which helps with pricing and market segmentation.
Listing date: is it a new item? How long has it been a hit?
Star rating, review count, review tags, link to all reviews: scrape the review text later to analyze the strengths and weaknesses of hit products.
Size, color: also very valuable reference data, though I ran into problems scraping them in practice (more on that below).
Image link: don't you want to see what the product looks like?
Step 3: turn the data into visual charts and analyze them.
Can't wait to see the process? Let's go~
The scraping process breaks down into the following steps:
1. Scrape the product rankings and detail-page links. Fields to scrape: rank (Rank), product name (item_name), detail-page link (item_link), and product image link (img_src).
2. Scrape more product information from each detail page. Key points:
1) Build a function that fetches the details of a single product; 2) loop over the list of detail-page links to fetch every product's details.
3. Scrape the reviews. Key points:
1) Read the Rank, item_name, reviews, and reviews_link fields from the CSV produced in the previous step; 2) build a function that reads all reviews of one product; 3) loop to fetch all reviews of all products; 4) store them in the database and a CSV file.
4. Scrape the size and color data
This is essentially the same as step 3 with nearly identical code; the main difference is determining the number of size & color entries on each review page.
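The pagination logic behind steps 3 and 4 can be sketched like this. The review link and count here are hypothetical, and the helper name `review_page_urls` is mine; the 8-reviews-per-page figure matches what the scraping code assumes:

```python
def review_page_urls(reviews_link, reviews, per_page=8):
    """Build one URL per full review page from a product's review link."""
    base = reviews_link.split('ref=')[0]          # drop the tracking suffix
    pages = reviews // per_page                   # full pages only
    return [base + '?pageNumber={}'.format(i) for i in range(1, pages + 1)]

# Hypothetical link and review count: 20 reviews -> 2 full pages
urls = review_page_urls('https://www.amazon.com/X/product-reviews/B000/ref=cm_cr', 20)
print(urls)
```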
1. Read and clean the data
Read the 100 products from the CSV file, select the required fields, and clean the data:
Some fields are read as strings even though they look numeric, so they need type conversion (e.g. after splitting price, the parts still have to be cast to float).
NaN values that take part in numeric calculations are replaced with the column mean.
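The mean-replacement step is a one-liner with pandas `fillna`; the `price` column and its values here are hypothetical:

```python
import pandas as pd
import numpy as np

# Hypothetical sample: two prices missing
df = pd.DataFrame({'price': [10.0, 20.0, np.nan, 30.0, np.nan]})
# Replace NaN with the column mean (mean of 10, 20, 30 is 20)
df['price'] = df['price'].fillna(df['price'].mean())
print(df['price'].tolist())  # → [10.0, 20.0, 20.0, 30.0, 20.0]
```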
2. Aggregate the data by store
Compute the required per-store metrics: star rating, total review count, mean review count, mean minimum price, mean maximum price, mean price, product count, and share of the Top 100. Normalize star rating, mean review count, mean price, and product count, then compute a weighted score.
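A minimal sketch of the normalization and scoring: min-max normalize each metric to 0-10, then sum the four. The three stores' figures below are made up for illustration; the real values come from the scraped data:

```python
import pandas as pd

# Hypothetical per-store metrics (not the real scraped figures)
df = pd.DataFrame({'star': [4.9, 3.2, 4.4],
                   'reviews_mean': [50, 10, 900],
                   'price_mean': [49.0, 5.0, 15.0],
                   'item_num': [1, 2, 28]},
                  index=['LALAVAVA', 'N-pearI', 'Avidlove'])

# Min-max normalize each metric to 0-10, then sum into a total score
for col in ['star', 'reviews_mean', 'price_mean', 'item_num']:
    df[col + '_nor'] = (df[col] - df[col].min()) / (df[col].max() - df[col].min()) * 10
df['score'] = df[[c for c in df.columns if c.endswith('_nor')]].sum(axis=1)
print(df['score'].round(2))
```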
① Star-rating ranking by store
The average star rating is 4.15, and more than half of the stores (17/32) score above average.
The top store, LALAVAVA, scores as high as 4.9, with 5 more stores at 4.5 right behind it.
The last-place store, N-pearI, scores only 3.2.
Let's see what LALAVAVA looks like. On Amazon its products look like ordinary swimwear; Americans are pretty conservative after all~ But does a high rating really mean a good product? Let's look at the review counts instead.
② Average review count ranking by store
First, the average review count is only 193, and only 12 of 32 stores sit above that line. Compared with Taobao, where tens of thousands of reviews are routine, our population advantage must make Americans jealous;
As for LALAVAVA, the star-rating champion, its review count is pitifully small, which casts doubt on the real quality of its products;
N-pearI, at the bottom of the star ranking, likewise has very few reviews, so its products are most likely nothing special;
By contrast Garmol, the review-count leader, carries a 4.4 star rating: good word of mouth plus plenty of reviews suggests a genuinely solid product;
The stores right behind it, however, all score below the average star rating.
So is Amazon's star rating really driven only by the proportion of each star level among the reviews? Looking through some material online, I found three important factors in Amazon's star calculation: how recently a review was posted, how many buyers voted it helpful, and whether it carries the Verified Purchase badge (i.e. a real buyer). Review length, click counts, and similar factors may also influence a review's weight.
Clearly Amazon's monitoring and management of reviews is strict and complex! Of course, the most important thing is to see what review champion Garmol looks like: even more on-theme than the swimwear above. What everyone praises must really be good, very sexy!
③ Price-range ranking by store (by mean price)
The chart shows clearly that ELOVER targets the high-end market, priced around $49; at the other extreme, Goddessvan prices at just $0.39 with only one product, presumably selling at a loss to drive volume, boost exposure, and grab the low end.
Judging by mean prices, most stores sit in the $10-20 band, so that is the core price range of this lingerie market; yet no store occupies the $20-40 band at all. That gap is worth digging into, to see whether there is evidence it is a blue ocean with untapped market potential.
Looking at each store's price range, most adopt a multi-color or multi-style strategy, giving users more choice while also showing off their ability to launch new items; only a few bet on a single hit product.
The most luxurious, ELOVER, does look rather goddess-like; even its thumbnails are more polished than the competition's. So which store's strategy is sounder and wins more market share?
④ Pie chart of product counts by store
Within the Top 100, Avidlove dominates with a commanding 28% share,
while the other stores mostly hold single-digit shares, with no clear winners or losers among them.
Avidlove's lingerie has a cool vibe; I like it. But a single metric can hardly decide which store is best, so let's combine several indicators.
⑤ Weighted-score ranking by store
After normalizing star rating, mean review count, mean price, and product count, and lacking an obvious way to set the weights, I simply multiplied each normalized value by 10 and summed the four into a total score, plotted as a stacked bar chart.
The share each of the four indicators contributes to a store's total also reflects that store's strengths and weaknesses.
Avidlove, the cool-style store from before, is middling on the other three indicators but takes first place overall on the strength of its product count: a "surround the cities from the countryside" kind of win.
Garmol takes second place mainly on word of mouth (star rating and mean review count).
ELOVER takes third mainly by carving out the high-end market with precision.
N-pearI, with no advantage anywhere, unsurprisingly brings up the rear.
N-pearI, the worst-reviewed store, also has the fewest findable products, though its images are pretty explicit... too explicit to show here~
Roughly speaking, to rank high your word of mouth cannot be poor; it has to stay at least at or above average!
⑥ Star/price scatter plot by store
The x axis is a store's mean price and the y axis its star rating; point size encodes product count (more products, larger point) and point color encodes mean review count (more reviews, deeper red).
Using the mean price and the mean star rating, the chart splits into four quadrants:
① upper left: affordable, well-reviewed stores; ② upper right: a bit pricey, but you get what you pay for; ③ lower right: expensive, yet mediocre quality; ④ lower left: cheap, and it shows.
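The quadrant assignment amounts to two threshold comparisons; a tiny sketch, where the helper name, labels, and all numbers are hypothetical (the real thresholds are the means computed from the data):

```python
def quadrant(price, star, price_mean, star_mean):
    """Label a store's quadrant relative to the mean price and mean star rating."""
    if star >= star_mean:
        # upper half: good reviews
        return 'value pick' if price < price_mean else 'premium'
    else:
        # lower half: poor reviews
        return 'cheap, low quality' if price < price_mean else 'overpriced'

print(quadrant(8.0, 4.6, 17.0, 4.15))   # → value pick
print(quadrant(49.0, 4.5, 17.0, 4.15))  # → premium
```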
With this scatter plot, picking a store becomes much easier:
For value for money, choose Avidlove, which also has plenty of products to pick from (the largest light-red circle on the chart);
For the high end, choose ELOVER, which is expensive for a reason (the rightmost store, falling in the upper-right quadrant);
For the mainstream, choose Garmol, with the most reviews and mostly good ones (the reddest store on the chart).
Shoppers can pick whichever store suits their taste; but as a store owner, how do you improve?
⑦ Word-frequency analysis
The review tags scraped earlier can be run through word-frequency analysis, which shows that customers care most about, in order:
1. Fit: words like size and fit appear repeatedly and rank near the top. 2. Quality: good quality and well made; soft and comfortable and fabric praise the material. 3. Style: cute, sexy, like the picture (you know). 4. Price: cheaply made barely counts as a price comment; it reads more like doubt about quality. 5. Word of mouth: highly recommend shows the reviews really are worth consulting.
Since review tags are scarce, I also ran a word-frequency analysis over the 24k full reviews and built a word cloud: the most prominent themes are still fit, along with quality and style. So let's keep digging via the Size & Color customers actually bought.
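At its core, the tag and review analysis is just word counting; a minimal sketch with `collections.Counter` on a toy string (the real pipeline lowercases, strips punctuation, and drops stop words first):

```python
from collections import Counter

# Toy review text standing in for the cleaned, tokenized reviews
reviews = "good quality very sexy fits well good quality highly recommend"
freq = Counter(reviews.split())
print(freq.most_common(2))  # → [('good', 2), ('quality', 2)]
```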
The Size & Color frequency data has a few problems:
1. The volume is small, only about 6,000 entries. 2. Size and Color cannot be cleanly separated, so they are analyzed together. 3. Stores follow different naming conventions: for the same black style one store may write black while another writes style1 (so the odd numeric codes are actually the stores' style codes). 4. Odd tokens such as trim may come from scraping errors or CSV-export formatting glitches.
A few things stand out clearly:
Size: large, medium, and small are of course all covered, but xlarge, xxlarge, and xxxlarge appear as well. Amazon's customers are mostly in the US and Europe and may run larger, so stores should develop and stock more products for larger body types.
Color: very clear-cut: black > red > blue > green > white > purple... so black and red are never wrong; green surprised me, and stores could experiment with it boldly.
Style: trim and lace appear in the frequencies, with lace on top!!!
Product reviews
# 0. Import modules
from bs4 import BeautifulSoup
import requests
import random
import time
from multiprocessing import Pool
import csv
import pymongo
# 0. Create the database
client = pymongo.MongoClient('localhost', 27017)
Amazon = client['Amazon']
reviews_info_M = Amazon['reviews_info_M']
# 0. Anti-scraping measures
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'
}
# http://cn-proxy.com/
proxy_list = [
    'http://117.177.250.151:8081',
    'http://111.85.219.250:3129',
    'http://122.70.183.138:8118',
]
proxy_ip = random.choice(proxy_list)    # pick a proxy IP at random
proxies = {'http': proxy_ip}
# 1. Read 'Rank','item_name','reviews','reviews_link' from the CSV
csv_file = csv.reader(open('C:/Users/zbd/Desktop/3.csv','r'))
reviews_datalst = []
for i in csv_file:
    reviews_data = {
        'Rank': i[10],
        'item_name': i[8],
        'reviews': i[6],
        'reviews_link': i[5]
    }
    reviews_datalst.append(reviews_data)
del reviews_datalst[0]    # drop the header row
#print(reviews_datalst)
reviews_links = list(i['reviews_link'] for i in reviews_datalst)    # collect the review-page links into reviews_links
# Clean reviews: entries may be empty or formatted like '1,234'
reviews = []
for i in reviews_datalst:
    if i['reviews']:
        reviews.append(int(i['reviews'].replace(',','')))
    else:
        reviews.append(0)
print(reviews_links)
print(reviews)
# 2. Build each product's review-page URLs
# product 1
# page 1: https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews
# page 2: https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber=2
# page 3: https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/ref=cm_cr_getr_d_paging_btm_next_3?ie=UTF8&reviewerType=all_reviews&pageNumber=3
# product 2
# page 1: https://www.amazon.com/Avidlove-Women-Lingerie-Babydoll-Bodysuit/product-reviews/B077CLFWVN/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews
# page 2: https://www.amazon.com/Avidlove-Women-Lingerie-Babydoll-Bodysuit/product-reviews/B077CLFWVN/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber=2
# 8 reviews per page, so pages = reviews // 8 + 1
# target format: https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/pageNumber=1
url = 'https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews'
counts = 0
def get_item_reviews(reviews_link, reviews):
    if reviews_link:
        pages = reviews // 8    # 8 reviews per page; pages = reviews // 8, the last (partial) page is not scraped
        for i in range(1, pages+1):
            full_url = reviews_link.split('ref=')[0] + '?pageNumber={}'.format(i)
            #full_url = 'https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/?pageNumber=10'
            wb_data = requests.get(full_url, headers=headers, proxies=proxies)
            soup = BeautifulSoup(wb_data.text, 'lxml')
            every_page_reviews_num = len(soup.select('div.a-row.a-spacing-small.review-data > span'))
            for j in range(every_page_reviews_num):
                reviews_info = {
                    'customer_name': soup.select('div:nth-child(1) > a > div.a-profile-content > span')[j].text,
                    'star': soup.select('div.a-row>a.a-link-normal > i > span')[j].text.split('out')[0],
                    'review_date': soup.select('div.a-section.review >div>div> span.a-size-base.a-color-secondary.review-date')[j].text,
                    'review_title': soup.select('a.a-size-base.a-link-normal.review-title.a-color-base.a-text-bold')[j].text,
                    'review_text': soup.select('div.a-row.a-spacing-small.review-data > span')[j].text,
                    'item_name': soup.title.text.split(':')[-1]
                }
                yield reviews_info
                reviews_info_M.insert_one(reviews_info)
                global counts
                counts += 1
                print('Review #{}'.format(counts), reviews_info)
    else:
        pass
'''
# This variant scrapes size and color separately, because much of that data was missing
# The code is almost identical to the previous step; the key is counting the size & color entries per page
# Writing to the database and CSV needs matching tweaks, but the method is the same
def get_item_reviews(reviews_link, reviews):
    if reviews_link:
        pages = reviews // 8    # 8 reviews per page; the last page is skipped, so also check for <8 reviews
        for i in range(1, pages+1):
            full_url = reviews_link.split('ref=')[0] + '?pageNumber={}'.format(i)
            #full_url = 'https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/product-reviews/B0712188H2/?pageNumber=10'
            wb_data = requests.get(full_url, headers=headers, proxies=proxies)
            soup = BeautifulSoup(wb_data.text, 'lxml')
            every_page_reviews_num = len(soup.select('div.a-row.a-spacing-mini.review-data.review-format-strip > a'))    # size & color entries on this page
            for j in range(every_page_reviews_num):
                reviews_info = {
                    'item_name': soup.title.text.split(':')[-1],
                    'size_color': soup.select('div.a-row.a-spacing-mini.review-data.review-format-strip > a')[j].text,
                }
                yield reviews_info
                print(reviews_info)
                reviews_size_color.insert_one(reviews_info)
    else:
        pass
'''
# 3. Scrape and store the data
all_reviews = []
def get_all_reviews(reviews_links, reviews):
    for i in range(100):
        for n in get_item_reviews(reviews_links[i], reviews[i]):
            all_reviews.append(n)
get_all_reviews(reviews_links, reviews)
#print(all_reviews)
# 4. Write to CSV (note: this reuses the name `headers`; the scraping is finished by now)
headers = ['_id','item_name', 'customer_name', 'star', 'review_date', 'review_title', 'review_text']
with open('C:/Users/zbd/Desktop/4.csv','w',newline='',encoding='utf-8') as f:
    f_csv = csv.DictWriter(f, headers)
    f_csv.writeheader()
    f_csv.writerows(all_reviews)
print('Done writing!')
Product information
# 0. Import modules
from bs4 import BeautifulSoup
import requests
import random
import time
from multiprocessing import Pool
import pymongo
# 0. Create the database
client = pymongo.MongoClient('localhost', 27017)
Amazon = client['Amazon']
item_info_M = Amazon['item_info_M']
# 0. Anti-scraping measures
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'
}
# http://cn-proxy.com/
proxy_list = [
    'http://117.177.250.151:8081',
    'http://111.85.219.250:3129',
    'http://122.70.183.138:8118',
]
proxy_ip = random.choice(proxy_list)    # pick a proxy IP at random
proxies = {'http': proxy_ip}
# 1. Scrape product rankings and detail-page links
url_page1 = 'https://www.amazon.com/Best-Sellers-Womens-Chemises-Negligees/zgbs/fashion/1044968/ref=zg_bs_pg_1?_encoding=UTF8&pg=1'  # products 1-50
url_page2 = 'https://www.amazon.com/Best-Sellers-Womens-Chemises-Negligees/zgbs/fashion/1044968/ref=zg_bs_pg_2?_encoding=UTF8&pg=2'  # products 51-100
item_info = []     # list holding the product details
item_links = []    # list holding the detail-page links
def get_item_info(url):
    wb_data = requests.get(url, headers=headers, proxies=proxies)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    for i in range(50):
        data = {
            'Rank': soup.select('span.zg-badge-text')[i].text.strip('#'),
            'item_name': soup.select('#zg-ordered-list > li > span > div > span > a > div')[i].text.strip(),
            'item_link': 'https://www.amazon.com' + soup.select('#zg-ordered-list > li > span > div > span > a')[i].get('href'),
            'img_src': soup.select('#zg-ordered-list > li> span > div > span > a > span > div > img')[i].get('src')
        }
        item_info.append(data)
        item_links.append(data['item_link'])
    print('finish!')
get_item_info(url_page1)
get_item_info(url_page2)
# 2. Scrape more product information on each detail page
#item_url = 'https://www.amazon.com/Avidlove-Lingerie-Babydoll-Sleepwear-Chemise/dp/B0712188H2/ref=zg_bs_1044968_1?_encoding=UTF8&refRID=MYWGH1W2P3HNS58R4WES'
def get_item_info_2(item_url, data):
    wb_data = requests.get(item_url, headers=headers, proxies=proxies)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # get price (may be absent, so check)
    price = soup.select('#priceblock_ourprice')
    data['price'] = price[0].text if price else None
    # get star and reviews (may be absent, so check)
    star = soup.select('div>div>span>span>span>a>i>span.a-icon-alt')
    if star:
        data['star'] = star[0].text.split(' ')[0]
        data['reviews'] = soup.select('#reviews-medley-footer > div.a-row.a-spacing-large > a')[0].text.split(' ')[2]
        data['Read reviews that mention'] = list(i.text.strip('\n').strip() for i in soup.select('span.cr-lighthouse-term'))
    else:
        data['star'] = None
        data['reviews'] = None
        data['Read reviews that mention'] = None
    data['Date_first_listed_on_Amazon'] = soup.select('#detailBullets_feature_div > ul > li> span > span:nth-child(2)')[-1].text
    # get reviews_link (may be absent, so check)
    reviews_link = soup.select('#reviews-medley-footer > div.a-row.a-spacing-large > a')
    if reviews_link:
        data['reviews_link'] = 'https://www.amazon.com' + reviews_link[0].get('href')
    else:
        data['reviews_link'] = None
    # get store and store_link (may be absent, so check)
    store = soup.select('#bylineInfo')
    if store:
        data['store'] = store[0].text
        data['store_link'] = 'https://www.amazon.com' + soup.select('#bylineInfo')[0].get('href')
    else:
        data['store'] = None
        data['store_link'] = None
    item_info_M.insert_one(data)    # insert into MongoDB
    print(data)
# 3. Write the product details to a CSV file
for i in range(100):
    get_item_info_2(item_links[i], item_info[i])
    print('Wrote product #{}'.format(i+1))
import csv
headers = ['_id','store', 'price', 'Date_first_listed_on_Amazon', 'item_link', 'reviews_link', 'reviews', 'store_link', 'item_name', 'img_src', 'Rank', 'Read reviews that mention', 'star']
with open('C:/Users/zbd/Desktop/3.csv','w',newline='',encoding='utf-8') as f:
    f_csv = csv.DictWriter(f, headers)
    f_csv.writeheader()
    f_csv.writerows(item_info)
print('Done writing!')
Word cloud
path = 'C:/Users/zbd/Desktop/Amazon/fenci/'
# read the file and tokenize
def get_text():
    f = open(path + 'reviews.txt', 'r', encoding='utf-8')
    text = f.read().lower()                      # lowercase everything
    for i in '!@#$%^&*()_¯+-;:`~\'"<>=./?,':     # replace punctuation with spaces
        text = text.replace(i, ' ')
    return text.split()                          # return the token list
lst_1 = get_text()                               # tokenize
print('{} words in total'.format(len(lst_1)))    # total word count
# remove stop words (common words)
stop_word_text = open(path + 'stop_word.txt', 'r', encoding='utf-8')    # the downloaded stop-word list
stop_word = stop_word_text.read().split()
stop_word_add = ['a','i','im','it鈥檚','i鈥檓','\\u0026','5鈥','reviewdate']    # extend this list as needed (some entries are mojibake tokens found in the reviews)
stop_word_new = stop_word + stop_word_add
#print(stop_word_new)
lst_2 = list(word for word in lst_1 if word not in stop_word_new)
print('{} words left after stop-word removal'.format(len(lst_2)))
# count word frequencies
counts = {}
for i in lst_2:
    counts[i] = counts.get(i, 0) + 1
#print(counts)
word_counts = list(counts.items())
#print(word_counts)
word_counts.sort(key=lambda x: x[1], reverse=True)    # sort by frequency, descending
# print the Top 50
for i in word_counts[0:50]:
    print(i)
# build the word cloud
from scipy.misc import imread    # on newer SciPy this was removed; imageio.imread is a drop-in substitute
import matplotlib.pyplot as plt
import jieba
from wordcloud import WordCloud, ImageColorGenerator
stopwords = {}
# isCN = 0    # 0: English tokenization, 1: Chinese tokenization
path = 'C:/Users/zbd/Desktop/Amazon/fenci/'
back_coloring_path = path + 'img.jpg'            # background image path
text_path = path + 'reviews.txt'                 # text to analyze
stopwords_path = path + 'stop_word.txt'          # stop-word list
imgname1 = path + 'WordCloudDefautColors.png'    # saved image 1 (mask shape only, default colors)
imgname2 = path + 'WordCloudColorsByImg.png'     # saved image 2 (colors laid out per the background image)
#font_path = r'./fonts\simkai.ttf'               # Chinese font path for matplotlib; only needed for Chinese text
back_coloring = imread(back_coloring_path)       # the background image, as a 3-D array
wc = WordCloud(#font_path = font_path            # set the font
               background_color='white',         # background color
               max_words=3000,                   # maximum number of words shown
               mask=back_coloring,               # shape mask
               max_font_size=200,                # largest font size
               min_font_size=5,                  # smallest font size
               random_state=42,                  # one of N color schemes
               width=1000, height=860, margin=2  # default image size; with a mask, the saved image
                                                 # follows the mask size; margin is the word spacing
               )
words = {}
for i in word_counts:
    words['{}'.format(i[0])] = i[1]
wc.generate_from_frequencies(words)    # expects e.g. {word1: freq1, word2: freq2, ...}
plt.figure()
# word cloud in the mask's shape, with default colors
plt.imshow(wc)
plt.axis("off")
plt.show()                             # draw the word cloud
wc.to_file(imgname1)                   # save image 1
# word cloud in the mask's shape AND the mask's colors
image_colors = ImageColorGenerator(back_coloring)    # sample colors from the background image
plt.imshow(wc.recolor(color_func=image_colors))
plt.axis("off")
plt.show()
wc.to_file(imgname2)                   # save image 2
# show the original background image
plt.figure()
plt.imshow(back_coloring, cmap=plt.cm.gray)
plt.axis("off")
plt.show()
Data analysis
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
get_ipython().magic('matplotlib inline')
# 0. Read the data
item_info = pd.read_csv('C:/Users/zbd/Desktop/Amazon/item_info.csv', engine='python')
reviews_new = pd.read_csv('C:/Users/zbd/Desktop/Amazon/reviews_new.csv', engine='python')
print(item_info.head())
print(len(item_info))
#print(reviews_new.head())
# 1. Clean the data
# keep only the columns we need (.copy() avoids SettingWithCopy warnings)
item_info_c = item_info[['Rank','item_name','store','price','Date_first_listed_on_Amazon','star','reviews','Read reviews that mention']].copy()
# clean the price column
item_info_c['price'] = item_info_c['price'].str.replace('$', '', regex=False)    # strip the dollar sign literally
item_info_c['min_price'] = item_info_c['price'].str.split('-').str[0].astype('float')
item_info_c['max_price'] = item_info_c['price'].str.split('-').str[-1].astype('float')
item_info_c['mean_price'] = (item_info_c['max_price'] + item_info_c['min_price'])/2
# fill NaN values with the column mean
def f_na(data, cols):
    for i in cols:
        data[i].fillna(data[i].mean(), inplace=True)
    return data
item_info_c = f_na(item_info_c, ['star','reviews','min_price','max_price','mean_price'])
item_info_c.head(5)
# 2. Aggregate by store
a = item_info_c.groupby('store')['star'].mean().sort_values(ascending=False)    # mean star rating per store
b = item_info_c.groupby('store')['reviews'].agg(reviews_sum='sum', reviews_mean='mean')    # total and mean review counts per store (named aggregation; the old dict form is removed in pandas >= 0.25)
c = item_info_c.groupby('store')['min_price'].mean()    # mean minimum price per store
d = item_info_c.groupby('store')['max_price'].mean()    # mean maximum price per store
e = item_info_c.groupby('store')['mean_price'].mean()   # mean price per store
e.name = 'price_mean'
f = item_info_c.groupby('store')['star'].count()        # product count per store
f.name = 'item_num'
#print(a,b,c,d,e,f)
df = pd.concat([a, b, e, f], axis=1)
# share of the Top 100 per store
df['per'] = df['item_num']/100
df['per%'] = df['per'].apply(lambda x: '%.2f%%' % (x*100))
# min-max normalization to 0-10
def data_nor(df, *cols):
    for col in cols:
        colname = col + '_nor'
        df[colname] = (df[col]-df[col].min())/(df[col].max()-df[col].min()) * 10
    return df
# the function writes the normalized values into new columns named <col>_nor
df_re = data_nor(df, 'star', 'reviews_mean', 'price_mean', 'item_num')
print(df_re.head(5))
# 3. Draw the charts
fig, axes = plt.subplots(4, 1, figsize=(10,15))
plt.subplots_adjust(wspace=0, hspace=0.5)
# star-rating ranking by store
df_star = df['star'].sort_values(ascending=False)
df_star.plot(kind='bar', color='yellow', grid=True, alpha=0.5, ax=axes[0], width=0.7,
             ylim=[3,5], title='Star-rating ranking by store')
axes[0].axhline(df_star.mean(), label='average star rating %.2f' % df_star.mean(), color='r', linestyle='--')
axes[0].legend(loc=1)
# average review count ranking by store
df_reviews_mean = df['reviews_mean'].sort_values(ascending=False)
df_reviews_mean.plot(kind='bar', color='blue', grid=True, alpha=0.5, ax=axes[1], width=0.7,
                     title='Average review count ranking by store')
axes[1].axhline(df_reviews_mean.mean(), label='average review count %i' % df_reviews_mean.mean(), color='r', linestyle='--')
axes[1].legend(loc=1)
# price range by store (by mean price)
avg_price = (d-c)/2
avg_price.name = 'avg_price'
max_price = avg_price.copy()
max_price.name = 'max_price'
df_price = pd.concat([c, avg_price, max_price, df_re['price_mean']], axis=1)
df_price = df_price.sort_values(['price_mean'], ascending=False)
df_price.drop(['price_mean'], axis=1, inplace=True)
df_price.plot(kind='bar', grid=True, alpha=0.5, ax=axes[2], width=0.7, stacked=True,
              color=['white','red','blue'], ylim=[0,55], title='Price range by store')
# weighted-score ranking by store
df_nor = pd.concat([df_re['star_nor'], df_re['reviews_mean_nor'], df_re['price_mean_nor'], df_re['item_num_nor']], axis=1)
df_nor['nor_total'] = df_re['star_nor'] + df_re['reviews_mean_nor'] + df_re['price_mean_nor'] + df_re['item_num_nor']
df_nor = df_nor.sort_values(['nor_total'], ascending=False)
df_nor.drop(['nor_total'], axis=1, inplace=True)
df_nor.plot(kind='bar', grid=True, alpha=0.5, ax=axes[3], width=0.7, stacked=True,
            title='Weighted-score ranking by store')
# pie chart of product counts by store
colors = ['aliceblue','antiquewhite','beige','bisque','blanchedalmond','blue','blueviolet','brown','burlywood',
          'cadetblue','chartreuse','chocolate','coral','cornflowerblue','cornsilk','crimson','cyan','darkblue','darkcyan','darkgoldenrod',
          'darkgreen','darkkhaki','darkviolet','deeppink','deepskyblue','dimgray','dodgerblue','firebrick','floralwhite','forestgreen',
          'gainsboro','ghostwhite','gold','goldenrod']
df_per = df_re['item_num']
fig, axes = plt.subplots(1, 1, figsize=(8,8))
plt.axis('equal')    # keep the aspect ratio square
plt.pie(df_per,
        labels=df_per.index,
        autopct='%.2f%%',
        pctdistance=1.05,
        #shadow=True,
        startangle=0,
        radius=1.5,
        colors=colors,
        frame=False
        )
# star/price scatter plot by store
plt.figure(figsize=(13,8))
x = df_re['price_mean']         # x axis: mean price
y = df_re['star']               # y axis: star rating
s = df_re['item_num']*100       # point size: product count (more products, bigger point)
c = df_re['reviews_mean']*10    # point color: mean review count (more reviews, deeper red)
plt.scatter(x, y, marker='.', cmap='Reds', alpha=0.8, s=s, c=c)
plt.grid()
plt.title('Star/price scatter plot by store')
plt.xlim([0,50])
plt.ylim([3,5])
plt.xlabel('price')
plt.ylabel('star')
# draw the average lines and the legend
p_mean = df_re['price_mean'].mean()
s_mean = df_re['star'].mean()
plt.axvline(p_mean, label='average price $%.2f' % p_mean, color='r', linestyle='--')
plt.axhline(s_mean, label='average star rating %.2f' % s_mean, color='g', linestyle='-.')
plt.axvspan(p_mean, 50, ymin=(s_mean-3)/(5-3), ymax=1, alpha=0.1, color='g')
plt.axhspan(0, s_mean, xmin=0, xmax=p_mean/50, alpha=0.1, color='grey')
plt.legend(loc=2)
# add store labels
for x, y, name in zip(df_re['price_mean'], df_re['star'], df_re.index):
    plt.annotate(name, xy=(x,y), xytext=(0,-5), textcoords='offset points', ha='center', va='top', fontsize=9)
# clean the 'Read reviews that mention' column
df_rrtm = item_info_c['Read reviews that mention'].fillna('missing', inplace=False)
df_rrtm = df_rrtm.str.strip('[')
df_rrtm = df_rrtm.str.rstrip(']')
df_rrtm = df_rrtm.str.replace('\'','')
reviews_labels = []
for i in df_rrtm:
    reviews_labels = reviews_labels + i.split(',')
#print(reviews_labels)
labels = []
for j in reviews_labels:
    if j != 'missing':    # drop the placeholder rows
        labels.append(j)
#print(labels)
# count the tag frequencies
counts = {}
for i in labels:
    counts[i] = counts.get(i, 0) + 1
#print(counts)
label_counts = list(counts.items())
#print(label_counts)
label_counts.sort(key=lambda x: x[1], reverse=True)    # sort by frequency, descending
print('%i review tags in total; Top 20 below:' % len(label_counts))
print('-----------------------------')
# print the results
for i in label_counts[:20]:
    print(i)