Before getting started with Spark we need to prepare an environment and some data. For the environment, you can follow the official Spark documentation and install it yourself; I chose to install on a CDH cluster, as described in my earlier article: Cloudera Manager大數據集羣環境搭建.
Preparing the data is the main subject of this post. The data is scraped with a Python crawler that pulls the past month of Shanghai weather, adapted from https://www.cnblogs.com/haha-point/p/7467221.html. The site has since added anti-scraping measures, but after some digging it turned out that simply sending browser-style request headers is enough to get around them.
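To see this for yourself, a quick sanity check along these lines (the site's exact responses are not guaranteed and may have changed since this was written):

import requests

url = "http://lishi.tianqi.com/shanghai/201902.html"

# A bare request was being rejected by the site's anti-scraping check
# at the time of writing
print(requests.get(url).status_code)

# The same request with a browser-style User-Agent went through normally
print(requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).status_code)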
Shanghai's weather for February 2019 is available at http://lishi.tianqi.com/shanghai/201902.html. The full crawler script:
# encoding: utf-8
import requests
from bs4 import BeautifulSoup

url = "http://lishi.tianqi.com/shanghai/201902.html"

if __name__ == '__main__':
    # Browser-like headers are what gets us past the site's anti-scraping check
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Cache-Control': 'no-cache',
        'Cookie': 'cityPy=shanghai; UM_distinctid=1696d7851820-0518fd894ef605-36657905-1aeaa0-1696d7851832da; CNZZDATA1275796416=2105206279-1552318077-https%253A%252F%252Fwww.cnblogs.com%252F%7C1552318077; Hm_lvt_ab6a683aa97a52202eab5b3a9042a8d2=1552319796,1552319840,1552319867; Hm_lpvt_ab6a683aa97a52202eab5b3a9042a8d2=1552322278'
    }
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')
    # The monthly history table sits in a div with class "tqtongji2"
    weather_list = soup.select('div[class="tqtongji2"]')
    with open("weather.txt", 'w', encoding='utf-8') as target_file:
        for weather in weather_list:
            weather_date = weather.select('a')[0].string  # month label (not used below)
            ul_list = weather.select('ul')
            # Each <ul> is one row of the table; the first row holds the column headers
            for i, ul in enumerate(ul_list):
                if i == 0:
                    continue
                li_list = ul.select('li')
                row = ','.join(li.get_text(strip=True) for li in li_list)
                target_file.write(row + '\n')
The code above writes the February 2019 weather to weather.txt, one day per line with comma-separated fields.
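Since weather.txt is plain comma-separated text, it drops straight into Spark. A minimal sketch of what the next post builds on, assuming PySpark is installed and weather.txt sits in the working directory:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("weather-check").getOrCreate()

# One day per line, fields comma-separated in the same order as the source page
lines = spark.sparkContext.textFile("weather.txt")
rows = lines.map(lambda line: line.split(','))

print(rows.count())   # should match the number of days in February 2019
print(rows.first())   # the first day's fields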
With the data ready, the next post will walk through Spark's basic APIs.