Spark 1.4 introduced support for window (analytic) functions in Spark SQL.
In our offline platform, more than 90% of offline analysis jobs are implemented in Hive, and many of them rely on window functions. With window functions available in Spark SQL, the effort of migrating those Hive jobs to Spark SQL drops considerably. Usage is as follows:
1. Initialize the data
Create the table:
create table window_test2 (url string, rate int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Prepare the test data:
url1,12
url2,11
url1,23
url2,25
url1,58
url3,11
url2,25
url3,58
url2,11
Load the data:
load data local inpath '/opt/bin/short_opt/windows2.data' overwrite into table window_test2;
2. Window function tests
Query all the data:
select * from window_test2;
+-------+-------+
| url | rate |
+-------+-------+
| url1 | 12 |
| url2 | 11 |
| url1 | 23 |
| url2 | 25 |
| url1 | 58 |
| url3 | 11 |
| url2 | 25 |
| url3 | 58 |
| url2 | 11 |
+-------+-------+
Rank within each group (row_number):
select url,rate,row_number() over(partition by url order by rate desc) as r from window_test2;
+-------+-------+----+
| url | rate | r |
+-------+-------+----+
| url1 | 58 | 1 |
| url1 | 23 | 2 |
| url1 | 12 | 3 |
| url2 | 25 | 1 |
| url2 | 25 | 2 |
| url2 | 11 | 3 |
| url2 | 11 | 4 |
| url3 | 58 | 1 |
| url3 | 11 | 2 |
+-------+-------+----+
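A common use of row_number() is to keep only the top N rows per group. A minimal sketch (not from the original post; the alias r and the limit of 2 are illustrative) wraps the ranked query above in a subquery and filters on the rank:
-- keep the top 2 rates per url, reusing the ranking query above
select url, rate
from (
  select url, rate,
         row_number() over (partition by url order by rate desc) as r
  from window_test2
) t
where r <= 2;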
Group aggregation with sum:
select url,rate,sum(rate) over(partition by url ) as r from window_test2;
+-------+-------+-----+
| url | rate | r |
+-------+-------+-----+
| url1 | 12 | 93 |
| url1 | 23 | 93 |
| url1 | 58 | 93 |
| url2 | 11 | 72 |
| url2 | 25 | 72 |
| url2 | 25 | 72 |
| url2 | 11 | 72 |
| url3 | 11 | 69 |
| url3 | 58 | 69 |
+-------+-------+-----+
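Without ORDER BY in the OVER clause, sum() returns the same group total on every row. Adding ORDER BY and an explicit ROWS frame turns it into a running total; a sketch, assuming the standard frame syntax supported by Hive 0.11+ and Spark SQL (running_total is an illustrative alias):
-- cumulative sum of rate within each url, ordered by rate
select url, rate,
       sum(rate) over (partition by url order by rate
                       rows between unbounded preceding and current row) as running_total
from window_test2;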
Group aggregation with avg:
select url,rate,avg(rate) over(partition by url ) as r from window_test2;
+-------+-------+-------+
| url | rate | r |
+-------+-------+-------+
| url1 | 12 | 31.0 |
| url1 | 23 | 31.0 |
| url1 | 58 | 31.0 |
| url2 | 25 | 18.0 |
| url2 | 11 | 18.0 |
| url2 | 11 | 18.0 |
| url2 | 25 | 18.0 |
| url3 | 11 | 34.5 |
| url3 | 58 | 34.5 |
+-------+-------+-------+
Group aggregation with count:
select url,rate,count(rate) over(partition by url ) as r from window_test2;
+-------+-------+----+
| url | rate | r |
+-------+-------+----+
| url1 | 12 | 3 |
| url1 | 23 | 3 |
| url1 | 58 | 3 |
| url2 | 11 | 4 |
| url2 | 25 | 4 |
| url2 | 25 | 4 |
| url2 | 11 | 4 |
| url3 | 11 | 2 |
| url3 | 58 | 2 |
+-------+-------+----+
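The sum, avg, and count examples above can also be computed in a single query over the same window; a sketch (column aliases are illustrative), which additionally shows that max and min work as window aggregates too:
-- several per-group aggregates in one pass
select url, rate,
       sum(rate)   over (partition by url) as sum_rate,
       avg(rate)   over (partition by url) as avg_rate,
       count(rate) over (partition by url) as cnt_rate,
       max(rate)   over (partition by url) as max_rate,
       min(rate)   over (partition by url) as min_rate
from window_test2;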
lag within each group:
select url,rate,lag(rate) over(partition by url ) as r from window_test2;
+-------+-------+-------+
| url | rate | r |
+-------+-------+-------+
| url1 | 12 | NULL |
| url1 | 23 | 12 |
| url1 | 58 | 23 |
| url2 | 25 | NULL |
| url2 | 11 | 25 |
| url2 | 11 | 11 |
| url2 | 25 | 11 |
| url3 | 11 | NULL |
| url3 | 58 | 11 |
+-------+-------+-------+
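Note that the query above has no ORDER BY inside the OVER clause, so which row counts as the "previous" one is not deterministic. In practice lag() (and its counterpart lead()) is usually given an explicit ordering, an offset, and a default value; a sketch, assuming the three-argument form available in Hive and Spark SQL (prev_rate and next_rate are illustrative aliases):
-- previous and next rate within each url, ordered by rate, defaulting to 0 at the edges
select url, rate,
       lag(rate, 1, 0)  over (partition by url order by rate) as prev_rate,
       lead(rate, 1, 0) over (partition by url order by rate) as next_rate
from window_test2;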
3. From Spark 1.4 onward, Spark SQL supports the window functions shown above, which greatly eases converting Hive jobs to Spark SQL.
---------------------
Original post: https://blog.csdn.net/kwu_ganymede/article/details/50457528
Window function reference series:
Part 1: SUM, AVG, MIN, MAX
http://lxw1234.com/archives/2015/04/176.htm
Part 2: NTILE, ROW_NUMBER, RANK, DENSE_RANK
http://lxw1234.com/archives/2015/04/181.htm
Part 3: CUME_DIST, PERCENT_RANK
http://lxw1234.com/archives/2015/04/185.htm
Part 4: LAG, LEAD, FIRST_VALUE, LAST_VALUE
http://lxw1234.com/archives/2015/04/190.htm
Part 5: GROUPING SETS, GROUPING__ID, CUBE, ROLLUP
http://lxw1234.com/archives/2015/04/193.htm