Hands-On Project from 0 to 1 with Spark (32): E-Commerce Data Warehouse (Summary), Part II: The Business Data Warehouse

2.4 Relational Modeling and Dimensional Modeling
Relational model
(figure: a relational model, normalized across many related tables)

The relational model is mainly used in OLTP systems. To guarantee data consistency and avoid redundancy, most business-system tables follow third normal form (3NF).


Dimensional model
(figure: a dimensional model, with dimension tables arranged around a fact table)

The dimensional model is mainly used in OLAP systems. Although the relational model has little redundancy, analytical queries that span many tables over large data sets force multi-table joins, which sharply reduces execution efficiency.
Dimensional modeling therefore organizes the tables into just two kinds, fact tables and dimension tables, with every dimension table describing some aspect of the fact table.


OLAP vs. OLTP comparison
(figure: comparison of OLTP and OLAP characteristics)

Snowflake, star, and constellation schemas
On top of dimensional modeling there are three schema types: the star schema, the snowflake schema, and the constellation schema.

(figures: star, snowflake, and constellation schema diagrams)

Chapter 3: Building the Data Warehouse
3.0 Configure Hadoop to Support Snappy Compression
1) Unpack the Hadoop build compiled with Snappy support, upload all files from its lib/native directory to /opt/module/hadoop-2.7.2/lib/native on hadoop102, and distribute the directory to hadoop103 and hadoop104.
2) Restart Hadoop.
3) Check the supported compression codecs:

[kgg@hadoop102 native]$ hadoop checknative
hadoop:  true /opt/module/hadoop-2.7.2/lib/native/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /opt/module/hadoop-2.7.2/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   false
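If this check needs to run unattended (for example, right after redeploying the native libraries), the relevant line can be parsed in a small script. The sketch below hard-codes a sample line of checknative-style output so it is self-contained; in practice you would pipe the output of `hadoop checknative` into the function instead.

```shell
# Report whether Snappy is enabled, given checknative-style output on stdin.
check_snappy() {
  if grep -q '^snappy: *true'; then
    echo "snappy OK"
  else
    echo "snappy missing"
  fi
}

# Sample input standing in for real `hadoop checknative` output:
printf 'hadoop:  true /opt/module/hadoop-2.7.2/lib/native/libhadoop.so\nsnappy:  true /opt/module/hadoop-2.7.2/lib/native/libsnappy.so.1\n' | check_snappy
```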

3.1 Generating Business Data
3.1.1 Table creation statements
1) Create the gmall database in SQLyog

(screenshot: creating the gmall database in SQLyog)

2) Set the database character set
(screenshot: setting the character set in SQLyog)

3) Import the table creation script (script 1: table creation)
(screenshots: importing the SQL script in SQLyog)

4) Repeat the import procedure from step 3 for, in order: script 2 (category data inserts), script 3 (functions), and script 4 (stored procedures).

3.1.2 Generating business data
1) The data-generation function (a MySQL stored procedure created by the scripts above):

    init_data ( do_date_string VARCHAR(20), order_incr_num INT, user_incr_num INT, sku_num INT, if_truncate BOOLEAN ):
    Parameter 1: do_date_string, the date for which to generate data
    Parameter 2: order_incr_num, the number of orders to create
    Parameter 3: user_incr_num, the number of users to create
    Parameter 4: sku_num, the number of product SKUs to create
    Parameter 5: if_truncate, whether to truncate the existing data first
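The procedure can also be invoked from the command line instead of SQLyog. The wrapper below is a hypothetical sketch: it only builds the CALL statement and prints it; piping its output to the mysql client (with the root/000000 credentials this chapter uses elsewhere) would execute it.

```shell
# Hypothetical helper: build the CALL statement for init_data.
# Usage:      gen_data <date> <orders> <users> <skus>
# To execute: gen_data 2019-02-10 1000 200 300 | mysql -uroot -p000000 gmall
gen_data() {
  local do_date=$1 orders=$2 users=$3 skus=$4
  echo "CALL init_data('$do_date',$orders,$users,$skus,TRUE);"
}

gen_data 2019-02-10 1000 200 300
```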

2) Test case:
(1) Requirement: generate data for 2019-02-10 with 1,000 orders, 200 users, and 300 SKUs, deleting the original data first.


CALL init_data('2019-02-10',1000,200,300,TRUE);

(2) Query the generated data:

SELECT * from base_category1;
SELECT * from base_category2;
SELECT * from base_category3;

SELECT * from order_info;
SELECT * from order_detail;

SELECT * from sku_info;
SELECT * from user_info;

SELECT * from payment_info;

3.2 Importing Business Data into the Warehouse
(figure: overview of importing business data from MySQL into HDFS with Sqoop)

3.2.1 Installing Sqoop
See 尚硅谷大數據技術之Sqoop (the companion Sqoop guide).
3.2.2 The Sqoop import command

/opt/module/sqoop/bin/sqoop import \
--connect \
--username \
--password \
--target-dir \
--delete-target-dir \
--num-mappers \
--fields-terminated-by \
--query "$2"' and $CONDITIONS;'
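The quoting in the --query argument is easy to misread: the double-quoted part lets the shell expand the SQL text, while the adjacent single-quoted part keeps $CONDITIONS literal, so that Sqoop, not the shell, substitutes its split condition at runtime. A standalone demonstration of the concatenation:

```shell
# The SQL text is shell-expanded; ' and $CONDITIONS;' stays literal.
query="select id, name from base_category1 where 1=1"
full_query="$query"' and $CONDITIONS;'
echo "$full_query"
```

Running this prints `select id, name from base_category1 where 1=1 and $CONDITIONS;`, which is exactly the string Sqoop expects to receive.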

3.2.3 Tables for analysis

(figure: the business tables to be imported for analysis)

3.2.4 Scheduled Sqoop import script
1) Create the script sqoop_import.sh in the /home/kgg/bin directory
[kgg@hadoop102 bin]$ vim sqoop_import.sh

Add the following content to the script:
#!/bin/bash

db_date=$2
echo $db_date
db_name=gmall

import_data() {
/opt/module/sqoop/bin/sqoop import \
--connect jdbc:mysql://hadoop102:3306/$db_name \
--username root \
--password 000000 \
--target-dir /origin_data/$db_name/db/$1/$db_date \
--delete-target-dir \
--num-mappers 1 \
--fields-terminated-by "\t" \
--query "$2"' and $CONDITIONS;'
}

import_sku_info(){
  import_data "sku_info" "select 
id, spu_id, price, sku_name, sku_desc, weight, tm_id,
category3_id, create_time
  from sku_info where 1=1"
}

import_user_info(){
  import_data "user_info" "select 
id, name, birthday, gender, email, user_level, 
create_time 
from user_info where 1=1"
}

import_base_category1(){
  import_data "base_category1" "select 
id, name from base_category1 where 1=1"
}

import_base_category2(){
  import_data "base_category2" "select 
id, name, category1_id from base_category2 where 1=1"
}

import_base_category3(){
  import_data "base_category3" "select id, name, category2_id from base_category3 where 1=1"
}

import_order_detail(){
  import_data   "order_detail"   "select 
    od.id, 
    order_id, 
    user_id, 
    sku_id, 
    sku_name, 
    order_price, 
    sku_num, 
    o.create_time  
  from order_info o, order_detail od
  where o.id=od.order_id
  and DATE_FORMAT(create_time,'%Y-%m-%d')='$db_date'"
}

import_payment_info(){
  import_data "payment_info"   "select 
    id,  
    out_trade_no, 
    order_id, 
    user_id, 
    alipay_trade_no, 
    total_amount,  
    subject, 
    payment_type, 
    payment_time 
  from payment_info 
  where DATE_FORMAT(payment_time,'%Y-%m-%d')='$db_date'"
}

import_order_info(){
  import_data   "order_info"   "select 
    id, 
    total_amount, 
    order_status, 
    user_id, 
    payment_way, 
    out_trade_no, 
    create_time, 
    operate_time  
  from order_info 
  where (DATE_FORMAT(create_time,'%Y-%m-%d')='$db_date' or DATE_FORMAT(operate_time,'%Y-%m-%d')='$db_date')"
}

case $1 in
  "base_category1")
     import_base_category1
;;
  "base_category2")
     import_base_category2
;;
  "base_category3")
     import_base_category3
;;
  "order_info")
     import_order_info
;;
  "order_detail")
     import_order_detail
;;
  "sku_info")
     import_sku_info
;;
  "user_info")
     import_user_info
;;
  "payment_info")
     import_payment_info
;;
   "all")
   import_base_category1
   import_base_category2
   import_base_category3
   import_order_info
   import_order_detail
   import_sku_info
   import_user_info
   import_payment_info
;;
esac

2) Grant execute permission on the script

[kgg@hadoop102 bin]$ chmod 777 sqoop_import.sh

3) Run the script to import the data

[kgg@hadoop102 bin]$ sqoop_import.sh all 2019-02-10

4) Generate data for 2019-02-11 in SQLyog

CALL init_data('2019-02-11',1000,200,300,TRUE);

5) Run the script again to import the new data

[kgg@hadoop102 bin]$ sqoop_import.sh all 2019-02-11
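To make the import truly scheduled, as the section title suggests, the script can be driven from cron. The entry below is an assumption, not part of the original setup: it imports the previous day's data for all tables at 00:30 every morning. Note that % has special meaning in crontab lines and must be escaped as \%.

```
# Hypothetical crontab entry for user kgg (edit with: crontab -e)
30 0 * * * /home/kgg/bin/sqoop_import.sh all $(date -d '-1 day' +\%Y-\%m-\%d)
```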

3.2.5 Handling Sqoop import errors
1) Symptom: running the Sqoop import script fails with the following exception:

java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@65d6b83b is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:930)
    at com.mysql.jdbc.MysqlIO.checkForOutstandingStreamingData(MysqlIO.java:2646)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1861)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2101)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2548)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2477)
    at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1422)
    at com.mysql.jdbc.ConnectionImpl.getMaxBytesPerChar(ConnectionImpl.java:2945)
    at com.mysql.jdbc.Field.getMaxBytesPerCharacter(Field.java:582)

2) Solution: add the following parameter to the sqoop import command in import_data (for example, right after --connect). It makes Sqoop use the generic JDBC connection manager, avoiding the streaming result sets that trigger this error:

--driver com.mysql.jdbc.Driver \