Permission Management for Big Data Platforms - Apache Ranger

Introduction to Apache Ranger

Apache Ranger provides a centralized security management framework that addresses authorization and auditing. It enables fine-grained data access control over Hadoop ecosystem components such as HDFS, YARN, Hive, and HBase. Through the Ranger console, administrators can easily control user access by configuring policies. Ranger's advantages:

  • Broad component support (HDFS, HBase, Hive, YARN, Kafka, Storm)
  • Fine-grained permission control (down to the Hive column level)
  • Plugin-based permission enforcement with unified, convenient policy management
  • Audit logging of all kinds of operations, with a unified query interface and UI
  • Kerberos integration, plus REST interfaces for secondary development

Why choose Ranger:

  • Multi-component support, covering essentially all components of the current technology stack
  • Audit logs record user operations in detail, simplifying troubleshooting and feedback
  • Comes with its own user model, making integration with other systems and API access straightforward

Ranger architecture diagram:
(architecture diagram)

Ranger Admin:

  • Plans policies for each service, assigning resources to the corresponding users or groups
  • Exposes RESTful interfaces to create, read, update, and delete policies
  • Provides a unified query and management UI
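The RESTful policy interface can be exercised with any HTTP client. The sketch below (Python, standard library only) builds an authenticated request against what I believe is the public v2 API; the endpoint path, host, and credentials are assumptions to adjust for your deployment:

```python
# Hedged sketch: build (but do not send) an authenticated GET listing the
# policies of a Ranger service. Endpoint path assumed from the public v2 API;
# host/credentials are placeholders.
import base64
import urllib.request

def build_policy_request(admin_url, service_name, user, password):
    """Return a urllib Request for the policies of `service_name`."""
    url = f"{admin_url}/service/public/v2/api/service/{service_name}/policy"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    })

req = build_policy_request("http://192.168.1.11:6080", "dev_hdfs", "admin", "admin")
print(req.full_url)
```

Sending it with `urllib.request.urlopen(req)` against a running Admin would return the policies as JSON.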

Service Plugin:

  • Embedded in each system's execution flow, periodically pulling policies from Ranger Admin
  • Makes access decisions against those policies
  • Records access audit logs

Ranger Permission Model

  • User: expressed as a User or a Group
  • Resource: differs per component, e.g. a Path for HDFS, a DB/TABLE for Hive
  • Policy: a Service can carry multiple Policies; the policy authorization model differs per component
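The model above can be illustrated with a toy evaluator (an illustrative sketch only, not Ranger's actual engine): a policy matches a user or group against a resource, and deny conditions take precedence over allows:

```python
# Toy illustration of the user/resource/policy model. Policy layout and
# matching rules here are simplified assumptions, not Ranger internals.
def evaluate(policies, user, groups, resource, access):
    decision = "DENIED"          # default when nothing matches
    for p in policies:
        if not resource.startswith(p["resource"]):
            continue             # resource does not match this policy
        matched = user in p.get("users", []) or bool(set(groups) & set(p.get("groups", [])))
        if not matched:
            continue
        if access in p.get("deny", []):
            return "DENIED"      # deny conditions win outright
        if access in p.get("allow", []):
            decision = "ALLOWED"
    return decision

policies = [
    {"resource": "/rangertest1", "users": ["hive"], "allow": ["read", "write"]},
    {"resource": "/rangertest2", "users": ["hive"], "deny": ["read", "write"]},
]
print(evaluate(policies, "hive", [], "/rangertest1/testfile", "write"))  # ALLOWED
print(evaluate(policies, "hive", [], "/rangertest2/testfile", "read"))   # DENIED
```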

Taking HDFS as an example, the access flow after integration with Ranger:
(diagram)

  • On startup, HDFS loads the Ranger plugin and pulls permission policies from the Admin
  • A user access request reaches the NameNode, where permissions are checked
  • After the check, the request is processed and an audit log is recorded

Taking Hive as an example, the access flow after integration with Ranger:
(diagram)

  • On startup, HiveServer2 loads the Ranger plugin and pulls permission policies from the Admin
  • A user's SQL query reaches HiveServer2, and permissions are checked during the Compile phase
  • After the check, the request is processed and an audit log is recorded

Taking YARN as an example, the access flow after integration with Ranger:
(diagram)

  • On startup, the ResourceManager loads the Ranger plugin and pulls permission policies from the Admin
  • A user submits a job to the ResourceManager, and permissions are checked while the job is parsed
  • After the check, the job is submitted and an audit log is recorded

Installing Apache Ranger

Official documentation:

Prerequisites

First, have Java and Maven environments ready:

[root@hadoop ~]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
[root@hadoop ~]# mvn -v
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /usr/local/maven
Java version: 1.8.0_261, vendor: Oracle Corporation, runtime: /usr/local/jdk/1.8/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1062.el7.x86_64", arch: "amd64", family: "unix"
[root@hadoop ~]#
  • Tips: configure Maven with a nearby mirror, or downloading the dependencies may not finish even after a day

Install a MySQL database; here I am using the one on my local machine:

C:\Users\Administrator>mysql --version
mysql  Ver 8.0.21 for Win64 on x86_64 (MySQL Community Server - GPL)

Set up a Hadoop environment. Note that the Hadoop version must be >= 2.7.1: I previously tried Hadoop 2.6.0 and it could not be integrated with Ranger. This article uses version 2.8.5:

[root@hadoop ~]# echo $HADOOP_HOME
/usr/local/hadoop-2.8.5
[root@hadoop ~]#

Ranger relies on MySQL for state storage, so a MySQL driver jar needs to be prepared:

[root@hadoop ~]# ls /usr/local/src |grep mysql
mysql-connector-java-8.0.21.jar
[root@hadoop ~]#

Compiling the Ranger Source

Download the source package from the official site:

Note the version correspondence between Ranger and Hadoop: if your Hadoop is 2.x, use a Ranger version below 2.x; if your Hadoop is 3.x, use Ranger 2.x or above. For example, since the Hadoop version installed here is 2.8.5, Ranger 1.2.0 is chosen:

[root@hadoop ~]# cd /usr/local/src
[root@hadoop /usr/local/src]# wget https://mirror-hk.koddos.net/apache/ranger/1.2.0/apache-ranger-1.2.0.tar.gz

Extract the source package:

[root@hadoop /usr/local/src]# tar -zxvf apache-ranger-1.2.0.tar.gz

Enter the extracted directory with cd apache-ranger-1.2.0, then edit the pom file there and comment out the repository-related configuration:

<!--
    <repositories>
        <repository>
            <id>apache.snapshots.https</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/snapshots</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>apache.public.https</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/public</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    <repository>
      <id>repo</id>
      <url>file://${basedir}/local-repo</url>
      <snapshots>
         <enabled>true</enabled>
      </snapshots>
  </repository>
    </repositories>
-->

With those changes in place, compile and package with Maven:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# mvn -DskipTests=true clean package assembly:assembly

After a fairly long wait, a successful build ends with output like the following:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for ranger 1.2.0:
[INFO] 
[INFO] ranger ............................................. SUCCESS [  0.838 s]
[INFO] Jdbc SQL Connector ................................. SUCCESS [  0.861 s]
[INFO] Credential Support ................................. SUCCESS [ 26.341 s]
[INFO] Audit Component .................................... SUCCESS [  1.475 s]
[INFO] Common library for Plugins ......................... SUCCESS [  3.154 s]
[INFO] Installer Support Component ........................ SUCCESS [  0.471 s]
[INFO] Credential Builder ................................. SUCCESS [  1.074 s]
[INFO] Embedded Web Server Invoker ........................ SUCCESS [  0.807 s]
[INFO] Key Management Service ............................. SUCCESS [  3.335 s]
[INFO] ranger-plugin-classloader .......................... SUCCESS [  0.797 s]
[INFO] HBase Security Plugin Shim ......................... SUCCESS [ 17.365 s]
[INFO] HBase Security Plugin .............................. SUCCESS [  6.050 s]
[INFO] Hdfs Security Plugin ............................... SUCCESS [  5.831 s]
[INFO] Hive Security Plugin ............................... SUCCESS [02:01 min]
[INFO] Knox Security Plugin Shim .......................... SUCCESS [03:47 min]
[INFO] Knox Security Plugin ............................... SUCCESS [07:05 min]
[INFO] Storm Security Plugin .............................. SUCCESS [  1.757 s]
[INFO] YARN Security Plugin ............................... SUCCESS [  0.820 s]
[INFO] Ranger Util ........................................ SUCCESS [  0.869 s]
[INFO] Unix Authentication Client ......................... SUCCESS [ 17.494 s]
[INFO] Security Admin Web Application ..................... SUCCESS [03:01 min]
[INFO] KAFKA Security Plugin .............................. SUCCESS [  6.686 s]
[INFO] SOLR Security Plugin ............................... SUCCESS [03:07 min]
[INFO] NiFi Security Plugin ............................... SUCCESS [  1.210 s]
[INFO] NiFi Registry Security Plugin ...................... SUCCESS [  1.205 s]
[INFO] Unix User Group Synchronizer ....................... SUCCESS [  2.062 s]
[INFO] Ldap Config Check Tool ............................. SUCCESS [  3.478 s]
[INFO] Unix Authentication Service ........................ SUCCESS [  0.638 s]
[INFO] KMS Security Plugin ................................ SUCCESS [  1.430 s]
[INFO] Tag Synchronizer ................................... SUCCESS [01:58 min]
[INFO] Hdfs Security Plugin Shim .......................... SUCCESS [  0.584 s]
[INFO] Hive Security Plugin Shim .......................... SUCCESS [ 24.249 s]
[INFO] YARN Security Plugin Shim .......................... SUCCESS [  0.612 s]
[INFO] Storm Security Plugin shim ......................... SUCCESS [  0.709 s]
[INFO] KAFKA Security Plugin Shim ......................... SUCCESS [  0.617 s]
[INFO] SOLR Security Plugin Shim .......................... SUCCESS [  0.716 s]
[INFO] Atlas Security Plugin Shim ......................... SUCCESS [ 31.534 s]
[INFO] KMS Security Plugin Shim ........................... SUCCESS [  0.648 s]
[INFO] ranger-examples .................................... SUCCESS [  0.015 s]
[INFO] Ranger Examples - Conditions and ContextEnrichers .. SUCCESS [  1.108 s]
[INFO] Ranger Examples - SampleApp ........................ SUCCESS [  0.386 s]
[INFO] Ranger Examples - Ranger Plugin for SampleApp ...... SUCCESS [  0.519 s]
[INFO] Ranger Tools ....................................... SUCCESS [  1.411 s]
[INFO] Atlas Security Plugin .............................. SUCCESS [  3.977 s]
[INFO] Sqoop Security Plugin .............................. SUCCESS [  3.637 s]
[INFO] Sqoop Security Plugin Shim ......................... SUCCESS [  0.558 s]
[INFO] Kylin Security Plugin .............................. SUCCESS [01:04 min]
[INFO] Kylin Security Plugin Shim ......................... SUCCESS [  0.883 s]
[INFO] Unix Native Authenticator .......................... SUCCESS [  0.452 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

The packaged plugin installers are now available under the target directory:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# ls target/
antrun                            ranger-1.2.0-hbase-plugin.zip     ranger-1.2.0-kms.zip                ranger-1.2.0-ranger-tools.zip     ranger-1.2.0-storm-plugin.zip
archive-tmp                       ranger-1.2.0-hdfs-plugin.tar.gz   ranger-1.2.0-knox-plugin.tar.gz     ranger-1.2.0-solr-plugin.tar.gz   ranger-1.2.0-tagsync.tar.gz
maven-shared-archive-resources    ranger-1.2.0-hdfs-plugin.zip      ranger-1.2.0-knox-plugin.zip        ranger-1.2.0-solr-plugin.zip      ranger-1.2.0-tagsync.zip
ranger-1.2.0-admin.tar.gz         ranger-1.2.0-hive-plugin.tar.gz   ranger-1.2.0-kylin-plugin.tar.gz    ranger-1.2.0-sqoop-plugin.tar.gz  ranger-1.2.0-usersync.tar.gz
ranger-1.2.0-admin.zip            ranger-1.2.0-hive-plugin.zip      ranger-1.2.0-kylin-plugin.zip       ranger-1.2.0-sqoop-plugin.zip     ranger-1.2.0-usersync.zip
ranger-1.2.0-atlas-plugin.tar.gz  ranger-1.2.0-kafka-plugin.tar.gz  ranger-1.2.0-migration-util.tar.gz  ranger-1.2.0-src.tar.gz           ranger-1.2.0-yarn-plugin.tar.gz
ranger-1.2.0-atlas-plugin.zip     ranger-1.2.0-kafka-plugin.zip     ranger-1.2.0-migration-util.zip     ranger-1.2.0-src.zip              ranger-1.2.0-yarn-plugin.zip
ranger-1.2.0-hbase-plugin.tar.gz  ranger-1.2.0-kms.tar.gz           ranger-1.2.0-ranger-tools.tar.gz    ranger-1.2.0-storm-plugin.tar.gz  version
[root@hadoop /usr/local/src/apache-ranger-1.2.0]#

Installing Ranger Admin

Extract the ranger admin package into a suitable directory; I usually put it under /usr/local:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# tar -zxvf target/ranger-1.2.0-admin.tar.gz -C /usr/local/

Enter the extracted directory; its layout is as follows:

[root@hadoop /usr/local/src/apache-ranger-1.2.0]# cd /usr/local/ranger-1.2.0-admin/
[root@hadoop /usr/local/ranger-1.2.0-admin]# ls
bin                    contrib  dba_script.py           ews                 ranger_credential_helper.py  set_globals.sh           templates-upgrade   upgrade.sh
changepasswordutil.py  cred     db_setup.py             install.properties  restrict_permissions.py      setup_authentication.sh  update_property.py  version
changeusernameutil.py  db       deleteUserGroupUtil.py  jisql               rolebasedusersearchutil.py   setup.sh                 upgrade_admin.py
[root@hadoop /usr/local/ranger-1.2.0-admin]#

Configure the installation options:

[root@hadoop /usr/local/ranger-1.2.0-admin]# vim install.properties
# Path to the MySQL driver jar
SQL_CONNECTOR_JAR=/usr/local/src/mysql-connector-java-8.0.21.jar

# root username/password and the address of the MySQL instance
db_root_user=root
db_root_password=123456a.
db_host=192.168.1.11

# username/password used to access the database
db_name=ranger_test
db_user=root
db_password=123456a.

# storage backend for audit logs
audit_store=db
audit_db_user=root
audit_db_name=ranger_test
audit_db_password=123456a.

Create the ranger database in MySQL:

create database ranger_test;

Since MySQL 8.x is used here, the database-related scripts need a small change (skip this step on other MySQL versions). Open the dba_script.py and db_setup.py files and search for the following:

-cstring jdbc:mysql://%s/%s%s

將其所有修改成以下所示,主要是添加JDBC的serverTimezone鏈接參數:

-cstring jdbc:mysql://%s/%s%s?serverTimezone=Asia/Shanghai
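As a hedged alternative to hand-editing, the same substitution can be applied with a short script; the file names come from the text above, and the replacement is guarded so rerunning it is harmless:

```python
# Apply the serverTimezone fix to the two setup scripts named in the text.
# Run from the ranger-1.2.0-admin directory; missing files are skipped.
OLD = "jdbc:mysql://%s/%s%s"
NEW = "jdbc:mysql://%s/%s%s?serverTimezone=Asia/Shanghai"

def add_timezone_param(text):
    """Append serverTimezone to the JDBC connect string; safe to rerun."""
    if NEW in text:
        return text          # fix already applied
    return text.replace(OLD, NEW)

for path in ["dba_script.py", "db_setup.py"]:
    try:
        with open(path) as f:
            src = f.read()
        with open(path, "w") as f:
            f.write(add_timezone_param(src))
    except FileNotFoundError:
        pass                 # not in the ranger-1.2.0-admin directory
```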

Then run the following command to install ranger admin:

[root@hadoop /usr/local/ranger-1.2.0-admin]# ./setup.sh

Troubleshooting

If the installation fails with errors like the following:

SQLException : SQL state: HY000 java.sql.SQLException: Operation CREATE USER failed for 'root'@'localhost' ErrorCode: 1396

SQLException : SQL state: 42000 java.sql.SQLSyntaxErrorException: Access denied for user 'root'@'192.168.1.11' to database 'mysql' ErrorCode: 1044

The fix is to run the following statements in MySQL:

use mysql;
flush privileges;
grant system_user on *.* to 'root';
drop user 'root'@'localhost';
create user 'root'@'localhost' identified by '123456a.';
grant all privileges on *.* to 'root'@'localhost' with grant option;

drop user 'root'@'192.168.1.11';
create user 'root'@'192.168.1.11' identified by '123456a.';
grant all privileges on *.* to 'root'@'192.168.1.11' with grant option;
flush privileges;

If you hit this error:

SQLException : SQL state: HY000 java.sql.SQLException: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable) ErrorCode: 1418

Fix:

set global log_bin_trust_function_creators=TRUE;
flush privileges;

If you hit this error:

SQLException : SQL state: HY000 java.sql.SQLException: Cannot drop table 'x_policy' referenced by a foreign key constraint 'x_policy_ref_role_FK_policy_id' on table 'x_policy_ref_role'. ErrorCode: 3730

Fix: drop all tables in the ranger database, then rerun ./setup.sh.

When installation completes, it finally prints:

Installation of Ranger PolicyManager Web Application is completed.

Starting Ranger Admin

Edit the configuration file to set the database connection password and the timezone parameter on the JDBC URL:

[root@hadoop /usr/local/ranger-1.2.0-admin]# vim conf/ranger-admin-site.xml
...

<property>
        <name>ranger.jpa.jdbc.url</name>
        <value>jdbc:log4jdbc:mysql://192.168.1.11/ranger_test?serverTimezone=Asia/Shanghai</value>
        <description />
</property>
<property>
        <name>ranger.jpa.jdbc.user</name>
        <value>root</value>
        <description />
</property>
<property>
        <name>ranger.jpa.jdbc.password</name>
        <value>123456a.</value>
        <description />
</property>

...

Adjust the audit storage configuration accordingly:

[root@hadoop /usr/local/ranger-1.2.0-admin]# vim conf/ranger-admin-default-site.xml
...

<property>
        <name>ranger.jpa.audit.jdbc.url</name>
        <value>jdbc:log4jdbc:mysql://192.168.1.11:3306/ranger_test?serverTimezone=Asia/Shanghai</value>
        <description />
</property>
<property>
        <name>ranger.jpa.audit.jdbc.user</name>
        <value>root</value>
        <description />
</property>
<property>
        <name>ranger.jpa.audit.jdbc.password</name>
        <value>123456a.</value>
        <description />
</property>

...

Start the service as follows:

[root@hadoop /usr/local/ranger-1.2.0-admin]# ranger-admin start 
Starting Apache Ranger Admin Service
Apache Ranger Admin Service with pid 21102 has started.
[root@hadoop /usr/local/ranger-1.2.0-admin]#

Check that the port and process look healthy:

[root@hadoop /usr/local/ranger-1.2.0-admin]# jps
21194 Jps
21102 EmbeddedServer
[root@hadoop /usr/local/ranger-1.2.0-admin]# netstat -lntp |grep 21102
tcp6       0      0 :::6080                 :::*           LISTEN      21102/java          
tcp6       0      0 127.0.0.1:6085          :::*           LISTEN      21102/java          
[root@hadoop /usr/local/ranger-1.2.0-admin]#

Visit port 6080 in a browser to reach the login page; the default username and password are both admin:
(screenshot)

After logging in you land on the home page, shown below:
(screenshot)


Installing the Ranger HDFS Plugin

Extract the hdfs plugin package into a suitable directory:

[root@hadoop ~]# mkdir /usr/local/ranger-plugin
[root@hadoop ~]# tar -zxvf /usr/local/src/apache-ranger-1.2.0/target/ranger-1.2.0-hdfs-plugin.tar.gz -C /usr/local/ranger-plugin
[root@hadoop ~]# cd /usr/local/ranger-plugin/
[root@hadoop /usr/local/ranger-plugin]# mv ranger-1.2.0-hdfs-plugin/ hdfs-plugin

Enter the extracted directory; its layout is as follows:

[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# ls
disable-hdfs-plugin.sh  enable-hdfs-plugin.sh  install  install.properties  lib  ranger_credential_helper.py  upgrade-hdfs-plugin.sh  upgrade-plugin.py
[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]#

Configure the installation options:

[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# vim install.properties
# URL of the ranger admin service
POLICY_MGR_URL=http://192.168.243.161:6080
# repository name, customizable
REPOSITORY_NAME=dev_hdfs
# hadoop installation directory
COMPONENT_INSTALL_DIR_NAME=/usr/local/hadoop-2.8.5

# user and group
CUSTOM_USER=root
CUSTOM_GROUP=root

Run the following script to enable the hdfs-plugin:

[root@hadoop /usr/local/ranger-plugin/hdfs-plugin]# ./enable-hdfs-plugin.sh

On success, the script prints:

Ranger Plugin for hadoop has been enabled. Please restart hadoop to ensure that changes are effective.

Restart Hadoop:

[root@hadoop ~]# stop-all.sh 
[root@hadoop ~]# start-all.sh

Verifying Permission Control

On Ranger Admin, add an hdfs service; the Service Name here must match the name in the configuration file:
(screenshot)

Fill in the relevant details:
(screenshot)

After filling everything in, scroll to the bottom of the page and click "Test Connection" to verify connectivity; once it connects, click "Add" to finish:
(screenshot)

After a short wait, check the "Audit" -> "Plugins" page for the hdfs plugin; if it does not show up, the plugin was not enabled successfully. The healthy state looks like this:
(screenshot)

With the hdfs plugin confirmed as integrated, create some test directories and files in hdfs:

[root@hadoop ~]# hdfs dfs -mkdir /rangertest1
[root@hadoop ~]# hdfs dfs -mkdir /rangertest2
[root@hadoop ~]# echo "ranger test" > testfile
[root@hadoop ~]# hdfs dfs -put testfile /rangertest1
[root@hadoop ~]# hdfs dfs -put testfile /rangertest2

Then add an internal Ranger user on Ranger Admin via "Settings" -> "Add New User", filling in the user details:
(screenshot)

Next, add a permission policy via "Access Manager" -> "dev_hdfs" -> "Add New Policy", configuring the users, directories, and other scope of the policy:
(screenshot)

Scroll to the bottom and click "Add"; a newly added policy entry then appears:
(screenshot)

Back on the operating system, create and switch to the hive user, then test whether directories and files can be read normally:

[root@hadoop ~]# sudo su - hive
[hive@hadoop ~]$ hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2020-11-12 13:48 /rangertest1
drwxr-xr-x   - root supergroup          0 2020-11-12 13:48 /rangertest2
[hive@hadoop ~]$ hdfs dfs -ls /rangertest1
Found 1 items
-rw-r--r--   1 root supergroup         12 2020-11-12 13:48 /rangertest1/testfile
[hive@hadoop ~]$ hdfs dfs -cat /rangertest1/testfile
ranger test
[hive@hadoop ~]$ hdfs dfs -ls /rangertest2
Found 1 items
-rw-r--r--   1 root supergroup         12 2020-11-12 13:48 /rangertest2/testfile
[hive@hadoop ~]$

The listing shows that the permission bits on the rangertest1 and rangertest2 directories are drwxr-xr-x, meaning users other than root have no write access to either directory.

Testing writes, however, shows that the hive user can add files to rangertest1 but gets an error adding files to rangertest2, because in Ranger we granted the hive user read/write permission only on the rangertest1 directory:

[hive@hadoop ~]$ echo "this is test file 2" > testfile2
[hive@hadoop ~]$ hdfs dfs -put testfile2 /rangertest1
[hive@hadoop ~]$ hdfs dfs -put testfile2 /rangertest2
put: Permission denied: user=hive, access=WRITE, inode="/rangertest2":root:supergroup:drwxr-xr-x
[hive@hadoop ~]$
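The behavior above can be summarized as: the plugin consults Ranger policies first and, when no policy matches, falls back to the native HDFS permission bits. A toy sketch of that decision follows (illustrative only; the helper names and simplified POSIX check are my own, not the plugin's code):

```python
# Toy model of "Ranger policy first, POSIX fallback second" for HDFS access.
def ranger_allows(user, path, access, policies):
    """Return True if a Ranger policy allows, None if no policy matches."""
    for p in policies:
        if path.startswith(p["path"]) and user in p["users"] and access in p["perms"]:
            return True
    return None                    # no match -> fall back to POSIX bits

def posix_allows(user, owner, others_bits, access):
    # Simplified: the owner gets everything, everyone else gets "others" bits.
    return True if user == owner else access in others_bits

def check(user, path, access, policies):
    decision = ranger_allows(user, path, access, policies)
    if decision is None:
        # drwxr-xr-x owned by root -> others get read+execute only
        decision = posix_allows(user, "root", {"read", "execute"}, access)
    return "allowed" if decision else "denied"

policies = [{"path": "/rangertest1", "users": ["hive"], "perms": {"read", "write", "execute"}}]
print(check("hive", "/rangertest1", "write", policies))  # allowed via Ranger policy
print(check("hive", "/rangertest2", "write", policies))  # denied: falls back to drwxr-xr-x
```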

To forbid the hive user from all operations on the rangertest2 directory, add a deny policy: set "Resource Path" to the rangertest2 directory and tick the permissions to deny under "Deny Conditions":
(screenshot)

Once the policy takes effect, the hive user's access to the rangertest2 directory is rejected:

[hive@hadoop ~]$ hdfs dfs -ls /rangertest2
ls: Permission denied: user=hive, access=EXECUTE, inode="/rangertest2"
[hive@hadoop ~]$ hdfs dfs -cat /rangertest2/testfile
cat: Permission denied: user=hive, access=EXECUTE, inode="/rangertest2/testfile"
[hive@hadoop ~]$

With that, Ranger's permission control over HDFS is verified as well. Beyond this, you can run whatever other tests you like.


Installing the Ranger Hive Plugin

First, a working Hive environment needs to be in place; see:

To stay compatible with these Hadoop and Ranger versions, the Hive version used here is 2.3.6:

[root@hadoop ~]# echo $HIVE_HOME
/usr/local/apache-hive-2.3.6-bin
[root@hadoop ~]#

Extract the hive plugin package into a suitable directory:

[root@hadoop ~]# tar -zxvf /usr/local/src/apache-ranger-1.2.0/target/ranger-1.2.0-hive-plugin.tar.gz -C /usr/local/ranger-plugin/
[root@hadoop /usr/local/ranger-plugin]# mv ranger-1.2.0-hive-plugin/ hive-plugin

Enter the extracted directory; its layout is as follows:

[root@hadoop /usr/local/ranger-plugin]# cd hive-plugin/
[root@hadoop /usr/local/ranger-plugin/hive-plugin]# ls
disable-hive-plugin.sh  enable-hive-plugin.sh  install  install.properties  lib  ranger_credential_helper.py  upgrade-hive-plugin.sh  upgrade-plugin.py
[root@hadoop /usr/local/ranger-plugin/hive-plugin]#

Configure the installation options:

[root@hadoop /usr/local/ranger-plugin/hive-plugin]# vim install.properties
# URL of the ranger admin service
POLICY_MGR_URL=http://192.168.243.161:6080
# repository name, customizable
REPOSITORY_NAME=dev_hive
# hive installation directory
COMPONENT_INSTALL_DIR_NAME=/usr/local/apache-hive-2.3.6-bin

# user and group
CUSTOM_USER=root
CUSTOM_GROUP=root

Run the following script to enable the hive-plugin:

[root@hadoop /usr/local/ranger-plugin/hive-plugin]# ./enable-hive-plugin.sh

On success, the script prints:

Ranger Plugin for hive has been enabled. Please restart hive to ensure that changes are effective.

Restart Hive:

[root@hadoop ~]# jps
8258 SecondaryNameNode
9554 EmbeddedServer
8531 NodeManager
13764 Jps
7942 NameNode
11591 RunJar
8040 DataNode
8428 ResourceManager
[root@hadoop ~]# kill -15 11591
[root@hadoop ~]# nohup hiveserver2 -hiveconf hive.execution.engine=mr &

Verifying Permission Control

On Ranger Admin, add a hive service; the Service Name must again match the name in the configuration file:
(screenshot)

Fill in the relevant details and click "Add" to finish:
(screenshot)

  • Tips: the first time you add a hive service, clicking "Test Connection" may report a failure; you can ignore this for now, as long as the plugin shows up on the "Plugins" page

After a short wait, check the "Audit" -> "Plugins" page for the hive plugin; if it does not show up, the plugin was not enabled successfully. The healthy state looks like this:
(screenshot)

With the hive plugin confirmed as integrated, add a permission policy via "Access Manager" -> "dev_hive" -> "Add New Policy", configuring the users, databases, tables, and columns the policy covers:
(screenshot)

Back on the operating system, switch to the hive user and open Hive's interactive shell through beeline:

[root@hadoop ~]# sudo su - hive
Last login: Thu Nov 12 13:53:53 CST 2020 on pts/1
[hive@hadoop ~]$ beeline -u jdbc:hive2://localhost:10000 -n hive

Test the permissions; everything except show tables gets rejected:

0: jdbc:hive2://localhost:10000> show tables;
+-----------------+
|    tab_name     |
+-----------------+
| hive_wordcount  |
+-----------------+
1 row selected (0.126 seconds)
0: jdbc:hive2://localhost:10000> show databases;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hive] does not have [USE] privilege on [*] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000> select * from hive_wordcount;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [hive] does not have [SELECT] privilege on [default/hive_wordcount/*] (state=42000,code=40000)
0: jdbc:hive2://localhost:10000>

That is because we granted the hive user only the permission to drop the hive_wordcount table:

0: jdbc:hive2://localhost:10000> drop table hive_wordcount;
No rows affected (0.222 seconds)
0: jdbc:hive2://localhost:10000>