Please credit the source when reposting: http://www.cnblogs.com/xiaodf/
A previous post described how to implement authentication and permission management for HiveServer2 with Kerberos + Sentry. This post covers how to achieve the same authentication and permission control when accessing Hive databases through Spark SQL over JDBC.
ThriftServer exposes a JDBC/ODBC interface through which users can access Spark SQL data. When ThriftServer starts, it launches a Spark SQL application, and all clients that connect over JDBC/ODBC share that application's resources, which also means different users can share data. ThriftServer also starts a listener that waits for JDBC clients to connect and submit queries. Therefore, when configuring ThriftServer you must at least configure its hostname and port, and if you want to query Hive data you must also provide the Hive metastore URIs.
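In this setup the metastore address comes from the hive-site.xml that is copied into Spark's conf directory below; as an illustration only, the key property looks roughly like this (the host and port are placeholders):

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>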
Prerequisites:
The experiments in this post were carried out with the following deployment in place:
(1) Kerberos authentication is enabled on CDH, and Sentry is installed;
(2) Hive permissions are controlled by the Sentry service;
(3) HDFS ACL / Sentry permission synchronization is enabled, so permission changes made to Hive tables via SQL statements are synchronized to the corresponding HDFS files.
For the configuration of all of the above, see my earlier post: http://www.cnblogs.com/xiaodf/p/5968248.html
The Spark bundled with CDH does not support the Thrift server, so you need to download a pre-built Spark package yourself from http://spark.apache.org/downloads.html
The Spark version used in this post is 1.5.2.
Copy the cluster's hive-site.xml into the conf directory of the Spark installation.
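For illustration, assuming the pre-built package for Hadoop 2.6 was downloaded into the current directory (the archive name below simply follows the directory name used in this post), the steps might look like this:

tar -xzf spark-1.5.2-bin-hadoop2.6.tgz
cp /etc/hive/conf/hive-site.xml spark-1.5.2-bin-hadoop2.6/conf/

The conf directory then contains hive-site.xml: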
[root@t162 spark-1.5.2-bin-hadoop2.6]# cd conf/
[root@t162 conf]# ll
total 52
-rw-r--r-- 1 root root 202 Oct 25 13:05 docker.properties.template
-rw-r--r-- 1 root root 303 Oct 25 13:05 fairscheduler.xml.template
-rw-r--r-- 1 root root 5708 Oct 25 13:08 hive-site.xml
-rw-r--r-- 1 root root 949 Oct 25 13:05 log4j.properties.template
-rw-r--r-- 1 root root 5886 Oct 25 13:05 metrics.properties.template
-rw-r--r-- 1 root root 80 Oct 25 13:05 slaves.template
-rw-r--r-- 1 root root 507 Oct 25 13:05 spark-defaults.conf.template
-rwxr-xr-x 1 root root 4299 Oct 25 13:08 spark-env.sh
-rw-r--r-- 1 root root 3418 Oct 25 13:05 spark-env.sh.template
-rwxr-xr-x 1 root root 119 Oct 25 13:09 stopjdbc.sh
Set the hive-site.xml parameter hive.server2.enable.doAs to true. Note that doAs must be true; otherwise per-user permission control over the Spark JDBC connection will not work.
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
Generate spark-env.sh (for example from spark-env.sh.template) and add the following parameter:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
Here HADOOP_HOME is /opt/cloudera/parcels/CDH/lib/hadoop.
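A minimal spark-env.sh along these lines should be enough; note that HADOOP_CONF_DIR is an assumption here, based on the standard CDH client configuration path, so adjust it to your environment:

#!/usr/bin/env bash
# Minimal spark-env.sh sketch for this CDH cluster
export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf    # assumption: standard CDH client config path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native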
Start the Thrift server by calling the start-thriftserver.sh script; the startjdbc.sh wrapper used here is:
#!/bin/sh
# start Spark-thriftserver
export YARN_CONF_DIR=/etc/hadoop/conf
file="hive-site.xml"
dir=$(pwd)
cd conf/
if [ ! -e "$file" ]
then
    cp /etc/hive/conf/hive-site.xml $dir/conf/
fi
cd ../sbin
./start-thriftserver.sh --name SparkJDBC --master yarn-client \
    --num-executors 10 --executor-memory 2g --executor-cores 4 \
    --driver-memory 10g --driver-cores 2 \
    --conf spark.storage.memoryFraction=0.2 \
    --conf spark.shuffle.memoryFraction=0.6 \
    --hiveconf hive.server2.thrift.port=10001 \
    --hiveconf hive.server2.logging.operation.enabled=true \
    --hiveconf hive.server2.authentication.kerberos.principal=hive/t162@HADOOP.COM \
    --hiveconf hive.server2.authentication.kerberos.keytab=/home/hive.keytab
This script essentially just submits a Spark job; its main parameters are:
master: the Spark submit mode, here yarn-client
hive.server2.thrift.port: the port the Thrift server listens on
hive.server2.authentication.kerberos.principal: the superuser principal that starts the Thrift server, here hive
hive.server2.authentication.kerberos.keytab: the keytab of that superuser
startjdbc.sh has to be run after kinit-ing as the Hive superuser. With Sentry/HDFS permission synchronization enabled, this superuser must hold privileges on the entire Hive warehouse, which means it also has full permissions on the whole Hive directory tree in HDFS.
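Using the keytab and principal from the start script above, that authentication step looks like this:

kinit -kt /home/hive.keytab hive/t162@HADOOP.COM
klist    # verify that the TGT for the hive principal is in the ticket cache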
The matching stopjdbc.sh script stops the service:

#!/bin/sh
# Stop SparkJDBC
cd sbin
./spark-daemon.sh stop org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 1
The purpose of Spark SQL Thrift server authentication is to let different users log in through beeline with their own identities. Kerberos indeed covers both mutual service authentication and user authentication.
The service is started with the administrator account, as configured in the start script. The Thrift server is really a Spark job submitted to YARN via spark-submit, so this account needs access to YARN and HDFS; with an ordinary account, startup may fail for lack of HDFS permissions, since the job has to write some data to HDFS.
[root@t162 spark-1.5.2-bin-hadoop2.6]# ./startjdbc.sh
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /home/iie/spark-1.5.2-bin-hadoop2/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-t162.out
......
16/10/25 16:56:07 INFO thrift.ThriftCLIService: Starting ThriftBinaryCLIService on port 10001 with 5...500 worker threads
You can check how the service startup is going from the output log:
[root@t162 spark-1.5.2-bin-hadoop2.6]# tailf /home/iie/spark-1.5.2-bin-hadoop2/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-t162.out
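Since the Thrift server runs as a YARN application under the name given with --name (SparkJDBC), a quick sanity check with the YARN CLI can also confirm that it is up, for example:

yarn application -list -appStates RUNNING | grep SparkJDBC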
Because Kerberos authentication is enabled on the service, connecting without authenticating first fails, as shown below:
[root@t161 ~]# beeline -u "jdbc:hive2://t162:10001/;principal=hive/t162@HADOOP.COM"
16/10/25 16:59:04 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
scan complete in 2ms
Connecting to jdbc:hive2://t162:10001/;principal=hive/t162@HADOOP.COM
16/10/25 16:59:06 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
After authenticating as user1, the connection succeeds. This user was created beforehand; see http://www.cnblogs.com/xiaodf/p/5968282.html for how to create it.
[root@t161 ~]# kinit user1
Password for user1@HADOOP.COM:
[root@t161 ~]# beeline -u "jdbc:hive2://t162:10001/;principal=hive/t162@HADOOP.COM"
16/10/25 17:01:46 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
scan complete in 3ms
Connecting to jdbc:hive2://t162:10001/;principal=hive/t162@HADOOP.COM
Connected to: Spark SQL (version 1.5.2)
Driver: Hive JDBC (version 1.1.0-cdh5.7.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.7.2 by Apache Hive
0: jdbc:hive2://t162:10001/>
Different users kinit with their own principal and password, authenticate against the Kerberos AS to obtain a TGT, and can then log in to the Spark SQL Thrift server and browse databases and tables.
Unfortunately, the Spark Thrift server does not yet support SQL-based authorization, so isolation can only be enforced at the underlying HDFS level; Hive is more complete in this respect, since it supports SQL standard-based authorization.
Because HDFS ACL / Sentry permission synchronization has already been enabled, user permissions for Spark SQL JDBC are implemented through HiveServer2's permission settings: first connect to HiveServer2 over JDBC and set user permissions with Hive SQL statements; the table and database permissions are then synchronized to the corresponding HDFS directories and files, which gives the Spark SQL Thrift server user isolation based on the underlying HDFS permissions.
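A rough sketch of such a grant through beeline against HiveServer2 is shown below; it has to be run by a Sentry admin, and the HiveServer2 principal, role name, and group name are placeholders (Sentry grants privileges to roles, and roles to groups):

# Hypothetical example: give user1's group read access to test.table1 via HiveServer2/Sentry
beeline -u "jdbc:hive2://node1:10000/;principal=hive/node1@HADOOP.COM" \
  -e "CREATE ROLE user1_read" \
  -e "GRANT SELECT ON TABLE test.table1 TO ROLE user1_read" \
  -e "GRANT ROLE user1_read TO GROUP user1"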
As shown below, user1 has permission on table1 in the test database but not on table2; reading table2 fails with an HDFS permission error, which confirms that the permission settings took effect.
0: jdbc:hive2://node1:10000/> select * from test.table1 limit 1;
+--------------+-------------+---------------------+----------+-----------+----------+---------------------------+-----------+-----------+------------------------+------------+---------------+-------------+--+
| cint | cbigint | cfloat | cdouble | cdecimal | cstring | cvarchar | cboolean | ctinyint | ctimestamp | csmallint | cipv4 | cdate |
+--------------+-------------+---------------------+----------+-----------+----------+---------------------------+-----------+-----------+------------------------+------------+---------------+-------------+--+
| 15000000001 | 1459107060 | 1.8990000486373901 | 1.7884 | 1.92482 | 中文測試1 | /browser/addBasicInfo.do | true | -127 | 2014-05-14 00:53:21.0 | -63 | 0 | 2014-05-14 |
+--------------+-------------+---------------------+----------+-----------+----------+---------------------------+-----------+-----------+------------------------+------------+---------------+-------------+--+
1 row selected (3.165 seconds)
0: jdbc:hive2://node1:10000/> select * from test.table2 limit 10;
Error: org.apache.hadoop.security.AccessControlException: Permission denied: user=user1, access=READ_EXECUTE, inode="/user/hive/warehouse/test.db/table2":hive:hive:drwxrwx--x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkAccessAcl(DefaultAuthorizationProvider.java:365)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:258)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:175)
    at org.apache.sentry.hdfs.SentryAuthorizationProvider.checkPermission(SentryAuthorizationProvider.java:178)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6617)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6524)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:5061)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:5022)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:882)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getListing(AuthorizationProviderProxyClientProtocol.java:335)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:615)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) (state=,code=0)
Permission tests can be found in the earlier post http://www.cnblogs.com/xiaodf/p/5968282.html and are omitted here.
A problem worth noting: with Spark 1.6.0, after starting the Thrift server, running "show databases" failed with the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException:
Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
From what I could find, this may be a bug in the 1.6 release; after switching to 1.5.2 the problem disappeared. Related discussion: https://forums.databricks.com/questions/7207/spark-thrift-server-on-kerberos-enabled-hadoophive.html
Another problem: seven days after the Spark SQL Thrift server was started, users could no longer connect to it with beeline.
The service log reported the following errors:
17/01/18 13:46:08 INFO HiveMetaStore.audit: ugi=hive/t162@HADOOP.COM ip=unknown-ip-addr cmd=Metastore shutdown complete.
17/01/18 13:46:08 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:08 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:08 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:08 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:09 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:12 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:17 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:19 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 7 days before.
17/01/18 13:46:19 WARN ipc.Client: Couldn't setup connection for hive/t162@HADOOP.COM to t162/t161:8020
17/01/18 13:46:19 WARN thrift.ThriftCLIService: Error opening session: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException:
Failed on local exception: java.io.IOException: Couldn't setup connection for hive/t162@HADOOP.COM to t162/t161:8020; Host Details :
local host is: "t162/t161"; destination host is: "t162":8020;
    at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:264)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLSessionMa
Cause: when the Kerberos database was created, we set the ticket lifetime and the maximum renew lifetime for principals, as shown in /etc/krb5.conf:
[libdefaults]
    default_realm = HADOOP.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    renewable = true
After 7 days the credentials can no longer be renewed, the service's Kerberos authentication fails, and users cannot connect anymore. To fix this we need to re-kinit the service principal periodically, so the start script is modified to add a timed re-authentication loop, as shown below:
#!/bin/sh
# start Spark-thriftserver
export YARN_CONF_DIR=/etc/hadoop/conf
file="hive-site.xml"
dir=$(pwd)
cd conf/
if [ ! -e "$file" ]
then
    cp /etc/hive/conf/hive-site.xml $dir/conf/
fi
cd ../sbin
./start-thriftserver.sh --name SparkJDBC --master yarn-client \
    --num-executors 10 --executor-memory 2g --executor-cores 4 \
    --driver-memory 10g --driver-cores 2 \
    --conf spark.storage.memoryFraction=0.2 \
    --conf spark.shuffle.memoryFraction=0.6 \
    --hiveconf hive.server2.thrift.port=10001 \
    --hiveconf hive.server2.logging.operation.enabled=true \
    --hiveconf hive.server2.authentication.kerberos.principal=hive/t162@HADOOP.COM \
    --hiveconf hive.server2.authentication.kerberos.keytab=/home/hive.keytab

# Re-authenticate the service principal every 6 days, before the 7-day renew limit expires
while true
do
    kinit -kt /home/hive.keytab hive/t162@HADOOP.COM
    sleep $((6*24*3600))
done &
After testing, the problem is solved.