Hadoop mapred-queue-acls configuration (reposted)

When submitting a Hadoop job, the target queue can be specified, for example: -Dmapred.job.queue.name=queue2
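For instance, a MapReduce job whose driver uses ToolRunner/GenericOptionsParser can be pointed at queue2 from the command line roughly as follows; the example jar name and HDFS paths are placeholders, not taken from the original post:

hadoop jar hadoop-examples.jar wordcount \
  -Dmapred.job.queue.name=queue2 \
  /user/hadoop/input /user/hadoop/output
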
By configuring mapred-queue-acls.xml and mapred-site.xml, different users can be granted submit permission on different queues.
First, edit mapred-site.xml as follows (adding four queues in addition to default):

<property> 
  <name>mapred.queue.names</name> 
  <value>default,queue1,queue2,queue3,queue4</value> 
</property>

Once the change takes effect, the configured queues can be seen in the JobTracker web UI.
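
The same information can also be checked from the command line; the queue subcommands below list the configured queues and their scheduling details:

hadoop queue -list 
hadoop queue -info queue2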

 

To control access to the queues, the mapred-queue-acls.xml file also needs to be edited:

<property> 
  <name>mapred.queue.queue1.acl-submit-job</name> 
  <value> </value> 
  <description> Comma separated list of user and group names that are allowed 
   to submit jobs to the 'default' queue. The user list and the group list 
   are separated by a blank. For e.g. user1,user2 group1,group2. 
   If set to the special value '*', it means all users are allowed to 
   submit jobs. If set to ' '(i.e. space), no user will be allowed to submit 
   jobs. 
 
   It is only used if authorization is enabled in Map/Reduce by setting the 
   configuration property mapred.acls.enabled to true. 
   Irrespective of this ACL configuration, the user who started the cluster and 
   cluster administrators configured via 
   mapreduce.cluster.administrators can submit jobs. 
  </description> 
</property> 

 

To configure additional queues, simply repeat the property above, changing the queue name and its value; an illustrative example follows below. To make testing straightforward, queue1 forbids all users from submitting jobs to it.
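
As a counterpart to the locked-down queue1, a queue can be opened to specific users and groups using the format described in the property description above (comma-separated user names, a blank, then comma-separated group names). The snippet below is only a sketch with placeholder names (user1, user2, group1); in the test that follows, queue2 is left open so that jobs can still be submitted to it.

<property> 
  <name>mapred.queue.queue2.acl-submit-job</name> 
  <value>user1,user2 group1</value> 
</property>
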
For these ACLs to be enforced, mapred-site.xml must also be edited to set mapred.acls.enabled to true:

<property> 
  <name>mapred.acls.enabled</name> 
  <value>true</value> 
</property> 
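
Depending on the build, the JobTracker may also be able to reload queue ACL changes without a full restart; the 0.20-security / 1.x line ships an mradmin subcommand for this, though its availability in a given CDH3 release should be verified:

hadoop mradmin -refreshQueues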

Restart Hadoop so that the configuration takes effect, then test it with Hive.

First, submit through the queue2 queue:

hive> set mapred.job.queue.name=queue2; 
hive>  
    > select count(*) from t_aa_pc_log; 
Total MapReduce jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 1 
In order to change the average load for a reducer (in bytes): 
  set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
  set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
  set mapred.reduce.tasks=<number> 
Starting Job = job_201205211843_0002, Tracking URL = http://192.168.189.128:50030/jobdetails.jsp?jobid=job_201205211843_0002 
Kill Command = /opt/app/hadoop-0.20.2-cdh3u3/bin/hadoop job  -Dmapred.job.tracker=192.168.189.128:9020 -kill job_201205211843_0002 
2012-05-21 18:45:01,593 Stage-1 map = 0%,  reduce = 0% 
2012-05-21 18:45:04,613 Stage-1 map = 100%,  reduce = 0% 
2012-05-21 18:45:12,695 Stage-1 map = 100%,  reduce = 100% 
Ended Job = job_201205211843_0002 
OK 
136003 
Time taken: 14.674 seconds 
hive>  

The job completed successfully.

Now submit a job to the queue1 queue:

   > set mapred.job.queue.name=queue1; 
hive> select count(*) from t_aa_pc_log; 
Total MapReduce jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 1 
In order to change the average load for a reducer (in bytes): 
  set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
  set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
  set mapred.reduce.tasks=<number> 
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: User p_sdo_data_01 cannot perform operation SUBMIT_JOB on queue queue1. 
 Please run "hadoop queue -showacls" command to find the queues you have access to . 
    at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:179) 
    at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:136) 
    at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:113) 
    at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3781) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
    at java.lang.reflect.Method.invoke(Method.java:597) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428) 

The job submission failed!

Finally, the hadoop queue -showacls command can be used to check which queues the current user has access to:

[hadoop@localhost conf]$ hadoop queue -showacls 
Queue acls for user :  hadoop 
 
Queue  Operations 
===================== 
queue1  administer-jobs 
queue2  submit-job,administer-jobs 
queue3  submit-job,administer-jobs 
queue4  submit-job,administer-jobs 

 

Reposted from http://yaoyinjie.blog.51cto.com/3189782/872294
