This post documents installing HDP with ambari-server. Compared with installing CDH via cloudera-manager, I have to say Ambari is noticeably less user-friendly ~_~ ; it requires more user intervention along the way, or, put another way, it is more customizable.
First, before installing, run the following command on every host node to clear the yum cache and avoid installation failures caused by stale repo metadata.
yum clean all
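If passwordless root SSH between the nodes is already set up, a small loop like the sketch below can run the cleanup on all hosts at once; the host list ep-bd01 through ep-bd05 is an assumption taken from the rest of this post:
for h in ep-bd01 ep-bd02 ep-bd03 ep-bd04 ep-bd05; do
    # clean the yum cache on every node to avoid stale local-repo metadata
    ssh "$h" "yum clean all"
done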
The installation process follows.
Part 1: The Installation Process
1. Log in to the ambari-server management UI by pointing a browser at http://ep-bd01:8080; the default username and password are both admin.
2. Click the "LAUNCH INSTALL WIZARD" button, give the cluster a name (EPBD here), and go to the next step.
4. Select HDP version 3.0.0.0 and configure the repo addresses.
In this step Ambari automatically lists the repo IDs of the HDP version configured in the local repo.
Below that is the repository setup. Choose the local repository option, delete every operating system other than "Redhat7", and use the repository base URLs configured earlier in hdp-local.repo:
http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634
http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634
http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22
Then check "Use RedHat Satellite/Spacewalk". The repository names can now be edited; make sure they match the ones in the hdp.repo file configured earlier, then click Next.
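For reference, here is a minimal sketch of what the local repo file might look like, using the base URLs above; the file name hdp-local.repo comes from this post, while the repo IDs shown are assumptions and only need to match whatever names are entered in the wizard:
cat > /etc/yum.repos.d/hdp-local.repo <<'EOF'
[HDP-3.0]
name=HDP-3.0
baseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634
enabled=1
gpgcheck=0

[HDP-3.0-GPL]
name=HDP-3.0-GPL
baseurl=http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22]
name=HDP-UTILS-1.1.0.22
baseurl=http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22
enabled=1
gpgcheck=0
EOF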
5. [Target Hosts]: enter the list of hosts in the cluster. Host names may be written with a bracketed numeric-suffix range (e.g. ep-bd[01-05]); click "Pattern Expressions" for the details.
Hosts can be registered via SSH, which requires supplying the private key used for passwordless SSH access;
or via "Perform manual registration on hosts and do not use SSH", which requires ambari-agent to be installed on every host beforehand. Since I had already done that in the previous post, this is the option I chose. In a side-by-side test, registering hosts over SSH was slightly faster.
Proceed with "REGISTER AND CONFIRM". Ambari may warn that the host names are not fully qualified (FQDN); this can be ignored, just continue.
6. Click Next to start the host checks. Once they pass, click "Click here to see the check results" to review the results.
7. Choose the file systems and services. I accepted the defaults here and clicked Next.
Note: after being beaten down by countless failures, I eventually dropped the Ranger and Ranger KMS services; I never found the cause. Here is the log from one of those failures:
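Before clicking "REGISTER AND CONFIRM" it can be worth verifying that every host reports a fully qualified name and has its agent running; a quick check, assuming root SSH access to the same five hosts:
for h in ep-bd01 ep-bd02 ep-bd03 ep-bd04 ep-bd05; do
    # print the FQDN and the ambari-agent service state for each host
    ssh "$h" "hostname -f; systemctl is-active ambari-agent"
done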
stderr: 2018-08-17 12:04:22,639 - The 'ranger-kms' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (3.0.0.0-1634). This is the version that will be reported. Traceback (most recent call last): File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 137, in <module> KmsServer().execute() File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute method(env) File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 52, in install self.configure(env) File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 94, in configure kms.kms() File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms.py", line 150, in kms create_parents = True File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__ self.env.run() File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run self.run_action(resource, action) File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action provider_action() File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 177, in action_create raise Fail("Applying %s failed, looped symbolic links found while resolving %s" % (self.resource, path)) resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/ranger-kms/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/ranger-kms/conf stdout: 2018-08-17 12:04:22,355 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0 2018-08-17 12:04:22,358 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2018-08-17 12:04:22,359 - Group['kms'] {} 2018-08-17 12:04:22,360 - Group['livy'] {} 2018-08-17 12:04:22,360 - Group['spark'] {} 2018-08-17 12:04:22,360 - Group['ranger'] {} 2018-08-17 12:04:22,360 - Group['hdfs'] {} 2018-08-17 12:04:22,360 - Group['zeppelin'] {} 2018-08-17 12:04:22,360 - Group['hadoop'] {} 2018-08-17 12:04:22,361 - Group['users'] {} 2018-08-17 12:04:22,361 - Group['knox'] {} 2018-08-17 12:04:22,361 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,362 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,363 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,363 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,364 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,365 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2018-08-17 12:04:22,366 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,366 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,367 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None} 2018-08-17 12:04:22,368 - User['tez'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2018-08-17 12:04:22,368 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None} 2018-08-17 12:04:22,369 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None} 2018-08-17 12:04:22,370 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,370 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None} 2018-08-17 12:04:22,371 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,372 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None} 2018-08-17 12:04:22,373 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2018-08-17 12:04:22,373 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,374 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2018-08-17 12:04:22,375 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,376 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,376 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,377 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2018-08-17 12:04:22,378 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None} 2018-08-17 12:04:22,378 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-08-17 12:04:22,379 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2018-08-17 12:04:22,383 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2018-08-17 12:04:22,383 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2018-08-17 12:04:22,384 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-08-17 12:04:22,385 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-08-17 12:04:22,385 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2018-08-17 12:04:22,391 - call returned (0, '1015') 2018-08-17 12:04:22,392 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2018-08-17 12:04:22,395 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] due to not_if 2018-08-17 12:04:22,396 - Group['hdfs'] {} 2018-08-17 12:04:22,396 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 
2018-08-17 12:04:22,396 - FS Type: HDFS 2018-08-17 12:04:22,396 - Directory['/etc/hadoop'] {'mode': 0755} 2018-08-17 12:04:22,406 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2018-08-17 12:04:22,407 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2018-08-17 12:04:22,419 - Repository['HDP-3.0-repo-1'] {'append_to_file': False, 'base_url': 'http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None} 2018-08-17 12:04:22,424 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-08-17 12:04:22,424 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match 2018-08-17 12:04:22,424 - Repository['HDP-3.0-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None} 2018-08-17 12:04:22,427 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-08-17 12:04:22,427 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match 2018-08-17 12:04:22,444 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None} 2018-08-17 12:04:22,454 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-08-17 12:04:22,455 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match 2018-08-17 12:04:22,472 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-08-17 12:04:22,564 - Skipping installation of existing package unzip 2018-08-17 12:04:22,564 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-08-17 12:04:22,571 - Skipping installation of existing package curl 2018-08-17 12:04:22,571 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-08-17 12:04:22,579 - Skipping 
installation of existing package hdp-select 2018-08-17 12:04:22,622 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {} 2018-08-17 12:04:22,639 - call returned (0, '3.0.0.0-1634') 2018-08-17 12:04:22,639 - The 'ranger-kms' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (3.0.0.0-1634). This is the version that will be reported. 2018-08-17 12:04:22,822 - Command repositories: HDP-3.0-repo-1, HDP-3.0-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1 2018-08-17 12:04:22,822 - Applicable repositories: HDP-3.0-repo-1, HDP-3.0-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1 2018-08-17 12:04:22,822 - Looking for matching packages in the following repositories: HDP-3.0-repo-1, HDP-3.0-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1 2018-08-17 12:04:26,159 - Adding fallback repositories: HDP-UTILS-1.1.0.22, HDP-3.0-GPL, HDP-3.0 2018-08-17 12:04:29,472 - Package['ranger_3_0_0_0_1634-kms'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-08-17 12:04:29,527 - Installing package ranger_3_0_0_0_1634-kms ('/usr/bin/yum -y install ranger_3_0_0_0_1634-kms') 2018-08-17 12:04:41,340 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0 2018-08-17 12:04:41,355 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2018-08-17 12:04:41,359 - Execute[('cp', '-f', u'/usr/hdp/current/ranger-kms/install.properties', u'/usr/hdp/current/ranger-kms/install-backup.properties')] {'not_if': 'ls /usr/hdp/current/ranger-kms/install-backup.properties', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-kms/install.properties'} 2018-08-17 12:04:41,371 - Password validated 2018-08-17 12:04:41,372 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://ep-bd01:8080/resources/mysql-connector-java.jar'), 'mode': 0644} 2018-08-17 12:04:41,372 - Not downloading the file from http://ep-bd01:8080/resources/mysql-connector-java.jar, because /var/lib/ambari-agent/tmp/mysql-connector-java.jar already exists 2018-08-17 12:04:41,373 - Directory['/usr/hdp/current/ranger-kms/ews/lib'] {'mode': 0755} 2018-08-17 12:04:41,373 - Creating directory Directory['/usr/hdp/current/ranger-kms/ews/lib'] since it doesn't exist. 
2018-08-17 12:04:41,374 - Execute[('cp', '--remove-destination', u'/var/lib/ambari-agent/tmp/mysql-connector-java.jar', u'/usr/hdp/current/ranger-kms/ews/webapp/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True} 2018-08-17 12:04:41,379 - File['/usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar'] {'mode': 0644} 2018-08-17 12:04:41,380 - ModifyPropertiesFile['/usr/hdp/current/ranger-kms/install.properties'] {'owner': 'kms', 'properties': ...} 2018-08-17 12:04:41,380 - Modifying existing properties file: /usr/hdp/current/ranger-kms/install.properties 2018-08-17 12:04:41,387 - File['/usr/hdp/current/ranger-kms/install.properties'] {'owner': 'kms', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2018-08-17 12:04:41,388 - Writing File['/usr/hdp/current/ranger-kms/install.properties'] because contents don't match 2018-08-17 12:04:41,388 - Changing owner for /usr/hdp/current/ranger-kms/install.properties from 0 to kms 2018-08-17 12:04:41,388 - ModifyPropertiesFile['/usr/hdp/current/ranger-kms/install.properties'] {'owner': 'kms', 'properties': {'SQL_CONNECTOR_JAR': u'/usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar'}} 2018-08-17 12:04:41,388 - Modifying existing properties file: /usr/hdp/current/ranger-kms/install.properties 2018-08-17 12:04:41,389 - File['/usr/hdp/current/ranger-kms/install.properties'] {'owner': 'kms', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'} 2018-08-17 12:04:41,389 - Setting up Ranger KMS DB and DB User 2018-08-17 12:04:41,389 - Execute['ambari-python-wrap /usr/hdp/current/ranger-kms/dba_script.py -q'] {'logoutput': True, 'environment': {'RANGER_KMS_HOME': u'/usr/hdp/current/ranger-kms', 'JAVA_HOME': u'/usr/java/jdk1.8.0_181-amd64'}, 'tries': 5, 'user': 'kms', 'try_sleep': 10} 2018-08-17 12:04:41,451 [I] Running DBA setup script. 
QuiteMode:True 2018-08-17 12:04:41,451 [I] Using Java:/usr/java/jdk1.8.0_181-amd64/bin/java 2018-08-17 12:04:41,451 [I] DB FLAVOR:MYSQL 2018-08-17 12:04:41,451 [I] DB Host:ep-bd01 2018-08-17 12:04:41,451 [I] ---------- Verifing DB root password ---------- 2018-08-17 12:04:41,451 [I] DBA root user password validated 2018-08-17 12:04:41,451 [I] ---------- Verifing Ranger KMS db user password ---------- 2018-08-17 12:04:41,451 [I] KMS user password validated 2018-08-17 12:04:41,451 [I] ---------- Creating Ranger KMS db user ---------- 2018-08-17 12:04:41,451 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "SELECT version();" 2018-08-17 12:04:41,716 [I] Verifying user rangerkms for Host % 2018-08-17 12:04:41,716 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "select user from mysql.user where user='rangerkms' and host='%';" 2018-08-17 12:04:41,981 [I] MySQL user rangerkms already exists for host % 2018-08-17 12:04:41,981 [I] Verifying user rangerkms for Host localhost 2018-08-17 12:04:41,981 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "select user from mysql.user where user='rangerkms' and host='localhost';" 2018-08-17 12:04:42,250 [I] MySQL user rangerkms already exists for host localhost 2018-08-17 12:04:42,250 [I] Verifying user rangerkms for Host ep-bd01 2018-08-17 12:04:42,250 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "select user from mysql.user where user='rangerkms' and host='ep-bd01';" 2018-08-17 12:04:42,525 [I] MySQL user rangerkms already exists for host ep-bd01 2018-08-17 12:04:42,525 [I] ---------- Creating Ranger KMS database ---------- 2018-08-17 12:04:42,525 [I] Verifying database rangerkms 2018-08-17 12:04:42,525 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "show databases like 'rangerkms';" 2018-08-17 12:04:42,788 [I] Database rangerkms already exists. 
2018-08-17 12:04:42,788 [I] ---------- Granting permission to Ranger KMS db user ---------- 2018-08-17 12:04:42,788 [I] ---------- Granting privileges TO user 'rangerkms'@'%' on db 'rangerkms'---------- 2018-08-17 12:04:42,788 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "grant all privileges on rangerkms.* to 'rangerkms'@'%' with grant option;" 2018-08-17 12:04:43,048 [I] ---------- FLUSH PRIVILEGES ---------- 2018-08-17 12:04:43,048 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "FLUSH PRIVILEGES;" 2018-08-17 12:04:43,544 [I] Privileges granted to 'rangerkms' on 'rangerkms' 2018-08-17 12:04:43,544 [I] ---------- Granting privileges TO user 'rangerkms'@'localhost' on db 'rangerkms'---------- 2018-08-17 12:04:43,544 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "grant all privileges on rangerkms.* to 'rangerkms'@'localhost' with grant option;" 2018-08-17 12:04:43,810 [I] ---------- FLUSH PRIVILEGES ---------- 2018-08-17 12:04:43,810 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "FLUSH PRIVILEGES;" 2018-08-17 12:04:44,080 [I] Privileges granted to 'rangerkms' on 'rangerkms' 2018-08-17 12:04:44,080 [I] ---------- Granting privileges TO user 'rangerkms'@'ep-bd01' on db 'rangerkms'---------- 2018-08-17 12:04:44,080 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "grant all privileges on rangerkms.* to 'rangerkms'@'ep-bd01' with grant option;" 2018-08-17 12:04:44,353 [I] ---------- FLUSH PRIVILEGES ---------- 2018-08-17 12:04:44,353 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/mysql -u root -p '********' -noheader -trim -c \; -query "FLUSH PRIVILEGES;" 2018-08-17 12:04:44,619 [I] Privileges granted to 'rangerkms' on 'rangerkms' 2018-08-17 12:04:44,619 [I] ---------- Ranger KMS DB and User Creation Process Completed.. 
---------- 2018-08-17 12:04:44,624 - Execute['ambari-python-wrap /usr/hdp/current/ranger-kms/db_setup.py'] {'logoutput': True, 'environment': {'PATH': '/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent', 'RANGER_KMS_HOME': u'/usr/hdp/current/ranger-kms', 'JAVA_HOME': u'/usr/java/jdk1.8.0_181-amd64'}, 'tries': 5, 'user': 'kms', 'try_sleep': 10} 2018-08-17 12:04:44,679 [I] DB FLAVOR :MYSQL 2018-08-17 12:04:44,679 [I] --------- Verifying Ranger DB connection --------- 2018-08-17 12:04:44,679 [I] Checking connection.. 2018-08-17 12:04:44,679 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/rangerkms -u 'rangerkms' -p '********' -noheader -trim -c \; -query "SELECT version();" 2018-08-17 12:04:44,947 [I] Checking connection passed. 2018-08-17 12:04:44,947 [I] --------- Verifying Ranger DB tables --------- 2018-08-17 12:04:44,947 [JISQL] /usr/java/jdk1.8.0_181-amd64/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://ep-bd01/rangerkms -u 'rangerkms' -p '********' -noheader -trim -c \; -query "show tables like 'ranger_masterkey';" 2018-08-17 12:04:45,211 [I] Table ranger_masterkey already exists in database 'rangerkms' 2018-08-17 12:04:45,217 - Directory['/usr/hdp/current/ranger-kms/conf'] {'owner': 'kms', 'group': 'kms', 'create_parents': True} 2018-08-17 12:04:45,217 - Creating directory Directory['/usr/hdp/current/ranger-kms/conf'] since it doesn't exist. Command failed after 1 tries
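The key line in that traceback is "looped symbolic links found while resolving /usr/hdp/current/ranger-kms/conf". A few commands that can help confirm whether the conf path really is a circular symlink chain on the affected host (diagnostic only; which link to delete depends on what the output shows):
# show every component of the path and where each symlink points
namei -l /usr/hdp/current/ranger-kms/conf
# show the links involved; /etc/ranger/kms/conf is the other end of the chain in typical HDP layouts (assumption)
ls -ld /usr/hdp/current/ranger-kms /usr/hdp/current/ranger-kms/conf /etc/ranger/kms/conf
# canonicalization fails (non-zero exit, no output) if the chain loops back on itself
readlink -f /usr/hdp/current/ranger-kms/conf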
8. [Assign Masters]
9. [Assign Slaves and Clients]
10. [CREDENTIALS]: I pasted the same password into every field, except for Ranger Admin, which I kept unchanged.
11. [DATABASES]: for every service I used the service name as both the database name and the user name.
The databases and users for Hive and Oozie must be created by hand, and the Druid user must also be created in advance:
[root@ep-bd01 downloads]# mysql -uroot -phadoop
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 22
Server version: 5.5.56-MariaDB MariaDB Server
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [mysql]> create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
MariaDB [mysql]> grant all privileges on *.* to 'oozie'@'%' identified by 'oozie';
MariaDB [mysql]> create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
MariaDB [mysql]> grant all privileges on *.* to 'hive'@'%' identified by 'hive';
MariaDB [mysql]> grant all privileges on *.* to 'druid'@'%' identified by 'druid';
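Before moving on, a quick sanity check that the users and databases really exist can save a failed TEST CONNECTION later; a sketch run on the database host (root password prompted):
# list the manually created users and their allowed hosts
mysql -uroot -p -e "select user,host from mysql.user where user in ('hive','oozie','druid');"
# confirm the hive and oozie databases exist
mysql -uroot -p -e "show databases;" | grep -E 'hive|oozie'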
For Hive, the database configuration must be set to "Existing MySQL/MariaDB". Note that the password has to match the user created beforehand; the default will not work. After filling it in, click "TEST CONNECTION" and make sure it succeeds.
The Oozie settings are essentially the same as Hive's, and the connection test must pass as well.
The Ranger database configuration requires the host running the database and the database root password, and the connection must test successfully. Ranger KMS is similar, but there is no connection test, so fill it in carefully: in one of my failed attempts, the cause was a single mistyped letter in the database host name entered at this step.
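One thing that can make "TEST CONNECTION" fail even with correct credentials is a missing MySQL JDBC driver on the Ambari side. Judging from the install log above it was already registered in this setup, but for completeness this is the usual way to do it (the jar path is an assumption; adjust it to wherever mysql-connector-java is installed):
# register the MySQL JDBC driver with ambari-server so the wizard can test and use the connections
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar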
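Since a single mistyped letter in the database host name was enough to sink one attempt, a quick reachability check from the node that will run Ranger KMS is cheap insurance; a sketch assuming the database lives on ep-bd01 as in this post:
# confirm the name resolves to the expected address
getent hosts ep-bd01
# confirm a remote MySQL login actually works from this node
mysql -h ep-bd01 -u root -p -e "SELECT version();"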
12. [DIRECTORIES], [ACCOUNTS], [ALL CONFIGURATIONS]
I accepted all the defaults and went straight to the next step.
14. [REVIEW]
Nothing much to say here: click Next and wait......
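While the install runs, the wizard shows per-task logs, but when something fails it is often faster to tail the logs directly; the paths below are the Ambari defaults, assuming they have not been changed:
# on the server node
tail -f /var/log/ambari-server/ambari-server.log
# on the host where a component failed
tail -f /var/log/ambari-agent/ambari-agent.log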
Part 2: Pitfalls and Lessons Learned
(1) [Reset ambari-server and start the installation over]
1. Reset ambari-server:
systemctl stop ambari-server
ambari-server reset
2. Because ambari-server keeps its data in the database and cannot reset the MariaDB tables automatically, the ambari database has to be dropped and recreated by hand:
mysql -uroot -p
use mysql;
drop database ambari;
create database ambari;
use ambari;
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;
3. Restart ambari-server and the ambari-agent on every host:
systemctl restart ambari-server
systemctl restart ambari-agent
ssh ep-bd02 systemctl restart ambari-agent
ssh ep-bd03 systemctl restart ambari-agent
ssh ep-bd04 systemctl restart ambari-agent
ssh ep-bd05 systemctl restart ambari-agent
4. Uninstall the component packages that were already installed (skipping this makes the next installation attempt fail):
yum erase -y -C ranger_3_0_0_0_1634-admin hive_3_0_0_0_1634 ambari-infra-solr-client oozie_3_0_0_0_1634-client oozie_3_0_0_0_1634-webapp oozie_3_0_0_0_1634-sharelib-sqoop hadoop_3_0_0_0_1634-libhdfs ranger_3_0_0_0_1634-kafka-plugin ranger_3_0_0_0_1634-hive-plugin druid_3_0_0_0_1634 tez_3_0_0_0_1634 oozie_3_0_0_0_1634-sharelib-pig ranger_3_0_0_0_1634-usersync ranger_3_0_0_0_1634-hbase-plugin accumulo_3_0_0_0_1634 ranger_3_0_0_0_1634-yarn-plugin oozie_3_0_0_0_1634-sharelib-distcp hive_3_0_0_0_1634-jdbc knox_3_0_0_0_1634 oozie_3_0_0_0_1634-sharelib-hcatalog hadoop_3_0_0_0_1634 phoenix_3_0_0_0_1634 atlas-metadata_3_0_0_0_1634-hbase-plugin hbase_3_0_0_0_1634 storm_3_0_0_0_1634 ranger_3_0_0_0_1634-hdfs-plugin hadoop_3_0_0_0_1634-hdfs ranger_3_0_0_0_1634-tagsync atlas-metadata_3_0_0_0_1634-storm-plugin ranger_3_0_0_0_1634-storm-plugin ranger_3_0_0_0_1634-knox-plugin ambari-metrics-grafana oozie_3_0_0_0_1634-common kafka_3_0_0_0_1634 spark2_3_0_0_0_1634-yarn-shuffle oozie_3_0_0_0_1634-sharelib-hive2 oozie_3_0_0_0_1634-sharelib-spark bigtop-jsvc oozie_3_0_0_0_1634-sharelib-mapreduce-streaming bigtop-tomcat atlas-metadata_3_0_0_0_1634 oozie_3_0_0_0_1634-sharelib atlas-metadata_3_0_0_0_1634-hive-plugin oozie_3_0_0_0_1634-sharelib-hive ambari-infra-solr hive_3_0_0_0_1634-hcatalog ambari-metrics-monitor hadoop_3_0_0_0_1634-client hadoop_3_0_0_0_1634-yarn smartsense-hst ranger_3_0_0_0_1634-atlas-plugin hadoop_3_0_0_0_1634-mapreduce hdp-select oozie_3_0_0_0_1634 zookeeper_3_0_0_0_1634 ambari-metrics-hadoop-sink zookeeper_3_0_0_0_1634-server ambari-metrics-collector atlas-metadata_3_0_0_0_1634-sqoop-plugin
5. Delete the contents of the installation directory; leaving these files behind causes errors such as package-unpacking failures during distribution:
rm -rf /usr/hdp/*
ssh ep-bd02 "rm -rf /usr/hdp/*"
ssh ep-bd03 "rm -rf /usr/hdp/*"
ssh ep-bd04 "rm -rf /usr/hdp/*"
ssh ep-bd05 "rm -rf /usr/hdp/*"
Because I had to reset and retry so many times, I turned the steps above into a script to make them easier to run:
/root/ambari-server-reset.sh
echo reset ambari server and database ......
ambari-server stop && echo yes | ambari-server reset >/dev/null 2>&1
echo Drop and recreate ambari database ......
mysql -uroot -phadoop < /root/ambari-server-db-reset.sql
echo remove all packages installed ......
ssh -t root@ep-bd01 "echo -n \"==> Removing installed packages and folders on --- \";hostname;sh /root/rm-hdp-packages.sh >/dev/null 2>&1;rm -rf /usr/hdp/*" &&
ssh -t root@ep-bd02 "echo -n \"==> Removing installed packages and folders on --- \";hostname;sh /root/rm-hdp-packages.sh >/dev/null 2>&1;rm -rf /usr/hdp/*" &&
ssh -t root@ep-bd03 "echo -n \"==> Removing installed packages and folders on --- \";hostname;sh /root/rm-hdp-packages.sh >/dev/null 2>&1;rm -rf /usr/hdp/*" &&
ssh -t root@ep-bd04 "echo -n \"==> Removing installed packages and folders on --- \";hostname;sh /root/rm-hdp-packages.sh >/dev/null 2>&1;rm -rf /usr/hdp/*" &&
ssh -t root@ep-bd05 "echo -n \"==> Removing installed packages and folders on --- \";hostname;sh /root/rm-hdp-packages.sh >/dev/null 2>&1;rm -rf /usr/hdp/*"
echo restart ambari server and all agents ......
systemctl restart ambari-server
systemctl restart ambari-agent
ssh ep-bd02 systemctl restart ambari-agent
ssh ep-bd03 systemctl restart ambari-agent
ssh ep-bd04 systemctl restart ambari-agent
ssh ep-bd05 systemctl restart ambari-agent
echo reset ambari-server done!
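The script references two helper files that are not shown above: /root/rm-hdp-packages.sh, which is simply the yum erase command from step 4 saved as a script, and /root/ambari-server-db-reset.sql. A sketch of the latter, mirroring the manual SQL from step 2:
cat > /root/ambari-server-db-reset.sql <<'EOF'
drop database ambari;
create database ambari;
use ambari;
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;
EOF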
(2) Supplying a version definition file (VDF) caused a failure, reason unknown
The VDF in my case was:
http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634/HDP-3.0.0.0-1634.xml
It loaded fine on the version-selection page, but deployment failed with "Upload Version Definition File Error" and the detail message: "javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog"
I never found the cause.
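"Content is not allowed in prolog" means the parser saw something before the XML declaration, which usually points at the server returning an HTML error page, a redirect body, or a byte-order mark instead of the raw XML. One way to check what Ambari actually receives (diagnostic only):
# the first bytes should be exactly "<?xml ..."; anything else (HTML, BOM) would explain the parse error
curl -sS http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634/HDP-3.0.0.0-1634.xml | head -c 100; echo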
(3) Ranger and Ranger KMS failed to install; after many attempts I could not find a fix, so I dropped them from the installation for the time being, and the install then succeeded.
Force-removing oozie_3_0_0_0_1634-client-4.3.1.3.0.0.0-1634.noarch (bypassing the RPM scriptlets):
yum remove oozie_3_0_0_0_1634-client-4.3.1.3.0.0.0-1634.noarch --setopt=tsflags=noscripts -y
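A quick check afterwards that the stubborn package is really gone (and that nothing else from that build is left behind) before retrying the install:
# should print nothing once the force-removal has succeeded
rpm -qa | grep oozie_3_0_0_0_1634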