After adding the Spark service through Cloudera Manager 5.x, the service creation failed. The console error output showed the following log:
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/etc/spark/conf.cloudera.spark_on_yarn/yarn-conf#g' /opt/cm-5.9.2/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_1615663591259519890/spark-conf/yarn-conf/yarn-site.xml
++ get_default_fs /opt/cm-5.9.2/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_1615663591259519890/spark-conf/yarn-conf
++ get_hadoop_conf /opt/cm-5.9.2/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_1615663591259519890/spark-conf/yarn-conf fs.defaultFS
++ local conf=/opt/cm-5.9.2/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_1615663591259519890/spark-conf/yarn-conf
++ local key=fs.defaultFS
++ '[' 1 == 1 ']'
++ /opt/cloudera/parcels/CDH-5.9.2-1.cdh5.9.2.p0.3/lib/hadoop/../../bin/hdfs --config /opt/cm-5.9.2/run/cloudera-scm-agent/process/ccdeploy_spark-conf_etcsparkconf.cloudera.spark_on_yarn_1615663591259519890/spark-conf/yarn-conf getconf -confKey fs.defaultFS
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hdfs/tools/GetConf : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
Could not find the main class: org.apache.hadoop.hdfs.tools.GetConf. Program will exit.
+ DEFAULT_FS=
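To confirm the failure can be reproduced outside of Cloudera Manager, the same getconf call can be run by hand (the parcel path is taken from the log above):

# If this fails with the same UnsupportedClassVersionError, the hdfs wrapper
# is resolving an older JDK than the one the CDH classes were compiled for.
/opt/cloudera/parcels/CDH-5.9.2-1.cdh5.9.2.p0.3/bin/hdfs getconf -confKey fs.defaultFS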
From the log output it is fairly clear that the failure to add the Spark service was caused by the JDK version: class file version 51.0 corresponds to Java 7, so the HDFS classes in the parcel were compiled for Java 7 but were being run with an older JVM. Since I had set up this environment myself, I remembered that the JDK version met the CM5 installation requirements; the version in use here is jdk1.7.0_67:
# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
So the Java version itself is not the problem; it meets the requirements of the CM5 release installed here. My guess was therefore that the issue comes from having installed Java from a tar archive rather than from an RPM package; a normal RPM installation should not hit this problem. The following verifies that guess.
The alternatives command is used here; alternatives is typically used to manage multiple installed versions of the same software on a server.
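Before registering anything, it is worth confirming which java binary /usr/bin/java actually resolves to; a quick check (plain coreutils and alternatives, nothing specific to this setup) is:

# Follow the /usr/bin/java -> /etc/alternatives/java chain to the real binary
readlink -f /usr/bin/java
# Show every java registered with alternatives and the current selection
alternatives --display java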
--Check the Java versions known to the server; jdk1.7.0_67 is not yet managed by alternatives:
[root@db01 ~]# alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           /usr/lib/jvm/jre-1.5.0-gcj/bin/java
*+ 2           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java

Enter to keep the current selection[+], or type selection number:
--Add jdk1.7.0_67 to the alternatives management:
[root@db01 ~]# alternatives --install /usr/bin/java java /opt/java/jdk1.7.0_67/bin/java 3
--Check the registered Java versions again and select jdk1.7.0_67 as the preferred one:
[root@db01 ~]# alternatives --config java

There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           /usr/lib/jvm/jre-1.5.0-gcj/bin/java
*+ 2           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
   3           /opt/java/jdk1.7.0_67/bin/java

Enter to keep the current selection[+], or type selection number: 3
[root@db01 ~]# alternatives --config java

There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           /usr/lib/jvm/jre-1.5.0-gcj/bin/java
 * 2           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
 + 3           /opt/java/jdk1.7.0_67/bin/java

Enter to keep the current selection[+], or type selection number:
After switching the Java version, adding the Spark service again succeeded.
Alternatively, uninstall the OS-bundled Java versions, for example:
# rpm -e java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64 java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64 tzdata-java-2013g-1.el6.noarch java_cup-0.10k-5.el6.x86_64 java-1.6.0-openjdk-devel-1.6.0.0-1.66.1.13.0.el6.x86_64 gcc-java-4.4.7-4.el6.x86_64 --nodeps
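To see which bundled Java packages are installed before removing them, a standard rpm query works:

# List the OpenJDK/GCJ packages shipped with the OS
rpm -qa | grep -E -i 'java|gcj'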
If neither of the methods above works, use the following approach and set the environment variable directly:
find / -type f -name "*cc.sh"
which locates /opt/program/cm-5.9.0/lib64/cmf/service/client/deploy-cc.sh.
Add the following directly at the top of that script:
JAVA_HOME=/opt/java
export JAVA_HOME=/opt/java
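For reference, the same edit can be applied non-interactively; a minimal sketch, assuming GNU sed and the deploy-cc.sh path found above:

# Insert the two JAVA_HOME lines right after the shebang (line 1) of the script
sed -i -e '1a JAVA_HOME=/opt/java' -e '1a export JAVA_HOME=/opt/java' \
    /opt/program/cm-5.9.0/lib64/cmf/service/client/deploy-cc.sh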
===========================
--Recreate the /etc/alternatives links for the CDH parcel client tools (paths below are for the CDH 5.14.0 parcel):
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/avro-tools /etc/alternatives/avro-tools
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/beeline /etc/alternatives/beeline
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/bigtop-detect-javahome /etc/alternatives/bigtop-detect-javahome
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/catalogd /etc/alternatives/catalogd
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/cli_mt /etc/alternatives/cli_mt
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/cli_st /etc/alternatives/cli_st
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/flume-ng /etc/alternatives/flume-ng
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hadoop /etc/alternatives/hadoop
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hadoop-0.20 /etc/alternatives/hadoop-0.20
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hadoop-fuse-dfs /etc/alternatives/hadoop-fuse-dfs
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hbase /etc/alternatives/hbase
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hbase-indexer /etc/alternatives/hbase-indexer
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hcat /etc/alternatives/hcat
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hdfs /etc/alternatives/hdfs
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hive /etc/alternatives/hive
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/hiveserver2 /etc/alternatives/hiveserver2
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/impala-collect-minidumps /etc/alternatives/impala-collect-minidumps
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/impalad /etc/alternatives/impalad
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/impala-shell /etc/alternatives/impala-shell
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/kite-dataset /etc/alternatives/kite-dataset
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/llama /etc/alternatives/llama
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/llamaadmin /etc/alternatives/llamaadmin
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/load_gen /etc/alternatives/load_gen
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/mahout /etc/alternatives/mahout
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/mapred /etc/alternatives/mapred
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/oozie /etc/alternatives/oozie
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/parquet-tools /etc/alternatives/parquet-tools
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/pig /etc/alternatives/pig
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/pyspark /etc/alternatives/pyspark
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sentry /etc/alternatives/sentry
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/solrctl /etc/alternatives/solrctl
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/spark-shell /etc/alternatives/spark-shell
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/spark-submit /etc/alternatives/spark-submit
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop /etc/alternatives/sqoop
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop2 /etc/alternatives/sqoop2
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-codegen /etc/alternatives/sqoop-codegen
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-create-hive-table /etc/alternatives/sqoop-create-hive-table
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-eval /etc/alternatives/sqoop-eval
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-export /etc/alternatives/sqoop-export
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-help /etc/alternatives/sqoop-help
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-import /etc/alternatives/sqoop-import
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-import-all-tables /etc/alternatives/sqoop-import-all-tables
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-job /etc/alternatives/sqoop-job
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-list-databases /etc/alternatives/sqoop-list-databases
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-list-tables /etc/alternatives/sqoop-list-tables
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-merge /etc/alternatives/sqoop-merge
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-metastore /etc/alternatives/sqoop-metastore
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/sqoop-version /etc/alternatives/sqoop-version
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/statestored /etc/alternatives/statestored
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/whirr /etc/alternatives/whirr
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/yarn /etc/alternatives/yarn
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/zookeeper-client /etc/alternatives/zookeeper-client
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/zookeeper-server /etc/alternatives/zookeeper-server
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/zookeeper-server-cleanup /etc/alternatives/zookeeper-server-cleanup
ln -s /opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24/bin/zookeeper-server-initialize /etc/alternatives/zookeeper-server-initialize
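Rather than typing each ln -s by hand, the same links can be recreated with a short loop; a minimal sketch, assuming the parcel path used above and that overwriting any existing links is acceptable:

#!/bin/bash
# Recreate an /etc/alternatives link for every binary shipped in the CDH parcel.
# PARCEL must match the installed parcel directory (the 5.14.0 path from above).
PARCEL=/opt/cloudera/parcels/CDH-5.14.0-1.cdh5.14.0.p0.24
for bin in "$PARCEL"/bin/*; do
    name=$(basename "$bin")
    # -s symbolic, -f replace an existing link, -n do not follow an existing link
    ln -sfn "$bin" "/etc/alternatives/$name"
done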
--Clean up any dangling symlinks left behind (run from inside the directory to be cleaned, e.g. /etc/alternatives):

#!/bin/bash
for a in `find . -type l`; do
    # stat -L fails when the link target no longer exists
    stat -L $a >/dev/null 2>/dev/null
    if [ $? -gt 0 ]; then
        rm $a
    fi
done