Running sc.textFile("hdfs://test02.com:8020/tmp/w").count in spark-shell raises the following exception:
java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:186)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1517)
at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:22)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
at $iwC$$iwC$$iwC.<init>(<console>:35)
at $iwC$$iwC.<init>(<console>:37)
at $iwC.<init>(<console>:39)
at <init>(<console>:41)
at .<init>(<console>:45)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
... 61 more
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
... 66 more
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2018)
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
... 68 more
This happens because compression is enabled in Hadoop's core-site.xml and mapred-site.xml and the configured codec is LZO, so files written to or uploaded into HDFS end up LZO-compressed. When you then read them with sc.textFile, you get a pile of "LzoCodec not found" exceptions.
The root cause is that Spark cannot find hadoop-lzo.jar and the LZO native libraries. Make sure lzo, lzop, and hadoop-lzo.jar are installed on every machine in the cluster, then edit spark-env.sh and add SPARK_LIBRARY_PATH and SPARK_CLASSPATH, where SPARK_LIBRARY_PATH points at the LZO native libraries and SPARK_CLASSPATH points at hadoop-lzo.jar. If you are testing from spark-shell, also pass --jars and --driver-library-path when starting it.
On a CDH cluster, hadoop-lzo is already installed; on a plain Apache cluster you have to install it yourself.
In general, the following steps are needed on every machine in the cluster:
Install the lzo library
Install the lzop executable
Install Twitter's hadoop-lzo.jar
Add the following variables:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/native
export SPARK_CLASSPATH=/opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/hadoop-lzo.jar:$SPARK_CLASSPATH
./spark-shell --jars <full path to hadoop-lzo.jar> --driver-library-path <hadoop-lzo native library directory>
After this, you can read HDFS data with textFile from spark-shell.
For example, you can start spark-shell like this:
./bin/spark-shell --jars /opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/hadoop-lzo.jar --driver-library-path /opt/cloudera/parcels/HADOOP_LZO/lib/hadoop/lib/native
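With the jar and native library on the path, the original read now succeeds; a minimal check inside spark-shell, using the same path as at the top of this note:
// The LzoCodec class can now be loaded, so TextInputFormat configures correctly.
val lines = sc.textFile("hdfs://test02.com:8020/tmp/w")
println(lines.count())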
The Scala version in the Maven pom.xml must be consistent with the Scala version of the Spark artifacts:
If the Scala version in pom.xml is 2.11,
<properties>
<scala.version>2.11.4</scala.version>
</properties>
then the Spark dependency must also use the 2.11 suffix:
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>1.6.1</version>
</dependency>
The same applies when pulling in scalatest and scalactic.
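For example, the scalatest artifact would also carry the _2.11 suffix (the version below is only illustrative):
<dependency>
<groupId>org.scalatest</groupId>
<artifactId>scalatest_2.11</artifactId>
<version>2.2.6</version>
<scope>test</scope>
</dependency>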
Caused by: java.lang.IllegalArgumentException: Comparison method violates its general contract!
at java.util.TimSort.mergeHi(TimSort.java:899)
at java.util.TimSort.mergeAt(TimSort.java:516)
at java.util.TimSort.mergeCollapse(TimSort.java:441)
at java.util.TimSort.sort(TimSort.java:245)
at java.util.Arrays.sort(Arrays.java:1438)
at scala.collection.SeqLike$class.sorted(SeqLike.scala:618)
at scala.collection.mutable.ArrayOps$ofRef.sorted(ArrayOps.scala:186)
at scala.collection.SeqLike$class.sortWith(SeqLike.scala:575)
at scala.collection.mutable.ArrayOps$ofRef.sortWith(ArrayOps.scala:186)
at bitnei.utils.Utils$.sortBy(Utils.scala:116)
at FsmTest$$anonfun$1$$anonfun$apply$mcV$sp$4.apply(FsmTest.scala:30)
... 54 more
This error occurs because the comparison function is not a strict ordering: with <=, two equal values compare as "less than" in both directions, which violates the contract TimSort expects. The original code:
def compareDate(dateA: String, dateB: String): Boolean = {
  val dateFormat = new java.text.SimpleDateFormat("yyyyMMddHHmmss")
  val timeA = dateFormat.parse(dateA).getTime
  val timeB = dateFormat.parse(dateB).getTime
  timeA <= timeB
}
Changing the <= above to < fixes it, because sortWith expects a strict "less than" predicate.
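The corrected comparator, for reference:
// Strict ordering: equal timestamps must return false, otherwise TimSort's contract is violated.
def compareDate(dateA: String, dateB: String): Boolean = {
  val dateFormat = new java.text.SimpleDateFormat("yyyyMMddHHmmss")
  dateFormat.parse(dateA).getTime < dateFormat.parse(dateB).getTime
}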
<dependency>
<groupId>spark.jobserver</groupId>
<artifactId>job-server-api_2.11</artifactId>
<version>0.6.2</version>
</dependency>
When referencing job-server-api_2.11 as above, Maven fails to resolve some of its dependencies, because the default central repository does not carry them. Add the extra repositories below:
<pluginRepository>
<id>dl-bintray.com/</id>
<name>Scala-Tools Maven2 Repository</name>
<url>https://dl.bintray.com/spark-jobserver/maven/</url>
</pluginRepository>
<repository>
<id>dl-bintray.com/</id>
<name>Scala-Tools Maven2 Repository</name>
<url>https://dl.bintray.com/spark-jobserver/maven/</url>
</repository>
After that, the build resolves correctly.
java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
Solution:
Add the following to spark-defaults.conf:
spark.files=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
That is, add core-site.xml and hdfs-site.xml to spark.files so the HDFS client configuration (including the HA nameservice definition) is shipped with the job.
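Once those files are picked up, the logical nameservice resolves and reads like the following work again (the path is only illustrative):
// "nameservice1" is the HA nameservice defined in hdfs-site.xml, not a real hostname.
sc.textFile("hdfs://nameservice1/tmp/w").count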
Problem 6
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/vehicle/result/mid/2016/09/13/_temporary/0/_temporary/attempt_201611121515_0001_m_000005_188/part-00005 could only be replicated to 0 nodes instead of minReplication (=1). There are 6 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1541)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3289)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:668)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
The HDFS disks were full.
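A quick way to confirm this is to check DataNode capacity with the standard HDFS tools (run from any node with the Hadoop client installed):
# Per-DataNode capacity, used and remaining space
hdfs dfsadmin -report
# Overall filesystem usage
hdfs dfs -df -h /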
JDBC issues
1. Referencing ojdbc from Maven:
First install ojdbc into the local repository:
C:\Users\franciswang>mvn install:install-file -Dfile=d:/spark/lib/ojdbc6-11.2.0.3.0.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0 -Dpackaging=jar
Then reference it in the pom:
<dependency>
<groupId>com.oracle</groupId>
<artifactId>ojdbc6</artifactId>
<version>11.2.0</version>
</dependency>
In some cases the driver is only picked up when ojdbc6.jar is also placed under jdk/jre/lib/ext/. Reference:
http://stackoverflow.com/questions/17701610/cannot-find-or-load-oracle-jdbc-driver-oracledriver
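With the driver available, a minimal Spark SQL read sketch (the connection URL, table, and credentials below are placeholders; assumes Spark 1.6's sqlContext):
// Hypothetical Oracle connection details; replace with your own instance.
val jdbcDF = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/orcl")
  .option("driver", "oracle.jdbc.OracleDriver")
  .option("dbtable", "SOME_TABLE")
  .option("user", "scott")
  .option("password", "tiger")
  .load()
jdbcDF.show()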