Following a tutorial, I tried to write a Spark program in IntelliJ IDEA that queries a table in Hive. It failed right out of the gate.
I fiddled with it for a long time with no luck. Answers online said to put hive-site.xml into the resources directory, or blamed Hadoop permission issues for Windows users — all of that turned out to be nonsense.
In fact, the problem was in the code itself. The code is attached below; the reason is explained in the comments.
Spark Hive operations
package sparkSql

import org.apache.spark.sql.SparkSession

/**
 * Created with IntelliJ IDEA.
 */
object SparkHiveSQL {
  def main(args: Array[String]): Unit = {
    // To query Hive through Spark, the SparkSession must call enableHiveSupport();
    // otherwise the Hive tables cannot be found.
    val spark = SparkSession
      .builder()
      .appName("Spark Hive")
      .master("spark://192.168.194.131:7077")
      .enableHiveSupport()
      .getOrCreate()

    val df1 = spark.sql("select * from default.src")
    df1.show()
  }
}
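For completeness: although the resources trick was not the fix here, Spark does read hive-site.xml from the classpath to locate the Hive metastore, so the file still has to be present and correct. A minimal sketch of what it typically contains — the host and port are placeholders you would replace with your own metastore address:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Address of the Hive metastore service; placeholder host/port -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```

If this file is missing from the classpath, Spark falls back to a local embedded metastore, which is another common reason a query like `select * from default.src` cannot find tables that exist on the cluster.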