Prerequisite: install JDK 1.8 and configure the corresponding environment variables, including JAVA_HOME.
1. Installing Hadoop
1.1 Download Hadoop (2.6.0): http://hadoop.apache.org/releases.html
1.1.1 Download the matching version of winutils (https://github.com/steveloughran/winutils) and copy everything from its bin directory into the bin directory of the Hadoop installation, replacing the existing files.
1.2 Extract hadoop-2.6.0.tar.gz to the target directory and configure the corresponding environment variables.
1.2.1 Create a HADOOP_HOME environment variable and append it to the Path variable (;%HADOOP_HOME%\bin).
1.2.2 Open a cmd window and run hadoop version to verify that the environment variables are set up correctly.
1.3 Configure Hadoop (if a configuration file listed below does not exist, copy its same-named .template file and edit the copy):
1.3.1 Edit core-site.xml (create a workplace folder under the Hadoop installation directory, and tmp and name folders under workplace):
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/E:/software/hadoop-2.6.0/workplace/tmp</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/E:/software/hadoop-2.6.0/workplace/name</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <property>
        <name>hadoop.proxyuser.gl.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.gl.groups</name>
        <value>*</value>
    </property>
</configuration>
1.3.2 Edit hdfs-site.xml:
<configuration>
    <!-- Set replication to 1 because this is a single-node Hadoop installation -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/data/dfs/datanode</value>
    </property>
</configuration>
1.3.3 Edit mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>127.0.0.1:9001</value>
    </property>
</configuration>
1.3.4 Edit yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
1.4 Initialize and start Hadoop
1.4.1 In a cmd window, run hadoop namenode -format (or hdfs namenode -format) to format the NameNode.
1.4.2 Go to the sbin directory under the Hadoop installation (E:\software\hadoop-2.6.0\sbin) and run the start-all.cmd batch file to start Hadoop.
1.4.3 Verify that Hadoop started successfully: open a new cmd window and run jps to list the running services. If NameNode, NodeManager, DataNode, and ResourceManager are all present, the startup succeeded.
1.5 File transfer test
1.5.1 Create the input directory. Open a new cmd window and run the following commands (hdfs://localhost:9000 is the fs.default.name URI configured in core-site.xml):
hadoop fs -mkdir hdfs://localhost:9000/user/
hadoop fs -mkdir hdfs://localhost:9000/user/wcinput
1.5.2 Upload data to the target directory. In a cmd window, run:
hadoop fs -put E:\temp\MM.txt hdfs://localhost:9000/user/wcinput
hadoop fs -put E:\temp\react文檔.txt hdfs://localhost:9000/user/wcinput
1.5.3 Check whether the files were uploaded successfully. In a cmd window, run:
hadoop fs -ls hdfs://localhost:9000/user/wcinput
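The same mkdir/put/ls operations can also be done programmatically through the Hadoop FileSystem Java API. The sketch below is only an illustration (the class name and local file path are placeholders for this example; the HDFS URI matches the fs.default.name setting above):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the HDFS instance configured as fs.default.name in core-site.xml
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);

        Path input = new Path("/user/wcinput");
        if (!fs.exists(input)) {
            fs.mkdirs(input);                                  // hadoop fs -mkdir
        }
        // hadoop fs -put (local file path is an example placeholder)
        fs.copyFromLocalFile(new Path("E:/temp/MM.txt"), input);

        // hadoop fs -ls
        for (FileStatus status : fs.listStatus(input)) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
        fs.close();
    }
}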
1.5.4 Viewing Hadoop in the web UI (ResourceManager interface: http://localhost:8088/):
1.5.5 Node management interface: http://localhost:50070/
1.5.6 Browsing the Hadoop file system from the web UI: open the Utilities drop-down menu, click Browse the file system, then drill down into user and wcinput to see the list of uploaded files, as shown below:
2. Installing and configuring Hive
2.1 Download Hive from http://mirror.bit.edu.cn/apache/hive/
2.2 Extract apache-hive-2.2.0-bin.tar.gz to the target installation directory and configure the environment variables.
2.2.1 Create a HIVE_HOME environment variable and append it to the Path variable (;%HIVE_HOME%\bin).
2.2.2 Open a cmd window and run hive --version to verify that the environment variables are set up correctly.
2.3 Configure hive-site.xml (the description element of each property explains what it does):
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>E:/software/apache-hive-2.2.0-bin/scratch_dir</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>E:/software/apache-hive-2.2.0-bin/resources_dir/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>E:/software/apache-hive-2.2.0-bin/querylog_dir</value>
    <description>Location of Hive run time structured log file</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>E:/software/apache-hive-2.2.0-bin/operation_dir</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123</value>
    <description>password to use against metastore database</description>
</property>
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
</property>
<!-- Configure the username and password -->
<property>
    <name>hive.jdbc_passwd.auth.zhangsan</name>
    <value>123</value>
</property>
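Optionally, the MySQL connection settings above (ConnectionURL, user root, password 123) can be sanity-checked from a small Java program before initializing the metastore, assuming mysql-connector-java is on the classpath. This is only an illustrative sketch, not part of the original walkthrough:

import java.sql.Connection;
import java.sql.DriverManager;

public class MetastoreConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Same values as the javax.jdo.option.* properties in hive-site.xml above
        String url = "jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "123")) {
            System.out.println("Connected to the metastore database: " + conn.getMetaData().getURL());
        }
    }
}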
2.4 Create the folders referenced by the configuration under the installation directory:
E:\software\apache-hive-2.2.0-bin\scratch_dir
E:\software\apache-hive-2.2.0-bin\resources_dir
E:\software\apache-hive-2.2.0-bin\querylog_dir
E:\software\apache-hive-2.2.0-bin\operation_dir
2.5 Initialize the Hive metastore database (copy mysql-connector-java-*.jar into the lib directory of the Hive installation).
Go to the installation directory apache-hive-2.2.0-bin/bin/ and run the following command in a new cmd window (the corresponding user and tables will be created in the MySQL database):
hive --service schematool
(The tool automatically reads the SQL scripts for the matching version from ***apache-hive-2.2.0-bin\scripts\metastore\upgrade\mysql\.)
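Note: on a stock Hive 2.x install, schematool normally has to be told which database type to initialize. If the bare command above only prints usage information, the commonly documented form is hive --service schematool -dbType mysql -initSchema.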
2.6 Start Hive. Open a new cmd window for each of the following commands:
Start the metastore: hive --service metastore
Start HiveServer2: hive --service hiveserver2
(HiveServer2 (HS2) is a server interface that lets remote clients run queries against Hive and retrieve the results. The current implementation, based on Thrift RPC, is an improved version of HiveServer and supports multi-client concurrency and authentication.
It is designed to provide better support for open-API clients such as JDBC and ODBC.)
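Once HiveServer2 is running, a remote client can connect over JDBC. A minimal sketch, assuming the Hive JDBC driver is on the classpath, HiveServer2 listens on its default port 10000, and the server's authentication setup honors the zhangsan/123 credentials configured above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    public static void main(String[] args) throws Exception {
        // Explicitly load the Hive JDBC driver (optional with JDBC 4+ drivers)
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "zhangsan", "123");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("show tables")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}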
2.7 Verification:
Open a new cmd window and run the hive command to enter the interactive Hive shell:
hive> create table test_table(id INT, name string);
hive> create table student(id int,name string,gender int,address string);
hive> show tables;
student
test_table
2 rows selected (1.391 seconds)
3. Using Kettle 6.0 to connect to the Hadoop 2.6.0 + Hive 2.2.0 big-data environment for data migration
3.1 Identify which Hadoop distribution you are running. The Hadoop configurations bundled with Kettle roughly cover the following distributions:
1. Vanilla Apache Hadoop
2. CDH, from Cloudera
3. MapR, from MapR Technologies
4. EMR, from Amazon
5. HDP, from Hortonworks
For any other distribution that needs to connect to Kettle, you must copy the required dependencies and configuration files into the Kettle folder yourself:
E:\software\pdi-ce-6.0.1.0-386\data-integration\plugins\pentaho-big-data-plugin\hadoop-configurations
3.2 Configure the plugin.properties file (the relevant Kettle documentation explains this property).
Change: active.hadoop.configuration=hdp22
3.3 No additional jar files are needed. Start Kettle and edit the database connection, as shown below:
3.4 Test the connection; it succeeds, as shown below:
Errors encountered while setting up Hadoop, and how they were resolved:
Error 1: error Couldn't find a package.json file in "***\\hadoop-2.6.0\\sbin
Fix: download winutils and overwrite the files under the bin directory of the installation.
Error 2: FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost ***. Exiting.
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to node****:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in ****dfs/data: namenode clusterID = CID-9f0a13ce-d7c-****-4d60e63d8392; datanode clusterID = CID-29ff9b68-****-e-aa06-816513a36add
Fix (brute force): delete everything under the tmp, name, and data folders and regenerate with hdfs namenode -format.
Fix (gentler): copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old clusterID so that the two match.
Error 3: when running the sbin/start-all.cmd startup command,
the command line reports: This script is deprecated. Instead use start-dfs.cmd and start-yarn.cmd (Windows cannot find the specified file 'hadoop').
Run bin/hadoop dfsadmin -report to see the detailed error information.
Fix:
Edit the configuration file hdfs-site.xml.
Configuration before the change:
<property>
    <name>dfs.data.dir</name>
    <value>/E:/software/hadoop-2.6.0/workplace/data</value>
</property>
Configuration after the change:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/data/dfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/data/dfs/datanode</value>
</property>
Errors encountered while setting up Hive and connecting Kettle:
Error 1: ERROR [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(135)) - Self-test query [select "DB_ID" from "DBS"] failed; direct SQL is disabled javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
Fix: the first time Hive is started, its metastore database must be initialized; the error above is caused by a faulty metastore initialization.
Delete all tables in the metastore database and re-run the initialization.
Go to the installation directory apache-hive-2.2.0-bin/bin/ and run the following command: hive --service schematool
The tool automatically reads the SQL scripts for the matching version from ***apache-hive-2.2.0-bin\scripts\metastore\upgrade\mysql\.
Error 2: Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000: Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: A is not allowed to impersonate B(state=08S01,code=0)
Fix: add the following configuration to etc/hadoop/core-site.xml under the Hadoop directory:
<property>
    <name>hadoop.proxyuser.A.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.A.groups</name>
    <value>*</value>
</property>
(Hadoop provides a secure impersonation mechanism: an upper-layer system is not allowed to pass the actual end user straight through to the Hadoop layer. Instead, the actual user is handed to a super-user proxy, and that proxy performs the operations on Hadoop, which prevents arbitrary clients from doing whatever they like on Hadoop.)
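A rough illustration of this mechanism using Hadoop's UserGroupInformation API; the user names A and B are placeholders taken from the error message above, not values from the original setup:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The service process runs as super-user A (the proxy) ...
        UserGroupInformation superUser = UserGroupInformation.getCurrentUser();
        // ... and impersonates end user B; this only succeeds if
        // hadoop.proxyuser.A.hosts / hadoop.proxyuser.A.groups allow it.
        UserGroupInformation proxyUser =
                UserGroupInformation.createProxyUser("B", superUser);

        proxyUser.doAs((PrivilegedExceptionAction<Void>) () -> {
            // Everything inside doAs runs on HDFS as user B
            try (FileSystem fs = FileSystem.get(conf)) {
                System.out.println(fs.exists(new Path("/user/B")));
            }
            return null;
        });
    }
}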
The end.