Sqoop is a tool for moving data between Hadoop and relational databases in both directions: it can import data from a relational database (e.g. MySQL, Oracle, Postgres) into HDFS, and export data from HDFS back into a relational database.
2) Hadoop-0.20.2 is already installed. The official Sqoop releases do not support this version, but the CDH3 build (see the download link above) does. For convenience, you can simply copy the matching Hadoop core jar into sqoop-1.2.0-CDH3B4/lib and keep using Hadoop-0.20.2:
[hadoop@node01 ~]$ cp hadoop-core-0.20.2-CDH3B4.jar sqoop-1.2.0-CDH3B4/lib
[hadoop@node01 ~]$ ls -l sqoop-1.2.0-CDH3B4/lib/hadoop-core-0.20.2-CDH3B4.jar
-rw-r--r--. 1 hadoop root 3452461 May 9 05:40 sqoop-1.2.0-CDH3B4/lib/hadoop-core-0.20.2-CDH3B4.jar
3) In addition, Sqoop depends on mysql-connector-java-*.jar when importing from MySQL, so download mysql-connector-java-*.jar and copy it into sqoop-1.2.0-CDH3B4/lib:
[hadoop@node01 ~]$ cp mysql-connector-java-5.1.24-bin.jar sqoop-1.2.0-CDH3B4/lib
[hadoop@node01 ~]$ ls -l sqoop-1.2.0-CDH3B4/lib/mysql-connector-java-5.1.24-bin.jar
-rw-r--r--. 1 hadoop root 846263 May 9 05:43 sqoop-1.2.0-CDH3B4/lib/mysql-connector-java-5.1.24-bin.jar
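To double-check that the copied jar really contains the JDBC driver class before running Sqoop, an optional check (assuming unzip is installed on the node) is:
[hadoop@node01 ~]$ unzip -l sqoop-1.2.0-CDH3B4/lib/mysql-connector-java-5.1.24-bin.jar | grep Driver.class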
4) Edit Sqoop's configure-sqoop script and comment out the HBase and ZooKeeper checks (unless you plan to use HBase or other such components on top of Hadoop); otherwise the script may hang on these checks.
[hadoop@node01 bin]$ pwd
/home/hadoop/sqoop-1.2.0-CDH3B4/bin
[hadoop@node01 bin]$ vi configure-sqoop
#if [ -z "${HBASE_HOME}" ]; then
# HBASE_HOME=/usr/lib/hbase
#fi
#if [ -z "${ZOOKEEPER_HOME}" ]; then
# ZOOKEEPER_HOME=/usr/lib/zookeeper
#fi
#if [ ! -d "${HBASE_HOME}" ]; then
# echo "Error: $HBASE_HOME does not exist!"
# echo 'Please set $HBASE_HOME to the root of your HBase installation.'
# exit 1
#fi
#if [ ! -d "${ZOOKEEPER_HOME}" ]; then
# echo "Error: $ZOOKEEPER_HOME does not exist!"
# echo 'Please set $ZOOKEEPER_HOME to the root of your ZooKeeper installation.'
# exit 1
#fi
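With the checks commented out, a quick way to confirm the script no longer stalls is to run a trivial Sqoop command that touches no database at all:
[hadoop@node01 bin]$ sqoop version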
5) Start Hadoop
[hadoop@node01 bin]$ start-all.sh
[hadoop@node01 bin]$ jps
2732 Jps
2478 NameNode
2665 JobTracker
2600 SecondaryNameNode
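The jps output above only shows the daemons running on node01. Before importing, it may be worth confirming that HDFS is up and has live datanodes:
[hadoop@node01 bin]$ hadoop dfsadmin -report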
6) Import data from MySQL into HDFS
(1) Create the test database sqooptest in MySQL
[hadoop@node01 ~]$ mysql -u root -p
mysql> create database sqooptest;
Query OK, 1 row affected (0.01 sec)
(2) Create a dedicated user for Sqoop
mysql> create user 'sqoop' identified by 'sqoop';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'sqoop' with grant option;
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
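Granting all privileges on *.* with grant option is convenient on a test box, but far broader than Sqoop needs. A more restrictive variant, assuming only the sqooptest database will be used, would be:
mysql> grant all privileges on sqooptest.* to 'sqoop'@'%' identified by 'sqoop';
mysql> flush privileges;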
(3) Generate test data
mysql> use sqooptest;
Database changed
mysql> create table tb1 as select table_schema,table_name,table_type from information_schema.TABLES;
Query OK, 154 rows affected (0.28 sec)
Records: 154 Duplicates: 0 Warnings: 0
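Optionally note the row count now, so it can be compared with what Sqoop reports after the import (154 records):
mysql> select count(*) from tb1;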
(4) Test the connection between Sqoop and MySQL
[hadoop@node01 ~]$ sqoop list-databases --connect jdbc:mysql://node01:3306/ --username sqoop --password sqoop
13/05/09 06:15:01 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/05/09 06:15:01 INFO manager.MySQLManager: Executing SQL statement: SHOW DATABASES
information_schema
hive
mysql
performance_schema
sqooptest
test
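Note the WARN line: passing --password on the command line exposes it in the process list and the shell history. The same test can be run with -P, which prompts for the password interactively:
[hadoop@node01 ~]$ sqoop list-databases --connect jdbc:mysql://node01:3306/ --username sqoop -P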
(5) Import the data from MySQL into HDFS
[hadoop@node01 ~]$ sqoop import --connect jdbc:mysql://node01:3306/sqooptest --username sqoop --password sqoop --table tb1 -m 1
13/05/09 06:16:39 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/05/09 06:16:39 INFO tool.CodeGenTool: Beginning code generation
13/05/09 06:16:39 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `tb1` AS t LIMIT 1
13/05/09 06:16:39 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `tb1` AS t LIMIT 1
13/05/09 06:16:39 INFO orm.CompilationManager: HADOOP_HOME is /home/hadoop/hadoop-0.20.2/bin/..
13/05/09 06:16:39 INFO orm.CompilationManager: Found hadoop core jar at: /home/hadoop/hadoop-0.20.2/bin/../hadoop-0.20.2-core.jar
13/05/09 06:16:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/4175ce59fd53eb3de75875cfd3bd450b/tb1.jar
13/05/09 06:16:42 WARN manager.MySQLManager: It looks like you are importing from mysql.
13/05/09 06:16:42 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
13/05/09 06:16:42 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
13/05/09 06:16:42 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
13/05/09 06:16:42 INFO mapreduce.ImportJobBase: Beginning import of tb1
13/05/09 06:16:43 INFO manager.MySQLManager: Executing SQL statement: SELECT t.* FROM `tb1` AS t LIMIT 1
13/05/09 06:16:45 INFO mapred.JobClient: Running job: job_201305090600_0001
13/05/09 06:16:46 INFO mapred.JobClient: map 0% reduce 0%
13/05/09 06:17:01 INFO mapred.JobClient: map 100% reduce 0%
13/05/09 06:17:03 INFO mapred.JobClient: Job complete: job_201305090600_0001
13/05/09 06:17:03 INFO mapred.JobClient: Counters: 5
13/05/09 06:17:03 INFO mapred.JobClient: Job Counters
13/05/09 06:17:03 INFO mapred.JobClient: Launched map tasks=1
13/05/09 06:17:03 INFO mapred.JobClient: FileSystemCounters
13/05/09 06:17:03 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=7072
13/05/09 06:17:03 INFO mapred.JobClient: Map-Reduce Framework
13/05/09 06:17:03 INFO mapred.JobClient: Map input records=154
13/05/09 06:17:03 INFO mapred.JobClient: Spilled Records=0
13/05/09 06:17:03 INFO mapred.JobClient: Map output records=154
13/05/09 06:17:03 INFO mapreduce.ImportJobBase: Transferred 6.9062 KB in 19.9871 seconds (353.8277 bytes/sec)
13/05/09 06:17:03 INFO mapreduce.ImportJobBase: Retrieved 154 records.
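The warnings in the log suggest --direct, which uses a MySQL-specific fast path based on mysqldump. A sketch of the same import with it; this assumes mysqldump is installed on every node that runs map tasks, and the previous output directory must be removed first or the job will fail:
[hadoop@node01 ~]$ hadoop dfs -rmr tb1
[hadoop@node01 ~]$ sqoop import --connect jdbc:mysql://node01:3306/sqooptest --username sqoop -P --table tb1 --direct -m 1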
(6) View the newly imported data on HDFS
[hadoop@node01 ~]$ hadoop dfs -ls tb1
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2013-05-09 06:16 /user/hadoop/tb1/_logs
-rw-r--r-- 2 hadoop supergroup 7072 2013-05-09 06:16 /user/hadoop/tb1/part-m-00000
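To inspect the rows themselves, and then to try the reverse path mentioned in the introduction (exporting HDFS data back into MySQL), something along these lines should work. The target table tb1_copy is hypothetical and must already exist in sqooptest with the same three columns:
[hadoop@node01 ~]$ hadoop dfs -cat tb1/part-m-00000 | head
mysql> create table tb1_copy like tb1;
[hadoop@node01 ~]$ sqoop export --connect jdbc:mysql://node01:3306/sqooptest --username sqoop -P --table tb1_copy --export-dir /user/hadoop/tb1 -m 1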