1. Overview
Sqoop is a tool under the Apache umbrella for "transferring data between Hadoop and relational database servers".
It has two core functions:
Import (migrating data in)
Export (migrating data out)
Importing data: loading data from MySQL, Oracle, etc. into Hadoop data stores such as HDFS, Hive, and HBase.
Exporting data: exporting data from the Hadoop file system to relational databases such as MySQL. At its core, Sqoop is just a command-line tool; compared with HDFS and Hive, there is nothing theoretically deep about it.
sqoop:
A tool whose essence is migrating data; it works by translating Sqoop migration commands into MapReduce programs.
hive:
A tool whose essence is running computations; it relies on HDFS to store data and translates SQL into MapReduce programs.
2. How It Works
Sqoop translates import and export commands into MapReduce programs; in the generated MapReduce jobs, the customization is mainly of the InputFormat and OutputFormat.
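As a rough illustration of both directions (a minimal sketch; the host, database, table, and paths in angle brackets are placeholders, not from this environment), each of the commands below is turned into a map-only MapReduce job:

# import: RDBMS table -> HDFS directory
sqoop import \
--connect jdbc:mysql://<mysql-host>:3306/<db> \
--username <user> -P \
--table <table> \
--target-dir /user/hadoop/<table> \
-m 1

# export: HDFS directory -> RDBMS table
sqoop export \
--connect jdbc:mysql://<mysql-host>:3306/<db> \
--username <user> -P \
--table <table> \
--export-dir /user/hadoop/<table> \
-m 1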
3. Installation
1) Prerequisites
Which systems or components might Sqoop need to interact with when it is used?
HDFS, MapReduce, YARN, ZooKeeper, Hive, HBase, MySQL
Sqoop is just a tool, so it only needs to be installed on a single node.
One note: if your Sqoop installation will move data between MySQL and systems such as Hive or HBase,
the node where Sqoop is installed must also contain the client installations of those cluster components.
Another note: Azkaban, which will be used later, schedules not only Hadoop, HBase, and Hive jobs but also Sqoop jobs,
so the node where Azkaban is installed must likewise contain the clients of those systems.
2) Downloading the software
Download address: http://mirrors.hust.edu.cn/apache/
A note on Sqoop versions:
The vast majority of companies use Sqoop 1.
sqoop-1.4.6 and sqoop-1.4.7 are Sqoop 1;
sqoop-1.99.4 and the other 1.99.x releases are Sqoop 2.
Here we use sqoop-1.4.6: sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
3) Installation steps
(1) Upload the tarball and extract it into the target directory
Since Hive was previously installed only on the hadoop3 machine, Sqoop is installed on hadoop3 as well.
[hadoop@hadoop3 ~]$ tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C apps/
(2) Go into the conf folder and rename sqoop-env-template.sh to sqoop-env.sh
[hadoop@hadoop3 ~]$ cd apps/
[hadoop@hadoop3 apps]$ ls
apache-hive-2.3.3-bin  hadoop-2.7.5  hbase-1.2.6  sqoop-1.4.6.bin__hadoop-2.0.4-alpha  zookeeper-3.4.10
[hadoop@hadoop3 apps]$ mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha/ sqoop-1.4.6
[hadoop@hadoop3 apps]$ cd sqoop-1.4.6/conf/
[hadoop@hadoop3 conf]$ ls
oraoop-site-template.xml  sqoop-env-template.sh  sqoop-site.xml
sqoop-env-template.cmd    sqoop-site-template.xml
[hadoop@hadoop3 conf]$ mv sqoop-env-template.sh sqoop-env.sh
(3) Edit sqoop-env.sh
[hadoop@hadoop3 conf]$ vi sqoop-env.sh
export HADOOP_COMMON_HOME=/home/hadoop/apps/hadoop-2.7.5
#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/home/hadoop/apps/hadoop-2.7.5
#set the path to where bin/hbase is available
export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6
#Set the path to where bin/hive is available
export HIVE_HOME=/home/hadoop/apps/apache-hive-2.3.3-bin
#Set the path for where zookeper config dir is
export ZOOCFGDIR=/home/hadoop/apps/zookeeper-3.4.10/conf
Why does sqoop-env.sh ask for separate common and mapreduce settings?
In a plain Apache Hadoop installation, all four components are installed under the same HADOOP_HOME.
In CDH and HDP, however, these components are optional:
when installing Hadoop you can choose to install only HDFS or only YARN,
so CDH and HDP may end up installing HDFS and MapReduce in different locations.
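As a rough sketch of why the two variables can differ (the parcel paths below are illustrative assumptions about a CDH-style layout, not taken from this environment):

# hypothetical CDH-style layout where HDFS and MapReduce live in separate directories
export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce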
(4) Add the MySQL driver jar to the sqoop-1.4.6/lib directory
[hadoop@hadoop3 ~]$ cp mysql-connector-java-5.1.40-bin.jar apps/sqoop-1.4.6/lib/
(5) Configure the environment variables
[hadoop@hadoop3 ~]$ vi .bashrc
#Sqoop
export SQOOP_HOME=/home/hadoop/apps/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin
Save, exit, and make the change take effect immediately:
[hadoop@hadoop3 ~]$ source .bashrc
(6) Verify the installation
Run sqoop-version or sqoop version.
4. Basic Sqoop Commands
Basic operations
First, run sqoop help to see which commands Sqoop supports:
[hadoop@hadoop3 ~]$ sqoop help
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:37:19 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop COMMAND [ARGS]
Available commands:
codegen Generate code to interact with database records
create-hive-table Import a table definition into Hive
eval Evaluate a SQL statement and display the results
export Export an HDFS directory to a database table
help List available commands
import Import a table from a database to HDFS
import-all-tables Import tables from a database to HDFS
import-mainframe Import datasets from a mainframe server to HDFS
job Work with saved jobs
list-databases List available databases on a server
list-tables List available tables in a database
merge Merge results of incremental imports
metastore Run a standalone Sqoop metastore
version Display version information
See 'sqoop help COMMAND' for information on a specific command.
[hadoop@hadoop3 ~]$
Once you have this list of commands, you can use sqoop help COMMAND to see how a specific command is used, for example:
[hadoop@hadoop3 ~]$ sqoop help import Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. 18/04/12 13:38:29 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6 usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS] Common arguments: --connect <jdbc-uri> Specify JDBC connect string --connection-manager <class-name> Specify connection manager class name --connection-param-file <properties-file> Specify connection parameters file --driver <class-name> Manually specify JDBC driver class to use --hadoop-home <hdir> Override $HADOOP_MAPRED_HOME_ARG --hadoop-mapred-home <dir> Override $HADOOP_MAPRED_HOME_ARG --help Print usage instructions -P Read password from console --password <password> Set authentication password --password-alias <password-alias> Credential provider password alias --password-file <password-file> Set authentication password file path --relaxed-isolation Use read-uncommitted isolation for imports --skip-dist-cache Skip copying jars to distributed cache --username <username> Set authentication username --verbose Print more information while working Import control arguments: --append Imports data in append mode --as-avrodatafile Imports data to Avro data files --as-parquetfile Imports data to Parquet files --as-sequencefile Imports data to SequenceFile s --as-textfile Imports data as plain text (default) --autoreset-to-one-mapper Reset the number of mappers to one mapper if no split key available --boundary-query <statement> Set boundary query for retrieving max and min value of the primary key --columns <col,col,col...> Columns to import from table --compression-codec <codec> Compression codec to use for import --delete-target-dir Imports data in delete mode --direct Use direct import fast path --direct-split-size <n> Split the input stream every 'n' bytes when importing in direct mode -e,--query <statement> Import results of SQL 'statement' --fetch-size <n> Set number 'n' of rows to fetch from the database when more rows are needed --inline-lob-limit <n> Set the maximum size for an inline LOB -m,--num-mappers <n> Use 'n' map tasks to import in parallel --mapreduce-job-name <name> Set name for generated mapreduce job --merge-key <column> Key column to use to join results --split-by <column-name> Column of the table used to split work units --table <table-name> Table to read --target-dir <dir> HDFS plain table destination --validate Validate the copy using the configured validator --validation-failurehandler <validation-failurehandler> Fully qualified class name for ValidationFa ilureHandler --validation-threshold <validation-threshold> Fully qualified class name for ValidationTh reshold --validator <validator> Fully qualified class name for the Validator --warehouse-dir <dir> HDFS parent for table destination --where <where clause> WHERE clause to use during import -z,--compress Enable compression Incremental import arguments: --check-column <column> Source column to check for incremental change --incremental <import-type> Define an incremental import of type 'append' or 'lastmodified' --last-value <value> Last imported value in the incremental check column Output line formatting arguments: --enclosed-by <char> Sets a required field enclosing character --escaped-by <char> Sets the escape character --fields-terminated-by <char> 
Sets the field separator character --lines-terminated-by <char> Sets the end-of-line character --mysql-delimiters Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: ' --optionally-enclosed-by <char> Sets a field enclosing character Input parsing arguments: --input-enclosed-by <char> Sets a required field encloser --input-escaped-by <char> Sets the input escape character --input-fields-terminated-by <char> Sets the input field separator --input-lines-terminated-by <char> Sets the input end-of-line char --input-optionally-enclosed-by <char> Sets a field enclosing character Hive arguments: --create-hive-table Fail if the target hive table exists --hive-database <database-name> Sets the database name to use when importing to hive --hive-delims-replacement <arg> Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string --hive-drop-import-delims Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields --hive-home <dir> Override $HIVE_HOME --hive-import Import tables into Hive (Uses Hive's default delimiters if none are set.) --hive-overwrite Overwrite existing data in the Hive table --hive-partition-key <partition-key> Sets the partition key to use when importing to hive --hive-partition-value <partition-value> Sets the partition value to use when importing to hive --hive-table <table-name> Sets the table name to use when importing to hive --map-column-hive <arg> Override mapping for specific column to hive types. HBase arguments: --column-family <family> Sets the target column family for the import --hbase-bulkload Enables HBase bulk loading --hbase-create-table If specified, create missing HBase tables --hbase-row-key <col> Specifies which input column to use as the row key --hbase-table <table> Import to <table> in HBase HCatalog arguments: --hcatalog-database <arg> HCatalog database name --hcatalog-home <hdir> Override $HCAT_HOME --hcatalog-partition-keys <partition-key> Sets the partition keys to use when importing to hive --hcatalog-partition-values <partition-value> Sets the partition values to use when importing to hive --hcatalog-table <arg> HCatalog table name --hive-home <dir> Override $HIVE_HOME --hive-partition-key <partition-key> Sets the partition key to use when importing to hive --hive-partition-value <partition-value> Sets the partition value to use when importing to hive --map-column-hive <arg> Override mapping for specific column to hive types. HCatalog import specific options: --create-hcatalog-table Create HCatalog before import --hcatalog-storage-stanza <arg> HCatalog storage stanza for table creation Accumulo arguments: --accumulo-batch-size <size> Batch size in bytes --accumulo-column-family <family> Sets the target column family for the import --accumulo-create-table If specified, create missing Accumulo tables --accumulo-instance <instance> Accumulo instance name. --accumulo-max-latency <latency> Max write latency in milliseconds --accumulo-password <password> Accumulo password. --accumulo-row-key <col> Specifies which input column to use as the row key --accumulo-table <table> Import to <table> in Accumulo --accumulo-user <user> Accumulo user name. --accumulo-visibility <vis> Visibility token to be applied to all rows imported --accumulo-zookeepers <zookeepers> Comma-separated list of zookeepers (host:port) Code generation arguments: --bindir <dir> Output directory for compiled objects --class-name <name> Sets the generated class name. 
This overrides --package-name. When combined with --jar-file, sets the input class. --input-null-non-string <null-str> Input null non-string representation --input-null-string <null-str> Input null string representation --jar-file <file> Disable code generation; use specified jar --map-column-java <arg> Override mapping for specific columns to java types --null-non-string <null-str> Null non-string representation --null-string <null-str> Null string representation --outdir <dir> Output directory for generated code --package-name <name> Put auto-generated classes in this package Generic Hadoop command-line arguments: (must preceed any tool-specific arguments) Generic options supported are -conf <configuration file> specify an application configuration file -D <property=value> use value for given property -fs <local|namenode:port> specify a namenode -jt <local|resourcemanager:port> specify a ResourceManager -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath. -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines. The general command line syntax is bin/hadoop command [genericOptions] [commandOptions] At minimum, you must specify --connect and --table Arguments to mysqldump and other subprograms may be supplied after a '--' on the command line. [hadoop@hadoop3 ~]$
Examples
List the databases in MySQL:
[hadoop@hadoop3 ~]$ sqoop list-databases \
> --connect jdbc:mysql://hadoop1:3306/ \
> --username root \
> --password root
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:43:51 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:43:51 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:43:51 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
hivedb
mysql
performance_schema
test
[hadoop@hadoop3 ~]$
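The warning above points out that passing the password on the command line is insecure. A safer variant (a sketch against the same connection; -P prompts for the password interactively, and --password-file, if you prefer, reads it from a file path you supply):

sqoop list-databases \
--connect jdbc:mysql://hadoop1:3306/ \
--username root \
-P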
List the tables in a given MySQL database:
[hadoop@hadoop3 ~]$ sqoop list-tables \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:46:21 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:46:21 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:46:21 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
columns_priv
db
event
func
general_log
help_category
help_keyword
help_relation
help_topic
innodb_index_stats
innodb_table_stats
ndb_binlog_index
plugin
proc
procs_priv
proxies_priv
servers
slave_master_info
slave_relay_log_info
slave_worker_info
slow_log
tables_priv
time_zone
time_zone_leap_second
time_zone_name
time_zone_transition
time_zone_transition_type
user
[hadoop@hadoop3 ~]$
Create a Hive table hk with the same structure as the help_keyword table in MySQL:
sqoop create-hive-table \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--hive-table hk
[hadoop@hadoop3 ~]$ sqoop create-hive-table \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root \
> --table help_keyword \
> --hive-table hk
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:50:20 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:50:20 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:50:20 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
18/04/12 13:50:20 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
18/04/12 13:50:20 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/04/12 13:50:21 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 13:50:21 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/12 13:50:23 INFO hive.HiveImport: Loading uploaded data into Hive
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Class path contains multiple SLF4J bindings.
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/apache-hive-2.3.3-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
18/04/12 13:50:36 INFO hive.HiveImport:
18/04/12 13:50:36 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/home/hadoop/apps/apache-hive-2.3.3-bin/lib/hive-common-2.3.3.jar!/hive-log4j2.properties Async: true
18/04/12 13:50:50 INFO hive.HiveImport: OK
18/04/12 13:50:50 INFO hive.HiveImport: Time taken: 11.651 seconds
18/04/12 13:50:51 INFO hive.HiveImport: Hive import complete.
[hadoop@hadoop3 ~]$
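To confirm that the table exists (a quick hedged check; hk was created in the default Hive database by the command above):

hive -e "show tables; describe hk;"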
5. Importing Data with Sqoop
The import tool imports a single table from an RDBMS into HDFS. Each row of the table becomes a record in HDFS. All records are stored as text in text files or as binary data in Avro or SequenceFiles.
1) Importing from an RDBMS into HDFS
Syntax
sqoop import (generic-args) (import-args)
Common arguments
--connect <jdbc-uri>                    JDBC connection string
--connection-manager <class-name>       connection manager class
--driver <class-name>                   manually specify the JDBC driver class
--hadoop-mapred-home <dir>              override $HADOOP_MAPRED_HOME
--help                                  print usage information
-P                                      read the password from the console
--password <password>                   authentication password
--username <username>                   authentication username
--verbose                               print more information while working
--connection-param-file <filename>      optional connection parameters file
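For instance, if Sqoop cannot choose a connection manager from the JDBC URL on its own, the driver class can be named explicitly (a sketch; com.mysql.jdbc.Driver is the class shipped in the mysql-connector-java jar copied into lib/ earlier, and -P keeps the password off the command line):

sqoop list-tables \
--connect jdbc:mysql://hadoop1:3306/mysql \
--driver com.mysql.jdbc.Driver \
--username root \
-P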
Examples
Basic import: import the help_keyword table from MySQL's mysql database into HDFS.
Default import path: /user/hadoop/help_keyword
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
-m 1
[hadoop@hadoop3 ~]$ sqoop import \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root \
> --table help_keyword \
> -m 1
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:53:48 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:53:48 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:53:48 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/04/12 13:53:48 INFO tool.CodeGenTool: Beginning code generation
18/04/12 13:53:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 13:53:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 13:53:49 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/apps/hadoop-2.7.5
Note: /tmp/sqoop-hadoop/compile/979d87b9521d0a09ee6620060a112d60/help_keyword.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/04/12 13:53:51 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/979d87b9521d0a09ee6620060a112d60/help_keyword.jar
18/04/12 13:53:51 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/04/12 13:53:51 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/04/12 13:53:51 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/04/12 13:53:51 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/04/12 13:53:51 INFO mapreduce.ImportJobBase: Beginning import of help_keyword
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/12 13:53:52 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/04/12 13:53:53 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/04/12 13:53:58 INFO db.DBInputFormat: Using read commited transaction isolation
18/04/12 13:53:58 INFO mapreduce.JobSubmitter: number of splits:1
18/04/12 13:53:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523510178850_0001
18/04/12 13:54:00 INFO impl.YarnClientImpl: Submitted application application_1523510178850_0001
18/04/12 13:54:00 INFO mapreduce.Job: The url to track the job: http://hadoop3:8088/proxy/application_1523510178850_0001/
18/04/12 13:54:00 INFO mapreduce.Job: Running job: job_1523510178850_0001
18/04/12 13:54:17 INFO mapreduce.Job: Job job_1523510178850_0001 running in uber mode : false
18/04/12 13:54:17 INFO mapreduce.Job: map 0% reduce 0%
18/04/12 13:54:33 INFO mapreduce.Job: map 100% reduce 0%
18/04/12 13:54:34 INFO mapreduce.Job: Job job_1523510178850_0001 completed successfully
18/04/12 13:54:35 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=142965
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=8264
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=12142
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=12142
Total vcore-milliseconds taken by all map tasks=12142
Total megabyte-milliseconds taken by all map tasks=12433408
Map-Reduce Framework
Map input records=619
Map output records=619
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=123
CPU time spent (ms)=1310
Physical memory (bytes) snapshot=93212672
Virtual memory (bytes) snapshot=2068234240
Total committed heap usage (bytes)=17567744
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=8264
18/04/12 13:54:35 INFO mapreduce.ImportJobBase: Transferred 8.0703 KB in 41.8111 seconds (197.6507 bytes/sec)
18/04/12 13:54:35 INFO mapreduce.ImportJobBase: Retrieved 619 records.
[hadoop@hadoop3 ~]$
View the imported file:
[hadoop@hadoop4 ~]$ hadoop fs -cat /user/hadoop/help_keyword/part-m-00000
Import with a custom field separator and a specified target directory
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--target-dir /user/hadoop11/my_help_keyword1 \
--fields-terminated-by '\t' \
-m 2
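To check the result under the custom path (a hedged check; the part-m-* file names follow the usual MapReduce output naming):

hadoop fs -cat /user/hadoop11/my_help_keyword1/part-m-*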
Import with a WHERE condition
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--where "name='STRING' " \
--table help_keyword \
--target-dir /sqoop/hadoop11/myoutport1 \
-m 1
Import specific columns
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--columns "name" \
--where "name='STRING' " \
--table help_keyword \
--target-dir /sqoop/hadoop11/myoutport22 \
-m 1
Equivalent SQL: select name from help_keyword where name = 'STRING'
Import with a custom SQL query
sqoop import \
--connect jdbc:mysql://hadoop1:3306/ \
--username root \
--password root \
--target-dir /user/hadoop/myimport33_1 \
--query 'select help_keyword_id,name from mysql.help_keyword where $CONDITIONS and name = "STRING"' \
--split-by help_keyword_id \
--fields-terminated-by '\t' \
-m 4
When exporting data to HDFS with a custom SQL statement as above:
1. Quoting: either wrap the query in single quotes and use double quotes inside it, in which case the $ in $CONDITIONS does not need to be escaped; or wrap the query in double quotes and use single quotes inside it, in which case the $ must be escaped as \$CONDITIONS.
2. The custom SQL statement must contain WHERE $CONDITIONS (escaped as \$CONDITIONS under the double-quote rule).
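For comparison with the single-quoted command above, the same import with outer double quotes would look like this (a sketch; note the escaped \$CONDITIONS, and the target directory is a fresh placeholder):

sqoop import \
--connect jdbc:mysql://hadoop1:3306/ \
--username root \
--password root \
--target-dir /user/hadoop/myimport33_2 \
--query "select help_keyword_id,name from mysql.help_keyword where \$CONDITIONS and name = 'STRING'" \
--split-by help_keyword_id \
--fields-terminated-by '\t' \
-m 4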
2) Importing MySQL table data into Hive
Sqoop imports relational data into Hive by first importing it into HDFS and then loading it into Hive.
Basic import: the data goes into the default Hive database, and the Hive table name matches the MySQL table name:
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--hive-import \
-m 1
The import proceeds in three steps (a rough Hive-side equivalent is sketched after this list):
Step 1: import the data of mysql.help_keyword into the default HDFS path.
Step 2: automatically create a Hive table modeled on mysql.help_keyword in the default database.
Step 3: load the data from the temporary directory into that Hive table.
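Roughly, steps 2 and 3 correspond to the Hive statements below (a sketch of what Sqoop does on your behalf, assuming the default staging path /user/hadoop/help_keyword and Sqoop's default Hive field delimiter \001; the column types are simplified):

hive -e "
CREATE TABLE IF NOT EXISTS help_keyword (help_keyword_id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001';
LOAD DATA INPATH '/user/hadoop/help_keyword' INTO TABLE help_keyword;"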
View the data:
[hadoop@hadoop3 ~]$ hadoop fs -cat /user/hive/warehouse/help_keyword/part-m-00000
Import specifying the field and line separators, --hive-import, overwrite, automatic Hive table creation, the target database and table name, and deletion of the intermediate target directory:
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--fields-terminated-by "\t" \
--lines-terminated-by "\n" \
--hive-import \
--hive-overwrite \
--create-hive-table \
--delete-target-dir \
--hive-database mydb_test \
--hive-table new_help_keyword
Running this command fails: with --hive-import, Sqoop automatically creates the Hive table, but it does not automatically create a Hive database that does not exist.
Manually create the mydb_test database:
hive> create database mydb_test;
OK
Time taken: 6.147 seconds
hive>
After that, re-running the command above completes without errors.
Query the result:
select * from new_help_keyword limit 10;
The import command above is equivalent to:
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--fields-terminated-by "\t" \
--lines-terminated-by "\n" \
--hive-import \
--hive-overwrite \
--create-hive-table \
--hive-table mydb_test.new_help_keyword \
--delete-target-dir
Incremental import
Before running the incremental import, first empty the help_keyword table in the Hive database:
truncate table help_keyword;
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--target-dir /user/hadoop/myimport_add \
--incremental append \
--check-column help_keyword_id \
--last-value 500 \
-m 1
The command runs successfully:
[hadoop@hadoop3 ~]$ sqoop import \ > --connect jdbc:mysql://hadoop1:3306/mysql \ > --username root \ > --password root \ > --table help_keyword \ > --target-dir /user/hadoop/myimport_add \ > --incremental append \ > --check-column help_keyword_id \ > --last-value 500 \ > -m 1 Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation. Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation. 18/04/12 22:01:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6 18/04/12 22:01:08 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. 18/04/12 22:01:08 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset. 18/04/12 22:01:08 INFO tool.CodeGenTool: Beginning code generation 18/04/12 22:01:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1 18/04/12 22:01:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1 18/04/12 22:01:08 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/apps/hadoop-2.7.5 注: /tmp/sqoop-hadoop/compile/a51619d1ef8c6e4b112a209326ed9e0f/help_keyword.java使用或覆蓋了已過期的 API。 注: 有關詳細信息, 請使用 -Xlint:deprecation 從新編譯。 18/04/12 22:01:11 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/a51619d1ef8c6e4b112a209326ed9e0f/help_keyword.jar SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 18/04/12 22:01:12 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`help_keyword_id`) FROM `help_keyword` 18/04/12 22:01:12 INFO tool.ImportTool: Incremental import based on column `help_keyword_id` 18/04/12 22:01:12 INFO tool.ImportTool: Lower bound value: 500 18/04/12 22:01:12 INFO tool.ImportTool: Upper bound value: 618 18/04/12 22:01:12 WARN manager.MySQLManager: It looks like you are importing from mysql. 18/04/12 22:01:12 WARN manager.MySQLManager: This transfer can be faster! Use the --direct 18/04/12 22:01:12 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path. 18/04/12 22:01:12 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql) 18/04/12 22:01:12 INFO mapreduce.ImportJobBase: Beginning import of help_keyword 18/04/12 22:01:12 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar 18/04/12 22:01:12 INFO Configuration.deprecation: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps 18/04/12 22:01:17 INFO db.DBInputFormat: Using read commited transaction isolation 18/04/12 22:01:17 INFO mapreduce.JobSubmitter: number of splits:1 18/04/12 22:01:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523510178850_0010 18/04/12 22:01:19 INFO impl.YarnClientImpl: Submitted application application_1523510178850_0010 18/04/12 22:01:19 INFO mapreduce.Job: The url to track the job: http://hadoop3:8088/proxy/application_1523510178850_0010/ 18/04/12 22:01:19 INFO mapreduce.Job: Running job: job_1523510178850_0010 18/04/12 22:01:30 INFO mapreduce.Job: Job job_1523510178850_0010 running in uber mode : false 18/04/12 22:01:30 INFO mapreduce.Job: map 0% reduce 0% 18/04/12 22:01:40 INFO mapreduce.Job: map 100% reduce 0% 18/04/12 22:01:40 INFO mapreduce.Job: Job job_1523510178850_0010 completed successfully 18/04/12 22:01:41 INFO mapreduce.Job: Counters: 30 File System Counters FILE: Number of bytes read=0 FILE: Number of bytes written=143200 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=87 HDFS: Number of bytes written=1576 HDFS: Number of read operations=4 HDFS: Number of large read operations=0 HDFS: Number of write operations=2 Job Counters Launched map tasks=1 Other local map tasks=1 Total time spent by all maps in occupied slots (ms)=7188 Total time spent by all reduces in occupied slots (ms)=0 Total time spent by all map tasks (ms)=7188 Total vcore-milliseconds taken by all map tasks=7188 Total megabyte-milliseconds taken by all map tasks=7360512 Map-Reduce Framework Map input records=118 Map output records=118 Input split bytes=87 Spilled Records=0 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=86 CPU time spent (ms)=870 Physical memory (bytes) snapshot=95576064 Virtual memory (bytes) snapshot=2068234240 Total committed heap usage (bytes)=18608128 File Input Format Counters Bytes Read=0 File Output Format Counters Bytes Written=1576 18/04/12 22:01:41 INFO mapreduce.ImportJobBase: Transferred 1.5391 KB in 28.3008 seconds (55.6875 bytes/sec) 18/04/12 22:01:41 INFO mapreduce.ImportJobBase: Retrieved 118 records. 18/04/12 22:01:41 INFO util.AppendUtils: Creating missing output directory - myimport_add 18/04/12 22:01:41 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments: 18/04/12 22:01:41 INFO tool.ImportTool: --incremental append 18/04/12 22:01:41 INFO tool.ImportTool: --check-column help_keyword_id 18/04/12 22:01:41 INFO tool.ImportTool: --last-value 618 18/04/12 22:01:41 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create') [hadoop@hadoop3 ~]$
Check the result:
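A quick way to look at the appended rows (a hedged check; the directory comes from --target-dir above, and only rows with help_keyword_id greater than 500 should appear):

hadoop fs -cat /user/hadoop/myimport_add/part-m-*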
3) Importing MySQL table data into HBase
Basic import
sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--hbase-table new_help_keyword \
--column-family person \
--hbase-row-key help_keyword_id
This fails at first, because the HBase table must exist before the import runs; create the table in the HBase shell and then rerun the import. Make sure the column family you create matches the one passed via --column-family (here, person).
hbase(main):001:0> create 'new_help_keyword', 'base_info'
0 row(s) in 3.6280 seconds

=> Hbase::Table - new_help_keyword
hbase(main):002:0>
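After the import succeeds, you can spot-check a few rows from the HBase shell (a hedged check; the table name comes from --hbase-table above):

echo "scan 'new_help_keyword', {LIMIT => 5}" | hbase shell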