Hive Command Operations (Part 1)

1. Prepare a text file and start Hadoop
[root@hadoop0 ~]# cat /opt/test.txt
JieJie
MengMeng
NingNing
JingJing
FengJie
[root@hadoop0 ~]# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out
localhost: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-hadoop0.out
localhost: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out
localhost: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop0.out
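Before moving on, it is worth confirming that the daemons actually came up. A quick check (jps ships with the JDK; the exact process list depends on your setup):
[root@hadoop0 ~]# jps
On this single-node layout it should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, plus Jps itself.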
2. Enter the Hive command line
[root@hadoop0 ~]# hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201509252001_1674268419.txt
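The CLI can also run a single statement without entering the interactive shell, which is handy for scripting; a minimal sketch (any statement works in place of show databases):
[root@hadoop0 ~]# hive -e 'show databases;'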
3. Query the table created yesterday
hive> select * from stu;
OK
JieJie 26       NULL
MM 24   NULL
Time taken: 17.05 seconds
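The trailing NULL most likely means the data file behind stu has fewer fields than the table declares; Hive fills missing fields with NULL. To check the declared schema (a sketch; the actual column names depend on how stu was created):
hive> describe stu;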
4. Show databases
hive> show databases;
OK
default
Time taken: 0.237 seconds
5. Create a database
hive> create database test;
OK
Time taken: 0.259 seconds
hive> show databases;       
OK
default
test
Time taken: 0.119 seconds
6. Use the database
hive> use test;
OK
Time taken: 0.03 seconds
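A freshly created database contains no tables yet; you can confirm with:
hive> show tables;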
7. Create tables with different storage formats
TEXTFILE is the default format. Data is stored uncompressed, so disk usage is high and parsing is expensive.
It can be combined with Gzip or Bzip2 (Hive detects the compression and decompresses automatically at query time), but files stored this way cannot be split, so Hive cannot process them in parallel.
SequenceFile is a binary file format provided by the Hadoop API. It is easy to use, splittable, and compressible.
SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compression yields a poor compression ratio, so BLOCK compression is generally recommended.
RCFile combines row and column storage. It first partitions the data into row groups, guaranteeing that a whole record lives in a single block so that reading one record never touches multiple blocks; within each row group the data is stored column by column, which improves compression and makes column access fast.
hive>  create table test1(str STRING)  STORED AS TEXTFILE; 
OK
Time taken: 0.598 seconds
-- Load data into the table
hive> LOAD DATA LOCAL INPATH '/opt/test.txt' INTO TABLE test1; 
Copying data from file:/opt/test.txt
Copying file: file:/opt/test.txt
Loading data to table test.test1
OK
Time taken: 1.657 seconds
hive> select * from test1;
OK
JieJie
MengMeng
NingNing
JingJing
FengJie
Time taken: 0.388 seconds
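Note that LOAD DATA ... INTO TABLE appends to whatever the table already holds; to replace the contents instead, use the OVERWRITE variant (a sketch, re-using the same file):
hive> LOAD DATA LOCAL INPATH '/opt/test.txt' OVERWRITE INTO TABLE test1;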
hive> select count(*) from test1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201509252000_0001, Tracking URL = http://hadoop0:50030/jobdetails.jsp?jobid=job_201509252000_0001
Kill Command = /opt/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=hadoop0:9001 -kill job_201509252000_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-09-25 20:09:55,796 Stage-1 map = 0%,  reduce = 0%
2015-09-25 20:10:19,806 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.67 sec
2015-09-25 20:10:53,218 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.95 sec
2015-09-25 20:10:54,223 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.95 sec
MapReduce Total cumulative CPU time: 6 seconds 950 msec
Ended Job = job_201509252000_0001
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 6.95 sec   HDFS Read: 258 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 950 msec
OK
5
Time taken: 77.515 seconds
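As noted in step 7, a TEXTFILE table can read Gzip-compressed data directly, at the cost of losing splittability (the whole file goes to a single mapper). A minimal sketch, assuming a hypothetical table test_gz and a compressed copy of the sample file:
[root@hadoop0 ~]# gzip -c /opt/test.txt > /opt/test.txt.gz
hive> create table test_gz(str STRING) STORED AS TEXTFILE;
hive> LOAD DATA LOCAL INPATH '/opt/test.txt.gz' INTO TABLE test_gz;
hive> select * from test_gz;   -- Hive decompresses the .gz transparently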


-- Recap: TEXTFILE is the default storage format, so these two definitions are equivalent
create table test1(str STRING) STORED AS TEXTFILE;
create table test2(str STRING);
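To verify that test2 really defaults to TEXTFILE, inspect its metadata (a sketch; look for TextInputFormat in the output):
hive> describe extended test2;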
hive> create table test3(str STRING)  STORED AS SEQUENCEFILE;
OK
Time taken: 0.112 seconds
 
hive> create table test4(str STRING)  STORED AS RCFILE; 
OK
Time taken: 0.502 seconds
8. Load the old table's data into a new table
hive> INSERT OVERWRITE TABLE test4 SELECT * FROM test1;
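To verify the copy (test4 is the RCFILE table created in step 7; the five names from test1 should come back):
hive> select * from test4;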
9. Set Hive parameters
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
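With output compression on and the SequenceFile compression type set to BLOCK, rows written into a SEQUENCEFILE table are block-compressed. A sketch of the presumable next step, loading test3 from test1 under these settings:
hive> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;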
10. View Hive parameters
hive> SET;
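SET with no argument dumps every variable, which is long; passing just a name prints that single value (a sketch):
hive> SET hive.exec.compress.output;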
