When an HBase table is first created, it has only one region. Once a region grows too large (reaching the threshold defined by the hbase.hregion.max.filesize property, 10 GB by default), the region is split into two. Splitting consumes a great deal of resources, and frequent splits have a significant impact on HBase performance.
HBase therefore provides pre-splitting: when creating a table, the user can partition it up front according to a chosen set of split keys (N split keys produce N+1 regions).
This reduces the resource cost of later region splits and so improves HBase performance.
===Method 1===
Create the table through the HBase shell. Sample commands:
create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
create 't1', {NAME => 'f1', TTL => 180}, SPLITS => ['10', '20', '30', '40']
create 't1', {NAME => 'f1', TTL => 180}, {NAME => 'f2', TTL => 240}, SPLITS => ['10', '20', '30', '40']
Command screenshot:
Viewing the table structure in the Web UI:
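A note on how the split keys behave: keys are compared as raw byte strings, so the four keys above produce five regions covering (-∞, '10'), ['10', '20'), ['20', '30'), ['30', '40'), and ['40', +∞). The sketch below is my own illustration of that boundary rule, using the client's Bytes utility:

import org.apache.hadoop.hbase.util.Bytes;

public class locate_region {
    public static void main(String[] args) {
        byte[][] splitKeys = {
                Bytes.toBytes("10"), Bytes.toBytes("20"),
                Bytes.toBytes("30"), Bytes.toBytes("40"),
        };
        byte[] rowKey = Bytes.toBytes("25");
        // A row belongs to the first region whose end key is strictly greater than the row key.
        int region = 0;
        while (region < splitKeys.length && Bytes.compareTo(rowKey, splitKeys[region]) >= 0) {
            region++;
        }
        System.out.println("rowkey '25' falls in region #" + region); // prints: region #2
    }
}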
===Method 2===
Still created through the HBase shell, but this time the split keys are read from a file.
1. Create a file holding the split keys; it can live anywhere on disk. Mine is here:
Path: /home/hadmin/hbase-1.3.1/txt/splits.txt
The file simply lists one split key per line.
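For illustration only (the original post shows the file in a screenshot), assuming the same split keys as Method 1, splits.txt would contain:

10
20
30
40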
2. Create the table through the HBase shell.
Sample commands:
create 't1', 'f1', SPLITS_FILE => '/home/hadmin/hbase-1.3.1/txt/splits.txt'
create 't1', {NAME => 'f1', TTL => 180}, SPLITS_FILE => '/home/hadmin/hbase-1.3.1/txt/splits.txt'
create 't1', {NAME => 'f1', TTL => 180}, {NAME => 'f2', TTL => 240}, SPLITS_FILE => '/home/hadmin/hbase-1.3.1/txt/splits.txt'
操做截圖:
Web界面結果:
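The shell's SPLITS_FILE option has no direct counterpart in the Java API, but the same effect is easy to reproduce: read the keys yourself and pass them to createTable(). A minimal sketch, assuming the splits.txt path above and the same ZooKeeper quorum as the Method 3 sample below; the class name is mine:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class create_table_from_splits_file {
    public static void main(String[] args) throws Exception {
        // Read one split key per line, mirroring what SPLITS_FILE does.
        List<String> lines = Files.readAllLines(Paths.get("/home/hadmin/hbase-1.3.1/txt/splits.txt"));
        byte[][] splitKeys = new byte[lines.size()][];
        for (int i = 0; i < lines.size(); i++) {
            splitKeys[i] = Bytes.toBytes(lines.get(i).trim());
        }

        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.1.80,192.168.1.81,192.168.1.82");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t1"));
            desc.addFamily(new HColumnDescriptor("f1"));
            admin.createTable(desc, splitKeys);
        }
    }
}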
===Method 3===
Create the table through the Java API. Sample code:
package api;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class create_table_sample2 {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.1.80,192.168.1.81,192.168.1.82");
        Connection connection = ConnectionFactory.createConnection(conf);
        Admin admin = connection.getAdmin();

        // Drop the table first if it already exists.
        TableName table_name = TableName.valueOf("TEST1");
        if (admin.tableExists(table_name)) {
            admin.disableTable(table_name);
            admin.deleteTable(table_name);
        }

        HTableDescriptor desc = new HTableDescriptor(table_name);
        // "f1" substituted for the undefined constants.COLUMN_FAMILY_DF in the original.
        HColumnDescriptor family1 = new HColumnDescriptor(Bytes.toBytes("f1"));
        family1.setTimeToLive(3 * 60 * 60 * 24);    // TTL: expire after 3 days
        family1.setMaxVersions(3);                  // keep up to 3 versions
        desc.addFamily(family1);

        // Two split keys -> the table starts with three regions.
        byte[][] splitKeys = {
                Bytes.toBytes("row01"),
                Bytes.toBytes("row02"),
        };
        admin.createTable(desc, splitKeys);

        admin.close();
        connection.close();
    }
}
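To verify that the pre-split took effect, the region boundaries can be listed through a RegionLocator. A short sketch of my own, reading back the TEST1 table created above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class list_region_boundaries {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.1.80,192.168.1.81,192.168.1.82");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             RegionLocator locator = connection.getRegionLocator(TableName.valueOf("TEST1"))) {
            // With the two split keys above, this prints three start keys.
            for (byte[] startKey : locator.getStartKeys()) {
                // The first region's start key is empty: the open-ended lower bound.
                System.out.println("region start key: " + Bytes.toStringBinary(startKey));
            }
        }
    }
}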
--END--