Reposted from: http://www.cnblogs.com/chenz/articles/3229997.html
In a telecom project, HBase was used to store detailed user-terminal records for real-time queries from the front-end pages. HBase undeniably has its strengths, but on its own it only provides millisecond-level lookups by rowkey and is helpless when it comes to combined multi-field queries. There are several schemes for multi-condition queries against HBase, but they are either too complex or too inefficient; this article only tests and validates the Solr-based multi-condition query scheme for HBase.
The principle behind Solr-based multi-condition queries for HBase is simple: index the rowkey together with the fields used in filter conditions in Solr, use a Solr multi-condition query to quickly obtain the rowkeys of the matching rows, and then query HBase directly with those rowkeys.
Solr 4.0.0, single node, running in its bundled Jetty servlet container;
hbase-0.94.2-cdh4.2.1, an HBase cluster of 10 Linux servers.
The HBase table holds 25.12 million rows with 172 fields;
Solr indexes 1 million of those HBase rows.
1. With the 1 million rows indexed on 8 fields in Solr: a query with up to 8 filter conditions returns the rowkeys of 51,316 matching rows in roughly 57-80 ms; fetching all 51,316 rows (12 fields each) from HBase by those rowkeys then takes about 15 seconds.
2. Same data volume and same filter conditions, but using Solr paged queries that fetch 20 rows at a time: obtaining 20 rowkeys from Solr takes 4-10 ms, and fetching the corresponding 20 rows (12 fields each) from HBase with those rowkeys takes 6 ms.
Note that the original intent was only to try Solr out: the Solr runtime was just the bundled Jetty rather than the Tomcat most people use, there was no Solr cluster (only a single Solr server), and no parameter tuning was done.
1) Download Solr 4 from the Apache website at http://lucene.apache.org/solr/downloads.html; here we downloaded "apache-solr-4.0.0.tgz";
2) Extract the Solr archive in the current directory:
cd /opt
tar -xvzf apache-solr-4.0.0.tgz
3) Edit Solr's configuration file schema.xml (located under "/opt/apache-solr-4.0.0/example/solr/collection1/conf/") and add the fields we need to index:
<field name="rowkey" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="time" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="tebid" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="tetid" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="puid" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="mgcvid" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="mtcvid" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="smaid" type="string" indexed="true" stored="true" required="false" multiValued="false" /> <field name="mtlkid" type="string" indexed="true" stored="true" required="false" multiValued="false" />
Another key point is to change the existing uniqueKey; here the HBase table's rowkey field is used as Solr's uniqueKey:
<uniqueKey>rowkey</uniqueKey>
The type attribute specifies the index data type. Here every field is set to string to keep malformed values from breaking index creation; normally it should match the actual field type — for example, declaring integer fields as int makes both indexing and retrieval more efficient;
The indexed attribute controls whether the field is indexed; set it according to actual needs, and it is recommended to set it to false for every field that never appears in filter conditions;
The stored attribute controls whether the field value is stored; to avoid wasting storage, set it to true only for fields whose values need to be retrieved. In our scenario only the rowkey needs to be returned, so only the rowkey field requires stored=true and every other field can be set to false;
The required attribute marks the field as mandatory; if a field in the data source may contain empty values, this attribute must be set to false, otherwise Solr throws an exception;
The multiValued attribute controls whether the field may hold multiple values; it is usually false, but can be set to true where the data requires it. A hypothetical schema fragment applying these recommendations is sketched below.
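For example, if the time field always held integer values and only the rowkey ever needed to be returned, the field entries could look like the hypothetical variant below (it assumes the int field type defined in the Solr 4.0 example schema; the original test kept every field as string and stored):

<field name="rowkey" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<!-- hypothetical variant: index time as an integer and do not store its value -->
<field name="time" type="int" indexed="true" stored="false" required="false" multiValued="false" />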
4) We use Solr's bundled example as the runtime environment; change to the example directory and start the service:
cd /opt/apache-solr-4.0.0/example
java -jar ./start.jar
If it starts successfully, the admin page can be opened in a browser at http://192.168.1.10:8983/solr/
One option is to build the index by reading the data through HBase's ordinary client API. Its drawback is low throughput — only a hundred or so rows per second (multithreading might improve this; a sketch follows the code below):
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrIndexer {

    /**
     * @param args
     * @throws IOException
     * @throws SolrServerException
     */
    public static void main(String[] args) throws IOException,
            SolrServerException {
        final Configuration conf;
        HttpSolrServer solrServer = new HttpSolrServer(
                "http://192.168.1.10:8983/solr"); // the server runs in Solr's bundled Jetty container; the default port is 8983
        conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "hb_app_xxxxxx"); // name of the HBase table
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("d")); // column family of the HBase table
        scan.setCaching(500);
        scan.setCacheBlocks(false);
        ResultScanner ss = table.getScanner(scan);

        System.out.println("start ...");
        int i = 0;
        try {
            for (Result r : ss) {
                SolrInputDocument solrDoc = new SolrInputDocument();
                solrDoc.addField("rowkey", new String(r.getRow()));
                for (KeyValue kv : r.raw()) {
                    String fieldName = new String(kv.getQualifier());
                    String fieldValue = new String(kv.getValue());
                    if (fieldName.equalsIgnoreCase("time")
                            || fieldName.equalsIgnoreCase("tebid")
                            || fieldName.equalsIgnoreCase("tetid")
                            || fieldName.equalsIgnoreCase("puid")
                            || fieldName.equalsIgnoreCase("mgcvid")
                            || fieldName.equalsIgnoreCase("mtcvid")
                            || fieldName.equalsIgnoreCase("smaid")
                            || fieldName.equalsIgnoreCase("mtlkid")) {
                        solrDoc.addField(fieldName, fieldValue);
                    }
                }
                solrServer.add(solrDoc);
                solrServer.commit(true, true, true);
                i = i + 1;
                System.out.println("Successfully processed " + i + " rows");
            }
            System.out.println("done !");
        } catch (IOException e) {
            System.out.println("error !");
        } finally {
            ss.close();
            table.close();
        }
    }
}
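On the multithreading idea mentioned above: one untested option is to swap HttpSolrServer for SolrJ's ConcurrentUpdateSolrServer, which queues documents and ships them to Solr with several background threads, committing once at the end instead of after every row. A minimal sketch follows (the queue size, thread count, and loop contents are made-up placeholders, not part of the original test):

package com.ultrapower.hbase.solrhbase;

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ConcurrentSolrIndexerSketch {

    public static void main(String[] args) throws IOException, SolrServerException {
        // Queue up to 1000 documents and flush them with 4 background threads (made-up values)
        ConcurrentUpdateSolrServer solrServer = new ConcurrentUpdateSolrServer(
                "http://192.168.1.10:8983/solr", 1000, 4);

        // In the real indexer this loop would iterate over the HBase ResultScanner as above
        for (int i = 0; i < 10000; i++) {
            SolrInputDocument solrDoc = new SolrInputDocument();
            solrDoc.addField("rowkey", "row-" + i); // placeholder rowkey
            solrDoc.addField("time", "201307");     // placeholder field value
            solrServer.add(solrDoc);                // add() only queues the document, no per-row commit
        }

        solrServer.blockUntilFinished(); // wait for the background threads to drain the queue
        solrServer.commit();             // commit once at the end instead of after every document
        solrServer.shutdown();
    }
}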
The other option uses HBase's MapReduce framework. Distributed parallel execution is very efficient — processing 10 million rows takes only about 5 minutes — but such high concurrency requires tuning the Solr server configuration, otherwise it throws server-unavailable exceptions like the one below (one possible mitigation is sketched after the Mapper code):
Error: org.apache.solr.common.SolrException: Server at http://192.168.1.10:8983/solr returned non ok status:503, message:Service Unavailable
The MapReduce driver:
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class SolrHBaseIndexer {

    private static void usage() {
        System.err.println("Arguments: <config file path> <start row> <stop row>");
        System.exit(1);
    }

    private static Configuration conf;

    public static void main(String[] args) throws IOException,
            InterruptedException, ClassNotFoundException, URISyntaxException {
        if (args.length == 0 || args.length > 3) {
            usage();
        }
        createHBaseConfiguration(args[0]);
        ConfigProperties tutorialProperties = new ConfigProperties(args[0]);
        String tbName = tutorialProperties.getHBTbName();
        String tbFamily = tutorialProperties.getHBFamily();

        Job job = new Job(conf, "SolrHBaseIndexer");
        job.setJarByClass(SolrHBaseIndexer.class);

        Scan scan = new Scan();
        if (args.length == 3) {
            scan.setStartRow(Bytes.toBytes(args[1]));
            scan.setStopRow(Bytes.toBytes(args[2]));
        }
        scan.addFamily(Bytes.toBytes(tbFamily));
        scan.setCaching(500); // raise the scanner cache to improve throughput
        scan.setCacheBlocks(false);

        // Create the map task
        TableMapReduceUtil.initTableMapperJob(tbName, scan,
                SolrHBaseIndexerMapper.class, null, null, job);
        // No job output is needed
        job.setOutputFormatClass(NullOutputFormat.class);
        // job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    /**
     * Read the HBase settings from the configuration file and apply them.
     *
     * @param propsLocation
     */
    private static void createHBaseConfiguration(String propsLocation) {
        ConfigProperties tutorialProperties = new ConfigProperties(
                propsLocation);
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", tutorialProperties.getZKQuorum());
        conf.set("hbase.zookeeper.property.clientPort",
                tutorialProperties.getZKPort());
        conf.set("hbase.master", tutorialProperties.getHBMaster());
        conf.set("hbase.rootdir", tutorialProperties.getHBrootDir());
        conf.set("solr.server", tutorialProperties.getSolrServer());
    }
}
The corresponding Mapper:
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrHBaseIndexerMapper extends TableMapper<Text, Text> {

    public void map(ImmutableBytesWritable key, Result hbaseResult,
            Context context) throws InterruptedException, IOException {
        Configuration conf = context.getConfiguration();

        HttpSolrServer solrServer = new HttpSolrServer(conf.get("solr.server"));
        solrServer.setDefaultMaxConnectionsPerHost(100);
        solrServer.setMaxTotalConnections(1000);
        solrServer.setSoTimeout(20000);
        solrServer.setConnectionTimeout(20000);

        SolrInputDocument solrDoc = new SolrInputDocument();
        try {
            solrDoc.addField("rowkey", new String(hbaseResult.getRow()));
            for (KeyValue rowQualifierAndValue : hbaseResult.list()) {
                String fieldName = new String(
                        rowQualifierAndValue.getQualifier());
                String fieldValue = new String(rowQualifierAndValue.getValue());
                if (fieldName.equalsIgnoreCase("time")
                        || fieldName.equalsIgnoreCase("tebid")
                        || fieldName.equalsIgnoreCase("tetid")
                        || fieldName.equalsIgnoreCase("puid")
                        || fieldName.equalsIgnoreCase("mgcvid")
                        || fieldName.equalsIgnoreCase("mtcvid")
                        || fieldName.equalsIgnoreCase("smaid")
                        || fieldName.equalsIgnoreCase("mtlkid")) {
                    solrDoc.addField(fieldName, fieldValue);
                }
            }
            solrServer.add(solrDoc);
            solrServer.commit(true, true, true);
        } catch (SolrServerException e) {
            System.err.println("Failed to update the Solr index for row: "
                    + new String(hbaseResult.getRow()));
        }
    }
}
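As for the 503 error mentioned earlier: a likely contributor is that this Mapper creates a Solr client and issues a commit for every single row. One possible mitigation (an assumption on my part, not something verified in the original test) is to create the client once per map task, buffer documents, and commit only in cleanup(); a sketch:

package com.ultrapower.hbase.solrhbase;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchingSolrHBaseIndexerMapper extends TableMapper<Text, Text> {

    private HttpSolrServer solrServer;
    private final List<SolrInputDocument> buffer = new ArrayList<SolrInputDocument>();

    @Override
    protected void setup(Context context) {
        // Create the Solr client once per map task instead of once per row
        solrServer = new HttpSolrServer(context.getConfiguration().get("solr.server"));
    }

    @Override
    public void map(ImmutableBytesWritable key, Result hbaseResult, Context context)
            throws IOException, InterruptedException {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("rowkey", new String(hbaseResult.getRow()));
        // ... add the filter fields here exactly as in the original Mapper ...
        buffer.add(doc);
        if (buffer.size() >= 500) { // 500 is a made-up batch size
            flush();
        }
    }

    private void flush() throws IOException {
        try {
            if (!buffer.isEmpty()) {
                solrServer.add(buffer); // one HTTP request for the whole batch
                buffer.clear();
            }
        } catch (SolrServerException e) {
            throw new IOException(e);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        flush();
        try {
            solrServer.commit(); // a single commit per map task instead of one per row
        } catch (SolrServerException e) {
            throw new IOException(e);
        }
        solrServer.shutdown();
    }
}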
A helper class that reads the parameter configuration file:
package com.ultrapower.hbase.solrhbase;

import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class ConfigProperties {

    private static Properties props;
    private String HBASE_ZOOKEEPER_QUORUM;
    private String HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT;
    private String HBASE_MASTER;
    private String HBASE_ROOTDIR;
    private String DFS_NAME_DIR;
    private String DFS_DATA_DIR;
    private String FS_DEFAULT_NAME;
    private String SOLR_SERVER;        // Solr server address
    private String HBASE_TABLE_NAME;   // name of the HBase table to index in Solr
    private String HBASE_TABLE_FAMILY; // column family of the HBase table

    public ConfigProperties(String propLocation) {
        props = new Properties();
        try {
            File file = new File(propLocation);
            System.out.println("Loading configuration from: " + file.getAbsolutePath());
            FileReader is = new FileReader(file);
            props.load(is);

            HBASE_ZOOKEEPER_QUORUM = props.getProperty("HBASE_ZOOKEEPER_QUORUM");
            HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT = props.getProperty("HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT");
            HBASE_MASTER = props.getProperty("HBASE_MASTER");
            HBASE_ROOTDIR = props.getProperty("HBASE_ROOTDIR");
            DFS_NAME_DIR = props.getProperty("DFS_NAME_DIR");
            DFS_DATA_DIR = props.getProperty("DFS_DATA_DIR");
            FS_DEFAULT_NAME = props.getProperty("FS_DEFAULT_NAME");
            SOLR_SERVER = props.getProperty("SOLR_SERVER");
            HBASE_TABLE_NAME = props.getProperty("HBASE_TABLE_NAME");
            HBASE_TABLE_FAMILY = props.getProperty("HBASE_TABLE_FAMILY");
        } catch (IOException e) {
            throw new RuntimeException("Failed to load the configuration file");
        } catch (NullPointerException e) {
            throw new RuntimeException("Configuration file not found");
        }
    }

    public String getZKQuorum() { return HBASE_ZOOKEEPER_QUORUM; }

    public String getZKPort() { return HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT; }

    public String getHBMaster() { return HBASE_MASTER; }

    public String getHBrootDir() { return HBASE_ROOTDIR; }

    public String getDFSnameDir() { return DFS_NAME_DIR; }

    public String getDFSdataDir() { return DFS_DATA_DIR; }

    public String getFSdefaultName() { return FS_DEFAULT_NAME; }

    public String getSolrServer() { return SOLR_SERVER; }

    public String getHBTbName() { return HBASE_TABLE_NAME; }

    public String getHBFamily() { return HBASE_TABLE_FAMILY; }
}
The parameter configuration file "config.properties":
HBASE_ZOOKEEPER_QUORUM=slave-1,slave-2,slave-3,slave-4,slave-5
HBASE_ZOOKEEPER_PROPERTY_CLIENT_PORT=2181
HBASE_MASTER=master-1:60000
HBASE_ROOTDIR=hdfs:///hbase
DFS_NAME_DIR=/opt/data/dfs/name
DFS_DATA_DIR=/opt/data/d0/dfs2/data
FS_DEFAULT_NAME=hdfs://192.168.1.10:9000
SOLR_SERVER=http://192.168.1.10:8983/solr
HBASE_TABLE_NAME=hb_app_m_user_te
HBASE_TABLE_FAMILY=d
The Solr index can also be operated on through plain web requests.
Query:
http://192.168.1.10:8983/solr/select?q=(time:201307 AND tetid:1 AND mgcvid:101 AND smaid:101 AND puid:102)
Delete the entire index:
http://192.168.1.10:8983/solr/update/?stream.body=<delete><query>*:*</query></delete>&stream.contentType=text/xml;charset=utf-8&commit=true
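The same full-index delete can also be issued from SolrJ; a minimal sketch, assuming the same server address as above:

package com.ultrapower.hbase.solrhbase;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class DeleteAllIndexes {

    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://192.168.1.10:8983/solr");
        server.deleteByQuery("*:*"); // matches and removes every document in the index
        server.commit();
        server.shutdown();
    }
}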
Querying HBase data from a Java client with the help of Solr:
package com.ultrapower.hbase.solrhbase;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

public class QueryData {

    /**
     * @param args
     * @throws SolrServerException
     * @throws IOException
     */
    public static void main(String[] args) throws SolrServerException, IOException {
        final Configuration conf;
        conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "hb_app_m_user_te");
        Get get = null;
        List<Get> list = new ArrayList<Get>();

        String url = "http://192.168.1.10:8983/solr";
        SolrServer server = new HttpSolrServer(url);
        SolrQuery query = new SolrQuery("time:201307 AND tetid:1 AND mgcvid:101 AND smaid:101 AND puid:102");
        query.setStart(0);  // start offset, used for paging
        query.setRows(10);  // number of rows to return, used for paging
        QueryResponse response = server.query(query);
        SolrDocumentList docs = response.getResults();
        System.out.println("Documents found: " + docs.getNumFound()); // the total row count is also readily available
        System.out.println("Query time: " + response.getQTime());

        for (SolrDocument doc : docs) {
            get = new Get(Bytes.toBytes((String) doc.getFieldValue("rowkey")));
            list.add(get);
        }
        Result[] res = table.get(list);

        byte[] bt1 = null;
        byte[] bt2 = null;
        byte[] bt3 = null;
        byte[] bt4 = null;
        String str1 = null;
        String str2 = null;
        String str3 = null;
        String str4 = null;
        for (Result rs : res) {
            bt1 = rs.getValue("d".getBytes(), "3mpon".getBytes());
            bt2 = rs.getValue("d".getBytes(), "3mponid".getBytes());
            bt3 = rs.getValue("d".getBytes(), "amarpu".getBytes());
            bt4 = rs.getValue("d".getBytes(), "amarpuid".getBytes());
            // Calling new String() on a null value would throw an exception, so check first
            if (bt1 != null && bt1.length > 0) { str1 = new String(bt1); } else { str1 = "no data"; }
            if (bt2 != null && bt2.length > 0) { str2 = new String(bt2); } else { str2 = "no data"; }
            if (bt3 != null && bt3.length > 0) { str3 = new String(bt3); } else { str3 = "no data"; }
            if (bt4 != null && bt4.length > 0) { str4 = new String(bt4); } else { str4 = "no data"; }
            System.out.print(new String(rs.getRow()) + " ");
            System.out.print(str1 + "|");
            System.out.print(str2 + "|");
            System.out.print(str3 + "|");
            System.out.println(str4 + "|");
        }
        table.close();
    }
}
Testing shows that, combined with a Solr index, multi-condition queries against HBase work very well, and two other HBase pain points are solved at the same time: paged queries and counting the total number of matching rows.
Most real-world scenarios are paged queries, and a single page returns very little data, so this scheme can easily achieve millisecond-level real-time response on the front-end pages (see the paging sketch below); for bulk data interaction such as data export, efficiency is also quite high — 100,000 rows take only about 10 seconds.
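As a minimal sketch of how such a paged query might be parameterized (page and pageSize are hypothetical names, and the query string is shortened from the QueryData example above):

package com.ultrapower.hbase.solrhbase;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PagedQuerySketch {

    public static void main(String[] args) throws SolrServerException {
        int page = 3;      // page number requested by the front end (1-based, hypothetical value)
        int pageSize = 20; // 20 rows per page, as in the test above

        SolrServer server = new HttpSolrServer("http://192.168.1.10:8983/solr");
        SolrQuery query = new SolrQuery("time:201307 AND tetid:1");
        query.setStart((page - 1) * pageSize); // offset of the first row of the requested page
        query.setRows(pageSize);               // number of rowkeys to return for this page

        QueryResponse response = server.query(query);
        long total = response.getResults().getNumFound(); // total number of matching rows
        long totalPages = (total + pageSize - 1) / pageSize;
        System.out.println("total rows: " + total + ", total pages: " + totalPages);
        // The returned rowkeys would then be turned into HBase Gets, exactly as in QueryData above
        server.shutdown();
    }
}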
Moreover, if Solr is actually put into production, both the Solr side and the HBase side can be tuned further — for example by building a Solr cluster, or even by adopting SolrCloud for distributed indexing.
In short, HBase's innate inability to filter on multiple conditions can be compensated quite well with Solr's help. Small wonder that internet companies such as Newegg, Gome, and Suning, as well as many game companies, use Solr to power fast queries.
----end
Link to this article: http://www.cnblogs.com/chenz/articles/3229997.html
Author: chenzheng
Contact: vinkeychen@gmail.com