Clients calling the batch query interface against the Solr core have found the response time somewhat slow. The interface currently executes each query sequentially and only then aggregates the results for the caller, so the plan is to refactor the internal implementation with a thread pool.
First, the thread pool that handles the query requests:

```java
private ExecutorService executor = Executors.newCachedThreadPool(); // thread pool for handling query requests
```
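One thing to keep in mind with newCachedThreadPool is that it puts no upper bound on the number of threads, so a very large idList will spawn just as many threads. If the batch size is unpredictable, a bounded pool is a safer starting point. The following is only a sketch using the standard java.util.concurrent classes; the pool sizes and queue capacity are assumed values, not ones from the original code:

```java
// Bounded alternative (all sizes below are illustrative assumptions);
// requires java.util.concurrent.{ThreadPoolExecutor, LinkedBlockingQueue, TimeUnit}.
private ExecutorService executor = new ThreadPoolExecutor(
        8,                                           // core pool size (assumed)
        32,                                          // maximum pool size (assumed)
        60L, TimeUnit.SECONDS,                       // idle non-core threads are reclaimed after 60s
        new LinkedBlockingQueue<Runnable>(1000),     // bounded work queue (assumed capacity)
        new ThreadPoolExecutor.CallerRunsPolicy());  // when saturated, the submitting thread runs the task itself
```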
Next, the main-thread method that fans the queries out and gathers the results:

```java
// Batch query entry point (method name inferred from the Callable below; the original signature was truncated).
public List<Map<String, String>> getEntityList(String entityCode, List<Long> idList) {
    List<Map<String, String>> finalResult = null;
    if (idList == null || idList.size() == 0 || StringUtil.isBlank(entityCode)) { // parameter validation
        return finalResult;
    }
    finalResult = new ArrayList<Map<String, String>>();
    List<Future<Map<String, String>>> futureList = new ArrayList<Future<Map<String, String>>>();
    int threadNum = idList.size(); // number of query sub-tasks, one per id
    for (int i = 0; i < threadNum; i++) {
        Long itemId = idList.get(i);
        Future<Map<String, String>> future = executor.submit(new QueryCallable(entityCode, itemId));
        futureList.add(future);
    }
    for (Future<Map<String, String>> future : futureList) {
        Map<String, String> threadResult = null;
        try {
            threadResult = future.get();
        } catch (Exception e) {
            threadResult = null;
        }
        if (null != threadResult && threadResult.size() > 0) { // keep only non-empty results
            finalResult.add(threadResult);
        }
    }
    return finalResult;
}
```
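One detail worth noting in the loop above: future.get() with no timeout blocks for as long as the slowest sub-query takes. If a per-query upper bound is wanted, the overloaded Future.get(timeout, unit) can be used instead; a minimal sketch, where the 3-second limit is an assumed value (it also needs java.util.concurrent.TimeUnit and TimeoutException):

```java
try {
    // Wait at most 3 seconds per sub-query (illustrative limit) so one slow
    // Solr query cannot stall the whole batch.
    threadResult = future.get(3, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    future.cancel(true); // interrupt the straggler and move on
    threadResult = null;
} catch (Exception e) {
    threadResult = null;
}
```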
Finally, the Callable that actually handles each individual query request (it is an inner class of QueryServiceImpl, which is why it can call QueryServiceImpl.this.getEntity directly):

```java
public class QueryCallable implements Callable<Map<String, String>> {
    private String entityCode = "";
    private Long itemId = 0L;

    public QueryCallable(String entityCode, Long itemId) {
        this.entityCode = entityCode;
        this.itemId = itemId;
    }

    public Map<String, String> call() throws Exception {
        Map<String, String> entityMap = null;
        try {
            entityMap = QueryServiceImpl.this.getEntity(entityCode, itemId); // fetch the basic info from HBase first
        } catch (Exception e) {
            entityMap = null;
        }
        return entityMap;
    }
}
```
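For completeness, a caller-side sketch. The method name getEntityList follows the signature shown above; the entity code, the ids, and the no-arg construction of the service are purely illustrative assumptions:

```java
// Hypothetical caller: fire one batch query for several item ids.
QueryServiceImpl service = new QueryServiceImpl();                        // assumes a no-arg constructor
List<Long> ids = Arrays.asList(1001L, 1002L, 1003L);                      // illustrative ids
List<Map<String, String>> results = service.getEntityList("item", ids);  // "item" is an assumed entityCode
System.out.println("non-empty results: " + (results == null ? 0 : results.size()));
```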
Using a thread pool avoids the system overhead of repeatedly creating and destroying threads, and because the pool's worker threads are reused, it makes much better use of existing system resources and increases overall throughput.
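Since the pool is a long-lived field of the service, it should also be shut down explicitly when the service itself is taken out of service. A minimal sketch, where the destroy() hook name and the 5-second grace period are assumptions:

```java
// Called from the service's shutdown hook (hook name is an assumption).
public void destroy() {
    executor.shutdown();                                        // stop accepting new query tasks
    try {
        if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {  // bounded grace period (assumed value)
            executor.shutdownNow();                             // interrupt tasks that are still running
        }
    } catch (InterruptedException e) {
        executor.shutdownNow();
        Thread.currentThread().interrupt();                     // preserve the interrupt status
    }
}
```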
Separately, I also tried another way of merging Solr indexes today: going through the underlying Lucene API directly instead of submitting an HTTP request. The concrete steps are as follows:
```
java -cp lucene-core-3.4.0.jar:lucene-misc-3.4.0.jar org/apache/lucene/misc/IndexMergeTool ./newindex ./app1/solr/data/index ./app2/solr/data/index
```

Here ./newindex is the target directory that receives the merged index, and the two ./appN/solr/data/index paths are the source indexes.
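The same merge can also be done programmatically, which is essentially what IndexMergeTool does internally: open the target directory with an IndexWriter and call addIndexes on the source directories. A minimal sketch against the Lucene 3.4 API, reusing the paths from the command above (the class name MergeIndexes is just for illustration):

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeIndexes {
    public static void main(String[] args) throws Exception {
        // Target directory that will hold the merged index.
        Directory merged = FSDirectory.open(new File("./newindex"));
        // Source index directories to merge.
        Directory index1 = FSDirectory.open(new File("./app1/solr/data/index"));
        Directory index2 = FSDirectory.open(new File("./app2/solr/data/index"));

        IndexWriterConfig config = new IndexWriterConfig(
                Version.LUCENE_34, new StandardAnalyzer(Version.LUCENE_34));
        IndexWriter writer = new IndexWriter(merged, config);
        try {
            // Copy all segments from the source indexes into the target index.
            writer.addIndexes(index1, index2);
        } finally {
            writer.close();
        }
    }
}
```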