When running a paged query against Elasticsearch, the following error was thrown:
```
QueryPhaseExecutionException[Result window is too large, from + size must be less than or equal to: [10000] but was [666000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.]
```
The message says that from + size queries are capped at 10,000 records, and that the index.max_result_window setting has to be changed to go beyond that.
A look at the official Elasticsearch documentation:
index.max_result_window — The maximum value of from + size for searches to this index. Defaults to 10000. Search requests take heap memory and time proportional to from + size and this limits that memory. See Scroll or Search After for a more efficient alternative to raising this.
In other words, the traditional (from + size) approach costs heap memory and time proportional to the query depth, which is why the limit exists.
The problem is that the scroll approach simply doesn't work for backend pagination.
Leaving aside the unfriendly fact that scroll can only flip through results one page at a time, once the page count grows, the scrollIds also become hard to manage.
So I kept tinkering with traditional from + size pagination.
Searching online for ways to set max_result_window turned up only curl or HTTP examples.
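For completeness, the HTTP version of the change looks like this (a sketch: it assumes Elasticsearch is listening on localhost:9200, my_index is a placeholder index name, and 666000 just matches the failing request above):

```shell
curl -X PUT "localhost:9200/my_index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index" : { "max_result_window" : 666000 } }'
```

Since the setting is index-level, this has to be applied to every index the query targets.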
Then I happened upon this post: https://blog.csdn.net/tzconn/article/details/83309516
Combined with what I remembered from browsing the Elastic Chinese community — that this setting is index-level — I gave it a try in code, and it actually worked.
The Java code is as follows:
```java
public SearchResponse search(String logIndex, String logType, QueryBuilder query,
                             List<AggregationBuilder> agg, int page, int size) {
    page = page > 0 ? page - 1 : page; // convert to a zero-based page index
    TransportClient client = getClient();
    SearchRequestBuilder searchRequestBuilder = client.prepareSearch(logIndex.split(","))
            .setTypes(logType.split(","))
            .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
            .addSort("createTime", SortOrder.DESC);
    if (agg != null && !agg.isEmpty()) {
        for (int i = 0; i < agg.size(); i++) {
            searchRequestBuilder.addAggregation(agg.get(i));
        }
    }
    updateIndexs(client, logIndex, page, size);
    SearchResponse searchResponse = searchRequestBuilder
            .setQuery(query)
            .setFrom(page * size)
            .setSize(size)
            .get();
    return searchResponse;
}

// Update the index's max_result_window setting when the requested page
// would exceed the 10,000-record default.
private boolean updateIndexs(TransportClient client, String indices, int from, int size) {
    int records = from * size + size; // "from" here is the zero-based page index
    if (records <= 10000) return true;
    UpdateSettingsResponse indexResponse = client.admin().indices()
            .prepareUpdateSettings(indices)
            .setSettings(Settings.builder()
                    .put("index.max_result_window", records)
                    .build()
            ).get();
    return indexResponse.isAcknowledged();
}
```
Done.
Of course, the weakness of this code is that every query reaching past 10,000 records triggers another settings update on the index, which makes the already slow from + size query slower still.
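One way to soften that downside (a sketch, not from the original post): remember the window size last applied and only issue the settings update when the requested page actually exceeds it. The arithmetic below mirrors the records calculation in updateIndexs; the class and method names are made up for illustration:

```java
// Hypothetical helper: decides whether a from + size query needs
// index.max_result_window raised, so redundant settings updates can be skipped.
public class MaxWindowHelper {
    static final int DEFAULT_MAX_RESULT_WINDOW = 10_000; // Elasticsearch default

    // Highest offset the query will touch: from + size, with from = page * size
    // (page is zero-based, as in the search() method above).
    static int requiredWindow(int page, int size) {
        return page * size + size;
    }

    // Only push a settings update when the query would actually exceed
    // the window the index is believed to have right now.
    static boolean needsUpdate(int currentWindow, int page, int size) {
        return requiredWindow(page, size) > currentWindow;
    }

    public static void main(String[] args) {
        // The failing request from the error above: from 665000, size 1000 -> 666000.
        System.out.println(requiredWindow(665, 1000));                         // 666000
        System.out.println(needsUpdate(DEFAULT_MAX_RESULT_WINDOW, 665, 1000)); // true
        // A shallow page stays under the default, so no update is needed.
        System.out.println(needsUpdate(DEFAULT_MAX_RESULT_WINDOW, 0, 100));    // false
    }
}
```

Caching the current window per index (e.g. in a map keyed by index name) would then let the update run once per threshold crossing instead of on every deep query.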