An annotated walkthrough of the elasticsearch configuration file

##################### ElasticSearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/reference/setup/installation.html>.
#
# ElasticSearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of a certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].
#
# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
# node.rack: ${RACK_ENV_VAR}
#
# See <http://elasticsearch.org/guide/reference/setup/configuration.html>
# for information on supported formats and syntax for the configuration file.
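As a concrete sketch of the `${...}` substitution described above, values can be supplied at startup instead of being hard-coded. `ES_CLUSTER_NAME` below is a hypothetical variable name chosen for illustration; `RACK_ENV_VAR` follows the example in this file:

```yaml
# elasticsearch.yml -- values resolved from the environment at startup
cluster.name: ${ES_CLUSTER_NAME}   # e.g. export ES_CLUSTER_NAME=logging-prod
node.rack: ${RACK_ENV_VAR}         # e.g. export RACK_ENV_VAR=rack314
```

Exporting the variables in the service's init or startup script keeps one config file usable across environments.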
################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
# cluster.name: elasticsearch
#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
# node.name: "Franz Kafka"
#
# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
# node.master: true
#
# Allow this node to store data (enabled by default):
#
# node.data: true
#
# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#
# node.master: false
# node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#
# node.master: true
# node.data: false
#
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#
# node.master: false
# node.data: false
# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools
# such as <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.
#
# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value. Here is an example:
#
# node.rack: rack314


# By default, multiple nodes are allowed to start from the same installation
# location. To disable it, set the following:
#
# node.max_local_storage_nodes: 1

#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/reference/index-modules/> and
# <http://elasticsearch.org/guide/reference/api/admin-indices-create-index.html>
# for more information.
#
# Set the number of shards (splits) of an index (5 by default):
#
# index.number_of_shards: 5
#
# Set the number of replicas (additional copies) of an index (1 by default):
#
# index.number_of_replicas: 1
#
# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
# index.number_of_shards: 1
# index.number_of_replicas: 0
# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows to
#    _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
#    cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
#
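The reason "number_of_shards" is a one-time setting can be sketched in a few lines: a document is routed to a shard by hashing its routing value modulo the shard count, so a different shard count sends the same document somewhere else and previously indexed data can no longer be located. (The actual hash function Elasticsearch uses differs; `zlib.crc32` below is a stand-in to illustrate the idea.)

```python
import zlib

def shard_for(doc_id: str, number_of_shards: int) -> int:
    """Route a document to a shard: hash of the routing value modulo shard count."""
    return zlib.crc32(doc_id.encode("utf-8")) % number_of_shards

docs = ["user-1", "user-2", "user-3", "user-4"]

# With 5 shards, every document maps to one stable shard ...
placement_before = {d: shard_for(d, 5) for d in docs}

# ... but under a different shard count most documents would map elsewhere,
# which is why changing the shard count requires reindexing everything.
placement_after = {d: shard_for(d, 6) for d in docs}
```

Replicas, by contrast, are full copies of existing shards, so they can be added or removed without re-routing anything.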
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# ElasticSearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.
#
# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.
#################################### Paths ####################################
# Path to directory containing configuration (this file and logging.yml):
#
# path.conf: /path/to/conf
#
# Path to directory where to store index data allocated for this node.
#
# path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
# path.data: /path/to/data1,/path/to/data2
#
# Path to temporary files:
#
# path.work: /path/to/work
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# Path to where plugins are installed:
#
# path.plugins: /path/to/plugins
#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#
# plugin.mandatory: mapper-attachments,lang-groovy
################################### Memory ####################################

# ElasticSearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#
# bootstrap.mlockall: true
#
# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for ElasticSearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the ElasticSearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.
#
############################## Network and HTTP ###############################
# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (The range means that if the port is busy, it will automatically
# try the next port.)
#
# Set the bind address specifically (IPv4 or IPv6):
#
# network.bind_host: 192.168.0.1
#
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
# network.publish_host: 192.168.0.1
#
# Set both 'bind_host' and 'publish_host':
#
# network.host: 192.168.0.1
#
# Set a custom port for the node to node communication (9300 by default):
#
# transport.tcp.port: 9300
#
# Enable compression for all communication between nodes (disabled by default):
#
# transport.tcp.compress: true
#
# Set a custom port to listen for HTTP traffic:
#
# http.port: 9200
#
# Set a custom allowed content length:
#
# http.max_content_length: 100mb
#
# Disable HTTP completely:
#
# http.enabled: false
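Putting the settings above together, a node pinned to one interface with explicit ports might look like the fragment below (a sketch only; 192.168.0.1 is the same placeholder address used throughout this file):

```yaml
network.host: 192.168.0.1        # bind and publish on this interface
transport.tcp.port: 9300         # node-to-node transport
transport.tcp.compress: true     # compress inter-node traffic
http.port: 9200                  # REST API
http.max_content_length: 100mb   # reject oversized request bodies
```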
################################### Gateway ###################################
# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.
#
# There are several types of gateway implementations. For more information,
# see <http://elasticsearch.org/guide/reference/modules/gateway>.
#
# The default gateway type is the "local" gateway (recommended):
#
# gateway.type: local
#
# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).
#
# Allow recovery process after N nodes in a cluster are up:
#
# gateway.recover_after_nodes: 1
#
# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#
# gateway.recover_after_time: 5m
#
# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
# gateway.expected_nodes: 2
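As a worked example of how the three settings cooperate, consider a hypothetical three-node cluster: recovery may begin once two nodes are up, but only after a five-minute grace period, unless all three expected nodes have already joined, in which case it starts immediately:

```yaml
gateway.recover_after_nodes: 2   # never start recovery with fewer than 2 nodes up
gateway.recover_after_time: 5m   # once 2 are up, wait up to 5 minutes for the rest
gateway.expected_nodes: 3        # ... or start at once when all 3 have joined
```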
############################# Recovery Throttling #############################
# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.
#
# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#
# cluster.routing.allocation.node_initial_primaries_recoveries: 4
#
# 2. During adding/removing nodes, rebalancing, etc
#
# cluster.routing.allocation.node_concurrent_recoveries: 2
#
# Set to throttle throughput when recovering (eg. 100mb, by default unlimited):
#
# indices.recovery.max_size_per_sec: 0
#
# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#
# indices.recovery.concurrent_streams: 5
################################## Discovery ##################################
# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.
#
# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. Set this option to a higher value (2-4)
# for large clusters (>3 nodes):
#
# discovery.zen.minimum_master_nodes: 1
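A commonly cited rule of thumb (general Zen discovery guidance, not stated in this file) is to set this to a quorum of master-eligible nodes, (N / 2) + 1, so that a network partition cannot elect two masters at once. For example, with three master-eligible nodes:

```yaml
# 3 master-eligible nodes -> quorum = (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```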
# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
# discovery.zen.ping.timeout: 3s
#
# See <http://elasticsearch.org/guide/reference/modules/discovery/zen.html>
# for more information.
#
# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
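The two steps above are typically applied together: multicast switched off and a fixed seed list supplied. A sketch for a hypothetical three-node cluster (the host names are placeholders, following the formats shown above):

```yaml
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-node-1", "es-node-2:9300", "es-node-3[9300-9400]"]
```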
# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# See <http://elasticsearch.org/guide/reference/modules/discovery/ec2.html>
# for more information.
#
# See <http://elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html>
# for a step-by-step tutorial.

################################## Slow Log ###################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms

################################## GC Logging #################################

#monitor.jvm.gc.ParNew.warn: 1000ms
#monitor.jvm.gc.ParNew.info: 700ms
#monitor.jvm.gc.ParNew.debug: 400ms

#monitor.jvm.gc.ConcurrentMarkSweep.warn: 10s
#monitor.jvm.gc.ConcurrentMarkSweep.info: 5s
#monitor.jvm.gc.ConcurrentMarkSweep.debug: 2s
