Spark Notes: Learning Configuration from the Official Docs (Part 1)

Reference: http://spark.apache.org/docs/latest/configuration.html

Spark provides three locations to configure the system:

  • Spark properties control most application parameters and can be set using a SparkConf object or through Java system properties.
  • Environment variables can be used to set per-machine settings, such as the IP address, through the conf/spark-env.sh script on each node.
  • Logging can be configured through log4j.properties.

Spark properties control most application settings and are configured separately for each application. These properties can be set directly on a SparkConf that is passed to your SparkContext. SparkConf lets you configure some of the common properties (such as the master URL and the application name), as well as arbitrary key-value pairs, through the set() method. For example, we can initialize an application with two threads as follows:

Note that we run with local[2], meaning two threads, which represents "minimal" parallelism and can help detect bugs that only exist when running in a distributed context.

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("CountingSheep")
    val sc = new SparkContext(conf)

Dynamically Loading Spark Properties

In some cases, you may want to avoid hard-coding certain configurations in a SparkConf, for example if you want to run the same application with different masters or different amounts of memory. Spark allows you to simply create an empty conf:

    val sc = new SparkContext(new SparkConf())

Then, you can supply configuration values at runtime:

    ./bin/spark-submit --name "My app" --master local[4] \
      --conf spark.eventLog.enabled=false \
      --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
      myApp.jar

The Spark shell and the spark-submit tool support two ways to load configurations dynamically. The first is command-line options such as --master, as shown above. spark-submit can accept any Spark property using the --conf flag, but uses special flags for properties that play a part in launching the Spark application. Running ./bin/spark-submit --help will show the entire list of these options.

bin/spark-submit will also read configuration options from conf/spark-defaults.conf, in which each line consists of a key and a value separated by whitespace. For example:

    spark.master            spark://5.6.7.8:7077
    spark.executor.memory   4g
    spark.eventLog.enabled  true
    spark.serializer        org.apache.spark.serializer.KryoSerializer

Any values specified as flags or in the properties file will be passed on to the application and merged with those specified through SparkConf. Properties set directly on the SparkConf take the highest precedence, then flags passed to spark-submit or spark-shell, then options in the spark-defaults.conf file. A few configuration keys have been renamed since earlier versions of Spark; in such cases the older key names are still accepted, but take lower precedence than any instance of the newer key.
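
To see how these three sources merge at runtime, you can read a property back from the running context. A minimal sketch (the memory values are made up for illustration):

    import org.apache.spark.{SparkConf, SparkContext}

    // Suppose spark-defaults.conf sets spark.executor.memory to 4g and
    // spark-submit also passes --conf spark.executor.memory=6g.
    val conf = new SparkConf()
      .setAppName("ConfPrecedenceDemo")
      .set("spark.executor.memory", "8g")   // set directly on SparkConf: highest precedence

    val sc = new SparkContext(conf)
    // Prints 8g: the SparkConf value overrides the flag and the defaults file.
    println(sc.getConf.get("spark.executor.memory"))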

Viewing Spark Properties

The application web UI at http://IP:4040 lists Spark properties in the "Environment" tab. This is a useful place to check that your properties have been set correctly. Note that only values explicitly specified through spark-defaults.conf, SparkConf, or on the command line will appear. For all other configuration properties, you can assume the default value is used.
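
The same check can be done from code instead of the web UI; a small sketch that prints only the explicitly set values, mirroring the Environment tab:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("ShowConf"))

    // getAll returns only the properties that were explicitly set via
    // spark-defaults.conf, the command line, or SparkConf; defaults are not listed.
    sc.getConf.getAll.sortBy(_._1).foreach { case (key, value) =>
      println(s"$key = $value")
    }

    // toDebugString produces the same information as a single multi-line string.
    println(sc.getConf.toDebugString)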

Available Properties

Most of the properties that control internal settings have reasonable default values. Some of the most common options are:

Application Properties

Property Name Default Meaning
spark.app.name (none) The name of your application. This will appear in the UI and in log data.
spark.driver.cores 1 Number of cores to use for the driver process, only in cluster mode.
spark.driver.maxResultSize 1g Limit of total size of serialized results of all partitions for each Spark action (e.g. collect). Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total size is above this limit. Having a high limit may cause out-of-memory errors in the driver (depends on spark.driver.memory and the memory overhead of objects in the JVM). Setting a proper limit can protect the driver against out-of-memory errors.
spark.driver.memory 1g Amount of memory to use for the driver process, i.e. where SparkContext is initialized (e.g. 1g, 2g). 
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, set it through the --driver-memory command line option or in your default properties file.
spark.executor.memory 1g Amount of memory to use per executor process (e.g. 2g, 8g).
spark.extraListeners (none) A comma-separated list of classes that implement SparkListener; when initializing SparkContext, instances of these classes will be created and registered with Spark's listener bus. If a class has a single-argument constructor that accepts a SparkConf, that constructor will be called; otherwise, a zero-argument constructor will be called. If no valid constructor can be found, the SparkContext creation will fail with an exception.
spark.local.dir /tmp Directory to use for "scratch" space in Spark, including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks. Note: In Spark 1.0 and later this is overridden by the SPARK_LOCAL_DIRS (Standalone, Mesos) or LOCAL_DIRS (YARN) environment variables set by the cluster manager.
spark.logConf false Logs the effective SparkConf as INFO when a SparkContext is started.
spark.master (none) The cluster manager to connect to. See the list of allowed master URLs.
spark.submit.deployMode (none) The deploy mode of the Spark driver program, either "client" or "cluster", which means to launch the driver program locally ("client") or remotely ("cluster") on one of the nodes inside the cluster.
spark.log.callerContext (none) Application information that will be written into the Yarn RM log/HDFS audit log when running on Yarn/HDFS. Its length depends on the Hadoop configuration hadoop.caller.context.max.size. It should be concise, and can typically have up to 50 characters.
spark.driver.supervise false If true, restarts the driver automatically if it fails with a non-zero exit status. Only has effect in Spark standalone mode or Mesos cluster deploy mode.
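
As a hedged sketch of how several of the application properties above might be set together (the values are examples, not tuning advice):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("CountingSheep")                 // spark.app.name
      .set("spark.driver.maxResultSize", "2g")     // abort jobs whose collected results exceed 2g
      .set("spark.executor.memory", "4g")          // memory per executor process
      .set("spark.logConf", "true")                // log the effective SparkConf at INFO on startup

    // In client mode spark.driver.memory cannot be set here, because the driver
    // JVM is already running; pass --driver-memory to spark-submit instead.
    val sc = new SparkContext(conf)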

Apart from these, the following properties are also available and may be useful in some situations:

Runtime Environment

Property Name Default Meaning
spark.driver.extraClassPath (none) Extra classpath entries to prepend to the classpath of the driver. 
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, set it through the --driver-class-path command line option or in your default properties file.
spark.driver.extraJavaOptions (none) A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. Note that it is illegal to set maximum heap size (-Xmx) settings with this option. Maximum heap size settings can be set with spark.driver.memory in cluster mode and through the --driver-memory command line option in client mode. 
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, set it through the --driver-java-options command line option or in your default properties file.
spark.driver.extraLibraryPath (none) Set a special library path to use when launching the driver JVM. 
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, set it through the --driver-library-path command line option or in your default properties file.
spark.driver.userClassPathFirst false (Experimental) Whether to give user-added jars precedence over Spark's own jars when loading classes in the driver. This feature can be used to mitigate conflicts between Spark's dependencies and user dependencies. It is currently an experimental feature and is used in cluster mode only.
spark.executor.extraClassPath (none) Extra classpath entries to prepend to the classpath of executors. This exists primarily for backwards-compatibility with older versions of Spark. Users typically should not need to set this option.
spark.executor.extraJavaOptions (none) A string of extra JVM options to pass to executors. For instance, GC settings or other logging. Note that it is illegal to set Spark properties or maximum heap size (-Xmx) settings with this option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file used with the spark-submit script. Maximum heap size settings can be set with spark.executor.memory.
spark.executor.extraLibraryPath (none) Set a special library path to use when launching the executor JVM.
spark.executor.logs.rolling.maxRetainedFiles (none) Sets the number of latest rolling log files that are going to be retained by the system. Older log files will be deleted. Disabled by default.
spark.executor.logs.rolling.enableCompression false Enable executor log compression. If it is enabled, the rolled executor logs will be compressed. Disabled by default.
spark.executor.logs.rolling.maxSize (none) Set the max size of the file in bytes by which the executor logs will be rolled over. Rolling is disabled by default. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs.
spark.executor.logs.rolling.strategy (none) Set the strategy of rolling of executor logs. By default it is disabled. It can be set to "time" (time-based rolling) or "size" (size-based rolling). For "time", use spark.executor.logs.rolling.time.interval to set the rolling interval. For "size", use spark.executor.logs.rolling.maxSize to set the maximum file size for rolling.
spark.executor.logs.rolling.time.interval daily Set the time interval by which the executor logs will be rolled over. Rolling is disabled by default. Valid values are daily, hourly, minutely, or any interval in seconds. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs.
spark.executor.userClassPathFirst false (Experimental) Same functionality as spark.driver.userClassPathFirst, but applied to executor instances.
spark.executorEnv.[EnvironmentVariableName] (none) Add the environment variable specified by EnvironmentVariableName to the Executor process. The user can specify multiple of these to set multiple environment variables.
spark.redaction.regex (?i)secret|password Regex to decide which Spark configuration properties and environment variables in driver and executor environments contain sensitive information. When this regex matches a property key or value, the value is redacted from the environment UI and various logs like YARN and event logs.
spark.python.profile false Enable profiling in Python workers; the profile result will show up via sc.show_profiles(), or it will be displayed before the driver exits. It can also be dumped to disk by sc.dump_profiles(path). If some of the profile results have been displayed manually, they will not be displayed automatically before the driver exits. By default pyspark.profiler.BasicProfiler will be used, but this can be overridden by passing a profiler class in as a parameter to the SparkContext constructor.
spark.python.profile.dump (none) The directory used to dump the profile result before the driver exits. The results will be dumped as separate files for each RDD. They can be loaded by ptats.Stats(). If this is specified, the profile result will not be displayed automatically.
spark.python.worker.memory 512m Amount of memory to use per python worker process during aggregation, in the same format as JVM memory strings (e.g. 512m, 2g). If the memory used during aggregation goes above this amount, it will spill the data to disk.
spark.python.worker.reuse true Whether to reuse Python workers. If yes, it will use a fixed number of Python workers and does not need to fork() a Python process for every task. This is very useful if there is a large broadcast, as the broadcast will then not need to be transferred from the JVM to a Python worker for every task.
spark.files   Comma-separated list of files to be placed in the working directory of each executor.
spark.submit.pyFiles   Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
spark.jars   Comma-separated list of local jars to include on the driver and executor classpaths.
spark.jars.packages   Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. The coordinates should be groupId:artifactId:version. If spark.jars.ivySettings is given, artifacts will be resolved according to the configuration in that file; otherwise artifacts will be searched for in the local maven repo, then maven central, and finally any additional remote repositories given by the command-line option --repositories. For more details, see Advanced Dependency Management.
spark.jars.excludes   Comma-separated list of groupId:artifactId to exclude while resolving the dependencies provided in spark.jars.packages, to avoid dependency conflicts.
spark.jars.ivy   Path to the Ivy user directory, used for the local Ivy cache and package files from spark.jars.packages. This will override the Ivy property ivy.default.ivy.user.dir, which defaults to ~/.ivy2.
spark.jars.ivySettings   Path to an Ivy settings file to customize resolution of jars specified using spark.jars.packages instead of the built-in defaults, such as maven central. Repositories given by the command-line option --repositories will also be included. Useful for allowing Spark to resolve artifacts from behind a firewall, e.g. via an in-house artifact server like Artifactory. Details on the settings file format can be found at http://ant.apache.org/ivy/history/latest-milestone/settings.html
spark.pyspark.driver.python   Python binary executable to use for PySpark in the driver. (default is spark.pyspark.python)
spark.pyspark.python   Python binary executable to use for PySpark in both driver and executors.
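
A short illustrative sketch for the runtime-environment settings above (all values are placeholders). Executor-side options can go on the SparkConf, while the driver-side equivalents must be passed to spark-submit in client mode, as the notes in the table explain:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("RuntimeEnvDemo")
      // Extra executor JVM options: GC logging only, never -Xmx or Spark properties.
      .set("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps")
      // Placeholder environment variable visible inside every executor process.
      .set("spark.executorEnv.DATA_DIR", "/data/scratch")
      // Reuse Python workers (only relevant for PySpark jobs, shown for completeness).
      .set("spark.python.worker.reuse", "true")

    val sc = new SparkContext(conf)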

Shuffle Behavior

Property Name Default Meaning
spark.reducer.maxSizeInFlight 48m Maximum size of map outputs to fetch simultaneously from each reduce task. Since each output requires us to create a buffer to receive it, this represents a fixed memory overhead per reduce task, so keep it small unless you have a large amount of memory.
spark.reducer.maxReqsInFlight Int.MaxValue This configuration limits the number of remote requests to fetch blocks at any given point. When the number of hosts in the cluster increase, it might lead to very large number of in-bound connections to one or more nodes, causing the workers to fail under load. By allowing it to limit the number of fetch requests, this scenario can be mitigated.
spark.shuffle.compress true Whether to compress map output files. Generally a good idea. Compression will use spark.io.compression.codec.
spark.shuffle.file.buffer 32k Size of the in-memory buffer for each shuffle file output stream. These buffers reduce the number of disk seeks and system calls made in creating intermediate shuffle files.
spark.shuffle.io.maxRetries 3 (Netty only) Fetches that fail due to IO-related exceptions are automatically retried if this is set to a non-zero value. This retry logic helps stabilize large shuffles in the face of long GC pauses or transient network connectivity issues.
spark.shuffle.io.numConnectionsPerPeer 1 (Netty only) Connections between hosts are reused in order to reduce connection buildup for large clusters. For clusters with many hard disks and few hosts, this may result in insufficient concurrency to saturate all disks, and so users may consider increasing this value.
spark.shuffle.io.preferDirectBufs true (Netty only) Off-heap buffers are used to reduce garbage collection during shuffle and cache block transfer. For environments where off-heap memory is tightly limited, users may wish to turn this off to force all allocations from Netty to be on-heap.
spark.shuffle.io.retryWait 5s (Netty only) How long to wait between retries of fetches. The maximum delay caused by retrying is 15 seconds by default, calculated as maxRetries * retryWait.
spark.shuffle.service.enabled false Enables the external shuffle service. This service preserves the shuffle files written by executors so the executors can be safely removed. This must be enabled if spark.dynamicAllocation.enabled is "true". The external shuffle service must be set up in order to enable it. See dynamic allocation configuration and setup documentation for more information.
spark.shuffle.service.port 7337 Port on which the external shuffle service will run.
spark.shuffle.service.index.cache.entries 1024 Max number of entries to keep in the index cache of the shuffle service.
spark.shuffle.sort.bypassMergeThreshold 200 (Advanced) In the sort-based shuffle manager, avoid merge-sorting data if there is no map-side aggregation and there are at most this many reduce partitions.
spark.shuffle.spill.compress true Whether to compress data spilled during shuffles. Compression will use spark.io.compression.codec.
spark.shuffle.accurateBlockThreshold 100 * 1024 * 1024 When we compress the size of shuffle blocks in HighlyCompressedMapStatus, we will record the size accurately if it's above this config. This helps to prevent OOM by avoiding underestimating shuffle block size when fetch shuffle blocks.
spark.io.encryption.enabled false Enable IO encryption. Currently supported by all modes except Mesos. It's recommended that RPC encryption be enabled when using this feature.
spark.io.encryption.keySizeBits 128 IO encryption key size in bits. Supported values are 128, 192 and 256.
spark.io.encryption.keygen.algorithm HmacSHA1 The algorithm to use when generating the IO encryption key. The supported algorithms are described in the KeyGenerator section of the Java Cryptography Architecture Standard Algorithm Name Documentation.
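
A hedged example of shuffle tuning based on the table above; the sizes are illustrative starting points rather than recommendations, and spark.shuffle.service.enabled additionally requires the external shuffle service to be running on each worker:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ShuffleTuningDemo")
      .set("spark.reducer.maxSizeInFlight", "96m")   // larger fetch buffer per reduce task
      .set("spark.shuffle.file.buffer", "64k")       // bigger in-memory buffer per shuffle output stream
      .set("spark.shuffle.io.maxRetries", "6")       // keep retrying fetches through long GC pauses
      .set("spark.shuffle.compress", "true")         // compress map output files (the default)

    val sc = new SparkContext(conf)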

Spark UI

Property Name Default Meaning
spark.eventLog.compress false Whether to compress logged events, if spark.eventLog.enabled is true. Compression will use spark.io.compression.codec.
spark.eventLog.dir file:///tmp/spark-events Base directory in which Spark events are logged, if spark.eventLog.enabled is true. Within this base directory, Spark creates a sub-directory for each application, and logs the events specific to the application in this directory. Users may want to set this to a unified location like an HDFS directory so history files can be read by the history server.
spark.eventLog.enabled false Whether to log Spark events, useful for reconstructing the Web UI after the application has finished.
spark.ui.enabled true Whether to run the web UI for the Spark application.
spark.ui.killEnabled true Allows jobs and stages to be killed from the web UI.
spark.ui.port 4040 Port for your application's dashboard, which shows memory and workload data.
spark.ui.retainedJobs 1000 How many jobs the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
spark.ui.retainedStages 1000 How many stages the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
spark.ui.retainedTasks 100000 How many tasks the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
spark.ui.reverseProxy false Enable running Spark Master as reverse proxy for worker and application UIs. In this mode, Spark master will reverse proxy the worker and application UIs to enable access without requiring direct access to their hosts. Use it with caution, as worker and application UI will not be accessible directly, you will only be able to access them through spark master/proxy public URL. This setting affects all the workers and application UIs running in the cluster and must be set on all the workers, drivers and masters.
spark.ui.reverseProxyUrl   This is the URL where your proxy is running. This URL is for proxy which is running in front of Spark Master. This is useful when running proxy for authentication e.g. OAuth proxy. Make sure this is a complete URL including scheme (http/https) and port to reach your proxy.
spark.ui.showConsoleProgress true Show the progress bar in the console. The progress bar shows the progress of stages that run for longer than 500ms. If multiple stages run at the same time, multiple progress bars will be displayed on the same line.
spark.worker.ui.retainedExecutors 1000 How many finished executors the Spark UI and status APIs remember before garbage collecting.
spark.worker.ui.retainedDrivers 1000 How many finished drivers the Spark UI and status APIs remember before garbage collecting.
spark.sql.ui.retainedExecutions 1000 How many finished executions the Spark UI and status APIs remember before garbage collecting.
spark.streaming.ui.retainedBatches 1000 How many finished batches the Spark UI and status APIs remember before garbage collecting.
spark.ui.retainedDeadExecutors 100 How many dead executors the Spark UI and status APIs remember before garbage collecting.
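
For example, to make a finished application viewable in the history server, event logging can be turned on. A minimal sketch, assuming an HDFS directory such as hdfs:///spark-events exists and is writable (that path is an assumption, not part of the table):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("EventLogDemo")
      .set("spark.eventLog.enabled", "true")              // record events for the history server
      .set("spark.eventLog.dir", "hdfs:///spark-events")  // shared location readable by the history server
      .set("spark.eventLog.compress", "true")             // compress logged events
      .set("spark.ui.retainedJobs", "500")                // keep fewer finished jobs in the live UI

    val sc = new SparkContext(conf)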

Compression and Serialization

Property Name Default Meaning
spark.broadcast.compress true Whether to compress broadcast variables before sending them. Generally a good idea. Compression will use spark.io.compression.codec.
spark.io.compression.codec lz4 The codec used to compress internal data such as RDD partitions, event log, broadcast variables and shuffle outputs. By default, Spark provides three codecs: lz4, lzf, and snappy. You can also use fully qualified class names to specify the codec, e.g. org.apache.spark.io.LZ4CompressionCodec, org.apache.spark.io.LZFCompressionCodec, and org.apache.spark.io.SnappyCompressionCodec.
spark.io.compression.lz4.blockSize 32k Block size used in LZ4 compression, in the case when LZ4 compression codec is used. Lowering this block size will also lower shuffle memory usage when LZ4 is used.
spark.io.compression.snappy.blockSize 32k Block size used in Snappy compression, in the case when Snappy compression codec is used. Lowering this block size will also lower shuffle memory usage when Snappy is used.
spark.kryo.classesToRegister (none) If you use Kryo serialization, give a comma-separated list of custom class names to register with Kryo. See the tuning guide for more details.
spark.kryo.referenceTracking true Whether to track references to the same object when serializing data with Kryo, which is necessary if your object graphs have loops and useful for efficiency if they contain multiple copies of the same object. Can be disabled to improve performance if you know this is not the case.
spark.kryo.registrationRequired false Whether to require registration with Kryo. If set to 'true', Kryo will throw an exception if an unregistered class is serialized. If set to false (the default), Kryo will write unregistered class names along with each object. Writing class names can cause significant performance overhead, so enabling this option can enforce strictly that a user has not omitted classes from registration.
spark.kryo.registrator (none) If you use Kryo serialization, give a comma-separated list of classes that register your custom classes with Kryo. This property is useful if you need to register your classes in a custom way, e.g. to specify a custom field serializer. Otherwise spark.kryo.classesToRegister is simpler. It should be set to classes that extend KryoRegistrator. See the tuning guide for more details.
spark.kryo.unsafe false Whether to use unsafe based Kryo serializer. Can be substantially faster by using Unsafe Based IO.
spark.kryoserializer.buffer.max 64m Maximum allowable size of Kryo serialization buffer. This must be larger than any object you attempt to serialize and must be less than 2048m. Increase this if you get a "buffer limit exceeded" exception inside Kryo.
spark.kryoserializer.buffer 64k Initial size of Kryo's serialization buffer. Note that there will be one buffer per core on each worker. This buffer will grow up to spark.kryoserializer.buffer.max if needed.
spark.rdd.compress false Whether to compress serialized RDD partitions (e.g. for StorageLevel.MEMORY_ONLY_SER in Java and Scala or StorageLevel.MEMORY_ONLY in Python). Can save substantial space at the cost of some extra CPU time. Compression will use spark.io.compression.codec.
spark.serializer org.apache.spark.serializer.JavaSerializer Class to use for serializing objects that will be sent over the network or need to be cached in serialized form. The default of Java serialization works with any Serializable Java object but is quite slow, so we recommend using org.apache.spark.serializer.KryoSerializer and configuring Kryo serialization when speed is necessary. Can be any subclass of org.apache.spark.Serializer.
spark.serializer.objectStreamReset 100 When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches objects to prevent writing redundant data, however that stops garbage collection of those objects. By calling 'reset' you flush that info from the serializer, and allow old objects to be collected. To turn off this periodic reset set it to -1. By default it will reset the serializer every 100 objects.
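
The recommendation above to switch to Kryo might look like the following sketch; SheepRecord is a made-up class standing in for your own types:

    import org.apache.spark.{SparkConf, SparkContext}

    // A stand-in for an application class whose instances get shuffled or cached.
    case class SheepRecord(id: Long, name: String)

    val conf = new SparkConf()
      .setAppName("KryoDemo")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryoserializer.buffer.max", "128m")     // raise if "buffer limit exceeded" appears
      .registerKryoClasses(Array(classOf[SheepRecord]))   // equivalent to spark.kryo.classesToRegister

    val sc = new SparkContext(conf)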

Memory Management

Property Name Default Meaning
spark.memory.fraction 0.6 Fraction of (heap space - 300MB) used for execution and storage. The lower this is, the more frequently spills and cached data eviction occur. The purpose of this config is to set aside memory for internal metadata, user data structures, and imprecise size estimation in the case of sparse, unusually large records. Leaving this at the default value is recommended. For more detail, including important information about correctly tuning JVM garbage collection when increasing this value, see this description.
spark.memory.storageFraction 0.5 Amount of storage memory immune to eviction, expressed as a fraction of the size of the region set aside by spark.memory.fraction. The higher this is, the less working memory may be available to execution and tasks may spill to disk more often. Leaving this at the default value is recommended. For more detail, see this description.
spark.memory.offHeap.enabled false If true, Spark will attempt to use off-heap memory for certain operations. If off-heap memory use is enabled, then spark.memory.offHeap.size must be positive.
spark.memory.offHeap.size 0 The absolute amount of memory in bytes which can be used for off-heap allocation. This setting has no impact on heap memory usage, so if your executors' total memory consumption must fit within some hard limit then be sure to shrink your JVM heap size accordingly. This must be set to a positive value when spark.memory.offHeap.enabled=true.
spark.memory.useLegacyMode false Whether to enable the legacy memory management mode used in Spark 1.5 and before. The legacy mode rigidly partitions the heap space into fixed-size regions, potentially leading to excessive spilling if the application was not tuned. The following deprecated memory fraction configurations are not read unless this is enabled: spark.shuffle.memoryFraction, spark.storage.memoryFraction, spark.storage.unrollFraction.
spark.shuffle.memoryFraction 0.2 (deprecated) This is read only if spark.memory.useLegacyMode is enabled. Fraction of Java heap to use for aggregation and cogroups during shuffles. At any given time, the collective size of all in-memory maps used for shuffles is bounded by this limit, beyond which the contents will begin to spill to disk. If spills are often, consider increasing this value at the expense of spark.storage.memoryFraction.
spark.storage.memoryFraction 0.6 (deprecated) This is read only if spark.memory.useLegacyMode is enabled. Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old" generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase it if you configure your own old generation size.
spark.storage.unrollFraction 0.2 (deprecated) This is read only if spark.memory.useLegacyMode is enabled. Fraction of spark.storage.memoryFraction to use for unrolling blocks in memory. This is dynamically allocated by dropping existing blocks when there is not enough free storage space to unroll the new block in its entirety.
spark.storage.replication.proactive false Enables proactive block replication for RDD blocks. Cached RDD block replicas lost due to executor failures are replenished if there are any existing available replicas. This tries to get the replication level of the block to the initial number.
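
A cautious sketch of the unified memory-manager knobs above (the defaults are usually fine; these values are purely illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MemoryDemo")
      .set("spark.memory.fraction", "0.6")           // execution + storage share of (heap - 300MB)
      .set("spark.memory.storageFraction", "0.5")    // portion of that region immune to eviction
      .set("spark.memory.offHeap.enabled", "true")   // allow off-heap allocation...
      .set("spark.memory.offHeap.size", "2g")        // ...which then requires a positive size

    val sc = new SparkContext(conf)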

### Execution Behavior

Property Name Default Meaning
spark.broadcast.blockSize 4m Size of each piece of a block for TorrentBroadcastFactory. Too large a value decreases parallelism during broadcast (makes it slower); however, if it is too small, BlockManager might take a performance hit.
spark.executor.cores 1 in YARN mode, all the available cores on the worker in standalone and Mesos coarse-grained modes. The number of cores to use on each executor. In standalone and Mesos coarse-grained modes, setting this parameter allows an application to run multiple executors on the same worker, provided that there are enough cores on that worker. Otherwise, only one executor per application will run on each worker.
spark.default.parallelism For distributed shuffle operations like reduceByKey and join, the largest number of partitions in a parent RDD. For operations like parallelize with no parent RDDs, it depends on the cluster manager:
  • Local mode: number of cores on the local machine
  • Mesos fine grained mode: 8
  • Others: total number of cores on all executor nodes or 2, whichever is larger
Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by user.
spark.executor.heartbeatInterval 10s Interval between each executor's heartbeats to the driver. Heartbeats let the driver know that the executor is still alive and update it with metrics for in-progress tasks. spark.executor.heartbeatInterval should be significantly less than spark.network.timeout.
spark.files.fetchTimeout 60s Communication timeout to use when fetching files added through SparkContext.addFile() from the driver.
spark.files.useFetchCache true If set to true (default), file fetching will use a local cache that is shared by executors that belong to the same application, which can improve task launching performance when running many executors on the same host. If set to false, these caching optimizations will be disabled and all executors will fetch their own copies of files. This optimization may be disabled in order to use Spark local directories that reside on NFS filesystems (see SPARK-6313 for more details).
spark.files.overwrite false Whether to overwrite files added through SparkContext.addFile() when the target file exists and its contents do not match those of the source.
spark.files.maxPartitionBytes 134217728 (128 MB) The maximum number of bytes to pack into a single partition when reading files.
spark.files.openCostInBytes 4194304 (4 MB) The estimated cost to open a file, measured by the number of bytes could be scanned in the same time. This is used when putting multiple files into a partition. It is better to over estimate, then the partitions with small files will be faster than partitions with bigger files.
spark.hadoop.cloneConf false If set to true, clones a new Hadoop Configuration object for each task. This option should be enabled to work around Configuration thread-safety issues (see SPARK-2546 for more details). This is disabled by default in order to avoid unexpected performance regressions for jobs that are not affected by these issues.
spark.hadoop.validateOutputSpecs true If set to true, validates the output specification (e.g. checking if the output directory already exists) used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing output directories. We recommend that users do not disable this except if trying to achieve compatibility with previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand. This setting is ignored for jobs generated through Spark Streaming's StreamingContext, since data may need to be rewritten to pre-existing output directories during checkpoint recovery.
spark.storage.memoryMapThreshold 2m Size of a block above which Spark memory maps when reading a block from disk. This prevents Spark from memory mapping very small blocks. In general, memory mapping has high overhead for blocks close to or below the page size of the operating system.
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 1 The file output committer algorithm version, valid algorithm version number: 1 or 2. Version 2 may have better performance, but version 1 may handle failures better in certain situations, as per MAPREDUCE-4815.
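
An illustrative combination of the execution-behavior settings above (the numbers are placeholders that depend on cluster size):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ExecutionDemo")
      .set("spark.default.parallelism", "200")          // shuffle partitions when not set explicitly
      .set("spark.executor.cores", "4")                 // cores per executor
      .set("spark.executor.heartbeatInterval", "10s")   // keep well below spark.network.timeout
      .set("spark.files.overwrite", "true")             // refresh files added via SparkContext.addFile()

    val sc = new SparkContext(conf)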

### Networking

Property Name Default Meaning
spark.rpc.message.maxSize 128 Maximum message size (in MB) to allow in "control plane" communication; generally only applies to map output size information sent between executors and the driver. Increase this if you are running jobs with many thousands of map and reduce tasks and see messages about the RPC message size.
spark.blockManager.port (random) Port for all block managers to listen on. These exist on both the driver and the executors.
spark.driver.blockManager.port (value of spark.blockManager.port) Driver-specific port for the block manager to listen on, for cases where it cannot use the same configuration as executors.
spark.driver.bindAddress (value of spark.driver.host) Hostname or IP address where to bind listening sockets. This config overrides the SPARK_LOCAL_IP environment variable (see below). 
It also allows a different address from the local one to be advertised to executors or external systems. This is useful, for example, when running containers with bridged networking. For this to properly work, the different ports used by the driver (RPC, block manager and UI) need to be forwarded from the container's host.
spark.driver.host (local hostname) Hostname or IP address for the driver. This is used for communicating with the executors and the standalone Master.
spark.driver.port (random) Port for the driver to listen on. This is used for communicating with the executors and the standalone Master.
spark.network.timeout 120s Default timeout for all network interactions. This config will be used in place of spark.core.connection.ack.wait.timeout, spark.storage.blockManagerSlaveTimeoutMs, spark.shuffle.io.connectionTimeout, spark.rpc.askTimeout or spark.rpc.lookupTimeout if they are not configured.
spark.port.maxRetries 16 Maximum number of retries when binding to a port before giving up. When a port is given a specific value (non 0), each subsequent retry will increment the port used in the previous attempt by 1 before retrying. This essentially allows it to try a range of ports from the start port specified to port + maxRetries.
spark.rpc.numRetries 3 Number of times to retry before an RPC task gives up. An RPC task will run at most times of this number.
spark.rpc.retry.wait 3s Duration for an RPC ask operation to wait before retrying.
spark.rpc.askTimeout spark.network.timeout Duration for an RPC ask operation to wait before timing out.
spark.rpc.lookupTimeout 120s Duration for an RPC remote endpoint lookup operation to wait before timing out.
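
When the driver runs behind a firewall or inside a container, pinning the otherwise random ports is a common use of the table above; the port numbers below are assumptions chosen for illustration:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("NetworkingDemo")
      .set("spark.driver.port", "7001")          // otherwise chosen at random
      .set("spark.blockManager.port", "7002")    // block managers on driver and executors
      .set("spark.port.maxRetries", "32")        // try ports 7001..7033 / 7002..7034 before giving up
      .set("spark.network.timeout", "300s")      // raise the blanket timeout for slow networks

    val sc = new SparkContext(conf)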

### Scheduling

Property Name Default Meaning
spark.cores.max (not set) When running on a standalone deploy cluster or a Mesos cluster in "coarse-grained" sharing mode, the maximum amount of CPU cores to request for the application from across the cluster (not from each machine). If not set, the default will be spark.deploy.defaultCores on Spark's standalone cluster manager, or infinite (all available cores) on Mesos.
spark.locality.wait 3s How long to wait to launch a data-local task before giving up and launching it on a less-local node. The same wait will be used to step through multiple locality levels (process-local, node-local, rack-local and then any). It is also possible to customize the waiting time for each level by setting spark.locality.wait.node, etc. You should increase this setting if your tasks are long and see poor locality, but the default usually works well.
spark.locality.wait.node spark.locality.wait Customize the locality wait for node locality. For example, you can set this to 0 to skip node locality and search immediately for rack locality (if your cluster has rack information).
spark.locality.wait.process spark.locality.wait Customize the locality wait for process locality. This affects tasks that attempt to access cached data in a particular executor process.
spark.locality.wait.rack spark.locality.wait Customize the locality wait for rack locality.
spark.scheduler.maxRegisteredResourcesWaitingTime 30s Maximum amount of time to wait for resources to register before scheduling begins.
spark.scheduler.minRegisteredResourcesRatio 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode The minimum ratio of registered resources (registered resources / total expected resources) to wait for before scheduling begins (resources are executors in YARN mode, CPU cores in standalone mode and Mesos coarse-grained mode ['spark.cores.max' value is total expected resources for Mesos coarse-grained mode]). Specified as a double between 0.0 and 1.0. Regardless of whether the minimum ratio of resources has been reached, the maximum amount of time it will wait before scheduling begins is controlled by config spark.scheduler.maxRegisteredResourcesWaitingTime.
spark.scheduler.mode FIFO The scheduling mode between jobs submitted to the same SparkContext. Can be set to FAIR to use fair sharing instead of queueing jobs one after another. Useful for multi-user services.
spark.scheduler.revive.interval 1s The interval length for the scheduler to revive the worker resource offers to run tasks.
spark.blacklist.enabled false If set to "true", prevent Spark from scheduling tasks on executors that have been blacklisted due to too many task failures. The blacklisting algorithm can be further controlled by the other "spark.blacklist" configuration options.
spark.blacklist.timeout 1h (Experimental) How long a node or executor is blacklisted for the entire application, before it is unconditionally removed from the blacklist to attempt running new tasks.
spark.blacklist.task.maxTaskAttemptsPerExecutor 1 (Experimental) For a given task, how many times it can be retried on one executor before the executor is blacklisted for that task.
spark.blacklist.task.maxTaskAttemptsPerNode 2 (Experimental) For a given task, how many times it can be retried on one node, before the entire node is blacklisted for that task.
spark.blacklist.stage.maxFailedTasksPerExecutor 2 (Experimental) How many different tasks must fail on one executor, within one stage, before the executor is blacklisted for that stage.
spark.blacklist.stage.maxFailedExecutorsPerNode 2 (Experimental) How many different executors are marked as blacklisted for a given stage, before the entire node is marked as failed for the stage.
spark.blacklist.application.maxFailedTasksPerExecutor 2 (Experimental) How many different tasks must fail on one executor, in successful task sets, before the executor is blacklisted for the entire application. Blacklisted executors will be automatically added back to the pool of available resources after the timeout specified by spark.blacklist.timeout. Note that with dynamic allocation, though, the executors may get marked as idle and be reclaimed by the cluster manager.
spark.blacklist.application.maxFailedExecutorsPerNode 2 (Experimental) How many different executors must be blacklisted for the entire application, before the node is blacklisted for the entire application. Blacklisted nodes will be automatically added back to the pool of available resources after the timeout specified by spark.blacklist.timeout. Note that with dynamic allocation, though, the executors on the node may get marked as idle and be reclaimed by the cluster manager.
spark.blacklist.killBlacklistedExecutors false (Experimental) If set to "true", allow Spark to automatically kill, and attempt to re-create, executors when they are blacklisted. Note that, when an entire node is added to the blacklist, all of the executors on that node will be killed.
spark.speculation false If set to "true", performs speculative execution of tasks. This means if one or more tasks are running slowly in a stage, they will be re-launched.
spark.speculation.interval 100ms How often Spark will check for tasks to speculate.
spark.speculation.multiplier 1.5 How many times slower a task is than the median to be considered for speculation.
spark.speculation.quantile 0.75 Fraction of tasks which must be complete before speculation is enabled for a particular stage.
spark.task.cpus 1 Number of cores to allocate for each task.
spark.task.maxFailures 4 Number of failures of any particular task before giving up on the job. The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
spark.task.reaper.enabled false Enables monitoring of killed / interrupted tasks. When set to true, any task which is killed will be monitored by the executor until that task actually finishes executing. See the other spark.task.reaper.* configurations for details on how to control the exact behavior of this monitoring. When set to false (the default), task killing will use an older code path which lacks such monitoring.
spark.task.reaper.pollingInterval 10s When spark.task.reaper.enabled = true, this setting controls the frequency at which executors will poll the status of killed tasks. If a killed task is still running when polled then a warning will be logged and, by default, a thread-dump of the task will be logged (this thread dump can be disabled via the spark.task.reaper.threadDump setting, which is documented below).
spark.task.reaper.threadDump true When spark.task.reaper.enabled = true, this setting controls whether task thread dumps are logged during periodic polling of killed tasks. Set this to false to disable collection of thread dumps.
spark.task.reaper.killTimeout -1 When spark.task.reaper.enabled = true, this setting specifies a timeout after which the executor JVM will kill itself if a killed task has not stopped running. The default value, -1, disables this mechanism and prevents the executor from self-destructing. The purpose of this setting is to act as a safety-net to prevent runaway uncancellable tasks from rendering an executor unusable.
spark.stage.maxConsecutiveAttempts 4 Number of consecutive stage attempts allowed before a stage is aborted.
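
A sketch combining a few of the scheduling options above (values again illustrative): fair scheduling between jobs in one SparkContext plus speculative re-launch of straggler tasks:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("SchedulingDemo")
      .set("spark.scheduler.mode", "FAIR")          // share the context fairly between concurrent jobs
      .set("spark.speculation", "true")             // re-launch slow tasks speculatively
      .set("spark.speculation.quantile", "0.9")     // wait until 90% of tasks finish before speculating
      .set("spark.task.maxFailures", "8")           // tolerate more failures of any single task

    val sc = new SparkContext(conf)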

### Dynamic Allocation

Property Name Default Meaning
spark.dynamicAllocation.enabled false Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. For more detail, see the description here.

This requires spark.shuffle.service.enabled to be set. The following configurations are also relevant: spark.dynamicAllocation.minExecutors, spark.dynamicAllocation.maxExecutors, and spark.dynamicAllocation.initialExecutors.
spark.dynamicAllocation.executorIdleTimeout 60s If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed. For more detail, see this description.
spark.dynamicAllocation.cachedExecutorIdleTimeout infinity If dynamic allocation is enabled and an executor which has cached data blocks has been idle for more than this duration, the executor will be removed. For more details, see this description.
spark.dynamicAllocation.initialExecutors spark.dynamicAllocation.minExecutors Initial number of executors to run if dynamic allocation is enabled. 

If `--num-executors` (or `spark.executor.instances`) is set and larger than this value, it will be used as the initial number of executors.
spark.dynamicAllocation.maxExecutors infinity Upper bound for the number of executors if dynamic allocation is enabled.
spark.dynamicAllocation.minExecutors 0 Lower bound for the number of executors if dynamic allocation is enabled.
spark.dynamicAllocation.schedulerBacklogTimeout 1s If dynamic allocation is enabled and there have been pending tasks backlogged for more than this duration, new executors will be requested. For more detail, see this description.
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout schedulerBacklogTimeout Same as spark.dynamicAllocation.schedulerBacklogTimeout, but used only for subsequent executor requests. For more detail, see this description.
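
Dynamic allocation ties several of these settings together; a minimal sketch, assuming the external shuffle service has already been set up on the cluster (a prerequisite noted above):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("DynamicAllocationDemo")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")               // required by dynamic allocation
      .set("spark.dynamicAllocation.minExecutors", "2")
      .set("spark.dynamicAllocation.maxExecutors", "50")
      .set("spark.dynamicAllocation.executorIdleTimeout", "120s")

    val sc = new SparkContext(conf)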

### Security

Property Name Default Meaning
spark.acls.enable false Whether Spark acls should be enabled. If enabled, this checks to see if the user has access permissions to view or modify the job. Note this requires the user to be known, so if the user comes across as null no checks are done. Filters can be used with the UI to authenticate and set the user.
spark.admin.acls Empty Comma separated list of users/administrators that have view and modify access to all Spark jobs. This can be used if you run on a shared cluster and have a set of administrators or devs who help debug when things do not work. Putting a "*" in the list means any user can have the privilege of admin.
spark.admin.acls.groups Empty Comma separated list of groups that have view and modify access to all Spark jobs. This can be used if you have a set of administrators or developers who help maintain and debug the underlying infrastructure. Putting a "*" in the list means any user in any group can have the privilege of admin. The user groups are obtained from the instance of the groups mapping provider specified by spark.user.groups.mapping. Check the entry spark.user.groups.mapping for more details.
spark.user.groups.mapping org.apache.spark.security.ShellBasedGroupsMappingProvider The list of groups for a user is determined by a group mapping service defined by the trait org.apache.spark.security.GroupMappingServiceProvider, which can be configured by this property. A default unix shell based implementation is provided, org.apache.spark.security.ShellBasedGroupsMappingProvider, which can be specified to resolve a list of groups for a user. Note: This implementation supports only a Unix/Linux based environment; the Windows environment is currently not supported. However, a new platform/protocol can be supported by implementing the trait org.apache.spark.security.GroupMappingServiceProvider.
spark.authenticate false Whether Spark authenticates its internal connections. See spark.authenticate.secret if not running on YARN.
spark.authenticate.secret None Set the secret key used for Spark to authenticate between components. This needs to be set if not running on YARN and authentication is enabled.
spark.network.crypto.enabled false Enable encryption using the commons-crypto library for RPC and block transfer service. Requires spark.authenticate to be enabled.
spark.network.crypto.keyLength 128 The length in bits of the encryption key to generate. Valid values are 128, 192 and 256.
spark.network.crypto.keyFactoryAlgorithm PBKDF2WithHmacSHA1 The key factory algorithm to use when generating encryption keys. Should be one of the algorithms supported by the javax.crypto.SecretKeyFactory class in the JRE being used.
spark.network.crypto.saslFallback true Whether to fall back to SASL authentication if authentication fails using Spark's internal mechanism. This is useful when the application is connecting to old shuffle services that do not support the internal Spark authentication protocol. On the server side, this can be used to block older clients from authenticating against a new shuffle service.
spark.network.crypto.config.* None Configuration values for the commons-crypto library, such as which cipher implementations to use. The config name should be the name of commons-crypto configuration without the "commons.crypto" prefix.
spark.authenticate.enableSaslEncryption false Enable encrypted communication when authentication is enabled. This is supported by the block transfer service and the RPC endpoints.
spark.network.sasl.serverAlwaysEncrypt false Disable unencrypted connections for services that support SASL authentication.
spark.core.connection.ack.wait.timeout spark.network.timeout How long for the connection to wait for ack to occur before timing out and giving up. To avoid unwilling timeout caused by long pause like GC, you can set larger value.
spark.modify.acls Empty Comma separated list of users that have modify access to the Spark job. By default only the user that started the Spark job has access to modify it (kill it for example). Putting a "*" in the list means any user can have access to modify it.
spark.modify.acls.groups Empty Comma separated list of groups that have modify access to the Spark job. This can be used if you have a set of administrators or developers from the same team to have access to control the job. Putting a "*" in the list means any user in any group has the access to modify the Spark job. The user groups are obtained from the instance of the groups mapping provider specified by spark.user.groups.mapping. Check the entry spark.user.groups.mapping for more details.
spark.ui.filters None Comma separated list of filter class names to apply to the Spark web UI. The filter should be a standard javax servlet Filter. Parameters to each filter can also be specified by setting a java system property of: 
spark.<class name of filter>.params='param1=value1,param2=value2'
For example: 
-Dspark.ui.filters=com.test.filter1 
-Dspark.com.test.filter1.params='param1=foo,param2=testing'
spark.ui.view.acls Empty Comma separated list of users that have view access to the Spark web ui. By default only the user that started the Spark job has view access. Putting a "*" in the list means any user can have view access to this Spark job.
spark.ui.view.acls.groups Empty Comma separated list of groups that have view access to the Spark web ui to view the Spark Job details. This can be used if you have a set of administrators or developers or users who can monitor the Spark job submitted. Putting a "*" in the list means any user in any group can view the Spark job details on the Spark web ui. The user groups are obtained from the instance of the groups mapping provider specified by spark.user.groups.mapping. Check the entry spark.user.groups.mapping for more details.
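
The ACL settings are normally kept in spark-defaults.conf rather than application code, but expressed as key-value pairs they might look like this sketch (the user names are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("AclsDemo")
      .set("spark.acls.enable", "true")                    // turn on view/modify permission checks
      .set("spark.admin.acls", "alice,bob")                // placeholder administrators
      .set("spark.ui.view.acls", "analytics-team-user")    // placeholder user with view-only access
      .set("spark.modify.acls", "alice")                   // placeholder user allowed to kill jobs

    val sc = new SparkContext(conf)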

### TLS / SSL

Property Name Default Meaning
spark.ssl.enabled false Whether to enable SSL connections on all supported protocols. 
When spark.ssl.enabled is configured, spark.ssl.protocol is required. 
All the SSL settings like spark.ssl.xxx where xxx is a particular configuration property, denote the global configuration for all the supported protocols. In order to override the global configuration for the particular protocol, the properties must be overwritten in the protocol-specific namespace. 
Use spark.ssl.YYY.XXX settings to overwrite the global configuration for particular protocol denoted by YYY. Example values for YYY include fs, ui, standalone, and historyServer. See SSL Configuration for details on hierarchical SSL configuration for services.
spark.ssl.[namespace].port None The port where the SSL service will listen on. 
The port must be defined within a namespace configuration; see SSL Configuration for the available namespaces. 
When not set, the SSL port will be derived from the non-SSL port for the same service. A value of "0" will make the service bind to an ephemeral port.
spark.ssl.enabledAlgorithms Empty A comma separated list of ciphers. The specified ciphers must be supported by JVM. The reference list of protocols one can find on this page. Note: If not set, it will use the default cipher suites of JVM.
spark.ssl.keyPassword None A password to the private key in key-store.
spark.ssl.keyStore None A path to a key-store file. The path can be absolute or relative to the directory where the component is started in.
spark.ssl.keyStorePassword None A password to the key-store.
spark.ssl.keyStoreType JKS The type of the key-store.
spark.ssl.protocol None A protocol name. The protocol must be supported by JVM. The reference list of protocols one can find on this page.
spark.ssl.needClientAuth false Set true if SSL needs client authentication.
spark.ssl.trustStore None A path to a trust-store file. The path can be absolute or relative to the directory where the component is started in.
spark.ssl.trustStorePassword None A password to the trust-store.
spark.ssl.trustStoreType JKS The type of the trust-store.
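
SSL settings likewise usually live in spark-defaults.conf; they are shown here as the same key-value pairs purely for illustration, with keystore paths and passwords as obvious placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("SslDemo")
      .set("spark.ssl.enabled", "true")
      .set("spark.ssl.protocol", "TLSv1.2")                          // required once SSL is enabled
      .set("spark.ssl.keyStore", "/etc/spark/ssl/keystore.jks")      // placeholder path
      .set("spark.ssl.keyStorePassword", "changeit")                 // placeholder password
      .set("spark.ssl.trustStore", "/etc/spark/ssl/truststore.jks")  // placeholder path
      .set("spark.ssl.trustStorePassword", "changeit")               // placeholder password

    val sc = new SparkContext(conf)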