hive> select count(1) from serde_regex;
Automatically selecting local only mode for query
Query ID = hadoop_20160125101917_ab5615a4-e6f1-47e3-9e97-6795c3268cea
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
The formula: number of reducers = InputFileSize / bytes per reducer
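A small worked example of this formula, with an assumed input size of roughly 10 GB (the numbers are illustrative, not taken from the job above):

total input        ≈ 10,000,000,000 bytes
bytes per reducer  = 256,000,000 (default)
reducers           = ceil(10,000,000,000 / 256,000,000) = 40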
Three parameters control this:

hive.exec.reducers.bytes.per.reducer controls how much input a single reducer processes. The default is 256 MB.
hive> set hive.exec.reducers.bytes.per.reducer;
hive.exec.reducers.bytes.per.reducer=256000000
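As a sketch (same assumed 10 GB of input as above), lowering this value makes each reducer handle less data and therefore raises the planned reducer count:

hive> set hive.exec.reducers.bytes.per.reducer=128000000;
-- reducers = ceil(10,000,000,000 / 128,000,000) = 79, still subject to hive.exec.reducers.max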
hive.exec.reducers.max controls the maximum number of reducers. If InputFileSize / bytes per reducer > max, the reducer count is capped at the value of this parameter.
hive> set hive.exec.reducers.max;
hive.exec.reducers.max=1009
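A hypothetical case where this cap takes effect, assuming about 500 GB of input:

ceil(500,000,000,000 / 256,000,000) = 1954 > 1009
-- the planner therefore falls back to hive.exec.reducers.max and launches 1009 reducers

hive> set hive.exec.reducers.max=500;
-- lowering the cap would further limit the job to 500 reducers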
If mapreduce.reduce.tasks is set explicitly, the size-based calculation is skipped entirely and that value is used directly as the reducer count.
hive> set mapreduce.reduce.tasks;
mapred.reduce.tasks=-1
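Here -1 means the parameter is unset, so the size-based estimate applies. A sketch of forcing a fixed reducer count instead (mapreduce.job.reduces is the property name suggested in the job log above):

hive> set mapreduce.job.reduces=10;
-- later queries with a reduce phase now run with exactly 10 reducers,
-- except those Hive compiles down to a single reducer, such as the global count(1) above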