There is a very famous question on Stack Overflow: why is processing a sorted array faster than processing an unsorted array? It shows just how large an impact branch prediction has on code execution efficiency.
Modern CPUs all support branch prediction and instruction pipelining, and the combination of the two can greatly improve CPU efficiency. For a simple if jump, the CPU can predict the branch fairly well. For a switch jump, however, the CPU has much less to work with: a switch is essentially an indexed lookup of a target address in an address table, followed by a jump to that address.
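To make the jump-table nature concrete, here is a small, hypothetical example (a Dispatch class that is not part of the article's benchmark): javac compiles a dense int switch like this into a single tableswitch bytecode, which you can inspect with `javap -c Dispatch`.

public class Dispatch {
    // javac turns this dense switch into one `tableswitch` instruction:
    // the jump target is looked up by index in a table instead of being
    // found by testing conditions one at a time.
    static int dispatch(int op) {
        switch (op) {
        case 0: return 10;
        case 1: return 20;
        case 2: return 30;
        case 3: return 40;
        default: return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch(2)); // prints 30
    }
}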
To improve code execution efficiency, one important principle is to avoid forcing the CPU to flush its pipeline, so raising the branch prediction hit rate matters a great deal.
So, when one branch of a switch is taken with very high probability, can we help the CPU at the code level by moving that check up front, and thereby improve execution efficiency?
In ChannelEventRunnable there is a switch that dispatches on the channel state and then runs the corresponding logic. Once a channel has been established, more than 99.9% of the time its state is ChannelState.RECEIVED, so it is worth considering hoisting that check to the front, as in the sketch below.
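A minimal sketch of the idea (simplified, not the actual ChannelEventRunnable source; the handleXxx methods are hypothetical stand-ins for the real channel logic): the overwhelmingly common RECEIVED state is handled by an easily predicted if, and only the rare states fall into the switch.

public class ChannelDispatchSketch {
    enum ChannelState { CONNECTED, DISCONNECTED, SENT, RECEIVED, CAUGHT }

    void dispatch(ChannelState state) {
        if (state == ChannelState.RECEIVED) {
            handleReceived();      // hot path, taken >99.9% of the time
            return;
        }
        switch (state) {           // rare states only
        case CONNECTED:    handleConnected();    break;
        case DISCONNECTED: handleDisconnected(); break;
        case SENT:         handleSent();         break;
        case CAUGHT:       handleCaught();       break;
        default:           break;
        }
    }

    // Hypothetical handlers standing in for the real channel logic.
    void handleReceived()     { }
    void handleConnected()    { }
    void handleDisconnected() { }
    void handleSent()         { }
    void handleCaught()       { }
}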
Let's verify this with JMH:
import java.util.Date;
import java.util.Random;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

public class TestBenchMarks {
    public enum ChannelState {
        CONNECTED, DISCONNECTED, SENT, RECEIVED, CAUGHT
    }

    @State(Scope.Benchmark)
    public static class ExecutionPlan {
        @Param({ "1000000" })
        public int size;

        public ChannelState[] states = null;

        @Setup
        public void setUp() {
            // Roughly 99.99% of the entries are RECEIVED; the remaining few
            // are spread over all states, mimicking the real distribution.
            ChannelState[] values = ChannelState.values();
            states = new ChannelState[size];
            Random random = new Random(new Date().getTime());
            for (int i = 0; i < size; i++) {
                int nextInt = random.nextInt(1000000);
                if (nextInt > 100) {
                    states[i] = ChannelState.RECEIVED;
                } else {
                    states[i] = values[nextInt % values.length];
                }
            }
        }
    }

    // Dispatch purely via the switch.
    @Fork(value = 5)
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void benchSiwtch(ExecutionPlan plan, Blackhole bh) {
        int result = 0;
        for (int i = 0; i < plan.size; ++i) {
            switch (plan.states[i]) {
            case CONNECTED:
                result += ChannelState.CONNECTED.ordinal();
                break;
            case DISCONNECTED:
                result += ChannelState.DISCONNECTED.ordinal();
                break;
            case SENT:
                result += ChannelState.SENT.ordinal();
                break;
            case RECEIVED:
                result += ChannelState.RECEIVED.ordinal();
                break;
            case CAUGHT:
                result += ChannelState.CAUGHT.ordinal();
                break;
            }
        }
        bh.consume(result);
    }

    // Check the hot RECEIVED case with an if first, then switch for the rest.
    @Fork(value = 5)
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void benchIfAndSwitch(ExecutionPlan plan, Blackhole bh) {
        int result = 0;
        for (int i = 0; i < plan.size; ++i) {
            ChannelState state = plan.states[i];
            if (state == ChannelState.RECEIVED) {
                result += ChannelState.RECEIVED.ordinal();
            } else {
                switch (state) {
                case CONNECTED:
                    result += ChannelState.CONNECTED.ordinal();
                    break;
                case SENT:
                    result += ChannelState.SENT.ordinal();
                    break;
                case DISCONNECTED:
                    result += ChannelState.DISCONNECTED.ordinal();
                    break;
                case CAUGHT:
                    result += ChannelState.CAUGHT.ordinal();
                    break;
                }
            }
        }
        bh.consume(result);
    }
}
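One way to launch the benchmarks, assuming the usual JMH dependency (the BenchmarkRunner class below is a hypothetical launcher, not part of the original listing), is the Runner API; with the standard JMH Maven archetype you can also simply run java -jar target/benchmarks.jar.

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        // Run only the TestBenchMarks benchmarks defined above.
        Options opt = new OptionsBuilder()
                .include(TestBenchMarks.class.getSimpleName())
                .build();
        new Runner(opt).run();
    }
}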
benchIfAndSwitch hoists the check for ChannelState.RECEIVED ahead of the switch, while benchSiwtch dispatches through the switch alone. The benchmark results are:
Result "io.github.hengyunabc.jmh.TestBenchMarks.benchSiwtch": 576.745 ±(99.9%) 6.806 ops/s [Average] (min, avg, max) = (490.348, 576.745, 618.360), stdev = 20.066 CI (99.9%): 569.939, 583.550
Benchmark                        (size)   Mode  Cnt     Score    Error  Units
TestBenchMarks.benchIfAndSwitch  1000000  thrpt  100  1535.867 ± 61.212  ops/s
TestBenchMarks.benchSiwtch       1000000  thrpt  100   576.745 ±  6.806  ops/s
As the results show, hoisting the if check ahead of the switch does improve efficiency, raising throughput here by roughly 2.7x (1535.867 vs 576.745 ops/s). This trick is worth applying in places with strict performance requirements.
Benchmark code: https://github.com/hengyunabc/jmh-demo