This article takes a look at the mechanism behind FluxSink.
reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/FluxSink.java
/**
 * Wrapper API around a downstream Subscriber for emitting any number of
 * next signals followed by zero or one onError/onComplete.
 * <p>
 * @param <T> the value type
 */
public interface FluxSink<T> {

    /**
     * @see Subscriber#onComplete()
     */
    void complete();

    /**
     * Return the current subscriber {@link Context}.
     * <p>
     * {@link Context} can be enriched via {@link Flux#subscriberContext(Function)}
     * operator or directly by a child subscriber overriding
     * {@link CoreSubscriber#currentContext()}
     *
     * @return the current subscriber {@link Context}.
     */
    Context currentContext();

    /**
     * @see Subscriber#onError(Throwable)
     * @param e the exception to signal, not null
     */
    void error(Throwable e);

    /**
     * Try emitting, might throw an unchecked exception.
     * @see Subscriber#onNext(Object)
     * @param t the value to emit, not null
     */
    FluxSink<T> next(T t);

    /**
     * The current outstanding request amount.
     * @return the current outstanding request amount
     */
    long requestedFromDownstream();

    /**
     * Returns true if the downstream cancelled the sequence.
     * @return true if the downstream cancelled the sequence
     */
    boolean isCancelled();

    /**
     * Attaches a {@link LongConsumer} to this {@link FluxSink} that will be notified of
     * any request to this sink.
     * <p>
     * For push/pull sinks created using {@link Flux#create(java.util.function.Consumer)}
     * or {@link Flux#create(java.util.function.Consumer, FluxSink.OverflowStrategy)},
     * the consumer
     * is invoked for every request to enable a hybrid backpressure-enabled push/pull model.
     * When bridging with asynchronous listener-based APIs, the {@code onRequest} callback
     * may be used to request more data from source if required and to manage backpressure
     * by delivering data to sink only when requests are pending.
     * <p>
     * For push-only sinks created using {@link Flux#push(java.util.function.Consumer)}
     * or {@link Flux#push(java.util.function.Consumer, FluxSink.OverflowStrategy)},
     * the consumer is invoked with an initial request of {@code Long.MAX_VALUE} when this method
     * is invoked.
     *
     * @param consumer the consumer to invoke on each request
     * @return {@link FluxSink} with a consumer that is notified of requests
     */
    FluxSink<T> onRequest(LongConsumer consumer);

    /**
     * Associates a disposable resource with this FluxSink
     * that will be disposed in case the downstream cancels the sequence
     * via {@link org.reactivestreams.Subscription#cancel()}.
     * @param d the disposable callback to use
     * @return the {@link FluxSink} with resource to be disposed on cancel signal
     */
    FluxSink<T> onCancel(Disposable d);

    /**
     * Associates a disposable resource with this FluxSink
     * that will be disposed on the first terminate signal which may be
     * a cancel, complete or error signal.
     * @param d the disposable callback to use
     * @return the {@link FluxSink} with resource to be disposed on first terminate signal
     */
    FluxSink<T> onDispose(Disposable d);

    /**
     * Enumeration for backpressure handling.
     */
    enum OverflowStrategy {
        /**
         * Completely ignore downstream backpressure requests.
         * <p>
         * This may yield {@link IllegalStateException} when queues get full downstream.
         */
        IGNORE,
        /**
         * Signal an {@link IllegalStateException} when the downstream can't keep up
         */
        ERROR,
        /**
         * Drop the incoming signal if the downstream is not ready to receive it.
         */
        DROP,
        /**
         * Downstream will get only the latest signals from upstream.
         */
        LATEST,
        /**
         * Buffer all signals if the downstream can't keep up.
         * <p>
         * Warning! This does unbounded buffering and may lead to {@link OutOfMemoryError}.
         */
        BUFFER
    }
}
Note that OverflowStrategy.BUFFER uses an unbounded queue, so the OOM risk deserves extra attention.
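Not part of the Reactor sources: a minimal plain-Java sketch of why an unbounded buffer is dangerous when the producer outpaces the consumer — unconsumed items simply pile up, and the BUFFER strategy behaves analogously when downstream demand lags far behind emission.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class UnboundedBufferDemo {

    // Simulates a producer that emits 'produced' items while the consumer
    // drains only 'consumed' of them; the remainder stays in the queue.
    static int backlog(int produced, int consumed) {
        Queue<Integer> queue = new ArrayDeque<>(); // unbounded, like BUFFER
        for (int i = 0; i < produced; i++) {
            queue.offer(i);                  // sink.next(...) with no backpressure
        }
        for (int i = 0; i < consumed && !queue.isEmpty(); i++) {
            queue.poll();                    // the slow subscriber consumes far less
        }
        return queue.size();                 // memory retained; in the extreme, OOM
    }

    public static void main(String[] args) {
        // One million emitted, only ten consumed: 999_990 items retained
        System.out.println(backlog(1_000_000, 10));
    }
}
```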
public static void main(String[] args) throws InterruptedException {
    final Flux<Integer> flux = Flux.<Integer>create(fluxSink -> {
        //NOTE sink:class reactor.core.publisher.FluxCreate$SerializedSink
        LOGGER.info("sink:{}", fluxSink.getClass());
        while (true) {
            LOGGER.info("sink next");
            fluxSink.next(ThreadLocalRandom.current().nextInt());
        }
    }, FluxSink.OverflowStrategy.BUFFER);
    //NOTE flux:class reactor.core.publisher.FluxCreate,prefetch:-1
    LOGGER.info("flux:{},prefetch:{}", flux.getClass(), flux.getPrefetch());
    flux.subscribe(e -> {
        LOGGER.info("subscribe:{}", e);
        try {
            TimeUnit.SECONDS.sleep(10);
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        }
    });
    TimeUnit.MINUTES.sleep(20);
}
The create call here produces a reactor.core.publisher.FluxCreate, whose sink is a reactor.core.publisher.FluxCreate$SerializedSink.
reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/Flux.java
/**
 * Subscribe {@link Consumer} to this {@link Flux} that will respectively consume all the
 * elements in the sequence, handle errors, react to completion, and request upon subscription.
 * It will let the provided {@link Subscription subscriptionConsumer}
 * request the adequate amount of data, or request unbounded demand
 * {@code Long.MAX_VALUE} if no such consumer is provided.
 * <p>
 * For a passive version that observe and forward incoming data see {@link #doOnNext(java.util.function.Consumer)},
 * {@link #doOnError(java.util.function.Consumer)}, {@link #doOnComplete(Runnable)}
 * and {@link #doOnSubscribe(Consumer)}.
 * <p>For a version that gives you more control over backpressure and the request, see
 * {@link #subscribe(Subscriber)} with a {@link BaseSubscriber}.
 * <p>
 * Keep in mind that since the sequence can be asynchronous, this will immediately
 * return control to the calling thread. This can give the impression the consumer is
 * not invoked when executing in a main thread or a unit test for instance.
 *
 * <p>
 * <img class="marble" src="https://raw.githubusercontent.com/reactor/reactor-core/v3.1.3.RELEASE/src/docs/marble/subscribecomplete.png" alt="">
 *
 * @param consumer the consumer to invoke on each value
 * @param errorConsumer the consumer to invoke on error signal
 * @param completeConsumer the consumer to invoke on complete signal
 * @param subscriptionConsumer the consumer to invoke on subscribe signal, to be used
 * for the initial {@link Subscription#request(long) request}, or null for max request
 *
 * @return a new {@link Disposable} that can be used to cancel the underlying {@link Subscription}
 */
public final Disposable subscribe(
        @Nullable Consumer<? super T> consumer,
        @Nullable Consumer<? super Throwable> errorConsumer,
        @Nullable Runnable completeConsumer,
        @Nullable Consumer<? super Subscription> subscriptionConsumer) {
    return subscribeWith(new LambdaSubscriber<>(consumer, errorConsumer,
            completeConsumer, subscriptionConsumer));
}

@Override
public final void subscribe(Subscriber<? super T> actual) {
    onLastAssembly(this).subscribe(Operators.toCoreSubscriber(actual));
}
This creates a LambdaSubscriber, and ultimately FluxCreate.subscribe is invoked.
reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/FluxCreate.java
public void subscribe(CoreSubscriber<? super T> actual) {
    BaseSink<T> sink = createSink(actual, backpressure);

    actual.onSubscribe(sink);
    try {
        source.accept(
                createMode == CreateMode.PUSH_PULL ? new SerializedSink<>(sink) :
                        sink);
    }
    catch (Throwable ex) {
        Exceptions.throwIfFatal(ex);
        sink.error(Operators.onOperatorError(ex, actual.currentContext()));
    }
}

static <T> BaseSink<T> createSink(CoreSubscriber<? super T> t,
        OverflowStrategy backpressure) {
    switch (backpressure) {
        case IGNORE: {
            return new IgnoreSink<>(t);
        }
        case ERROR: {
            return new ErrorAsyncSink<>(t);
        }
        case DROP: {
            return new DropAsyncSink<>(t);
        }
        case LATEST: {
            return new LatestAsyncSink<>(t);
        }
        default: {
            return new BufferAsyncSink<>(t, Queues.SMALL_BUFFER_SIZE);
        }
    }
}
The sink is created first — here a BufferAsyncSink — and then LambdaSubscriber.onSubscribe is invoked.
After that, source.accept is called, i.e. the lambda passed to Flux.create starts producing data, switching the stream on.
reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/LambdaSubscriber.java
public final void onSubscribe(Subscription s) {
    if (Operators.validate(subscription, s)) {
        this.subscription = s;
        if (subscriptionConsumer != null) {
            try {
                subscriptionConsumer.accept(s);
            }
            catch (Throwable t) {
                Exceptions.throwIfFatal(t);
                s.cancel();
                onError(t);
            }
        }
        else {
            s.request(Long.MAX_VALUE);
        }
    }
}
This in turn calls request(Long.MAX_VALUE) on the BufferAsyncSink, which actually runs BaseSink.request:
public final void request(long n) {
    if (Operators.validate(n)) {
        Operators.addCap(REQUESTED, this, n);

        LongConsumer consumer = requestConsumer;
        if (n > 0 && consumer != null && !isCancelled()) {
            consumer.accept(n);
        }
        onRequestedFromDownstream();
    }
}
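Operators.addCap performs an overflow-safe addition that saturates at Long.MAX_VALUE (the Reactive Streams convention for unbounded demand). A standalone sketch of that semantics (the real implementation goes through an AtomicLongFieldUpdater):

```java
public class AddCapDemo {

    // Cap-aware addition: once demand would overflow long, treat it as unbounded.
    static long addCap(long current, long toAdd) {
        long sum = current + toAdd;
        if (sum < 0L) {             // overflow past Long.MAX_VALUE wraps negative
            return Long.MAX_VALUE;  // interpreted as "unbounded demand"
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(addCap(Long.MAX_VALUE, 1)); // stays at Long.MAX_VALUE
        System.out.println(addCap(10, 5));             // plain addition: 15
    }
}
```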
Here onRequestedFromDownstream dispatches to BufferAsyncSink's override:
@Override
void onRequestedFromDownstream() {
    drain();
}
This calls BufferAsyncSink.drain:
void drain() {
    if (WIP.getAndIncrement(this) != 0) {
        return;
    }

    int missed = 1;
    final Subscriber<? super T> a = actual;
    final Queue<T> q = queue;
    for (;;) {
        long r = requested;
        long e = 0L;

        while (e != r) {
            if (isCancelled()) {
                q.clear();
                return;
            }

            boolean d = done;
            T o = q.poll();
            boolean empty = o == null;

            if (d && empty) {
                Throwable ex = error;
                if (ex != null) {
                    super.error(ex);
                }
                else {
                    super.complete();
                }
                return;
            }

            if (empty) {
                break;
            }

            a.onNext(o);
            e++;
        }

        if (e == r) {
            if (isCancelled()) {
                q.clear();
                return;
            }

            boolean d = done;
            boolean empty = q.isEmpty();

            if (d && empty) {
                Throwable ex = error;
                if (ex != null) {
                    super.error(ex);
                }
                else {
                    super.complete();
                }
                return;
            }
        }

        if (e != 0) {
            Operators.produced(REQUESTED, this, e);
        }

        missed = WIP.addAndGet(this, -missed);
        if (missed == 0) {
            break;
        }
    }
}
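Stripped of the cancellation, termination and WIP machinery, the essence of drain is: emit at most `requested` items from the queue, then subtract what was emitted. A simplified single-threaded sketch of that accounting (names are illustrative, not Reactor's):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DrainDemo {
    final Queue<Integer> queue = new ArrayDeque<>();
    final List<Integer> delivered = new ArrayList<>();
    long requested;

    void request(long n) {
        requested += n;   // addCap omitted for brevity
        drain();
    }

    void next(int value) {
        queue.offer(value);
        drain();
    }

    // Emit no more than 'requested' items, then decrement by the amount emitted.
    void drain() {
        long emitted = 0;
        while (emitted != requested && !queue.isEmpty()) {
            delivered.add(queue.poll()); // a.onNext(o)
            emitted++;
        }
        requested -= emitted;            // Operators.produced(REQUESTED, this, e)
    }
}
```

With zero outstanding demand, next() only enqueues; a later request() releases the buffered items, which is exactly the push/pull hybrid described above.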
The queue here is the one specified when the BufferAsyncSink was created; its default capacity is Queues.SMALL_BUFFER_SIZE, i.e. Math.max(16, Integer.parseInt(System.getProperty("reactor.bufferSize.small", "256"))).
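That default is easy to reproduce directly; with the system property unset it evaluates to 256 (the demo class name is ours, only the expression mirrors Queues.SMALL_BUFFER_SIZE):

```java
public class SmallBufferSizeDemo {

    // Mirrors the Queues.SMALL_BUFFER_SIZE expression: at least 16, default 256,
    // tunable via -Dreactor.bufferSize.small=...
    static int smallBufferSize() {
        return Math.max(16,
                Integer.parseInt(System.getProperty("reactor.bufferSize.small", "256")));
    }

    public static void main(String[] args) {
        System.out.println(smallBufferSize());
    }
}
```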
The onNext here synchronously invokes the LambdaSubscriber's consumer.
source.accept( createMode == CreateMode.PUSH_PULL ? new SerializedSink<>(sink) : sink);
Under CreateMode.PUSH_PULL the sink is wrapped in a SerializedSink, and then the lambda consumer supplied to Flux.create is invoked:
fluxSink -> {
    //NOTE sink:class reactor.core.publisher.FluxCreate$SerializedSink
    LOGGER.info("sink:{}", fluxSink.getClass());
    while (true) {
        LOGGER.info("sink next");
        fluxSink.next(ThreadLocalRandom.current().nextInt());
    }
}
From this point on, data is pushed.
reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/FluxCreate.java#SerializedSink.next
public FluxSink<T> next(T t) {
    Objects.requireNonNull(t, "t is null in sink.next(t)");
    if (sink.isCancelled() || done) {
        Operators.onNextDropped(t, sink.currentContext());
        return this;
    }
    if (WIP.get(this) == 0 && WIP.compareAndSet(this, 0, 1)) {
        try {
            sink.next(t);
        }
        catch (Throwable ex) {
            Operators.onOperatorError(sink, ex, t, sink.currentContext());
        }
        if (WIP.decrementAndGet(this) == 0) {
            return this;
        }
    }
    else {
        Queue<T> q = queue;
        synchronized (this) {
            q.offer(t);
        }
        if (WIP.getAndIncrement(this) != 0) {
            return this;
        }
    }
    drainLoop();
    return this;
}
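The WIP counter implements a classic "serialized drain" protocol: whichever caller moves WIP from 0 to 1 emits directly (the fast path), while concurrent callers merely enqueue and leave, trusting the active drainer to pick their items up. A minimal sketch of the idea (a simplification of Reactor's pattern, not its actual code):

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class SerializedEmitDemo {
    final AtomicInteger wip = new AtomicInteger();
    final Queue<Integer> queue = new ConcurrentLinkedQueue<>();
    final List<Integer> downstream = new CopyOnWriteArrayList<>();

    void next(int value) {
        queue.offer(value);
        // Only the caller that moves WIP from 0 to 1 drains; others piggyback.
        if (wip.getAndIncrement() != 0) {
            return;
        }
        int missed = 1;
        do {
            for (Integer v; (v = queue.poll()) != null; ) {
                downstream.add(v); // guaranteed single-threaded here
            }
            missed = wip.addAndGet(-missed);
        } while (missed != 0);
    }
}
```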
This calls BufferAsyncSink.next and only returns after the drain has run.
public FluxSink<T> next(T t) {
    queue.offer(t);
    drain();
    return this;
}
This puts the value into the queue, then calls drain to take it back out, synchronously invoking LambdaSubscriber.onNext.
reactor-core-3.1.3.RELEASE-sources.jar!/reactor/core/publisher/LambdaSubscriber.java
@Override
public final void onNext(T x) {
    try {
        if (consumer != null) {
            consumer.accept(x);
        }
    }
    catch (Throwable t) {
        Exceptions.throwIfFatal(t);
        this.subscription.cancel();
        onError(t);
    }
}
That is, the user-supplied subscribe callback is invoked synchronously; in this example it not only logs but also sleeps, so the call blocks. Only once it returns does fluxSink.next return, and the producer loop continues:
fluxSink -> {
    //NOTE sink:class reactor.core.publisher.FluxCreate$SerializedSink
    LOGGER.info("sink:{}", fluxSink.getClass());
    while (true) {
        LOGGER.info("sink next");
        fluxSink.next(ThreadLocalRandom.current().nextInt());
    }
}
Although fluxSink appears to emit data in an infinite loop of next calls, there is no cause for concern: as long as subscribe and the fluxSink lambda run on the same thread (both on the main thread in this example), the calls are synchronous and blocking.
To summarize: on subscribe, LambdaSubscriber.onSubscribe is invoked and request(N) asks for data; then source.accept runs the lambda passed to Flux.create, which starts producing the stream. Each fluxSink.next blocks on the subscriber's consumer and only loops again after it returns.
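A Reactor-free sketch of this synchronous interleaving: when producer and consumer share a thread, each "next" returns only after the consumer callback completes, so the events strictly alternate (hypothetical names, not the Reactor API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

public class SyncPushDemo {

    // Records the order of producer and consumer events on a single thread.
    static List<String> run(int count) {
        List<String> events = new ArrayList<>();
        IntConsumer subscriber = i -> events.add("consume-" + i); // the subscribe callback
        for (int i = 0; i < count; i++) {    // the while(true) producer loop, bounded here
            events.add("next-" + i);
            subscriber.accept(i);            // like sink.next(): blocks until consumer returns
        }
        return events;
    }

    public static void main(String[] args) {
        System.out.println(run(2)); // [next-0, consume-0, next-1, consume-1]
    }
}
```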
As for how an OOM can arise under the BUFFER strategy, it is worth thinking through how that would happen.