A look at the execute method of Flink's LocalEnvironment

This article takes a close look at the execute method of Flink's LocalEnvironment.

Example

final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<RecordDto> csvInput = env.readCsvFile(csvFilePath)
        .pojoType(RecordDto.class, "playerName", "country", "year", "game", "gold", "silver", "bronze", "total");

DataSet<Tuple2<String, Integer>> groupedByCountry = csvInput
        .flatMap(new FlatMapFunction<RecordDto, Tuple2<String, Integer>>() {

            private static final long serialVersionUID = 1L;

            @Override
            public void flatMap(RecordDto record, Collector<Tuple2<String, Integer>> out) throws Exception {
                out.collect(new Tuple2<String, Integer>(record.getCountry(), 1));
            }
        }).groupBy(0).sum(1);

System.out.println("===groupedByCountry===");
groupedByCountry.print();
  • Here a DataSet is read from a CSV file, then flatMap, groupBy, and sum are applied, and finally print is called to output the result (a sketch of the RecordDto POJO the example relies on follows below)
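
The example references a RecordDto class that is not shown. Below is a hypothetical sketch of what it presumably looks like; only the field names are known from the pojoType call, everything else is an assumption.

// A hypothetical sketch of the RecordDto POJO the example relies on.
// Flink's pojoType() requires a public class with a public no-arg
// constructor and public fields (or getters/setters) for the named columns.
public class RecordDto {
    public String playerName;
    public String country;
    public int year;
    public String game;
    public int gold;
    public int silver;
    public int bronze;
    public int total;

    public RecordDto() {
    }

    // used by the flatMap in the example above
    public String getCountry() {
        return country;
    }
}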

DataSet.print

flink-java-1.6.2-sources.jar!/org/apache/flink/api/java/DataSet.java

/**
	 * Prints the elements in a DataSet to the standard output stream {@link System#out} of the JVM that calls
	 * the print() method. For programs that are executed in a cluster, this method needs
	 * to gather the contents of the DataSet back to the client, to print it there.
	 *
	 * <p>The string written for each element is defined by the {@link Object#toString()} method.
	 *
	 * <p>This method immediately triggers the program execution, similar to the
	 * {@link #collect()} and {@link #count()} methods.
	 *
	 * @see #printToErr()
	 * @see #printOnTaskManager(String)
	 */
	public void print() throws Exception {
		List<T> elements = collect();
		for (T e: elements) {
			System.out.println(e);
		}
	}
  • print simply calls the collect method to fetch the results, then prints them one by one
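
Because print() is built on collect(), every call to print() (or collect(), or count()) triggers a full program execution. A minimal sketch, reusing groupedByCountry from the example above, that collects once and reuses the list instead of printing repeatedly:

// collect() triggers exactly one execution; reuse the returned list
// instead of calling print()/collect() a second time.
List<Tuple2<String, Integer>> results = groupedByCountry.collect();
for (Tuple2<String, Integer> tuple : results) {
    System.out.println(tuple);
}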

DataSet.collect

flink-java-1.6.2-sources.jar!/org/apache/flink/api/java/DataSet.java

/**
	 * Convenience method to get the elements of a DataSet as a List.
	 * As DataSet can contain a lot of data, this method should be used with caution.
	 *
	 * @return A List containing the elements of the DataSet
	 */
	public List<T> collect() throws Exception {
		final String id = new AbstractID().toString();
		final TypeSerializer<T> serializer = getType().createSerializer(getExecutionEnvironment().getConfig());

		this.output(new Utils.CollectHelper<>(id, serializer)).name("collect()");
		JobExecutionResult res = getExecutionEnvironment().execute();

		ArrayList<byte[]> accResult = res.getAccumulatorResult(id);
		if (accResult != null) {
			try {
				return SerializedListAccumulator.deserializeList(accResult, serializer);
			} catch (ClassNotFoundException e) {
				throw new RuntimeException("Cannot find type class of collected data type.", e);
			} catch (IOException e) {
				throw new RuntimeException("Serialization error while deserializing collected data", e);
			}
		} else {
			throw new RuntimeException("The call to collect() could not retrieve the DataSet.");
		}
	}
  • collect() attaches a CollectHelper sink and then calls getExecutionEnvironment().execute() to obtain the JobExecutionResult; the executionEnvironment here is a LocalEnvironment, and the collected elements travel back to the client through an accumulator keyed by a fresh AbstractID
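
The same accumulator channel that collect() uses internally is available to user code. A sketch of the mechanism, reusing env and csvInput from the example above (the job name and accumulator name are made up), with Flink's built-in IntCounter:

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.io.DiscardingOutputFormat;
import org.apache.flink.configuration.Configuration;

DataSet<RecordDto> counted = csvInput.map(new RichMapFunction<RecordDto, RecordDto>() {

    private final IntCounter numRecords = new IntCounter();

    @Override
    public void open(Configuration parameters) {
        // register the accumulator under a client-visible name
        getRuntimeContext().addAccumulator("num-records", numRecords);
    }

    @Override
    public RecordDto map(RecordDto value) {
        numRecords.add(1);
        return value;
    }
});
counted.output(new DiscardingOutputFormat<RecordDto>()); // a sink is required before execute()

// execute() returns the JobExecutionResult that carries the accumulators
JobExecutionResult res = env.execute("accumulator-demo");
Integer total = res.getAccumulatorResult("num-records");
System.out.println("records processed: " + total);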

ExecutionEnvironment.execute

flink-java-1.6.2-sources.jar!/org/apache/flink/api/java/ExecutionEnvironment.java

/**
	 * Triggers the program execution. The environment will execute all parts of the program that have
	 * resulted in a "sink" operation. Sink operations are for example printing results ({@link DataSet#print()},
	 * writing results (e.g. {@link DataSet#writeAsText(String)},
	 * {@link DataSet#write(org.apache.flink.api.common.io.FileOutputFormat, String)}, or other generic
	 * data sinks created with {@link DataSet#output(org.apache.flink.api.common.io.OutputFormat)}.
	 *
	 * <p>The program execution will be logged and displayed with a generated default name.
	 *
	 * @return The result of the job execution, containing elapsed time and accumulators.
	 * @throws Exception Thrown, if the program executions fails.
	 */
	public JobExecutionResult execute() throws Exception {
		return execute(getDefaultName());
	}

	/**
	 * Gets a default job name, based on the timestamp when this method is invoked.
	 *
	 * @return A default job name.
	 */
	private static String getDefaultName() {
		return "Flink Java Job at " + Calendar.getInstance().getTime();
	}

	/**
	 * Triggers the program execution. The environment will execute all parts of the program that have
	 * resulted in a "sink" operation. Sink operations are for example printing results ({@link DataSet#print()},
	 * writing results (e.g. {@link DataSet#writeAsText(String)},
	 * {@link DataSet#write(org.apache.flink.api.common.io.FileOutputFormat, String)}, or other generic
	 * data sinks created with {@link DataSet#output(org.apache.flink.api.common.io.OutputFormat)}.
	 *
	 * <p>The program execution will be logged and displayed with the given job name.
	 *
	 * @return The result of the job execution, containing elapsed time and accumulators.
	 * @throws Exception Thrown, if the program executions fails.
	 */
	public abstract JobExecutionResult execute(String jobName) throws Exception;
  • The concrete execute(String jobName) is an abstract method implemented by subclasses; here we focus on LocalEnvironment's execute method
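
Note that print(), collect(), and count() call execute() implicitly, while lazy sinks such as writeAsText only register an output and need an explicit execute(jobName). A small sketch (the output path is made up):

// writeAsText() only declares a sink; nothing runs until execute() is called
groupedByCountry.writeAsText("/tmp/medals-by-country"); // hypothetical path
JobExecutionResult result = env.execute("medal-count-job");
System.out.println("net runtime: " + result.getNetRuntime() + " ms");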

LocalEnvironment.execute

flink-java-1.6.2-sources.jar!/org/apache/flink/api/java/LocalEnvironment.java

@Override
	public JobExecutionResult execute(String jobName) throws Exception {
		if (executor == null) {
			startNewSession();
		}

		Plan p = createProgramPlan(jobName);

		// Session management is disabled, revert this commit to enable
		//p.setJobId(jobID);
		//p.setSessionTimeout(sessionTimeout);

		JobExecutionResult result = executor.executePlan(p);

		this.lastJobExecutionResult = result;
		return result;
	}

	@Override
	@PublicEvolving
	public void startNewSession() throws Exception {
		if (executor != null) {
			// we need to end the previous session
			executor.stop();
			// create also a new JobID
			jobID = JobID.generate();
		}

		// create a new local executor
		executor = PlanExecutor.createLocalExecutor(configuration);
		executor.setPrintStatusDuringExecution(getConfig().isSysoutLoggingEnabled());

		// if we have a session, start the mini cluster eagerly to have it available across sessions
		if (getSessionTimeout() > 0) {
			executor.start();

			// also install the reaper that will shut it down eventually
			executorReaper = new ExecutorReaper(executor);
		}
	}
  • If executor is null, startNewSession is called; it creates the executor via PlanExecutor.createLocalExecutor(configuration). If sessionTimeout is greater than 0, the executor is started right away via executor.start(), but the default value is 0 (see the sketch after this list for creating a LocalEnvironment explicitly)
  • A Plan is then created via the createProgramPlan method
  • Finally, executor.executePlan(p) is called to obtain the JobExecutionResult
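
When the program runs inside an IDE, getExecutionEnvironment() returns exactly this LocalEnvironment. It can also be created explicitly, for example to pin the default parallelism; a minimal sketch:

// createLocalEnvironment() returns a LocalEnvironment backed by an
// embedded cluster; the int overload pins the default parallelism.
ExecutionEnvironment localEnv = ExecutionEnvironment.createLocalEnvironment(4);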

PlanExecutor.createLocalExecutor

flink-core-1.6.2-sources.jar!/org/apache/flink/api/common/PlanExecutor.java

private static final String LOCAL_EXECUTOR_CLASS = "org.apache.flink.client.LocalExecutor";

	/**
	 * Creates an executor that runs the plan locally in a multi-threaded environment.
	 * 
	 * @return A local executor.
	 */
	public static PlanExecutor createLocalExecutor(Configuration configuration) {
		Class<? extends PlanExecutor> leClass = loadExecutorClass(LOCAL_EXECUTOR_CLASS);
		
		try {
			return leClass.getConstructor(Configuration.class).newInstance(configuration);
		}
		catch (Throwable t) {
			throw new RuntimeException("An error occurred while loading the local executor ("
					+ LOCAL_EXECUTOR_CLASS + ").", t);
		}
	}

	private static Class<? extends PlanExecutor> loadExecutorClass(String className) {
		try {
			Class<?> leClass = Class.forName(className);
			return leClass.asSubclass(PlanExecutor.class);
		}
		catch (ClassNotFoundException cnfe) {
			throw new RuntimeException("Could not load the executor class (" + className
					+ "). Do you have the 'flink-clients' project in your dependencies?");
		}
		catch (Throwable t) {
			throw new RuntimeException("An error occurred while loading the executor (" + className + ").", t);
		}
	}
  • PlanExecutor.createLocalExecutor instantiates org.apache.flink.client.LocalExecutor via reflection (a generic sketch of the pattern follows below)
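
The reflective indirection exists because flink-core cannot depend on flink-clients, so the executor class is resolved by name at runtime. A generic, self-contained sketch of the same pattern (method name is hypothetical):

// Generic sketch of the reflective factory pattern used above: resolve an
// implementation by class name so the defining module remains optional.
static <T> T instantiateByName(String className, Class<T> superType, Configuration config) {
    try {
        Class<? extends T> clazz = Class.forName(className).asSubclass(superType);
        return clazz.getConstructor(Configuration.class).newInstance(config);
    } catch (ClassNotFoundException e) {
        throw new RuntimeException("Class " + className + " is not on the classpath.", e);
    } catch (ReflectiveOperationException e) {
        throw new RuntimeException("Could not instantiate " + className + ".", e);
    }
}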

LocalExecutor.executePlan

flink-clients_2.11-1.6.2-sources.jar!/org/apache/flink/client/LocalExecutor.java

/**
	 * Executes the given program on a local runtime and waits for the job to finish.
	 *
	 * <p>If the executor has not been started before, this starts the executor and shuts it down
	 * after the job finished. If the job runs in session mode, the executor is kept alive until
	 * no more references to the executor exist.</p>
	 *
	 * @param plan The plan of the program to execute.
	 * @return The net runtime of the program, in milliseconds.
	 *
	 * @throws Exception Thrown, if either the startup of the local execution context, or the execution
	 *                   caused an exception.
	 */
	@Override
	public JobExecutionResult executePlan(Plan plan) throws Exception {
		if (plan == null) {
			throw new IllegalArgumentException("The plan may not be null.");
		}

		synchronized (this.lock) {

			// check if we start a session dedicated for this execution
			final boolean shutDownAtEnd;

			if (jobExecutorService == null) {
				shutDownAtEnd = true;

				// configure the number of local slots equal to the parallelism of the local plan
				if (this.taskManagerNumSlots == DEFAULT_TASK_MANAGER_NUM_SLOTS) {
					int maxParallelism = plan.getMaximumParallelism();
					if (maxParallelism > 0) {
						this.taskManagerNumSlots = maxParallelism;
					}
				}

				// start the cluster for us
				start();
			}
			else {
				// we use the existing session
				shutDownAtEnd = false;
			}

			try {
				// TODO: Set job's default parallelism to max number of slots
				final int slotsPerTaskManager = jobExecutorServiceConfiguration.getInteger(TaskManagerOptions.NUM_TASK_SLOTS, taskManagerNumSlots);
				final int numTaskManagers = jobExecutorServiceConfiguration.getInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, 1);
				plan.setDefaultParallelism(slotsPerTaskManager * numTaskManagers);

				Optimizer pc = new Optimizer(new DataStatistics(), jobExecutorServiceConfiguration);
				OptimizedPlan op = pc.compile(plan);

				JobGraphGenerator jgg = new JobGraphGenerator(jobExecutorServiceConfiguration);
				JobGraph jobGraph = jgg.compileJobGraph(op, plan.getJobId());

				return jobExecutorService.executeJobBlocking(jobGraph);
			}
			finally {
				if (shutDownAtEnd) {
					stop();
				}
			}
		}
	}
  • When jobExecutorService is null, the start method is called to boot the embedded cluster and create the jobExecutorService (a distilled sketch of this start-if-needed pattern follows the list)
  • A JobGraphGenerator is then created, and its compileJobGraph method turns the optimized plan into a JobGraph
  • Finally, jobExecutorService.executeJobBlocking(jobGraph) runs the JobGraph and returns the JobExecutionResult
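
The shutDownAtEnd flag implements a one-shot session: the cluster is started only when none exists and torn down only by the caller that started it. A distilled, hypothetical sketch of that pattern (all names are made up):

import java.util.function.Supplier;

// Hypothetical distillation of executePlan()'s session handling: start a
// service only when none is running, and stop it in finally only if this
// very call started it.
public final class OneShotRunner {

    interface JobService {
        String execute(String job) throws Exception;
        void stop();
    }

    private final Supplier<JobService> starter;
    private JobService service; // null means: no session is running

    OneShotRunner(Supplier<JobService> starter) {
        this.starter = starter;
    }

    public synchronized String runOnce(String job) throws Exception {
        final boolean shutDownAtEnd = (service == null);
        if (shutDownAtEnd) {
            service = starter.get(); // dedicated session just for this run
        }
        try {
            return service.execute(job);
        } finally {
            if (shutDownAtEnd) {
                service.stop(); // a long-lived session would stay up
                service = null;
            }
        }
    }
}

Tracking ownership through the flag rather than through the service reference is what allows session mode (a pre-started jobExecutorService) to survive individual plan executions.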

LocalExecutor.start

flink-clients_2.11-1.6.2-sources.jar!/org/apache/flink/client/LocalExecutor.java

@Override
	public void start() throws Exception {
		synchronized (lock) {
			if (jobExecutorService == null) {
				// create the embedded runtime
				jobExecutorServiceConfiguration = createConfiguration();

				// start it up
				jobExecutorService = createJobExecutorService(jobExecutorServiceConfiguration);
			} else {
				throw new IllegalStateException("The local executor was already started.");
			}
		}
	}

	private Configuration createConfiguration() {
		Configuration newConfiguration = new Configuration();
		newConfiguration.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, getTaskManagerNumSlots());
		newConfiguration.setBoolean(CoreOptions.FILESYTEM_DEFAULT_OVERRIDE, isDefaultOverwriteFiles());

		newConfiguration.addAll(baseConfiguration);

		return newConfiguration;
	}

	private JobExecutorService createJobExecutorService(Configuration configuration) throws Exception {
		final JobExecutorService newJobExecutorService;
		if (CoreOptions.NEW_MODE.equals(configuration.getString(CoreOptions.MODE))) {

			if (!configuration.contains(RestOptions.PORT)) {
				configuration.setInteger(RestOptions.PORT, 0);
			}

			final MiniClusterConfiguration miniClusterConfiguration = new MiniClusterConfiguration.Builder()
				.setConfiguration(configuration)
				.setNumTaskManagers(
					configuration.getInteger(
						ConfigConstants.LOCAL_NUMBER_TASK_MANAGER,
						ConfigConstants.DEFAULT_LOCAL_NUMBER_TASK_MANAGER))
				.setRpcServiceSharing(RpcServiceSharing.SHARED)
				.setNumSlotsPerTaskManager(
					configuration.getInteger(
						TaskManagerOptions.NUM_TASK_SLOTS, 1))
				.build();

			final MiniCluster miniCluster = new MiniCluster(miniClusterConfiguration);
			miniCluster.start();

			configuration.setInteger(RestOptions.PORT, miniCluster.getRestAddress().getPort());

			newJobExecutorService = miniCluster;
		} else {
			final LocalFlinkMiniCluster localFlinkMiniCluster = new LocalFlinkMiniCluster(configuration, true);
			localFlinkMiniCluster.start();

			newJobExecutorService = localFlinkMiniCluster;
		}

		return newJobExecutorService;
	}
  • start first builds a Configuration via createConfiguration, then creates the JobExecutorService via createJobExecutorService
  • createConfiguration mainly sets TaskManagerOptions.NUM_TASK_SLOTS and CoreOptions.FILESYTEM_DEFAULT_OVERRIDE
  • createJobExecutorService creates a different JobExecutorService depending on the configuration.getString(CoreOptions.MODE) setting (these options can also be pre-set from user code, as sketched below)
  • The default, CoreOptions.NEW_MODE, first builds a MiniClusterConfiguration, then creates a MiniCluster (the JobExecutorService), calls MiniCluster.start(), and returns it
  • For any other mode, a LocalFlinkMiniCluster (the JobExecutorService) is created, started via LocalFlinkMiniCluster.start(), and returned
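
Both the slot count and the number of local TaskManagers can be pre-set through the same options that createJobExecutorService reads. A sketch of sizing the embedded cluster from user code:

import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.TaskManagerOptions;

// size the embedded cluster: 2 local TaskManagers with 4 slots each
Configuration conf = new Configuration();
conf.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 4);
conf.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, 2);

ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(conf);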

MiniCluster.executeJobBlocking

flink-runtime_2.11-1.6.2-sources.jar!/org/apache/flink/runtime/minicluster/MiniCluster.java

/**
	 * This method runs a job in blocking mode. The method returns only after the job
	 * completed successfully, or after it failed terminally.
	 *
	 * @param job  The Flink job to execute
	 * @return The result of the job execution
	 *
	 * @throws JobExecutionException Thrown if anything went amiss during initial job launch,
	 *         or if the job terminally failed.
	 */
	@Override
	public JobExecutionResult executeJobBlocking(JobGraph job) throws JobExecutionException, InterruptedException {
		checkNotNull(job, "job is null");

		final CompletableFuture<JobSubmissionResult> submissionFuture = submitJob(job);

		final CompletableFuture<JobResult> jobResultFuture = submissionFuture.thenCompose(
			(JobSubmissionResult ignored) -> requestJobResult(job.getJobID()));

		final JobResult jobResult;

		try {
			jobResult = jobResultFuture.get();
		} catch (ExecutionException e) {
			throw new JobExecutionException(job.getJobID(), "Could not retrieve JobResult.", ExceptionUtils.stripExecutionException(e));
		}

		try {
			return jobResult.toJobExecutionResult(Thread.currentThread().getContextClassLoader());
		} catch (IOException | ClassNotFoundException e) {
			throw new JobExecutionException(job.getJobID(), e);
		}
	}
  • MiniCluster.executeJobBlocking first calls submitJob(job) to submit the JobGraph, which returns a CompletableFuture (submissionFuture)
  • submissionFuture is composed via thenCompose with requestJobResult, which requests the JobResult for the jobId (jobResultFuture)
  • Finally, jobResultFuture.get() blocks until the JobExecutionResult is available (a generic sketch of this composition follows the list)
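
The submit-then-wait flow is ordinary CompletableFuture composition. A generic, self-contained sketch of the same shape (the job strings are stand-ins, not Flink APIs):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// same shape as executeJobBlocking(): submit asynchronously, compose the
// result request onto the submission, then block on get()
CompletableFuture<String> submission =
        CompletableFuture.supplyAsync(() -> "jobId-42"); // stand-in for submitJob()
CompletableFuture<String> resultFuture = submission.thenCompose(
        jobId -> CompletableFuture.supplyAsync(() -> "result of " + jobId)); // stand-in for requestJobResult()

try {
    System.out.println(resultFuture.get()); // blocks until the job finishes
} catch (InterruptedException | ExecutionException e) {
    throw new RuntimeException("Could not retrieve the job result.", e);
}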

Summary

  • DataSet's print method calls collect, and collect calls getExecutionEnvironment().execute() to obtain the JobExecutionResult; the execution environment here is a LocalEnvironment
  • ExecutionEnvironment.execute internally calls the abstract method execute(String jobName), which is implemented by subclasses, here LocalEnvironment.execute; that method first calls startNewSession, which creates a LocalExecutor via PlanExecutor.createLocalExecutor, then builds the Plan via createProgramPlan, and finally calls LocalExecutor.executePlan to obtain the JobExecutionResult
  • LocalExecutor.executePlan first checks jobExecutorService; if it is null, the start method creates it (depending on the CoreOptions.MODE setting: CoreOptions.NEW_MODE yields a MiniCluster, anything else yields a LocalFlinkMiniCluster; here it is a MiniCluster); a JobGraphGenerator then turns the plan into a JobGraph; finally, jobExecutorService.executeJobBlocking(jobGraph) runs the JobGraph and returns the JobExecutionResult
