# To debug the Master, add the SPARK_MASTER_OPTS variable to spark-env.sh on the master node
export SPARK_MASTER_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=10000"

# To debug a Worker, add the SPARK_WORKER_OPTS variable to spark-env.sh on the worker node
export SPARK_WORKER_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=10001"
# To debug spark-submit + the application (driver)
bin/spark-submit --class cn.daxin.spark.WordCount \
  --master spark://node-1.daxin.cn:7077 \
  --driver-java-options "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=10002" \
  /root/wc.jar hdfs://node-1.daxin.cn:9000/words.txt hdfs://node-1.daxin.cn:9000/out2

# To debug spark-submit + the application + the executors
bin/spark-submit --class cn.daxin.spark.WordCount \
  --master spark://node-1.daxin.cn:7077 \
  --conf "spark.executor.extraJavaOptions=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=10003" \
  --driver-java-options "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=10002" \
  /root/wc.jar hdfs://node-1.daxin.cn:9000/words.txt hdfs://node-1.daxin.cn:9000/out2
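Because suspend=y is set, each JVM blocks at startup until a debugger attaches on the given port (10002 for the driver, 10003 for each executor). For reference, the submitted jar is assumed to be an ordinary word-count job along the following lines; the actual source of cn.daxin.spark.WordCount is not shown here, so this is only an illustrative sketch that reads the input and output paths from the two command-line arguments above.

// Illustrative sketch only; the real cn.daxin.spark.WordCount may differ.
package cn.daxin.spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // args(0) = input path, args(1) = output path (the HDFS URIs passed to spark-submit)
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    sc.textFile(args(0))            // read the input file
      .flatMap(_.split(" "))        // split lines into words
      .map((_, 1))                  // pair each word with a count of 1
      .reduceByKey(_ + _)           // sum the counts per word
      .saveAsTextFile(args(1))      // write results to the output path

    sc.stop()
  }
}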
In IDEA, add two Remote run/debug configurations, one for each debug port.
Now comes the important part: start debugging the Master first, and set breakpoints in the Master code:
You can see that IDEA has connected to port 10000 on the Master machine in our cluster, which is exactly the port we configured. Start the Slave1 (Worker) configuration in the same way.