Advanced Python Testing (6): Launching a Python Monitor from a Bash Script and Passing the PID

Run the Hadoop Sort test case with HiBench: change into the /HiBench-master/bin/workloads/micro/sort/hadoop directory and execute:

[root@node1 hadoop]# ./run.sh

The run produces the following output:

[root@node1 hadoop]# ./run.sh
patching args=                                      #enter_bench()
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/micro/sort.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start HadoopSort bench
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Sort/Output
Deleted hdfs://node1:8020/HiBench/Sort/Output       #rmr_hdfs()
hdfs du -s: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -du -s hdfs://node1:8020/HiBench/Sort/Input
Submit MapReduce Job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn jar /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar sort -outKey org.apache.hadoop.io.Text -outValue org.apache.hadoop.io.Text -r 8 hdfs://node1:8020/HiBench/Sort/Input hdfs://node1:8020/HiBench/Sort/Output       #run_hadoop_job()
The job took 14 seconds.
finish HadoopSort bench

Checking the process list while the benchmark runs shows the command HiBench uses to launch the Python monitor script:

UID        PID  PPID  C STIME TTY          TIME CMD
root     32614     1  0 16:02 pts/0    00:00:00 python2 /home/cf/app/HiBench-master/bin/functions/monitor.py HadoopSort 32331 /home/cf/app/HiBench-master/report/sort/hadoop/conf/../monitor.log /home/cf/app/H
root     32621 32331  0 16:02 pts/0    00:00:00 python2 /home/cf/app/HiBench-master/bin/functions/execute_with_log.py /home/cf/app/HiBench-master/report/sort/hadoop/conf/../bench.log /opt/cloudera/parcels/CD

Now look at the contents of run.sh (the numbers are the line numbers in the file):

17 current_dir=`dirname "$0"`
18 current_dir=`cd "$current_dir"; pwd`
19 root_dir=${current_dir}/../../../../../
20 workload_config=${root_dir}/conf/workloads/micro/sort.conf
21 . "${root_dir}/bin/functions/load_bench_config.sh"
22 
23 enter_bench HadoopSort ${workload_config} ${current_dir}
24 show_bannar start
25 
26 rmr_hdfs $OUTPUT_HDFS || true
27 
28 SIZE=`dir_size $INPUT_HDFS`
29 START_TIME=`timestamp`
30 run_hadoop_job ${HADOOP_EXAMPLES_JAR} sort -outKey org.apache.hadoop.io.Text -outValue org.apache.hadoop.io.Text -r ${NUM_REDS} ${INPUT_HDFS} ${OUTPUT_HDFS}
31 
32 END_TIME=`timestamp`
33 gen_report ${START_TIME} ${END_TIME} ${SIZE}
34 show_bannar finish
35 leave_bench

run.sh calls run_hadoop_job() (line 30), and its definition shows that it invokes start_monitor:

function run_hadoop_job(){
    ENABLE_MONITOR=1
    if [ "$1" = "--without-monitor" ]; then
        ENABLE_MONITOR=0
        shift 1
    fi
    local job_jar=$1
    shift
    local job_name=$1
    shift
    local tail_arguments=$@
    local CMD="${HADOOP_EXECUTABLE} --config ${HADOOP_CONF_DIR} jar $job_jar $job_name $tail_arguments"
    echo -e "${BGreen}Submit MapReduce Job: ${Green}$CMD${Color_Off}"
    if [ ${ENABLE_MONITOR} = 1 ]; then MONITOR_PID=`start_monitor`; fi
    execute_withlog ${CMD}
    result=$?
    if [ ${ENABLE_MONITOR} = 1 ]; then stop_monitor ${MONITOR_PID}; fi
    if [ $result -ne 0 ]; then
        echo -e "${BRed}ERROR${Color_Off}: Hadoop job ${BYellow}${job_jar} ${job_name}${Color_Off} failed to run successfully."
        echo -e "${BBlue}Hint${Color_Off}: You can goto ${BYellow}${WORKLOAD_RESULT_FOLDER}/bench.log${Color_Off} to check for detailed log.\nOpening log tail for you:\n"
        tail ${WORKLOAD_RESULT_FOLDER}/bench.log
        exit $result
    fi
}
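
Notice the shape of run_hadoop_job(): start the monitor, run the workload while logging, stop the monitor, then check the exit code. The same bracketing pattern can be sketched in a few lines of Python (my own illustration with placeholder commands, not HiBench code):

import subprocess

def run_with_monitor(job_cmd, monitor_cmd):
    """Run job_cmd while monitor_cmd records metrics, mirroring run_hadoop_job()."""
    monitor = subprocess.Popen(monitor_cmd)        # start_monitor
    try:
        result = subprocess.run(job_cmd)           # execute_withlog
    finally:
        monitor.terminate()                        # stop_monitor
        monitor.wait()
    if result.returncode != 0:
        raise RuntimeError("job failed with exit code %d" % result.returncode)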

The definition of start_monitor follows. Note that it passes $$, the PID of the shell running run.sh, to monitor.py as its second argument; this is the 32331 seen in the ps output above:

function start_monitor(){
    MONITOR_PID=`${workload_func_bin}/monitor.py ${HIBENCH_CUR_WORKLOAD_NAME} $$ ${WORKLOAD_RESULT_FOLDER}/monitor.log ${WORKLOAD_RESULT_FOLDER}/bench.log ${WORKLOAD_RESULT_FOLDER}/monitor.html ${SLAVES} &`
#    echo "start monitor, got child pid:${MONITOR_PID}" > /dev/stderr
    echo ${MONITOR_PID}
}

And the definition of stop_monitor, which simply kills the PID returned by start_monitor:

function stop_monitor(){
    MONITOR_PID=$1
    assert $1 "monitor pid missing"
#    echo "stop monitor, kill ${MONITOR_PID}" > /dev/stderr
    kill ${MONITOR_PID}
}

As well as the definition of execute_withlog:

function execute_withlog () {
    CMD="$@"
    if [ -t 1 ] ; then          # Terminal, beautify the output.
        ${workload_func_bin}/execute_with_log.py ${WORKLOAD_RESULT_FOLDER}/bench.log $CMD
    else                        # pipe, do nothing.
        $CMD
    fi
}

To confirm which process $$ refers to, add the following three lines to run.sh:

32 echo "PID of this script: $$"
33 echo "PPID of this script: $PPID"
34 echo "UID of this script: $UID"

The relevant portion of the file then reads:

28 SIZE=`dir_size $INPUT_HDFS`
29 START_TIME=`timestamp`
30 run_hadoop_job ${HADOOP_EXAMPLES_JAR} sort -outKey org.apache.hadoop.io.Text -outValue org.apache.hadoop.io.Text -r ${NUM_REDS} ${INPUT_HDFS} ${OUTPUT_HDFS}
31 
32 echo "PID of this script: $$"
33 echo "PPID of this script: $PPID"
34 echo "UID of this script: $UID"
35 
36 END_TIME=`timestamp`
37 gen_report ${START_TIME} ${END_TIME} ${SIZE}
38 show_bannar finish
39 leave_bench

Running the script again prints:

PID of this script: 32331
PPID of this script: 18804
UID of this script: 0

ps -ef shows that 18804 and 32331 correspond to the following processes:

UID        PID  PPID  C STIME TTY          TIME CMD
root     18804 18802  0 15:42 pts/0    00:00:00 -bash
root     32331 18804  0 16:02 pts/0    00:00:00 /bin/bash ./run_bak.sh

A bit of searching turns up the following explanation:

In bash, the PID of the shell process running the script is stored in the special variable $$. The variable is read-only; you cannot modify it inside the script. Besides $$, bash exposes other read-only variables: $PPID holds the PID of the parent of this shell (that is, the shell that launched the script), and $UID holds the ID of the user running the script. In the output above, the PID changes on every execution, because each run creates a new shell. The PPID, on the other hand, stays the same as long as you launch the script from the same shell.
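
To relate these shell variables to what the Python side sees, here is a small sketch (my own illustration, not HiBench code) that starts a bash child with subprocess and compares the $$ and $PPID reported by the child against the PIDs known to Python:

import os
import subprocess

# Start a bash child that prints its own PID ($$) and its parent's PID ($PPID).
proc = subprocess.Popen(["bash", "-c", 'echo "$$ $PPID"'], stdout=subprocess.PIPE)
out, _ = proc.communicate()
child_pid, child_ppid = out.decode().split()

print("bash $$     =", child_pid)    # equals proc.pid, the PID of the bash child
print("bash $PPID  =", child_ppid)   # equals os.getpid(), the PID of this Python process
print("proc.pid    =", proc.pid)
print("os.getpid() =", os.getpid())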

Inspecting monitor.py, we find the following call:

pid=os.fork()
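
This fork is what lets start_monitor capture a PID from a script that keeps running in the background: one process writes a PID to stdout and returns, so the backtick command substitution completes, while the other process carries on with the actual sampling. A minimal sketch of that fork-and-report pattern (an illustration of the idea, not the real monitor.py):

import os
import sys
import time

pid = os.fork()
if pid > 0:
    # Parent: report the monitoring process's PID on stdout and exit,
    # so a caller like start_monitor can capture it as MONITOR_PID.
    print(pid)
    sys.stdout.flush()
    sys.exit(0)
else:
    # Child: keep running detached and collect samples (placeholder loop).
    while True:
        # ... read /proc, append to monitor.log, etc. ...
        time.sleep(1)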

Inspecting execute_with_log.py, we find the following call:

proc = subprocess.Popen(" ".join(command_lines), shell=True, bufsize=1, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
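
With shell=True the command string is executed through a shell, and stdout=subprocess.PIPE together with stderr redirected into stdout gives the parent a single stream to read. A simplified sketch of how such output is typically consumed (not the actual execute_with_log.py) reads the pipe line by line and tees it into the log file:

import subprocess
import sys

def execute_with_log(log_path, command_lines):
    """Run a command and tee its combined stdout/stderr into log_path (sketch)."""
    proc = subprocess.Popen(" ".join(command_lines), shell=True, bufsize=1,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    with open(log_path, "w") as log:
        for raw in proc.stdout:                 # read the pipe line by line
            line = raw.decode(errors="replace")
            sys.stdout.write(line)              # show progress on the terminal
            log.write(line)                     # keep a copy in bench.log
    return proc.wait()                          # propagate the job's exit code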

Putting this together: a parent can use subprocess.Popen() to start a child process and read what the child writes back; the child, a bash script, uses the $$ variable to obtain its own PID and passes it to the parent as output, and the parent can then start a monitor program to record the child's runtime data.
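
A minimal end-to-end sketch of this idea (my own example, not HiBench code; child.sh and the two monitor helpers are hypothetical placeholders): the parent starts a bash child with subprocess.Popen(), the first line the child prints is its own $$, and the parent uses that PID to start monitoring before waiting for the child to finish.

import subprocess

def start_monitor(pid):
    # Placeholder: launch whatever monitor you need for this PID,
    # e.g. a script that periodically samples /proc/<pid>/stat.
    print("start monitoring PID", pid)

def stop_monitor(pid):
    print("stop monitoring PID", pid)

# child.sh (hypothetical) reports its PID first, then runs the real workload:
#   #!/bin/bash
#   echo $$
#   exec sleep 30        # exec keeps the PID unchanged for the workload
proc = subprocess.Popen(["bash", "./child.sh"], stdout=subprocess.PIPE)

child_pid = int(proc.stdout.readline().decode())   # PID the child reported via $$
start_monitor(child_pid)                            # parent records the child's runtime data
try:
    for line in proc.stdout:                        # keep consuming the remaining output
        print(line.decode(errors="replace"), end="")
    proc.wait()
finally:
    stop_monitor(child_pid)

In this trivial case child_pid equals proc.pid, so reading it from stdout is redundant; the explicit hand-off becomes useful when there are intermediate wrappers between the parent and the process you actually want to monitor.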

 

References:

https://www.jb51.net/article/62370.htm

https://www.cnblogs.com/ratels/p/11039813.html

https://www.cnblogs.com/ratels/p/11070615.html

https://www.cnblogs.com/zhoug2020/p/5079407.html
