HiBench Notes (4): Testing Spark SQL with HiBench

Much of this material was covered in earlier posts in this series and is not repeated here; for details see http://www.javashuo.com/article/p-smnfzqth-be.html and http://www.javashuo.com/article/p-ksnaeteb-bo.html

Run the script

bin/workloads/sql/scan/prepare/prepare.sh

Output

[root@node1 prepare]# ./prepare.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/sql/scan.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start HadoopPrepareScan bench
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Scan/Input
rm: `hdfs://node1:8020/HiBench/Scan/Input': No such file or directory
Pages:120, USERVISITS:1000
Submit MapReduce Job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn jar /home/cf/app/HiBench-master/autogen/target/autogen-7.1-SNAPSHOT-jar-with-dependencies.jar HiBench.DataGen -t hive -b hdfs://node1:8020/HiBench/Scan -n Input -m 8 -r 8 -p 120 -v 1000 -o sequence
19/06/05 14:19:04 INFO HiBench.HiveData: Closing hive data generator...
finish HadoopPrepareScan bench
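
The prepare step submits a HiBench.DataGen MapReduce job that generates the Rankings and UserVisits data (120 pages and 1,000 user visits here, as reported in the log above) as sequence files under hdfs://node1:8020/HiBench/Scan/Input. As a quick sanity check, the generated input can be registered as an external table from spark-shell and counted. This is only a hedged sketch: the schema below is an assumption modeled on the standard Rankings/UserVisits web-analytics tables HiBench uses, not copied from the workload files.

// Hedged sketch: verify the prepared Scan input from spark-shell (Spark 1.6 on CDH)
import org.apache.spark.sql.hive.HiveContext

val hc = new HiveContext(sc)  // sc is provided by spark-shell

hc.sql("""CREATE EXTERNAL TABLE IF NOT EXISTS rankings
          (pageURL STRING, pageRank INT, avgDuration INT)
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
          STORED AS SEQUENCEFILE
          LOCATION 'hdfs://node1:8020/HiBench/Scan/Input/rankings'""")

hc.sql("SELECT COUNT(*) FROM rankings").show()  // row count should correspond to the 120 generated pages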

Run the script

bin/workloads/sql/scan/spark/run.sh

Output

[root@node1 spark]# ./run.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/sql/scan.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start ScalaSparkScan bench
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Scan/Output
rm: `hdfs://node1:8020/HiBench/Scan/Output': No such file or directory
Export env: SPARKBENCH_PROPERTIES_FILES=/home/cf/app/HiBench-master/report/scan/spark/conf/sparkbench/sparkbench.conf
Export env: HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn
Submit Spark job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/spark/bin/spark-submit  --properties-file /home/cf/app/HiBench-master/report/scan/spark/conf/sparkbench/spark.conf --class com.intel.hibench.sparkbench.sql.ScalaSparkSQLBench --master yarn-client --num-executors 2 --executor-cores 4 --executor-memory 4g /home/cf/app/HiBench-master/sparkbench/assembly/target/sparkbench-assembly-7.1-SNAPSHOT-dist.jar ScalaScan /home/cf/app/HiBench-master/report/scan/spark/conf/../rankings_uservisits_scan.hive
19/06/05 14:39:38 INFO CuratorFrameworkSingleton: Closing ZooKeeper client.
hdfs du -s: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -du -s hdfs://node1:8020/HiBench/Scan/Output
finish ScalaSparkScan bench
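
run.sh submits com.intel.hibench.sparkbench.sql.ScalaSparkSQLBench in yarn-client mode with two executors, passing a workload name (ScalaScan) and a Hive query file (rankings_uservisits_scan.hive); that query file builds external tables over the prepared input and performs a full-table scan of uservisits into the Output directory. The following is a rough, simplified sketch of what such a driver does, not the exact HiBench source:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Simplified sketch of a ScalaSparkSQLBench-style driver:
// read a .hive file and feed each ';'-separated statement to Spark SQL.
object SparkSQLBenchSketch {
  def main(args: Array[String]): Unit = {
    val workloadName = args(0)  // e.g. ScalaScan
    val sqlFile      = args(1)  // e.g. rankings_uservisits_scan.hive

    val sc = new SparkContext(new SparkConf().setAppName(workloadName))
    val hc = new HiveContext(sc)

    val statements = scala.io.Source.fromFile(sqlFile).mkString.split(";")
    statements.map(_.trim).filter(_.nonEmpty).foreach(hc.sql)

    sc.stop()
  }
}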

Check the ResourceManager Web UI

prepare.sh launched two MapReduce jobs, application_1554951897984_0047 and application_1554951897984_0046, while run.sh launched one Spark job, application_1554951897984_0048.

Check the (Hadoop) HistoryServer Web UI

It shows the two MapReduce jobs launched by prepare.sh, application_1554951897984_0047 and application_1554951897984_0046.

Check the (Spark) History Server Web UI

It does not show the Spark job application_1554951897984_0048 launched by run.sh.
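
One likely explanation: the Spark History Server only lists applications that wrote event logs, and the spark-submit above passes --properties-file pointing at HiBench's generated spark.conf, which replaces the cluster's spark-defaults.conf, so any event-log settings configured there are not applied. If that is the case, adding the event-log properties to HiBench's conf/spark.conf should make the job appear; the directory below is a placeholder and must match the History Server's spark.history.fs.logDirectory:

spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://node1:8020/user/spark/applicationHistory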

 

Run the script

bin/workloads/sql/join/prepare/prepare.sh

Output

[root@node1 prepare]# ./prepare.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/sql/join.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start HadoopPrepareJoin bench
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Join/Input
rm: `hdfs://node1:8020/HiBench/Join/Input': No such file or directory
Pages:120, USERVISITS:1000
Submit MapReduce Job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn jar /home/cf/app/HiBench-master/autogen/target/autogen-7.1-SNAPSHOT-jar-with-dependencies.jar HiBench.DataGen -t hive -b hdfs://node1:8020/HiBench/Join -n Input -m 8 -r 8 -p 120 -v 1000 -o sequence
19/06/06 10:51:53 INFO HiBench.HiveData: Closing hive data generator...
finish HadoopPrepareJoin bench

Run the script

bin/workloads/sql/join/spark/run.sh

Output

[root@node1 spark]# ./run.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/sql/join.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start ScalaSparkJoin bench
hdfs du -s: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -du -s hdfs://node1:8020/HiBench/Join/Input
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Join/Output
rm: `hdfs://node1:8020/HiBench/Join/Output': No such file or directory
Export env: SPARKBENCH_PROPERTIES_FILES=/home/cf/app/HiBench-master/report/join/spark/conf/sparkbench/sparkbench.conf
Export env: HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn
Submit Spark job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/spark/bin/spark-submit  --properties-file /home/cf/app/HiBench-master/report/join/spark/conf/sparkbench/spark.conf --class com.intel.hibench.sparkbench.sql.ScalaSparkSQLBench --master yarn-client --num-executors 2 --executor-cores 4 --executor-memory 4g /home/cf/app/HiBench-master/sparkbench/assembly/target/sparkbench-assembly-7.1-SNAPSHOT-dist.jar ScalaJoin /home/cf/app/HiBench-master/report/join/spark/conf/../rankings_uservisits_join.hive
19/06/06 10:52:53 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
finish ScalaSparkJoin bench
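
ScalaJoin runs rankings_uservisits_join.hive through the same driver. The join workload is, roughly, the classic web-analytics join: total ad revenue per sourceIP over a date-bounded slice of uservisits joined with rankings on the page URL. Below is a hedged sketch of the kind of statement it executes, assuming rankings/uservisits external tables have been created over /HiBench/Join/Input as in the scan example; the column names and date range are assumptions, and the authoritative text is in the .hive file.

// Hedged sketch of the join workload's core statement (run via hc.sql)
hc.sql("""SELECT sourceIP, AVG(pageRank) AS avgPageRank, SUM(adRevenue) AS totalRevenue
          FROM rankings R
          JOIN (SELECT sourceIP, destURL, adRevenue
                FROM uservisits
                WHERE visitDate > '1999-01-01' AND visitDate < '2000-01-01') UV
            ON R.pageURL = UV.destURL
          GROUP BY sourceIP
          ORDER BY totalRevenue DESC""").show(10)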

Run the script

bin/workloads/sql/aggregation/prepare/prepare.sh

Output

[root@node1 prepare]# ./prepare.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/sql/aggregation.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start HadoopPrepareAggregation bench
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Aggregation/Input
rm: `hdfs://node1:8020/HiBench/Aggregation/Input': No such file or directory
Pages:120, USERVISITS:1000
Submit MapReduce Job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn jar /home/cf/app/HiBench-master/autogen/target/autogen-7.1-SNAPSHOT-jar-with-dependencies.jar HiBench.DataGen -t hive -b hdfs://node1:8020/HiBench/Aggregation -n Input -m 8 -r 8 -p 120 -v 1000 -o sequence
19/06/06 10:56:34 INFO HiBench.HiveData: Closing hive data generator...
finish HadoopPrepareAggregation bench

Run the script

bin/workloads/sql/aggregation/spark/run.sh

Output

[root@node1 spark]# ./run.sh
patching args=
Parsing conf: /home/cf/app/HiBench-master/conf/hadoop.conf
Parsing conf: /home/cf/app/HiBench-master/conf/hibench.conf
Parsing conf: /home/cf/app/HiBench-master/conf/spark.conf
Parsing conf: /home/cf/app/HiBench-master/conf/workloads/sql/aggregation.conf
probe sleep jar: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/../../jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar
start ScalaSparkAggregation bench
hdfs rm -r: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -rm -r -skipTrash hdfs://node1:8020/HiBench/Aggregation/Output
rm: `hdfs://node1:8020/HiBench/Aggregation/Output': No such file or directory
Export env: SPARKBENCH_PROPERTIES_FILES=/home/cf/app/HiBench-master/report/aggregation/spark/conf/sparkbench/sparkbench.conf
Export env: HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn
Submit Spark job: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/spark/bin/spark-submit  --properties-file /home/cf/app/HiBench-master/report/aggregation/spark/conf/sparkbench/spark.conf --class com.intel.hibench.sparkbench.sql.ScalaSparkSQLBench --master yarn-client --num-executors 2 --executor-cores 4 --executor-memory 4g /home/cf/app/HiBench-master/sparkbench/assembly/target/sparkbench-assembly-7.1-SNAPSHOT-dist.jar ScalaAggregation /home/cf/app/HiBench-master/report/aggregation/spark/conf/../uservisits_aggre.hive
19/06/06 10:57:47 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
hdfs du -s: /opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/bin/hadoop --config /etc/hadoop/conf.cloudera.yarn fs -du -s hdfs://node1:8020/HiBench/Aggregation/Output
finish ScalaSparkAggregation bench
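
ScalaAggregation runs uservisits_aggre.hive, which is essentially a GROUP BY over uservisits that writes the summed ad revenue per sourceIP into an output table under /HiBench/Aggregation/Output (the final hdfs du -s above measures that output). Below is a hedged sketch of the core statements; the table layout and column names are assumptions, and the authoritative text is in the .hive file.

// Hedged sketch of the aggregation workload's core statements (run via hc.sql)
hc.sql("""CREATE EXTERNAL TABLE IF NOT EXISTS uservisits_aggre
          (sourceIP STRING, sumAdRevenue DOUBLE)
          STORED AS SEQUENCEFILE
          LOCATION 'hdfs://node1:8020/HiBench/Aggregation/Output/uservisits_aggre'""")

hc.sql("""INSERT OVERWRITE TABLE uservisits_aggre
          SELECT sourceIP, SUM(adRevenue)
          FROM uservisits
          GROUP BY sourceIP""")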

 

 

Reference: http://www.javashuo.com/article/p-brovccqn-bn.html
