Some Linux commands

Reposted from: http://www.fx114.net/qa-81-151600.aspx

Some miscellaneous notes, recorded here for later use; anything else worth keeping will be appended to this page.


Find the most CPU-intensive thread inside a process:

ps -Lfp pid  # list all threads of a process: -L threads, -f full format, -p select by PID
ps -mp pid -o THREAD,tid,time  # per-thread CPU time for the process

 

top -Hp pid  # find the thread ID (TID) using the most CPU inside the process
printf "%x\n" tid  # convert the decimal TID to hexadecimal
jstack pid | grep tid  # locate that thread's stack (jstack prints nid=0x<hex>)
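The three steps above can be combined into one small script. A minimal sketch, assuming a Linux `ps` and a JDK `jstack` on the PATH; the default PID is only a placeholder:

```shell
#!/bin/sh
# Sketch: dump the stack of the busiest thread inside a Java process.
pid=${1:-$$}   # placeholder default; pass a real JVM PID as $1
# TID of the thread using the most CPU (-L lists threads, pcpu = %CPU)
tid=$(ps -Lp "$pid" -o tid= -o pcpu= | sort -k2 -nr | head -1 | awk '{print $1}')
# jstack labels threads as nid=0x<hex>, so convert the decimal TID
nid=$(printf '%x' "$tid")
echo "busiest tid=$tid nid=0x$nid"
# No output from this line unless $pid is actually a JVM
jstack "$pid" 2>/dev/null | grep -A 20 "nid=0x$nid" || true
```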


Dump a Java process's heap with jmap and analyze it with jhat:

jmap -dump:format=b,file=/tmp/dump.dat 21711  # 21711 = PID of the target JVM
jhat -J-Xmx512m -port 9998 /tmp/dump.dat  # then browse the analysis at http://localhost:9998


Storm daemon startup commands:

nohup ./storm nimbus >/dev/null 2>&1 &
nohup ./storm supervisor >/dev/null 2>&1 &
nohup ./storm ui >/dev/null 2>&1 &
nohup ./storm logviewer >/dev/null 2>&1 &


JStorm daemon startup commands:

nohup $JSTORM_HOME/bin/jstorm nimbus >/dev/null 2>&1 &
nohup $JSTORM_HOME/bin/jstorm supervisor >/dev/null 2>&1 &


Storm kill commands:

kill `ps aux | egrep '(daemon\.nimbus)|(storm\.ui\.core)' | fgrep -v egrep | awk '{print $2}'`
kill `ps aux | fgrep storm | fgrep -v 'fgrep' | awk '{print $2}'`
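As an alternative sketch, `pgrep`/`pkill` (procps) match the full command line with `-f`, which avoids the pipeline and the need to filter out the grep process itself; the pattern below mirrors the `egrep` one above:

```shell
# List matching PIDs first (exit status 1 just means nothing matched)
pgrep -f 'daemon\.nimbus|storm\.ui\.core' || true
# Then kill them with the same pattern
pkill -f 'daemon\.nimbus|storm\.ui\.core' || true
```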


Hive service startup commands:

nohup ./hive --service hiveserver2 > hiveserver2.log 2>&1  &
nohup ./hive --service metastore > metastore.log 2>&1 &
nohup ./hive --service hwi > hwi.log 2>&1 &


List the files under a directory that contain a given string:

find . -type f -name "*.sh" -exec grep -nH "xxxxxx" {} \;
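With GNU grep, the same search can be sketched without `find`, using recursive mode plus `--include`. The demo below builds a throwaway directory so it has something to match (`xxxxxx` is the search string, as above):

```shell
# Set up a scratch directory with one matching and one non-matching script
dir=$(mktemp -d)
printf 'echo xxxxxx\n' > "$dir/match.sh"
printf 'echo nothing\n' > "$dir/other.sh"
# Recursive search with file name and line number (equivalent of find + grep -nH)
grep -rnH --include='*.sh' 'xxxxxx' "$dir"
# Just the matching file names
grep -rl --include='*.sh' 'xxxxxx' "$dir"
rm -r "$dir"
```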


Clear the Linux page cache:

sync && echo 3 > /proc/sys/vm/drop_caches  # requires root; 3 = drop page cache + dentries and inodes
# under sudo, wrap the redirection: sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'


Show a given number of lines before and after the lines matching a string in a file:

grep -n -A 10 -B 10 "xxxx" file
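When the before/after counts are equal, GNU grep's `-C` is shorthand for `-A`/`-B`; a small self-contained demo:

```shell
# Build a throwaway file and show one line of context around the match
f=$(mktemp)
printf 'line1\nxxxx\nline3\n' > "$f"
# Context lines are printed with "-" after the line number, matches with ":"
grep -n -C 1 'xxxx' "$f"
rm "$f"
```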


tcpdump capture examples:

tcpdump -i eth1 -XvvS -s 0 tcp port 10020  # -X hex+ASCII dump, -vv verbose, -S absolute TCP sequence numbers, -s 0 capture full packets
tcpdump -S -nn -vvv -i eth1 port 10020  # -nn: don't resolve host names or port names


Spark job submission example:

./spark-submit --deploy-mode cluster --master spark://10.49.133.77:6066 --jars hdfs://10.49.133.77:9000/spark/guava-14.0.1.jar --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-UseGCOverheadLimit" --class spark.itil.video.ItilData hdfs://10.49.133.77:9000/spark/sparktest2-0.0.1-jar-with-dependencies.jar


Spark worker startup example:

./spark-daemon.sh start org.apache.spark.deploy.worker.Worker 1 --webui-port 8081 --port 8092 spark://100.65.32.215:8070,100.65.32.212:8070


Spark SQL examples:

export SPARK_CLASSPATH=$SPARK_CLASSPATH:/data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar
SPARK_CLASSPATH=$SPARK_CLASSPATH:/data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar ./spark-sql --master spark://10.49.133.77:8070
./spark-sql --master spark://10.49.133.77:8070 --jars /data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar

./spark-shell --jars /data/webitil/hive/lib/mysql-connector-java-5.0.8-bin.jar
./spark-shell --packages com.databricks:spark-csv_2.11:1.4.0
ADD_JARS=../elasticsearch-hadoop-2.1.0.Beta1/dist/elasticsearch-spark_2.10-2.1.0.Beta1.jar ./bin/spark-shell

 

./spark-shell
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val url = "jdbc:mysql://10.198.30.118:3311/logplatform"
val table = " (select * from t_log_stat limit 5) as tb1"
val df = sqlContext.read.format("jdbc")
  .option("url", url)
  .option("dbtable", table)
  .option("driver", "com.mysql.jdbc.Driver")
  .option("user", "logplat_w")
  .option("password", "rm5Bey6x")
  .load()
df.show()



Install your own jar into the local Maven repository:

mvn install:install-file -DgroupId=com.tencent.omg.itil.net -DartifactId=IpServiceJNI -Dversion=1.0 -Dpackaging=jar -Dfile=d:\storm\IpServiceJNI-1.0.jar