Starting Hadoop Big Data Components

1. HDFS

1.1. Start the cluster

sbin/start-dfs.sh
Note: this script uses ssh to batch-start the namenode, datanode, journalnode, and zkfc processes across the nodes.
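The ssh fan-out that the note describes can be sketched as a simple loop over the worker hostnames. This is an illustration only: the hostnames are made up, the real script reads them from etc/hadoop/slaves, and ssh is replaced by echo so the sketch has no side effects.

```shell
#!/bin/sh
# Hypothetical worker list; start-dfs.sh reads the real one from etc/hadoop/slaves.
NODES="node1 node2 node3"

for host in $NODES; do
  # The real script runs this command over ssh; echoed here for illustration only.
  echo "ssh $host sbin/hadoop-daemon.sh start datanode"
done
```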

1.2. Start NameNode

sbin/hadoop-daemon.sh start namenode
1.3. Start DataNode

sbin/hadoop-daemon.sh start datanode
1.4. Start the MapReduce HistoryServer

sbin/mr-jobhistory-daemon.sh start historyserver

1.5. Stop the cluster

sbin/stop-dfs.sh
1.6. Stop individual processes

sbin/hadoop-daemon.sh stop zkfc
sbin/hadoop-daemon.sh stop journalnode
sbin/hadoop-daemon.sh stop datanode
sbin/hadoop-daemon.sh stop namenode
Reference: http://www.cnblogs.com/jun1019/p/6266615.html

 

2. Yarn (v 2.7.3)

2.1. Start the cluster

sbin/start-yarn.sh
Note: start-yarn.sh only starts a ResourceManager process on the local machine; the NodeManagers on the three worker machines are started via ssh.
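After the startup above, a quick way to confirm the daemons came up is sketched below. Both commands assume a running cluster: jps ships with the JDK and lists local Java processes, and `yarn node -list` asks the ResourceManager which NodeManagers have registered.

```shell
# List the Java daemons on this machine (expect ResourceManager on the master,
# NodeManager on the workers).
jps

# Ask the ResourceManager which NodeManagers have registered.
bin/yarn node -list
```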

2.2. Start ResourceManager

sbin/yarn-daemon.sh start resourcemanager
2.3. Start NodeManager

sbin/yarn-daemon.sh start nodemanager
2.4. Start JobHistoryServer

The JobHistoryServer is a MapReduce daemon, so it is started with the MR script (yarn-daemon.sh does not accept historyserver):

sbin/mr-jobhistory-daemon.sh start historyserver
2.5. Stop the cluster

sbin/stop-yarn.sh
2.6. Stop individual daemons

sbin/yarn-daemon.sh stop resourcemanager
sbin/yarn-daemon.sh stop nodemanager
Reference: http://www.cnblogs.com/jun1019/p/6266615.html

 

3. ZooKeeper (v 3.4.5)

3.1. Start the cluster

Run the following on every node in the ensemble:

bin/zkServer.sh start
3.2. Start a single node

bin/zkServer.sh start
3.3. Start the client

bin/zkCli.sh -server master:2181
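A quick health check for the ensemble, assuming ZooKeeper is listening on master:2181 and nc is installed; ruok/imok is one of ZooKeeper's four-letter-word commands:

```shell
# Prints "imok" when the server is healthy.
echo ruok | nc master 2181

# Shows whether this node is currently the leader or a follower.
bin/zkServer.sh status
```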

4. Kafka (v 2.10-0.10.1.1)

4.1. Start the cluster

Run the following on every broker:

bin/kafka-server-start.sh -daemon config/server.properties
4.2. Start a single node

bin/kafka-server-start.sh -daemon config/server.properties
4.3. Create a topic

bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic test
4.4. List topics

bin/kafka-topics.sh --list --zookeeper master:2181
4.5. Produce data

bin/kafka-console-producer.sh --broker-list master:9092 --topic test
4.6. Consume data

bin/kafka-console-consumer.sh --zookeeper master:2181 --topic test --from-beginning
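A non-interactive round trip through the test topic created above (assumes the broker on master:9092 and ZooKeeper on master:2181 are up; --max-messages makes the console consumer exit after one record):

```shell
# Produce one record without an interactive prompt.
echo "hello kafka" | bin/kafka-console-producer.sh --broker-list master:9092 --topic test

# Read a single record from the beginning, then exit.
bin/kafka-console-consumer.sh --zookeeper master:2181 --topic test --from-beginning --max-messages 1
```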

5. HBase (v 1.2.4)

5.1. Start/stop the cluster

bin/start-hbase.sh
bin/stop-hbase.sh
5.2. Start/stop HMaster

bin/hbase-daemon.sh start master
bin/hbase-daemon.sh stop master
5.3. Start/stop HRegionServer

bin/hbase-daemon.sh start regionserver
bin/hbase-daemon.sh stop regionserver
5.4. Start the shell

bin/hbase shell
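Once in the shell, a minimal smoke test might look like this; the table and column-family names are made up, and the here-doc feeds the shell non-interactively:

```shell
# Create a table, write and read one cell, then clean up.
bin/hbase shell <<'EOF'
create 't_demo', 'cf'
put 't_demo', 'row1', 'cf:a', 'value1'
scan 't_demo'
disable 't_demo'
drop 't_demo'
EOF
```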

6. Spark (v 2.1.0-bin-hadoop2.7)

6.1. Launch modes

6.1.1. Local

bin/spark-shell --master local
6.1.2.Standalone

bin/spark-shell --master spark://master:7077
6.1.3. Yarn Client

bin/spark-shell --master yarn-client
6.1.4. Yarn Cluster

spark-shell cannot run in cluster deploy mode (an interactive shell needs its driver on the local machine); submit a packaged application instead:

bin/spark-submit --master yarn --deploy-mode cluster --class <main-class> <application-jar>
7. Flume

7.1. Start an agent

bin/flume-ng agent -n LogAgent -c conf -f conf/logagent.properties -Dflume.root.logger=DEBUG,console
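The conf/logagent.properties file passed with -f is not shown above. A minimal hypothetical one, consistent with the -n LogAgent flag, could be generated like this (the source/channel/sink names and the spool directory are made up; every property key must begin with the agent name given to -n):

```shell
#!/bin/sh
# Write a minimal Flume agent definition for an agent named LogAgent:
# a spooling-directory source feeding a memory channel and a logger sink.
cat > logagent.properties <<'EOF'
LogAgent.sources = src1
LogAgent.channels = ch1
LogAgent.sinks = sink1

LogAgent.sources.src1.type = spooldir
LogAgent.sources.src1.spoolDir = /tmp/logs
LogAgent.sources.src1.channels = ch1

LogAgent.channels.ch1.type = memory

LogAgent.sinks.sink1.type = logger
LogAgent.sinks.sink1.channel = ch1
EOF

# Sanity check: count the property lines that reference the agent name.
grep -c '^LogAgent' logagent.properties
```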

8. Sqoop

8.1. Import

sqoop import \
--connect jdbc:mysql://mysql.example.com/sqoop \
--username sqoop \
--password sqoop \
--table cities
8.2. Export

sqoop export \
--connect jdbc:mysql://mysql.example.com/sqoop \
--username sqoop \
--password sqoop \
--table cities \
--export-dir cities

9. Hive

9.1. Start the metastore

nohup hive --service metastore >> /home/zkpk/apache-hive-2.1.1-bin/metastore.log 2>&1 &
9.2. Start HiveServer2

nohup hive --service hiveserver2 >> /home/zkpk/apache-hive-2.1.1-bin/hiveserver.log 2>&1 &
9.3. Start the shell

Hive CLI:

hive -h <host> -p <port>

Beeline (connects to HiveServer2):

beeline
!connect jdbc:hive2://master:10000
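Beeline can also connect and run a statement in one shot; -u, -n, and -e are standard beeline flags, and the zkpk user here is an assumption taken from the home directory used in the nohup commands above:

```shell
# Connect to HiveServer2, run one statement, and exit.
beeline -u jdbc:hive2://master:10000 -n zkpk -e "SHOW DATABASES;"
```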
10. MySQL

10.1. Start the shell

mysql -u<user> -p<password>

 

---------------------------------------------------------------


11. Storm

11.1. Start the cluster

On bgs-5p173-wangwenting, start nimbus and the UI:

[hadoop@bgs-5p173-wangwenting storm]$ cd /opt/storm/bin
[hadoop@bgs-5p173-wangwenting bin]$ nohup ./storm nimbus &
[hadoop@bgs-5p173-wangwenting bin]$ nohup ./storm ui &

Then on bgs-5p174-wangwenting and bgs-5p175-wangwenting, start the supervisors:

[hadoop@bgs-5p174-wangwenting bin]$ nohup ./storm supervisor &
[hadoop@bgs-5p175-wangwenting bin]$ nohup ./storm supervisor &
