This article applies to Red Hat or CentOS.
For a test cluster, if you have installed a Hadoop cluster through Ambari and want to start over, you need to clean the cluster up first.
When many Hadoop components were installed, this is tedious work. Below is the cleanup procedure I put together.
1. Stop every component in the cluster through Ambari. If a component refuses to stop, kill it directly with kill -9 XXX.
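As a rough sketch (the grep pattern is an assumption; adjust it to the components you actually installed), leftover processes can be found and then killed by hand:

ps -ef | grep -v grep | grep -E 'hadoop|hbase|zookeeper|storm|kafka|ambari'   # list surviving processes
kill -9 <PID>   # repeat for each PID still shown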
2. Stop ambari-server and ambari-agent.
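On the Ambari host and on every agent node respectively, the stock commands are:

ambari-server stop
ambari-agent stop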
3. Uninstall the installed packages.
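A plausible removal pass for a typical HDP stack might look like the following (the package globs are assumptions; match them to what was actually installed):

yum remove -y hadoop* hbase* zookeeper* hive* oozie* pig* tez*
yum remove -y storm* falcon* flume* kafka* knox* slider* spark* sqoop*
yum remove -y hdp-select* ranger* ambari-metrics* postgresql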
The commands above may not cover every package. After they finish, list what is still installed (for example with rpm -qa | grep -iE 'hadoop|hdp|ambari') and, if anything remains, keep removing it with #yum remove XXX.
4. Delete the PostgreSQL data.
After the PostgreSQL packages are uninstalled, their data is still on disk and must be deleted. Otherwise, a freshly installed ambari-server may pick up the data from the previous installation, and since that data is now invalid, it has to go.
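On CentOS the bundled PostgreSQL keeps its data under /var/lib/pgsql by default (path assumed for the stock package; check your PGDATA if it was customized):

rm -rf /var/lib/pgsql/data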
5. Delete the users.
Installing a Hadoop cluster with Ambari creates a number of service users. When tearing the cluster down, it is worth removing these users along with their corresponding directories; doing so avoids file-permission errors the next time the cluster runs.
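A sketch of the removal, assuming the default service-account names of a typical HDP install (trim the list to what exists on your nodes); userdel -r also removes each user's home directory:

for u in ambari-qa hdfs yarn mapred hbase zookeeper hive hcat oozie falcon storm kafka flume knox spark; do
  userdel -r "$u"
done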
6. Delete leftover Ambari data.
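On a default install this is the usual set of Ambari directories (paths assumed; verify they exist before removing):

rm -rf /var/lib/ambari-server
rm -rf /var/lib/ambari-agent
rm -rf /etc/ambari-server
rm -rf /etc/ambari-agent
rm -rf /var/log/ambari-server
rm -rf /var/log/ambari-agent
rm -rf /usr/lib/ambari-server
rm -rf /usr/lib/ambari-agent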
7. Delete leftover data from other Hadoop components.
rm -rf /etc/falcon
rm -rf /etc/knox
rm -rf /etc/hive-webhcat
rm -rf /etc/kafka
rm -rf /etc/slider
rm -rf /etc/storm-slider-client
rm -rf /etc/spark
rm -rf /var/run/spark
rm -rf /var/run/hadoop
rm -rf /var/run/hbase
rm -rf /var/run/zookeeper
rm -rf /var/run/flume
rm -rf /var/run/storm
rm -rf /var/run/webhcat
rm -rf /var/run/hadoop-yarn
rm -rf /var/run/hadoop-mapreduce
rm -rf /var/run/kafka
rm -rf /var/log/hadoop
rm -rf /var/log/hbase
rm -rf /var/log/flume
rm -rf /var/log/storm
rm -rf /var/log/hadoop-yarn
rm -rf /var/log/hadoop-mapreduce
rm -rf /var/log/knox
rm -rf /usr/lib/flume
rm -rf /usr/lib/storm
rm -rf /var/lib/hive
rm -rf /var/lib/oozie
rm -rf /var/lib/flume
rm -rf /var/lib/hadoop-hdfs
rm -rf /var/lib/knox
rm -rf /var/log/hive
rm -rf /var/log/oozie
rm -rf /var/log/zookeeper
rm -rf /var/log/falcon
rm -rf /var/log/webhcat
rm -rf /var/log/spark
rm -rf /var/tmp/oozie
rm -rf /tmp/ambari-qa
rm -rf /var/hadoop
rm -rf /hadoop/falcon
rm -rf /tmp/hadoop
rm -rf /tmp/hadoop-hdfs
rm -rf /usr/hdp
rm -rf /usr/hadoop
rm -rf /opt/hadoop
rm -rf /opt/hadoop2
rm -rf /hadoop
rm -rf /etc/ambari-metrics-collector
rm -rf /etc/ambari-metrics-monitor
rm -rf /var/run/ambari-metrics-collector
rm -rf /var/run/ambari-metrics-monitor
rm -rf /var/log/ambari-metrics-collector
rm -rf /var/log/ambari-metrics-monitor
rm -rf /var/lib/hadoop-yarn
rm -rf /var/lib/hadoop-mapreduce
8. Clean up the yum repositories.
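Finally, drop the repo files that Ambari laid down and flush the yum caches (the HDP*.repo and ambari.repo file names are the defaults; adjust if yours differ):

rm -f /etc/yum.repos.d/HDP*.repo
rm -f /etc/yum.repos.d/ambari.repo
yum clean all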