1. Hadoop memory configuration tuning
Memory settings in mapred-site.xml:

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
</property>

yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
  <description>Memory available on each node, in MB</description>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
  <description>Minimum memory a single task may request, default 1024 MB</description>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
  <description>Maximum memory a single task may request, default 8192 MB</description>
</property>

Hadoop daemon heap size in hadoop-env.sh:

export HADOOP_HEAPSIZE_MAX=2048
export HADOOP_HEAPSIZE_MIN=2048
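One consistency rule is worth keeping in mind: each -Xmx in *.java.opts must stay below the matching *.memory.mb container size, and no container may exceed yarn.scheduler.maximum-allocation-mb (with the values above, the 3072 MB reduce container is larger than the 2048 MB ceiling, so that request would be refused unless the ceiling is raised). Below is a minimal shell sketch for checking these relationships; it assumes HADOOP_CONF_DIR points at the config directory and that xmllint is installed, neither of which is part of the original notes.

#!/usr/bin/env bash
# Read a property value out of a Hadoop XML config file.
conf="${HADOOP_CONF_DIR:?set HADOOP_CONF_DIR to your config directory}"
get() { xmllint --xpath "string(//property[name='$2']/value)" "$conf/$1"; }

map_mb=$(get mapred-site.xml mapreduce.map.memory.mb)
red_mb=$(get mapred-site.xml mapreduce.reduce.memory.mb)
max_mb=$(get yarn-site.xml yarn.scheduler.maximum-allocation-mb)
nm_mb=$(get yarn-site.xml yarn.nodemanager.resource.memory-mb)

echo "map=$map_mb reduce=$red_mb scheduler-max=$max_mb nodemanager=$nm_mb"
# A container request larger than the scheduler maximum is rejected by YARN.
[ "$red_mb" -le "$max_mb" ] || echo "WARN: reduce container exceeds yarn.scheduler.maximum-allocation-mb"
# The scheduler maximum should not exceed what a single NodeManager offers.
[ "$max_mb" -le "$nm_mb" ] || echo "WARN: scheduler maximum exceeds yarn.nodemanager.resource.memory-mb"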
2. HBase parameter tuning
HBase heap size in hbase-env.sh:

export HBASE_HEAPSIZE=8G
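If the HMaster and the RegionServers should not share one size, hbase-env.sh also accepts per-daemon JVM options. A minimal sketch follows; the 8 G / 4 G split is an illustrative assumption, not a recommendation from the original notes.

export HBASE_HEAPSIZE=8G
# Per-daemon overrides; an explicit -Xmx here takes precedence over the global heap size.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms8g -Xmx8g"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xms4g -Xmx4g"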
3. Exporting and importing data
Export HBase data:

hbase org.apache.hadoop.hbase.mapreduce.Export NS1.GROUPCHAT /do1/GROUPCHAT
hdfs dfs -get /do1/GROUPCHAT /opt/GROUPCHAT

Optionally remove the exported data from HDFS:

hdfs dfs -rm -r /do1/GROUPCHAT

Import HBase data:

hdfs dfs -put /opt/GROUPCHAT /do1/GROUPCHAT
hdfs dfs -ls /do1/GROUPCHAT
hbase org.apache.hadoop.hbase.mapreduce.Import NS1.GROUPCHAT /do1/GROUPCHAT
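Note that the Import job does not create the destination table: NS1.GROUPCHAT must already exist in the target HBase, with the same column families, before the import runs. A minimal sketch of pre-creating it from the HBase shell, where the single column family 'f' is only a placeholder for the table's real schema:

# Create the destination table first (replace 'f' with the actual column families),
# then run the Import command shown above.
echo "create 'NS1.GROUPCHAT', 'f'" | hbase shell -n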
4. Two masters: add this configuration on every node
backup-masters
[root@do2cloud01 conf]# pwd
/do1cloud/hbase-2.0.5/conf
[root@do1cloud01 conf]# cat backup-masters
do1cloud02
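start-hbase.sh launches a standby HMaster on every host listed in backup-masters, provided the file is present in the conf directory on the node where the script runs. Two quick checks that the standby is up, assuming passwordless ssh and the default /hbase znode parent (both assumptions):

ssh root@do1cloud02 jps | grep HMaster   # a standby HMaster process should be running on do1cloud02
hbase zkcli ls /hbase/backup-masters     # backup masters register themselves under this znode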
5. Use the web consoles to check whether the settings have taken effect
10.0.0.99:16010  HBase web console
10.0.0.99:9870   Hadoop (HDFS) web console
10.0.0.99:8088   YARN cluster web console
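The same information is also exposed over HTTP, which is handy for scripting; two hedged examples below use the stock YARN REST API and the Hadoop/HBase JMX servlet, so verify the endpoints against your versions:

curl -s http://10.0.0.99:8088/ws/v1/cluster/metrics              # total and allocated memory as seen by YARN
curl -s 'http://10.0.0.99:16010/jmx?qry=java.lang:type=Memory'   # heap actually in use by the HBase master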
6. regionservers
[root@do2cloud01 conf]# cat regionservers
do1cloud02
do1cloud03
do1cloud04
do1cloud05
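After editing conf/regionservers, the file has to reach every node before the change matters. A minimal sketch of syncing it and restarting, assuming passwordless ssh, the install path shown above, and that a full restart is acceptable (HBase also ships bin/rolling-restart.sh for no-downtime changes):

cd /do1cloud/hbase-2.0.5
# Push the updated host list to all region server nodes.
for h in do1cloud02 do1cloud03 do1cloud04 do1cloud05; do
  scp conf/regionservers root@$h:/do1cloud/hbase-2.0.5/conf/
done
# Restart the cluster from the active master so the new list is picked up.
bin/stop-hbase.sh && bin/start-hbase.sh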