1. Prepare the Linux environment
1.0 Configure the VMware host-only network
    Right-click the VMware shortcut -> Open file location -> double-click vmnetcfg.exe -> VMnet1 (host-only) -> change the subnet IP to 192.168.1.0, subnet mask 255.255.255.0 -> Apply -> OK.
    Back in Windows: open Network and Sharing Center -> Change adapter settings -> right-click VMnet1 -> Properties -> double-click IPv4 -> set the Windows IP to 192.168.1.110, subnet mask 255.255.255.0 -> OK.
    In VMware: My Computer -> select the virtual machine -> right-click -> Settings -> Network Adapter -> Host-only -> OK.
1.1 Change the hostname
    vim /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=xp    ###
1.2 Change the IP address
    There are two ways.
    First: through the Linux GUI (strongly recommended).
        In the Linux desktop -> right-click the network icon (the two small computers) in the top-right corner -> Edit Connections -> select the current network "System eth0" -> Edit -> IPv4 tab -> set Method to Manual -> Add -> IP 192.168.1.119, netmask 255.255.255.0, gateway 192.168.1.1 -> Apply.
    Second: edit the configuration file directly (for command-line diehards).
        vim /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE="eth0"
        BOOTPROTO="static"    ###
        HWADDR="00:0C:29:3C:BF:E7"
        IPV6INIT="yes"
        NM_CONTROLLED="yes"
        ONBOOT="yes"
        TYPE="Ethernet"
        UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
        IPADDR="192.168.1.44"    ###
        NETMASK="255.255.255.0"    ###
        GATEWAY="192.168.1.1"    ###
1.3 Map the hostname to the IP address
    vim /etc/hosts
    192.168.1.44 xp
1.4 Disable the firewall
    # Check the firewall status
    service iptables status
    # Stop the firewall
    service iptables stop
    # Check whether the firewall starts on boot
    chkconfig iptables --list
    # Disable the firewall on boot
    chkconfig iptables off
1.5 Reboot Linux
    reboot
2. Install the JDK
2.1 Upload the JDK archive
2.2 Extract the JDK
    # Create a directory
    mkdir /usr/java
    # Extract
    tar -zxvf jdk-7u55-linux-i586.tar.gz -C /usr/java/
2.3 Add Java to the environment variables
    vim /etc/profile
    # Append at the end of the file
    export JAVA_HOME=/usr/java/jdk1.7.0_55
    export PATH=$PATH:$JAVA_HOME/bin
    # Reload the configuration
    source /etc/profile
3. Install Hadoop
3.1 Upload the Hadoop archive
3.2 Extract the Hadoop archive
    mkdir /cloud
    # Extract into /cloud/
    tar -zxvf hadoop-2.2.0.tar.gz -C /cloud/
3.3 Edit the configuration files (five of them)
    First: hadoop-env.sh
        # Edit line 27
        export JAVA_HOME=/usr/java/jdk1.7.0_55
    Second: core-site.xml
        <configuration>
            <!-- Address of the HDFS master (namenode) -->
            <property>
                <name>fs.defaultFS</name>
                <value>hdfs://xp:9000</value>
            </property>
            <!-- Directory for files Hadoop generates at runtime -->
            <property>
                <name>hadoop.tmp.dir</name>
                <value>/cloud/hadoop-2.2.0/tmp</value>
            </property>
        </configuration>
    Third: hdfs-site.xml
        <configuration>
            <!-- HDFS replication factor -->
            <property>
                <name>dfs.replication</name>
                <value>1</value>
            </property>
        </configuration>
    Fourth: mapred-site.xml.template, which must be renamed first:
        mv mapred-site.xml.template mapred-site.xml
        <configuration>
            <!-- Tell the framework that MR runs on YARN -->
            <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
            </property>
        </configuration>
    Fifth: yarn-site.xml
        <configuration>
            <!-- Reducers fetch data via mapreduce_shuffle -->
            <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
            </property>
        </configuration>
3.4 Add Hadoop to the environment variables
    vim /etc/profile
    export JAVA_HOME=/usr/java/jdk1.7.0_55
    export HADOOP_HOME=/cloud/hadoop-2.2.0
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
    source /etc/profile
3.5 Format HDFS (the namenode); required only before first use
    hdfs namenode -format
    (the older form "hadoop namenode -format" still works but is deprecated)
3.6 Start Hadoop
    Start HDFS first:
        sbin/start-dfs.sh
    Then start YARN:
        sbin/start-yarn.sh
3.7 Verify the startup
    Use the jps command; you should see all five daemons:
        27408 NameNode
        28218 Jps
        27643 SecondaryNameNode
        28066 NodeManager
        27803 ResourceManager
        27512 DataNode
    http://192.168.1.44:50070 (HDFS web UI)
    To use hostnames in the browser, add the Linux hostname-to-IP mapping to this file on Windows:
        C:\Windows\System32\drivers\etc\hosts
        192.168.1.119 itcast
    http://192.168.1.44:8088 (MR web UI)
4. Configure passwordless SSH login
    Generate the SSH key pair:
        cd ~    (go to your home directory)
        cd .ssh/
        ssh-keygen -t rsa    (press Enter four times)
    This generates two files: id_rsa (private key) and id_rsa.pub (public key).
    Copy the public key to the machine you want to log into without a password:
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    or
        ssh-copy-id -i localhost
Troubleshooting
    Error:
        [root@xp sbin]# ./start-all.sh
        This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
        Starting namenodes on [xp]
        xp: /usr/local/devTools/hadoop/hadoop-2.2.0/sbin/slaves.sh: line 60: ssh: command not found
        localhost: /usr/local/devTools/hadoop/hadoop-2.2.0/sbin/slaves.sh: line 60: ssh: command not found
        Starting secondary namenodes [0.0.0.0]
        0.0.0.0: /usr/local/devTools/hadoop/hadoop-2.2.0/sbin/slaves.sh: line 60: ssh: command not found
        starting yarn daemons
        resourcemanager running as process 4966. Stop it first.
        localhost: /usr/local/devTools/hadoop/hadoop-2.2.0/sbin/slaves.sh: line 60: ssh: command not found
    Fix:
        yum -y install openssh-clients
    After the install succeeds:
        [root@xp sbin]# ./start-all.sh
        This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
        Starting namenodes on [xp]
        root@xp's password:
        xp: namenode running as process 9823. Stop it first.
        root@localhost's password:
        localhost: starting datanode, logging to /usr/local/devTools/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-xp.out
        Starting secondary namenodes [0.0.0.0]
        root@0.0.0.0's password:
        0.0.0.0: secondarynamenode running as process 9430. Stop it first.
        starting yarn daemons
        resourcemanager running as process 4966. Stop it first.
        root@localhost's password:
        localhost: starting nodemanager, logging to /usr/local/devTools/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-xp.out
        [root@xp sbin]# jps
        9430 SecondaryNameNode
        10111 DataNode
        9823 NameNode
        10459 NodeManager
        4966 ResourceManager
        10543 Jps
    Error: clicking "Browse the filesystem" opens
        http://xp:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=127.0.0.1:9000
    but the page never loads.
    Fix: add the mapping to C:\Windows\System32\drivers\etc\HOSTS on Windows:
        192.168.1.188 xp
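The hand-edited config files in section 3.3 can also be generated non-interactively, which avoids copy-paste typos in the XML. Below is a minimal sketch for core-site.xml only; HADOOP_CONF is a variable introduced here for illustration (it defaults to a demo directory so the snippet is safe to try anywhere; on the real node point it at the Hadoop conf directory, e.g. /cloud/hadoop-2.2.0/etc/hadoop).

```shell
# Sketch: write the core-site.xml from section 3.3 with a heredoc.
# HADOOP_CONF is an assumed variable; override it to target a real install.
HADOOP_CONF="${HADOOP_CONF:-./hadoop-conf-demo}"
mkdir -p "$HADOOP_CONF"

# Quoted 'EOF' prevents the shell from expanding anything inside the XML.
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<configuration>
  <!-- Address of the HDFS master (namenode) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://xp:9000</value>
  </property>
  <!-- Directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/cloud/hadoop-2.2.0/tmp</value>
  </property>
</configuration>
EOF

echo "wrote $HADOOP_CONF/core-site.xml"
```

The same heredoc pattern extends to hdfs-site.xml, mapred-site.xml, and yarn-site.xml; restart the daemons (stop-dfs.sh / start-dfs.sh) after changing any of them.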