Hadoop Series (Part 1)

  1. Installation

Environment: CentOS 7 + JDK 1.8

@CentOS network configuration

Edit the network configuration with vi /etc/sysconfig/network-scripts/ifcfg-ens33 and set the interface to start on boot; a sketch of the file follows.
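A minimal example of what ifcfg-ens33 might contain for a static address (the IP address, netmask, and gateway below are placeholders for your own network):

TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1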

 

Change the hostname: vi /etc/sysconfig/network

Apply it with the command hostname master

 

Edit the IP-to-hostname mappings: vi /etc/hosts, for example:
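A sketch of the mappings (the addresses are placeholders and should match the ones configured on each node above):

192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2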

 

Configure passwordless SSH login:

ssh-keygen -t rsa

 

After the keys are generated, the ~/.ssh/ directory contains two files: id_rsa (the private key) and id_rsa.pub (the public key). Append the public key to authorized_keys and give authorized_keys 600 permissions.

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

Also append the id_rsa.pub from slave1 and slave2 to master's authorized_keys.

 

Then distribute master's ~/.ssh/authorized_keys to slave1 and slave2:

scp ~/.ssh/authorized_keys root@slave1:~/.ssh/

 

Verify: logging in from master to slave1 and slave2 should no longer require a password.

ssh slave1

@Install JDK 1.8

Upload the JDK to /root/java (on master, slave1, and slave2).

 

Configure the environment variables:

vi /etc/profile

Add the following entries:

export JAVA_HOME=/root/java/jdk1.8.0_191

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$JAVA_HOME/bin:$PATH

Apply the changes:

source /etc/profile

 

 

Install MySQL (on master)

Check whether MySQL (or MariaDB) is already installed on the system; if it is, uninstall it first.

Check whether it is installed: rpm -qa | grep mariadb

Uninstall: rpm -e <package-name> --nodeps

 

Download mysql-5.7.24-1.el7.x86_64.rpm-bundle.tar and upload it to master

(download link: https://dev.mysql.com/downloads/file/?id=481064)

 

Extract it: tar -xvf mysql-5.7.24-1.el7.x86_64.rpm-bundle.tar

Install the MySQL rpm packages in order:

rpm -ivh mysql-community-common-5.7.24-1.el7.x86_64.rpm

rpm -ivh mysql-community-libs-5.7.24-1.el7.x86_64.rpm

rpm -ivh mysql-community-client-5.7.24-1.el7.x86_64.rpm

rpm -ivh mysql-community-server-5.7.24-1.el7.x86_64.rpm

Installing the last package may fail with a missing-dependency error.

In that case, just install perl first:

yum install perl

Then install mysql-community-server-5.7.24-1.el7.x86_64.rpm again.

 

MySQL can now be started: systemctl restart mysqld.

At this point you cannot log in because there is no known password, so do the following:

Add a line at the very end of /etc/my.cnf:

skip-grant-tables, then save and exit.

Restart MySQL: systemctl restart mysqld.

Log in to MySQL: mysql -u root -p
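Once in, a common next step is to reset the root password and then remove skip-grant-tables again. A hedged sketch for MySQL 5.7 (the password below is only a placeholder):

use mysql;
update user set authentication_string = password('YourNewPass123!') where user = 'root';
flush privileges;
exit;

Afterwards, delete the skip-grant-tables line from /etc/my.cnf and restart mysqld once more.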

 

 

Install Hadoop

Install it on master first; once that is done, simply copy the installation from master to the slaves.

 

Configure the Hadoop environment variables: vi /etc/profile (a sketch follows).
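A minimal sketch of the entries, assuming Hadoop 2.8.4 is unpacked under /root/hadoop as in the rest of this post:

export HADOOP_HOME=/root/hadoop/hadoop-2.8.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run source /etc/profile afterwards to apply it.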

Preparation for building the Hadoop cluster

Create the following directories on the master node (see the command after the list):

/root/hadoop/name

/root/hadoop/data

/root/hadoop/temp
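For example, with the paths listed above:

mkdir -p /root/hadoop/name /root/hadoop/data /root/hadoop/temp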

 

Edit the Hadoop configuration files

under /root/hadoop/hadoop-2.8.4/etc/hadoop:

 

hadoop-env.sh: set the JDK path

 

yarn-env.sh: set the JDK path (a sketch for both files follows)
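In both files this usually just means pointing JAVA_HOME at the JDK installed earlier, for example:

export JAVA_HOME=/root/java/jdk1.8.0_191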

 

slaves: remove localhost and add the slave1 and slave2 worker nodes
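With this cluster layout the slaves file then simply reads:

slave1
slave2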

 

core-site.xml:

 

<configuration>

  <property>

    <name>fs.defaultFS</name>

    <value>hdfs://master:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/root/hadoop/temp</value>

  </property>

</configuration>

 

hdfs-site.xml

 

<configuration>

  <property>

    <name>dfs.namenode.name.dir</name>

    <value>file:/root/hadoop/name</value>

  </property>

  <property>

    <name>dfs.datanode.data.dir</name>

    <value>file:/root/hadoop/data</value>

  </property>

  <property>

    <name>dfs.replication</name>

    <value>3</value>

  </property>

<property>

    <name>dfs.webhdfs.enabled</name>

    <value>true</value>

  </property>

</configuration>

 

mapred-site.xml

 

<configuration>

  <property>

    <name>mapreduce.framework.name</name>

    <value>yarn</value>

  </property>

</configuration>

 

yarn-site.xml

  

yarn-site.xml property reference: https://www.cnblogs.com/yinchengzhe/p/5142659.html
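A minimal yarn-site.xml sketch that matches this cluster layout (these are common baseline properties, assumed rather than copied from the original setup):

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
</configuration>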

 

Copy the configured Hadoop installation to the slave1 and slave2 nodes:

scp -r /root/hadoop/hadoop-2.8.4/ root@slave1:/root/hadoop/

 

scp -r /root/hadoop/hadoop-2.8.4/ root@slave2:/root/hadoop/

 

Then configure the Hadoop environment variables on slave1 and slave2 as well.

 

Start Hadoop

Format the NameNode:

Start the cluster (a sketch of the typical commands follows):
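Assuming the standard Hadoop 2.x scripts are on the PATH (bin and sbin, as configured above), a typical sequence on master is:

hdfs namenode -format
start-dfs.sh
start-yarn.sh

Running jps on each node can then confirm that the NameNode/ResourceManager (master) and DataNode/NodeManager (slaves) processes are up.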

 

 

Hadoop API

JARs required for HDFS: hadoop-xxx\share\hadoop\common\hadoop-common-2.8.4.jar

           hadoop-xxx\share\hadoop\hdfs\lib\*.jar

           hadoop-xxx\share\hadoop\mapreduce\lib\hamcrest-core-1.3.jar

           hadoop-xxx\share\hadoop\common\lib\commons-collections-3.2.2.jar

           hadoop-xxx\share\hadoop\common\lib\servlet-api-2.5.jar

           hadoop-xxx\share\hadoop\common\lib\slf4j-api-1.7.10.jar

           hadoop-xxx\share\hadoop\common\lib\slf4j-log4j12-1.7.10.jar

           hadoop-xxx\share\hadoop\common\lib\commons-configuration-1.6.jar

           hadoop-xxx\share\hadoop\common\lib\hadoop-auth-2.8.4.jar

 

 

HDFS example:

package dfs;

 

import java.io.File;
import java.io.FileInputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.io.IOUtils;
import org.junit.Test;

 

public class HdfsTools {

 

Configuration config = new Configuration();

FileSystem sf = null;

 

// fs.defaultFS for a local pseudo-distributed setup; point this at hdfs://master:9000 to target the cluster built above
private static final String defaultFS = "hdfs://localhost:9000";

 

/**

 * Creates the FileSystem; in practice this returns an org.apache.hadoop.hdfs.DistributedFileSystem.

 */

private void init(){

try{

config.set("fs.defaultFS", defaultFS);

sf = FileSystem.get(config);

}catch(Exception e){
e.printStackTrace();
}

}

 

/**

 * Create a directory on HDFS

 * @throws Exception

 */

@Test

public void mkdir() throws Exception{

init();

sf.mkdirs(new Path("/a"));

}

 

/**

 * Upload a local file to HDFS

 * @throws Exception

 */

@Test

public void put() throws Exception{

init();

sf.copyFromLocalFile(new Path

("F:\\hadoop-2.8.4\\share\\hadoop\\common\\lib\\commons-collections-3.2.2.jar"),

new Path("/aaaa"));

 

/*sf.copyFromLocalFile(true,new Path

("F:\\hadoop-2.8.4\\share\\hadoop\\common\\lib\\commons-collections-3.2.2.jar"),

new Path("/aaaa"));*/

}

 

/**

 * Download a file from HDFS to the local file system

 * @throws Exception

 */

@Test

public void download() throws Exception{

init();

sf.copyToLocalFile(new Path("/aaaa/commons-collections-3.2.2.jar"), new Path("F:"));

sf.copyToLocalFile(true,new Path("/aaaa/commons-collections-3.2.2.jar"), new Path("F:"));

}

 

/**

 * Recursively list the files under a path

 * @throws Exception

 */

@Test

public void find() throws Exception{

init();

RemoteIterator<LocatedFileStatus> remoteIterator = sf.listFiles(new Path("/"), true);

while(remoteIterator.hasNext()) {

LocatedFileStatus fileStatus = remoteIterator.next();

System.out.println("path:"+fileStatus.getPath().toString()+" size:"+fileStatus.getLen()/1024);

 

}

}

 

/**

 * Delete a directory on HDFS

 * @throws Exception

 */

@Test

public void remove() throws Exception{

init();

// note: deleteOnExit only marks the path for deletion when the FileSystem is closed;
// use sf.delete(new Path("/a"), true) for an immediate recursive delete
sf.deleteOnExit(new Path("/a"));

}

 

/**

 * Upload a local file to HDFS from a Windows environment using streams

 * @throws Exception

 */

public void putDfsForWindow() throws Exception{
init();

FileInputStream fis = new FileInputStream(new File("D:\\hello.txt"));

OutputStream os = sf.create(new Path("/test/hello1.txt"));

IOUtils.copyBytes(fis, os, 4096, true);

}

}

 

MapReduce example:

Import the JARs:

hadoop-xxx\share\hadoop\mapreduce\*

hadoop-xxx\share\hadoop\yarn\*

package mr;

 

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;

import org.apache.hadoop.io.Text;

 

 

/**

 * The four Mapper type parameters: input key (byte offset), input value (a line of text), output key type, output value type

 * @author sunzy

 *

 */

public class Mapper extends 

org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Text, LongWritable>{

 

@Override

protected void map(

LongWritable key,

Text value,

org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Text, LongWritable>.Context context)

throws IOException, InterruptedException {

String[] line = value.toString().split(" ");

 

for(String word : line){

context.write(new Text(word), new LongWritable(1));

}

}

 

}

 

 

package mr;

 

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;

import org.apache.hadoop.io.Text;

import org.apache.hadoop.mapreduce.Reducer;

 

public class Reduce extends Reducer<Text, LongWritable, Text, LongWritable>{

 

@Override

protected void reduce(Text key, Iterable<LongWritable> values,

Reducer<Text, LongWritable, Text, LongWritable>.Context context)

throws IOException, InterruptedException {

long count = 0;

for(LongWritable value : values){

count += 1;

}

 

context.write(key, new LongWritable(count));

}

 

}

 

 

package mr;

 

import org.apache.hadoop.conf.Configuration;

import org.apache.hadoop.fs.Path;

import org.apache.hadoop.io.LongWritable;

import org.apache.hadoop.io.Text;

import org.apache.hadoop.mapreduce.Job;

import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import org.junit.Test;

 

 

public class MRJobRunner{

 

@Test

public void run() throws Exception{

Configuration config = new Configuration();

config.set("fs.defaultFS", "hdfs://localhost:9000");

 

Job job = Job.getInstance(config);

 

job.setJarByClass(MRJobRunner.class);

 

job.setMapperClass(Mapper.class);

job.setReducerClass(Reduce.class);

 

job.setOutputKeyClass(Text.class);

job.setOutputValueClass(LongWritable.class);

 

job.setMapOutputKeyClass(Text.class);

job.setMapOutputValueClass(LongWritable.class);

 

FileInputFormat.setInputPaths(job, new Path("/test"));

// the output directory must not already exist, or the job will fail
FileOutputFormat.setOutputPath(job, new Path("/t09"));

 

job.waitForCompletion(true);

}

 

}

 

HDFS FSImage and edit-log merge (checkpoint) process:

Flow of a job submitted to YARN for execution:

 
