Hadoop (7) -- Writing MapReduce Programs in Java


1. Developing MapReduce programs in Java

2. Set the system environment variable HADOOP_HOME to point at the Hadoop installation directory (to avoid unnecessary trouble, make sure the path contains no spaces or Chinese characters).

  Add HADOOP_HOME/bin to the PATH environment variable (not required, just convenient).

3. If you develop on Windows, you need to add the Windows native library files:

  1. Overwrite HADOOP_HOME/bin with the shared bin directory from the course disk.

  2. If that still does not work, copy hadoop.dll from it into c:\windows\system32; a reboot may be required.
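If changing system-wide settings is inconvenient, Hadoop also honours the hadoop.home.dir Java system property when looking for the Windows binaries; a minimal sketch, assuming a hypothetical install path of C:/hadoop, placed before the job is created:

// Point Hadoop's Windows shell utilities (winutils.exe etc.) at the install directory.
// "C:/hadoop" is a hypothetical path; use your actual HADOOP_HOME location.
System.setProperty("hadoop.home.dir", "C:/hadoop");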

4. Create a new project and add the jar files Hadoop requires.

5. Code for WordMapper:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, IntWritable>.Context context)
            throws IOException, InterruptedException {
        // key is the byte offset of the line, value is the line itself
        String line = value.toString();
        String[] words = line.split(" ");
        // emit <word, 1> for every word on the line
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

6. Code for WordReducer:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends Reducer<Text, IntWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Reducer<Text, IntWritable, Text, LongWritable>.Context context) throws IOException, InterruptedException {
        // sum up the 1s emitted by the mappers for this word
        long count = 0;
        for (IntWritable v : values) {
            count += v.get();
        }
        context.write(key, new LongWritable(count));
    }
}

7. Code for the driver class Test:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Test {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);

        // the map output types differ from the final output types, so set both pairs
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(job, "c:/bigdata/hadoop/test/test.txt");
        // the output directory must not exist before the job runs
        FileOutputFormat.setOutputPath(job, new Path("c:/bigdata/hadoop/test/out/"));

        job.waitForCompletion(true);
    }
}

8. Pulling files from HDFS and running locally

FileInputFormat.setInputPaths(job, "hdfs://master:9000/wcinput/");
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/wcoutput2/"));

Note that this still pulls the HDFS files down and runs the job locally; if you watch the output you will see that the job ID contains the word "local".

Running this way does not need YARN at all (stop the YARN service yourself to verify).
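This works because mapreduce.framework.name defaults to "local". If you want to make the local runner explicit in the driver, a one-line sketch (purely optional, it only restates the default):

conf.set("mapreduce.framework.name", "local"); // "local" is already the default, shown only for clarity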

9. Running the job on the remote server

conf.set("fs.defaultFS", "hdfs://master:9000/");

conf.set("mapreduce.job.jar", "target/wc.jar");

conf.set("mapreduce.framework.name", "yarn");

conf.set("yarn.resourcemanager.hostname", "master");

conf.set("mapreduce.app-submission.cross-platform", "true");

FileInputFormat.setInputPaths(job, "/wcinput/");

FileOutputFormat.setOutputPath(job, new Path("/wcoutput3/"));

If you run into permission problems, add the VM argument -DHADOOP_USER_NAME=root to the run configuration.
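The same effect can also be achieved in code instead of a VM argument; a minimal sketch (root mirrors the example above, substitute your own HDFS user, and set it before any Hadoop class resolves the current user):

// Equivalent to passing -DHADOOP_USER_NAME=root as a VM argument;
// must run before the Configuration / FileSystem is first used.
System.setProperty("HADOOP_USER_NAME", "root");
Configuration conf = new Configuration();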

10. Alternatively, copy the four Hadoop configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) from the cluster into the src root directory; then no manual configuration is needed, because Hadoop looks for them on the classpath by default.

11. Or keep the configuration files somewhere else and add them with conf.addResource(...) from a classpath resource stream, as sketched below; using absolute paths is not recommended.
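A minimal sketch of that approach, assuming a hypothetical resource named conf/wc-site.xml on the classpath:

// Load an extra configuration file from the classpath instead of an absolute path.
// "conf/wc-site.xml" is a hypothetical name; use whatever your resource is called.
Configuration conf = new Configuration();
conf.addResource(Test.class.getClassLoader().getResourceAsStream("conf/wc-site.xml"));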

12. Setting up the Maven Hadoop project:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>mashibing.com</groupId>
    <artifactId>maven</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>wc</name>
    <description>hello mp</description>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <hadoop.version>2.7.3</hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
    </dependencies>
</project>

13. Configure log4j.properties and put it in the src/main/resources directory:

log4j.rootCategory=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[QC] %p [%t] %C.%M(%L) | %m%n

