
A Simple Hadoop MapReduce Example (WordCount)


1. Map/Reduce

package com.mrtest.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

/**
 * Word count
 * @author xuyao
 */
public class WordCount {

    public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // Split each input line into whitespace-separated tokens and emit (word, 1)
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                context.write(this.word, ONE);
            }
        }
    }

    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            // Sum the counts emitted for the same word
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }

    public static void main(String[] args) throws Exception {
        // Point hadoop.home.dir at a local Hadoop distribution that contains
        // winutils.exe (Windows workaround; see "Problems encountered" below)
        System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.7.3");

        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration, "WordCount");
        job.setJarByClass(WordCount.class);
        // Set the Mapper and Reducer classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // Set the reduce output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Set the map output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        String[] otherArgs = new String[]{"input/dream.txt", "output"};
        // Delete the output directory if it already exists; otherwise the job
        // fails with FileAlreadyExistsException (see "Problems encountered" below)
        Path path = new Path(otherArgs[1]);
        FileSystem fileSystem = path.getFileSystem(configuration);
        if (fileSystem.exists(path)) {
            fileSystem.delete(path, true);
        }
        // Set where the input and output data live
        FileInputFormat.setInputPaths(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

2. Results
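As an illustration (the input below is hypothetical, not the article's actual dream.txt): if input/dream.txt contained the single line "i have a dream i have", the job would write an output/part-r-00000 file with one tab-separated count per word, sorted by key:

a	1
dream	1
have	2
i	2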

3. Problems Encountered

Error message 1:

HADOOP_HOME or hadoop.home.dir are not set

Solution:

1. Download the winutils.exe package from https://github.com/SweetInk/hadoop-common-bin, then place it in the bin directory of the Hadoop package (that is, on Windows you need to unpack the Hadoop distribution somewhere).

2. Add the following statement at the beginning of the program (a Windows-only variant is sketched after this list):

System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.7.3");

3. Add the hadoop.dll file to the Windows system32 folder (C:\Windows\System32).
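A small variant of step 2, as a sketch only: the path D:\hadoop-common-2.7.3 is this article's example, and the os.name/HADOOP_HOME checks are my assumption, not from the original. Placed at the top of main(), it applies the workaround only on Windows and only when HADOOP_HOME is unset:

// Sketch (assumption, not from the original article): apply the
// hadoop.home.dir workaround only on Windows, and only when the
// HADOOP_HOME environment variable is not already set.
if (System.getProperty("os.name").toLowerCase().contains("win")
        && System.getenv("HADOOP_HOME") == null) {
    System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.7.3");
}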

Error message 2:

org.apache.hadoop.mapred.FileAlreadyExistsException

Solution:

The output directory is created by the program at runtime; just delete it before re-running.
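This is exactly what the main() above already does before submitting the job; the relevant lines, shown here for quick reuse:

// Delete the old output directory recursively before the job runs,
// so FileOutputFormat does not throw FileAlreadyExistsException
Path path = new Path("output");
FileSystem fileSystem = path.getFileSystem(configuration);
if (fileSystem.exists(path)) {
    fileSystem.delete(path, true);  // true = delete recursively
}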

Error message 3:

java.lang.UnsatisfiedLinkError

Solution:

The cause is a Hadoop version mismatch: the program's Hadoop jar dependencies are version 2.7.3, while the hadoop.home directory I had pointed to earlier was version 2.2.0.

Re-download the Hadoop package and reconfigure HADOOP_HOME, or switch the jar dependencies to the matching version (a quick version check is sketched below).
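A quick way to spot such a mismatch is to print the version of the Hadoop jars actually on the classpath. This is a minimal sketch; VersionInfo is part of hadoop-common, but the wrapper class here is illustrative, not from the original article:

import org.apache.hadoop.util.VersionInfo;

// Illustrative helper (assumption, not from the original article):
// prints the version of the Hadoop jars on the classpath and the
// configured hadoop.home.dir, so a 2.7.3-vs-2.2.0 mismatch like the
// one above is easy to spot.
public class PrintHadoopVersion {
    public static void main(String[] args) {
        System.out.println("Hadoop jar version: " + VersionInfo.getVersion());
        System.out.println("hadoop.home.dir: " + System.getProperty("hadoop.home.dir"));
    }
}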

Pay close attention to the details! Explanations for this error online vary wildly, and it took me a long time to track down the real cause.