The Hadoop framework comes prepackaged with many Mapper and Reducer classes. This post explains some of these predefined Mappers and Reducers in Hadoop and shows examples that use them.
Predefined Mapper classes in Hadoop
- ChainMapper- The ChainMapper class allows you to use multiple Mapper classes within a single map task. Using this predefined class you can chain mappers so that the output of one mapper becomes the input of the next. That helps in breaking a complex task with lots of data processing into a chain of smaller tasks.
- FieldSelectionMapper- This class implements a mapper that can be used to perform field selections in a manner similar to Unix cut. The input data is treated as fields separated by a user-specified separator (the default is "\t"). The user can specify a list of fields that form the map output keys, and a list of fields that form the map output values. The field separator is set through the attribute "mapreduce.fieldsel.data.field.separator" and the map output field list spec through the attribute "mapreduce.fieldsel.map.output.key.value.fields.spec". The value is expected to be of the form "keyFieldsSpec:valueFieldsSpec". Both keyFieldsSpec and valueFieldsSpec are comma (,) separated field specs: fieldSpec,fieldSpec,fieldSpec ... Each field spec can be a simple number (e.g. 5) specifying a specific field, a range (like 2-5) specifying a range of fields, or an open range (like 3-) specifying all the fields starting from field 3. For example, "4,3,0,1:6,5,1-3,7-" specifies fields 4, 3, 0 and 1 for keys, and fields 6, 5, 1, 2, 3, 7 and above for values. By using this predefined class you don't need to write your own mapper with the split logic; you can configure FieldSelectionMapper with the required spec to split the record. Jump to FieldSelectionMapper example.
- InverseMapper- This predefined Mapper swaps keys and values.
- TokenCounterMapper- Tokenizes the input values and emits each word with a count of 1. This predefined class can be used where you want to sum values, as in a word count MapReduce program. Jump to TokenCounterMapper example.
- MultithreadedMapper- A multithreaded implementation of org.apache.hadoop.mapreduce.Mapper. This predefined mapper is useful if your job is more I/O bound than CPU bound (see the short sketch after this list).
- ValueAggregatorMapper- This class implements the generic mapper of Aggregate.
- WrappedMapper- This predefined mapper wraps a given one to allow custom Mapper.Context implementations.
- RegexMapper- A Mapper that extracts text matching a regular expression.
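To give an idea of how one of these predefined mappers is wired into a job, here is a minimal sketch (not an example shipped with Hadoop) that wraps the predefined TokenCounterMapper in a MultithreadedMapper. The class name MultithreadedWordCount and the thread count of 10 are just illustrative choices.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class MultithreadedWordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "MultithreadedWordCount");
        job.setJarByClass(MultithreadedWordCount.class);
        // MultithreadedMapper is set as the job's mapper ...
        job.setMapperClass(MultithreadedMapper.class);
        // ... and the mapper that actually processes the records is the predefined TokenCounterMapper
        MultithreadedMapper.setMapperClass(job, TokenCounterMapper.class);
        // Number of threads per map task (10 is only an illustrative value)
        MultithreadedMapper.setNumberOfThreads(job, 10);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

MultithreadedMapper.setMapperClass() tells the wrapper which mapper actually processes the records, while setNumberOfThreads() controls how many threads each map task runs.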
Predefined Reducer classes in Hadoop
- ChainReducer- The ChainReducer class allows you to chain multiple Mapper classes after a Reducer within the reduce task. For each record output by the Reducer, the Mapper classes are invoked in a chained fashion: the output of the Reducer becomes the input of the first Mapper, the output of the first becomes the input of the second, and so on until the last Mapper, whose output is written to the task's output.
- IntSumReducer- This predefined Reducer is used to sum the int values grouped with a key. You can use this predefined reducer where you want to get the sum of values grouped by keys. Jump to IntSumReducer example.
- LongSumReducer- This predefined Reducer is used to sum the long values grouped with a key.
- FieldSelectionReducer- This class implements a reducer that can be used to perform field selections in a manner similar to Unix cut. The input data is treated as fields separated by a user-specified separator (the default is "\t"). The user can specify a list of fields that form the reduce output keys, and a list of fields that form the reduce output values. The fields are the union of those from the key and those from the value. The field separator is set through the attribute "mapreduce.fieldsel.data.field.separator" and the reduce output field list spec through the attribute "mapreduce.fieldsel.reduce.output.key.value.fields.spec". The value is expected to be of the form "keyFieldsSpec:valueFieldsSpec", where keyFieldsSpec and valueFieldsSpec are comma (,) separated field specs: fieldSpec,fieldSpec,fieldSpec ... For example, "4,3,0,1:6,5,1-3,7-" specifies fields 4, 3, 0 and 1 for keys, and fields 6, 5, 1, 2, 3, 7 and above for values. A short configuration sketch follows this list.
- ValueAggregatorReducer- This class implements the generic reducer of Aggregate.
- WrappedReducer- A Reducer which wraps a given one to allow for custom Reducer.Context implementations.
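For completeness, here is a minimal configuration sketch for FieldSelectionReducer. It is only a fragment of a driver's run() method, and the spec "0:2" is just an illustrative value; the mapper feeding this reducer must emit Text keys and Text values, since FieldSelectionReducer is a Reducer<Text, Text, Text, Text>.

// Fragment of a driver's run() method (illustrative values)
Configuration conf = getConf();
// Fields in the data are separated by tabs
conf.set(FieldSelectionHelper.DATA_FIELD_SEPERATOR, "\t");
// Reduce output: field 0 becomes the key, field 2 becomes the value
conf.set(FieldSelectionHelper.REDUCE_OUTPUT_KEY_VALUE_SPEC, "0:2");

Job job = Job.getInstance(conf, "FieldSelection");
job.setReducerClass(FieldSelectionReducer.class);
// FieldSelectionReducer emits Text keys and Text values
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);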
Predefined Mapper and Reducer - ChainMapper and ChainReducer example
- To see an example of chaining MapReduce jobs using the predefined ChainMapper and ChainReducer classes, refer to Chaining MapReduce Job in Hadoop. A minimal wiring sketch follows.
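As a rough sketch of how the chaining API is wired in a driver, the fragment below chains two mappers before the reducer and one mapper after it. UpperCaseMapper, TokenizerMapper, SumReducer and FormatMapper are hypothetical classes used only to show the calls; they are not shipped with Hadoop, and input/output paths and the remaining job settings are omitted.

// Fragment of a driver's run() method (hypothetical mapper/reducer classes)
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "ChainExample");

// First mapper in the chain reads the raw input split: (LongWritable, Text) -> (LongWritable, Text)
ChainMapper.addMapper(job, UpperCaseMapper.class,
    LongWritable.class, Text.class, LongWritable.class, Text.class, new Configuration(false));
// Second mapper consumes the output of the first: (LongWritable, Text) -> (Text, IntWritable)
ChainMapper.addMapper(job, TokenizerMapper.class,
    LongWritable.class, Text.class, Text.class, IntWritable.class, new Configuration(false));

// Reducer for the reduce task: (Text, IntWritable) -> (Text, IntWritable)
ChainReducer.setReducer(job, SumReducer.class,
    Text.class, IntWritable.class, Text.class, IntWritable.class, new Configuration(false));
// Mapper chained after the reducer; its output is written to the job output: (Text, IntWritable) -> (Text, Text)
ChainReducer.addMapper(job, FormatMapper.class,
    Text.class, IntWritable.class, Text.class, Text.class, new Configuration(false));

Note how the output key/value types of each link match the input types of the next link in the chain.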
Predefined Mapper and Reducer - FieldSelectionMapper example
If you only need a few fields of the input file you can use FieldSelectionMapper for that. Let's say you have data in the following format for item, zone and total sales.
Item1	zone-1	234
Item1	zone-2	456
Item3	zone-2	123
And you need to find the total sales for each item, which means you'll have to extract field 0 and field 2 in your Mapper.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper;
import org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionMapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class SalesCalc extends Configured implements Tool {

    // Reduce function
    public static class TotalSalesReducer extends Reducer<Text, Text, Text, IntWritable> {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (Text val : values) {
                sum += Integer.parseInt(val.toString());
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        int exitFlag = ToolRunner.run(new SalesCalc(), args);
        System.exit(exitFlag);
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // setting the separator
        conf.set(FieldSelectionHelper.DATA_FIELD_SEPERATOR, "\t");
        // Configure the fields that are to be extracted
        conf.set(FieldSelectionHelper.MAP_OUTPUT_KEY_VALUE_SPEC, "0:2");
        Job job = Job.getInstance(conf, "Sales");
        job.setJarByClass(getClass());
        // setting predefined FieldSelectionMapper
        job.setMapperClass(FieldSelectionMapper.class);
        job.setReducerClass(TotalSalesReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
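With the sample data shown above (tab separated, to match the configured separator), the job should produce per-item totals along these lines:

Item1	690
Item3	123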
Predefined Mapper and Reducer - TokenCounterMapper and IntSumReducer example
You can write a word count MapReduce program using the predefined TokenCounterMapper and IntSumReducer classes. In that case you don't need to write any map or reduce logic; just configure these classes and run your MapReduce job.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int exitFlag = ToolRunner.run(new WordCount(), args);
        System.exit(exitFlag);
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(getClass());
        // Setting predefined mapper and reducer
        job.setMapperClass(TokenCounterMapper.class);
        job.setReducerClass(IntSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
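Assuming the class is packaged into a jar, say wordcount.jar (the jar name and paths here are only placeholders), the job can be run in the usual way:

hadoop jar wordcount.jar WordCount /input/path /output/path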
That's all for this topic Predefined Mapper And Reducer Classes in Hadoop. If you have any doubts or suggestions, please drop a comment. Thanks!