Tuesday, March 5, 2013

Introduction to MapReduce: A Hadoop Programming Component

MapReduce: A Simple Introduction


MapReduce is a framework for processing parallelizable problems across large datasets using many machines. These machines are collectively referred to as a cluster (if all nodes are on the same local network and use similar hardware) or a grid (if the nodes are shared across geographically and administratively distributed systems and use more heterogeneous hardware). Computational processing can occur on data stored either in a file system (unstructured), for example HDFS (Hadoop Distributed File System), or in a database (structured), for example any RDBMS. MapReduce can take advantage of data locality, processing data on or near the storage assets to reduce data transmission.
Figure 1: MapReduce Flow Structure.

MapReduce: Logical view

The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type in one data domain, and returns a list of pairs in a different domain:
Map(k1, v1) → list(k2, v2)
The Map function is applied in parallel to every pair in the input dataset. This produces a list of pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists and groups them together, creating one group for each key.
The Reduce function is then applied in parallel to each group, which in turn produces a collection of values in the same domain:
Reduce(k2, list(v2)) → list(v3)

As an example of the utility of map: suppose you had a function toUpper(str) which returns an uppercase version of its input string. You could use this function with map to turn a list of strings into a list of uppercase strings. Note that we are not modifying the input string here: we are returning a new string that will form part of a new output list.
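In Java 8 stream terms, that toUpper example (a hypothetical helper, not part of any Hadoop API) might be sketched like this:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ToUpperDemo {
    // toUpper returns an uppercase copy; the input string is not modified.
    static String toUpper(String str) {
        return str.toUpperCase();
    }

    // Applying toUpper via map builds a brand-new list; the input list is untouched.
    static List<String> mapToUpper(List<String> input) {
        return input.stream().map(ToUpperDemo::toUpper).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> in = Arrays.asList("map", "reduce");
        System.out.println(mapToUpper(in)); // [MAP, REDUCE]
        System.out.println(in);             // original list unchanged: [map, reduce]
    }
}
```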
Each Reduce call typically produces either one value v3 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list. Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behaviour is different from the typical functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.


Figure 3: MapReduce Key-Value pair example.

Dataflow

The frozen part of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are:
  1. an input reader
  2. a Map function
  3. a partition function
  4. a compare function
  5. a Reduce function
  6. an output writer

1.      Input reader

The input reader divides the input into appropriately sized 'splits' (in practice typically 16 MB to 128 MB) and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system) and generates key/value pairs.
A common example reads a directory full of text files and returns each line as a record.
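A minimal sketch of that common example in plain Java (not the Hadoop InputFormat API): each line becomes one record, keyed by its byte offset in the text, which mirrors what Hadoop's text input produces.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LineRecordDemo {
    // Turn raw text into records: key = byte offset where the line starts,
    // value = the line itself.
    static Map<Long, String> toRecords(String text) {
        Map<Long, String> records = new LinkedHashMap<>();
        long offset = 0;
        for (String line : text.split("\n", -1)) {
            if (!line.isEmpty()) {
                records.put(offset, line);
            }
            offset += line.length() + 1; // +1 for the newline separator
        }
        return records;
    }

    public static void main(String[] args) {
        System.out.println(toRecords("hello world\nhadoop"));
        // {0=hello world, 12=hadoop}
    }
}
```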

2.      Map function

The Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other.
If the application is doing a word count, the map function would break the line into words and output a key/value pair for each word. Each output pair would contain the word as the key and the number of instances of that word in the line as the value.

3.      Partition function

Each Map function output is allocated to a particular reducer by the application's partition function for sharding purposes. The partition function is given the key and the number of reducers and returns the index of the desired reducer.
A typical default is to hash the key and use the hash value modulo the number of reducers. It is important to pick a partition function that gives an approximately uniform distribution of data per shard for load-balancing purposes, otherwise the MapReduce operation can be held up waiting for slow reducers (reducers assigned more than their share of data) to finish.
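That hash-modulo default can be sketched in a few lines (this mirrors the behaviour of Hadoop's default HashPartitioner, though the class below is just an illustration):

```java
public class HashPartitionDemo {
    // Hash the key, mask off the sign bit so the result is non-negative,
    // then take it modulo the number of reducers to get a reducer index.
    static int partition(String key, int numReducers) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        for (String key : new String[] {"apple", "banana", "cherry"}) {
            System.out.println(key + " -> reducer " + partition(key, 4));
        }
    }
}
```

The same key always hashes to the same reducer, which is what guarantees that all values for one key end up in a single Reduce call.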
Between the map and reduce stages, the data is shuffled (parallel-sorted / exchanged between nodes) in order to move the data from the map node that produced it to the shard in which it will be reduced. The shuffle can sometimes take longer than the computation time depending on network bandwidth, CPU speeds, data produced and time taken by map and reduce computations.

4.      Comparison function

The input for each Reduce is pulled from the machine where the Map ran and sorted using the application's comparison function.
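A sketch of what that sort accomplishes, using a plain comparator over (key, value) pairs: once sorted by key, all values for the same key sit next to each other, ready to be handed to Reduce one group at a time.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // Sort map outputs by key so equal keys become contiguous.
    static List<String[]> sortByKey(List<String[]> pairs) {
        List<String[]> sorted = new ArrayList<>(pairs);
        sorted.sort(Comparator.comparing((String[] p) -> p[0]));
        return sorted;
    }

    public static void main(String[] args) {
        List<String[]> pairs = Arrays.asList(
                new String[] {"world", "1"},
                new String[] {"hello", "1"},
                new String[] {"world", "1"});
        for (String[] p : sortByKey(pairs)) {
            System.out.println(p[0] + " " + p[1]);
        }
        // "hello" first, then the two "world" pairs together
    }
}
```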

5.      Reduce function

The framework calls the application's Reduce function once for each unique key in the sorted order. The Reduce can iterate through the values that are associated with that key and produce zero or more outputs.
In the word count example, the Reduce function takes the input values, sums them and generates a single output of the word and the final sum.

6.      Output writer

The Output Writer writes the output of the Reduce to the stable storage, usually a distributed file system.

Example 1: MapReduce All Phases

Figure 4: MapReduce All Phases.

Example 2: The Programming View

The prototypical MapReduce example counts the appearance of each word in a set of documents.
The Mapper:
public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
       private final static IntWritable one = new IntWritable(1);
       private final Text word = new Text();

       @Override
       public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
              String line = value.toString();
              StringTokenizer tokenizer = new StringTokenizer(line);
              while (tokenizer.hasMoreTokens()) {
                     word.set(tokenizer.nextToken());
                     context.write(word, one); // emit (word, 1) for every token
              }
       }
}

The Reducer:
public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
       @Override
       public void reduce(Text key, Iterable<IntWritable> values, Context context)
                     throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable val : values) {
                     sum += val.get(); // add up all the 1s emitted for this word
              }
              context.write(key, new IntWritable(sum));
       }
}

Here, each document is split into words, and each word is counted by the map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to reduce, thus this function just needs to sum all of its input values to find the total appearances of that word.
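The whole pipeline can also be simulated in plain Java, with no Hadoop classes, to see how the phases fit together: map emits a (word, 1) pair per token, the framework's grouping collects equal keys, and reduce sums each group (here `merge` plays both roles at once).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountDemo {
    // Simulates map (emit (word, 1) per token), shuffle (group by key),
    // and reduce (sum the 1s) over a list of input lines.
    static Map<String, Integer> wordCount(String... lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {                      // one map call per record
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                // merge() groups equal keys and accumulates their values,
                // which is exactly what shuffle + reduce do in the framework.
                counts.merge(tokenizer.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount("hello world", "hello hadoop"));
        // hello -> 2, world -> 1, hadoop -> 1
    }
}
```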

Uses:

MapReduce is useful in a wide range of applications, including distributed pattern-based searching, distributed sorting, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning, and statistical machine translation. Moreover, the MapReduce model has been adapted to several computing environments like multi-core and many-core systems, desktop grids, volunteer computing environments, dynamic cloud environments, and mobile environments.

Limitations

1.         For maximum parallelism, you need the Maps and Reduces to be stateless, to not depend on any data generated in the same MapReduce job. You cannot control the order in which the maps run, or the reductions.
2.         It is very inefficient if you are repeating similar searches again and again. A database with an index will always be faster than running an MR job over unindexed data. However, if that index needs to be regenerated whenever data is added, and data is being added continually, MR jobs may have an edge. That inefficiency can be measured in both CPU time and power consumed.
3.         In the Hadoop implementation Reduce operations do not take place until all the Maps are complete (or have failed and been skipped). As a result, you do not get any data back until the entire mapping has finished.
4.         There is a general assumption that the output of the reduce is smaller than the input to the Map. That is, you are taking a large datasource and generating smaller final values.

