HADOOP Interview Questions and Answers

Part 2 : MapReduce

Question: What is Hadoop MapReduce?
Answer: Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.

A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
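As a rough illustration, below is a minimal job driver sketch using the org.apache.hadoop.mapreduce API of Hadoop 2.x. The class names WordCountDriver, WordCountMapper and WordCountReducer are made up for this example; a possible Mapper/Reducer implementation is sketched under the next question.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);     // map phase
        job.setReducerClass(WordCountReducer.class);   // reduce phase

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and output live on the cluster file system (typically HDFS).
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Submit the job and wait; the framework handles scheduling,
        // monitoring, and re-execution of failed tasks.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}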

Question: How does MapReduce work?
Answer: Applications typically implement the Mapper and Reducer interfaces to provide the map and reduce methods. These form the core of the job.

Mapper maps input key/value pairs to a set of intermediate key/value pairs. Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.

Reducer reduces a set of intermediate values which share a key to a smaller set of values.
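As a concrete sketch, here is the classic word-count pattern: a Mapper that emits a (word, 1) pair per word, and a Reducer that sums the counts for each word. The class names are the hypothetical ones used in the driver sketch above; in practice each public class goes in its own .java file.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Maps each input line (byte offset, line text) to intermediate (word, 1) pairs.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);   // one output pair per word occurrence
        }
    }
}

// Reduces all intermediate values sharing a key (a word) to a single count.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}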

Question: What is Replication Factor?
Answer: Every file in HDFS is stored in replicated form; the default replication factor is three.
A Hadoop cluster has many DataNodes, so if one DataNode fails the file is still available on another DataNode, and if an entire rack fails, a replica is still available on another rack.
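The default can be changed cluster-wide via the dfs.replication property in hdfs-site.xml, or per file. Below is a small sketch using the HDFS Java API; the file path is purely hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        // Picks up hdfs-site.xml, including dfs.replication (default 3).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/sample.txt");   // hypothetical path

        // Read the current replication factor of an existing file.
        FileStatus status = fs.getFileStatus(file);
        System.out.println("Current replication: " + status.getReplication());

        // Request a different replication factor for this one file.
        fs.setReplication(file, (short) 2);
    }
}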

Question: What is InputFormat in MapReduce?
Answer: The InputFormat defines how to read data from a file into the Mapper instances. Hadoop comes with several implementations of InputFormat; some work with text files and describe different ways in which the text files can be interpreted. Others, like SequenceFileInputFormat, are purpose-built for reading particular binary file formats.

More powerfully, you can define your own InputFormat implementations to format the input to your programs however you want. For example, the default TextInputFormat reads lines of text files. The key it emits for each record is the byte offset of the line read (as a LongWritable), and the value is the contents of the line up to the terminating ‘\n’ character (as a Text object).
Another important job of the InputFormat is to divide the input data sources (e.g., input files) into fragments that make up the inputs to individual map tasks. These fragments are called “splits” and are encapsulated in instances of the InputSplit interface.

Most files, for example, are split up on the boundaries of the underlying blocks in HDFS, and are represented by instances of the FileSplit class. Other files may be unsplittable, depending on application-specific data. Dividing up other data sources (e.g., tables from a database) into splits would be performed in a different, application-specific fashion.

The TextInputFormat divides files into splits strictly by byte offsets. It then reads individual lines of the file from the split as record inputs to the Mapper. The RecordReader associated with TextInputFormat must be robust enough to handle the fact that the splits do not necessarily correspond neatly to line-ending boundaries. In fact, the RecordReader will read past the theoretical end of a split to the end of a line in one record. The reader associated with the next split in the file will scan for the first full line in the split to begin processing that fragment.
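For illustration, the InputFormat is chosen on the Job, and for file-based formats the split size can be bounded. A sketch follows; the split-size values are illustrative only.

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class InputFormatSetup {
    static void configure(Job job) {
        // TextInputFormat is the default: key = byte offset (LongWritable),
        // value = line contents (Text).
        job.setInputFormatClass(TextInputFormat.class);

        // Alternatively, KeyValueTextInputFormat splits each line into a
        // (key, value) pair at the first tab character:
        // job.setInputFormatClass(KeyValueTextInputFormat.class);

        // For file-based InputFormats, split sizes can be bounded
        // (values here are illustrative).
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);    // 64 MB
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);   // 256 MB
    }
}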


Question: What is speculative execution in MapReduce?
Answer: One problem with the Hadoop system is that by dividing the tasks across many nodes, it is possible for a few slow nodes to rate-limit the rest of the program. For example, if one node has a slow disk controller, then it may be reading its input at only 10% the speed of all the other nodes. So when 99 map tasks are already complete, the system is still waiting for the final map task to check in, which takes much longer than all the other nodes.
By forcing tasks to run in isolation from one another, individual tasks do not know where their inputs come from. Tasks trust the Hadoop platform to just deliver the appropriate input. Therefore, the same input can be processed multiple times in parallel, to exploit differences in machine capabilities. As most of the tasks in a job are coming to a close, the Hadoop platform will schedule redundant copies of the remaining tasks across several nodes which do not have other work to perform. This process is known as speculative execution. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon the tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.
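Speculative execution can be toggled per job. Below is a sketch using the Hadoop 2.x property names; older releases used mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution instead, so check the property names for your version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SpeculativeExecutionSetup {
    static Job createJob() throws Exception {
        Configuration conf = new Configuration();

        // Disable (or enable) speculative launches of map and reduce tasks.
        conf.setBoolean("mapreduce.map.speculative", false);
        conf.setBoolean("mapreduce.reduce.speculative", false);

        return Job.getInstance(conf, "no-speculation job");
    }
}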

Question: After the Map phase finishes, the Hadoop framework performs “Partitioning, Shuffle and Sort”. Explain what happens in this phase.
Answer: After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling. A different subset of the intermediate key space is assigned to each reduce node; these subsets (known as “partitions”) are the inputs to the reduce tasks. Each map task may emit (key, value) pairs to any partition; all values for the same key are always reduced together regardless of which mapper is its origin. Therefore, the map nodes must all agree on where to send the different pieces of the intermediate data. The Partitioner class determines which partition a given (key, value) pair will go to. The default partitioner computes a hash value for the key and assigns the partition based on this result.

Sort: Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before being presented to the Reducer.

Question: If no custom partitioner is defined in Hadoop then how is data partitioned before it is sent to the reducer?
Answer: A MapReduce job usually has more than one reducer. When a mapper emits a key/value pair, it has to go to one of the reducers. Which reducer? The process of sending specific key/value pairs to specific reducers is called partitioning.
In Hadoop, the default partitioner is the hash partitioner, which hashes a record’s key to determine which partition the record belongs to. The number of partitions is equal to the number of reduce tasks for the job.
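For illustration, the logic of the default hash partitioner boils down to the following; a custom Partitioner (the class name below is made up) is written the same way and registered on the job.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Same idea as the default HashPartitioner: hash the key, mask off the sign
// bit, then take the result modulo the number of reduce tasks.
public class HashLikePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

// A custom partitioner would be registered on the job with:
//   job.setPartitionerClass(HashLikePartitioner.class);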

Question: What is a Combiner?
Answer: When a MapReduce job runs on a large dataset, the Mapper generates large chunks of intermediate data that are passed to the Reducer for further processing, which can lead to massive network congestion. The MapReduce framework offers a function known as the ‘Combiner’ that can play a crucial role in reducing this network congestion; in fact, the Combiner is also termed a ‘mini-reducer’. The primary job of a Combiner is to process the output of the Mapper on the map side, before it is passed on to the Reducer.
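A Combiner is registered on the job much like a Reducer. In a word-count style job (see the Mapper/Reducer sketch earlier), the reducer class itself can serve as the combiner because summing counts is associative and commutative; the class names below refer to that earlier hypothetical sketch.

import org.apache.hadoop.mapreduce.Job;

public class CombinerSetup {
    static void configure(Job job) {
        // Run a "mini-reduce" on each mapper's output before it crosses the
        // network. Reusing WordCountReducer is valid here because summing is
        // associative and commutative, and the combiner's input/output types
        // match the reducer's.
        job.setCombinerClass(WordCountReducer.class);
    }
}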

