Hadoop 2.x Quick Notes :: Part - 3


Bhaskar S 12/28/2014


Introduction

In Part-2 we laid out the steps to install, setup, and start-up a 3-node Hadoop 2.x cluster.

Now, the BIG question - why Hadoop 2.x?

To answer this question, let us refresh our memory on the Hadoop 1.x ecosystem.

The following Figure-1 depicts the Hadoop 1.x Ecosystem at a high level:

Hadoop 1.x Ecosystem
Figure-1

The following table describes the core components of the Hadoop 1.x ecosystem:

Component Description
NameNode It manages the file system namespace, that is, the metadata about the files in HDFS (file names, permissions, and the cluster nodes on which each data block of each file is located), and regulates access to the files in HDFS.

It is the single point of failure for the Hadoop 1.x system. If we lose the NameNode, everything will come to a standstill.

Typically, it is run on a master node of the cluster with lots of RAM as it stores the metadata for each of the files in HDFS in-memory
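The namespace-plus-block-locations idea can be sketched with two plain mappings. The following Python toy (all names hypothetical, not Hadoop's actual data structures) shows the kind of lookup the NameNode answers for clients:

```python
# Toy sketch of the NameNode's in-memory metadata: the namespace maps each
# file to its ordered list of data blocks, and a second map records which
# DataNodes hold a replica of each block. All names are illustrative.
namespace = {
    "/logs/access.log": ["blk_001", "blk_002"],
}
block_locations = {
    "blk_001": ["datanode1", "datanode2", "datanode3"],  # replication factor 3
    "blk_002": ["datanode2", "datanode3", "datanode1"],
}

def locate(path):
    # Answer a client's question: which blocks make up this file,
    # and on which nodes can each block be read?
    return [(blk, block_locations[blk]) for blk in namespace[path]]

print(locate("/logs/access.log"))
```

Because every file adds entries to these in-memory maps, the NameNode's RAM bounds the number of files the cluster can hold, which is why it runs on a well-provisioned master node.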

DataNode It manages the storage attached to the node on which it is running and is responsible for data block creation, deletion, and replication in HDFS. It is the workhorse of HDFS and performs all the read and write operations.

It constantly communicates with the NameNode to report on the list of data blocks it is storing.

In addition, it also communicates with the other DataNodes in the cluster for data block replication.

Typically, it runs on the slave nodes of the cluster

SecondaryNameNode It is responsible for periodically fetching the metadata file(s) from the NameNode, merging them, and loading the merged version back to the NameNode. The name is a bit misleading - it is really not a backup for the NameNode.

Typically, it is run on a master node of the cluster
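The checkpoint cycle can be illustrated with a toy merge of a metadata snapshot and an edit log (purely a sketch; the real fsimage and edits files are binary formats with many more operation types):

```python
# Toy checkpoint in the spirit of the SecondaryNameNode: merge the current
# metadata snapshot (fsimage) with the accumulated edit log to produce a
# fresh snapshot, after which the edit log can be truncated.
fsimage = {"/a": "meta-1", "/b": "meta-2"}   # path -> metadata
edits = [
    ("set", "/b", "meta-3"),
    ("set", "/c", "meta-4"),
    ("delete", "/a", None),
]

def checkpoint(image, log):
    merged = dict(image)            # start from the last snapshot
    for op, path, value in log:     # replay every logged change in order
        if op == "set":
            merged[path] = value
        elif op == "delete":
            merged.pop(path, None)
    return merged

new_fsimage = checkpoint(fsimage, edits)
print(new_fsimage)
```

Offloading this merge keeps the NameNode from replaying an ever-growing edit log at startup.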

JobTracker It is the entry point through which application clients submit a unit of work (also known as a Job) for data processing. The JobTracker first consults with the NameNode to figure out the locations (nodes) of the data blocks for the input files to be processed and then assigns the appropriate Map and Reduce tasks to the nodes that have the data blocks.

The JobTracker monitors the progress of the tasks and re-submits any failed tasks. Like the NameNode, it is a single point of failure in the Hadoop 1.x system. If we lose the JobTracker, no jobs can be executed.

Typically, it is run on a master node of the cluster
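The locality-aware assignment described above can be sketched as follows (hypothetical names; a real JobTracker also tracks task slots and rack topology before falling back to a remote node):

```python
# Toy data-locality placement: one map task per input block, assigned to a
# free node that holds a replica of that block when possible, mirroring
# the JobTracker consulting the NameNode before scheduling.
block_locations = {
    "blk_001": ["node1", "node2"],
    "blk_002": ["node2", "node3"],
}
free_nodes = {"node1", "node3"}   # nodes with spare task capacity

def assign(block):
    for node in block_locations[block]:
        if node in free_nodes:
            return node, "local"            # data-local task: compute moves to data
    return sorted(free_nodes)[0], "remote"  # block must be fetched over the network

print(assign("blk_001"))   # node1 holds a replica and is free
print(assign("blk_002"))   # node2 is busy, so node3 (also a replica holder) is used
```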

TaskTracker It is the workhorse of the Hadoop 1.x MapReduce distributed data processing system and executes the Map and Reduce tasks assigned by the JobTracker. It constantly communicates with the JobTracker to report status of the task(s) it is executing.

Typically, it runs on the slave nodes of the cluster

The following are the limitations of the Hadoop 1.x ecosystem:

The NameNode is a single point of failure - losing it brings the entire cluster to a standstill

The JobTracker is a single point of failure - losing it means no jobs can be executed

The JobTracker is responsible for both cluster-wide resource management and per-job scheduling and monitoring, which limits the scalability of the cluster

MapReduce is the only supported data processing model - frameworks based on other processing paradigms cannot share the cluster

The Hadoop 2.x ecosystem was developed to address the above-mentioned limitations and open the platform for future enhancements.

The following Figure-2 depicts the Hadoop 2.x Ecosystem at a high level:

Hadoop 2.x Ecosystem
Figure-2

The following table describes the core components of the Hadoop 2.x ecosystem:

Component Description
NameNode It manages the file system namespace, that is, the metadata about the files in HDFS (file names, permissions, and the cluster nodes on which each data block of each file is located), and regulates access to the files in HDFS.

Hadoop 2.x NameNode HA feature allows two separate nodes in the cluster to run NameNodes, with one NameNode in an active state and the other in a passive standby mode. The active NameNode is responsible for all client operations in the cluster and logs all state changes to a durable log file. The standby NameNode watches the durable log file and applies the changes to its own state. In the event of a failover, the standby NameNode will ensure that it has read and applied all the changes from the durable log before promoting itself as the active NameNode
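The failover sequence can be sketched as a standby that tails a shared log and catches up fully before promotion (an illustrative toy, not the actual shared-edit-log machinery):

```python
# Toy NameNode HA failover: the standby tails the shared durable log and,
# on failover, applies any remaining entries before taking over as active.
class ToyNameNode:
    def __init__(self):
        self.state = {}      # path -> metadata
        self.applied = 0     # how many log entries have been applied so far

    def catch_up(self, log):
        for path, meta in log[self.applied:]:   # only the unseen entries
            self.state[path] = meta
        self.applied = len(log)

shared_log = [("/a", 1), ("/b", 2)]
standby = ToyNameNode()
standby.catch_up(shared_log)    # standby continuously tails the log

shared_log.append(("/c", 3))    # active logs one more change, then fails
standby.catch_up(shared_log)    # read and apply ALL remaining changes first...
active = standby                # ...then promote to active
print(active.state)
```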

DataNode It manages the storage attached to the node on which it is running and is responsible for data block creation, deletion, and replication in HDFS. It is the workhorse of HDFS and performs all the read and write operations.

It constantly communicates with the NameNode to report on the list of data blocks it is storing.

In addition, it also communicates with the other DataNodes in the cluster for data block replication.

In a NameNode HA configuration, the DataNodes are configured with the location of both the active and standby NameNodes so that they send data block location information and heartbeats to both the NameNodes

SecondaryNameNode It is responsible for periodically fetching the metadata file(s) from the NameNode, merging them, and loading the merged version back to the NameNode.

There is no need for the SecondaryNameNode in a NameNode HA configuration

ResourceManager It is the ultimate authority that governs the entire cluster and manages the assignment of applications to underlying cluster resources.

There is one active instance of the ResourceManager and it typically runs on a master node of the cluster.

ResourceManager HA feature allows two separate nodes in the cluster to run ResourceManagers, with one ResourceManager in an active state and the other in a passive standby mode.

The ResourceManager consists of two main components - the Scheduler and the ApplicationsManager.

The Scheduler is responsible for scheduling and allocating resources to the various applications based on the resource requirements of the applications. The Scheduler performs its scheduling function based on the abstract notion of a resource Container which incorporates elements such as cpu, memory, etc. Currently, only memory is supported.

The Scheduler uses a pluggable architecture for the scheduling policy. The current implementations are the CapacityScheduler and the FairScheduler. The CapacityScheduler is the default.
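A minimal sketch of memory-only container scheduling, in the spirit of the description above (first-fit against per-node free memory, no queues or fairness; real schedulers such as the CapacityScheduler add hierarchical queues and capacities):

```python
# Toy Scheduler: grant Container requests first-fit against per-node free
# memory, mirroring the note that memory is the only supported resource
# dimension at this point.
node_memory_mb = {"node1": 4096, "node2": 4096}

def allocate(requests):
    # requests: list of (app_id, mem_mb); returns the granted containers
    granted = []
    for app, mem in requests:
        for node, free in node_memory_mb.items():
            if free >= mem:
                node_memory_mb[node] = free - mem   # reserve the memory
                granted.append((app, node, mem))
                break                               # move on to next request
    return granted

containers = allocate([("app1", 3072), ("app2", 2048), ("app3", 3072)])
print(containers)   # app3's request cannot be satisfied and stays pending
```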

The ApplicationsManager is responsible for accepting job-submissions from clients and negotiating the first Container for executing the per-application specific ApplicationMaster. It provides the service of monitoring and restarting the ApplicationMaster in the event of a failure

NodeManager It is the per-node agent that is responsible for the monitoring and management of the abstract resource Containers (which represent the per-node resources available to application tasks) over their life cycle, and for tracking the health of the node.

A Container represents an abstract notion of an allocated resource (cpu, memory, etc) in the cluster.

ApplicationMaster It is created for each application running in the cluster. It has the responsibility of negotiating appropriate resource Containers for the various tasks of the application (map, reduce, or other) from the ResourceManager (Scheduler) and, through the NodeManagers, tracking their status as well as monitoring their progress
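The negotiate-launch-track loop of an ApplicationMaster can be sketched as follows (illustrative names only; the real YARN protocol is asynchronous, heartbeat-based, and handles failed Containers):

```python
# Toy ApplicationMaster loop: for each pending task, negotiate a Container
# from the ResourceManager (here a stub callback) and record where it ran.
def run_application(tasks, grant_container):
    completed = []
    for task in tasks:
        node = grant_container(task)    # ask the Scheduler for a Container
        completed.append((task, node))  # the NodeManager executes the task there
    return completed

# Stub ResourceManager that grants every request on a single node:
result = run_application(["map1", "map2", "reduce1"], lambda task: "node1")
print(result)
```

Because this per-application logic lives outside the ResourceManager, frameworks other than MapReduce can supply their own ApplicationMaster and share the same cluster.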

References

Hadoop 2.x Quick Notes :: Part - 1

Hadoop 2.x Quick Notes :: Part - 2