HDFS Architecture

July 13, 2016

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients, and multiple DataNodes that act as slaves.

NameNode: The master node; it controls the whole cluster and manages the division of files into blocks. The typical block size is 64 MB or 128 MB, but it is configurable via a parameter in <HADOOP_INSTALL>/conf/hdfs-site.xml, as is the number of copies of each block to keep in the cluster (see the snippet after the list below). The NameNode keeps two data structures:

  1. Namespace (filename-to-blocks mapping)
  2. Inodes (block-to-DataNode mapping)

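For reference, here is a minimal hdfs-site.xml sketch setting both parameters. This assumes Hadoop 2.x property names (older releases use dfs.block.size instead of dfs.blocksize); the values shown are just the usual defaults.

    <configuration>
      <!-- Block size in bytes; 134217728 bytes = 128 MB -->
      <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
      </property>
      <!-- Number of replicas kept for each block -->
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>
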
The namespace and inodes are always held in memory so they can be referenced quickly. However, only the namespace is persisted to disk; on a cluster restart, the NameNode rebuilds the inodes from the block information it receives from each DataNode periodically. The NameNode never initiates an interaction with a DataNode; instead, DataNodes keep sending heartbeats to the NameNode, and in its responses the NameNode sends back the tasks each DataNode should perform. All communication between the NameNode and DataNodes happens over RPC.
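To see both mappings in action, here is a small sketch using the standard Java client API (org.apache.hadoop.fs); the path /data/sample.txt is hypothetical. The NameNode answers the location query entirely from its in-memory structures:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocations {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);      // connects to the NameNode over RPC

            // Namespace lookup: filename -> file metadata (size, blocks)
            FileStatus status = fs.getFileStatus(new Path("/data/sample.txt"));

            // Block-map lookup: one BlockLocation per block, listing
            // the DataNodes that hold a replica of it
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
            fs.close();
        }
    }
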
DataNode: DataNodes are responsible for serving read and write requests from the file system's clients.

Based on the NameNode's instructions, DataNodes also perform block creation, deletion, and replication.
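A sketch of the write path (again with a hypothetical path): the client asks the NameNode to allocate blocks, then streams the bytes to the chosen DataNodes directly, and those DataNodes replicate the block among themselves.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // create() asks the NameNode to allocate blocks; the data itself
            // is streamed to the DataNodes, not through the NameNode
            FSDataOutputStream out = fs.create(new Path("/data/output.txt")); // hypothetical path
            out.writeBytes("hello hdfs\n");
            out.close();  // flushes the last packet and completes the file
            fs.close();
        }
    }
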
Heartbeats: Every DataNode sends a heartbeat to the NameNode once every 3 seconds, and at a much longer configurable interval (6 hours by default) it also sends a block report listing all the blocks it stores.

The NameNode uses these heartbeats to detect DataNode failure: if no heartbeat arrives for roughly 10 minutes (by default), the DataNode is marked dead and its blocks are re-replicated on other DataNodes.
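Both intervals are set in hdfs-site.xml; a sketch with the Hadoop 2.x property names and their usual defaults (3 seconds and 6 hours, respectively):

    <configuration>
      <!-- Heartbeat interval, in seconds -->
      <property>
        <name>dfs.heartbeat.interval</name>
        <value>3</value>
      </property>
      <!-- Block report interval, in milliseconds (21600000 ms = 6 hours) -->
      <property>
        <name>dfs.blockreport.intervalMsec</name>
        <value>21600000</value>
      </property>
    </configuration>
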
Secondary NameNode: The Secondary NameNode periodically checkpoints the NameNode's metadata. It:

  1. Copies the FsImage and Edit Log from the NameNode to a temporary directory.
  2. Merges the FsImage and Edit Log into a new FsImage in that temporary directory.
  3. Uploads the new FsImage to the NameNode, after which the Edit Log on the NameNode is purged.

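How often this checkpoint runs is configurable as well; a sketch assuming the Hadoop 2.x property names (a checkpoint is triggered every hour, or earlier once a million uncheckpointed transactions accumulate):

    <configuration>
      <!-- Maximum time between checkpoints, in seconds (3600 s = 1 hour) -->
      <property>
        <name>dfs.namenode.checkpoint.period</name>
        <value>3600</value>
      </property>
      <!-- Checkpoint early once this many uncheckpointed transactions pile up -->
      <property>
        <name>dfs.namenode.checkpoint.txns</name>
        <value>1000000</value>
      </property>
    </configuration>
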
As we can see, the HDFS architecture scales linearly: a large number of clients can be served simply by adding more DataNodes to the cluster. And since a copy of every block always resides on other nodes, it is also inherently fault tolerant. Together, these properties allow for massively parallel computation in a distributed environment.
