Tags: hdfs, hadoop2, hadoop-2.7.2

Could an HDFS read/write process be suspended/resumed?


I have one question regarding the HDFS read/write process:

Assume we have a client (for the sake of the example, say the client is a Hadoop map task) that requests to read a file from HDFS or to write a file to HDFS. Which process actually performs the read/write from/to HDFS?

I know that there is a process for the Namenode and one for each Datanode, and I understand their responsibilities in the system in general, but I am confused about this scenario.

Is it the client's own process, or is there another process in HDFS, created and dedicated to this specific client, that accesses and reads/writes from/to HDFS?

Finally, if the latter is the case, is there any possibility that this process can be suspended for a while?

I have done some research, and the most relevant solutions I found were Oozie and the JobControl class from the Hadoop API.

But because I am not sure about the workflow above, I am not sure which process I would be suspending and resuming with these tools.

Is it the client's process, or a process running in HDFS to serve the client's request?


Solution

  • Have a look at these SE posts to understand how HDFS writes work:

    Hadoop 2.0 data write operation acknowledgement

    Hadoop file write

    Hadoop: HDFS File Writes & Reads

    Apart from file/block writes, the questions above also explain datanode failure scenarios.

    The current block on the good datanodes is given a new identity, which is communicated to the namenode, so that the partial block on the failed datanode will be deleted if the failed datanode recovers later on. The failed datanode is removed from the pipeline, and a new pipeline is constructed from the two good datanodes.

    A single datanode failure triggers corrective action by the framework.
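    The recovery steps above can be sketched with a toy simulation (the `Datanode` class, `write_block` helper, and generation-stamp bookkeeping here are invented for illustration; they are not Hadoop APIs):

    ```python
    # Toy simulation of HDFS write-pipeline recovery (illustrative only).

    class Datanode:
        def __init__(self, name):
            self.name = name
            self.alive = True
            self.blocks = {}          # block_id -> generation stamp

    def write_block(pipeline, block_id, gen_stamp):
        """Replicate a block to every live node in the pipeline."""
        for dn in pipeline:
            if dn.alive:
                dn.blocks[block_id] = gen_stamp

    # Start with a 3-node pipeline, as with the default replication factor of 3.
    dn1, dn2, dn3 = Datanode("dn1"), Datanode("dn2"), Datanode("dn3")
    pipeline = [dn1, dn2, dn3]
    write_block(pipeline, "blk_1", gen_stamp=1)

    # A datanode fails mid-write: it is removed from the pipeline, and the
    # block on the remaining good nodes gets a new identity (generation
    # stamp), so the stale partial replica on dn2 can be told apart and
    # deleted if dn2 recovers later.
    dn2.alive = False
    pipeline = [dn for dn in pipeline if dn.alive]   # new 2-node pipeline
    write_block(pipeline, "blk_1", gen_stamp=2)      # new identity for the block

    print([dn.name for dn in pipeline])              # ['dn1', 'dn3']
    print(dn1.blocks["blk_1"], dn2.blocks["blk_1"])  # 2 1  (dn2 holds the stale stamp)
    ```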

    Regarding your second query :

    YARN offers two main schedulers:

    FairScheduler

    CapacityScheduler
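
    The active scheduler is selected in `yarn-site.xml`. For example, switching to the FairScheduler looks roughly like this (standard YARN property and class names; verify against your Hadoop version):

    ```xml
    <!-- yarn-site.xml: pick the scheduler implementation -->
    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    ```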

    Have a look at this article on suspending and resuming YARN applications:

    In a multi-application cluster environment, jobs running inside Hadoop YARN may be of lower priority than jobs running outside Hadoop YARN, like HBase. To give way to other higher-priority jobs inside Hadoop, a user or some cluster-level resource scheduling service should be able to suspend and/or resume particular jobs within Hadoop YARN.

    When target jobs inside Hadoop are suspended, the already allocated and running task containers will continue to run until their completion or active preemption by other means, but no new containers will be allocated to the target jobs.

    In contrast, when suspended jobs are put into resume mode, they will continue to run from the previous job progress and have new task containers allocated to complete the rest of the jobs.
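
    A minimal sketch of these suspend/resume semantics (a hypothetical scheduler model, not the YARN API): a suspended job keeps its running containers but receives no new ones, and resuming re-enables allocation from the previous progress.

    ```python
    # Hypothetical model of job suspend/resume under a YARN-like scheduler.

    class Job:
        def __init__(self, total_tasks):
            self.total_tasks = total_tasks
            self.completed = 0
            self.running = []         # task ids of currently running containers
            self.suspended = False

        def allocate(self):
            """Scheduler tick: hand out one new container unless suspended."""
            next_task = self.completed + len(self.running)
            if not self.suspended and next_task < self.total_tasks:
                self.running.append(next_task)

        def finish_one(self):
            """A running container completes even while the job is suspended."""
            if self.running:
                self.running.pop(0)
                self.completed += 1

    job = Job(total_tasks=4)
    job.allocate(); job.allocate()      # two containers running
    job.suspended = True
    job.allocate()                      # no new container while suspended
    job.finish_one(); job.finish_one()  # existing containers run to completion
    print(job.completed, len(job.running))   # 2 0

    job.suspended = False               # resume: continue from previous progress
    job.allocate()
    print(job.completed, len(job.running))   # 2 1
    ```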