ERROR datanode.DataNode: Exception in secureMain


I was trying to install Hadoop on Windows. The NameNode is working fine, but the DataNode is not. The following error is shown on CMD again and again, even after several retries:

    2021-12-16 20:24:32,624 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/C:/Users/mtalha.umair/datanode
    2021-12-16 20:24:32,624 ERROR datanode.DataNode: Exception in secureMain
    org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid value configured for dfs.datanode.failed.volumes.tolerated - 1. Value configured is >= to the number of configured volumes (1).
        at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:176)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2799)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2714)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2756)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2900)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
    2021-12-16 20:24:32,640 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid value configured for dfs.datanode.failed.volumes.tolerated - 1. Value configured is >= to the number of configured volumes (1).
    2021-12-16 20:24:32,640 INFO datanode.DataNode: SHUTDOWN_MSG:

I have referred to many different articles, but to no avail. I have tried another version of Hadoop, but the problem remains. As I am just starting out, I can't fully understand the problem, so I need help.

These are my configurations:

- For core-site.xml

<configuration>  
  <property>  
    <name>fs.defaultFS</name>  
    <value>hdfs://localhost:9000</value>  
  </property> 
</configuration>

- For mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

- For yarn-site.xml

<configuration>  
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>  
  </property>  
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

- For hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/D:/big-data/hadoop-3.1.3/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>datanode</value>
  </property>
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

There is 1 best solution below

Matt Andruff:

Well, unfortunately, the reason this is failing is exactly what the message says. Let me try to say it another way.

  • dfs.datanode.failed.volumes.tolerated = 1
  • The number of data directories (dfs.datanode.data.dir) you have configured is 1.

You are saying you will tolerate losing every data drive you have: 1 drive is configured, and you will tolerate that 1 drive failing, which would leave the DataNode with zero working volumes. Since the tolerated count (1) is >= the number of configured volumes (1), this does not make sense and is why it is being raised as an issue.
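
Putting the two settings from the hdfs-site.xml above side by side makes the conflict visible (StorageLocationChecker, the class named in the stack trace, is what performs this comparison at startup):

<!-- from the hdfs-site.xml in the question -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>datanode</value>  <!-- exactly 1 configured volume -->
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>  <!-- 1 >= 1, so the check aborts startup -->
</property>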

You need to alter the configuration so there is a gap of at least 1 between the number of configured volumes and the tolerated failures (so that you can still have a running DataNode after a failure). Here are your options, with sketches after the list:

  • Configure more data volumes (2) with dfs.datanode.failed.volumes.tolerated set to 1. For example, store data on both your C: and D: drives.
  • Set dfs.datanode.failed.volumes.tolerated to 0 and keep your data volumes as they are (1).
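
As a sketch of the first option: dfs.datanode.data.dir accepts a comma-separated list of directories. The paths below are only examples, so substitute directories that actually exist on your machine:

<property>
  <name>dfs.datanode.data.dir</name>
  <!-- example paths only: any two directories on separate drives will do -->
  <value>/C:/big-data/datanode,/D:/big-data/datanode</value>
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>  <!-- 1 < 2 volumes, so the check passes -->
</property>

For the second option, set the tolerance to 0:

<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>  <!-- 0 < 1 volume, so the check passes -->
</property>

Since 0 is the default value of dfs.datanode.failed.volumes.tolerated, you can also simply delete that property from hdfs-site.xml.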