[hadoop] http://localhost:50070 does not work

I have already installed Hadoop on my machine (Ubuntu 13.05), and now when I browse to localhost:50070 the browser says that the page does not exist.



  • step 1: bin/stop-all.sh
  • step 2: bin/hadoop namenode -format
  • step 3: bin/start-all.sh
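
As a shell sketch of the same sequence (assuming the scripts live under your Hadoop installation's bin/ directory; note that namenode -format erases existing HDFS metadata):

cd /path/to/hadoop            # your Hadoop installation directory
bin/stop-all.sh               # stop any daemons that are still running
bin/hadoop namenode -format   # WARNING: re-initializes HDFS metadata, existing data is lost
bin/start-all.sh              # start the HDFS and MapReduce daemons again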

First of all, you need to start the Hadoop nodes and trackers, simply by running start-all.sh in your terminal. To check that all the trackers and nodes have started, run the 'jps' command. If everything is fine and working, open the following URL in your browser: http://localhost:50070
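
For example, on a working pseudo-distributed Hadoop 1.x installation, jps typically shows something like the following (the process IDs here are just examples):

$ jps
2287 NameNode
2422 DataNode
2563 SecondaryNameNode
2678 JobTracker
2813 TaskTracker
2950 Jps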


After installing and configuring Hadoop, you can quickly run netstat -tulpn

to find the open ports. In the newer Hadoop 3.1.3 the web UI ports are as follows:

localhost:8042 (Hadoop NodeManager), localhost:9870 (HDFS NameNode), localhost:8088 (YARN ResourceManager)
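
For example, to narrow the netstat output down to the listening Hadoop (Java) processes, something like this should work:

sudo netstat -tulpn | grep java   # all ports the Hadoop JVMs are listening on
sudo netstat -tulpn | grep 9870   # check the HDFS NameNode web UI port specifically (Hadoop 3.x)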


Since Hadoop 3.0.0-alpha1 there has been a change in the default port configuration:

http://localhost:50070

was moved to

http://localhost:9870

see https://issues.apache.org/jira/browse/HDFS-9427
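
To confirm which address and port the NameNode web UI is bound to on your own installation, you can query the effective configuration (the hdfs getconf command is available in Hadoop 2.x and later):

hdfs getconf -confKey dfs.namenode.http-address
# typically prints 0.0.0.0:9870 on Hadoop 3.x, or 0.0.0.0:50070 on Hadoop 2.x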


port 50070 changed to 9870 in 3.0.0-alpha1

In fact, lots of other ports changed too. Look:

Namenode ports: 50470 --> 9871, 50070 --> 9870, 8020 --> 9820
Secondary NN ports: 50091 --> 9869, 50090 --> 9868
Datanode ports: 50020 --> 9867, 50010 --> 9866, 50475 --> 9865, 50075 --> 9864

Source
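
A quick way to check which of the old and new NameNode UI ports answers on your installation (assuming curl is installed):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070   # old port (2.x and earlier)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9870    # new port (3.x)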


First, check the Java processes that are running using "jps". If you are in pseudo-distributed mode you should have the following processes:

  • Namenode
  • Jobtracker
  • Tasktracker
  • Datanode
  • SecondaryNamenode

If you are missing any, use the restart commands:

$HADOOP_INSTALL/bin/stop-all.sh
$HADOOP_INSTALL/bin/start-all.sh
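
A small helper to spot which of those daemons is missing (just a sketch that greps the jps output for the Hadoop 1.x daemon names):

for daemon in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    if jps | grep -qw "$daemon"; then
        echo "$daemon is running"
    else
        echo "$daemon is NOT running"
    fi
done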

It can also be because you haven't opened that port on the machine:

iptables -A INPUT -p tcp --dport 50070 -j ACCEPT
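
On Ubuntu (as in the question), the equivalent with ufw would look roughly like this:

sudo ufw allow 50070/tcp   # open the NameNode web UI port
sudo ufw status            # verify the rule is active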

Try

stop-all.sh
hadoop namenode -format
start-all.sh
jps

Check that the NameNode and DataNode are running, and browse to

localhost:50070

If localhost:50070 is still not working, then you may need to allow the port. So, check:

netstat -anp | grep 50070

If you are running an old version of Hadoop (Hadoop 1.2), you will get an error because http://localhost:50070/dfshealth.html doesn't exist. Check http://localhost:50070/dfshealth.jsp, which works!
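
To see which of the two pages your version actually serves, a quick check with curl (200 means the page exists, 404 means it does not):

curl -s -o /dev/null -w "dfshealth.jsp:  %{http_code}\n" http://localhost:50070/dfshealth.jsp
curl -s -o /dev/null -w "dfshealth.html: %{http_code}\n" http://localhost:50070/dfshealth.html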


Enable the port in your system. For CentOS 7, follow the commands below:

1. firewall-cmd --get-active-zones

2. firewall-cmd --zone=dmz --add-port=50070/tcp --permanent

3. firewall-cmd --zone=public --add-port=50070/tcp --permanent

4. firewall-cmd --zone=dmz --add-port=9000/tcp --permanent

5. firewall-cmd --zone=public --add-port=9000/tcp --permanent

6. firewall-cmd --reload
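
To verify that the rules took effect after the reload, something like:

sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=dmz --list-ports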


There is a similar question and answer at: Start Hadoop 50075 Port is not resolved

Take a look at your core-site.xml file to determine which port it is set to. If it is 0, a random port will be picked, so be sure to set a fixed one.
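
As a sketch (Hadoop 2.x and later), you can print the effective value of a port-related key without opening the XML files, for example:

hdfs getconf -confKey fs.defaultFS                # e.g. hdfs://localhost:9000
hdfs getconf -confKey dfs.datanode.http.address   # e.g. 0.0.0.0:50075 -- the port must not be 0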


If you can open http://localhost:8088/cluster but can't open http://localhost:50070/, maybe the DataNode didn't start up or the NameNode wasn't formatted.

Hadoop version 2.6.4

  • step 1:

Check whether your NameNode has been formatted; if not, type:

$ stop-all.sh
$ /path/to/hdfs namenode -format
$ start-all.sh
  • step 2:

Check your NameNode tmp file path to see whether it is in /tmp. If the NameNode directory is in /tmp, you need to set the tmp path in core-site.xml, because every time you reboot or start your machine the files in /tmp are removed, so you need to set a permanent tmp dir path.

Add the following to core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/path/to/hadoop/tmp</value>
</property>
  • step 3:

After doing step 2, stop Hadoop, remove the NameNode tmp dir in /tmp, then type /path/to/hdfs namenode -format, and start Hadoop again. There is also a tmp dir in $HADOOP_HOME.
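
Putting steps 1-3 together, a hedged sketch of the whole sequence (the paths are placeholders for your own installation, and the /tmp path assumes the default hadoop.tmp.dir of /tmp/hadoop-<username>):

stop-all.sh                      # stop all daemons
rm -rf /tmp/hadoop-$USER         # remove the old NameNode data under /tmp (default location)
/path/to/hdfs namenode -format   # re-initialize the HDFS metadata
start-all.sh                     # start the daemons again
jps                              # confirm NameNode and DataNode are up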

If all the above don't help, please comment below!


For recent hadoop versions (I'm using 2.7.1)

The start/stop scripts are located in the sbin folder. The scripts are:

  • ./sbin/start-dfs.sh
  • ./sbin/stop-dfs.sh
  • ./sbin/start-yarn.sh
  • ./sbin/stop-yarn.sh

I didn't have to do anything with YARN, though, to get the NameNode server instance running.

Now, my mistake was that I didn't format HDFS for the NameNode server.

bin/hdfs namenode -format

I'm not quite sure exactly what that does at the moment, but it evidently prepares the storage that the NameNode server will use to operate.
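
Roughly, the flow this amounts to for 2.7.x, as a sketch run from the Hadoop installation directory:

bin/hdfs namenode -format   # one-time: initialize the NameNode metadata directory
sbin/start-dfs.sh           # start NameNode, DataNode and SecondaryNameNode
jps                         # verify the daemons are up
# then browse to http://localhost:50070 (still the default NameNode UI port in 2.7.x)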