Here I've listed a few simple steps to upgrade the Hadoop NameNode without losing the existing data in the cluster.
It's advisable to take a backup of the Hadoop metadata stored under the directory configured as dfs.namenode.name.dir (or dfs.name.dir in older releases).
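A minimal backup sketch, assuming the metadata lives under /data/hadoop/name (an example path only; substitute whatever your dfs.namenode.name.dir points to), run after HDFS has been stopped in step 2 below:
cp -r /data/hadoop/name /data/hadoop/name.backup-$(date +%F)   # example path; use your dfs.namenode.name.dir value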
Steps
1) stop-yarn.sh
2) stop-dfs.sh
3) Download and configure the latest version of Hadoop
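For example (hadoop-2.6.0 and the /opt install location are only illustrative assumptions; use whichever release and path you are actually upgrading to):
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz   # example version
tar -xzf hadoop-2.6.0.tar.gz -C /opt
export HADOOP_PREFIX=/opt/hadoop-2.6.0   # point HADOOP_PREFIX at the new install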
4) cd $HADOOP_PREFIX/etc/hadoop
In hdfs-site.xml, change dfs.namenode.name.dir and (in a pseudo-distributed setup) dfs.datanode.data.dir to point to the paths used by the old Hadoop version.
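A sketch of the two properties in hdfs-site.xml, assuming the old installation kept its metadata under /data/hadoop/name and its block data under /data/hadoop/data (substitute your actual old paths):
<!-- example paths only; point these at the old version's directories -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/hadoop/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/hadoop/data</value>
</property>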
5) ./sbin/hadoop-daemon.sh start namenode -upgrade
6) You will see the following message in the web UI at namenodeIP:50070: "Upgrade in progress. Not yet finalized." and safe mode will be ON.
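You can confirm the same from the command line with the standard dfsadmin switch:
./bin/hdfs dfsadmin -safemode get
which should report that safe mode is ON while the upgrade is pending.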
7) ./bin/hdfs dfsadmin -finalizeUpgrade
8) Inspect the NameNode log, which should contain a line like this:
Upgrade of local storage directories. old LV = -57; old CTime = 0. new LV = -57; new CTime = 1417064332016
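A quick way to locate that line, assuming the default log directory and file naming under $HADOOP_PREFIX/logs (adjust the path if your logs live elsewhere):
grep -i "upgrade of local storage" $HADOOP_PREFIX/logs/hadoop-*-namenode-*.log   # default log location assumed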
9) Safe mode will turn off automatically once all these steps are complete.
10) Start HDFS
./sbin/start-dfs.sh --config $HADOOP_PREFIX/etc/hadoop
11) Start YARN
./sbin/start-yarn.sh --config $HADOOP_PREFIX/etc/hadoop
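Once everything is back up, it is worth confirming that the existing data survived the upgrade, for example with the standard report and fsck commands:
./bin/hdfs dfsadmin -report   # all DataNodes should report in
./bin/hdfs fsck /             # filesystem should be HEALTHY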