Friday, October 17, 2014

Error and Solution: Detailed step-by-step instructions on Spark over Yarn - Part 2

Exceptions during Apache Spark deployment
This is a continuation post; you can find Spark issues Part 1 here.

I have a Hadoop cluster set up and decided to deploy Apache Spark over YARN. As a test case I tried different options to submit a Spark job. Here I discuss a few exceptions/issues I hit during Spark deployment on YARN.

Error 1)

:19: error: value saveAsTextFile is not a member of Array[(String, Int)]
       arr.saveAsTextFile("hdfs://localhost:9000/sparkhadoop/sp1")

Steps to reproduce

val file = sc.textFile("hdfs://master:9000/sparkdata/file2.txt")

val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

val arr = counts.collect()

arr.saveAsTextFile("hdfs://master:9000/sparkhadoop/sp1")

Solution

The error is raised on the bolded line above. It happens because the code tries to save a plain Scala array to HDFS: saveAsTextFile is a method on RDDs (Resilient Distributed Datasets), not on local collections, so a value must be an RDD before Spark methods can be called on it. In this case, just convert the array back into an RDD (replace the bolded line with):
sc.makeRDD(arr).saveAsTextFile("hdfs://master:9000/sparkhadoop/sp1")
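For completeness, the fixed word count can also be run non-interactively by piping the Scala into spark-shell with a here-doc. This is only a sketch using the install path, master URL, and HDFS paths from the post above; yours may differ. Note that once counts is kept as an RDD, it can be saved directly and no collect() is needed at all.

```shell
# Sketch: run the corrected word count end-to-end (paths as used in this post)
/opt/spark-1.1.0-bin-hadoop2.4/bin/spark-shell --master local[2] <<'EOF'
val file = sc.textFile("hdfs://master:9000/sparkdata/file2.txt")
val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
// counts is already an RDD, so it can be saved directly -- no collect() required
counts.saveAsTextFile("hdfs://master:9000/sparkhadoop/sp1")
EOF
```

Saving counts directly also avoids pulling the whole result onto the driver, which is what collect() does.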


Error 2)

When I ran the above word count example, I also got this error:
WARN TaskSetManager: Lost task 1.1 in stage 5.0 (TID 47, boss): org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1474416393-10.184.36.194-1406741613979:blk_1073742690_1866 file=sparkdata/file2.txt

Solution

I was reading data from the Hadoop HDFS filesystem, and my DataNode was down. I started the DataNode alone with:

root@boss:/opt/hadoop-2.2.0# ./sbin/hadoop-daemon.sh start datanode
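When reads start failing with BlockMissingException, it is worth confirming that DataNodes are actually alive before digging deeper. A minimal check, assuming the Hadoop binaries are on the PATH:

```shell
# Is a DataNode JVM running on this machine?
jps | grep -i DataNode || echo "no DataNode process found"

# How many DataNodes does the NameNode currently see?
hdfs dfsadmin -report | grep -i datanodes
```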


Error 3)
My NodeManager kept going down. I tried many times to start it with:

root@solaiv[hadoop-2.5.1]# ./sbin/yarn-daemon.sh start nodemanager

FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager java.lang.NoClassDefFoundError: org/apache/hadoop/http/HttpServer2$Builder
Solution

I checked the Hadoop classpath:
root@boss:/opt/hadoop-2.5.1# ./bin/hadoop classpath
A few jar files were still referring to the old version of Hadoop, i.e. hadoop-2.2.0. I corrected this by pointing HADOOP_HOME at the latest hadoop-2.5.1 installation.
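The fix can be made permanent by exporting the new HADOOP_HOME (for example in ~/.bashrc) and then re-checking the classpath. The install path below is the one used in this post; adjust it for your cluster.

```shell
# Point HADOOP_HOME at the current install and put its tools first on the PATH
export HADOOP_HOME=/opt/hadoop-2.5.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Verify no stale 2.2.0 jars remain on the classpath
hadoop classpath | tr ':' '\n' | grep 'hadoop-2.2.0' && echo "stale 2.2.0 jars still referenced"
```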

Related posts

Few more issues Apache Spark on Yarn

Error and Solution: Detailed step-by-step instructions on Spark over Yarn - Part 1

Apache SPARK deployment on YARN

Error 1)
Initially I connected to Spark via the shell in "local" mode; everything worked great.
root@boss:/opt/spark-1.1.0-bin-hadoop2.4# ./bin/spark-shell --master local[2]
Then I tried to connect in "master" (standalone) mode:
root@boss:/opt/spark-1.1.0-bin-hadoop2.4# ./bin/spark-shell --master spark://boss:7077
I could enter spark-shell safely, and then I submitted a job. The ResourceManager (domainname:8088) accepted my job but never ran it. After waiting quite a long time, I decided to check the log files.

14/09/25 15:54:59 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

14/09/25 15:55:14 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
.....

Solution
There was no running worker for the master to assign the job to. I started a worker for the master running on host "solai" and port "7077":
root@boss:/opt/spark-1.1.0-bin-hadoop2.4# ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://solai:7077


Error 2)
Then I decided to enter spark-shell via YARN:
root@boss:/opt/spark-1.1.0-bin-hadoop2.4# ./bin/spark-shell --master yarn-client

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Exception in thread "main" java.lang.Exception: When running with master 'yarn-client' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
        at org.apache.spark.deploy.SparkSubmitArguments.checkRequiredArguments(SparkSubmitArguments.scala:182)
        at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:62)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:70)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Solution
As the error says, set HADOOP_CONF_DIR in $SPARK_HOME/conf/spark-env.sh (create this file from conf/spark-env.sh.template):

HADOOP_CONF_DIR=/opt/hadoop-2.5.1/etc/hadoop


Error 3)
Successfully deployed Spark on top of YARN. Next, I tried to submit a job in yarn-cluster mode.

root@slavedk:/opt/spark-1.1.0-bin-hadoop2.4# ./bin/spark-submit --class org.apache.spark.examples.JavaWordCount --master yarn-cluster lib/spark-examples-1.1.0-hadoop2.4.0.jar /sip/ /op/sh2

Container launch failed for container_1412078568642_0005_01_000003 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container. This token is expired. current time is 1412083584173 found 1412082408578

OR

Application application_1411747873135_0002 failed 2 times due to AM Container for appattempt_1411747873135_0002_000002 exited with exitCode: -100 due to: Container expired since it was unused.Failing this attempt.. Failing the application
Solution
     One option is to increase the lifespan of containers by raising the container expiry interval. The default is 600000 ms (10 minutes); here I raise it to 2000000 ms (about 33 minutes).
Add the property below to the Hadoop yarn-site.xml file, in my case /opt/hadoop-2.5.1/etc/hadoop/yarn-site.xml:



<property>
    <name>yarn.resourcemanager.rm.container-allocation.expiry-interval-ms</name>
    <value>2000000</value>
</property>
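YARN only reads yarn-site.xml at startup, so after adding the property the ResourceManager has to be restarted to pick up the change. Assuming the same install path as above:

```shell
cd /opt/hadoop-2.5.1
./sbin/yarn-daemon.sh stop resourcemanager
./sbin/yarn-daemon.sh start resourcemanager
```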


