I am using the org.apache.spark.launcher.SparkLauncher class to launch and monitor Spark jobs on my YARN cluster. Unfortunately, it does not detect when a job exits prematurely due to an uncaught exception.
Looking at these jobs in the YARN ResourceManager web UI, they show a State of FINISHED and a FinalStatus of FAILED. However, when I call SparkAppHandle.getState() for the same job, it returns FINISHED rather than FAILED.
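Here is a minimal sketch of how I launch and poll the job (the Spark home, jar path, and main class are placeholders, and I poll the handle rather than register a listener, but the result is the same):

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class LaunchMonitor {
        public static void main(String[] args) throws Exception {
            SparkAppHandle handle = new SparkLauncher()
                    .setSparkHome("/opt/spark")              // placeholder: local Spark install
                    .setAppResource("/path/to/my-job.jar")   // placeholder: application jar
                    .setMainClass("com.example.MyJob")       // placeholder: job entry point
                    .setMaster("yarn")
                    .setDeployMode("cluster")
                    .startApplication();

            // Wait until the handle reports a terminal state.
            while (!handle.getState().isFinal()) {
                Thread.sleep(1000);
            }

            // For a job that died with an uncaught exception I would expect
            // FAILED here, but the handle reports FINISHED.
            System.out.println("Final state: " + handle.getState());
        }
    }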
I am using the spark-launcher_2.10 artifact with Spark 1.6.