How to set queue with org.apache.spark.launcher.SparkLauncher


If I use the spark-submit command line to submit a Spark task to YARN, I can set the queue with --queue myqueuename

The full command would be:

/myserver/spark-2.2.0-bin-hadoop2.4/bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--executor-memory 4G \
--executor-cores 2 \
--queue myqueuename \
--class task.MyTask \
 "/my/jar/path/spark-app-full.jar"  \
--input  /data/input/path \
--output  /data/output/path/ 

However, how do I set the queue with SparkLauncher from Java? I would like to launch the Spark task programmatically.

My Java code so far:

SparkAppHandle handle = new SparkLauncher()
        .setSparkHome("/myserver/spark-2.2.0-bin-hadoop2.4")
        .setAppResource(jarPath)
        .setMainClass("task.MyTask")
        .setMaster("yarn")
        .setDeployMode("cluster")
        .setConf(SparkLauncher.EXECUTOR_MEMORY, "8G")
        .setConf(SparkLauncher.EXECUTOR_CORES, "4")
        .addAppArgs("--input",
                inputPath,
                "--output",
                outputPath)
        .startApplication(taskListener);
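
(For context, taskListener above is assumed to be a SparkAppHandle.Listener; a minimal sketch of one, with purely illustrative log messages, could look like this:)

import org.apache.spark.launcher.SparkAppHandle;

// Minimal listener: prints state and info changes of the launched application
SparkAppHandle.Listener taskListener = new SparkAppHandle.Listener() {
    @Override
    public void stateChanged(SparkAppHandle handle) {
        System.out.println("Spark app state: " + handle.getState());
    }

    @Override
    public void infoChanged(SparkAppHandle handle) {
        System.out.println("Spark app id: " + handle.getAppId());
    }
};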

I would like to know how to specify the queue in Java code using SparkLauncher.

Accepted answer (egordoe):

According to https://spark.apache.org/docs/latest/running-on-yarn.html, setConf("spark.yarn.queue", "myqueuename") does the trick:

SparkAppHandle handle = new SparkLauncher()
        .setSparkHome("/myserver/spark-2.2.0-bin-hadoop2.4")
        .setAppResource(jarPath)
        .setMainClass("task.MyTask")
        .setMaster("yarn")
        .setDeployMode("cluster")
        .setConf(SparkLauncher.EXECUTOR_MEMORY, "8G")
        .setConf(SparkLauncher.EXECUTOR_CORES, "4")
        .setConf("spark.yarn.queue", "myqueuename") // <-- SETTING A QUEUE NAME
        .addAppArgs("--input",
                inputPath,
                "--output",
                outputPath)
        .startApplication(taskListener);
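
As a side note, the queue can most likely also be passed through addSparkArg, which forwards recognized spark-submit options as-is; a minimal sketch of that variant (same assumptions as above, shortened for brevity):

SparkAppHandle handle = new SparkLauncher()
        .setSparkHome("/myserver/spark-2.2.0-bin-hadoop2.4")
        .setAppResource(jarPath)
        .setMainClass("task.MyTask")
        .setMaster("yarn")
        .setDeployMode("cluster")
        .addSparkArg("--queue", "myqueuename") // forwarded like the --queue flag on spark-submit
        .startApplication(taskListener);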