I am trying to run Spark driver and executor pods on Kubernetes using SparkLauncher. However, while the driver pod is booting up, it fails with the following error:

    MountVolume.SetUp failed for volume "spark-conf-volume-driver" : configmap "spark-drv-8458dd8e0e30c9d0-conf-map" not found

I have created a new ServiceAccount, Role, and RoleBinding in a specific namespace (namespace-spark).
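If I understand the Spark-on-Kubernetes flow correctly, the spark-drv-*-conf-map ConfigMap is created by the submission client (the process running SparkLauncher), not by the driver pod itself, so the credentials used at submission time also need permission to create ConfigMaps in namespace-spark. Spark has dedicated conf keys for submission-side authentication if explicit credentials are needed; a minimal sketch, where the file paths are placeholders:

    // Submission-side credentials: spark-submit (not the driver's service
    // account) is what creates the driver pod and its conf ConfigMap.
    // These conf keys are from the Spark-on-Kubernetes docs; the paths
    // below are placeholder values.
    launcher
        .setConf("spark.kubernetes.authenticate.submission.caCertFile", "/path/to/ca.crt")
        .setConf("spark.kubernetes.authenticate.submission.oauthTokenFile", "/path/to/token");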

The Java code snippet for SparkLauncher is as follows:

    SparkLauncher launcher =
        new SparkLauncher()
            .setMaster("k8s://https://" + host + ":" + port)
            .setDeployMode("cluster")
            .setMainClass("example.sparkLauncher")
            .setConf("spark.kubernetes.namespace", "namespace-spark")
            .setConf(SparkLauncher.DRIVER_MEMORY, "500m")
            .setConf(SparkLauncher.EXECUTOR_CORES, "1")
            .setConf(SparkLauncher.EXECUTOR_MEMORY, "500m")
            .setConf("spark.executor.instances", "1")
            .setAppName("spark-Launcher")
            .setAppResource("local:///jars/spark.jar")
            .setSparkHome("/opt/spark")
            .setConf("spark.kubernetes.container.image", "spark-launcher-example")
            .setConf("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
            .setConf("spark.kubernetes.driver.volumes.emptyDir.spark-shared.mount.path", "/tmp")
            .setConf("spark.kubernetes.driver.volumes.emptyDir.spark-shared.mount.readOnly", "false")
            .addJar("local:///jars/spark.jar")
            .setVerbose(true)
            .addAppArgs(argPropertyString);

    launcher.launch();
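Side note: launch() returns a plain java.lang.Process, and if its output is never consumed, any error spark-submit prints is lost. A minimal sketch for capturing the submission output while diagnosing this (assumes it runs inside a method declaring IOException and InterruptedException):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Merge spark-submit's stderr into stdout, then echo everything the
    // submission client prints (setVerbose(true) makes this quite detailed).
    Process sparkSubmit = launcher.redirectError().launch();
    try (BufferedReader reader =
            new BufferedReader(new InputStreamReader(sparkSubmit.getInputStream()))) {
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println("[spark-submit] " + line);
        }
    }
    int exitCode = sparkSubmit.waitFor(); // non-zero usually means the submission itself failed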

After running this, the spark-driver pod comes up and is instantly terminated with the above-mentioned error. How can I resolve this error? Alternatively, how can I add these volume mounts / this ConfigMap through SparkConf?
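For the second part of the question: as far as I can tell from the Spark-on-Kubernetes docs, the built-in spark.kubernetes.{driver,executor}.volumes.* confs only cover the hostPath, emptyDir, nfs, and persistentVolumeClaim volume types, so mounting an arbitrary ConfigMap would have to go through a pod template (Spark 3.x). A sketch under that assumption; the template path, ConfigMap name, and mount path are placeholders:

    // Point the driver at a pod template file (Spark 3.x conf key);
    // everything referenced inside the template below is a placeholder.
    launcher.setConf("spark.kubernetes.driver.podTemplateFile", "/path/to/driver-template.yaml");

    // driver-template.yaml (illustrative):
    //   apiVersion: v1
    //   kind: Pod
    //   spec:
    //     containers:
    //       - name: spark-kubernetes-driver   # default driver container name
    //         volumeMounts:
    //           - name: my-config
    //             mountPath: /etc/myapp
    //     volumes:
    //       - name: my-config
    //         configMap:
    //           name: my-app-config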
