I'm trying to provision a job in Databricks via Terraform that runs 'forever', but I get an error on the first run of terraform apply. The continuous job itself appears to start, but Terraform fails (which has downstream effects). The job could run indefinitely, but if and when it fails, it should be restarted.
I think this should be the relevant Terraform code:
resource "databricks_notebook" "simulated_device" {
  path     = "/simulated-data/simulated_device.py"
  language = "PYTHON"
  source   = "../simulated-data/device.py"
}

resource "databricks_job" "simulated_data" {
  name                = "Generate Simulated Data"
  existing_cluster_id = databricks_cluster.telemetry_cluster.cluster_id
  always_running      = true

  notebook_task {
    notebook_path = databricks_notebook.simulated_device.path
  }

  continuous {
    pause_status = "UNPAUSED"
  }
}
I consistently get: Error: cannot create job: cannot start job run: An active continuous job can't be executed manually. A new run can be triggered by cancelling the existing run. The error references the databricks_job block in the code sample above.
It seems like I'm somehow telling Databricks to start the job after it's already continuous. This could possibly be a bug of some sort, but it seems more likely that I'm misunderstanding something.
I'm using version 1.14.3 of the Databricks Terraform provider.
Right now you need to use either always_running or continuous, but not both. I would recommend continuous, as it's part of the Jobs API, while always_running was an attempt to provide similar functionality from the Terraform provider itself. Most probably you'll be able to use both in the future.
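For example, dropping always_running from the job resource in the question and keeping only the continuous block should let terraform apply succeed (a sketch based on the question's code, not a tested configuration):

```hcl
resource "databricks_job" "simulated_data" {
  name                = "Generate Simulated Data"
  existing_cluster_id = databricks_cluster.telemetry_cluster.cluster_id

  notebook_task {
    notebook_path = databricks_notebook.simulated_device.path
  }

  # With `continuous`, the Jobs service itself keeps one run active and
  # restarts it on failure, so Terraform never tries to start a run manually.
  continuous {
    pause_status = "UNPAUSED"
  }
}
```

With always_running removed, the provider no longer attempts to trigger a run of a job that the Jobs service is already running continuously, which is what produced the "can't be executed manually" error.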