I have a notebook that imports some dictionaries from a Python file. Both the notebook and the .py file live in the repository in our development workspace. However, after the files are merged into the workspace, the .py file is automatically converted into a Databricks notebook instead of remaining a Python file. The .py file contains only dictionaries.
To work around this, I manually created a global_dicts.py file in the workspace so that my script runs properly. For instance, I created a test.py file in the repository and merged it into the master branch. We use Azure DevOps for our CI/CD pipeline, and after a successful build the files are merged into the workspace. However, the .py files deployed into the workspace are being changed into Databricks notebooks.
```python
from global_dicts import _transactionTypes
from global_dicts import _methods
from global_dicts import _statuses
from global_dicts import _resources
from global_dicts import _endDeviceType
from global_dicts import _readingQuality
```
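For reference, global_dicts.py is a plain Python module that defines only dictionaries. The names below match the imports; the keys and values shown are purely illustrative:

```python
# global_dicts.py -- a plain Python module containing only dictionaries.
# The dictionary names match the imports above; the contents here are illustrative.
_transactionTypes = {1: "purchase", 2: "refund"}
_methods = {"GET": "read", "POST": "create"}
_statuses = {0: "pending", 1: "completed"}
_resources = {"meter": "/meters", "customer": "/customers"}
_endDeviceType = {"A": "smart meter", "B": "gateway"}
_readingQuality = {"good": 0, "estimated": 1}
```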
I tried editing the file manually and that worked. I also saw an article that explains how to fix this (I attached the link below), but it didn't help me achieve what I was looking for.
I can reproduce the issue with the `databricks` CLI as well. According to the document, it is more likely a limitation of Azure Databricks rather than an issue caused by Azure DevOps. Since it says "There is limited support for workspace file operations from serverless compute", I figured out a workaround with the help of My Personal Compute Cluster, where I could run a `curl` command to copy the file from an Azure Storage Account into the Azure Databricks workspace as a file, not a notebook. Here are my steps:

1. Upload my `.py` file into an Azure Storage Account blob and generate a `Blob SAS URL`.
2. Connect to My Personal Compute Cluster -> in the `Terminal`, run the `curl` command (see the sketch after these steps for an equivalent done in Python).
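The same copy can also be done from a notebook attached to that cluster instead of the web terminal. Below is a minimal Python sketch of the equivalent step; the SAS URL and target workspace path are placeholders, not values from the original setup:

```python
# Minimal sketch: copy the raw .py file from Blob storage into the workspace
# as a plain file (not a notebook). Run from a notebook attached to the
# personal compute cluster. The SAS URL and target path are placeholders.
import urllib.request

sas_url = "https://<storage-account>.blob.core.windows.net/<container>/global_dicts.py?<sas-token>"
target_path = "/Workspace/Users/<user>/global_dicts.py"

# Download the blob contents via the SAS URL.
with urllib.request.urlopen(sas_url) as response:
    content = response.read()

# Writing the raw bytes to a /Workspace path creates a workspace *file*;
# it is not auto-converted into a Databricks notebook.
with open(target_path, "wb") as f:
    f.write(content)
```

The `curl` step described above achieves the same effect: it downloads the blob through the SAS URL and writes the bytes to a `/Workspace` path, so the result stays a workspace file rather than being converted into a notebook.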