I have some embarrassingly parallel code that I'd like to scale against a "serverless" backend like Azure Container Apps (or Azure Container Instances, or any other "containers on demand" service).
I am writing my own implementation, but I feel I'm re-inventing the wheel here.
Here is some pseudo-code to illustrate what I'm trying to build:
```python
# Before
def compute_stuff(input_value: float) -> float:
    ...

results = []
for i in range(1000):
    results.append(compute_stuff(i))

# After
results = run_in_container_apps(  # maybe some async stuff as well
    func=compute_stuff,
    parameters=[i for i in range(1000)],
    n_instances=-1,
)
```
I looked into AzureML and Kubernetes, but both feel heavier than what I'm looking for.