I could use some help. I have a process whose repetitions are completely independent of one another. Currently it runs as a for loop; since there are quite a few repetitions of the same process, it would be much faster parallelized.
My attempt so far:
import os
from multiprocessing import Pool

# def jobs_on_rep(n, info, eg, af, n_memory, data, t):
#     ...
#     data.append(new_row)
#     ...
#     return data, map_filename

if __name__ == "__main__":
    pool = Pool(os.cpu_count())
    for t in range(0, rep):
        pool.imap(jobs_on_rep, t)
        # jobs_on_rep(n, info, eg, af, n_memory, data, t)
The commented lines show the normal for loop. When the entire loop is done, data is exported as a CSV.
Any advice on why it won't work?
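From what I understand (my own reading of the multiprocessing docs, not part of the snippet above), Pool.imap maps a function of a single argument over a whole iterable and returns a lazy iterator, so calling it inside the loop with one integer t, for a function that needs seven arguments, and never consuming the results cannot behave like the serial loop. A minimal illustration of how imap is meant to be used, with a toy one-argument function standing in for jobs_on_rep:

import os
from multiprocessing import Pool

def square(t):
    # toy stand-in for jobs_on_rep: one argument in, one result out
    return t * t

if __name__ == "__main__":
    with Pool(os.cpu_count()) as pool:
        # the whole iterable is passed at once; the lazy results have to be
        # consumed (here with list()) before the pool shuts down
        results = list(pool.imap(square, range(10)))
    print(results)  # [0, 1, 4, 9, ..., 81]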
I found a way that works for me:
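A minimal sketch of one approach along these lines (only a sketch, not necessarily the exact code: jobs_on_rep, rep, n, info, eg, af, n_memory and data are assumed to be defined as in the real script, data is assumed to start out empty, and results.csv is just a placeholder filename):

import csv
import os
from functools import partial
from multiprocessing import Pool

if __name__ == "__main__":
    # freeze every argument except t, so the worker takes a single argument
    worker = partial(jobs_on_rep, n, info, eg, af, n_memory, data)

    with Pool(os.cpu_count()) as pool:
        # each repetition runs in its own process; every process works on its
        # own copy of data, so the rows must come back via the return value
        results = pool.map(worker, range(rep))

    # gather the rows returned by all repetitions and export once at the end
    all_rows = [row for rows, _map_filename in results for row in rows]
    with open("results.csv", "w", newline="") as f:
        csv.writer(f).writerows(all_rows)

The important parts are that the worker function and its arguments are picklable and that the per-repetition rows are collected from the return values rather than from the shared data list.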