I'm using Flask with SQLAlchemy. For example, I have the following models (simplified, with a custom save() defined on a shared base Model):

    class B(Model):
        name = Column(String)

    class A(Model):
        name = Column(String)
        b_id = Column(ForeignKey('b.id'))
        b = relationship('B')

        def save(self):
            self.b.name = some_random_string()
            self.b.save()
            super().save()
And I bulk-update model A as follows:

    for a in A.query.all():
        a.name = 'foobar'
        a.save()  # the associated B row is also updated here
However, this is inefficient: every save() issues its own round trip to the database, which is time-consuming. We considered SQLAlchemy's bulk_update_mappings, but it is difficult to keep track of the overridden save() methods and build the right set of mappings, and hard to keep that code consistent with save()'s internal logic: whenever the save() method changes, the bulk-update logic has to be updated as well.
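For reference, here is a self-contained sketch of the bulk_update_mappings route we considered (model definitions simplified, and 'not-so-random' is a deterministic stand-in for some_random_string()); note how the side effect on B from A.save() has to be duplicated by hand, which is exactly the maintenance problem:

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    b_id = Column(Integer, ForeignKey('b.id'))
    b = relationship('B')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

# Seed a couple of rows.
session.add_all([A(name='old', b=B(name='old')) for _ in range(2)])
session.commit()

# Bulk path: one UPDATE statement batch per table...
rows = session.query(A.id, A.b_id).all()
session.bulk_update_mappings(A, [{'id': a_id, 'name': 'foobar'} for a_id, _ in rows])
# ...but this line re-implements "self.b.name = some_random_string()" from
# A.save(), and must be kept in sync with save() forever:
session.bulk_update_mappings(B, [{'id': b_id, 'name': 'not-so-random'} for _, b_id in rows])
session.commit()
```

This is fast (two UPDATE batches instead of a round trip per object), but the B update is a copy of save()'s logic living in a second place.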
Is there an efficient way to do this, e.g.:

    for a in A.query.all():
        a.name = 'foobar'
        a.save()  # no communication with the DB here

    # Flush and commit all the changes made in the loop above
    # (not just updates to A, but to related models too) at once:
    db.session.commit()
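In other words, we are hoping to lean on the session's unit of work: if save() only mutated objects in memory, a single commit should flush everything together. A self-contained sketch of that shape, with simplified models and a deterministic stand-in body for save() (the real save() does more):

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session, joinedload

Base = declarative_base()

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    b_id = Column(Integer, ForeignKey('b.id'))
    b = relationship('B')

    def save(self):
        # Only mutate in-memory state; no commit (and no flush) here.
        self.b.name = 'derived-' + self.name

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

session.add_all([A(name='old', b=B(name='old')) for _ in range(3)])
session.commit()

# Eager-load b so save() doesn't trigger a lazy-load SELECT per object.
for a in session.query(A).options(joinedload(A.b)):
    a.name = 'foobar'
    a.save()          # no DB round trip: changes are only staged
session.commit()      # one flush sends all the UPDATEs together
```

All the pending changes on A and B objects are held in the session and written out in a single flush at commit time.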
To recap: we want to efficiently update a large number of A objects while keeping the logic inside the overridden save() method in one place. Everything that happens in A.save(), including changes to related models, should stay in memory rather than being reflected in the DB immediately, and then be written out all at once later.
Is there a good way to do this?