In Z3 (Python), is there any way to 'bias' the SAT search towards a 'criterion'?
A case example: I would like Z3 to obtain a model, but not just any model: if possible, give me a model with a large number of negated literals.
Thus, for instance, if we have to satisfy A or B, a possible model is [A = True, B = True], but I would rather receive the model [A = True, B = False] or the model [A = False, B = True], since they have more False assignments.
Of course, I guess the 'criterion' must be more concrete (say, if possible: I prefer models with half of the literals set to False), but I think the idea is understandable.
I do not care whether the method is native or not. Any help?
There are two main ways to handle this sort of problem in z3: either using the optimizer, or computing manually via multiple calls to the solver.
Using the optimizer
As Axel noted, one way to handle this problem is to use the optimizing solver, which lets you assert soft constraints: constraints the solver will satisfy if it can, violating them only when necessary.
You can add weights to soft constraints, which gives a way to associate a penalty to pay if the constraint is violated. For instance, if you wanted to make A true if at all possible, but don't care much for B, then you'd associate a bigger penalty with Not(B).
The way to think about this is as follows: you're asking z3 to:

- Satisfy all the regular constraints (those added via add).
- Satisfy as many of the soft constraints (those added via add_soft) as it can. If a solution isn't possible that satisfies them all, then the solver is allowed to "violate" them, trying to minimize the total cost of all violated constraints, computed by summing their weights up. (The default weight is 1.) You can also group these constraints, though I doubt you need that generality.

So, in the second example, z3 violated Not(A), because doing so has a cost of 1, while violating Not(B) would've incurred a cost of 10.

Note that when you use the optimizer, z3 uses a different engine than the one it uses for regular SMT solving. In particular, this engine is not incremental: if you call check twice on it (after introducing some new constraints), it'll solve the whole problem from scratch, instead of learning from the results of the first call. Also, the optimizing solver is not as optimized as the regular solver (pun intended!), so it usually performs worse on straight satisfiability as well. See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/nbjorner-nuz.pdf for details.

Manual approach
If you don't want to use the optimizer, you can also do this "manually" using the idea of tracker variables. The idea is to identify the soft constraints (i.e., those that can be violated at some cost), and associate each of them with a tracker variable.
Here's the basic algorithm:
- Make a list of your soft constraints. In the above example, they'll be Not(A) and Not(B). (That is, you'd like these to be satisfied, giving you negative literals, but obviously you want them satisfied only if possible.) Call these S_i, and say you have N of them.
- For each such constraint, create a new tracker variable, which will be a boolean. Call these t_i.
- Assert N regular constraints, each of the form Implies(t_i, S_i), one per soft constraint.
- Use a pseudo-boolean constraint (AtMost) to force that at most K of these tracker variables t_i are false. Then use a binary-search-like scheme to find the optimal value of K. Note that since you're using the regular solver, you can use it in incremental mode, i.e., with calls to push and pop.

For a detailed description of this technique, see https://stackoverflow.com/a/15075840/936310.
Summary
Which of the above two methods works better is problem-dependent. Using the optimizer is easiest from an end-user point of view, but you're pulling in heavy machinery that you may not need and can thus suffer a performance penalty. The second method might be faster, at the risk of more complicated (and thus error-prone!) programming. As usual, do some benchmarking to see which works best for your particular problem.