I would like to create a constraint on a bitvector X that masks another bitvector to its n least significant bits, and I would like this to be as efficient as possible. What I have tried so far is to represent the mask as 1 << n - 1, where n is another bitvector. This gives me two problems: the expression seems inefficient, and I cannot require n to be smaller than the width of X. If I do, it fails with a type error on n. Any ideas for a more efficient way of approaching this, or for solving the width issue?
It's not quite clear what you're trying to do; posting actual code is always more helpful. But from your description, you can simply shift left first and then shift right again. The left shift pushes 0's in from the bottom and drops the top bits; the right shift then pushes the 0's back out, leaving only the least-significant n bits. The amount you shift should equal bitsize - n, where bitsize is the length of your bit-vectors and n is how many least-significant bits you want to preserve. Assuming you're dealing with 64-bit vectors, it'd look something like:
(declare-const n (_ BitVec 64))
(declare-const x (_ BitVec 64))
(define-fun amnt () (_ BitVec 64) (bvsub #x0000000000000040 n))
(define-fun maskedX () (_ BitVec 64) (bvlshr (bvshl x amnt) amnt))
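To sanity-check the encoding, here is a small Python model of the same computation (an illustrative sketch, not part of the SMT problem): fixed-width shifts are simulated by masking to 64 bits, and the shift-left-then-right result is compared against masking directly with (1 << n) - 1.

```python
# Model the 64-bit SMT-LIB encoding in Python (illustrative sketch).
WIDTH = 64
MASK = (1 << WIDTH) - 1

def bvshl(x, s):
    # SMT-LIB bvshl on (_ BitVec 64): result is 0 once the shift amount >= 64
    return (x << s) & MASK if s < WIDTH else 0

def bvlshr(x, s):
    # SMT-LIB bvlshr: logical (zero-filling) right shift
    return (x >> s) if s < WIDTH else 0

def masked(x, n):
    # (bvlshr (bvshl x amnt) amnt) with amnt = (bvsub 64 n)
    amnt = (WIDTH - n) & MASK  # bvsub wraps modulo 2^64
    return bvlshr(bvshl(x, amnt), amnt)

# For 0 <= n <= 64, this agrees with masking by (1 << n) - 1:
x = 0xDEADBEEFCAFEBABE
for n in range(WIDTH + 1):
    assert masked(x, n) == x & ((1 << n) - 1)
```

This only confirms the arithmetic identity for concrete values; the SMT solver, of course, reasons about it symbolically.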
(The constant #x0000000000000040 is how you write 64 in SMT-Lib as a 64-bit bit-vector constant.) Note that this implicitly assumes n is at most 64: if that's not true, the subtraction will wrap around and you'll get a different constraint. I'm assuming your system already has a constraint that says n is at most the bit-vector size you're dealing with.
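To see the wrap-around concretely, here is a self-contained Python sketch of the SMT-LIB shift semantics (where shifting a 64-bit vector by 64 or more yields zero). With n = 70, bvsub computes 64 - 70 modulo 2^64, so the shift amount becomes huge and the entire vector is shifted away:

```python
WIDTH = 64
MASK = (1 << WIDTH) - 1

def bv_shift_mask(x, n):
    # amnt = (bvsub #x0000000000000040 n): subtraction wraps modulo 2^64
    amnt = (WIDTH - n) & MASK
    # bvshl / bvlshr zero out the vector once the shift amount >= 64
    left = (x << amnt) & MASK if amnt < WIDTH else 0
    return (left >> amnt) if amnt < WIDTH else 0

x = 0xDEADBEEFCAFEBABE
print(hex(bv_shift_mask(x, 8)))   # n in range: keeps the low 8 bits -> 0xbe
print(hex(bv_shift_mask(x, 70)))  # n > 64: amnt wraps, everything shifted out -> 0x0
```

So without the side constraint on n, the encoding silently maps out-of-range values of n to an all-zero result rather than leaving x unchanged.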
Regarding efficiency: there's really no obvious way to make bit-vector constraints like this fast or slow; it depends on what other constraints you have around. So it's impossible to say whether this is the best way to achieve what you want without knowing anything else about your problem. It's usually not helpful to think about "speed" in SMTLib when symbolic values are present; too many factors go into solver efficiency.
Regarding types: SMTLib is based on a very simple first-order type system. So, yes: almost all bit-vector operations require arguments of exactly the same size. This is by design: variable-length bit-vectors are simply not available in the logic, as they would render it useless, since the satisfiability of formulas would depend on the actual bit-sizes you instantiate them with.
If this doesn't help, I'd recommend posting an actual code snippet of what you're trying to do and the problem you're running into. The more concrete the example is, the better.