Tags: bit-manipulation, big-o, time-complexity, masking, bit-shift

Time Complexity of bit operations


As a result of some original research, and the need to develop tools for it, I've come up with some new and (I hope) better/quicker ways of performing certain mathematical operations. At the moment I'm working on the pseudocode so I can post them on the site in response to questions which have already been asked.

However, before I do that I want to optimise them as much as possible, so I need someone to clarify how bit operations work from a time-complexity perspective.

For example, say I want to evaluate which of two 8-bit integers is larger. I'm using 8 bits as an example, but in reality they could be much larger.

10000100

10100000

As it stands, the relational operator can be evaluated after comparing only the leading bits; here the two numbers first differ at the second MSB, so the six bits below it never need to be read. Notionally I could subtract 10000000 from both without affecting the inequality.

00000100

00100000
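
For reference, here is the kind of MSB-first comparison I have in mind, as a minimal sketch in C (I'm assuming the integers are stored as byte arrays with the most significant byte first; `big_compare` is just a name I've made up):

```c
#include <stddef.h>

/* MSB-first comparison of big integers stored as byte arrays, most
 * significant byte first. Returns <0, 0 or >0, like memcmp. The loop
 * stops at the first differing byte, but in the worst case (equal
 * inputs) it reads every byte, i.e. it's O(N) in the number of bits. */
int big_compare(const unsigned char *a, const unsigned char *b, size_t n_bytes)
{
    for (size_t i = 0; i < n_bytes; i++) {
        if (a[i] != b[i])
            return (a[i] < b[i]) ? -1 : 1;
    }
    return 0;
}
```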

  • Q1. Removing that shared leading 1 would speed up the evaluation of the relational operator if the reading starts from the MSB, but does it, or do the leading 0s have to be evaluated anyway? Obviously subtracting isn't worth doing, since subtracting 10000000 is itself an 8-bit operation, but if instead I could set the MSB, or specific bits, using a single one- or two-bit operation, then it could be worthwhile.
  • Q2. Two methods I can think of that might fit the bill are bit-shifting left and then right to destroy the leading 1, or using a mask, but are there other methods? I'm particularly interested in ones that might let you set any bit, not just the leading bits (see the sketch after this list). If it's specific to a certain language, please just let me know that.
  • Q3. Since masks are N bits long, isn't using a mask an N-bit operation itself?
  • Q4. What about evaluating a specific bit: what methods exist, and how time-complex is the operation? Do all preceding bits have to be read in first, making it an N-bit operation, or can you "jump" to a certain bit (again, see the sketch after this list)?
  • Q5. What about two strings being concatenated, from a time-complexity perspective? Does that happen by associating two memory addresses together, or does one string get read and copied into the other, making it a String.length operation?
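
To make Q2 and Q4 concrete, here is the kind of single-bit access I mean, sketched in C (the names are mine; I'm assuming a 64-bit word for the mask operations and, for the array version, bytes stored least significant first):

```c
#include <stdint.h>
#include <stddef.h>

/* Setting/clearing/testing bit k of a single machine word, with a mask
 * built by shifting: constant-time on one word, but on an N-bit big
 * integer the mask itself would be N bits wide (which is what Q3 asks). */
static inline uint64_t set_bit(uint64_t x, unsigned k)   { return x |  (UINT64_C(1) << k); }
static inline uint64_t clear_bit(uint64_t x, unsigned k) { return x & ~(UINT64_C(1) << k); }
static inline int      test_bit(uint64_t x, unsigned k)  { return (int)((x >> k) & 1); }

/* "Jumping" to bit k of a big integer stored as a byte array (Q4):
 * index straight to the right byte, then shift within that byte only,
 * so no preceding bits need to be read. */
static inline int test_bit_at(const unsigned char *a, size_t k)
{
    return (a[k / 8] >> (k % 8)) & 1;
}
```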

Thanks.

Further clarification. I've been rereading the notes I pulled from a few places, and although Dukeling confirmed what I thought should be the case, I need to triple-check. Take, for example, the simple shift-and-add multiplication algorithm. I see in numerous places the claim that this is a Log^2(N) operation, the given reason being that it consists of Log(N) additions of Log(N)-bit numbers. The problem I have with this is that, although true, it ignores the fact that each of those Log(N) numbers is the result of a bit-shift, and by the end we will have bit-shifted at least Log(N) times. Since a shift by 1 of a Log(N)-bit number is itself a Log(N)-bit operation, the shifts considered alone already give us Log^2(N) operations. It therefore makes no sense to me when I see it further claimed that, in practice, multiplication doesn't use Log^2(N) operations because various methods can reduce the number of required additions. Since the bit-shifting alone gives us Log^2(N), I'm left confused as to how that claim can be true.
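
For reference, this is the shift-and-add algorithm I'm describing, sketched in C on word-sized operands (on a big number with Log(N) bits, each addition and each shift in the loop would itself be a Log(N)-bit operation):

```c
#include <stdint.h>

/* Schoolbook shift-and-add multiplication. The loop runs Log(N) times;
 * each iteration does at most one addition and two 1-bit shifts. On
 * word-sized operands each step is constant-time, but on big numbers
 * every addition AND every shift touches Log(N) bits -- the double
 * counting discussed above. */
uint64_t shift_add_multiply(uint64_t a, uint64_t b)
{
    uint64_t result = 0;
    while (b != 0) {
        if (b & 1)
            result += a;   /* a Log(N)-bit addition */
        a <<= 1;           /* a Log(N)-bit shift    */
        b >>= 1;
    }
    return result;
}
```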


Solution

    1. The leading 0 bits have to be evaluated, unless you store the index of the MSB and write your own routine to do the comparison.

    2. Bit-shifting is an N-bit operation.

    3. Masking is also an N-bit operation.

    4. It depends on how you represent the number internally. It's relatively easy to jump to the correct byte, but high-level languages (if you're using one) usually don't allow you to access a specific bit directly, so you'll need some operation (e.g. a bit-shift of that byte only) to get at that bit.

    5. Concatenating strings takes O(m+n), where m and n are the lengths of the respective strings. Every string representation I know of is sequential in memory, and you don't necessarily have access to the memory after either string (unless you enforce this by allocating enough memory), so both must be copied to a new location (see the sketch below). Nothing is stopping you from creating your own data structure, though.
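
    As a sketch of the copy-into-new-memory behavior described above (C, null-terminated strings; `concat` is just an illustrative name):

```c
#include <stdlib.h>
#include <string.h>

/* Concatenation copies both inputs into freshly allocated memory,
 * because neither string owns the bytes that follow it: O(m + n).
 * Returns NULL on allocation failure; the caller frees the result. */
char *concat(const char *s, const char *t)
{
    size_t m = strlen(s), n = strlen(t);
    char *r = malloc(m + n + 1);
    if (r == NULL)
        return NULL;
    memcpy(r, s, m);
    memcpy(r + m, t, n + 1);  /* +1 also copies t's terminating '\0' */
    return r;
}
```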

    So ... just a straight bit-by-bit or byte-by-byte comparison (possibly with the starting-position optimization mentioned in 1) is probably the best you're going to get.

    PS - Any posted research should show sufficient proof (in some form or another) as motivation as to why it is better than something else.