As a result of some original research, and the need to develop tools for it, I've come up with some new and, I hope, better/quicker ways of performing certain mathematical operations. At the moment I'm working on the pseudocode so I can post them on the site in response to questions which have already been asked.
However, before I do that I want to optimise them as much as possible, so I need someone to clarify how bit operations work from a time-complexity perspective.
For example, say I want to evaluate which of two 8-bit integers is larger. I'm using 8 bits as an example, but in reality they could be much larger.
As it stands, the relational operator can be evaluated after comparing only the six MSBs. Notionally I could subtract 10000000 from both without affecting the inequality.
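To make that last point concrete, here is a minimal sketch (in Python, with the function name and width parameter my own invention) showing that when both operands agree in the top bit, clearing it — i.e. subtracting 10000000 from both when it is set — leaves the comparison unchanged:

```python
def compare_after_clearing_msb(a: int, b: int, width: int = 8) -> int:
    """Compare two width-bit unsigned ints; return -1, 0, or 1.

    If the top bits differ, they alone decide the comparison.  If they
    agree, they carry no information, so clearing them (subtracting the
    MSB's value from both operands when set) preserves the inequality.
    """
    msb = 1 << (width - 1)
    if (a & msb) != (b & msb):
        # Top bits differ: the operand with the bit set is larger.
        return 1 if a & msb else -1
    # Top bits agree: clear them and compare the remaining low bits.
    a &= msb - 1
    b &= msb - 1
    return (a > b) - (a < b)
```

The result always matches a direct comparison of the original 8-bit values, which is the point: the shared MSB can be discarded without affecting the outcome.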
Thanks.
Further clarification: I've been rereading the notes I pulled from a few places, and although Dukeling confirmed what I thought should be the case, I need to triple-check. Take, for example, the simple multiplication algorithm. I see in numerous places the claim that this is a Log^2(N) operation, the given reason being that it consists of Log(N) additions of Log(N)-bit numbers.

The problem I have with this is that, although that is true, it ignores the fact that each of those Log(N) numbers will be the result of a bit shift, and by the end we will have bit-shifted at least Log(N) times. As a bit shift by 1 is a Log(N) operation, the bit shifts considered alone give us Log^2(N) operations. It therefore makes no sense to me when I see it further claimed that, in practice, multiplication doesn't use Log^2(N) operations because various methods can reduce the number of required additions. Since the bit shifting alone gives us Log^2(N) operations, I'm left confused as to how that claim can be true.
The leading 0 bits have to be evaluated, unless you store the index of the MSB and write your own routine to do the comparison.
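A minimal sketch of that idea in Python, where `int.bit_length()` stands in for a stored MSB index (the function name is my own): if the MSB indices differ, the longer number wins outright and the leading zeros of the shorter one are never examined.

```python
def compare_unsigned(a: int, b: int) -> int:
    """Compare non-negative ints; return -1, 0, or 1.

    bit_length() plays the role of a stored MSB index: if the lengths
    differ, the longer number is larger and no bits need scanning.
    Only when the MSB indices match do we fall back to a comparison
    of the remaining bits.
    """
    la, lb = a.bit_length(), b.bit_length()
    if la != lb:
        return 1 if la > lb else -1
    return (a > b) - (a < b)
```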
Bit-shifting is an N-bit operation.
Masking is also an N-bit operation.
It depends on how you represent it internally. It's relatively easy to jump to the correct byte, but high-level languages (if you're using one) usually don't let you address a specific bit directly; you'll need some operation (e.g. a bit shift of that byte only) to get at that bit.
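As a sketch of that, assuming the number is stored as a little-endian byte string (representation and function name are my own choices for illustration): indexing jumps straight to the right byte in constant time, and then a shift-and-mask of that single byte extracts the bit.

```python
def get_bit(data: bytes, i: int) -> int:
    """Return bit i of a number stored as a little-endian byte string.

    Indexing the byte array is constant time; only one byte is then
    shifted and masked, not the whole number.
    """
    byte = data[i // 8]           # jump straight to the containing byte
    return (byte >> (i % 8)) & 1  # shift-and-mask within that byte only
```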
Concatenating strings takes O(m+n), where m and n are the lengths of the respective strings. All string implementations I know of are stored sequentially in memory. You also don't necessarily have access to the memory after either string (unless you enforce this by allocating enough memory), so both must be copied to a new location. Nothing is stopping you from creating your own data structure, though.
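The following sketch (class and function names are my own) illustrates both halves of that point: a plain concatenation copies all m+n characters into a fresh buffer, while a home-made chunk list defers the copying by storing references in O(1) and paying the O(total length) cost once, at the end.

```python
def concat_copy(s: str, t: str) -> str:
    """O(m+n) concatenation: neither string owns writable memory past
    its own end, so both are copied into a new buffer."""
    buf = []
    buf.extend(s)  # copy the m characters of s
    buf.extend(t)  # copy the n characters of t
    return "".join(buf)

class ChunkString:
    """A minimal 'own data structure': appending stores a reference in
    O(1); copying is deferred until the text is materialised."""
    def __init__(self):
        self.chunks = []

    def append(self, s: str) -> None:
        self.chunks.append(s)  # O(1): no characters copied yet

    def materialise(self) -> str:
        return "".join(self.chunks)  # one O(total length) pass
```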
So ... just a straight bit-by-bit or byte-by-byte comparison (possibly with the starting-position optimization mentioned in 1) is probably the best you're going to get.
PS - Any posted research should show sufficient proof (in some form or another) as motivation for why it is better than something else.