Tags: c++, linux, cygwin

Converting two's complement to decimal. Output is doubled


I have a project, and part of it is to convert a given string of 0s and 1s to a decimal number (so, convert binary to decimal, both two's complement and non-two's complement). However, I am running into a weird problem. When I compile and run the program with an IDE such as Visual Studio or Code::Blocks, the output comes out right. But when I compile and run it with Cygwin or on a Linux machine, the numbers come out doubled, as if multiplied by 2. For example, the 8-bit two's complement value 1111 1111 should be -1 in decimal, but when this program is run on a Linux machine it outputs -2.

Does anyone have any idea why this is happening?

Here is the code for the function:

#include <string>
#include <cmath>
using namespace std;

int convertToDecimal(string line)
{
    int num = 0;
    if (line[0] == '1')
    {
        for (int i = 0; i < line.length(); i++)
        {
            if (line[i] == '1')
                line[i] = '0';
            else
                line[i] = '1';
        }
        for (int i = 0; i < line.length(); i++)
        {
            int  j = line.length() - 1 - i;
            if (line[j] == '1')
                num = num + pow(2.0, double(i));
        }
        num = -1 * (num + 1);
        return num;
    }

    for (int i = 0; i < line.length(); i++)
    {
        int  j = line.length() - 1 - i;
        if (line[j] == '1')
            num = num + 1 * pow(2, i);
    }
    return num;
}


Here is the expected output, which is also what I get with Code::Blocks or Visual Studio:
11111111111111111111111111111111    -1
11111111111111111111111111111110    -2
11111111111111111111111111111101    -3
11111111111111111111111111111100    -4
00000000000000000000000000001010    10

And here is what I get when it is run with Cygwin or on a Linux machine:
11111111111111111111111111111111    -2
11111111111111111111111111111110    -4
11111111111111111111111111111101    -6
11111111111111111111111111111100    -8
00000000000000000000000000001010    20

Any help is greatly appreciated. Why is it happening, and what might fix it? I have never run into a problem like this. I am running Cygwin, Code::Blocks, and VS on Windows 8.1. Also, is there a way to write the program so that it detects what machine it is being run on?


Solution

  • The standard way to read a binary number is to start with an integer value of 0, and for each bit read (left to right), multiply the current value by 2 (i.e. shift left by 1 bit) and then add the bit read to get the new value. For example:

    Input: 01011

    +---------+-----------+
    | Char In | New Value |
    +---------+-----------+
    |    0    |     0     |
    |    1    |     1     |
    |    0    |     2     |
    |    1    |     5     |
    |    1    |    11     |
    +---------+-----------+
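
    The loop the table walks through can be sketched as a small C++ function (a minimal example; `convertUnsigned` is a made-up name, not code from the question):

    ```cpp
    #include <string>

    // Shift-and-add: for each bit read left to right, double the
    // running value (shift left by 1) and add the bit just read.
    long convertUnsigned(const std::string& line)
    {
        long num = 0;
        for (char c : line)
            num = (num << 1) | (c == '1' ? 1L : 0L);
        return num;
    }
    ```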
    

    If you know that a value with a leading 1 is to be treated as a signed value, use a signed integer running value, and as a special case set it to all ones (~0L or equivalent) if the very first digit is 1.

    Input: 101011

    +---------+-----------+
    | Char In | New Value |
    +---------+-----------+
    |    1    |    -1     |
    |    0    |    -2     |
    |    1    |    -3     |
    |    0    |    -6     |
    |    1    |   -11     |
    |    1    |   -21     |
    +---------+-----------+
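
    The same shift-and-add loop handles the signed case once the running value is seeded with all ones. A sketch, assuming a two's-complement `long` (`convertSigned` is a made-up name; unsigned arithmetic is used internally to avoid left-shifting a negative value):

    ```cpp
    #include <string>

    // Seed with ~0 (all ones) when the leading bit is 1, then run
    // the same shift-and-add loop. The final cast back to signed
    // assumes a two's-complement representation.
    long convertSigned(const std::string& line)
    {
        unsigned long num = (!line.empty() && line[0] == '1') ? ~0UL : 0UL;
        for (char c : line)
            num = (num << 1) | (c == '1' ? 1UL : 0UL);
        return static_cast<long>(num);
    }
    ```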
    

    As for your specific issue, aside from the very confusing means of computing a two's complement, your code makes the mistake of assuming that the input string is composed only of 0 or 1 characters. Any other character is treated as if it were a 0 character. So if something is tacking an extra character onto the end of your string, every real bit's place value shifts up by one, and the result comes out doubled — exactly the bug you see.

    My bet is that your code that calls this function does not sanitize the string and there's an extra character (probably a \r or space) on the end.
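
    One way to guard against that is to strip trailing whitespace before converting (a minimal sketch; `trimLine` is a made-up helper, not code from the question):

    ```cpp
    #include <string>

    // Drop trailing '\r', '\n', spaces, and tabs, so a Windows CRLF
    // line ending read on Cygwin/Linux is not counted as an extra
    // character at the end of the bit string.
    std::string trimLine(std::string line)
    {
        while (!line.empty() && (line.back() == '\r' || line.back() == '\n' ||
                                 line.back() == ' '  || line.back() == '\t'))
            line.pop_back();
        return line;
    }
    ```

    Calling something like `convertToDecimal(trimLine(line))` would then give the same result on every platform, regardless of line-ending style.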