Tags: c#, biginteger, hex, ones-complement

What is the proper way to construct a BigInteger from an implied unsigned hexadecimal string?


I'm running into a problem: I have an implied unsigned hexadecimal number as a string, provided by user input, that needs to be converted into a BigInteger.

Because BigInteger is signed, any input whose highest-order bit is set (a leading hex digit of 8 or above, i.e. 1000b) is parsed as a negative number. This can't be fixed by simply checking the sign bit and multiplying by -1, or by taking the absolute value, because the parser uses two's complement, which does not preserve the intended unsigned magnitude: every all-ones value (0xF, 0xFF, 0xFFFF, ...) parses as -1.
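
To make the failure concrete, here is a minimal sketch of the sign-flip workaround going wrong (BigInteger.Parse accepts the same NumberStyles as TryParse):

using System.Globalization;
using System.Numerics;

var style = NumberStyles.HexNumber;
BigInteger parsed  = BigInteger.Parse("FF", style); // -1: the high bit is set, so two's complement applies
BigInteger flipped = BigInteger.Abs(parsed);        // 1, but the intended unsigned value was 255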

Here are some example inputs and outputs:

var style = NumberStyles.HexNumber; // HexNumber already includes AllowHexSpecifier


BigInteger.TryParse("6", style) == 6   // 0110 bin
BigInteger.TryParse("8", style) == -8  // 1000 bin
BigInteger.TryParse("9", style) == -7  // 1001 bin
BigInteger.TryParse("A", style) == -6  // 1010 bin
...
BigInteger.TryParse("F", style) == -1  // 1111 bin
...
BigInteger.TryParse("FA", style) == -6 // 1111 1010 bin
BigInteger.TryParse("FF", style) == -1 // 1111 1111 bin
...
BigInteger.TryParse("FFFF", style) == -1 // 1111 1111 1111 1111 bin

What is the proper way to construct a BigInteger from an implied unsigned hexadecimal string?


Solution

  • Prefixing your hex string with a "0" should do it:

    BigInteger.TryParse(string.Format("0{0}", "FFFF"), style, CultureInfo.InvariantCulture, out var result)
    

    My BigInteger is 65535 in the example above.
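
    As a reusable form, here is a minimal sketch of the same fix wrapped in a helper; the name ParseUnsignedHex is my own, not part of the framework:

    using System.Globalization;
    using System.Numerics;

    // Hypothetical helper: the prepended "0" keeps the leading digit's high bit
    // clear, so the digits always parse as a positive (unsigned) value.
    static BigInteger ParseUnsignedHex(string hex) =>
        BigInteger.Parse("0" + hex, NumberStyles.HexNumber, CultureInfo.InvariantCulture);

    // ParseUnsignedHex("FFFF") == 65535
    // ParseUnsignedHex("FF")   == 255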

    Edit

    Excerpt from the BigInteger documentation:

    When parsing a hexadecimal string, the BigInteger.Parse(String, NumberStyles) and BigInteger.Parse(String, NumberStyles, IFormatProvider) methods assume that if the most significant bit of the first byte in the string is set, or if the first hexadecimal digit of the string represents the lower four bits of a byte value, the value is represented by using two's complement representation. For example, both "FF01" and "F01" represent the decimal value -255. To differentiate positive from negative values, positive values should include a leading zero. The relevant overloads of the ToString method, when they are passed the "X" format string, add a leading zero to the returned hexadecimal string for positive values.
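
    To illustrate the last point about ToString("X"), here is a quick round-trip; the example is mine, but it follows the documented behavior:

    var value = BigInteger.Parse("0FFFF", NumberStyles.HexNumber); // 65535, thanks to the leading zero
    Console.WriteLine(value.ToString("X"));                        // prints "0FFFF": the leading zero marks it positive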