I am trying to implement a custom 1024-bit integer datatype in C, with operations like addition, subtraction, and multiplication. Here is the structure I am defining:
typedef enum { POSITIVE, NEGATIVE } Sign;

typedef unsigned int uint32;

typedef struct int1024_tag {
    Sign sign;
    uint32 *ints;
} bigInt;
This design lets me store the integers in 32-bit parts and operate on them efficiently.
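For context, a minimal constructor for this layout might look like the sketch below; the limb count of 32 (1024 / 32) and the least-significant-limb-first ordering are assumptions, not something fixed by the struct itself:

#include <stdlib.h>

#define LIMBS 32                       /* 1024 bits / 32 bits per limb */

/* Allocate a zeroed bigInt; ints[0] holds the least significant limb. */
bigInt bigint_zero(void) {
    bigInt n;
    n.sign = POSITIVE;
    n.ints = calloc(LIMBS, sizeof(uint32));
    return n;
}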
However, right now I need to input a hex string to initialize my bigInt variables. I want to know if there is a way to take a decimal string like char input2_dec[] = "8902384390968597266" and convert it into the hex string char input2_hex[] = "7B8B9F6FCDAA5B12", so that I can pass it to my currently defined functions.
I tried converting digit by digit, but that approach fails because it itself requires big-number arithmetic, which defeats the whole purpose of defining my own datatype and writing big-number code from scratch.
Rather than converting the decimal string to a hex string, you can convert the decimal string directly into a bigInt. Presumably you have a function that can convert an int into a bigInt with one "digit". You can use it to convert each decimal digit, then run a multiply-by-10-and-add loop over the digits.
Pseudocode:
bigInt result = bigint_0;                 /* accumulator, starts at zero */
const char *p = input;
while (*p) {                              /* note: *p, not p; stop at the NUL terminator */
    int value = *p - '0';                 /* current decimal digit */
    bigInt digit = new_bigint(value);
    /* result = result * 10 + digit */
    result = bigint_add(bigint_mult(result, bigint_10), digit);
    p++;
}
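A compilable version might look like the sketch below. The helpers new_bigint, bigint_add, and bigint_mult are assumed to exist with these signatures (they are not shown in the question), and new_bigint(v) is assumed to build a bigInt holding the small non-negative value v:

/* Sketch: parse a non-negative decimal string into a bigInt by
   repeated multiply-by-10 and add. Intermediate bigInts are leaked
   here for brevity; a real version would free them. */
bigInt bigint_from_decimal(const char *input) {
    bigInt ten = new_bigint(10);
    bigInt result = new_bigint(0);
    for (const char *p = input; *p != '\0'; p++) {
        bigInt digit = new_bigint(*p - '0');
        result = bigint_add(bigint_mult(result, ten), digit);
    }
    return result;
}

With this in place, bigint_from_decimal("8902384390968597266") produces the same value you would get from the hex string "7B8B9F6FCDAA5B12", without ever materializing a hex string. Handling a leading '-' sign and rejecting non-digit characters are left as extensions.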