Currently there are these two functions for converting an integer to a string in a given character set, and back:
const parseInt = (value, code) => {
    return value.split('').reduce((r, a) => r * code.length + code.indexOf(a), 0);
}

const toString = (value, code) => {
    var digit,
        radix = code.length,
        result = '';
    do {
        digit = value % radix;
        result = code[digit] + result;
        value = Math.floor(value / radix);
    } while (value);
    return result;
}
toString(115, 'abcdefghijklmnop') // => 'hd'
However, they are inefficient by the standards of C or other languages with static memory allocation. The line `result = code[digit] + result` builds the string one character at a time, regenerating the entire string on each iteration.
How can this be streamlined so it avoids that repeated regeneration?
And how would you change `parseInt` so it reverses the new `toString` once `toString` is reimplemented?
Also, is it possible to get rid of the `Math.floor`? If not, that is okay, but I hope there is a way. If `Math.floor` can be avoided under certain constraints on the character-set length or the input size, please mention that in the comments and I may ask a follow-up question about it. Thank you!
Obviously you could reverse the function (somehow) and add a `.reverse()` on the character array, but ideally we would optimize by removing and rearranging steps rather than adding extra ones. This is to be highly optimized.
Note: the solution's output doesn't have to match the algorithms above. The output can be reversed; that is okay. I'm just looking for a highly optimized variant that accomplishes the same thing: encoding an integer into a string using a character set, and decoding it back to the integer.
The number of digits you need for a strictly positive `value` encoded with `radix` is given by this formula, truncated to the nearest integer:

⌊1 + log_radix(value)⌋
Reading your question, I suspect that you want to avoid floating point arithmetic, and hence the above formula. In that case you need a "dry run" to determine the size needed for the result, i.e. computing the above expression through iteration using only integer operations.
So here is how that could look:
const parseInt = (value, code) => {
    let result = 0,
        radix = code.length;
    for (let a of value) {
        result = result * radix + code.indexOf(a);
    }
    return result;
}
const baseLog = (base, x) => Math.log2(x) / Math.log2(base);
const toString = (value, code) => {
    let digit,
        radix = code.length,
        result,
        len = 1,
        temp = value;
    // Calculate len = Math.max(1, 1 + Math.floor(baseLog(radix, value)))
    while (temp >= radix) {
        len++;
        temp = (temp - temp % radix) / radix;
    }
    // In another language you would allocate the memory here:
    result = Array(len);
    do {
        digit = value % radix;
        len--;
        // Fill from back to start of array
        result[len] = code[digit];
        value = (value - digit) / radix;
    } while (len);
    return result.join(""); // Convert char-array to string
}
console.log(toString(115, 'abcdefghijklmnop')); // => 'hd'
console.log(parseInt("hd", 'abcdefghijklmnop')); // => 115
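Since you said the output may be reversed, here is a sketch (my own variant, not part of the code above) that drops the dry run entirely: it emits digits least-significant-first by appending instead of prepending, and the matching decoder walks the string with a growing place value instead of Horner's scheme. The names `toStringRev` and `parseIntRev` are just illustrative.

```javascript
// Encode least-significant digit first; output is the reverse of toString's.
const toStringRev = (value, code) => {
  const radix = code.length;
  let digit, result = '';
  do {
    digit = value % radix;
    result += code[digit];            // append, never prepend
    value = (value - digit) / radix;  // exact integer division, no Math.floor
  } while (value);
  return result;
};

// Decode by accumulating digit * place, where place grows by radix each step.
const parseIntRev = (value, code) => {
  const radix = code.length;
  let result = 0, place = 1;
  for (const a of value) {
    result += code.indexOf(a) * place;
    place *= radix;
  }
  return result;
};

console.log(toStringRev(115, 'abcdefghijklmnop')); // => 'dh' (reverse of 'hd')
console.log(parseIntRev('dh', 'abcdefghijklmnop')); // => 115
```

In a language with static allocation this still needs the length up front, but in JavaScript it trades the length pre-pass for string appends, which engines handle much better than prepends.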
If we ask the question "what is the greatest integer that we can write in binary that has 8 digits?" then the answer is:
11111111
If we ask the same question, but for hexadecimal digits (still 8 digits), then it is:
FFFFFFFF
And with decimal digits it would be:
99999999
For any choice of radix, it is radix^8 − 1. And when the number of digits is n, it generalises to:

maxvalue = radix^n − 1

If you move the 1 to the other side of the equation and then take the logarithm of both sides, you get:

log_radix(maxvalue + 1) = n

And so, if we are given the value instead of the number of digits, then we derive the number of digits (n) with:

n = ⌈log_radix(value + 1)⌉

Which (for integer value) is equivalent to:

n = ⌊1 + log_radix(value)⌋
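To illustrate the equivalence, here is a small check (my own, assuming the helper names `digitsByLog` and `digitsByLoop`) comparing the closed-form digit count against the iterative count that the `toString` above uses:

```javascript
// Closed form: n = floor(1 + log_radix(value)), for strictly positive value.
const digitsByLog = (value, radix) =>
  Math.floor(1 + Math.log(value) / Math.log(radix));

// Iterative form: repeated exact integer division, as in the dry run.
const digitsByLoop = (value, radix) => {
  let n = 1;
  while (value >= radix) {
    value = (value - value % radix) / radix;
    n++;
  }
  return n;
};

console.log(digitsByLog(115, 16));  // => 2 ('hd' has two digits)
console.log(digitsByLoop(115, 16)); // => 2
console.log(digitsByLoop(256, 16)); // => 3 (0x100)
```

Note that the floating-point form can be off by one for values near an exact power of the radix, which is another reason to prefer the integer iteration.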