First of all, a disclaimer: "hash" is a somewhat inaccurate term for what I'm aiming for, so please feel free to suggest a better title.
At any rate, I'm currently attempting to program a complex spatial algorithm that runs in real time. To save cycles, I've decided to generate a lookup table containing all 32,000 possible input combinations.
If I were to do this conventionally, the fields (inclusive range and field count: 2x 0 to +15 and 3x -2 to +2) would be mapped to two four-bit and three three-bit values respectively, giving me a lookup-table size of 2^(2*4 + 3*3) = 131,072 entries, over four times the 32,000 I actually need.
Given the nature of the algorithm, collisions would absolutely cripple its functionality, so traditional hash functions are out unless I could guarantee zero collisions across all relevant values. Beyond that, the structure I'm working with is rather large (i.e., I would /really/ like to avoid allocating more than 200% of what I need). Finally, since this table will be referenced so often, I'd like to avoid the overhead of a traditional hash table, in both the bucket lookups and an excessively complex hash function.
Having approached this from a more traditional computer-science angle, I'm beginning to strongly suspect the solution lies in some base-conversion mathematics I'm completely ignorant of. Any idea if this is the case?
You can calculate an index the same way you calculated the maximum number of combinations: treat the fields as digits of a mixed-radix number. Take each element from most significant to least significant, add a constant so it ranges from 0 to n-1, and multiply it by the product of the ranges of all less significant elements (the number of combinations remaining). This is exactly the base-conversion math you suspected, and the resulting index is dense and collision-free.
Given your 0 to 15 values of a, b (range of 16) and -2 to +2 values of c, d, e (range of 5):
index = a * 16*5*5*5 + b * 5*5*5 + (c+2) * 5*5 + (d+2) * 5 + (e+2);
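For concreteness, here is a minimal C sketch of that formula (in Horner form, which is equivalent to the expression above) together with its inverse; the names make_index, split_index, and TABLE_SIZE are just illustrative:

    #include <assert.h>
    #include <stdio.h>

    #define TABLE_SIZE (16 * 16 * 5 * 5 * 5)   /* exactly 32,000 entries */

    /* Mixed-radix encoding: shift each field to 0..n-1, then weight it
       by the product of the ranges of all less significant fields. */
    static int make_index(int a, int b, int c, int d, int e)
    {
        assert(0 <= a && a <= 15 && 0 <= b && b <= 15);
        assert(-2 <= c && c <= 2 && -2 <= d && d <= 2 && -2 <= e && e <= 2);
        return (((a * 16 + b) * 5 + (c + 2)) * 5 + (d + 2)) * 5 + (e + 2);
    }

    /* Inverse: peel the fields back off with div/mod, least significant first. */
    static void split_index(int index, int *a, int *b, int *c, int *d, int *e)
    {
        *e = index % 5 - 2;  index /= 5;
        *d = index % 5 - 2;  index /= 5;
        *c = index % 5 - 2;  index /= 5;
        *b = index % 16;     index /= 16;
        *a = index;
    }

    int main(void)
    {
        int a, b, c, d, e;
        split_index(make_index(15, 7, -2, 0, 2), &a, &b, &c, &d, &e);
        printf("%d %d %d %d %d (table size %d)\n", a, b, c, d, e, TABLE_SIZE);
        return 0;
    }

Since the mapping is a bijection onto 0..31,999, the table holds exactly 32,000 entries with no collisions and no wasted slots, and each lookup costs only a few multiplies and adds.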