The hash table does not rehash. We use the simple division method as the hash function, and we assume it distributes entries roughly uniformly across the buckets. The goal is O(1) insertion, deletion, and find.
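A minimal sketch of that kind of table, assuming separate chaining and a fixed bucket count (both are assumptions; the original doesn't specify a collision strategy, and the names here are placeholders):

```c
#include <stdlib.h>

/* Hypothetical fixed bucket count; the table never rehashes. */
#define NUM_BUCKETS 1024u

struct entry {
    unsigned key;
    int value;
    struct entry *next;        /* separate chaining for collisions */
};

static struct entry *buckets[NUM_BUCKETS];

/* Division method: bucket index is the key modulo the bucket count. */
static unsigned hash(unsigned key)
{
    return key % NUM_BUCKETS;
}

/* O(1) expected insert, assuming the hash spreads keys evenly. */
static void ht_insert(unsigned key, int value)
{
    unsigned b = hash(key);
    struct entry *e = malloc(sizeof *e);
    if (e == NULL)
        return;                /* out of memory; a real table would report this */
    e->key = key;
    e->value = value;
    e->next = buckets[b];      /* prepend to the bucket's chain */
    buckets[b] = e;
}

/* O(1) expected find; returns NULL if the key is absent. */
static struct entry *ht_find(unsigned key)
{
    for (struct entry *e = buckets[hash(key)]; e != NULL; e = e->next)
        if (e->key == key)
            return e;
    return NULL;
}
```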
The optimal number of buckets is a compromise between memory consumption and hash collisions, and it depends on the intended usage pattern.
For example, if the table is used very frequently, you might limit its size to half the size of a CPU's cache to reduce the chance of a cache miss when accessing it; this can be faster than a larger table, which would suffer more cache misses despite having a lower chance of hash collisions. Alternatively, if the table is used infrequently (so you expect cache misses regardless of its size), a larger size is more likely to be optimal.
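As a rough worked example (every number here is an assumption, not a measurement):

```c
/* Illustrative only: assume a 32 KiB L1 data cache and 16-byte buckets.
 * "Half the cache" then gives (32 * 1024) / 2 / 16 = 1024 buckets. */
#define ASSUMED_L1D_BYTES       (32u * 1024u)
#define BUCKET_BYTES            16u
#define CACHE_FRIENDLY_BUCKETS  (ASSUMED_L1D_BYTES / 2u / BUCKET_BYTES)  /* = 1024 */
```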
Of course, real systems have multiple caches (L1, L2, L3) plus virtual memory translation caches (TLBs), RAM limits, and swap space limits; real software has more than just one hash table competing for resources in the memory hierarchy; and software developers often have no idea what other processes might be running (competing for physical RAM, polluting caches, etc.) or what any end user's hardware looks like (cache sizes, etc.). All of this makes it virtually impossible to determine "optimal" with any method, including extensive benchmarking.
The only practical option is to take an educated guess based on various assumptions (about usage, the amount of data, how good the hash function will be in practice, the CPU, the other things that might be using CPUs and memory, ...), and to make the size configurable in the source code (e.g. #define HASH_TABLE_SIZE ...) so you can easily re-assess the guess later.
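One common way to keep that guess easy to revisit is a compile-time default that can be overridden from the build command; the default value and file name below are placeholders:

```c
/* Educated-guess default; override at build time, e.g.
 *   cc -DHASH_TABLE_SIZE=4096 table.c            */
#ifndef HASH_TABLE_SIZE
#define HASH_TABLE_SIZE 1024u
#endif

static unsigned hash(unsigned key)
{
    return key % HASH_TABLE_SIZE;   /* division method, as before */
}
```

Changing the size then only requires a rebuild, so the guess can be re-benchmarked whenever the usage pattern or target hardware changes.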