I have a large dictionary that I currently define like this:
my_dict = { … }
Because it has a lot of keys (4000 keys, and I have 101 dictionaries of that size), defining all 101 dictionaries inline like that seems like bad practice.
What should I do? Should I put those dictionaries in a list, write that list to a pickle file, and load the pickle file at runtime?
Is there a better solution? I'm mainly interested in a solution that is efficient in terms of time. I cannot use SQL because I want to distribute the code to different computers that aren't necessarily connected to the same database or to the internet.
I'm using Python 3.10.2.
Both pickle and JSON files need to be loaded fully before they can be used, so you'd be shipping 101 separate pickles or JSON files if you wanted to access one dict without loading the rest; and even then, you'd still need to load that one dict fully into memory to access any of its keys.
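To make that concrete, here's a minimal sketch of the per-file pickle approach (the file name is hypothetical); notice that there's no way to read a single key without deserializing the whole object first:

```python
import pickle

# Even to read one key, pickle.load() must deserialize the entire
# dict (~4000 entries) into memory; there is no partial read.
with open("dict_042.pkl", "rb") as f:  # hypothetical per-dict file
    one_dict = pickle.load(f)

value = one_dict[2345]
```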
Honestly, the better solution is SQL, in particular the sqlite3 module built into Python, and no, you don't need to be connected anywhere.
You get cheap random lookup (SELECT value FROM data WHERE dict_id = 123 AND key = 2345), cheap "all keys in this dict" (SELECT key, value FROM data WHERE dict_id = 345), and cheap "everything" (SELECT dict_id, key, value FROM data), and if you need more, you can extend that with additional indexes.
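Here's a sketch of what that could look like, assuming your keys and values are ints and strings (adjust the column types to whatever your data actually is; `all_dicts`, `build_db`, and `lookup` are names I made up for illustration):

```python
import sqlite3

def build_db(all_dicts, path="data.db"):
    """One-time build step: pack all 101 dicts into a single SQLite file."""
    con = sqlite3.connect(path)
    # The composite primary key doubles as the index that makes both
    # the single-key lookup and the per-dict scan cheap.
    con.execute(
        "CREATE TABLE data (dict_id INTEGER, key INTEGER, value TEXT, "
        "PRIMARY KEY (dict_id, key))"
    )
    con.executemany(
        "INSERT INTO data VALUES (?, ?, ?)",
        (
            (dict_id, key, value)
            for dict_id, d in enumerate(all_dicts)
            for key, value in d.items()
        ),
    )
    con.commit()
    con.close()

def lookup(con, dict_id, key):
    """Random access at run time: reads only the rows it needs,
    never the whole file."""
    row = con.execute(
        "SELECT value FROM data WHERE dict_id = ? AND key = ?",
        (dict_id, key),
    ).fetchone()
    return row[0] if row else None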
You would then just ship your premade SQLite file with your application (there are, heh, application notes on that).
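On the target machines, the code then just opens the shipped file; opening it read-only via a URI (a standard sqlite3 feature) makes it clear nothing is being written and no server or network is involved:

```python
import sqlite3

# Open the bundled database read-only; "data.db" is the file you ship.
con = sqlite3.connect("file:data.db?mode=ro", uri=True)

# Iterate one dict's worth of entries without touching the other 100.
for key, value in con.execute(
    "SELECT key, value FROM data WHERE dict_id = ?", (7,)
):
    print(key, value)
```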