I understand that in the particular scenario of the best case of binary search, the running time (a constant) cannot be bounded below by any positive constant multiple of log n (since log n approaches infinity as n approaches infinity). This makes Θ(log n) inapplicable to the best case, and thus it would be wrong to say that Θ(log n) describes the runtime growth of binary search in all cases.
Is my reasoning correct?
If so, does this mean that there is always an implicit set of cases we are describing when using big-O/Θ/Ω notation? For instance, if someone says an algorithm runs in O(n^2) time, do they implicitly mean it runs in O(n^2) in all cases (best, worst, and average), in a subset of the cases (e.g. worst and average, but not best), or in one particular case? In other words, is there no "general" big-O/Θ/Ω description for an algorithm, only descriptions for specific cases (where it may only sometimes happen that all cases are described the same way)?
I would say you are correct in your reasoning. Binary search is not Θ(log n) for every case.
The following statements would be correct:

- binary search runs in O(log n) time (an upper bound that holds for every case),
- binary search runs in Θ(log n) time in the worst case,
- binary search runs in Θ(1) time in the best case.
However (and this is a matter of interpretation), I might also forgive someone for saying

- binary search runs in Θ(log n) time,

since often the average or worst case is implicitly assumed. But as you say, it is not true for every case, since for specific inputs binary search runs in O(1).
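To make that concrete, here is a minimal sketch of an iterative binary search in Python, with a comparison counter added purely for illustration (it is not part of the algorithm itself). A target that happens to sit at the first midpoint is found after a single comparison, while a missing target forces the interval to be halved all the way down, roughly log2 n times.

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list.

    Returns (index, comparisons); the comparison counter is only there
    to illustrate best vs. worst case behaviour.
    """
    lo, hi = 0, len(arr) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if arr[mid] == target:
            return mid, comparisons       # found
        elif arr[mid] < target:
            lo = mid + 1                  # discard the left half
        else:
            hi = mid - 1                  # discard the right half
    return -1, comparisons                # not present

arr = list(range(1_000_000))

# Best case: the target sits at the very first midpoint, so the search
# finishes after a single comparison -> O(1).
first_mid = (0 + len(arr) - 1) // 2
print(binary_search(arr, arr[first_mid]))   # (499999, 1)

# Worst case: the target is absent, so the interval is halved until it
# is empty -> roughly log2(1_000_000) ≈ 20 comparisons -> Θ(log n).
print(binary_search(arr, -1))
```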
The nice thing about big-O notation, as opposed to big-Θ, is that it expresses only an upper bound on the complexity, meaning we can use it to say that an algorithm is fast without having to care about trivial or "lucky" cases. If you say an algorithm runs in O(n^2), that typically means the worst case, because then the bound applies to every case. If not, you tend to specify that it is e.g. the average case only, as is often done for quicksort. This is why you usually see big-O rather than big-Θ when algorithm complexities are listed, e.g. on the Wikipedia page for binary search.