In my intro artificial intelligence course, we looked at several different search algorithms and their properties. When analyzing their space and time complexities, we would derive a big-O bound expressed in terms of properties of the search tree itself (for example, the branching factor b or the depth of the shallowest solution d).
My problem is that defining time and space complexity in this way seems completely useless, given that the whole reason we are searching for a solution in the first place is that we don't know the nature of the tree or the solution (the path to it, how deep it lies, etc.). So, as I currently understand it, this form of time/space complexity analysis does not give someone who must select an algorithm for their search problem a meaningful way to discern which one is better (in terms of time/space complexity).
Citing @Matt Timmermans' comment above as it answers my question:
"...you would typically have an expectation about average branching factor, and m would be a value that you choose depending on the complexity of the search algorithm. i.e., "how much of the tree it is practical to explore"."
From what I understand (also drawing from @AbcAeffchen's answer), we estimate the values of variables like b and m to produce a complexity estimate specific to our tree, so we don't have to fall back on a worst-case analysis that often does not reflect reality. This gives us a more fine-grained estimate. We would not, however, need to know the actual values of b and m; rough estimates suffice.
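To make this concrete, here is a minimal back-of-the-envelope sketch of how such estimates can be used in practice. It plugs *assumed* values for the branching factor and solution depth into the textbook node-count bounds for breadth-first search and iterative deepening; the particular numbers (b_est, d_est, the per-node cost) are illustrative assumptions, not properties of any real problem.

```python
def bfs_nodes(b: float, d: int) -> float:
    """Worst-case nodes generated by breadth-first search: b + b^2 + ... + b^d."""
    return sum(b ** i for i in range(1, d + 1))


def ids_nodes(b: float, d: int) -> float:
    """Worst-case nodes generated by iterative deepening:
    d*b + (d-1)*b^2 + ... + 1*b^d (shallow levels are re-expanded each iteration)."""
    return sum((d - i + 1) * b ** i for i in range(1, d + 1))


if __name__ == "__main__":
    b_est = 8           # estimated average branching factor (assumption)
    d_est = 12          # estimated depth of the shallowest solution (assumption)
    ns_per_node = 100   # rough cost to generate one node, in nanoseconds (assumption)

    for name, nodes in [("BFS", bfs_nodes(b_est, d_est)),
                        ("IDS", ids_nodes(b_est, d_est))]:
        seconds = nodes * ns_per_node * 1e-9
        print(f"{name}: ~{nodes:.2e} nodes, ~{seconds:.1f} s at {ns_per_node} ns/node")
```

Even though the estimates are crude, a calculation like this is usually enough to tell whether exploring the tree to a given depth is practical at all, and how two algorithms compare for the problem at hand.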