I'm having trouble understanding the differences between these 3 notations in the worst case.
worst case of big O = upper bound; the worst-case running time will never exceed this bound?
worst case of big Omega = lower bound; the worst-case running time will never drop below this bound?
worst case of big Theta = both an upper and a lower bound; the worst-case running time lies between these bounds?
Edit:
Sorry for being unclear. I'm not sure whether my understanding of worst case of big O, worst case of big Omega, and worst case of big Theta is correct. What I wrote after the "=" is what I think each one means.
What you outline seems generally correct. An important distinction to draw is the one between the case and the bound.
The case is the class of inputs currently under consideration. You might consider the class of inputs on which your algorithm performs worst, or the class on which it performs best.
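To make the idea of a "case" concrete, here is a small sketch (the function `linear_search` and its comparison counter are made up for illustration). For a linear scan, the best-case input puts the target at the front; the worst-case input omits the target entirely:

```python
def linear_search(items, target):
    """Scan items left to right; return (index, comparisons), index -1 if absent."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))
# Best-case input for this algorithm: target at the front -> 1 comparison.
print(linear_search(data, 0)[1])    # 1
# Worst-case input: target absent -> n comparisons.
print(linear_search(data, -5)[1])   # 1000
```

Same algorithm, same bound-finding machinery; only the class of inputs changes.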
The bound is a function which says how the resources used by your algorithm - usually time or space - change with changing inputs. Upper bounds and lower bounds are typical.
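For reference, the standard definitions can be sketched as follows, where f is the resource function for your chosen case and g is the bounding function:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

Note that nothing in these definitions mentions worst or best case; that choice fixes f before you ask which bounds hold.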
You are free to mix and match cases and bounds as you see fit.
Maybe you'd like to know an upper bound on the worst case of your algorithm, so you know what scaling will look like in the worst possible case. This can give you confidence that your solution will scale well enough regardless of how it is used.
Maybe you'd like to know a lower bound on the worst case to know whether your solution is as fast as it can possibly be - that is, whether its scaling behavior is optimal for the problem you're trying to solve. For instance, we know that comparison-based sorting is Omega(n log n) without any additional assumptions. If you write a sorting procedure, do you achieve this bound, or is your method slower?
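You can check a claim like this empirically. Here is a sketch: `Counted` is a made-up wrapper whose `__lt__` tallies comparisons, and CPython's built-in `sorted` happens to be an O(n log n) sort (Timsort), so its comparison count should sit far below the n(n-1)/2 worst case of a quadratic sort:

```python
import random

class Counted:
    """Wrapper that counts comparisons made while sorting (illustrative)."""
    count = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.count += 1
        return self.v < other.v

n = 1024
random.seed(0)
xs = [Counted(v) for v in random.sample(range(10 * n), n)]
Counted.count = 0
ys = sorted(xs)
# Any correct comparison sort needs at least n - 1 comparisons and never
# more than n * (n - 1) / 2; an O(n log n) sort stays near n * log2(n).
print(Counted.count)
```

On random input of this size you should see a count in the rough vicinity of n log2(n) ≈ 10240, nowhere near the ~524k a quadratic method could need.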
Best-case analysis has its uses as well. You can also do average-case analysis by taking some distribution over inputs of a given size and finding the corresponding bounds; a related kind of analysis is amortized analysis, where your "case" is actually a sequence of operations.