I understand the mechanics of carrying out the different types of complexity analysis for algorithms, but I have been given a few scenarios and asked which type of analysis I would use for each.
The types of analysis are "worst-case", "average-case", "amortized".
Surely, to ensure that algorithms are as efficient as possible, we would always choose worst-case analysis?
I realise this is somewhat subjective, but presumably there are merits to using each of the analysis methods?
These are the four scenarios I was given in a recent job interview; I could not decide on any of them apart from the one about the pilot.
A company has invented a new web search engine and wishes to analyse how quickly it returns results for a set of common search queries.
A pilot is flying a plane and his inputs on the control stick are converted into wing surface movements by calculations made in software. The stability of the plane depends on fast responses; we want to analyse whether the plane is safe.
A database is sorted the first time a query is made, if it was previously unsorted. We want to analyse how long a sequence of consecutive queries would take to perform on this database system (a toy sketch of what I mean is below, after the scenarios).
A cloud computing company hosts an algorithm for weather forecasting and needs to guarantee that it can compute the next national daily forecast from pressure and other observation data in under 4 hours.
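To make the database scenario concrete, here is a toy Python sketch of the kind of system I mean (the class and names are mine, not from the interview): the first query pays for sorting the whole table, and every later query just does a binary search on the sorted copy.

```python
import bisect

class LazySortedTable:
    """Toy model of the database scenario: the data is sorted on the
    first query, and every subsequent query reuses the sorted copy."""

    def __init__(self, records):
        self.records = list(records)
        self.is_sorted = False

    def query(self, key):
        if not self.is_sorted:
            self.records.sort()        # only the first query pays this cost
            self.is_sorted = True
        i = bisect.bisect_left(self.records, key)
        return i < len(self.records) and self.records[i] == key
```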
For real-time systems you need worst-case analysis; that covers your plane-safety scenario and your guaranteed national forecast.
There are many applications where you may want amortized or average-case analysis (provided that you know the "average case" input distribution), or even smoothed analysis, alongside the worst case. There are systems where the choice of the "best" algorithm depends on whether you care about the worst case or the average case, and sometimes they run multiple algorithms in parallel: whichever finishes first aborts the others and outputs its result.
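As a concrete illustration of that last point, here is a minimal Python sketch of such a race: submit the same input to several candidate algorithms and take the first result that comes back. The thread-based setup and the quicksort/mergesort names in the usage comment are assumptions for illustration only; a real system would run the candidates in separate processes so the losers can actually be aborted.

```python
import concurrent.futures

def race(algorithms, data):
    """Run several candidate algorithms on the same input concurrently and
    return the result of whichever finishes first.

    Purely illustrative: Python threads cannot be forcibly aborted (and the
    GIL limits CPU-bound parallelism), so the losers simply keep running in
    the background until they finish on their own.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(algorithms))
    futures = [pool.submit(alg, data) for alg in algorithms]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # do not block waiting for the slower candidates
    return next(iter(done)).result()

# Hypothetical usage: race an algorithm with a good average case against one
# with a good worst-case guarantee, and keep whichever wins on this input.
# result = race([quicksort, mergesort], data)
```

The point is that the winner on a given input may be the algorithm with the better average case or the one with the better worst-case bound, which is exactly why both kinds of analysis matter.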