We want to find all permutations of a string of length n. Then, for each permutation, we search an array of fixed constant size, say 3000, and check whether that permutation is in the array.
String[] arr = new String[3000];
Because we will have n! permutations, we need to do n! searches.
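In code, I imagine something like this (a rough sketch; I'm assuming arr is kept sorted so that binary search applies, and the small sample array is only for illustration):

import java.util.Arrays;

public class PermutationLookup {

    // One binary search per completed permutation; each String comparison
    // can itself cost up to O(n) character checks.
    static int countMatches(String[] sortedArr, String prefix, String remaining) {
        if (remaining.isEmpty()) {
            return Arrays.binarySearch(sortedArr, prefix) >= 0 ? 1 : 0;
        }
        int matches = 0;
        for (int i = 0; i < remaining.length(); i++) {
            // Fix one character, permute the rest: n * (n-1) * ... * 1 = n! permutations in total
            // (assuming the characters are distinct).
            matches += countMatches(sortedArr,
                                    prefix + remaining.charAt(i),
                                    remaining.substring(0, i) + remaining.substring(i + 1));
        }
        return matches;
    }

    public static void main(String[] args) {
        // Stand-in for the fixed-size array of 3000 strings; it must be sorted for binary search.
        String[] arr = {"abc", "bca", "cba", "xyz"};
        Arrays.sort(arr);
        System.out.println(countMatches(arr, "", "abc")); // 3 of the 6 permutations are present
    }
}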
Also, what difference does it make when you check 2 different strings against an element in the array versus checking just 1 string?
What is the time complexity?
My thought is that, at worst, it will take log2(3000) comparisons to go through the array once, so the time complexity of a single search is O(log2(3000)), which is O(1). Now, you need to search this array n! times, so the overall time complexity is O(n!).
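Just to put numbers on that (my own quick check, using n = 10 as an example):

public class QuickCheck {
    public static void main(String[] args) {
        // ceil(log2(3000)) = 12 comparisons per binary search, independent of n
        int perSearch = (int) Math.ceil(Math.log(3000) / Math.log(2));
        // 10! = 3,628,800 permutations, i.e. 3,628,800 searches
        long searches = 1;
        for (int i = 2; i <= 10; i++) searches *= i;
        System.out.println(perSearch + " comparisons x " + searches + " searches");
    }
}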
So the binary search, which reduces the cost of each individual search, should not be the focus when analyzing the time complexity of this algorithm.
My question is: binary search does reduce the work per search, and if you are going to search the array n! times, shouldn't that add up to a significant difference?
Any insight to better my understanding is appreciated.
Big O complexity analysis only deals with quantities that are subject to change, by definition. That you get vacuous answers when all your quantities are constant is expected.
The constant factors are relevant when comparing two algorithms of equal Big-O, so your change from 3000 to log2(3000) comparisons per lookup is a factor of roughly 250.
Thus you use the binary search because you are doing more than Big-O analysis: you have also estimated the constant factors, and you see an easy ~250x speedup.
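As a rough back-of-the-envelope check (counting only worst-case comparisons per lookup; these are estimates, not measurements):

public class SpeedupEstimate {
    public static void main(String[] args) {
        int linearScan = 3000;                                             // check every slot
        int binarySearch = (int) Math.ceil(Math.log(3000) / Math.log(2));  // ceil(log2(3000)) = 12
        System.out.println(linearScan / binarySearch + "x fewer comparisons per lookup");
    }
}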
But equally you can have multiple terms in your complexity. You might say:
let n be the input string length and m be the size of arr. Then the complexity is

O(n * n! * log(m))

(n for the string equality, n! for the permutations, log(m) for the binary searching).

It also rather depends on a model of cost. Usually this maps back to some abstract machine, e.g. we assume that operations have a certain cost. E.g. you might compare sorting algorithms by just the count of comparisons, or by just the count of swaps, or by the counts of both comparisons and swaps.