I was wondering what the time complexity of the function duplicates would be. I was thinking it is O(n log n), because it uses the binary search algorithm and then a while loop that searches the left and right sides of the index where the first instance of the value was found in the sorted array, to see how many times it is duplicated. But I was also thinking it might be O(log n), because the while loop does not loop through all the elements, just the ones near the value we are searching for. I would appreciate any advice on making it O(log n) if it is not already.
public class Problem2 {
    public static void main(String[] args) {
        int[] arrayA = {-1, 2, 3, 5, 6, 6, 6, 9, 10};
        int repeatedValue = 6;
        System.out.println(duplicates(arrayA, repeatedValue));
    }

    public static int duplicates(int[] a, int x) {
        int index = binarySearch(a, x); // index = 4, where x is first found.
        // Condition in case the value does not exist.
        if (index == -1)
            return -1;
        int counter = 0;           // counter for duplicates.
        int leftIndex = index - 1; // leftIndex = 3
        int rightIndex = index + 1; // rightIndex = 5
        // While loop to check the left and right side of index for duplicates,
        // stopping at the array boundaries so we never read out of bounds.
        while ((leftIndex >= 0 && a[leftIndex] == x)
                || (rightIndex < a.length && a[rightIndex] == x)) {
            if (leftIndex >= 0 && a[leftIndex] == x) {
                counter++;
                leftIndex--;
            } else if (rightIndex < a.length && a[rightIndex] == x) {
                counter++;
                rightIndex++;
            }
        }
        // Returning the counter plus one because the first instance found by
        // the binary search was not counted.
        return counter + 1;
    }

    public static int binarySearch(int[] a, int x) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids overflow on large arrays
            if (a[mid] < x)
                low = mid + 1;
            else if (a[mid] > x)
                high = mid - 1;
            else
                return mid;
        }
        return -1;
    }
}
It's not O(n log n), because you're NOT doing a linear search FOR every binary search: you're doing one linear search AND one binary search. It's either O(log n) or O(n), depending on how many duplicates there are of your target number. Since O(n) is larger than O(log n), your worst-case complexity is O(n).
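For example (an illustrative input of my own, not from your post), if every element equals the target, the while loop has to expand across the whole array:

// Worst case: all n elements equal the target, so after the binary search the
// while loop in duplicates() walks left and right over the entire array.
int[] allSame = {6, 6, 6, 6, 6, 6, 6, 6};
System.out.println(duplicates(allSame, 6)); // prints 8 after about n iterations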
To say something more about your average-case complexity, we need to know what the average-case input looks like. If on average the number of duplicates (of the value you actually search for) in a list of size n is lower than log n, you would have O(log n) average-case complexity.
As for advice on how to make it O(log n):
To do a binary search, you are already doing a pre-computation step on your input, which is to sort the array.
Change the pre-computation step to not only sort the array, but also deduplicate the array and store the number of duplicates of each unique number in a separate array.
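A minimal sketch of that pre-computation step, assuming the input array is already sorted; the method name dedupWithCounts and the parallel values/counts arrays are my own naming, not anything from your code:

// Sketch: collapse a sorted array into its distinct values plus a parallel
// array holding how many times each distinct value occurs.
public static int[][] dedupWithCounts(int[] sorted) {
    int unique = 0;
    for (int i = 0; i < sorted.length; i++)
        if (i == 0 || sorted[i] != sorted[i - 1])
            unique++;
    int[] values = new int[unique]; // each distinct number, still in sorted order
    int[] counts = new int[unique]; // counts[k] = occurrences of values[k]
    int k = -1;
    for (int i = 0; i < sorted.length; i++) {
        if (i == 0 || sorted[i] != sorted[i - 1])
            values[++k] = sorted[i];
        counts[k]++;
    }
    return new int[][] { values, counts };
}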
Then the search algorithm becomes:
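Something like this sketch, assuming the values/counts arrays produced by the pre-computation above and reusing your existing binarySearch:

// Sketch: O(log n) lookup — binary-search the deduplicated values, then read
// the pre-computed count (return -1 if the value is absent, like the original).
public static int duplicatesFast(int[] values, int[] counts, int x) {
    int index = binarySearch(values, x);
    return (index == -1) ? -1 : counts[index];
}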
This only makes sense if you're searching for a lot of different target numbers on the same input data.