I came across this problem and am wondering if there is a better complexity to solve it.
For example:
DESIRED OUTPUT ==> [2, 4]
EDIT: Added another example:
DESIRED OUTPUT ==> [2,4]
So far, the approaches I've found and tried run in O(n log n) + O(b log n) and in O(n), where n is the size of the input array and b is the number of queries.
O(n log n) + O(b log n) approach:
// Returns the index of the last element <= target, or -1 if none.
// arr must be sorted; pass by const reference to avoid copying it per query.
int binarysearch(const vector<int>& arr, int l, int r, int target)
{
    while(l <= r)
    {
        int mid = l + (r - l) / 2; // avoids overflow of (l + r)
        if(arr[mid] > target)
            r = mid - 1;
        else
            l = mid + 1;
    }
    return r;
}
vector<int> counts(vector<int> a, const vector<int>& b)
{
    vector<int> result;
    sort(a.begin(), a.end());                                 // O(n log n)
    for(auto i : b){
        // index of the last element <= i, so the count is index + 1
        int count = binarysearch(a, 0, (int)a.size() - 1, i) + 1; // O(log n) per query
        result.push_back(count);
    }
    return result;
}
O(n) approach (really O(n + max(b))):
vector<int> counts(const vector<int>& a, const vector<int>& b)
{
    vector<int> result;
    int maxi = *max_element(b.begin(), b.end()) + 1;
    vector<int> mymap(maxi, 0);        // a VLA like int mymap[maxi] is non-standard C++
    for(auto i : a)
        if(i < maxi) mymap[i]++;       // values above max(b) can never be counted
    for(int i = 1; i < maxi; i++){
        mymap[i] = mymap[i] + mymap[i-1]; // prefix sums: mymap[i] = #values <= i
    }
    for(auto i : b){
        result.push_back(mymap[i]);
    }
    return result;
}
Okay, it turns out there's a faster way to compute the map index. For example, given a = {1,4,2,4,5,8,80} and b = {3,1000000}, the desired output would be [2,7].
Using my previous approach, I would need to fill mymap[4], mymap[5], ..., mymap[9999], ..., all the way up to mymap[1000000], even though almost none of those indices correspond to values in a. That's why the program crashes with a runtime error.
The way to handle this is to store the counts in a std::map, iterate over its entries with for(auto& entry : mymap) to build prefix sums over only the keys that actually occur, and then use the upper_bound member function to find the correct map entry for each query.
vector<int> counts(const vector<int>& nums, const vector<int>& maxes){
    vector<int> result;
    map<int, unsigned int> mymap;
    for(auto i : nums) mymap[i]++;
    // prefix sums over the keys that actually occur -- doesn't need to run 1000000 times
    unsigned int temp = 0;
    for(auto& entry : mymap){
        entry.second = entry.second + temp;
        temp = entry.second;
    }
    for(auto i : maxes){
        auto itl = mymap.upper_bound(i);       // first entry with key > i
        if(itl == mymap.begin())
            result.push_back(0);               // every value in nums is > i
        else
            result.push_back((--itl)->second); // count of values <= i
    }
    return result;
}