I have the below aggregation query for querying around 900,000 (9+ lakh) records:
db.getCollection('Job').aggregate([
    {
        "$sort": { "lastUpdatedAt": 1 }
    },
    {
        "$match": {
            "$and": [
                { "lastUpdatedAt": { "$gte": new Date("2019-04-01") } },
                {
                    "$or": [
                        { "job.scheduledEndTime": { "$lte": new Date("2019-07-15") } },
                        { "lastUpdatedAt": { "$lte": new Date("2019-07-15") } }
                    ]
                },
                { "reduced": null }
            ]
        }
    },
    { "$project": { "_id": 1 } },
    { "$sort": { "_id": 1 } },
    { "$limit": 100 }
])
For the above query I have the following compound index created, and each of the fields also has its own single-field index:
{ "lastUpdatedAt": -1, "job.scheduledEndTime": -1, "reduced": 1 }
It is a sparse index, because the reduced field is not present in all documents: I set that field myself once I get the results from the above query and perform some operations on them.
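For reference, the indexes were created roughly like this (a sketch; the exact options and sort directions of my single-field indexes may differ from what is shown here):

// Compound index over the three filtered fields; sparse because
// "reduced" is only set on documents that have already been processed.
db.getCollection('Job').createIndex(
    { "lastUpdatedAt": -1, "job.scheduledEndTime": -1, "reduced": 1 },
    { sparse: true }
)

// Separate single-field indexes on each field.
db.getCollection('Job').createIndex({ "lastUpdatedAt": 1 })
db.getCollection('Job').createIndex({ "job.scheduledEndTime": 1 })
db.getCollection('Job').createIndex({ "reduced": 1 })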
Here's what .explain() told me: https://pastebin.com/ERgChr2N
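The explain output was gathered roughly like this (a sketch; the verbosity level I actually used may have been different):

// Run the same pipeline through explain to see which index, if any, is used.
db.getCollection('Job').explain("executionStats").aggregate([
    { "$sort": { "lastUpdatedAt": 1 } },
    { "$match": { /* same filter as above */ } },
    { "$project": { "_id": 1 } },
    { "$sort": { "_id": 1 } },
    { "$limit": 100 }
])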
The above query takes around 100 to 130 seconds to return the result, so is there any way to optimise it?
Thanks in advance.
Hello all, I changed my query to a find() and I get all the results in less than 1 second. Here's my find query:
db.getCollection('Job').find({
    $and: [
        { lastUpdatedAt: { $gte: new Date("2019-04-01") } },
        {
            "$or": [
                { "job.scheduledEndTime": { "$lte": new Date("2019-07-15") } },
                { "lastUpdatedAt": { "$lte": new Date("2019-07-15") } }
            ]
        },
        { reduced: null }
    ]
}, { _id: 1 }).sort({ _id: 1 }).limit(100)
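If I understand the behaviour correctly, the equivalent aggregation should perform similarly when $match is the first stage, since the planner can then apply the index to the filter before any sorting. A sketch of that form (I have not benchmarked this exact pipeline):

// Same filter expressed as an aggregation, with $match first so the
// index can be used for the filter instead of sorting everything up front.
db.getCollection('Job').aggregate([
    {
        "$match": {
            "$and": [
                { "lastUpdatedAt": { "$gte": new Date("2019-04-01") } },
                {
                    "$or": [
                        { "job.scheduledEndTime": { "$lte": new Date("2019-07-15") } },
                        { "lastUpdatedAt": { "$lte": new Date("2019-07-15") } }
                    ]
                },
                { "reduced": null }
            ]
        }
    },
    { "$project": { "_id": 1 } },
    { "$sort": { "_id": 1 } },
    { "$limit": 100 }
])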