I am currently using Mongoose for a project and am trying to optimize a few aggregation queries.
Caching the final results of a MongoDB query is fairly straightforward:
const res = cache.get(key);
if (res) {
  return res;
}
return MyModel.aggregate([ /* stages */ ]).then(docs => {
  cache.add(key, docs);
  return docs;
});
But in my scenario, I have a bunch of aggregations that share the same heavy operations in the very first stages of their pipelines:
const c = MyModel.aggregate([
  { $match: {} },
  { $project: {} },
  { $unwind: {} },
  // extra stages for c
]);
const d = MyModel.aggregate([
  { $match: {} },
  { $project: {} },
  { $unwind: {} },
  // extra stages for d
]);
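For reference, the shared prefix can at least be factored out into a plain array and spread into each pipeline. This removes the duplication in code, although the server still re-executes those stages for every call (the names `baseStages`, `pipelineC`, and `pipelineD` are mine, and the stage bodies are placeholders):

```javascript
// Hypothetical shared prefix of heavy stages (placeholder bodies).
const baseStages = [
  { $match: {} },
  { $project: {} },
  { $unwind: {} },
];

// Each pipeline reuses the prefix and appends its own stages.
const pipelineC = [...baseStages, /* extra stages for c */ { $sort: { _id: 1 } }];
const pipelineD = [...baseStages, /* extra stages for d */ { $limit: 10 }];

// With Mongoose these would then be run as:
//   MyModel.aggregate(pipelineC)
//   MyModel.aggregate(pipelineD)
```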
I have already optimized my schemas with indexes and use the allowDiskUse option, but I am looking for a bit more juice.
Is there any way to populate the first stages of a pipeline from a caching mechanism, or even a way to pipe one aggregation into another? Does MongoDB cache any intermediate results within the stage pipeline?
Moving the transformations to the client side is not an option, as I want to use as much power as I can from my database.
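One workaround I have seen suggested is $facet, which runs the shared stages once and then branches into named sub-pipelines, so the heavy prefix is only executed a single time per request. I am not sure it fits my case, since each facet's output is a single document subject to the 16 MB BSON limit. A sketch with placeholder stage bodies:

```javascript
// Sketch: the heavy shared stages run once; each facet then applies
// its pipeline-specific stages to the shared intermediate results.
const pipeline = [
  { $match: {} },    // shared heavy stages (placeholders)
  { $project: {} },
  { $unwind: {} },
  {
    $facet: {
      resultsForC: [ /* extra stages for c */ { $limit: 100 } ],
      resultsForD: [ /* extra stages for d */ { $count: "total" } ],
    },
  },
];

// With Mongoose this would be run as:
//   MyModel.aggregate(pipeline).then(([res]) => { ... res.resultsForC ... })
```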
Thanks in advance.