I am using the MongoDB aggregation framework for all GET requests, both those that fetch a single document and those that fetch multiple documents. For a few requests I have around 90 to 100 stages in my aggregation pipeline ($match, $skip, $limit, $lookup, $unwind, $addFields, $group), and I maintain these stages sequentially in that order. If I pass a limit greater than 50, I get an error saying the $group stage has exceeded the memory limit; otherwise the query takes around 60 seconds to execute. In my database, every collection has more than 60,000 documents. How do I resolve this? Does sharding solve my problem?
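For reference, here is a simplified sketch of the pipeline shape described above. The field names, values, and the "orders" collection are hypothetical placeholders (the real query is not shown), and the real pipeline repeats stages like these until it reaches 90 to 100 in total:

```javascript
// Hypothetical sketch of one pass through the stage sequence described above:
// $match, $skip, $limit, $lookup, $unwind, $addFields, $group.
// All field and collection names are placeholders, not the actual query.
const pipeline = [
  { $match: { status: "active" } },   // filter stage
  { $skip: 0 },
  { $limit: 50 },                     // limits above 50 hit the $group memory error
  { $lookup: {
      from: "orders",                 // hypothetical joined collection
      localField: "_id",
      foreignField: "userId",
      as: "orders",
  } },
  { $unwind: "$orders" },
  { $addFields: { orderTotal: "$orders.total" } },
  { $group: { _id: "$_id", total: { $sum: "$orderTotal" } } },
];

// Executed roughly like: db.users.aggregate(pipeline)
// ("users" is a placeholder collection name.)
```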