I have an application that shows records from a SQL Server table in a paginated grid. Users can filter or sort by any column.
The problem arises when the table gets very large, around 280 million records. When a user sorts by any column, SQL Server takes a long time to order the records and return the top 20 for the grid.
I know that adding an index speeds up the results, but users can sort by any column, and I can't create an index for every one of them.
Is there any way to improve performance? I'm also open to suggestions involving other DBMSs.
I have tried adding indexes, which works, but I'm looking for better solutions beyond indexes.
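To make the problem concrete, here is a minimal sketch of the kind of query such a grid typically issues (the table and column names are invented for illustration):

```sql
-- Hypothetical names. Without an index on SomeColumn, SQL Server must
-- sort all ~280 million rows before it can return the 20 for the page.
SELECT Id, SomeColumn, OtherColumn
FROM dbo.BigTable
ORDER BY SomeColumn
OFFSET 0 ROWS FETCH NEXT 20 ROWS ONLY;
```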
I have had this exact problem, in my case using DataTables.js and fetching rows server-side, giving the (false!!) illusion of infinite scrolling.
I had to do some imperfect, ad-hoc things to make it usable.
Monitor the users' sorting choices for a while to work out which orderings they commonly want.
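One way to do that (a sketch; the table and parameter names are invented) is to have the data-access layer log every sort request:

```sql
-- Hypothetical capture table; the app logs each sort request here.
CREATE TABLE dbo.SortUsageLog (
    Id            bigint IDENTITY(1,1) PRIMARY KEY,
    SortColumn    sysname    NOT NULL,
    SortDirection varchar(4) NOT NULL,  -- 'ASC' or 'DESC'
    RequestedAt   datetime2  NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Executed by the app whenever the grid re-sorts
-- (@SortColumn / @SortDirection are bound by the data-access layer):
INSERT INTO dbo.SortUsageLog (SortColumn, SortDirection)
VALUES (@SortColumn, @SortDirection);
```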
Capture the queries for those common orderings, then run them in SSMS, ON THE REAL DATA (not a dev server), with Show Actual Execution Plan turned on. Write down any indexes suggested by the plan.
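The same missing-index suggestions that annotate actual plans also accumulate in SQL Server's DMVs, so you can cross-check them there. This query is a common DMV pattern, not something specific to your app:

```sql
-- Summarise SQL Server's accumulated missing-index suggestions,
-- weighted by how often and how much they would have helped.
-- Note: these DMVs reset when the instance restarts.
SELECT TOP (20)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details     AS mid
JOIN sys.dm_db_missing_index_groups      AS mig
     ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
     ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```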
Do that for all the popular orderings, then study the suggested indexes to see whether they share any leftmost columns.
Create a consolidated index and see whether the plans improve.
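For example (with entirely made-up column names): if the plans suggest both (CustomerId, OrderDate) INCLUDE (Total) and (CustomerId) INCLUDE (Status), one consolidated index whose leftmost columns cover both can often serve both orderings:

```sql
-- One index serving two overlapping suggestions.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (Total, Status)
WITH (ONLINE = ON);  -- ONLINE requires Enterprise edition or Azure SQL;
                     -- worth it to avoid blocking a 280M-row table.
```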
Repeat until things improve enough that the user complaints decrease to a dull roar.
Add some UI elements to let users know when things are slow.
If you leave the query-capture logic in place in your app, you can respond with another index if a user needs it badly.
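With the hypothetical SortUsageLog table from earlier, spotting the next index candidate is a simple aggregate:

```sql
-- Which orderings have users actually asked for in the last 30 days?
SELECT SortColumn, SortDirection, COUNT(*) AS Requests
FROM dbo.SortUsageLog
WHERE RequestedAt >= DATEADD(DAY, -30, SYSUTCDATETIME())
GROUP BY SortColumn, SortDirection
ORDER BY Requests DESC;
```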
Satisfying users of this kind of data-retrieval feature is a bit of an ad-hoc mess to solve. Good luck.