We're using an Access database as the back-end to our software product. The program has been alpha/beta tested at a company for about 2 years now, and we've noted that one of our tables has accumulated over a hundred thousand records in that time. This is probably not an example of the heaviest usage our product will endure, and we're concerned about performance 5-10 years down the road.
There's a logical way for us to break down this enormous table into several thousand tables containing a few hundred records each, but I don't think this solution is likely to help with any possible slowdown, as the database would become bloated with tables instead of data (though I have no formal training in databases, so what do I know).
I was hoping someone more informed than I am might provide some insight into whether we're likely to see a significant slowdown and, if so, which solution is likely to keep performance better over the long term?
The program has been alpha/beta tested at a company for about 2 years now
For roughly the last 10 years, Microsoft has advised people NOT to use Access as a database back-end but to go with SQL Server in one of its various editions.
and we're concerned about performance 5-10 years down the road
Given the developments of the last, hm, 10 years, I would not be. I would seriously be concerned about whether Access will still be able to store your data at all 10 years down the road, or whether the call will be "program for SQL Server" at some point in between.
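To make that migration path concrete: if the application talks to the back-end through ODBC, the switch is mostly a connection-string change. A minimal sketch in Python with pyodbc (my choice of language, not the poster's; the connection strings, file path, and table name are hypothetical placeholders):

```python
import pyodbc

# Hypothetical connection strings: only the driver/target differs, and the
# data-access code below is identical for both back-ends.
ACCESS_CONN = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\product.accdb;"
)
SQLSERVER_CONN = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbhost;DATABASE=product;Trusted_Connection=yes;"
)

def count_rows(conn_str: str, table: str) -> int:
    """Run the same query against whichever back-end the string points at."""
    conn = pyodbc.connect(conn_str)
    try:
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    finally:
        conn.close()

# Flip this one name when the back-end moves to SQL Server.
print(count_rows(ACCESS_CONN, "Readings"))
```

The point is that the query code is identical for both back-ends, so "program for SQL Server" does not have to mean a rewrite if the data layer is kept behind ODBC from the start.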
There's a logical way for us to break down this enormous table into several thousand tables containing a few hundred records each, but I don't think this solution is likely to help with any possible slowdown, as the database would become bloated with tables instead of data
Access is well able to handle a million or even 5 million records. SQL Server goes well into the BILLIONS of records. The moment you get into problems with Access, you have basically earned them through the, and I really find no nicer way to put it, tremendous ignorance of even trying to use Access for a serious database; as I already said, MS has been discouraging this for the last 10 years.
Splitting one table into THOUSANDS of tables is unwise; SQL databases are not designed for this. Even partitioned tables in SQL Server Enterprise (which do exactly this inside the engine) are not really targeted at tens of thousands of partitions.
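For contrast, here is roughly what that looks like done properly on the SQL Server side: range partitioning keeps ONE logical table while the engine splits the storage by year. A hedged sketch run through pyodbc; the server, database, and object names are made up for illustration, and real partition boundaries would come from your data:

```python
import pyodbc

# Minimal sketch of SQL Server range partitioning: one logical table whose
# storage is split by year inside the engine. All names are hypothetical.
DDL = """
CREATE PARTITION FUNCTION pf_by_year (datetime)
    AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

CREATE PARTITION SCHEME ps_by_year
    AS PARTITION pf_by_year ALL TO ([PRIMARY]);

CREATE TABLE dbo.Readings (
    ReadingID   bigint   NOT NULL,
    ReadingDate datetime NOT NULL,
    Value       float    NOT NULL,
    -- the partitioning column must be part of the primary key
    CONSTRAINT PK_Readings PRIMARY KEY (ReadingID, ReadingDate)
) ON ps_by_year (ReadingDate);
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbhost;DATABASE=product;Trusted_Connection=yes;",
    autocommit=True,
)
conn.execute(DDL)
conn.close()
```

Queries still target dbo.Readings as a single table; the split is the engine's problem, not the application's, which is exactly what the thousands-of-tables plan gets wrong.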
With Access you are a LOT more likely to simply hit a wall; Access is just not a database server. Back to the drawing board.
That said, around 18 years ago Access picked up some technology acquired with FoxPro (Rushmore query optimization) that allows it to easily handle tables with millions of records (not tens but hundreds of millions), so you are very safe at this moment (except for the nightmare of trying to do a db repair, backup, etc. on something that size, or the nightmare of running a multi-user application through a network share).
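To put the "safe at this moment" claim in practical terms: what keeps an Access table fast at that size is an index on the column you filter by, so lookups seek instead of scanning. A minimal sketch, again in Python with pyodbc; the file path, table, and column names are hypothetical:

```python
import pyodbc

# Hypothetical Access back-end: with an index on the lookup column, a query
# touches only a handful of index pages, so total row count barely matters.
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\product.accdb;"
)
cur = conn.cursor()

# One-time setup: index the column the application filters on.
cur.execute("CREATE INDEX idx_readings_device ON Readings (DeviceID)")
conn.commit()

# This stays fast at 100k rows or 5 million rows, because the engine can
# seek on the index instead of scanning the whole table.
cur.execute("SELECT * FROM Readings WHERE DeviceID = ?", 42)
rows = cur.fetchall()
conn.close()
```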
SQL Server, on the other hand: I currently have a table at around 650 million records, growing to about 10 or 20 billion in the next 6 months when data loads start, and no problems so far.