I'm currently investigating a great framework for working with distributed database systems - ShardingSphere.
I just checked the documentation, and it looks (and actually is) like you can easily specify a single data source with multiple sharded tables on top of it. So sharding on a single database instance is allowed (not to be confused with partitioning, which is a built-in database feature).
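To make it concrete, this is roughly the kind of setup I mean, sketched in ShardingSphere-JDBC's YAML configuration as I understand it from the 5.x docs (the table name, connection URL, and the modulo-4 split are just placeholder assumptions on my part, not anything specific to my case):

```yaml
# Hypothetical sketch: one physical data source (ds_0), with t_order sharded
# across four actual tables t_order_0 .. t_order_3 in that same database.
dataSources:
  ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://localhost:3306/demo_db
    username: root
    password: root

rules:
- !SHARDING
  tables:
    t_order:
      # all shards live in the single data source
      actualDataNodes: ds_0.t_order_${0..3}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
  shardingAlgorithms:
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 4}
```

With a layout like this, as far as I can tell, a query filtering on a single order_id routes to exactly one t_order_N table, while a range query on the sharding column has to be broadcast across all the actual tables (or needs extra configuration), which is the trade-off I'm wondering about below.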
I'm just curious: in terms of performance, does it really make sense? My guess is that it may help with index lookup times and table bloat. On the other hand, range queries might not be usable at all, or might suffer performance problems because they have to scan multiple tables.
I honestly couldn't find any info or benchmarks on this topic, so I would appreciate any numbers or feedback from past experience.
I've seen sharding used on a single database instance when there is an expectation that eventually the database will outgrow a single instance, and each shard will be relocated to another server. By sharding early, you know your application code is prepared for that from day one. It's difficult to refactor a legacy application that was never designed for sharded data, because the original developers are probably long gone, and you have little or no budget to do refactoring work.
At my last job, we had 3 or 4 apps with over 100 MySQL shards each. There were at least 25 other apps with between 2 and 50 shards each. Dozens of other apps were struggling with 1 shard, and really needed to split. Then there was a long tail of hundreds of other smaller microservices for which 1 shard each was plenty.