We have an ASP.NET Core 3.1 REST API that has been running successfully for the last 5 years. It uses EF Core 5.1 against a SQLite database.
We're now in the process of migrating from SQLite to AWS Aurora Postgres.
With that in mind, we've added the Npgsql.EntityFrameworkCore.PostgreSQL NuGet package and amended the connection string to something similar to the following:
"Host=[Our AWS host]; Port=5432; User ID=postgres; Password=XXXXXXXX; Database=api_test_db"
We have a suite of integration tests that we run against the API. When connecting to the SQLite database they all run successfully. However, when run against Postgres they start to fail after 20 or so tests have run with the following error:
"The connection pool has been exhausted, either raise MaxPoolSize (currently 100) or Timeout (currently 15 seconds)"
I've tried changing the connection string by adding "Pooling=false", which resulted in the same error. Then I removed "Pooling=false" and added "Maximum Pool Size=200" (and higher values). Again, the same error.
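For reference, the same pool settings can be expressed programmatically via Npgsql's connection string builder. This is a minimal sketch with placeholder host/password values, not our real credentials:

```csharp
using Npgsql;

// Build the connection string programmatically; all values below are
// placeholders for illustration only.
var builder = new NpgsqlConnectionStringBuilder
{
    Host = "our-aws-host",
    Port = 5432,
    Username = "postgres",
    Password = "XXXXXXXX",
    Database = "api_test_db",
    MaxPoolSize = 200, // equivalent to "Maximum Pool Size=200"
    Timeout = 15       // seconds to wait for a pooled connection before throwing
};

Console.WriteLine(builder.ConnectionString);
```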
Because those changes made no difference, I suspected that EF wasn't actually using the connection string I thought it was, so I deliberately changed the Database element of the connection string to a database name that doesn't exist, and it failed immediately. That proved the correct connection string is being used.
Other things to note regarding our use of EF core:
Rather than injecting a concrete DbContext class we inject an IContext interface into our services.
We register the interface with the service collection like so:
services.AddScoped<IContext>(serviceProvider =>
{
    var connectionString = "...";
    var context = new Context(connectionString);
    return context;
});
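(For comparison, here is a minimal sketch of the more conventional registration via AddDbContext. It assumes Context gains a constructor taking DbContextOptions<Context>, which is not what our code currently does; it's just the pattern the EF Core docs describe:)

```csharp
// Hypothetical alternative: let DI build the context from configured options
// instead of constructing it ourselves inside a factory lambda.
services.AddDbContext<Context>(options =>
    options.UseNpgsql(connectionString)); // connectionString resolved elsewhere

// Forward the interface registration to the same scoped Context instance,
// so services depending on IContext share the request's DbContext.
services.AddScoped<IContext>(sp => sp.GetRequiredService<Context>());
```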
The Context class looks like this:
public class Context : DbContext, IContext
{
    ...
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (_connectionString.ToLower().Contains("sqlite"))
        {
            optionsBuilder.UseSqlite(_connectionString,
                options => options.UseQuerySplittingBehavior(QuerySplittingBehavior.SingleQuery));
        }
        else
        {
            optionsBuilder.UseNpgsql(_connectionString,
                options => options.UseQuerySplittingBehavior(QuerySplittingBehavior.SingleQuery));
        }
    }
}
As I said, this codebase has worked successfully for many years against SQLite. But of course there's no concept of pooling with SQLite like there is with Npgsql.
I've read Postgres Npgsql Connection Pooling and other related SO posts but can't figure out the problem.
Any idea what we're doing wrong?
I took @Shay Rojansky's advice and built a minimal program, and as he suggested, I found the problem.
It turned out that we had some middleware in our API that checks for the existence of either the SQLite database file or the Postgres database, depending on which DB the API should be working against for that request. In the Postgres case we were opening a connection to the database and never closing it. That meant the connection was never returned to the pool, so after 100 connections the pool was exhausted and our API couldn't handle the request.
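The fix was simply to dispose the connection so it goes back to the pool. A minimal sketch of the corrected check (the method name and shape are illustrative, not our actual middleware code):

```csharp
using Npgsql;

// Hypothetical helper used by the middleware: returns true if the Postgres
// database is reachable. The 'await using' declaration guarantees the
// connection is closed and returned to the pool, even if OpenAsync throws.
public static async Task<bool> DatabaseExistsAsync(string connectionString)
{
    await using var connection = new NpgsqlConnection(connectionString);
    try
    {
        await connection.OpenAsync();
        return true;
    }
    catch (NpgsqlException)
    {
        return false;
    }
}
```

Note that "closing" a pooled NpgsqlConnection doesn't tear down the physical connection; it just hands it back to the pool for reuse, which is exactly what our original code was failing to do.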
Thanks to everybody for their suggestions. Particular thanks to @Shay Rojansky for all his great work on EF and Npgsql.