Tags: sqlalchemy, flask-sqlalchemy, google-cloud-sql

Close all connections to reduce uptime on Cloud SQL instance


Context

I am running a PostgreSQL database on a small Cloud SQL instance, n1-standard-1 (1 vCPU, 3.75 GB RAM, 10 GB SSD storage). I access it through a Flask app running on App Engine, using SQLAlchemy. I only have 20 users on my platform, with very sparse usage of the database (~100 queries per day).

Problem

SQLAlchemy creates a connection pool for more performant queries. This means that the connections my app makes to my database are persistent, even when the Flask app is idle. As long as there is a connection, the Cloud SQL instance stays up, so I am paying for useless uptime. The Cloud SQL instance currently costs me $60/month, and I would like to reduce this.

Attempted solutions

I started looking at ways to close all connections through SQLAlchemy. The only solution I have found is to disable connection pooling in my Flask app, which would prevent connections from persisting. My fear with this approach is that it is off the beaten path: I am using Flask-SQLAlchemy, which does not expose the underlying SQLAlchemy poolclass option (link). So I am wary of implementing it in production. A rough sketch of what I mean is below.
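
In plain SQLAlchemy the change would look roughly like this (the connection string is a placeholder for my actual Cloud SQL socket path):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

# Placeholder URI - in my setup this would point at the Cloud SQL unix socket.
DB_URI = "postgresql+psycopg2://user:password@/dbname"

# NullPool opens a new connection for each checkout and closes it as soon as
# it is returned, so nothing stays connected while the app is idle.
engine = create_engine(DB_URI, poolclass=NullPool)

with engine.connect() as conn:
    result = conn.execute(text("SELECT 1"))
    print(result.scalar())
```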

Thank you for your help.


Solution

  • Cloud SQL has examples posted on how to manage connections, including with SQLAlchemy, here. As mentioned in another comment, you could also use the NullPool class, which doesn't pool connections at all (see the sketch after this list).

    However, it's worth pointing out that Gen 2 Cloud SQL instances don't have a 'serverless' feature - they keep running even if there are no connections to them. If you want a truly 'serverless', pay-per-use product, you could use something like Cloud Firestore. If you would prefer to stick with a relational database, you could downgrade your instance to the db-f1-micro or db-g1-small tier, both of which are considerably cheaper (see pricing).
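
If you want to stay on Flask-SQLAlchemy rather than dropping to plain SQLAlchemy, recent versions (2.4+) let you pass engine arguments through the SQLALCHEMY_ENGINE_OPTIONS config key, so NullPool can be set without touching the library internals. A minimal sketch, assuming that version and a placeholder connection string:

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.pool import NullPool

app = Flask(__name__)

# Placeholder URI - point this at your Cloud SQL instance / unix socket.
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql+psycopg2://user:password@/dbname"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False

# Requires Flask-SQLAlchemy 2.4+: these kwargs are passed straight through to
# sqlalchemy.create_engine(), so connections are closed when released instead
# of being held open in a pool.
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"poolclass": NullPool}

db = SQLAlchemy(app)
```

The trade-off is that every request pays the cost of opening a new database connection, which is usually acceptable at ~100 queries per day.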