Tags: mysql, database, coldfusion, database-performance, lucee

How bad is it to have "extra" database queries?


I come from the front-end world of web development, where we try really hard to limit the number of HTTP requests issued (by consolidating CSS and JS files, images, etc.).

With database connections (MySQL), you obviously don't want unnecessary connections, but as a general rule, how bad is it to make multiple small queries? (They execute quickly.)

I ask because I'm moving my application to a clustered environment. Where before I was caching some data in server memory (since I was running on a single server), I'm now trying to make my app "stateless", and in my current implementation that means more small db calls. This will help me with load balancing (avoiding sticky sessions) and also keep server memory usage down.

We're not talking about a ton of queries, maybe 6-8 db calls instead of 2-4, returning anywhere from a handful of records to a few thousand. Each of them executes quickly, in under 30ms (some much less), but I don't know whether there is some "connection latency" I should be concerned about.

Thanks for your insight.


Solution

  • Short answer: (1) make sure you stay at the same big-O complexity, reuse connections, and measure performance; (2) think about how much you care about data consistency.

    Long answer:

    Performance

    Strictly from a performance perspective, and generally speaking, unless you are already close to maxing out your database resources (such as the maximum number of connections), this is not likely to have a major impact. But there are certain things you should keep in mind:

    • do the "6-8" queries that replace the "2-4" queries stay at the same complexity? E.g. if the current database interaction is O(1), will it become O(n)? Or will the current O(n) become O(n^2)? If so, think about what that means for your application
    • most application servers can reuse existing database connections or maintain persistent connection pools; make sure your application does not establish a new connection for every query, otherwise the extra calls will be even less efficient
    • in many common cases, mainly on larger tables with complex indexes and joins, a few queries by primary key may be more efficient than joining those tables in a single query; this is the case when such a join not only takes the server longer to execute, but also blocks other queries against the affected tables (a sketch follows this list)
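
    For illustration, here is a minimal sketch in CFML (the question is tagged coldfusion/lucee), assuming a hypothetical pooled datasource named "myDSN" and hypothetical users / user_profiles tables. Named datasources are connection-pooled by the engine, so each queryExecute() call reuses an existing connection instead of opening a new one, and each lookup is a cheap primary-key query:

        <cfscript>
        // Two small primary-key lookups against a pooled datasource
        // ("myDSN", users and user_profiles are placeholder names).
        user = queryExecute(
            "SELECT id, name FROM users WHERE id = :id",
            { id: { value: url.id, cfsqltype: "cf_sql_integer" } },
            { datasource: "myDSN" }
        );

        profile = queryExecute(
            "SELECT user_id, bio FROM user_profiles WHERE user_id = :id",
            { id: { value: url.id, cfsqltype: "cf_sql_integer" } },
            { datasource: "myDSN" }
        );
        </cfscript>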

    When it comes to performance, the general rule of thumb is: always measure.
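
    As a rough illustration of "always measure", here is a minimal timing sketch (same hypothetical "myDSN" datasource); in practice you would also watch the database's own metrics and the MySQL slow query log:

        <cfscript>
        start = getTickCount();

        ids = queryExecute(
            "SELECT id FROM users ORDER BY id LIMIT 100",
            {},
            { datasource: "myDSN" }
        );

        // Elapsed wall-clock time for the query, in milliseconds.
        writeOutput("query took " & (getTickCount() - start) & " ms");
        </cfscript>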

    Consistency

    Performance is not the only aspect to consider, however. Also think about how much you care about data consistency in your application.

    For example, consider a simple case: tables A and B have a one-to-one relationship and you are querying for a single record by primary key. If you join those tables and retrieve the result with a single query, you will either get a row from both A and B, or no rows at all, which is what your application expects. Now suppose you split that into 2 queries (and are not using transactions with suitable isolation levels): you get a row from table A, but before you can fetch the matching row from table B, it is deleted or updated by another process. Now your application has a row from A but none from B.
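
    If you do split the lookup into two queries but still want a consistent view, one option (sketched below with the same hypothetical tables and datasource) is to wrap both reads in a single transaction; with MySQL/InnoDB at REPEATABLE READ, both SELECTs then read from the same snapshot:

        <cfscript>
        // Both reads run inside one transaction, so InnoDB serves them
        // from a single consistent snapshot.
        transaction isolation="repeatable_read" {
            a = queryExecute(
                "SELECT id, name FROM users WHERE id = :id",
                { id: { value: 42, cfsqltype: "cf_sql_integer" } },
                { datasource: "myDSN" }
            );
            b = queryExecute(
                "SELECT user_id, bio FROM user_profiles WHERE user_id = :id",
                { id: { value: 42, cfsqltype: "cf_sql_integer" } },
                { datasource: "myDSN" }
            );
        }
        </cfscript>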

    The general question here is: do you care about the ACID properties of your relational data as they pertain to the queries you are breaking apart? If the answer is yes, you must think about how your application logic will react in these specific cases.
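
    If you decide a transaction isn't warranted, the application should at least handle the mismatch explicitly. Continuing the hypothetical example above:

        <cfscript>
        if (a.recordCount > 0 && b.recordCount == 0) {
            // The parent row existed but the matching child row vanished
            // between the two queries; retry, treat the record as deleted,
            // or fall back to defaults - whatever your app's logic requires.
            writeLog(text="user #a.id# has no matching profile row", type="warning");
        }
        </cfscript>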