Tags: sql-server, database, triggers, query-performance, sql-job

How many SQL jobs can a SQL Server instance handle?


I am building a medical database system and have reached the point of implementing a notification feature using SQL Server Agent jobs. The job's responsibility is to check some tables; for every entity that needs to be notified of a change in certain data, its id is inserted into a table called Notification, and a trigger then signals the application to read that table and send the notification.

What I want to ask is: how many SQL jobs can a SQL Server instance handle?

Does the number of SQL jobs running in the background affect the performance of my application or of the database in one way or another?

NOTE: the SQL job will run every 10 seconds.

I couldn't find any useful information online.

Thanks in advance.


Solution

  • This question really doesn't have enough background to get a definitive answer. What are the considerations?

    1. Do the queries in your ten-second job actually complete within ten seconds, even when your DBMS is under its peak transactional workload? Obviously, if the job routinely fails to finish in ten seconds, you'll get jobs piling up.
    2. Do the queries in your job lock tables and/or indexes so the transactional load can't run efficiently? (You should use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; as much as you can so database reads won't take locks unnecessarily.)
    3. Do the queries in your job insert and update many rows, and so swamp the SQL Server transaction log?
    4. How big is your server? (CPU cores? RAM? IO capacity?) How big is your database?
    5. If your project succeeds and you get many users, will your answers to the above questions remain the same? (Hint: no.)
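    As a sketch of point 2, the polling query inside the job step could read at the READ UNCOMMITTED level. The table and column names below are hypothetical, standing in for your Notification table:

    ```sql
    -- READ UNCOMMITTED (equivalent to the NOLOCK table hint) avoids taking
    -- shared locks during the read, at the cost of possible dirty reads --
    -- usually acceptable for a best-effort notification poll.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    -- Hypothetical polling query: fetch ids that haven't been notified yet.
    SELECT notification_id, entity_id
    FROM Notification
    WHERE sent = 0;
    ```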

    You should spend some time on the execution plans for the queries in your job, and try to make them as efficient as possible. Add the necessary indexes. If necessary, refactor the queries to make them more efficient. SSMS will show you the execution plans and suggest appropriate indexes.
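    For example, if your job polls a Notification table on a `sent` flag (a hypothetical schema), a filtered index such as this keeps the every-ten-seconds scan cheap even as the table grows:

    ```sql
    -- Hypothetical filtered index: only unsent rows are indexed, so the
    -- polling query touches a small structure regardless of total table size.
    CREATE NONCLUSTERED INDEX IX_Notification_Unsent
    ON Notification (notification_id)
    INCLUDE (entity_id)
    WHERE sent = 0;
    ```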

    If your job is doing things like deleting expired rows, you may want to build the expiration in your data model. For example, suppose your job does

     DELETE FROM readings WHERE expiration_date <= GETDATE()
    

    and your application simply does this, relying on your job to keep expired readings out of the table:

     SELECT something FROM readings
    

    You can refactor your application query to say:

     SELECT something FROM readings WHERE expiration_date > GETDATE()
    

    and then run your job overnight, at a quiet time, rather than every ten seconds.
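    Moving the cleanup to a quiet overnight slot might look like this with SQL Server Agent; the job name and start time here are assumptions:

    ```sql
    -- Assumes a job named 'PurgeExpiredReadings' already exists in msdb.
    -- Schedules it to run daily at 03:00, a presumed quiet time.
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name = N'PurgeExpiredReadings',
        @name = N'NightlyPurge',
        @freq_type = 4,               -- daily
        @freq_interval = 1,           -- every 1 day
        @active_start_time = 030000;  -- 03:00:00
    ```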

    A ten-second job is not the greatest idea in the world. If you can rework your application so it functions correctly with a ten-second, ten-minute, or twelve-hour interval, you'll have a more resilient production system. At any rate, if something goes wrong with the job while your system is very busy, you'll have more than ten seconds to fix it.