Tags: python, sql-server, linux, raspberry-pi3, pyodbc

pyodbc cursor.execute freezes when many connections are made from different devices


About 10 Raspberry Pis run the same script (shown below) and each tries to send data to the database every minute, at almost the same time.

There is almost no issue when only one device is communicating with the database, but problems start as soon as I add more Raspberry Pis.


Code:

import pyodbc 

def update_database():
        try:            
            mydb = pyodbc.connect('DRIVER=FreeTDS;SERVER=192.xxx.xxx.xxx;PORT=xxxx;DATABASE=xxxxxxx;UID=xxxxxxx;PWD=xxxxxxx;TDS_Version=4.2;')
            mycursor = mydb.cursor()

            # Flag the connection as usable only when the target table exists
            if mycursor.tables(table='DBDBDBDB', tableType='TABLE').fetchone():
                print("DB exists")
                DB_Connection_status=1
                systemlog("DB Connected")
            else:
                print("DB absent")
                DB_Connection_status=0

            sql = "INSERT INTO DBDBDBDB (docdt, timestamp, cocd, param1, param2, param3, param4, param5, param6, param7, param8) \
                    VALUES (?,?,?,?,?,?,?,?,?,?,?)"
        except pyodbc.Error:
            DB_Connection_status=0
            print("Connection with database: failed")
            systemlog("DB Not Connected")



        for index_y in range(0, 6):
            #few lines of code here

            for index_x in range(0, 60):
                #few lines of code here

                if(DB_Connection_status==1):

                    val = (formatted_date, formatted_time, COCD, str(var1),
                           str(var2), str(var3),
                           'A', '1', str(var4[index_y][index_x]),
                           str(var5[index_y][toggle_sw_num-1]), 0)
                    try:
                        mycursor.execute(sql, val)
                    except pyodbc.Error:
                        systemlog("DB record skipped")


        if(DB_Connection_status==1):
            try:
                mydb.commit()
                print(mycursor.rowcount, "record inserted.")
                systemlog("Record inserted")
            except pyodbc.Error:
                systemlog("DB commit skipped")

while 1:
    #few lines of code here
    if(system_minute!=oldsystem_minute):
        oldsystem_minute=system_minute
        #few lines of code here
        update_database()


Initially, it threw the error shown below, before I added the error handling around cursor execute and commit:

Traceback (most recent call last):
  File "mod.py", line 461, in <module>
  File "mod.py", line 213, in update_database
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL Server]Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. (1205) (SQLExecDirectW)')

But this error handling only prevents the code from crashing; it does not fix the underlying problem.

Is there any issue in the code sequence? For example, should I call commit/close after every cursor execute? Any help on this?

OS: Raspbian running on Raspberry Pi, Python: 2.7, DB: MS SQL Server

Thanks


Solution

  • should I call commit/close every time on cursor execute?

    Yes, commit(). Locks on inserted rows and indexes are held for the duration of the transaction, and commit() releases all the locks held by that session.
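A minimal runnable sketch of the commit-per-row pattern, using Python's built-in sqlite3 module as a stand-in for pyodbc/SQL Server so it is self-contained (the table name `readings` and the helper `insert_rows` are made up for illustration); with pyodbc the calls have the same names, cursor.execute(...) followed by connection.commit() inside the loop:

```python
import sqlite3

def insert_rows(conn, rows):
    """Insert rows one at a time, committing after each one so that
    the locks taken by each INSERT are released immediately instead
    of accumulating over all 360 inserts."""
    cur = conn.cursor()
    inserted = 0
    for val in rows:
        try:
            cur.execute("INSERT INTO readings (ts, value) VALUES (?, ?)", val)
            conn.commit()    # commit per row: locks are released here
            inserted += 1
        except sqlite3.Error:
            conn.rollback()  # drop the failed row's work and its locks
    return inserted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts TEXT, value TEXT)")
print(insert_rows(conn, [("2020-01-01", "1"), ("2020-01-01", "2")]))  # 2
```

The trade-off is one round trip and one transaction per row; whether that is acceptable depends on your insert rate, which is why the batching alternative below exists.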

    If committing after each row is a performance problem, you can either turn on delayed durability or, instead of a nested loop of single-row inserts, build a JSON document containing all 360 rows, send it to SQL Server, and have SQL Server parse and INSERT it in a single statement.
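A hedged sketch of that single-statement approach: the client packs all rows into one JSON string, and the server shreds it with OPENJSON (available from SQL Server 2016 onward, which is an assumption about your server version). The table name mirrors the question, but only four of its eleven columns are shown, and the column types in the WITH clause are illustrative guesses:

```python
import json

def build_batch(rows):
    # rows: list of dicts, one per (index_y, index_x) reading
    return json.dumps(rows)

# One INSERT ... SELECT over the whole batch: a single statement runs in
# a single short transaction, so far fewer locks are held at any moment.
SQL = """
INSERT INTO DBDBDBDB (docdt, [timestamp], cocd, param1)
SELECT docdt, [timestamp], cocd, param1
FROM OPENJSON(?)
WITH (docdt       VARCHAR(10) '$.docdt',
      [timestamp] VARCHAR(8)  '$.timestamp',
      cocd        VARCHAR(10) '$.cocd',
      param1      VARCHAR(20) '$.param1');
"""

rows = [{"docdt": "2020-01-01", "timestamp": "12:00:00",
         "cocd": "CO", "param1": str(v)} for v in range(360)]
payload = build_batch(rows)
# With pyodbc this would be: mycursor.execute(SQL, payload); mydb.commit()
print(len(json.loads(payload)))  # 360
```

With this shape, each Raspberry Pi holds its locks for one brief statement per minute rather than 360 of them, which removes most of the opportunity for the devices to deadlock against each other.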