Tags: c#, asp.net, .net, sql-server, linq-to-sql

C# LINQ to SQL: performance issue inserting a single big row


I have a program that uses the old LINQ to SQL to connect an ASP.NET application to a SQL Server DB. The ASP.NET application and the SQL Server instance are on the same machine, and both environments are up to date (IIS 10, .NET Framework 4.8 and SQL Server 2019).

In the software I have to handle a virtual cart with the customer order. The cart has many fields; one of them is an nvarchar that contains the "cart document", a string that is typically a few KB but sometimes may reach a few MB (never more than 10 MB).

When I update a document string in the range of 2-3 MB and then update the single row that contains it, the update operation is really, really slow (2-2.5 s). Here is the update code:

    protected void Upsert(CartDto cart, bool isValidationUpsert = false )
    {
        lock (_sync)
        {
            if ((cart?.Id ?? 0) <= 0)
                throw new ExtendedArgumentException("cartId");

            using (var dbContext = ServiceLocator.ConnectionProvider.Instace<CartDataContext>())
            {
                var repository = new CartRepository(dbContext);
                
                var existingCart = repository.Read(crt => crt.ID == cart.Id).FirstOrDefault();
                if (existingCart == null)
                {
                    existingCart = new tbl_set_Cart();
                    existingCart.Feed(cart);

                    repository.Create(existingCart);
                }
                else
                {
                    existingCart.Feed(cart);
                    repository.Update(existingCart);
                }

                dbContext.SubmitChanges(); //<<--- This specific operation takes 2-2.5s; the previous instructions take a negligible amount of time
            }
        }
    }

I have no idea why this happens, nor how to improve performance in this scenario.

EDITED: as suggested, I profiled the operation on the DB and experienced the same delay (~2.5 s) even when I run the SQL code directly on SQL Server (using SSMS to connect and execute it).
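For reference, one possible way to capture the exact statement LINQ to SQL sends is the `DataContext.Log` property. This is only a sketch (not the code actually used here), reusing the same `ServiceLocator` call from the code above and writing the log to a `StringWriter`:

    using (var dbContext = ServiceLocator.ConnectionProvider.Instace<CartDataContext>())
    {
        // Route the LINQ to SQL log to a StringWriter so the generated
        // UPDATE statement and its parameters can be inspected afterwards.
        var sqlLog = new System.IO.StringWriter();
        dbContext.Log = sqlLog;

        // ... same upsert logic as above ...
        dbContext.SubmitChanges();

        System.Diagnostics.Debug.WriteLine(sqlLog.ToString());
    }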

Here are the SQL code and the performance statistics:

DECLARE @p0 AS INT = [cart_id];
DECLARE @p1 AS INT = [entry_count];
DECLARE @p2 AS NVARCHAR(MAX) = '..document..';

UPDATE [dbo].[tbl_set_Cart]
SET [ITEMS_COUNT] = @p1, [ITEMS] = @p2
WHERE [ID] = @p0

(screenshot of the SSMS performance statistics omitted)

Here is my table schema; as you can see there is nothing special, it's very simple:

/****** Object:  Table [dbo].[tbl_set_Cart]    Script Date: 02/12/2021 15:44:07 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tbl_set_Cart](
    [ID] [int] NOT NULL,
    [AS400_CUSTOMER_COD] [nvarchar](50) NOT NULL,
    [AS400_LISTIN] [int] NOT NULL,
    [VALUE] [nvarchar](max) NOT NULL,
    [DELIVERY_COSTS] [nvarchar](max) NOT NULL,
    [ITEMS_COUNT] [int] NOT NULL,
    [ITEMS] [nvarchar](max) NOT NULL,
    [KIND] [int] NOT NULL,
    [CHECKOUT_INFO] [nvarchar](max) NOT NULL,
    [ISSUES] [nvarchar](max) NOT NULL,
    [LAST_CHECK] [datetime] NOT NULL,
    [USER_ID] [int] NOT NULL,
    [IMPERSONATED_USER_ID] [int] NOT NULL,
    [OVERRIDE_PRICES] [bit] NOT NULL,
    [HAS_ISSUE] [bit] NOT NULL,
    [IS_CONFIRMED] [bit] NOT NULL,
    [IS_COLLECTED] [bit] NOT NULL,
    [_METADATA] [nvarchar](max) NOT NULL,
 CONSTRAINT [PK_tbl_set_Cart] PRIMARY KEY CLUSTERED 
(
    [ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Solution

  • After investigating the DB profiling more deeply with the help of the DBA Stack Exchange users (here is the discussion: https://dba.stackexchange.com/questions/303400/sql-server-how-to-upload-big-json-into-column-performance-issue/303409#303409), it turned out to be an issue probably related to the disk.

    Because some production systems hit the same problem as my development machine, I asked how to improve performance and received the valuable tip of storing a compressed version of my data. In my scenario the data is not too big for in-memory compression/decompression at runtime, and that dramatically reduces the time (LZMA was used): from 2.5 s to ~0.3 s, a really good improvement. A minimal sketch of this approach is shown below.

    Thanks to everyone for the precious help and tips.
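A minimal sketch of the compress-before-store idea, under a few assumptions: the answer used LZMA through an external library, while the sketch below uses `GZipStream` (built into .NET Framework 4.8) purely to illustrate the same pattern, and it keeps the existing nvarchar(max) column by storing the compressed bytes as Base64 (switching the column to varbinary(max) would avoid the Base64 overhead). The `CartDocumentCompression` helper is hypothetical, not part of the original code:

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    // Hypothetical helper: compress the cart document before it is written to
    // the ITEMS column and decompress it after the row is read back.
    public static class CartDocumentCompression
    {
        public static string Compress(string document)
        {
            var raw = Encoding.UTF8.GetBytes(document);
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionLevel.Optimal, leaveOpen: true))
                {
                    gzip.Write(raw, 0, raw.Length);
                }
                // Base64 keeps the value storable in the existing nvarchar(max) column.
                return Convert.ToBase64String(output.ToArray());
            }
        }

        public static string Decompress(string stored)
        {
            var compressed = Convert.FromBase64String(stored);
            using (var input = new MemoryStream(compressed))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }

The same Compress/Decompress pair would wrap the document string just before it is assigned to the entity (and submitted via SubmitChanges) and just after it is read back from the row.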