mysql, sql, query-optimization, batch-updates

Does this UPDATE query have any possible rewrite options?


I have the query below and I don't know how to get an explain plan for it, so what I have is the temporary table creation query and the table structure.

create temporary table if not exists tmp_staging_task_ids as
         select distinct s.usr_task_id
           from ue_events_staging s
          where s.queue_id is null
          limit 6500;

The explain plan for the above SELECT query:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: s
   partitions: NULL
         type: ref
possible_keys: ue_events_staging_queue_id,usr_task_id,queue_id_usr_task_id,queue_id_app_id
          key: queue_id_usr_task_id
      key_len: 303
          ref: const
         rows: 17774428
     filtered: 100.00
        Extra: Using where; Using index; Using temporary

Query:

update ue_events_staging s
         join tmp_staging_task_ids t on t.usr_task_id = s.usr_task_id
          set s.queue_id = 'queue_id';
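
(MySQL 5.6.3 and later do accept EXPLAIN directly on an UPDATE statement, so the plan for this join update can be checked as-is, assuming the temporary table already exists in the same session; a minimal sketch:)

explain                      -- EXPLAIN works on UPDATE/DELETE from MySQL 5.6.3 onward
update ue_events_staging s
         join tmp_staging_task_ids t on t.usr_task_id = s.usr_task_id
          set s.queue_id = 'queue_id';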

Table structure:

Create Table: CREATE TABLE `ue_events_staging` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `queue_id` varchar(100) DEFAULT NULL,
  `usr_task_id` bigint NOT NULL,
  `app_id` bigint NOT NULL,
  `platform` tinyint NOT NULL,
  `capture_time` bigint NOT NULL,
  `input_type` varchar(50) NOT NULL,
  `type` varchar(100) NOT NULL,
  `event_type` varchar(10) NOT NULL,
  `screen` varchar(100) NOT NULL,
  `object_name` varchar(255) DEFAULT NULL,
  `app_custom_tag` varchar(255) DEFAULT NULL,
  `exception_class_name` varchar(250) DEFAULT NULL,
  `exception_tag` varchar(250) DEFAULT NULL,
  `non_responsive` tinyint(1) DEFAULT '0',
  `is_first` tinyint(1) DEFAULT '0',
  `is_second` tinyint(1) DEFAULT '0',
  `is_last` tinyint(1) DEFAULT '0',
  `is_quit` tinyint(1) DEFAULT '0',
  `x_coordinate` double DEFAULT NULL,
  `y_coordinate` double DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `ue_events_staging_queue_id` (`queue_id`),
  KEY `usr_task_id` (`usr_task_id`),
  KEY `screen` (`app_id`,`platform`,`screen`),
  KEY `app_id_queue_id` (`app_id`,`queue_id`),
  KEY `queue_id_usr_task_id` (`queue_id`,`usr_task_id`),
  KEY `queue_id_app_id` (`queue_id`,`app_id`)
)

Please check the possibilities; it takes around 3,500 seconds and causes load.


Solution

  • This looks like you're doing your updates in batches of 6500 rows.

    If you don't need that temporary table, you can refactor your update query to stand alone: the temporary table's WHERE queue_id IS NULL condition can go directly into the WHERE of your UPDATE.

    UPDATE ue_events_staging
       SET queue_id = 'queue_id'
     WHERE queue_id IS NULL
     LIMIT 6500;
    

    Your temporary table creation step pulls 6500 distinct (arbitrarily chosen) usr_task_id values from your table. Some of those values may relate to more than one row in your table, so your UPDATE statement may update more than 6500 rows in your table.
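
    If you want to see how much that multiplies the row count in practice, a quick check along these lines (just a diagnostic sketch, run while the temporary table still exists in the session) shows which selected usr_task_id values cover more than one staging row:

    select t.usr_task_id, count(*) as staging_rows
      from ue_events_staging s
      join tmp_staging_task_ids t on t.usr_task_id = s.usr_task_id
     group by t.usr_task_id
    having count(*) > 1;            -- task ids whose update touches more than one row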

    The refactoring I suggest will update 6500 arbitrarily chosen rows in your table. At the end of the statement it's possible that some rows with a particular usr_task_id value will be updated and others will not. If that's acceptable for your business rules, it will be faster.

    If your business rules require all rows with each particular usr_task_id value to be updated at once, you could try this to simplify both statements.

    create temporary table if not exists tmp_staging_task_ids as
             select s.usr_task_id
               from ue_events_staging s
              where s.queue_id is null
              limit 6500;
    
    update ue_events_staging 
       set queue_id = 'queue_id'
     where usr_task_id IN 
            (select usr_task_id from tmp_staging_task_ids);
    

    This gets rid of the DISTINCT operator in the creation of your temporary table and may save a little time. The IN clause implies DISTINCT values.

    "Arbitrarily chosen"? Statements without ORDER BY and with LIMIT clauses instruct MySQL to choose rows arbitrarily. MySQL picks the rows that are fastest to retrieve (hopefully).