I have a complex database, which can be simplified like this:
Table a:
CREATE TABLE a
(
  Id int(10) unsigned NOT NULL AUTO_INCREMENT,
  A int(11) DEFAULT NULL,
  CalcUniqId int(11) DEFAULT NULL,
  PRIMARY KEY (Id)
) ENGINE=InnoDB;
CREATE TRIGGER a_before_ins_tr BEFORE INSERT ON a
FOR EACH ROW
BEGIN
  SELECT MAX(CalcUniqId) FROM a INTO @MaxCalcUniqId;
  SET NEW.CalcUniqId = IFNULL(@MaxCalcUniqId, 1) + 1;
END $
It is used like this:
start transaction
insert into A(A)
... insert into other tables; this takes between 30 and 60 seconds
commit;
The problem is that the trigger returns the same CalcUniqId for all transactions that run at the same time.
Is there any solution or workaround?
Would this be a solution:
start transaction;
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
insert into A(A) values(10);
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
....
commit;
One can run this test:
Session 1:
Step1: start transaction;
Step2: insert into A(A) values(1);
Step3: commit;
Session 2:
Step1: start transaction;
Step2: insert into A(A) values(2);
Step3: commit;
Run steps 1 and 2 in session 1, then steps 1 and 2 in session 2, then step 3 in both. After that do
select Id, A, CalcUniqId from a;
Both rows end up with the same CalcUniqId = 2.
Change the SELECT in the trigger to this:
SELECT MAX(CalcUniqId) FROM a INTO @MaxCalcUniqId
FOR UPDATE;  -- add this
That tells InnoDB that you intend to change the value, which blocks the other transactions from reading or changing it until you commit.
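Put together, the corrected trigger might look like this (a sketch; the DELIMITER handling is assumed to be done in the mysql client, and MAX on an empty table returns NULL, so the first value is 2, as in the original code):

```sql
DELIMITER $$
CREATE TRIGGER a_before_ins_tr BEFORE INSERT ON a
FOR EACH ROW
BEGIN
  -- FOR UPDATE takes an exclusive lock on the rows scanned by MAX(),
  -- so concurrent inserting transactions are serialized here
  SELECT MAX(CalcUniqId) FROM a INTO @MaxCalcUniqId FOR UPDATE;
  SET NEW.CalcUniqId = IFNULL(@MaxCalcUniqId, 1) + 1;
END $$
DELIMITER ;
```

Note that the lock is held until the surrounding transaction commits, not just until the trigger finishes, which is exactly what causes the serialization described next.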
This will probably lead to your 30-60 sec transactions running one after another, and probably dying after exceeding innodb_lock_wait_timeout. Rather than increasing that setting (which is already "too high"), please explain the bigger picture. Perhaps we can concoct a workaround that gets the 'correct' value and runs in parallel.
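One classic pattern in that direction (whether it fits depends on the bigger picture; the table name a_seq is hypothetical) is to allocate the next value from a one-row counter table in a single atomic statement, using LAST_INSERT_ID(expr) so each session reads back only the value it generated:

```sql
CREATE TABLE a_seq (id INT UNSIGNED NOT NULL) ENGINE=InnoDB;
INSERT INTO a_seq VALUES (0);

-- Atomically bump the counter; the row lock lasts only for this
-- one statement, not for the whole 30-60 second transaction
UPDATE a_seq SET id = LAST_INSERT_ID(id + 1);

-- LAST_INSERT_ID() is per-connection, so this is safe under concurrency
INSERT INTO a (A, CalcUniqId) VALUES (10, LAST_INSERT_ID());
```

The trade-off: values are handed out in parallel, but a transaction that rolls back leaves a gap in CalcUniqId, which may or may not be acceptable for your use case.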