
How to add a column normally to a MySQL table with more than 100K records?


As the word "normally" in the heading suggests, what I ended up doing was not a normal way of altering a table to add a column.

We are using MySQL 5.5, with Toad for database operations. There is a table, appln_doc, with nearly 300K records storing applicants' documents, such as images, many of them quite large.

Now we have to add a new column, is_signature, of type TINYINT. We tried three different ways:

  1. Using Toad's built-in provision: double-click the table, and in the window that opens, under the Columns tab, add a new column with its name, type and size, then click the Alter button.
  2. Running an ALTER TABLE query in Toad itself.
  3. Logging into MySQL over PuTTY and executing the same query.
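The statement behind options 2 and 3 would look something like this (a sketch; the exact DDL Toad generates in option 1 may differ):

```sql
-- Add the new flag column to the documents table
ALTER TABLE appln_doc
  ADD COLUMN is_signature TINYINT;
```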

All three attempts ran into the same problem: "Lock wait timeout exceeded; try restarting transaction", leaving the table 'stuck'. So we tried to kill the waiting process and alter the table again, but the result was the same.
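When that error appears, it usually means another connection is holding a lock on the table. Before killing anything blindly, the blocker can be identified like this (the thread id in the KILL statement is just an example):

```sql
-- InnoDB transactions currently running or waiting for locks
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx;

-- All connections; a long-idle connection with an open transaction
-- can hold locks even while it shows as 'Sleep'
SHOW FULL PROCESSLIST;

-- Kill the blocking connection by its thread id (example id)
KILL 12345;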

We killed the process again, restarted the MySQL server, and tried once more to add the column; the problem persisted.

Finally, we exported all the table data to an Excel sheet and truncated the table. After that, adding the column succeeded. We then added an is_signature column to the exported sheet, with all its values set to 0 as the default, and imported the data back into the table. That's why I said I didn't add the column in a normal way.
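The workaround, minus the Excel round-trip, amounts to this (a sketch, assuming the data has already been exported somewhere safe):

```sql
-- 1. Empty the table (data must be exported beforehand!)
TRUNCATE TABLE appln_doc;

-- 2. The ALTER now succeeds instantly on the empty table;
--    DEFAULT 0 gives re-imported rows the same value we filled in by hand
ALTER TABLE appln_doc
  ADD COLUMN is_signature TINYINT NOT NULL DEFAULT 0;

-- 3. Re-import the exported rows (Toad's import wizard, LOAD DATA INFILE, etc.)
```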

So has anybody faced a situation like this and found a better solution? Can anybody tell me why this is happening? Is it because of the bulk and size of the data stored in that table?

PS: The table appln_doc has a child table, appln_doc_details, with no data. Only appln_doc has this problem when being altered.


Solution

  • At the end of the day, no matter what tools you use, it all boils down to one of two scenarios:

    1. An alter table statement
    2. Create new table/copy data/delete old table/rename new table.

    The first one is generally faster; the second is generally more flexible (there are some changes to a table that cannot be done with a plain ALTER).

    Either way, handling this much data just takes a lot of time, and there's nothing you can really do about it.
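    The second scenario, applied to this table, would look roughly like this (the column names other than is_signature are hypothetical placeholders for the table's real columns):

    ```sql
    -- Create an empty copy with the same structure, then alter the copy
    CREATE TABLE appln_doc_new LIKE appln_doc;
    ALTER TABLE appln_doc_new
      ADD COLUMN is_signature TINYINT NOT NULL DEFAULT 0;

    -- Copy the data across (id and doc_image are hypothetical column names)
    INSERT INTO appln_doc_new (id, doc_image)
    SELECT id, doc_image FROM appln_doc;

    -- Atomically swap the tables, then drop the old one
    RENAME TABLE appln_doc TO appln_doc_old, appln_doc_new TO appln_doc;
    DROP TABLE appln_doc_old;
    ```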

    On the bright side, almost all timeouts are configurable somewhere. I don't know offhand how to configure this particular one, but I'm 99% sure that you can. Find out how, and raise it high enough. For 300K records I'd expect the operation to take around 10 minutes or less, but of course it depends. Set it to 10 hours. :)
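    In this case the timeouts in question are most likely MySQL's two lock wait timeouts; assuming the session has the privileges to change them, something like this should work:

    ```sql
    -- InnoDB row-lock waits (default 50 s) produce exactly this error message
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
    SET SESSION innodb_lock_wait_timeout = 36000;  -- e.g. 10 hours

    -- Metadata-lock waits (an ALTER blocked by an open transaction) use this one
    SET SESSION lock_wait_timeout = 36000;

    ALTER TABLE appln_doc ADD COLUMN is_signature TINYINT;
    ```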