Tags: mysql, performance, myisam, load-data-infile

MySQL insert into indexed table taking a long time after a few million records


I have a table like this:

CREATE TABLE IF NOT EXISTS `dnddata` (
  `numbers` varchar(10) NOT NULL,
  `opstype` char(1) NOT NULL,
  PRIMARY KEY (`numbers`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
/*!50100 PARTITION BY KEY (numbers)
PARTITIONS 25 */;

I have to insert 300 million records. I am inserting 10 million records at a time using LOAD DATA INFILE from a CSV file.

The first batch of 10 million records takes nearly 5 minutes to insert, and each subsequent batch takes gradually longer. After about 30 million records the inserts stall, memory usage hits 100%, and the server stops responding.

Below are the relevant settings from my my.cnf file:

bulk_insert_buffer_size = 100M
key_buffer = 100M
sort_buffer_size = 50M
read_buffer = 50M

The server has 2 GB of memory.

Details after 30 million records:

    Space usage
Type    Usage
Data    545.3   MiB
Index   694.8   MiB
Total   1,240.1 MiB

MySQL client version: 5.5.14

Without the index, inserts run fine: 10 million records in about 50 seconds.

Please tell me what settings I need to change.


Solution

  • Try:

    ALTER TABLE dnddata DISABLE KEYS;
    
    LOAD DATA INFILE...
    
    ALTER TABLE dnddata ENABLE KEYS;
    

    Also configure my.cnf with settings such as:

    key_buffer_size = 1G
    sort_buffer_size = 4M
    read_buffer_size = 1M
    read_rnd_buffer_size = 1M
    join_buffer_size = 2M
    bulk_insert_buffer_size = 32M
    myisam_sort_buffer_size = 256M
    

    Restart the MySQL server after modifying the config file.
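
    Putting the two steps together, one batch load might look like the sketch below. The CSV path and the field/line terminators are hypothetical; adjust them to your file. Note that DISABLE KEYS only suspends non-unique indexes — the PRIMARY KEY is always maintained during the load.

        -- Skip non-unique index maintenance during the bulk load
        -- (the PRIMARY KEY is still maintained row by row).
        ALTER TABLE dnddata DISABLE KEYS;

        LOAD DATA INFILE '/tmp/dnddata_batch1.csv'   -- hypothetical path
        INTO TABLE dnddata
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        (numbers, opstype);

        -- Rebuild the disabled indexes in one pass.
        ALTER TABLE dnddata ENABLE KEYS;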

    The formula below might help you size these parameters properly:

    key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections = K bytes of memory (< 2GB)
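
    As a worked example, plugging the suggested settings into the formula — and assuming MySQL 5.5's default max_connections of 151, which you should replace with your actual value — can be checked with a simple SELECT:

        -- K = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections
        SELECT
            (1073741824                      -- key_buffer_size = 1G
             + (1048576 + 4194304) * 151)    -- (1M + 4M) * max_connections (assumed 151)
            / 1024 / 1024 AS k_mib;          -- roughly 1779 MiB, just under 2 GB

    On a 2 GB server that leaves little headroom for the OS and per-connection buffers, so consider a smaller key_buffer_size or a lower max_connections.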
    

    Do you really need numbers as varchar(10)? Making it char(10) or an integer type will help improve performance.

    A PRIMARY KEY on a varchar column adds overhead, so you can drop it if it is not needed.
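
    For example, if the values in numbers are always 10-digit strings, a sketch of the type change might look like this (untested; run it before loading data, as rewriting 300 million rows afterwards is expensive):

        -- Fixed-width CHAR gives MyISAM static (fixed-length) rows,
        -- which are faster to insert and scan than dynamic rows.
        ALTER TABLE dnddata MODIFY `numbers` char(10) NOT NULL;

        -- Or, if the values are purely numeric and fit in 64 bits,
        -- an integer shrinks both the data and the index:
        -- ALTER TABLE dnddata MODIFY `numbers` bigint unsigned NOT NULL;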