I have a PHP (Laravel 5.7) application connected to a cluster of databases powered by Galera. Every night a cron job clears many thousands of log rows from the DB tables by running MySQL DELETE
statements that remove rows based on a timestamp range.
My delete operations look like this:
DB::delete('DELETE FROM incoming_logs WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 DAY)');
DB::delete('DELETE FROM provisioning_logs WHERE created_at < DATE_SUB(NOW(), INTERVAL '.Config::get('app.keep_provisioning_logs') .' DAY)');
NOTE: These statements are defined in Laravel console commands, and those commands are scheduled with Laravel's task scheduler.
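One thing worth noting about the statements above (a hedged sketch, not something from the question): a single DELETE covering thousands of rows is one large transaction, and in Galera every node must certify and apply that whole writeset at once. Breaking the purge into bounded batches keeps each transaction small; the LIMIT value and the `id` ordering column below are illustrative assumptions.

```sql
-- Hypothetical batched purge: run this statement repeatedly until it
-- reports 0 rows affected. Each run is its own small transaction, so the
-- Galera writeset (and the memory needed to apply it) stays bounded.
DELETE FROM incoming_logs
WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 DAY)
ORDER BY id
LIMIT 5000;
```

The repeat-until-zero loop would live in the Laravel console command that already issues these deletes.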
Problem:
Every time the cron job deletes this large amount of data, main memory on the DB servers maxes out. When I clear the cache manually with the following command, the memory is freed:
sync; echo 1 > /proc/sys/vm/drop_caches
I need to know why the cache is not reclaimed automatically, why memory runs out after the cron job, and why I also get "Too many connections" errors from MySQL.
Or is it because my DB servers (three in total, behind a load balancer, syncing in real time via Galera) replicate the delete? When the cron job tries to delete thousands of rows, does the server that receives the DELETE request keep extra state cached in order to sync with the other DBs? I am not sure what is going on. Please help me with this memory maxing-out problem.
My Architecture:
One load balancer (HAProxy with three DB servers behind it), with a Galera cluster syncing data among those three DBs.
You are deleting half the table every night? Consider deleting an hour's worth every hour.
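One way to schedule the hourly version without touching the application could be a MySQL event (a sketch, not from the answer; it assumes the event scheduler is enabled and should run on only one node of the cluster; the event name is made up):

```sql
-- Hypothetical hourly purge event. With the same WHERE clause run every
-- hour, each run only removes roughly one hour's worth of rows instead of
-- a full day's backlog in one shot.
SET GLOBAL event_scheduler = ON;

CREATE EVENT purge_incoming_logs_hourly
ON SCHEDULE EVERY 1 HOUR
DO
  DELETE FROM incoming_logs
  WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 DAY);
```

Alternatively, the existing Laravel scheduled command could simply be changed from a daily to an hourly cadence.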
Do you have INDEX(created_at)? That would help the hourly purge, though perhaps not the daily purge.
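If that index is missing, adding it is a single statement; the index name below is illustrative:

```sql
-- Without an index on created_at, each purge must scan the whole table
-- to find the old rows; with it, the purge seeks straight to them.
ALTER TABLE incoming_logs ADD INDEX idx_created_at (created_at);
```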
This task is an excellent one for PARTITIONing.
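A rough sketch of what that could look like, assuming daily RANGE partitions and that created_at can be included in the primary key (MySQL requires the partitioning column to be part of every unique key). Partition names and dates below are illustrative:

```sql
-- Hypothetical daily partitioning of the log table.
ALTER TABLE incoming_logs
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p20190101 VALUES LESS THAN (TO_DAYS('2019-01-02')),
  PARTITION p20190102 VALUES LESS THAN (TO_DAYS('2019-01-03')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- The nightly purge then becomes a near-instant metadata operation
-- instead of a row-by-row DELETE:
ALTER TABLE incoming_logs DROP PARTITION p20190101;
```

The cron job would also need to add a new partition each day (e.g. by reorganizing pmax), but dropping a partition avoids the large delete transaction entirely.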
These, and more techniques, are discussed here.