I have a weekly backup that runs against the MySQL database for one of my websites (ccms). The backup is about 1.2GB and takes about 30 minutes to run.
While this backup is running, all my other Railo websites cannot connect and go "down" for the duration of the backup.
One of the errors I have managed to catch was:
"railo.runtime.exp.RequestTimeoutException: request (:119) is run into a
timeout (1200 seconds) and has been stopped. open locks at this time (c:/railo/webapps/root/ccms/parsed/photo.view.cfm,
c:/railo/webapps/root/ccms/parsed/profile.view.cfm, c:/railo/webapps/root/ccms/parsed/album.view.cfm,
c:/railo/webapps/root/ccms/parsed/public.dologin.cfm)."
What I believe is happening is that the tables required for those pages (the "ccms" website) are being locked due to the backup, which is fair enough.
But why is that causing the other Railo websites to time out? For example, the error pasted above was actually taken from a different website, not the "ccms" website it references. Every website I try to run fails and throws an error referencing the "ccms" website, which is the one being backed up. How do I avoid this?
Any insight would be greatly appreciated. Thanks
One possibility: since your request timeout appears to be 20 minutes, each time a request comes in to the site which IS being backed up, the thread serving it blocks waiting on the DB.
Railo has a pool of worker threads to handle requests and now one of them is tied up. As requests continue to come in, any requests to the affected site tie up another thread. Eventually there are no more workers in the pool and all subsequent requests are queued up to be processed once workers become available.
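To make the mechanism concrete, here is a minimal sketch of that failure mode in Python (the pool size, sleep duration, and function names are all hypothetical stand-ins, not anything from Railo itself): a tiny worker pool, requests to the backed-up site that block on a slow "database", and a request to an unrelated site that is stuck in the queue even though its own work is instant.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model of a servlet container's worker pool.
POOL_SIZE = 2
pool = ThreadPoolExecutor(max_workers=POOL_SIZE)

def blocked_request(n):
    # Simulates a request to the backed-up site: its DB call blocks
    # until the backup releases the table locks.
    time.sleep(2)
    return f"request {n} finally finished"

def healthy_request(n):
    # A request to a different site that touches no locked tables.
    return f"request {n} ok"

# Requests to the backed-up site arrive first and tie up every worker...
blocked = [pool.submit(blocked_request, i) for i in range(POOL_SIZE)]

# ...so this request to an UNRELATED site waits in the queue, even
# though its own work would take microseconds.
start = time.time()
healthy = pool.submit(healthy_request, 99)
print(healthy.result())  # only returns once a worker frees up
waited = time.time() - start
print(f"unrelated request waited {waited:.1f}s")
```

With a real container the queue keeps growing for the whole 30-minute backup, so every site served by the same pool eventually hits the request timeout.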
I'm not an expert on debugging Railo, but the above seems plausible to me. You could consider running separate Railo processes for different sites, which would isolate them, or drastically lowering your DB timeout (if that's acceptable).