I plan to scale my Rails application up to multiple instances, all using the same database. If two users end up on two different instances of the application and edit the same account, that will sooner or later cause race conditions. What's the best way to prevent this, given that it's a Rails application?
The only database-related settings I know of let me specify the actual server's IP. If some program could sit in the middle somehow, that would solve the problem... Otherwise the application instances will have to communicate with each other in some way, right?
Unless there is a way to solve this using settings in Postgres...
Any help would be appreciated!! Thanks!
I suggest implementing an optimistic locking strategy; Rails supports this out of the box. Writing a fully fledged tutorial is beyond the scope of a Stack Overflow answer, but the `ActiveRecord::Locking::Optimistic` docs describe it in detail.
The basics are:
- For each model you want to protect, add an integer column called `lock_version` to the model's database table.
- In every form that edits such a record, add a hidden field that carries this `lock_version`.
- If you are using strong parameters, permit the `lock_version` param in the controller.
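A minimal sketch of those three pieces, assuming a hypothetical `Account` model backed by an `accounts` table (swap in your own model and attributes):

```ruby
# db/migrate/xxx_add_lock_version_to_accounts.rb
class AddLockVersionToAccounts < ActiveRecord::Migration[7.0]
  def change
    # Rails picks up a column named lock_version automatically for optimistic locking
    add_column :accounts, :lock_version, :integer, default: 0, null: false
  end
end
```

```erb
<%# app/views/accounts/_form.html.erb: carry the lock_version through the round trip %>
<%= form_with model: @account do |f| %>
  <%= f.hidden_field :lock_version %>
  <%= f.text_field :name %>
  <%= f.submit %>
<% end %>
```

```ruby
# app/controllers/accounts_controller.rb
def account_params
  params.require(:account).permit(:name, :lock_version)
end
```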
Whenever you save a record and a `lock_version` is present, Rails will make sure that it matches the `lock_version` in the database. If they don't match, another update has occurred in the meantime and the current update fails with an `ActiveRecord::StaleObjectError`.
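To make the failure mode concrete, here is what that looks like with the assumed `Account` model from above (two app instances, or two console sessions, holding the same row):

```ruby
a = Account.find(1)  # loaded by instance A
b = Account.find(1)  # loaded by instance B

a.update(name: "First edit")   # succeeds and bumps lock_version from 0 to 1
b.update(name: "Second edit")  # raises ActiveRecord::StaleObjectError,
                               # because b still carries lock_version 0
```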
As a fallback strategy, I suggest displaying a flash message to the user whose update was rejected, and re-rendering the form they were filling in with the current values from the database (not their attempted input, since you want them to reconsider whether their changes still make sense).
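A per-controller version of that fallback might look like this, again using the hypothetical `AccountsController` and `account_params` from above; the flash wording is just an example:

```ruby
class AccountsController < ApplicationController
  def update
    @account = Account.find(params[:id])
    if @account.update(account_params)
      redirect_to @account, notice: "Account updated."
    else
      render :edit, status: :unprocessable_entity
    end
  rescue ActiveRecord::StaleObjectError
    flash.now[:alert] = "Someone else changed this account while you were editing it. Please review and try again."
    @account.reload  # show the current database values, discarding the stale input
    render :edit, status: :conflict
  end
end
```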
(You can implement the fallback strategy application-wide by defining a `rescue_from` hook in your `ApplicationController`.)
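A minimal sketch of that application-wide variant, assuming a simple redirect back to the previous page is acceptable (the message and fallback location are placeholders):

```ruby
class ApplicationController < ActionController::Base
  rescue_from ActiveRecord::StaleObjectError do
    flash[:alert] = "This record was changed by someone else while you were editing it. Please try again."
    redirect_back fallback_location: root_path
  end
end
```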
HTH!