My application is chatty with the database: the large number of I/O operations (database calls) makes flows slow to complete, especially batch jobs. Code optimization is underway, but it will take some time before it shows positive results.
As @BaileyS suggested, a good compromise is to use a durable SSD drive. If you have a lot of data, you could use the SSD only to store database indexes and keep the data on a regular drive.
I wouldn't recommend using a RAM disk, unless you back it up very often, or the data on it are unimportant.
Instead, I'd try to maximize your OS's file caching. If your files are cached in RAM, working with them is much the same as if they were on a RAM disk. On Linux, you can lower the kernel's vfs_cache_pressure setting, which makes the kernel try harder to keep cached files in RAM. You can also set vm.swappiness to 100 so that the system swaps out unused memory pages more aggressively, keeping RAM available for caching. This can speed things up considerably. See Can I configure my Linux system for more aggressive file system caching?
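As a rough sketch, the two sysctls above can be set persistently via a drop-in file under /etc/sysctl.d/. The file name and the exact values here are illustrative, not prescriptive; tune them for your workload and benchmark before and after:

```shell
# Generate a sysctl drop-in with more aggressive file-caching settings.
# Written to a temp file first so you can review it before installing.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Make the kernel try harder to keep cached dentries/inodes in RAM
# (the default is 100; lower means less eager reclaim of the cache)
vm.vfs_cache_pressure = 50
# Swap out idle anonymous pages more readily, freeing RAM for the page cache
vm.swappiness = 100
EOF

# To apply for real (requires root):
#   sudo cp "$conf" /etc/sysctl.d/60-file-cache.conf
#   sudo sysctl --system
cat "$conf"
```

You can also test the values temporarily with `sysctl -w vm.vfs_cache_pressure=50` (they revert on reboot), which is safer while you are still measuring the effect on your batch jobs.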