Tags: xml, postgresql, jdbc, deployment, ignite

The best way to deploy an Ignite application with a CacheStore, JDBC driver and XML configuration


What's the best way to deploy an Ignite application? Zero deployment sounds very easy, but I haven't found it to be. I built an application to try Ignite's write-behind mode: I extended CacheStoreAdapter so that it uses the PostgreSQL JDBC driver to insert data. It worked well from the IDE, but I haven't found a great way to deploy it to the server. My laptop connects via VPN to a server with Ignite and PostgreSQL installed, so JDBC traffic from my laptop goes through the VPN; I'd therefore like to test the performance of the application when it runs on the server.
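For context, the store is roughly like the following sketch; the class name matches the one in the error log below, but the connection URL, table, and columns are hypothetical placeholders, not my actual code:

```java
// Sketch of a JDBC-backed write-behind store. The URL, credentials,
// table, and column names are placeholders, not the real application.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;

import org.apache.ignite.cache.store.CacheStoreAdapter;

public class CustomStore extends CacheStoreAdapter<Long, String> {
    private static final String URL = "jdbc:postgresql://db-host:5432/customdb";

    /** Called asynchronously by the write-behind flusher thread. */
    @Override
    public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try (Connection conn = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement st = conn.prepareStatement(
                 "INSERT INTO person (id, name) VALUES (?, ?)")) {
            st.setLong(1, entry.getKey());
            st.setString(2, entry.getValue());
            st.executeUpdate();
        } catch (Exception e) {
            throw new CacheWriterException("Failed to write entry: " + entry, e);
        }
    }

    @Override
    public String load(Long key) {
        return null; // read-through not used in this sketch
    }

    @Override
    public void delete(Object key) {
        // delete-behind omitted in this sketch
    }
}
```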

I tried the following approaches:

  1. The application starts Ignite in client mode. This worked, but I found deployment involves several steps:
    • copy the fat jar, including the JDBC driver, into the ignite/libs folder
    • copy config.xml into the config folder
    • use ignite.sh config/config.xml to start the server node
    • use java -jar application.jar to start the client node
    The client program finishes successfully, but the server-side write-behind code (CacheStoreAdapter.write) errors out and complains that no JDBC driver was found. After I copied a separate postgresql-9.4.1212.jre6.jar into libs and restarted the Ignite server node, the write-behind succeeded.

  2. I changed the application to start Ignite in server mode; the XML file is included in the fat jar.
    • same as above: copy the fat application.jar, including the JDBC driver, into the ignite/libs folder
    • use java -jar application.jar to start the server node
    The write to the cache succeeded, but the write-behind code errors out and again complains that no JDBC driver was found. Please note that postgresql-9.4.1212.jre6.jar exists in the Ignite libs folder, and I only have this one server node up and running.

[00:24:20,244][SEVERE][flusher-0-#23%null%][GridCacheWriteBehindStore] Unable to update underlying store: com.xxxx.xxx.xxx.datastore.CustomStore@555cf22 ........... Caused by: java.sql.SQLException: No suitable driver found for jdbc:postgresql://xxx.xxxx.xxxx.com:5432/customdb
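A common cause of `No suitable driver found` is that `DriverManager` only hands out drivers that are visible to the class loader of the calling code; when the store runs under a different class loader than the one that discovered the driver, auto-registration via `META-INF/services` does not help. One workaround (a sketch, assuming the PostgreSQL driver jar is on the node's classpath) is to force the driver class to load, which makes it register itself:

```java
// Sketch: explicitly load the PostgreSQL driver class so that it
// registers itself with DriverManager before any connection attempt.
try {
    Class.forName("org.postgresql.Driver");
} catch (ClassNotFoundException e) {
    throw new IllegalStateException("PostgreSQL driver not on classpath", e);
}
```

This could be placed, for example, in a static initializer of the store class so it runs before the flusher thread opens its first connection.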

  3. I used ignite.sh config/config.xml to start a server node and then java -jar application.jar to start Ignite in server mode, giving me two server nodes. This time, everything completed successfully. Does the asynchronous write-behind have a specific way of looking for the JDBC driver?

Is there a better way to do this? I agree that in closer-to-production situations, a cluster with multiple server nodes is probably the reality. If I change the configuration in the XML, will the changes be propagated to the other nodes, or should I update the XML on each node too? Or is using Java configuration objects the better way, since they'll be loaded onto peers automatically?
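By Java configuration objects I mean wiring roughly like the sketch below, instead of the XML file; the cache name and the `CustomStore` class are placeholders, not my actual code:

```java
// Sketch: programmatic equivalent of the XML configuration.
// Cache name and store class are placeholders.
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerStartup {
    public static void main(String[] args) {
        CacheConfiguration<Long, String> cacheCfg =
            new CacheConfiguration<>("personCache");

        // Wire in the JDBC-backed store and enable write-behind.
        cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CustomStore.class));
        cacheCfg.setWriteThrough(true);
        cacheCfg.setWriteBehindEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(cacheCfg);

        // Starts a server node (clientMode defaults to false).
        Ignite ignite = Ignition.start(cfg);
    }
}
```

Note that even with programmatic configuration, the store class and the JDBC driver still have to be on the classpath of every node that may run the flusher.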

Thank you for your time and advice!


Solution

  • I see that you have two things to deploy: the configuration (XML file) and the libs. For your case I would recommend the following approach:

    • The configuration file does not have to be deployed to every server node, but the server and client nodes should be able to find each other. This means that the Ignite configuration used for the client and server nodes should contain a properly configured IP finder (for example TcpDiscoveryVmIpFinder or TcpDiscoveryMulticastIpFinder): https://apacheignite.readme.io/docs/cluster-config. The configuration for server nodes does not have to contain the cache configuration, etc.
    • The libs (the JDBC store and the JDBC driver) should be deployed on all nodes. Zero deployment does not work in this case: all classes referenced by the cache configuration must be available on all nodes.
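A static IP finder section in the XML configuration might look like the following sketch (the addresses are placeholders):

```xml
<!-- Sketch: discovery SPI with a static IP finder; addresses are placeholders. -->
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>10.0.0.1:47500..47509</value>
            <value>10.0.0.2:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
```

With this section shared between the client's and the servers' configurations, the nodes can discover each other even when the rest of each configuration differs.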