I am developing an Android application for use in agricultural machines.
The application will generate on average 10 to 40 log records per minute.
After 20 hours, there will be 12,000 to 48,000 records stored in SQLite.
My question is about the best way to submit these records to the server over the Internet.
Today I am using a Java web application with JSF pages and JAX-RS (Jersey) for communication with Android. For the database I am using PostgreSQL 9, connecting via JPA (EclipseLink).
In the tests I'm doing, sending all this data via REST seems problematic because the process is lengthy; I'm having timeout problems, among other issues.
To work around the problem and ensure data consistency, I am sending the data in pages and starting the database insertion only after all records have been sent.
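To make the workaround concrete, here is a minimal sketch of the paging step, assuming records are already serialized as strings; the class name, `PAGE_SIZE` value, and method names are illustrative, not my actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class PagedUploader {
    // Illustrative page size; the real value would be tuned for payload
    // size and server limits.
    static final int PAGE_SIZE = 500;

    // Split the queued log records into fixed-size pages. Each page would
    // then be POSTed to the server in sequence, and the server would start
    // the database insertion only after the last page arrives.
    public static List<List<String>> paginate(List<String> records) {
        List<List<String>> pages = new ArrayList<>();
        for (int i = 0; i < records.size(); i += PAGE_SIZE) {
            pages.add(records.subList(i, Math.min(i + PAGE_SIZE, records.size())));
        }
        return pages;
    }
}
```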
This solves part of the problem, but I'm not sure it is the best approach. Inserting the data on the server takes even longer, and Android cannot wait for the process to complete because of the timeout.
Thus, the user cannot be sure the process succeeded; only much later, when the insertion finishes, can the user confirm that it was successful.
This also creates other problems, because I have to prevent the user from starting a new upload until confirmation of the previous one arrives.
What is the best solution for this type of case?
One of the solutions I intend to look at is WebSocket, but I would like to hear about other developers' experience.
After some testing and evaluation, I found a good solution for this case. The whole process of uploading and storing the records on the server took under 40 seconds.
In the Android application I'm using GZIPOutputStream to compress the data, which lets Jackson write compressed output. The result was very good for me: it is possible to read, compress, and send 50,000 records from Android to the server in about 15 seconds, and the compressed payload is approximately 400 KB.
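A minimal sketch of the compression step, using only java.util.zip so it is self-contained; in the real app Jackson can write straight into the gzip stream with `mapper.writeValue(gzipStream, records)`, but here a pre-serialized JSON string stands in for the exported records:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class LogCompressor {
    // Compress a JSON payload before upload. Closing the GZIPOutputStream
    // flushes the gzip trailer, so the try-with-resources block must end
    // before the buffer is read.
    public static byte[] compress(String json) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return buffer.toByteArray();
    }
}
```

The resulting byte array is what gets POSTed to the server as the request body.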
On the server, I'm using GZIPInputStream to decompress the data, which lets Jackson read the compressed stream. The server receives the compressed data and stores it to disk; afterwards, a scheduled task decompresses it and inserts it into the database. That process takes 15 to 20 seconds for 50,000 records.
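The server-side decompression can be sketched as follows, again stdlib-only so it is self-contained; in the real app Jackson can read directly from the gzip stream, e.g. `mapper.readValue(gzipStream, new TypeReference<List<LogRecord>>() {})`, where `LogRecord` is a hypothetical entity class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

public class LogDecompressor {
    // Inflate the uploaded payload back to the original JSON text.
    public static String decompress(byte[] compressed) throws IOException {
        try (InputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = gzip.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toString("UTF-8");
        }
    }
}
```

Because the payload is stored to disk first and inserted later by a scheduled task, the REST call can return as soon as the upload completes, avoiding the timeout.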
That worked well for me; I hope it can help other developers as well.