I'm creating a small service where I poll around 100 accounts (in a Twitter-like service) frequently (every 5 seconds or so) to check for new messages, as the service doesn't yet provide a streaming API (like Twitter actually does).
In my head, I have the architecture planned as queuing `Ticker`s every 5 seconds for every user. Once the tick fires, I make an API call to the service to check their messages, then `SELECT` the specific user's details from my Postgres database and check the date of the most recent message; if there are messages newer than that, I `UPDATE` the entry and notify the user. Repeat ad nauseam.
I'm not very experienced in backend things and architecture, so I want to make sure this isn't an absolutely absurd setup. Is the amount of calls to the database sensible? Am I abusing goroutines?
Let me answer given what you describe.
I want to make sure this isn't an absolutely absurd setup.
I understand the following. For each user, you create a tick every 5 seconds in one goroutine. Another goroutine consumes those ticks, performing the polling and comparing the date of the last message with the date you have recorded in your PostgreSQL database.
The answer is: it depends. How many users do you have, and how many can your application support? In my experience, the best way to answer this question is to measure the performance of your application.
Is the amount of calls to the database sensible?
It depends. To give you some reassurance, I have seen a single PostgreSQL database handle hundreds of `SELECT` statements per second. I don't see a design mistake, so benchmarking your application is the way to go.
Am I abusing goroutines?
Do you mean running too many of them? I think it is unlikely that you are abusing goroutines in that way. If there is a particular reason you think this could be the case, posting the corresponding code snippet would make your question more precise.