I created a machine learning model with Prophet:
https://www.kaggle.com/marcmetz/ticket-sales-prediction-facebook-prophet
I have a web application running on Django. From that application, I want to be able to look up predictions from the model I created. I assume the best approach is to deploy the model on Google Cloud Platform or AWS and access its forecasts through API calls from my web application to one of those services.
My question: is the approach I described the right way to do this? I still can't decide whether AWS or Google Cloud is the better fit for my case, especially with Prophet; I could only find examples using scikit-learn. Does anyone have experience with this who can point me in the right direction?
It really depends on the type of model you are using. In many cases, inference means sending the model a data point (similar to the data points it was trained on) and getting back a prediction for that data point. In those cases, you need to host the model somewhere, in the cloud or on the edge.
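To make that request/response pattern concrete, here is a minimal sketch using only the Python standard library. The `model_predict` stub and the JSON payload shape are placeholders, not the API of any particular cloud service; a real deployment would swap in your trained model and a managed serving endpoint.

```python
# Minimal request/response inference endpoint: accept a data point as
# JSON, return a prediction as JSON.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_predict(features):
    # Placeholder "model": a real deployment would call the trained
    # model here instead of averaging the inputs.
    return sum(features) / len(features)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        result = {"prediction": model_predict(payload["features"])}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

# To run standalone:
# HTTPServer(("localhost", 8000), PredictHandler).serve_forever()
```

Your Django view would then POST the data point to this endpoint and render the returned prediction.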
However, Prophet typically generates its future predictions as part of training the model. In that case, you only need to serve predictions that were already calculated, and you can serve them as a CSV file from S3, or as lookup values from DynamoDB or another lookup data store.
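A minimal sketch of the lookup side, assuming you already exported the fitted model's forecast (Prophet's `forecast[["ds", "yhat"]]` columns) to a CSV in S3 or elsewhere. The inline CSV, column names, and function names here are illustrative, not a fixed format:

```python
# Serve precomputed Prophet forecasts as a simple date -> value lookup.
import csv
import io

# Example of what an exported forecast file might look like
# (ds = date, yhat = predicted value, following Prophet's column names).
FORECAST_CSV = """ds,yhat
2019-07-01,120.5
2019-07-02,133.2
2019-07-03,128.9
"""

def load_forecast(fileobj):
    """Read an exported forecast into a {date: prediction} dict."""
    return {row["ds"]: float(row["yhat"]) for row in csv.DictReader(fileobj)}

def predict(lookup, date):
    """Return the precomputed prediction, or None if the date is out of range."""
    return lookup.get(date)

lookup = load_forecast(io.StringIO(FORECAST_CSV))
print(predict(lookup, "2019-07-02"))  # 133.2
```

In a Django view, `load_forecast` could read the file fetched from S3 (for example with boto3's `get_object`), or the whole dict could be replaced by per-date `get_item` calls against a DynamoDB table. Either way, no model is hosted at request time.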