I am building a blog as a JS app. The app will consume an API for its content.
I am considering a server-side script that detects search bots: if one is detected, it responds with plain, crawlable HTML built from the API; otherwise it loads the JS app, which then makes an XHR request to fetch the content from the API and update the DOM.
Basically, if the request comes from a bot, we consume the API on the server and respond with plain HTML,
or,
if the request comes from a 'normal' user agent, the JS app consumes the API to get the content and serves it to the user.
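Something like this rough sketch is what I have in mind (assuming an Express server and Node 18+ for the built-in fetch; the bot pattern, API URL, and file name are placeholders):

```js
// Rough sketch only -- bot pattern, API URL, and file name are placeholders.
const path = require('path');
const express = require('express');
const app = express();

const BOT_UA = /googlebot|bingbot|yandexbot|duckduckbot|baiduspider/i;

app.get('/posts/:slug', async (req, res) => {
  if (BOT_UA.test(req.get('User-Agent') || '')) {
    // Crawler: consume the API on the server and return plain HTML.
    const resp = await fetch(`https://api.example.com/posts/${req.params.slug}`);
    const post = await resp.json();
    res.send(`<h1>${post.title}</h1>${post.body}`);
  } else {
    // Regular user agent: serve the JS app shell, which fetches the same API via XHR.
    res.sendFile(path.join(__dirname, 'app.html'));
  }
});

app.listen(3000);
```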
Are there any caveats to this approach?
Yes. Serving search engines different content from what regular users see is known as cloaking, and search engines penalise sites for it.
Build the site with Progressive Enhancement and Unobtrusive JavaScript instead.
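That way every client, crawler or not, receives the same server-rendered HTML, and JavaScript only enhances the experience where it is supported. A minimal sketch of what that enhancement could look like (the /api/posts endpoint, data-post attribute, and #content id are assumptions):

```js
// Unobtrusive enhancement: every client already gets server-rendered HTML;
// this script only upgrades in-page navigation when the browser supports it.
document.addEventListener('DOMContentLoaded', () => {
  const content = document.getElementById('content');
  if (!content || !window.fetch || !window.history.pushState) {
    return; // no enhancement: plain links still work with full page loads
  }

  document.addEventListener('click', async (event) => {
    if (!(event.target instanceof Element)) return;
    const link = event.target.closest('a[data-post]');
    if (!link) return;
    event.preventDefault();
    // Fetch the same content the server rendered and swap it in place.
    const post = await fetch(`/api/posts/${link.dataset.post}`).then((r) => r.json());
    content.innerHTML = `<h1>${post.title}</h1>${post.body}`;
    history.pushState({}, '', link.href);
  });
});
```

Crawlers and users with JavaScript disabled follow the ordinary links and get full page loads; capable browsers get the faster XHR-driven updates, so there is never a separate "bot version" of the site.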