Basically, routes that don't exist are caught when using the dev command, and the error / 404 page is displayed. But when using export and uploading the generated files to a webserver, this does not work. Instead, the index page is displayed, but none of the logic works, e.g. clicking another link for navigation does nothing.
I had a catch-all slug in the code before, but removed it and deleted all the files that were generated by the export command, to make sure it is really gone. Could this be the issue? What would the slug file need to look like?
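For reference, the catch-all route I had was roughly along these lines (reconstructed from memory, so treat it as a sketch rather than the exact file):

```svelte
<!-- src/routes/[...slug].svelte -->
<script context="module">
  export function preload(page) {
    // page.params.slug is an array of path segments, e.g. ['some', 'missing', 'path']
    const path = '/' + page.params.slug.join('/');

    // no other route matched, so hand off to src/routes/_error.svelte with a 404
    this.error(404, `Not found: ${path}`);
  }
</script>
```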
When using sapper export
the script will start from your index page and visit (and render) all pages reachable by links on that page. This gives you a static version of your website that you can upload to your hosting. It replaces the server-side rendering Sapper normally does for the first page the user visits; after that the client-side app takes over and everything works as normal.
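For example, the export step on the command line might look like this (assuming the default Sapper project layout, where the exported site ends up in __sapper__/export; serve is just one way to preview the output):

```sh
npx sapper export            # crawl from / and write static HTML, JS and CSS
npx serve __sapper__/export  # preview the exported site locally
```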
Since the 404 page is only shown when the user goes somewhere that does not exist, you will (usually) not have a link pointing to it, and therefore the crawler will never render that page.
To tell Sapper to also crawl that page, you have to add it as an entry point.
In package.json:
"export": "sapper export --entry \"/ /404\""
This extra parameter tells the script to start at / (the main index page) as before, and then to repeat the whole process starting at /404 (which shouldn't exist and therefore renders your error page).
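Putting it together, the scripts section of package.json might look something like this (the dev and build scripts are just the usual Sapper defaults; the inner quotes are escaped so the JSON stays valid):

```json
{
  "scripts": {
    "dev": "sapper dev",
    "build": "sapper build",
    "export": "sapper export --entry \"/ /404\""
  }
}
```

One more thing to check: after exporting, your static host also has to be configured to serve the rendered 404 page for unknown URLs (many hosts look for a 404 page by convention); if the host instead rewrites everything to index.html, you get exactly the behaviour you describe.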