
Azure App Service behavior change, SSH stopped working


I have been beating my head against a wall for way more hours than I'd like to admit. I have 2 apps on one App Service plan, and they have been running great for over a year. One is a production instance and one is a dev instance; both use Azure DevOps and have a Docker container built on commits to their respective main/develop branches. As of this morning I enabled memcached for sessions instead of a db backend — maybe 12 users at once with a 5 min timeout, no big deal.

Before this morning my log stream would look like

[screenshot: normal log stream output]

All of a sudden now it looks like this

[screenshot: log stream after the change]

I can confirm my app is operational and 100% functional, except that I can't get to the web SSH, and I have no messages in the live log other than the weird Kudu controller output.

[screenshot: Kudu controller messages in the log stream]

I can confirm there were no changes to the sshd_config that MS requires to be in place, nor did I deviate from the root password that MS requires. I'm so confused as to what has changed. My prod instance, which I haven't messed with, is still running and SSH is working fine.
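For context, the SSH setup Microsoft documents for custom containers looks roughly like the fragment below. This is a sketch of the documented requirements, not my exact Dockerfile; the package manager and file paths depend on your base image:

```dockerfile
# Install the SSH server and set the root password that App Service's
# web SSH expects ("Docker!" is the documented required value).
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "root:Docker!" | chpasswd

# The sshd_config copied in must listen on port 2222, which is the
# port App Service's web SSH connects to inside the container.
COPY sshd_config /etc/ssh/sshd_config
EXPOSE 2222

# entry-point.sh should start sshd before launching the app, e.g.:
#   service ssh start
```

If any of these pieces drift (password, port 2222, or sshd not being started by the entrypoint), the web SSH blade stops working even though the app itself keeps running.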

I have compared my two sets of Dockerfile/entry-point.sh and they are identical other than the lines needed to install and start memcached. Removing those lines had no effect, so at this point I'm wondering if something in Kudu is stuck or broken.

Anyone ever experienced something like this?


Solution

  • After further searching I found an obscure reference to opening resources.azure.com, finding the associated resource, and changing two values in its JSON. Changing these values allows the SCM (Kudu) site to be rebooted along with the app:

    1. Change 'status' to 'Stopped'
    2. Change 'scmSiteAlsoStopped' to true

    Once I saved, waited, and started the site again, everything, including SSH, was back to normal. Looks like Kudu just had an issue it couldn't recover from.
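The same restart can be done from the Azure CLI instead of editing JSON in resources.azure.com. This is a sketch; the resource group and app names are placeholders for your own:

```shell
# Make the next stop also stop the SCM (Kudu) site, not just the app.
az resource update \
  --resource-group myResourceGroup \
  --name my-dev-app \
  --resource-type "Microsoft.Web/sites" \
  --set properties.scmSiteAlsoStopped=true

# Stop and start the app; because of the flag above, Kudu restarts too.
az webapp stop  --resource-group myResourceGroup --name my-dev-app
az webapp start --resource-group myResourceGroup --name my-dev-app
```

A plain restart from the portal does not bounce the SCM site, which is presumably why nothing short of this JSON change fixed it.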