I think I know why that happens: the docker registry container loses its record of what was pushed to the repo. I wish search were done against the storage backend directly (in my case an S3 bucket). I have a cloud template, so I start my stack from scratch very often, pulling the registry container again and setting up all the configs, and of course I lose all the data about the images I pushed to the repo. The EC2 instance running the docker registry container can be updated or go down for any reason, and a brand new one will be spun up by an auto scaling group.
So my question is basically: what is the best way to always connect to a registry and see all the images in the S3 bucket when performing a search, whether the registry was rebooted or a new one was spun up? Mounting a volume from the EC2 instance won't work, because it is entirely possible that the instance will go down along with the docker registry container at some point.
I am assuming you use the sqlalchemy search backend. You are most likely losing the data about the images you pushed because that search backend stores its index locally by default. This scenario and ways around it are discussed on the docker registry issue tracker at https://github.com/docker/docker-registry/issues/475; from that page:
By default, the search backend uses a sqlite database locally. You can configure that by changing the connection string to a database somewhere external to your autoscaling group or within the confines of the VPC (if you're using one, that is). Could this be the issue you're seeing?
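To make that concrete, here is a minimal sketch of what the change could look like in the registry's config.yml. The key names and the _env: interpolation follow the docker-registry project's sample config; the MySQL endpoint, database name, and credentials are placeholders you would replace with something that lives outside your autoscaling group, such as an RDS instance.

```yaml
# Sketch of the relevant part of a docker-registry (v1) config.yml.
# The mysql:// endpoint and credentials below are placeholders.
s3:
    storage: s3
    s3_access_key: _env:AWS_KEY
    s3_secret_key: _env:AWS_SECRET
    s3_bucket: _env:AWS_BUCKET
    storage_path: _env:STORAGE_PATH:/registry

    search_backend: sqlalchemy
    # Default: a local sqlite file that vanishes with the container.
    # sqlalchemy_index_database: sqlite:////tmp/docker-registry.db
    # Point the index at a database that outlives the instance instead:
    sqlalchemy_index_database: mysql://registry:secret@registry-db.example.com/docker_registry
```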
That discussion goes on to suggest putting an external database somewhere, potentially on EC2, and notes that there is no need to worry if it goes down and comes back up empty, since an empty index database will be re-indexed by the registry.
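Since your instances are replaced by the auto scaling group, it may be easiest to bake the whole launch command into your cloud template or user data, so a freshly spun-up EC2 instance comes up pointing at the same S3 bucket and the same external index database. A sketch, assuming the old v1 registry image and a placeholder database endpoint:

```bash
# Sketch: run docker-registry (v1) with S3 storage and an external search index.
# Bucket name, keys, and the database endpoint are placeholders; the mysql://
# URL assumes the SQLAlchemy MySQL driver is available in the image.
docker run -d -p 5000:5000 \
  -e SETTINGS_FLAVOR=s3 \
  -e AWS_BUCKET=my-registry-bucket \
  -e STORAGE_PATH=/registry \
  -e AWS_KEY="$AWS_ACCESS_KEY_ID" \
  -e AWS_SECRET="$AWS_SECRET_ACCESS_KEY" \
  -e SEARCH_BACKEND=sqlalchemy \
  -e SQLALCHEMY_INDEX_DATABASE="mysql://registry:secret@registry-db.example.com/docker_registry" \
  --name docker-registry \
  registry:0.9.1
```

With the index living outside the instance, any replacement container spun up by the auto scaling group should see the same search results as the one it replaced.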