I am currently considering switching from pip/virtualenv to pipenv for my web projects (mainly Django). They are deployed with a "minimal downtime" process, inspired by the one described at https://kuttler.eu/en/post/django-deployments-without-downtime/
TL;DR
Is there a pipenv command to create a new environment from scratch (new interpreter, no packages), then install the dependencies, then set it as the default for the current directory?
Let's take a quick example. I have a project installed on my Debian server, with the following structure:
/srv
└── project
├── .git/
├── etc/
├── sources/
├── venv_20170922/
└── venv -> venv_20170922
When I deploy, I want to keep the time the website is offline as short as possible. Here is a simplified view of the steps I usually follow (the indentation is only there to make the process easier to read):
cd /srv/project
git pull
virtualenv -p python3 venv_20171015
source venv_20171015/bin/activate
pip install -r sources/requirements.txt
pushd sources
python manage.py migrate
python manage.py collectstatic
popd
deactivate
supervisorctl stop myproject
# Now the website is offline
ln -sfn venv_20171015 venv  # -n so the symlink itself is replaced
supervisorctl start myproject
# Now the website is back online
With this process, the website is offline only for a moment: the time needed to stop the service, update the symbolic link, and start it again. The supervisor script runs a gunicorn process from the environment behind the "venv" path.
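A note on the symlink swap: the exact flags on the ln call matter. If the destination is an existing symlink that points to a directory, GNU ln -sf dereferences it and creates the new link inside that directory instead of replacing the symlink; the -n (--no-dereference) flag is what makes the swap work. A self-contained sketch in a temporary directory (names mirror the example above):

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"
mkdir venv_20170922 venv_20171015
ln -s venv_20170922 venv              # "current" environment

# Without -n the existing symlink is followed, so the new link lands
# inside the old environment instead of replacing "venv":
ln -sf venv_20171015 venv
ls venv_20170922                      # shows the stray venv_20171015 link
rm venv_20170922/venv_20171015

# With -n the symlink itself is replaced, which is what the deploy needs:
ln -sfn venv_20171015 venv
readlink venv                         # prints venv_20171015
```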
But how can I reproduce similar behavior with pipenv? As far as I know, the environment is created on the fly the first time a pipenv command is used inside a project folder. If so, are there commands to control this behavior more precisely?
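For reference, pipenv does expose switches for most of these steps. A hedged sketch, assuming a reasonably recent pipenv release; the pipenv invocations are left commented out since they need pipenv installed and a Pipfile present:

```shell
# Keep the virtualenv in ./.venv instead of pipenv's shared cache directory,
# so each project has a stable path for supervisor/gunicorn to point at:
export PIPENV_VENV_IN_PROJECT=1

# pipenv --python 3    # create a fresh environment with a chosen interpreter
# pipenv sync          # install exactly what Pipfile.lock pins, nothing else
# pipenv --venv        # print the environment's path (for the symlink/config)
# pipenv --rm          # delete the environment to start over from scratch
```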
Instead of using pipenv on the live server, use it only on your development machine and generate a requirements.txt file with the following command:
$ pipenv lock -r > requirements.txt
(Note: recent pipenv releases removed the -r flag; there the equivalent is pipenv requirements > requirements.txt.)
During deployment, install the packages with pip (no need to install or use pipenv on the server at all):
$ pip install -r requirements.txt
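Combining this with the symlink-swap from the question, the whole deploy can be sketched as below. The /srv/project layout, the date-stamped venv naming, and the "myproject" supervisor program come from the question; the supervisorctl and manage.py calls are left commented since they need the real server, and the sketch falls back to a temporary directory with an empty sources/requirements.txt so it stays self-contained:

```shell
set -eu
# PROJECT_ROOT would be /srv/project in production; a temp dir keeps the
# sketch runnable anywhere (with a stand-in empty requirements file).
PROJECT_ROOT="${PROJECT_ROOT:-$(mktemp -d)}"
cd "$PROJECT_ROOT"
mkdir -p sources
[ -f sources/requirements.txt ] || printf '' > sources/requirements.txt

NEW_VENV="venv_$(date +%Y%m%d%H%M%S)"
python3 -m venv "$NEW_VENV"                             # fresh interpreter, no packages
"$NEW_VENV/bin/pip" install -r sources/requirements.txt # deps exported by pipenv lock

# In production, migrations and static files are handled while the site is up:
# "$NEW_VENV/bin/python" sources/manage.py migrate
# "$NEW_VENV/bin/python" sources/manage.py collectstatic --noinput

# supervisorctl stop myproject
ln -sfn "$NEW_VENV" venv        # -n replaces the symlink itself
# supervisorctl start myproject
```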