
All of the websites on my server are currently deployed using git push. This ensures that all of the changes are easily traceable and easily reverted if necessary. This deploy mechanism is mostly powered by Jeff Lindsay (progrium)'s gitreceive script. Here are some notes on how to create a similar setup. These notes assume you already have a Linux server that is running nginx with virtual hosts (although the steps can be adapted to other setups as well).

The first step in this setup process is to set up a user to receive git commits via git push and the gitreceive script. Instead of duplicating the instructions, I will simply refer to the README for the gitreceive script. The important steps to perform are "Set up a git user on the server" and "Create a user by uploading a public key from your laptop". Although you might want to test the setup using the default receiver script in the README, you will replace the receiver script with a completely different one in the next step. Note that the username I use for gitreceive is not git but webdeploy; if you also use a username other than git, you must ensure that the GITUSER environment variable is set appropriately. You will also have to substitute your own username for webdeploy in the steps below.
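For reference, the server-side setup boils down to something like the following. Treat this as a sketch rather than gospel; the authoritative download URL and steps are in the gitreceive README.

# Install the gitreceive script (URL per the gitreceive README)
sudo wget -O /usr/local/bin/gitreceive https://raw.githubusercontent.com/progrium/gitreceive/master/gitreceive
sudo chmod +x /usr/local/bin/gitreceive

# Create the webdeploy user and its home directory scaffolding
sudo GITUSER=webdeploy gitreceive init

# Then, from your laptop, authorize your SSH key for pushes
# ("yourname" is just a label for the key)
cat ~/.ssh/id_rsa.pub | ssh you@yourserver "sudo GITUSER=webdeploy gitreceive upload-key yourname"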

The next step is to copy an appropriate receiver script into the home directory of the webdeploy user. The receiver script I use can be found here on my GitHub. However, this script makes many assumptions about how files are laid out in each site's git repository and on the server.

The git repositories for each site on my server are laid out somewhat like this:

<root>
├── build.sh
├── build/
├── conf/
│   └── nginx.conf
├── content/
│   └── ...
└── ...

Notably, the nginx.conf fragment for each virtual host is located within the source itself. This helps ensure that any possible reverse proxying, settings for FastCGI/uWSGI/etc., and other similar configuration are all versioned along with the actual source code. There is also a build.sh script that does any necessary processing of the source code (e.g. minifying). The build/ directory becomes the actual webroot for each site. (Unfortunately this does mean that extra copying/hardlinking is required for static files that do not need to undergo any processing during the build process.) The rest of the repository (e.g. content/) is not served despite eventually also ending up under /var/www (this is controlled by a root directive in each site's nginx.conf fragment).
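To make this concrete, a site's conf/nginx.conf fragment might look roughly like the following hypothetical example; note the root directive pointing at the build/ directory through the current-version symlink described below:

server {
    listen 80;
    server_name robertou.com;
    # Only the build output is served; content/ and the rest of the
    # repository live under /var/www but outside this root.
    root /var/www/robertou.com/build;
}

Similarly, a minimal build.sh for a purely static site might be nothing more than this (again a hypothetical example; a real script might also invoke minifiers, a static site generator, and so on):

#!/bin/sh
set -e
# Rebuild the webroot from scratch, copying over static content.
rm -rf build
mkdir build
cp -R content/. build/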

Because changing the nginx.conf file requires a reload of nginx, which normally requires root, the next step is to modify the sudoers file to allow webdeploy to do this. To do so, run sudo visudo and add a line like the following:

webdeploy ALL=(ALL) NOPASSWD: /bin/systemctl reload nginx.service

Unfortunately, the sudoers syntax is rather confusing. This line allows the webdeploy user to run only the /bin/systemctl binary, and only with the exact arguments reload nginx.service. The NOPASSWD tag means webdeploy can run this command without a password, and (ALL) means it can run it as any target user (in practice, root). However, webdeploy is not allowed to perform any other action, such as disabling the service or controlling any other service.
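You can verify the result with sudo -l; the output should contain something like the following (exact formatting varies by distribution and sudo version):

$ sudo -l -U webdeploy
...
User webdeploy may run the following commands on this host:
    (ALL) NOPASSWD: /bin/systemctl reload nginx.service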

In order to allow the receiver script to actually create new files and directories under /var/www, we need to change some permissions. One way to do this is to give webdeploy ownership of all of /var/www. However, I opted to use extended ACLs to grant permissions instead. It makes almost no difference in this case, and traditional groups suffice in many setups, but extended ACLs generally help reduce group proliferation when granting fine-grained access to different parts of the filesystem to different users. If you choose to use ACLs, you can grant the webdeploy user full permissions on /var/www using

sudo setfacl -m "u:webdeploy:rwx" /var/www
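Afterwards, getfacl should show the new entry alongside the traditional owner/group/other permissions (the owner and group here are just whatever your distribution uses for /var/www):

$ getfacl /var/www
# file: var/www
# owner: root
# group: root
user::rwx
user:webdeploy:rwx
group::r-x
mask::rwx
other::r-x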

My receiver script uses a number of symlinks in order to make site updates less obviously non-atomic. The next step is to set these up. For each domain, there are two symlinks <site> and <site>-prev under /var/www. For example, part of /var/www might look like this:

$ ls -l /var/www
lrwxrwxrwx  1 webdeploy webdeploy   62 Aug 29 09:33 robertou.com -> /var/www/robertou.com-3a8d6216872d83f2d68c82fcb09b6904796bd70a
drwxrwxr-x  1 webdeploy webdeploy  270 Aug 29 09:33 robertou.com-3a8d6216872d83f2d68c82fcb09b6904796bd70a
drwxrwxr-x  1 webdeploy webdeploy  276 Aug 29 09:22 robertou.com-6c19354ac248201a032fab3524602550051f3310
lrwxrwxrwx  1 webdeploy webdeploy   62 Aug 29 09:33 robertou.com-prev -> /var/www/robertou.com-6c19354ac248201a032fab3524602550051f3310
...

Initially, the symlinks can both be set to point to dummy targets. The purpose of the -prev symlink is to facilitate emergency reverts without needing to rerun the build process for the previous version of the site.
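Bootstrapping a new site might look something like this sketch; the dummy directory only needs to exist so that both symlinks resolve before the first push:

sudo mkdir /var/www/robertou.com-dummy
sudo ln -s /var/www/robertou.com-dummy /var/www/robertou.com
sudo ln -s /var/www/robertou.com-dummy /var/www/robertou.com-prev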

Finally, the nginx.conf fragments are included in the main /etc/nginx/nginx.conf (assuming a distribution default configuration that uses sites-available and sites-enabled) by symlinking /var/www/<site>/conf/nginx.conf into /etc/nginx/sites-available and then symlinking that symlink into /etc/nginx/sites-enabled, like so:

$ ls -l /etc/nginx/sites-enabled/robertou.com 
lrwxrwxrwx 1 root root 31 Mar  2 12:04 /etc/nginx/sites-enabled/robertou.com -> ../sites-available/robertou.com
$ ls -l /etc/nginx/sites-available/robertou.com 
lrwxrwxrwx 1 root root 37 Mar  2 12:03 /etc/nginx/sites-available/robertou.com -> /var/www/robertou.com/conf/nginx.conf
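These symlinks only need to be created once per site, with something like:

sudo ln -s /var/www/robertou.com/conf/nginx.conf /etc/nginx/sites-available/robertou.com
sudo ln -s ../sites-available/robertou.com /etc/nginx/sites-enabled/robertou.com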

Once this is all set up, doing a push to the webdeploy user should automatically:

  1. Receive and unpack the new version of the site.
  2. Run its build.sh script.
  3. Move the "current site" symlink to point to the new version.
  4. Move the "old site" symlink to point to the old version.
  5. Delete the "old old" version.
  6. Reload nginx.
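To illustrate, a heavily simplified sketch of a receiver script implementing these steps might look like the following; my actual script (linked above) handles more details. gitreceive invokes the receiver with the repository name and pushed revision as arguments and a tar stream of the tree on stdin (check the gitreceive README for the exact interface):

#!/bin/bash
set -e

SITE="$1"   # repository name, e.g. robertou.com
REV="$2"    # commit SHA being deployed
NEW="/var/www/$SITE-$REV"

# 1. Receive and unpack the new version of the site.
mkdir -p "$NEW"
tar -x -C "$NEW"

# 2. Run its build.sh script.
(cd "$NEW" && ./build.sh)

# 3./4. Record the old targets, then move both symlinks.
OLD="$(readlink "/var/www/$SITE")"
OLDOLD="$(readlink "/var/www/$SITE-prev")"
ln -sfn "$NEW" "/var/www/$SITE"
ln -sfn "$OLD" "/var/www/$SITE-prev"

# 5. Delete the "old old" version (guarding against first deploys,
#    where both symlinks may still point at the same dummy target).
if [ -n "$OLDOLD" ] && [ "$OLDOLD" != "$OLD" ] && [ "$OLDOLD" != "$NEW" ]; then
    rm -rf "$OLDOLD"
fi

# 6. Reload nginx (permitted passwordlessly via the sudoers entry above).
sudo /bin/systemctl reload nginx.service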

My setup here has a number of known bugs:

  • A build failure will leave stray files in /var/www. I ignored this problem for my use case because I assume the build has already been run locally before pushing, and because I can simply remove the stray files with sudo if a build does unexpectedly fail.
  • Submodules are not handled. The gitreceive repository has a hint on how to fix this.
  • It is not resistant to malicious users. Because the webdeploy user runs build.sh under its own privileges, a malicious user can grief any site on the same server. This setup is therefore not suitable for a shared hosting environment. gitreceive itself also apparently has a bug that allows creating repositories anywhere on the system (subject to normal write permission checks). Only give trusted users access to the webdeploy user.