Using Git to deploy changes to web sites has been around for a while - years and years. I just thought I'd share a small twist that I haven't seen others doing yet. I started out the normal way, by adding a post-receive hook on the git server, but the problem with that is the hook itself isn't under version control. So whenever I want to change how I deploy, I need to log in to the server and make the changes there, and I have to track those changes somewhere else - another repo, I suppose. So I came up with a somewhat better way.

I have a post-receive hook that I put on all my deployable web site repositories. It does the usual checkout, but instead of taking further action itself it moves into the checked-out directory and looks for a deploy script to call. So now the actual deployment logic lives within the repo and can be modified along with the rest of the code. And each repo/site can handle deployment differently.

Here are the scripts I use - first, the post-receive hook. This is real simple and goes in your git server repo's hooks directory (remember to make it executable):


#!/bin/bash
unset GIT_DIR
DEPLOY_WORK=/tmp/deploy-work   # scratch checkout location - any writable path will do

while read from to branch; do
    mkdir -p "${DEPLOY_WORK}"
    GIT_WORK_TREE="${DEPLOY_WORK}" git checkout -f "${branch}"
    pushd "${DEPLOY_WORK}" > /dev/null
    if [ -f deploy ]; then
        ./deploy "${branch##*/}"
    fi
    popd > /dev/null
    rm -rf "${DEPLOY_WORK}"
done

Then I make a deploy script that sits inside the repo. And a nice thing here is it can be in python or whatever you like, as long as you have that on your server. In my case I put static web sites in an Amazon S3 bucket because it's fast, scales well, and basically free for low traffic web sites. So I use the nice s3cmd tool to take care of uploading.

#!/bin/bash
# for s3 deploy of git repo
# script to upload src directory to a bucket selected by branch script argument $1
# include in repo and git server post-receive hook can call to deploy
# depends on s3cmd - pip install s3cmd

src=output   # directory to upload - mine holds the generated static site

# map branches to target buckets - example names, substitute your own
declare -A branch
branch[test]="test.example.com"
branch[master]="www.example.com"

if [[ "${branch[$1]}" ]]; then
    bucket="${branch[$1]}"
    echo "Deploying $1 to $bucket"
    touch .gzs .gitignore .s3ignore
    gzs=$(find "$src" -name '*.gz')
    for f in $gzs; do
        # upload pre-gzipped files under their uncompressed names with a gzip
        # Content-Encoding header, and exclude both names from the main sync below
        fx="${f#$src/}"
        echo "$fx" >> .gzs
        echo "${fx%.gz}" >> .gzs
        s3cmd sync --guess-mime-type --no-mime-magic --acl-public --add-header="Content-Encoding:gzip" --no-preserve --add-header="Cache-Control:public, max-age=86400" "$f" "s3://$bucket/${fx%.gz}"
    done
    s3cmd sync -r --exclude-from '.s3ignore' --exclude-from '.gzs' --exclude-from '.gitignore' --delete-removed --acl-public --no-preserve --guess-mime-type --no-mime-magic --add-header="Cache-Control:public, max-age=86400" "$src"/ "s3://$bucket"
    rm .gzs
else
    echo "Branch $1 has no bucket - not deployed."
fi

The cool thing here is that this deploy script looks at the branch being deployed and chooses which bucket to push to. It could make other choices, like which web root directory to copy to on the server. Mine also checks for .gz files, renames them, and sets the content encoding. It can make other changes too - nasty ones as well - so be aware that anyone with access to your local git repo can run code as the git user on your server. You have limited the privileges of your git user, right?
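As a sketch of that web-root idea - all names and paths here are hypothetical, not from my setup - a deploy script for a conventional server could use the same branch-dispatch pattern, just mapping branches to directories instead of buckets:

```shell
#!/bin/bash
# hypothetical variant: choose a local web root per branch instead of an S3 bucket
declare -A webroot=( [test]="/var/www/test" [master]="/var/www/html" )

deploy_to_webroot() {
    local target="${webroot[$1]}"
    if [[ -n "$target" ]]; then
        echo "Deploying $1 to $target"
        # the real copy step would slot in here, e.g.:
        # rsync -a --delete output/ "$target"/
    else
        echo "Branch $1 has no web root - not deployed."
    fi
}

deploy_to_webroot "${1:-test}"
```

Everything else - the hook, the in-repo deploy script, the branch argument - stays exactly the same; only the dispatch table and the copy command change.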

When I work on a web site I do it in the test branch, and a simple git push sends any changes to the server, where the deploy script is invoked to push the right content up to the right bucket on S3. When I'm happy with the changes, I git checkout master and git merge test, then git push. And auto-magically it ends up in the production bucket. Here's what I see when I push my test branch - including output from my deploy script:

neocogent$ git push
Counting objects: 42, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (14/14), done.
Writing objects: 100% (16/16), 1.26 KiB | 0 bytes/s, done.
Total 16 (delta 8), reused 0 (delta 0)
remote: Previous HEAD position was c80a8ab... how i deploy
remote: HEAD is now at 7359b74... tweaks
remote: Deploying test to
remote: upload: 'output/author/neocogent.html' -> 's3://'  [1 of 4]
remote:  47281 of 47281   100% in    0s   392.00 kB/s  done
remote: upload: 'output/blog/2017/01/how-i-do-deploy.html' -> 's3://'  [2 of 4]
remote:  21006 of 21006   100% in    0s   186.52 kB/s  done
remote: upload: 'output/index.html' -> 's3://'  [3 of 4]
remote:  47162 of 47162   100% in    0s   403.86 kB/s  done
remote: upload: 'output/sitemap.xml' -> 's3://'  [4 of 4]
remote:  8756 of 8756   100% in    0s    95.35 kB/s  done
remote: Done. Uploaded 124205 bytes in 1.0 seconds, 121.29 kB/s.
   c80a8ab..7359b74  test -> test

Typically this is pretty fast, as Git only sends the changes to the server and compresses the data. Manually uploading to S3 is quite slow from my location, so having the server expand the files and send them from there on a "big pipe" is super quick. Notice above - I can't get 400 KB/s upload from home.

I also have a few aliases that reduce command fatigue - put these in your ~/.gitconfig (all repos) or .git/config (local repo). repush lets me re-deploy even without changing any files, which is useful for testing. The other two are handy too.

    [alias]
        repush = "!f() { git commit --allow-empty --amend --no-edit; git push; }; f"
        golive = "!f() { git checkout master; git merge test; git push; }; f"
        test = "!f() { git checkout test; }; f"


© Copyright 2018 neoCogent. All rights reserved.
