static content generator: a full rundown

Published on 18 Oct 2014

In the previous experiments, I did all the tests on commercial PaaS offerings: OpenShift and Heroku. My room to move on commercial PaaS was quite limited, as I couldn’t really figure out the problems well enough to fix them. This time, I spun up a Docker instance and began tinkering with git.

Hexo vs Hugo

I was introduced to static content generators when @pali7x showed me nanoc, a popular choice. There are more SCGs, such as middleman, jekyll, and octopress. What really drew me into SCGs was not their performance (although that is a contributing factor), but their simplicity and customizability. It feels like using Arch Linux.

Some of you might argue that Arch Linux is the very definition of the polar opposite of simplicity, but we hobbyist hackers define simplicity quite differently. In our realm, simplicity means being able to perceive, control, and manipulate every moving part of something, while that thing still performs very well. There is an adage in the Linux and open source community: “do one thing, and do it well”. SCGs are a good example of this.

Installing nanoc for the very first time gave me an experience akin to waking up on the wrong side of the bed, and I was quite frustrated. I looked for alternatives and found hexo, and then hugo. I distanced myself from hexo at first because it was created by an Asian developer (ironic, because I am Asian too), but after dealing with the immaturity of hugo, I had to pick hexo.

git --bare init

Before settling down with plain git instead of dokku (a mini PaaS with buildpack support), I was eyeing open source PaaS solutions (as written here).

The stopping factors were that Deis requires Chef (a new learning curve), and Flynn has too many moving parts (even more learning curve).

# on the server
$ git --bare init

# on the local computer
$ git init
$ git add . && git commit -m "initial commit"
$ git remote add origin <git server>:<git folder>
$ git push origin master

The commands above cover the basic concept: create a bare git repo on the server, then connect a local git folder to that repo on the server.

The real trick here is the post-receive hook.
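The hook itself only appeared here as a screenshot, so here is a minimal sketch of a post-receive along these lines. All three paths are assumptions; point them at your own bare repo, a scratch clone directory, and the folder nginx serves:

```shell
#!/bin/sh
# post-receive sketch: runs on the server after a push is received.
# Paths below are assumptions -- adjust them to your own layout.
GIT_REPO=$HOME/blog.git
TMP_GIT_CLONE=$HOME/tmp/blog
PUBLIC_WWW=$HOME/www

# clone the freshly pushed repo into a scratch dir, then copy
# only the generated static files into the folder nginx serves
rm -rf "$TMP_GIT_CLONE"
git clone "$GIT_REPO" "$TMP_GIT_CLONE"
cp -R "$TMP_GIT_CLONE"/public/* "$PUBLIC_WWW"
```

Drop it into hooks/post-receive inside the bare repo and chmod +x it; anything it prints shows up in your local terminal during the push.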



Basically, a hook is a file with a bunch of shell commands, so it is less frightening for Linux users. The post-receive hook above is executed after the git repo on the server receives changes from your local machine. To put it simply: once you’ve pushed changes by issuing git push origin master, this post-receive hook runs, and you see its output live in your local terminal.

nginx inside Docker, nginx on the host

Remember that I said I am using a Docker instance for the git + SCG setup? If you pay attention to the post-receive hook above, you might notice that I use cp -R $TMP_GIT_CLONE/public/*. The popular way of doing this is to install the same SCG on the server: if you are using Hexo on your local PC, you install Hexo on the server too, and instead of cp -R you run hexo generate or hexo server to serve the static files.

I have a different taste though, and to keep my flow as simple as possible I opted for cp -R. It looks cleaner that way.

I didn’t deploy the Docker instance with shared storage (my bad, my bad), which could have simplified the workflow even more. Since I didn’t expose any port other than the one for SSH (the git pushes here go over SSH), I clearly needed some sort of strategy for nginx.

==[outside traffic]==> nginx (host) ==> nginx (instance, via Docker's internal IP) ==> the serve/ folder

To get the internal IP of the Docker instance, run sudo docker inspect <instance name>. The command outputs a lot of things, so look for the IPAddress entry.

"NetworkSettings": {
        "Bridge": "docker0",
        "Gateway": "",
        "IPAddress": "",
        "IPPrefixLen": 16,
        "PortMapping": null,

This is the vHost config file scg.conf for the nginx inside the Docker instance:

server {
    listen <xxxx>;
    root /home/<user>/<www>;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}

And this one is the scg.conf for the nginx on the host:

server {
    listen 80;

    location / {
        proxy_pass http://172.17.0.<x>:<xxxx>;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Basically, the scg.conf inside the Docker instance points traffic to a specific folder, while the scg.conf on the host acts as a reverse proxy.

updating the blog: the workflow

$ hexo new "post name"
$ atom source/_posts/
$ hexo generate
$ git add . && git commit -m "added new article: <post-name>.md"
$ git push origin master

When hexo new is issued, it creates a Markdown file inside the source/_posts folder above. hexo generate then builds the static files from the theme defined in _config.yml, together with the Markdown files inside source/_posts. hexo server can be used to preview the blog locally.


Worth it.

footnote / update

I changed a few things in the post-receive hook there. The [activity history 007.3][09] gives you this:



But what I’ve written in this article gives you this:



Notice that I use cp -R $TMP_GIT_CLONE/public/* $PUBLIC_WWW instead of cp -R $TMP_GIT_CLONE/* $PUBLIC_WWW. The reason is that previously I versioned only the public/ folder of the hexo install (the one that holds the static files). Now I version the whole hexo root. With this strategy, I have a full backup on the server.

If you are deploying with git, be on the lookout for the .gitignore file. When you are inside the hexo blog folder, issue this command: cat .gitignore.


As you can see, this .gitignore file lists the public/ folder. It means that when you push your changes, the files inside public/ are excluded from the push.
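Since I want public/ to travel with the rest of the repo, that line has to go. A quick sketch, assuming the entry reads exactly public/ (the scratch .gitignore contents here are made up for the demo):

```shell
# scratch demo: fabricate a .gitignore like the one above,
# then delete the public/ line so the generated files get tracked
cd "$(mktemp -d)"
printf 'db.json\nnode_modules/\npublic/\n' > .gitignore
sed -i '/^public\/$/d' .gitignore
cat .gitignore
```

After that, git add .gitignore, commit, and the next push carries public/ along.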