wireguard deployment

Published on 14 Apr 2019

I became really interested in trying wireguard after I saw Ars Technica post their review a few months back. What sold me on wireguard? Well, it is not that hard a sell when you advertise speed and simplicity. I finally gave wireguard a go on 6 April 2019. I agree that it was less challenging to set up (as opposed to OpenVPN) and pretty fun to use.

Note that I am aware of the existence of Docker containers for OpenVPN and IPsec. However, knowing how to install from scratch, sans compiling, is a good skill to have.

The Motivation

The motivation to use wireguard did not solely stem from the desire to have a virtual private network (VPN). Rather, it was born out of my frustration finding the best way to expose local or internal services to the internet. I had just purchased a single-board computer (SBC), a Rock64, and there were services on that board that I would like to access from anywhere on this planet.

Conventionally, exposing local services to the internet can be done in two ways: through a dynamic DNS service (3rd party) or a tunneling service (also 3rd party). In my book, dependency on a 3rd party does not rank highly and is best avoided if at all possible.

I went on Google to find out what others did. Luckily, I stumbled upon Jordan Crawford’s tutorial on working with Tinc VPN. His method has one limitation: one must have a virtual private server (VPS) with a static IP address to reach the SBC. With this method, local services on the SBC can be exposed to the internet by tunneling traffic through an internet-facing VPS.

The VPS requirement did not kill the idea. VPSes nowadays are fairly cheap. My coffee costs me around $4.91 every day (high-end coffee, mind you). Paying $5.00 every month for a decent KVM-based server is not that hard to do. Vultr’s cheapest reasonable offering starts at $3.50/month. Amazon Lightsail’s cheapest offering also starts at $3.50/month. Point is, the VPS requirement should not be a dealbreaker.

Vultr’s absolute cheapest plan is priced at $2.50/month, but that one comes with IPv6 only.

I liked this idea, except I was not planning to use Tinc. The new kid on the block, wireguard, demanded my attention and I could not resist the temptation to play around with it.

The Test Setup

On 6 April 2019, I performed a test run. The setup consisted of a VPS (Vultr), my linux desktop, and my Android phone. The VPS, codenamed X09V “Henesys”, was designated as the wireguard server with the IP address 10.23.5.1. My linux desktop was given the IP address 10.23.5.2 and my Android phone had 10.23.5.3.

Before I go further, I would like to say thanks to Emanuel Duss for the tutorial on setting up wireguard.

On both the VPS and the linux desktop, wg0 was used as the wireguard interface. This means that both had the wireguard configuration file located at /etc/wireguard/wg0.conf, since the interface name is derived from the name of the configuration file. The package wireguard was installed from its official repository.

For this test, the Vultr VPS was running on Ubuntu 18.04 LTS.
-- add wireguard PPA
$ sudo add-apt-repository ppa:wireguard/wireguard; sudo apt update

-- install wireguard
$ sudo apt install wireguard -y

-- create wg0.conf file
$ sudo touch /etc/wireguard/wg0.conf

Configuring the Server

I started the configuration process on the server by generating the public and private keys. I created a directory called wireguard inside my home directory and generated both keys inside that directory.

-- create the directory ~/wireguard
$ mkdir ~/wireguard

-- set file creation mode
$ umask 077

-- generate keys on the server
$ cd ~/wireguard
$ wg genkey | tee privatekey | wg pubkey > publickey

The content of the keys can be viewed simply by using the cat command. I was surprised to see that the keys were pretty short.
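For instance, using the files generated above:

-- view the generated keys
$ cat ~/wireguard/privatekey
$ cat ~/wireguard/publickey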

On the server, my wg0.conf looked like this:

$ sudo cat /etc/wireguard/wg0.conf
[Interface]
Address = 10.23.5.1/24, fc00:23:5::1/64
ListenPort = 1500
PrivateKey = <server privatekey>
PreUp = iptables -t nat -A POSTROUTING -s 10.23.5.0/24  -o ens3 -j MASQUERADE; ip6tables -t nat -A POSTROUTING -s fc00:23:5::/64 -o ens3 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.23.5.0/24  -o ens3 -j MASQUERADE; ip6tables -t nat -D POSTROUTING -s fc00:23:5::/64 -o ens3 -j MASQUERADE

# linux desktop
[Peer]
PublicKey = <linux desktop publickey>
AllowedIPs = 10.23.5.2/32

# Android phone
[Peer]
PublicKey = <phone public key>
AllowedIPs = 10.23.5.3/32

A number of things to note here.

  • ListenPort tells the server to listen on port 1500.
  • PreUp and PostDown tell wireguard to manage the iptables rules that establish the NAT chain. PreUp adds the rules; PostDown deletes them.
  • AllowedIPs in each peer section tells wireguard on the server to only accept packets from that client if they come from this source IP address.

The iptables rules above are critical to ensure that internet traffic can flow back and forth between the clients and the server. Note that in this configuration file, the -o flag specifies the ens3 interface. On my Vultr VPS, the interface facing the internet was ens3. This might be DIFFERENT, say, if you are on DigitalOcean or Amazon Lightsail. So, be careful when copy-pasting this.

So, check your interface by running ip addr, or you will be spending hours pounding your head against the wall.
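If you do not feel like scanning the full ip addr output, the default route is another quick way to identify the internet-facing interface:

-- the internet-facing interface is the one named after "dev"
$ ip route show default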

Configuring the Client: linux desktop

For generating the private and public keys, I did the same thing as what I previously did on my VPS. As for the wg0.conf configuration file, it looked like this:

$ sudo cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <pc client private key>
Address = 10.23.5.2/24, fc00:23:5::2/64
DNS = 1.1.1.1

[Peer]
PublicKey = <VPS server public key>
Endpoint = <server public IP>:1500
AllowedIPs = 0.0.0.0/0

The client wg0.conf is less busy than the one on the server, because it does not need the PreUp and PostDown directives. To avoid DNS leaks, the DNS field was added and pointed to Cloudflare’s DNS at 1.1.1.1. The Endpoint field specifies the server’s public-facing IP address.

The AllowedIPs on the client is different from what was specified on the server. This essentially tells wireguard on the PC to route traffic for any destination through the wireguard server (and to accept tunneled packets from any source). This is what I wanted. I would like traffic packets from anywhere to be sent to my PC through my VPS.

This is the definition of a VPN, right?

Later in this post, I will also cover split tunneling, where you can turn on wireguard without tunneling all traffic but can still access the internal services.

Before Starting the Connection

Before starting wireguard on the server, the server must know how to do IP forwarding. Any modern linux distribution will have IP forwarding disabled by default because most people don’t need it.

To activate IP forwarding, I created a file called wireguard.conf inside the /etc/sysctl.d/ directory.

$ sudo cat /etc/sysctl.d/wireguard.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

To activate, run

$ sudo sysctl -p /etc/sysctl.d/wireguard.conf

Since the file lives in /etc/sysctl.d/, it is read again at boot, so these settings persist across reboots; the sysctl -p command above simply applies them immediately without waiting for a reboot.
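To double-check that forwarding is actually on, you can query the kernel directly:

-- both commands should print 1
$ sysctl -n net.ipv4.ip_forward
$ sysctl -n net.ipv6.conf.all.forwarding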

Starting the Connection

It is simple.

$ sudo wg-quick up wg0

To bring it down.

$ sudo wg-quick down wg0
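Once the interface is up, the wg tool can report the tunnel status, which is handy for confirming that handshakes with the peers are actually happening:

-- show interface details, peers, and transfer counters
$ sudo wg show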

From the local linux desktop, I was able to ping both the server at 10.23.5.1 and google.com. From inside the server, I was able to ping the linux desktop at 10.23.5.2. Now the server and the linux desktop were on the same network.
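The test itself was nothing fancy, just plain ping from both ends:

-- from the linux desktop
$ ping -c 3 10.23.5.1
$ ping -c 3 google.com

-- from the server
$ ping -c 3 10.23.5.2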

I call this network (drum roll) the Demonic Cluster. I will let you know in the future why the name is Demonic Cluster, lol.

Unexpected Behavior

At first, my linux desktop threw errors saying that it could not find resolvconf.

Weird.

I went online to investigate what this meant, only to find, to my dismay, discussions on things that were beyond me. But I was able to find a solution. As it turns out, when the DNS directive is present in wg0.conf, wg-quick calls resolvconf to apply it.

Install resolvconf.

$ sudo apt install resolvconf -y

DNS Blackholing with Pi-Hole

In February 2018 (a little over a year ago), I installed Pi-Hole on my VPS and configured it to listen on the internet-facing IP. It was a fun experience. However, many advised against running Pi-Hole on an internet-facing server. After all, Pi-Hole is meant to be installed in a local network for local clients to resolve DNS while blocking ads. Why? Well, malicious actors can abuse an open resolver to perform DNS reflection/amplification DDoS attacks.

Now, given that I could access services on my VPS through the wireguard tunnel, which should be available only to connected peers without being exposed to the public internet, I figured I could revive the “Pi-Hole on VPS” plan with my new setup.

Yeah, I did that, and it worked.

Before going further, here is what I did. The interface was set to wg0. By default, the installer picked the internet-facing IP address to listen on. I changed it to the wireguard IP address, 10.23.5.1, so that ONLY clients on the same network can see it instead of the whole universe. Then, in the client’s wg0.conf, the DNS directive was changed from Cloudflare’s 1.1.1.1 to 10.23.5.1 so that DNS requests go through my Pi-Hole on the VPS.

A few sections back, I mentioned split tunneling, right? It came to me that I did not need full tunneling. The ability to reach my Pi-Hole would be fantastic enough.

Hence, on the linux desktop’s wg0.conf, I changed the AllowedIPs to allow split tunneling and to use the Pi-Hole as the DNS resolver.

$ sudo cat /etc/wireguard/wg0.conf
...
DNS = 10.23.5.1
...
AllowedIPs = 10.23.5.0/24
...

With this setup, I was not using wireguard to tunnel all my traffic. In other words, the public IP on my linux desktop would still be the IP that my internet service provider assigned to me.

This setup still constitutes a VPN because I can access my internal network at 10.23.5.0/24.

Overview of Pi-Hole Installation

Just watch this GIF. If you have installed Pi-Hole before, the instructions I wrote above will not be that challenging to follow.

[GIF: pi-hole installation]

Securing Pi-Hole Admin Interface from Public

One thing to note here is that my VPS now has two network interfaces: the public-facing ens3 and the one spawned by wireguard, the wg0 interface. Concerning the Pi-Hole installation, its port 80 was accessible from both ens3 (public) and wg0 (internal). Being accessible to the public would present a security risk, so that had to change.

At first, I tried using ufw but it did not work as expected (I had no idea why). Maybe I sucked at writing the rules. So, I turned to iptables and it magically worked!

To block access on port 80 through the ens3 interface, I ran this command:

$ sudo iptables -A INPUT -i ens3 -p tcp --destination-port 80 -j DROP

… where the -A flag tells iptables to append a rule to the INPUT chain; here, the rule tells iptables to DROP all TCP traffic destined for port 80 arriving on ens3.

To remove the rule, the command is the same but with -A replaced by -D, which stands for delete.

-- remove the rule
$ sudo iptables -D INPUT -i ens3 -p tcp --destination-port 80 -j DROP
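To verify whether the rule is in place (or properly gone), list the INPUT chain:

-- list the INPUT chain with packet counters
$ sudo iptables -L INPUT -v -n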

Subdomain & Resolving Services

In the future, this Demonic Cluster is going to have more services running. What I have in mind right now are Huginn, ArchiveBox, Commento, etc.

At the earlier stage, accessing local services was done by typing the IP address and the port number. How inelegant that solution was! I was not willing to sacrifice brain space to memorize random numbers. What about you? I am sure you don’t want that either.

Something had to be done. One of the solutions was to edit 10.23.5.1’s /etc/hosts file. Say there was a service running at port 9813 on 10.23.5.1. If you recall, I gave a name to 10.23.5.1, which was henesys. So, let’s use that.

$ cat /etc/hosts
10.23.5.1 henesys

Every time after editing /etc/hosts, do not forget to reload the DNS server via the Pi-Hole web interface.
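If you prefer the shell over the web interface, Pi-Hole’s CLI can do the same reload:

-- reload Pi-Hole's DNS server from the command line
$ pihole restartdns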

However, /etc/hosts is not made to handle ports. Mapping a hostname to a specific port falls onto an HTTP server performing what we call a reverse proxy. For this, I went with Caddy. Here is an example of a Caddyfile for performing a reverse proxy.

$ cat /home/user/Caddyfile
service.henesys:8080 {
  tls off
  proxy / 10.23.5.1:9813 {
    transparent
  }
}

To run caddy with this configuration:

$ caddy -conf=/home/user/Caddyfile

For now, this looks good. Assuming caddy threw no errors, the service should be accessible at service.henesys:8080.
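A quick sanity check from a connected client (assuming the hostname resolves through the Pi-Hole setup above) could look like this:

-- fetch only the response headers from the proxied service
$ curl -I http://service.henesys:8080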

My actual Caddy setup is a little more elaborate than this and I won’t cover it here. Maybe I will cover it in the future?

Closing Word

I truly had fun learning this. In the past, I have had the experience of installing OpenVPN (and OpenVPN Access Server), Pritunl, and IPsec + L2TP. I dare say the simplicity of wireguard is unrivaled. However, this software is still an infant, so I am hoping that more good stuff shall come out of this project.

Oh, by the way, I came across a blog post a few days ago, written by Dennis Schubert, with the premise that a VPN is not a magic bullet for securing your online activity. I would say that my view on VPNs is aligned with his. You should read it too.