I tested Nomad on Windows PC using Vagrant (VirtualBox backend, running Fedora 34).
In this small single-node test setup, Nomad is used to orchestrate service containers, Consul for service discovery, and Traefik for reverse-proxy.
Consul and Nomad are run from single binaries installed at /usr/local/bin, while Traefik is run as a Nomad job.
In a production environment, single-node deployment is not appropriate.
Table of Contents:
- Introduction
- Vagrant with VirtualBox
- Installing Nomad
- Installing Consul
- Installing Docker
- Example Using Redis
- Load-Balancing Example with Traefik
Introduction
Milan Aleksic shared his write-up, Using Ansible & Nomad for a homelab (part 1). I have known and tried Ansible in the past, but Nomad piqued my interest. After some Googling, I learned that it is an alternative to Kubernetes (k8s), fortunately with a less steep learning curve. That same day, I decided to give it a try on my Windows laptop.
I relied mostly on Tom Bamford’s Nomad/Consul/HAProxy tutorial, Introduction to HashiCorp Nomad, with some changes here and there.
Vagrant with VirtualBox
Download and install Vagrant. To run a Fedora box:
vagrant init bento/fedora-34
vagrant up
A new Vagrantfile will be created in the current directory after issuing the vagrant init command.
Once ready, ssh into this new Vagrant box:
vagrant ssh
Once inside, do some housekeeping, then install nomad by downloading its official binary. Note that the default user is vagrant.
$ sudo dnf upgrade
$ sudo dnf install htop git unzip
To manage images downloaded by Vagrant on the host machine:
vagrant box list
vagrant box remove <box-name>
Note: I first tried the fedora/35-cloud-base --box-version 35.20211026.0 image. However, I could not use dnf; it either got killed or I got kicked out of the SSH session.
Installing Nomad
Download the official binary and install it.
$ wget https://releases.hashicorp.com/nomad/1.2.6/nomad_1.2.6_linux_amd64.zip
$ unzip nomad_*
$ sudo mv nomad /usr/local/bin
Run nomad with:
$ sudo nomad agent -dev
The -dev flag tells nomad to run as both server and client on the current node. This is not recommended for a production environment. It also binds to localhost on port 4646, so we cannot access it from the host.
There are two options here: attach the VM to a public (bridged) network, or set up port forwarding. I chose the public network, only because it is simpler that way.
Exit the Vagrant box and modify the Vagrantfile, adding the following line within the config block to use the public network method.
config.vm.network "public_network"
The "public_network" setting allows the VM to get its own IP from the main router. Then, run vagrant reload to apply the change.
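As an alternative to the public network, port forwarding would expose Nomad's UI to the host without a bridged adapter. A sketch (the host port choice is mine, not from the tutorial); this line also goes inside the config block of the Vagrantfile:

```ruby
# Forward Nomad's HTTP/UI port (4646) from the guest to the host.
config.vm.network "forwarded_port", guest: 4646, host: 4646
```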
Log into the system again via ssh and check the IP with the ip a command. There should be two adapters, e.g. eth0 and eth1. In my case, the eth1 adapter is connected directly to my wireless router.
$ sudo nomad agent -dev -bind 0.0.0.0 -log-level WARN
With the -bind flag, nomad listens on all interfaces. With the -log-level WARN flag, I am reducing clutter on stdout.
Installing Consul
Download the official binary and install it.
$ wget https://releases.hashicorp.com/consul/1.11.3/consul_1.11.3_linux_amd64.zip
$ unzip consul_*
$ sudo mv consul /usr/local/bin
Run consul:
$ consul agent -dev -client 0.0.0.0 -log-level WARN
By default, consul recognizes nomad and shows both the nomad-client and nomad services as available on the system.
Installing Docker
To orchestrate Docker containers, docker must first be installed and its daemon started.
$ sudo dnf -y install dnf-plugins-core
$ sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
$ sudo dnf install docker-ce docker-ce-cli containerd.io
$ sudo systemctl enable --now docker
$ sudo usermod -aG docker $USER
Also install podman:
$ sudo dnf install -y podman
Note that once the docker daemon is running, nomad should pick it up automatically. For podman, however, a driver for nomad has to be installed first (not covered here).
Example Using Redis
This example makes sure Nomad and Consul are working together. For testing purposes, I submitted jobs directly through the Jobs web interface instead of writing .hcl Nomad job spec files.
job "redis" {
  datacenters = ["dc1"]

  group "cache" {
    count = 2

    network {
      port "db" {
        to = 6379
      }
    }

    service {
      name = "redis"
      tags = ["cache", "db"]
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        ports = ["db"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
Nomad integrates nicely with Consul: the service stanza allows Consul to pick the service up.
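The service stanza above registers only a name and tags; a slightly fuller registration (a sketch, reusing the "db" port label from the network block) could also advertise the port and a health check to Consul:

```hcl
service {
  name = "redis"
  port = "db"
  tags = ["cache", "db"]

  # Let Consul verify the container is actually accepting connections.
  check {
    type     = "tcp"
    port     = "db"
    interval = "10s"
    timeout  = "2s"
  }
}
```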
Load-Balancing Example with Traefik
The demo application prints its own address and port when accessed. Here, 2 instances are used to demonstrate load balancing.
job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 2

    network {
      port "http" {
        to = -1
      }
    }

    service {
      name = "demo-webapp"
      port = "http"
      tags = [
        "traefik.enable=true",
        "traefik.http.routers.http.rule=Path(`/myapp`)",
      ]

      check {
        type     = "http"
        path     = "/"
        interval = "2s"
        timeout  = "2s"
      }
    }

    task "server" {
      env {
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "hashicorp/demo-webapp-lb-guide"
        ports = ["http"]
      }

      resources {
        memory = 150
      }
    }
  }
}
Under the service stanza, there is a tags field. Consul will pick this up, and since Traefik is listening to Consul, Traefik will pick it up too and apply the router defined here:
traefik.http.routers.http.rule=Path(`/myapp`)
… which means Traefik will serve the app at <ip>:<port>/myapp, which in this case is <ip>:8080/myapp.
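Traefik v2 supports other matchers besides Path; for instance (a sketch, not used in this setup), PathPrefix would route everything under /myapp:

```hcl
tags = [
  "traefik.enable=true",
  # Match /myapp and anything below it, e.g. /myapp/health.
  "traefik.http.routers.http.rule=PathPrefix(`/myapp`)",
]
```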
For Traefik:
job "traefik" {
  region      = "global"
  datacenters = ["dc1"]
  type        = "service"

  group "traefik" {
    count = 1

    network {
      port "http" {
        static = 8080
      }
      port "api" {
        static = 8081
      }
    }

    service {
      name = "traefik"

      check {
        name     = "alive"
        type     = "tcp"
        port     = "http"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "traefik" {
      driver = "docker"

      config {
        image        = "traefik:v2.2"
        network_mode = "host"

        volumes = [
          "local/traefik.toml:/etc/traefik/traefik.toml",
        ]
      }

      template {
        data = <<EOF
[entryPoints]
  [entryPoints.http]
  address = ":8080"
  [entryPoints.traefik]
  address = ":8081"

[api]
dashboard = true
insecure  = true

# Enable Consul Catalog configuration backend.
[providers.consulCatalog]
prefix           = "traefik"
exposedByDefault = false

  [providers.consulCatalog.endpoint]
  address = "127.0.0.1:8500"
  scheme  = "http"
EOF

        destination = "local/traefik.toml"
      }

      resources {
        cpu    = 100
        memory = 128
      }
    }
  }
}
Go to <ip-address>:8080/myapp to try it out; refresh the page to see the port change between the two instances. The Traefik dashboard is also available at <ip-address>:8081, since the api entrypoint is enabled with the insecure dashboard option.