
Nomad Dev Cluster

What is it?

The Nomad dev cluster is a virtualised self-contained learning environment aimed at people who are interested in the Nomad container orchestration tool as well as the greater HashiCorp ecosystem (Consul & Vault are also included).

Further information on Nomad can be found in the official introductory documentation.

Index

  • Getting started
  • Local DNS
  • Customisation
  • Notes
  • Further Exercises
  • Links & References

Getting started

Requirements

Requirements are pretty basic. You'll need the following locally installed:

  1. VirtualBox
  2. Vagrant

Both are available for common operating systems (Linux, macOS, Windows).
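
If in doubt, you can confirm both tools are installed and on your PATH from a terminal (version numbers will vary):

vagrant --version
VBoxManage --version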

First time startup

Once you have the requirements installed, ensure you've checked out the repo:

git clone https://github.com/timcurzon/nomad-cluster.git

Move into the repo directory and start up the cluster...

cd [repoDir]
vagrant up

At this point, 3 almost identical VirtualBox machines will be created, each representing a single node in the cluster.
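
Once vagrant up completes, you can confirm from the host that all three nodes are running:

vagrant status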

Machine access

Each machine is named node-[number] (where [number] is 1-3), and can be accessed via:

  • SSH: simply vagrant ssh node-{1-3}
  • IP address: 172.16.0.10{1-3} (private, accessible only from the host machine)

The following service UIs are initially available:

  • Nomad: http://172.16.0.10{1-3}:4646
  • Consul: http://172.16.0.10{1-3}:8500

Note that as we haven't set up any DNS for the cluster, all access is direct via IP address. DNS options are covered in the local DNS section.
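
As a quick sanity check, you can SSH into any node and query cluster membership from the CLI (the Nomad CLI is used on the nodes later in this guide; the Consul CLI is assumed to be installed alongside it):

vagrant ssh node-1

# from inside the node
nomad node status    # should list 3 ready nodes
consul members       # should list 3 alive members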

Basic services

Now you have a running cluster, it's time to start up some services.

First up is Fabio, the cluster edge router. SSH into node-1, then...

nomad run /services/fabio.nomad

Check out the Fabio job status in the Nomad UI (3 instances should be running), then check the Fabio UI:

  • Nomad job UI: http://172.16.0.101:4646/ui/jobs/fabio
  • Fabio UI: http://172.16.0.101:9998
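
You can also check the job from a node's CLI, and confirm from the host that the Fabio UI responds (an HTTP 200 is expected once all instances are up):

# on node-1
nomad job status fabio

# on the host
curl -s -o /dev/null -w '%{http_code}\n' http://172.16.0.101:9998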

Starting up Vault

This step is optional, but is critical if you want to play around with setting up SSL/TLS services.

Note that the 3 Vault instances are provisioned by Nomad (in Docker containers). In addition, the Vault binary is available on each cluster node (along with appropriate environment variables) to allow CLI interaction.

To start up Vault, SSH into node-1, then...

nomad run /services/vault.nomad

Check the Vault job allocation status in the Nomad UI: http://172.16.0.101:4646/ui/jobs/vault. Once the job is running (all allocations successful)...

  1. Initialise vault:

    • UI: Access the first Vault UI at http://172.16.0.101:8200, enter 1 for both the "Key Shares" and "Key Threshold" values & click "Initialize"
    • CLI: SSH into node-1 then vault operator init -key-shares=1 -key-threshold=1
    • Download or note down the "Initial root token" & "Key 1" values.
  2. Now that you have a root token, make a copy of the SaltStack overrides example file (located at saltstack/pillar/overrides.sls.example) and name it overrides.sls. Replace the placeholder string "[[insert vault root token value here]]" with the root token value. Then, on the host machine, trigger a Vagrant reload with provisioning (this allows Nomad to use Vault)...

    vagrant reload --provision
  3. Finally, you need to unseal the Vault instance on each node. This is required every time the service is started up.

    • UI: Head back to the Vault UI on each node (http://172.16.0.10{1-3}:8200) & enter the unseal key ("Key 1") when prompted
    • CLI: SSH into each cluster node in turn and run vault operator unseal, entering the unseal Key 1 as prompted (see the CLI sketch after this list)
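
A minimal CLI sketch of steps 1 and 3, assuming the Vault binary & environment variables provisioned on each node (as noted above) and the dev-only 1 share / 1 threshold values:

# on node-1: initialise Vault (step 1), then note down the root token & Key 1
vault operator init -key-shares=1 -key-threshold=1

# on each of node-1, node-2 & node-3: unseal (step 3), pasting Key 1 when prompted
vault operator unseal

# optional: confirm the local instance is no longer sealed
vault status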

DISCLAIMER: these values are not acceptable for a production environment; refer to the Vault production hardening docs to learn how to harden Vault appropriately.

Initial cluster snapshot

Now that you have a fully initialised cluster, you may want to take a snapshot of each node for when you need to revert to a fresh state (see the VirtualBox user manual, snapshots section).
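
One convenient way to do this from the host is Vagrant's built-in snapshot support (the snapshot name below is just an example):

vagrant snapshot save node-1 fresh-cluster
vagrant snapshot save node-2 fresh-cluster
vagrant snapshot save node-3 fresh-cluster

# later, to roll a node back to that state (repeat for node-2 & node-3)
vagrant snapshot restore node-1 fresh-cluster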

Local DNS

To resolve any services you create on the cluster, some form of (local) DNS is needed. The simplest approach (at least on Linux) is dnsmasq, which is straightforward to set up (the example below was written for Ubuntu 18.04). The following config should work well enough:

port=53
interface=lo
listen-address=127.0.0.1
bind-interfaces
server=[your upstream DNS server, e.g. 8.8.8.8 for Google]

# Cluster domain
address=/.devcluster/172.16.0.101 # No wildcard round-robin DNS, route via node-1
#address=/.devcluster/172.16.0.102
#address=/.devcluster/172.16.0.103

# Per node cluster access
address=/.node-1.devcluster/172.16.0.101
address=/.node-2.devcluster/172.16.0.102
address=/.node-3.devcluster/172.16.0.103

Remember to update the address setting if you change the cluster domain.
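
With dnsmasq running and your resolver pointed at 127.0.0.1, you can verify resolution; the hostnames below are arbitrary examples, as the wildcard entries resolve any name under the domain:

dig +short myservice.devcluster @127.0.0.1          # expect 172.16.0.101
dig +short myservice.node-2.devcluster @127.0.0.1   # expect 172.16.0.102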

Customisation

The SaltStack pillar file saltstack/pillar/overrides.sls.example contains explanations & examples of common configuration values you might want to override. Make a copy of the example file, name it overrides.sls & edit accordingly to override default pillar values.
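
For example, from the repo root (re-provisioning afterwards so the new values are applied):

cp saltstack/pillar/overrides.sls.example saltstack/pillar/overrides.sls
# edit overrides.sls, then...
vagrant reload --provision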

Cluster name

This guide uses the default cluster name (which is "devcluster").

Notes

Networking

Networking is set up to approximately resemble a production environment, where a cluster node has 2 adapters - one public, and one private dedicated to intra-cluster communication.

Due to the virtualised environment requirements, the actual networking implementation is a little more complicated. Each node has 3 network interfaces, with 1 available outside the VM.

Note that {1-3} represents the cluster node number

  • 10.0.2.x Auto-provisioned by Vagrant, used for outbound network access via NAT
  • 172.16.0.10{1-3} External (& internal) interface, for accessing cluster services from the outside
  • 172.16.30.{1-3} Internal node to node interface, dedicated to Nomad server & cluster service traffic (the fan network bridge routes over this interface)

There are also two bridges:

  • 172.31.{1-3}.n Fan networking - Docker assigns IPs on this range to containers
    • Where {1-3} is the cluster node number
    • n is the per service IP (up to 254 services per cluster node)
  • 172.17.0.1 Default docker0 bridge (unused)

Service to service (container to container) addressing is achieved through fan networking - see Fan Networking on Ubuntu for technical details. In summary, it allows up to 254 uniquely addressable services per node, each routable from any node in the cluster.
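
To see this layout for yourself, SSH into any node and list the interfaces (exact interface & bridge names will vary with the OS and fan configuration):

ip -br addr show                 # NAT, external (172.16.0.x) & internal (172.16.30.x) interfaces
ip -br link show type bridge     # the fan bridge & the default docker0 bridge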

SaltStack

SaltStack is a configuration management tool, a bit like Puppet or Ansible. It is triggered by Vagrant upon provisioning to configure each node (Vagrant has a built-in SaltStack provisioner). To perform any non-trivial node customisation, the state & pillar files (configuration actions & key/value data respectively) are where you'll likely begin (note the customisation section above though).

SaltStack file overview:

  • saltstack/pillar - configuration data
  • saltstack/salt - configuration actions (services with a more complicated setup may have a directory of support files)

To give a very brief overview of operations: SaltStack starts by processing the top.sls state file, which in turn references other state files that are run on the specified nodes, e.g. '*' means run on all nodes (note the node name is specified by Vagrant - see the node.vm.hostname definition). The state files for the cluster are organised per service / common action.
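
If you change a state or pillar file, the simplest way to re-apply it is to re-run the Vagrant provisioner from the host; running Salt directly on a node should also work, assuming the nodes are configured masterless (the Vagrant Salt provisioner's usual mode):

# from the host: re-run SaltStack on a single node
vagrant provision node-1

# or, from inside a node (masterless assumption)
sudo salt-call --local state.apply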

Please refer to the SaltStack docs for further information.

Further Exercises

Links & References

Local DNS setup
