Consul Integration
Consul is a tool for discovering and configuring services in your infrastructure. Consul's key features include service discovery, health checking, a KV store, and robust support for multi-datacenter deployments. Nomad's integration with Consul enables automatic clustering, built-in service registration, and dynamic rendering of configuration files and environment variables. The sections below describe the integration in more detail.
Configuration
In order to use Consul with Nomad, you will need to install and configure Consul on your nodes alongside Nomad, or schedule it as a system job. Nomad does not run Consul for you.
To enable Consul integration, please refer to the Nomad agent Consul configuration documentation.
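As a minimal sketch, the agent configuration below points Nomad at a local Consul agent; the address and token values are placeholders for your environment:

```hcl
# agent.hcl -- sketch of the Nomad agent `consul` block.
consul {
  # Address of the local Consul agent's HTTP API.
  address = "127.0.0.1:8500"

  # ACL token for Nomad to use, if Consul ACLs are enabled (placeholder).
  token = "CONSUL_TOKEN_PLACEHOLDER"

  # These default to true: advertise Nomad's own services in Consul, and
  # use Consul to automatically join servers and clients to the cluster.
  auto_advertise   = true
  server_auto_join = true
  client_auto_join = true
}
```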
Automatic Clustering with Consul
Nomad servers and clients automatically discover each other when a Consul cluster is already running and a Consul agent is installed and configured on each host. Please refer to the Automatic Clustering with Consul guide for more information.
Service Discovery
Nomad schedules workloads of various types across a cluster of generic hosts. Because of this, placement is not known in advance and you will need to use service discovery to connect tasks to other services deployed across your cluster. Nomad integrates with Consul to provide service discovery and monitoring.
To configure a job to register with service discovery, please refer to the `service` job specification documentation.
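For illustration, here is a sketch of a `service` block that registers a task in Consul with an HTTP health check; the service name, port label, and check path are examples:

```hcl
# Sketch of a `service` block registering a workload with Consul.
service {
  name     = "web"        # example service name
  port     = "http"       # references a port label from the group network
  provider = "consul"     # register in Consul rather than Nomad's built-in catalog

  check {
    type     = "http"
    path     = "/healthz" # example health endpoint
    interval = "10s"
    timeout  = "2s"
  }
}
```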
Service Mesh
Consul service mesh provides service-to-service connection authorization and encryption using mutual Transport Layer Security (TLS). Nomad can automatically provision the components necessary to securely connect your tasks to Consul's service mesh.
Refer to the Consul Service Mesh integration page for more information.
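As a sketch, a service joins the mesh through a `connect` block on its registration; the service names and ports below are illustrative:

```hcl
# Sketch: register a service into the Consul service mesh with an Envoy
# sidecar, and declare an upstream dependency on another mesh service.
service {
  name = "api"
  port = "9001"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "database" # example upstream service
          local_bind_port  = 5432       # reachable at 127.0.0.1:5432 inside the task
        }
      }
    }
  }
}
```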
Dynamic Configuration
Nomad's job specification includes a `template` block that uses a Consul ecosystem tool called Consul Template. This mechanism creates a convenient way to ship configuration files that are populated from environment variables, Consul data, Vault secrets, or just general configurations within a Nomad task.
For more information on Nomad's `template` block and how it leverages Consul Template, please see the `template` job specification documentation.
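For example, a `template` block can render Consul KV data and the addresses of healthy service instances into a file inside the task; the KV path and service name below are placeholders:

```hcl
# Sketch of a `template` block rendering Consul data into a task.
template {
  destination = "local/app.conf"
  change_mode = "restart" # restart the task when the rendered file changes

  data = <<-EOT
    # Value read from the Consul KV store (placeholder key).
    log_level = "{{ key "myapp/config/log_level" }}"

    # Addresses of healthy instances of an upstream service.
    {{- range service "database" }}
    upstream = "{{ .Address }}:{{ .Port }}"
    {{- end }}
  EOT
}
```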
DNS
To provide Consul DNS to Nomad workloads using bridge or CNI networking modes, you will need to configure Consul's DNS listener to be exposed to the workload network namespace, or configure systemd-resolved, dnsmasq, or similar DNS stub resolvers to forward DNS. See Forward DNS for Consul service discovery for details.
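For instance, a dnsmasq stub resolver can forward the `.consul` domain to the Consul agent's DNS listener, which listens on port 8600 by default:

```
# /etc/dnsmasq.d/10-consul (example path)
# Forward queries for the .consul domain to the local Consul DNS listener.
server=/consul/127.0.0.1#8600
```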
You can avoid exposing the Consul agent on a public IP by setting the Consul `bind_addr` to bind on a private IP address (the default is to use the `client_addr`). If you are not using DNS forwarding, you will also need to have Consul bind to port 53 for DNS.
You will also need to set the workload's nameserver to this address. The address is exposed as the `consul.dns.addr` node attribute, or as the `DNSStubListener` configuration value when using systemd-resolved.
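A sketch of wiring this into a job, assuming the `consul.dns.addr` attribute is populated on the client, uses attribute interpolation in the group network's `dns` block:

```hcl
# Sketch: point a bridge-networked group at Consul DNS using the node
# attribute mentioned above. Assumes the client's Consul fingerprint
# populates the attribute.
group "app" {
  network {
    mode = "bridge"

    dns {
      servers = ["${attr.consul.dns.addr}"]
    }
  }
}
```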
A simpler solution is to use Consul Connect transparent proxy mode, which automatically configures tasks to use Consul DNS.
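A minimal sketch, assuming a recent Nomad and Consul with Connect enabled; the service name and port are illustrative:

```hcl
# Sketch: transparent proxy mode redirects the task's traffic through
# the Envoy sidecar and configures Consul DNS automatically.
service {
  name = "api"
  port = "9001"

  connect {
    sidecar_service {
      proxy {
        transparent_proxy {}
      }
    }
  }
}
```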
Consul ACL
The Consul ACL system protects the cluster from unauthorized access. When enabled, both Consul and Nomad must be properly configured in order for their integrations to work.
Refer to the Consul ACL integration page for more information.
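As a rough, illustrative sketch only, the Consul ACL policy attached to the token Nomad uses typically grants read access to agents, nodes, and the KV store and write access to services; the authoritative rules are listed on the integration page:

```hcl
# Sketch of a Consul ACL policy for the token Nomad uses. The exact
# rules required are documented on the Consul ACL integration page.
agent_prefix "" {
  policy = "read"
}
node_prefix "" {
  policy = "read"
}
service_prefix "" {
  policy = "write"
}
key_prefix "" {
  policy = "read" # needed for template blocks that read Consul KV
}
```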
Consul Namespaces Enterprise
Nomad provides integration with Consul Namespaces for service registrations specified in `service` blocks and Consul KV reads in `template` blocks.
By default, Nomad will not specify a Consul namespace on service registrations or KV store reads, which Consul then implicitly resolves to the "default" namespace. This default namespace behavior can be modified by setting the `namespace` field in the Nomad agent `consul` configuration block.
For more control over Consul namespaces, Nomad Enterprise supports configuring the Consul namespace at the group or task level in the Nomad job spec, as well as the `-consul-namespace` command line argument for `job run`.
The Consul namespace used for a set of group or task service registrations within a group, as well as `template` KV store access, is determined by the following hierarchy, from highest to lowest precedence:
- Group and task configuration: the Consul namespace field defined in the job at the task or group level.
- `job run` command option: the Consul namespace defined in the `-consul-namespace` command line option on job submission.
- `job run` command environment variable: the Consul namespace defined as the `CONSUL_NAMESPACE` environment variable on job submission.
- Agent configuration: the Consul namespace defined in the `namespace` Nomad agent Consul configuration parameter.
- Consul default: if no Consul namespace options are configured, Consul automatically uses the "default" namespace.
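For instance, in Nomad Enterprise a group-level `consul` block can pin the namespace; the namespace name here is an example:

```hcl
# Sketch: set the Consul namespace for all services and templates in
# this group (Nomad Enterprise). "engineering" is an example namespace.
group "app" {
  consul {
    namespace = "engineering"
  }
}
```

The same value could instead be supplied at submission time with `nomad job run -consul-namespace=engineering example.nomad.hcl`.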
Multiple Consul Clusters Enterprise
Nomad Enterprise supports access to multiple Consul clusters. They can be configured using multiple `consul` blocks with different `name` values. If a `name` is not provided, the cluster configuration is called `default`. Nomad automatic clustering uses the `default` cluster for service discovery.
Jobs that need access to Consul may specify which Consul cluster to use with the `consul.cluster` parameter.
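A sketch of this setup, with example cluster names and addresses: two `consul` blocks in the agent configuration, and a group that opts into the non-default cluster.

```hcl
# Agent configuration sketch: two Consul clusters. The unnamed block is
# the "default" cluster, which automatic clustering uses.
consul {
  address = "127.0.0.1:8500"
}

consul {
  name    = "infra"          # example cluster name
  address = "10.0.0.10:8500" # example address of the second cluster
}
```

```hcl
# Job specification sketch: direct this group's services and templates
# at the "infra" Consul cluster instead of the default one.
group "app" {
  consul {
    cluster = "infra"
  }
}
```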
Assumptions
Each Nomad client should have a local Consul agent running on the same host, reachable by Nomad. Nomad clients should never share a Consul agent or talk directly to the Consul servers. Nomad is not compatible with Consul Data Plane.
The service discovery feature in Nomad depends on operators making sure that the Nomad client can reach the Consul agent.
Tasks running inside Nomad also need to reach the Consul agent if they want to use any of the Consul APIs. For example, a task running inside a Docker container in bridge mode cannot talk to a Consul agent listening on the loopback interface of the host, because a container in bridge mode has its own network namespace and does not see interfaces in the host's network namespace. There are a couple of ways to solve this: run the container in host networking mode, or make the Consul agent listen on an interface in the network namespace of the container.
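For example, a Docker task can be run in host networking mode so that it shares the host's interfaces and can reach a loopback-bound Consul agent; the image name is a placeholder:

```hcl
# Sketch: run the container in the host network namespace so it can
# reach a Consul agent bound to the host's loopback interface.
task "app" {
  driver = "docker"

  config {
    image        = "example/app:1.0" # placeholder image
    network_mode = "host"
  }
}
```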
The `consul` binary must be present in Nomad's `$PATH` to run the Envoy proxy sidecar on client nodes. Consul service mesh using network namespaces is only supported on Linux.
Compatibility
All currently supported versions of Nomad are compatible with recent versions of Consul, with some exceptions:
- Nomad is not compatible with Consul Data Plane.
|              | Consul 1.16.0+ | Consul 1.17.0+ | Consul 1.18.0+ |
|--------------|----------------|----------------|----------------|
| Nomad 1.7.0+ | ✅             | ✅             | ✅             |
| Nomad 1.6.0+ | ✅             | ✅             | ✅             |
| Nomad 1.5.0+ | ✅             | ✅             | ✅             |