Advanced Server Installation

This section outlines installing the Netmaker server, including Netmaker, Netmaker UI, rqlite, and CoreDNS.

System Compatibility

Netmaker requires elevated privileges on the host machine to perform network operations. Netmaker must be able to modify interfaces and set firewall rules using iptables.

Typically, Netmaker is run inside of containers, using Docker or Kubernetes.

Netmaker can be run without containers, but this is not recommended. You must run the Netmaker binary, CoreDNS binary, database, and a web server directly on the host.

Each of these components has its own individual requirements, and the management complexity increases significantly when running outside of containers.

For first-time installs, we recommend the quick install guide. The following documents are meant for more advanced installation environments and are not recommended for most users. However, these documents can be helpful in customizing or troubleshooting your own installation.

Server Configuration Reference

Netmaker sets its configuration in the following order of precedence:

  1. Environment Variables: Typically values set in the Docker Compose. This is the most common way of setting server values.

  2. Config File: Values set in the config/environments/*.yaml file.

  3. Defaults: Default values set on the server if no value is provided in configuration.

In most situations, if you wish to modify a server setting, set it in the netmaker.env file, then run "docker kill netmaker" and "docker-compose up -d".
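As a concrete sketch of that workflow (the file path and the VERBOSITY variable here are just examples; the docker commands are shown as comments since they require a live install):

```shell
# Build a throwaway netmaker.env to demonstrate the edit (example values only)
cat > /tmp/nm-demo.env <<'EOF'
MASTER_KEY=secretkey
VERBOSITY=0
EOF

# Change a setting in place
sed -i 's/^VERBOSITY=.*/VERBOSITY=3/' /tmp/nm-demo.env
grep '^VERBOSITY' /tmp/nm-demo.env
# → VERBOSITY=3

# Then, on a real install, apply it:
# docker kill netmaker && docker-compose up -d
```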

Variable Description


Default: “”

Description: MUST SET THIS VALUE. This is the public, resolvable DNS name of the MQ Broker. For instance:


Default: (Server detects the public IP address of machine)

Description: The public IP of the server where the machine is running.


Default: “”

Description: MUST SET THIS VALUE. This is the public, resolvable address of the API, including the port. For instance:


Default: “”

Description: The public IP of the CoreDNS server. Will typically be the same as the server where Netmaker is running (same as SERVER_HOST).


Default: Equals SERVER_HOST if set, “” if SERVER_HOST is unset.

Description: Should be the same as SERVER_API_CONN_STRING minus the port.


Default: 8081

Description: Should be the same as the port on SERVER_API_CONN_STRING in most cases. Sets the port for the API on the server.


Default: “secretkey”

Description: The admin master key for accessing the API. Change this in any production installation.


Default: “*”

Description: The “allowed origin” for API requests. Change to restrict where API requests can come from.


Default: “on”

Description: Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST). Change to “off” to turn off.


Default: “off”

Description: Enables DNS Mode, meaning config files will be generated for CoreDNS.


Default: “sqlite”

Description: Specify db type to connect with. Currently, options include “sqlite”, “rqlite”, and “postgres”.



Description: Specify the necessary string to connect with your local or remote sql database.


Default: “localhost”

Description: Host where postgres is running.


Default: “5432”

Description: Port postgres is running on.


Default: “netmaker”

Description: DB to use in postgres.


Default: “postgres”

Description: User for postgres.


Default: “nopass”

Description: Password for postgres.


Default: “off”

Description: The server allows you to set PostUp and PostDown commands for nodes, which is standard for WireGuard with wg-quick. However, this is also Remote Code Execution, a critical vulnerability if the server is exploited. Because of this, it is turned off by default; if turned on, PostUp and PostDown become editable.


Default: “on”

Description: If "on", the server will always show the key values of "access keys". This could be considered a vulnerability, so if turned "off", key values will only be displayed once, and it is up to the users to store them locally.


Default: <system MAC address>

Description: This setting is used for HA configurations of the server, to identify between different servers. Nodes are given IDs like netmaker-1, netmaker-2, and netmaker-3. If the server is not HA, there is no reason to set this field.


Default: “on”

Description: If “on”, the server will send anonymous telemetry data once daily, which is used to improve the product. Data sent includes counts (integer values) for the number of nodes, types of nodes, users, and networks. It also sends the version of the server.


Default: (public IP of server)

Description: The address of the mq server. If running from docker compose it will be "mq". If using "host networking", it will find and detect the IP of the mq container. Otherwise, you need to input the address. If not set, it will use the public IP of the server. The port 1883 will be appended automatically; this is the expected reachable port for MQ and cannot be changed at this time.


Default: “off”

Description: Whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables and forwarding for MQ.


Default: “on”

Description: Allows netmaker to manage iptables locally to set forwarding rules. Largely for DNS or SSH forwarding (see below). It will also set a default "allow forwarding" policy on the host. It's better to leave this on unless you know what you're doing.


Default: “”

Description: Comma-separated list of services for which to configure port forwarding on the machine. Options include "mq,dns,ssh". MQ IS DEPRECATED, DO NOT SET THIS. "ssh" forwards port 22 over WireGuard, enabling SSH to the server over WireGuard. However, if you set the Netmaker server as a Remote Access Gateway (ingress), this will break SSH on Remote Access Clients, so be careful. "dns" enables private DNS over WireGuard. If you would like to use private DNS with ext clients, turn this on.


Default: 0

Description: Specify the level of logging you would like on the server. Goes up to 3 for debugging. If you run into issues, up the verbosity.

Config File Reference

A config file may be placed under config/environments/<env-name>.yml. To read this file at runtime, provide the environment variable NETMAKER_ENV at runtime. For instance, dev.yml paired with NETMAKER_ENV=dev. Netmaker will load the specified config file. This allows you to store and manage configurations for different environments. Below is a reference config file you may use.

  apihost: "" # defaults to or remote ip (SERVER_HOST) if DisableRemoteIPCheck is not set to true. SERVER_API_HOST if set
  apiport: "" # defaults to 8081 or HTTP_PORT (if set)
  masterkey: "" # defaults to 'secretkey' or MASTER_KEY (if set)
  allowedorigin: "" # defaults to '*' or CORS_ALLOWED_ORIGIN (if set)
  restbackend: "" # defaults to "on" or REST_BACKEND (if set)
  clientmode: "" # defaults to "on" or CLIENT_MODE (if set)
  dnsmode: "" # defaults to "on" or DNS_MODE (if set)
  sqlconn: "" # defaults to "http://" or SQL_CONN (if set)
  disableremoteipcheck: "" # defaults to "false" or DISABLE_REMOTE_IP_CHECK (if set)
  version: "" # version of server
  rce: "" # defaults to "off"
  mqhost: "" # defaults to "mq"
  nodeid: "" # defaults to macaddress of machine
  messagequeuebackend: "" # default to "on"
  database: "" # defaults to "sqlite"
  verbosity: "" # defaults to 0
  authprovider: "" # defaults to ""
  displaykeys: "" #  defaults to "on"
  manageiptables: "" # defaults to "on"
  portforwardservices: "" # defaults to "", options include "dns" and "ssh"
  hostnetwork: "" # defaults to "off"
  mqport: "" # defaults to 8883
  mqserverport: "" # defaults to 1883
  server: "" # defaults to "", should be broker domain
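The environment-selection convention above can be illustrated with a quick shell sketch (using /tmp instead of the real install directory; the verbosity value is arbitrary):

```shell
# Create a dev environment config in the expected layout
mkdir -p /tmp/config/environments
printf 'verbosity: "3"\n' > /tmp/config/environments/dev.yml

# Netmaker resolves the file as config/environments/$NETMAKER_ENV.yml
NETMAKER_ENV=dev
cat "/tmp/config/environments/${NETMAKER_ENV}.yml"
# → verbosity: "3"
```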

Compose File - Annotated

All environment variables and options are enabled in this file. It is equivalent to running the "full install" from the above section. However, all environment variables are included and set to the default values provided by Netmaker (if an environment variable were left unset, it would not change the installation). Comments are added to each option to show how you might use it to modify your installation.

As of v0.18.0, netmaker uses a STUN server (Session Traversal Utilities for NAT). This provides a tool for communications protocols to detect and traverse NATs that are located in the path between two endpoints. By default, netmaker uses publicly available STUN servers. You are free to set up your own STUN servers and use those to augment/replace the public STUN servers. Update the STUN_LIST to list the STUN servers you wish to use. Two resources for installing your own STUN server are:
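For example, in netmaker.env (the hostnames below are placeholders for your own STUN servers; 3478 is the conventional STUN port):

```shell
# comma-separated host:port pairs; replace with your own STUN servers
STUN_LIST=stun.example.com:3478,stun2.example.com:3478
```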

There are also some environment variables that have been changed, or removed. Your updated docker-compose and .env files should look like this.

version: "3.4"

services:
  netmaker:
    container_name: netmaker
    image: gravitl/netmaker:$SERVER_IMAGE_TAG
    env_file: ./netmaker.env
    restart: always
    volumes:
      - dnsconfig:/root/config/dnsconfig
      - sqldata:/root/data
    environment:
      # config-dependant vars
      # The domain/host IP indicating the mq broker address
      - BROKER_ENDPOINT=wss://broker.${NM_DOMAIN}
      # The base domain of netmaker
      # Address of the CoreDNS server. Defaults to SERVER_HOST
      # Overrides SERVER_HOST if set. Useful for making HTTP available via different interfaces/networks.

  netmaker-ui:
    container_name: netmaker-ui
    image: gravitl/netmaker-ui:$UI_IMAGE_TAG
    env_file: ./netmaker.env
    environment:
      # config-dependant vars
      # URL where UI will send API requests. Change based on SERVER_HOST, SERVER_HTTP_HOST, and API_PORT
      BACKEND_URL: "https://api.${NM_DOMAIN}"
    depends_on:
      - netmaker
    links:
      - "netmaker:api"
    restart: always

  caddy:
    image: caddy:2.6.2
    container_name: caddy
    env_file: ./netmaker.env
    restart: unless-stopped
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./certs:/root/certs
      - caddy_data:/data
      - caddy_conf:/config
    ports:
      - "80:80"
      - "443:443"

  coredns:
    container_name: coredns
    image: coredns/coredns
    command: -conf /root/dnsconfig/Corefile
    env_file: ./netmaker.env
    depends_on:
      - netmaker
    restart: always
    volumes:
      - dnsconfig:/root/dnsconfig

  mq:
    container_name: mq
    image: eclipse-mosquitto:2.0.15-openssl
    env_file: ./netmaker.env
    depends_on:
      - netmaker
    restart: unless-stopped
    command: [ "/mosquitto/config/" ]
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./
      - mosquitto_logs:/mosquitto/log
      - mosquitto_data:/mosquitto/data

volumes:
  caddy_data: { } # runtime data for caddy
  caddy_conf: { } # configuration file for Caddy
  sqldata: { }
  dnsconfig: { } # storage for coredns
  mosquitto_logs: { } # storage for mqtt logs
  mosquitto_data: { } # storage for mqtt data
# Email used for SSL certificates
# The base domain of netmaker
# Public IP of machine
# The admin master key for accessing the API. Change this in any production installation.
# The username to set for MQ access
# The password to set for MQ access
# used for HA - identifies this server vs other servers
# Enables DNS Mode, meaning all nodes will set hosts file for private dns settings
# Enable auto update of netclient ? ENUM:- enabled,disabled | default=enabled
# The HTTP API port for Netmaker. Used for API calls / communication from front end.
# If changed, need to change port of BACKEND_URL for netmaker-ui.
# The "allowed origin" for API requests. Change to restrict where API requests can come from with comma-separated
# URLs. ex:-,
# Show keys permanently in UI (until deleted) as opposed to 1-time display.
# Database to use - sqlite, postgres, or rqlite
# The address of the mq server. If running from docker compose it will be "mq". Otherwise, need to input address.
# If using "host networking", it will find and detect the IP of the mq container.
SERVER_BROKER_ENDPOINT="ws://mq:1883" # For EMQX websockets use `SERVER_BROKER_ENDPOINT=ws://mq:8083/mqtt`
# The reachable port of STUN on the server
# Logging verbosity level - 1, 2, or 3
# Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST).
# Change to "off" to turn off.
# If turned "on", Server will not set Host based on remote IP check.
# This is already overridden if SERVER_HOST is set. Turned "off" by default.
# Whether or not to send telemetry data to help improve Netmaker. Switch to "off" to opt out of sending telemetry.
# OAuth section
# "<azure-ad|github|google|oidc>"
# "<client id of your oauth provider>"
# "<client secret of your oauth provider>"
# "https://dashboard.<netmaker base domain>"
# "<only for azure, you may optionally specify the tenant for the OAuth>"
# - URL of oidc provider

Your Caddyfile should look like this.


{
    email YOUR_EMAIL
}

# Dashboard
https://dashboard.NETMAKER_BASE_DOMAIN {
    # Apply basic security headers
    header {
            # Enable cross origin access to *.NETMAKER_BASE_DOMAIN
            Access-Control-Allow-Origin *.NETMAKER_BASE_DOMAIN
            # Enable HTTP Strict Transport Security (HSTS)
            Strict-Transport-Security "max-age=31536000;"
            # Enable cross-site filter (XSS) and tell browser to block detected attacks
            X-XSS-Protection "1; mode=block"
            # Disallow the site to be rendered within a frame on a foreign domain (clickjacking protection)
            X-Frame-Options "SAMEORIGIN"
            # Prevent search engines from indexing
            X-Robots-Tag "none"
            # Remove the server name
            -Server
    }

    reverse_proxy http://netmaker-ui
}

# API
https://api.NETMAKER_BASE_DOMAIN {
        reverse_proxy http://netmaker:8081
}

# MQ
https://broker.NETMAKER_BASE_DOMAIN {
        reverse_proxy ws://mq:8883
}

Available docker-compose files

The default options for docker-compose can be found here:

The following is a brief description of each:


Netmaker offers an EMQX option as a broker for your server. The main configuration changes between mosquitto and EMQX take place in docker-compose.yml, netmaker.env, and the Caddyfile.

You can find the EMQX docker-compose file in the netmaker repo.

You should not need to make any changes to the docker-compose-emqx.yml file. Just download this file, using the command provided below, into the same directory as the netmaker.env file. It will grab information from the netmaker.env file.


In your Caddyfile, the only change you need to make is in the mq block.

# MQ
wss://broker.{$NM_DOMAIN} {
    reverse_proxy ws://mq:8083
}

Basically, just replace the port number on the line reverse_proxy ws://mq:8883 with 8083, the EMQX websocket port number.

In your netmaker.env file, just replace the line SERVER_BROKER_ENDPOINT="ws://mq:1883" with SERVER_BROKER_ENDPOINT=ws://mq:8083/mqtt. Basically just change the port to 8083 and add /mqtt after that.
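The same edit can be sketched with sed against a throwaway copy of netmaker.env (the /tmp path is just for illustration; run the real edit against your actual netmaker.env):

```shell
# Throwaway copy with the original broker endpoint
cat > /tmp/nm-emqx.env <<'EOF'
SERVER_BROKER_ENDPOINT="ws://mq:1883"
EOF

# Point the broker endpoint at the EMQX websocket listener
sed -i 's|^SERVER_BROKER_ENDPOINT=.*|SERVER_BROKER_ENDPOINT=ws://mq:8083/mqtt|' /tmp/nm-emqx.env
cat /tmp/nm-emqx.env
# → SERVER_BROKER_ENDPOINT=ws://mq:8083/mqtt
```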

In your docker-compose.yml file, add /mqtt at the end of this line BROKER_ENDPOINT=wss://broker.${NM_DOMAIN} which results in BROKER_ENDPOINT=wss://broker.${NM_DOMAIN}/mqtt.

- BROKER_ENDPOINT=wss://broker.${NM_DOMAIN}/mqtt
- BROKER_TYPE=emqx
- EMQX_REST_ENDPOINT=http://mq:18083

Then two new lines, - BROKER_TYPE=emqx and - EMQX_REST_ENDPOINT=http://mq:18083, need to be added after the line - BROKER_ENDPOINT=wss://broker.${NM_DOMAIN}/mqtt, as shown above.

If you are using a Netmaker Professional server, you will need to make changes to the netmaker-exporter section in your docker-compose.override.yml file.

  netmaker-exporter:
    container_name: netmaker-exporter
    image: gravitl/netmaker-exporter:latest
    restart: always
    depends_on:
      - netmaker
    environment:
      SERVER_BROKER_ENDPOINT: "ws://mq:8083/mqtt"
      BROKER_ENDPOINT: "wss://broker.nm.${NM_DOMAIN}/mqtt"
      PROMETHEUS_HOST: "https://prometheus.${NM_DOMAIN}"

At this point you should be able to run docker-compose down && docker-compose up -d && docker-compose -f docker-compose-emqx.yml up -d. Your docker logs mq output should look something like this.

Listener ssl:default on started.
Listener tcp:default on started.
Listener ws:default on started.
Listener wss:default on started.
Listener http:dashboard on :18083 started.
EMQX 5.0.9 is running now!

Your server is now running on an EMQX broker. You can view your EMQX dashboard at http://<serverip>:18083/. The sign-in credentials are the EMQX_DASHBOARD__DEFAULT_USERNAME and EMQX_DASHBOARD__DEFAULT_PASSWORD located in your netmaker.env file. This dashboard gives you information about all the MQ activity going on in your netmaker server.

Linux Install without Docker

Most systems support Docker, but some do not. In such environments, there are many options for installing Netmaker. Netmaker is available as a binary file, and there is a zip file of the Netmaker UI static HTML on GitHub. Beyond the UI and Server, you may want to optionally install a database (SQLite is embedded, rqlite or postgres are supported) and CoreDNS (also optional).

Once this is enabled and configured for a domain, you can continue with the below. The recommended server runs Ubuntu 20.04.

Database Setup (optional)

You can run the netmaker binary standalone and it will run an embedded SQLite server. Data goes in the data/ directory. Optionally, you can run PostgreSQL or rqlite. Instructions for rqlite are below.

  1. Install rqlite on your server:

  2. Run rqlite: rqlited -node-id 1 ~/node.1

If using rqlite or postgres, you must change the DATABASE environment/config variable and enter connection details.
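For instance, with rqlite's default HTTP API port (4001), the environment variables might look like this (the hostname is a placeholder; use your own rqlite address):

```shell
DATABASE=rqlite
SQL_CONN=http://rqlite.example.com:4001/
```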

Server Setup (using sqlite)

  1. Get the binary. wget -O /etc/netmaker/netmaker$VERSION/netmaker

  2. Move the binary to /usr/sbin and make it executable.

  3. Create a config file at /etc/netmaker/netmaker.yml

  server: "<YOUR_BASE_DOMAIN>"
  broker: wss://broker.<YOUR_BASE_DOMAIN>
  apiport: "8081"
  apiconn: "api.<YOUR_BASE_DOMAIN>:443"
  masterkey: "<SECRET_KEY>"
  mqpassword: "<YOUR_PASSWORD>"
  mqusername: "<YOUR_USERNAME>"
  serverbrokerendpoint: "ws://mq:1883"
  4. Update YOUR_BASE_DOMAIN and SECRET_KEY, as well as the username and password for mq.

  5. Create your netmaker.service file at /etc/systemd/system/netmaker.service

[Unit]
Description=Netmaker Server

[Service]
ExecStart=/usr/sbin/netmaker -c /etc/netmaker/netmaker.yml

  6. Run systemctl daemon-reload

  7. Check status: sudo journalctl -u netmaker

  8. If any settings are incorrect, such as host or sql credentials, change them under /etc/netmaker/netmaker.yml and then run sudo systemctl restart netmaker

UI Setup

The following uses Caddy as a file server/proxy.

  1. Download and unzip the UI asset files from and put them in /var/www/netmaker

    sudo wget -O /tmp/
    sudo unzip /tmp/ -d /var/www/netmaker

  2. Create config.js in /var/www/netmaker


Proxy / Http server


  1. Install Caddy

  2. You should have a Caddy file from installing caddy. Replace the contents of that file with this configuration.

{
    # ZeroSSL account
    email <YOUR_EMAIL>
}

# Dashboard
https://dashboard.<YOUR_BASE_DOMAIN> {
    header {
        Access-Control-Allow-Origin *.<YOUR_BASE_DOMAIN>
        Strict-Transport-Security "max-age=31536000;"
        X-XSS-Protection "1; mode=block"
        X-Frame-Options "SAMEORIGIN"
        X-Robots-Tag "none"
    }
    root * /var/www/netmaker
    file_server
}

# API
https://api.<YOUR_BASE_DOMAIN> {
    reverse_proxy http://127.0.0.1:8081
}
  3. Start Caddy


You will need an MQTT broker on the host. We recommend Mosquitto. In addition, it must use the mosquitto.conf file. Depending on the version, you will use one of the two files below.

# use this config for versions earlier than v0.16.1

per_listener_settings true

listener 8883
allow_anonymous false
require_certificate true
use_identity_as_username false
cafile /etc/mosquitto/certs/root.pem
certfile /etc/mosquitto/certs/server.pem
keyfile /etc/mosquitto/certs/server.key

listener 1883
allow_anonymous true

Start netmaker, then copy root.pem, server.pem, and server.key from /etc/netmaker to /etc/mosquitto/certs/.

# use this config file for v0.16.1 and later.

per_listener_settings false
listener 8883
allow_anonymous false

listener 1883
allow_anonymous false

plugin /usr/lib/x86_64-linux-gnu/
plugin_opt_config_file /etc/mosquitto/data/dynamic-security.json

Copy dynamic-security.json from /etc/netmaker to /etc/mosquitto/data. Restart netmaker, then restart mosquitto. You can check the status of caddy, mosquitto, and netmaker with journalctl -fu <ONE_OF_THOSE_THREE> to make sure everything is working.

Kubernetes Install

Server Install

This template assumes your cluster uses Nginx for ingress with valid wildcard certificates. If using an ingress controller other than Nginx (ex: Traefik), you will need to manually modify the Ingress entries in this template to match your environment.

This template also requires RWX storage. Please change references to storageClassName in this template to your cluster’s Storage Class.


Replace the NETMAKER_BASE_DOMAIN references with the base domain you would like for your Netmaker services (ui, api, grpc). Typically this will be something like

sed -i 's/NETMAKER_BASE_DOMAIN/<your base domain>/g' netmaker-template.yaml

Now, assuming Ingress and Storage match correctly with your cluster configuration, you can install Netmaker.

kubectl create ns nm
kubectl config set-context --current --namespace=nm
kubectl apply -f netmaker-template.yaml -n nm

In about 3 minutes, everything should be up and running:

kubectl get ingress nm-ui-ingress-nginx

Netclient Daemonset

The following instructions assume you have Netmaker running and a network you would like to add your cluster into. The Netmaker server does not need to be running inside of a cluster for this.

sed -i 's/ACCESS_TOKEN_VALUE/<your access token value>/g' netclient-template.yaml
kubectl apply -f netclient-template.yaml

For a more detailed guide on integrating Netmaker with MicroK8s, check out this guide.

Nginx Reverse Proxy Setup with https

The Swag Proxy makes it easy to generate a valid SSL certificate for the config below. Here is the documentation for the installation.

The following file configures Netmaker as a subdomain. This config is an adaptation from the swag proxy project.


server {
    # Redirect HTTP to HTTPS.
    listen 80;
    server_name *; # Please change to your domain
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name; # Please change to your domain
    include /config/nginx/ssl.conf;

    location / {
        proxy_pass http://<NETMAKER_IP>:8082;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name; # Please change to your domain
    include /config/nginx/ssl.conf;

    location / {
        proxy_pass http://<NETMAKER_IP>:8081;
        proxy_set_header            Host; # Please change to your domain
        proxy_pass_request_headers  on;
    }
}

Nginx Proxy Manager Setup

To use Netmaker with Nginx Proxy Manager, three proxy hosts should be added, one for each subdomain used by netmaker. Each subdomain should have SSL enabled and be configured as follows:

Forward Hostname/IP: netmaker
Forward Port: 8081

Forward Hostname/IP: netmaker-ui
Forward Port: 80

Forward Hostname/IP: netmaker
Forward Port: 50051
Custom Locations:
Add location /
Forward Hostname/IP: netmaker
Forward Port: 50051
Custom config (gear button): grpc_pass netmaker:50051;

The following is a cleaned-up config generated by Nginx Proxy Manager to show how nginx can be configured to support Netmaker. This does not include the necessary SSL configuration.

# ------------------------------------------------------------
# ------------------------------------------------------------
server {
  set $forward_scheme http;
  set $server         "netmaker-ui";
  set $port           80;

  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
    # above file includes:
    #   add_header       X-Served-By $host;
    #   proxy_set_header Host $host;
    #   proxy_set_header X-Forwarded-Scheme $scheme;
    #   proxy_set_header X-Forwarded-Proto  $scheme;
    #   proxy_set_header X-Forwarded-For    $remote_addr;
    #   proxy_set_header X-Real-IP          $remote_addr;
    #   proxy_pass       $forward_scheme://$server:$port$request_uri;
  }
}

# ------------------------------------------------------------
# ------------------------------------------------------------
server {
  set $forward_scheme http;
  set $server         "netmaker";
  set $port           8081;

  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
    # above file includes:
    #   add_header       X-Served-By $host;
    #   proxy_set_header Host $host;
    #   proxy_set_header X-Forwarded-Scheme $scheme;
    #   proxy_set_header X-Forwarded-Proto  $scheme;
    #   proxy_set_header X-Forwarded-For    $remote_addr;
    #   proxy_set_header X-Real-IP          $remote_addr;
    #   proxy_pass       $forward_scheme://$server:$port$request_uri;
  }
}

# ------------------------------------------------------------
# ------------------------------------------------------------
server {
  set $forward_scheme http;
  set $server         "netmaker";
  set $port           50051;

  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto  $scheme;
    proxy_set_header X-Forwarded-For    $remote_addr;
    proxy_set_header X-Real-IP          $remote_addr;
    proxy_pass       http://netmaker:50051;
    grpc_pass netmaker:50051;
  }
}

Highly Available Installation (Kubernetes)

Netmaker comes with a Helm chart to deploy with High Availability on Kubernetes:

helm repo add netmaker
helm repo update


To run HA Netmaker on Kubernetes, your cluster must have the following:

  • RWO and RWX Storage Classes (RWX is only required if running Netmaker with DNS Management enabled).

  • An Ingress Controller and valid TLS certificates. This chart can currently generate ingress for Nginx or Traefik Ingress with LetsEncrypt + Cert Manager. If LetsEncrypt and CertManager are not deployed, you must manually configure certificates for your ingress.

Furthermore, the chart will by default install and use a postgresql cluster as its datastore.

Example Installations:

An annotated install command:

helm install netmaker/netmaker --generate-name \ # generate a random id for the deploy
--set \ # the base wildcard domain to use for the netmaker api/dashboard/grpc ingress
--set replicas=3 \ # number of server replicas to deploy (3 by default)
--set ingress.enabled=true \ # deploy ingress automatically (requires nginx or traefik and cert-manager + letsencrypt)
--set ingress.className=nginx \ # ingress class to use
--set ingress.tls.issuerName=letsencrypt-prod \ # LetsEncrypt certificate issuer to use
--set dns.enabled=true \ # deploy and enable private DNS management with CoreDNS
--set dns.clusterIP= --set dns.RWX.storageClassName=nfs \ # required fields for DNS
--set postgresql-ha.postgresql.replicaCount=2 \ # number of DB replicas to deploy (default 2)

The below command will install netmaker with two server replicas, a coredns server, and ingress with routes of,, and CoreDNS will be reachable at, and will use NFS to share a volume with Netmaker (to configure dns entries).

helm install netmaker/netmaker --generate-name --set \
--set replicas=2 --set ingress.enabled=true --set dns.enabled=true \
--set dns.clusterIP= --set dns.RWX.storageClassName=nfs \
--set ingress.className=nginx

The below command will install netmaker with three server replicas (the default), no coredns, and ingress with routes of,, and There will be one UI replica instead of two and one database instance instead of two. Traefik will look for a ClusterIssuer named “le-prod-2” to get valid certificates for the ingress.

helm3 install netmaker/netmaker --generate-name \
--set --set postgresql-ha.postgresql.replicaCount=1 \
--set ui.replicas=1 --set ingress.enabled=true \
--set ingress.tls.issuerName=le-prod-2 --set ingress.className=traefik

Below, we discuss the considerations for Ingress, Kernel WireGuard, and DNS.


To run HA Netmaker, you must have ingress installed and enabled on your cluster with valid TLS certificates (not self-signed). If you are running Nginx as your Ingress Controller and LetsEncrypt for TLS certificate management, you can run the helm install with the following settings:

  • --set ingress.enabled=true

  • --set ingress.tls.issuerName=<your LE issuer name>

If you are not using Nginx or Traefik and LetsEncrypt, we recommend leaving ingress.enabled=false (default), and then manually creating the ingress objects post-install. You will need three ingress objects with TLS:

  • dashboard.<baseDomain>

  • api.<baseDomain>

  • grpc.<baseDomain>

If deploying manually, the gRPC ingress object requires special considerations. Look up the proper way to route grpc with your ingress controller. For instance, on Traefik, an IngressRouteTCP object is required.

There are some example ingress objects in the kube/example folder.
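As a sketch of what such a manually created ingress object might look like for the dashboard (all names, hosts, and the TLS secret below are hypothetical; adjust to your cluster and chart release):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nm-ui-ingress          # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - dashboard.example.com     # your baseDomain
      secretName: nm-ui-tls         # hypothetical TLS secret
  rules:
    - host: dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: netmaker-ui   # hypothetical service name from the chart
                port:
                  number: 80
```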

Kernel WireGuard

If you have control of the Kubernetes worker node servers, we recommend first installing WireGuard on the hosts, and then installing HA Netmaker in Kernel mode. By default, Netmaker will install with userspace WireGuard (wireguard-go) for maximum compatibility and to avoid needing permissions at the host level. If you have installed WireGuard on your hosts, you should install Netmaker’s helm chart with the following option:

  • --set wireguard.kernel=true


By default, the helm chart will deploy without DNS enabled. To enable DNS, specify with:

  • --set dns.enabled=true

This will require specifying a RWX storage class, e.g.:

  • --set dns.RWX.storageClassName=nfs

This will also require specifying a service address for DNS. Choose a valid ipv4 address from the service IP CIDR for your cluster, e.g.:

  • --set dns.clusterIP=

This address will only be reachable from hosts that have access to the cluster service CIDR. It is only designed for use cases related to k8s. If you want a more general-use Netmaker server on Kubernetes for use cases outside of k8s, you will need to do one of the following:

  • bind the CoreDNS service to port 53 on one of your worker nodes and set the COREDNS_ADDRESS equal to the public IP of the worker node

  • create a private Network with Netmaker and set the COREDNS_ADDRESS equal to the private address of the host running CoreDNS. For this, CoreDNS will need a node selector and will ideally run on the same host as one of the Netmaker server instances.
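Pinning CoreDNS to a particular worker can be done with a node selector. A minimal sketch of such a patch fragment (the hostname below is hypothetical):

```yaml
# fragment of the CoreDNS deployment's pod template
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-1   # hypothetical node name
```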


To view all options for the chart, please visit the README in the code repo here.

Security Settings

In some cases, it is useful to secure your web dashboard behind a firewall so it can only be accessed from certain locations. However, you may not want the API behind that firewall, so that nodes can interact with the network without the heightened security. This can be done in your Caddyfile if you are using Caddy.

For Caddy

In your /root/Caddyfile, look in the Dashboard section for reverse_proxy http://netmaker-ui.

Above that line, add the following:

@blocked not remote_ip <ip1> <ip2> <ip3>
respond @blocked "Nope" 403

Replace the <ip> placeholders with your whitelist IP ranges.
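Put together, the Dashboard block might look like this (the two ranges below are placeholders from documentation-reserved address space):

```
https://dashboard.NETMAKER_BASE_DOMAIN {
    @blocked not remote_ip 203.0.113.0/24 198.51.100.7
    respond @blocked "Nope" 403
    reverse_proxy http://netmaker-ui
}
```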

For Traefik

  1. In the labels section, add the following line:

  2. Then look for this line:


and change it to this:


Replace YOUR_IP_CIDR with the whitelist IP range (can be multiple ranges).

After the changes are made for your reverse proxy, run docker-compose down && docker-compose up -d and you should be all set. You can now keep your dashboard secure and your API available without having to change netmaker-ui ports.

Setup Netmaker on an IPv6-only machine

This is not a guide to adding an overlay network (with IPv6) in Netmaker; that can be found on the Setup page. This guide covers setting up Netmaker on an IPv6-only machine.

About the install script

At the time this document was written, the install script only supports IPv4. For the installation, IPv4 needs to be enabled temporarily anyway.

Add AAAA record for domain name resolution

The Netmaker client communicates with the Netmaker server by domain name. The AAAA record here resolves the domain name to an IPv6 address.

By default, Netmaker works on IPv4, because Docker works on IPv4 by default. After the installation, there are several steps to enable IPv6 for Docker and Netmaker.

Enable IPv6 support for Docker

  1. Add/Edit the configuration file /etc/docker/daemon.json:

  {
    "experimental": true,
    "ip6tables": true
  }
  2. Restart the Docker daemon for your changes to take effect.

sudo systemctl restart docker
  3. Create a new IPv6 network, for example:

docker network create --ipv6 --subnet 2001:0DB8::/112 ip6net

where "ip6net" is the network name and "2001:0DB8::/112" is the network range.

Enable IPv6 support for Netmaker

  1. Edit the docker-compose.yml file and add the following lines at the bottom.

    networks:
      ip6net:
        external: true

  2. In the same docker-compose.yml file, add a networks field to every service.

    networks:
      - ip6net

  3. Run "docker-compose down" and "docker-compose up -d" to restart the Netmaker server.
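Putting the two additions together, a trimmed docker-compose.yml would contain something like this (only the netmaker service is shown as a sketch; the other services get the same networks entry):

```yaml
services:
  netmaker:
    image: gravitl/netmaker:$SERVER_IMAGE_TAG
    networks:
      - ip6net          # attach the service to the IPv6-enabled network

networks:
  ip6net:
    external: true      # created earlier with "docker network create --ipv6 ..."
```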