tl;dr The code for the complete Unifi setup is available in the niels-s/unifi-terraform-example repo
This post is part of a small series; go and read the previous post to set up the volume mount.
Now in this post, we set up an Nginx proxy with Certbot. Certbot is an open-source tool for automatically using Let’s Encrypt certificates on manually administered websites to enable HTTPS.
I’ve chosen to use a proxy for a couple of reasons:
- configuring certificates for Java applications is a pain in the ass
- more control over the SSL configuration
- smoother integration with Let’s Encrypt certificates
Configure a static IP
Before we get started with the Nginx configuration, we first make sure we have a floating IP assigned to our Droplet. The terminology is perhaps a little unfamiliar, but it’s better known as a static IP. We need this IP so we can point our DNS record to our Droplet, which lets Certbot retrieve a new Let’s Encrypt certificate.
resource "digitalocean_floating_ip" "unifi_controller" {
region = digitalocean_droplet.unifi_controller.region
}
resource "digitalocean_floating_ip_assignment" "unifi_controller" {
ip_address = digitalocean_floating_ip.unifi_controller.ip_address
droplet_id = digitalocean_droplet.unifi_controller.id
lifecycle {
create_before_destroy = true
}
}
Just as with the block volume, the most crucial part is that you configure the digitalocean_floating_ip_assignment. Otherwise, the floating IP is created but not assigned to any Droplet, and you pay extra for any unassigned floating IP.
I won’t show how to configure the DNS record since I manage my DNS zone with Cloudflare. But make sure you configure your DNS records for the next steps to work correctly.
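If you happen to manage your zone with the Cloudflare Terraform provider, an A record pointing at the floating IP could look roughly like the sketch below. The var.cloudflare_zone_id variable and the record name unifi are assumptions for illustration; they’re not part of the example repo.

resource "cloudflare_record" "unifi_controller" {
  zone_id = var.cloudflare_zone_id # hypothetical variable holding your Cloudflare zone ID
  name    = "unifi"                # becomes unifi.<your-zone>, adjust to your own hostname
  type    = "A"
  value   = digitalocean_floating_ip.unifi_controller.ip_address
  proxied = false                  # plain DNS record so Certbot can reach the Droplet directly
  ttl     = 300
}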
Also, don’t forget to link your floating IP to your project; otherwise, your Terraform state ends up out of sync with the state of the DigitalOcean project.
resource "digitalocean_project" "unifi" {
...
resources = [
...
"do:floatingip:${digitalocean_floating_ip.unifi_controller.ip_address}"
]
}
Set up the Docker network
Configuring a user-defined Docker network has a couple of advantages over using the default bridge network. The two main advantages here are
- User-defined bridges provide automatic DNS resolution between containers.
- Containers connected to the same user-defined bridge network automatically expose all ports to each other.
For more information, check the Docker documentation.
data "ignition_systemd_unit" "install_docker_network_unit" {
name = "install_docker_network.service"
enabled = true
content = <<-CONFIG
[Unit]
Description=Install user defined docker network
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker network create unifi-network
CONFIG
}
We chose oneshot as the service type, which indicates that the process is short-lived and that systemd should wait for it to exit before continuing with other units.
The RemainAfterExit=yes directive is used to indicate that the service should be considered active even after the process exits.
We configure the ExecStart directive with the instruction to create a user-defined docker network named unifi-network.
And to make sure the service can properly configure the network, we indicate that it requires the Docker service to be running before it can start.
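One small caveat: docker network create fails when a network with that name already exists, which can happen after a reboot of the Droplet because Docker persists networks on disk. If that bites you, one option is to make the ExecStart idempotent, along these lines (a sketch, not what the repo ships):

[Service]
Type=oneshot
RemainAfterExit=yes
# Only create the network when it doesn't exist yet, so the unit also
# succeeds on boots where the network is already present.
ExecStart=/bin/sh -c '/usr/bin/docker network inspect unifi-network >/dev/null 2>&1 || /usr/bin/docker network create unifi-network'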
Note that this wasn’t my first approach. Initially, I added the command to the ExecStartPre directives of the Nginx and Unifi services, which we configure later on, but because those units start simultaneously, we ended up with two networks with the same name. That behavior was a little strange; in hindsight, the approach above feels more natural. I found some inspiration for it on the Weave Works blog.
For more information on systemd unit files, you can check the excellent documentation provided by DigitalOcean.
Set up the Nginx + Certbot service
I was looking for a setup where I can use Nginx and Let’s Encrypt certificates without too much custom work, and luckily I stumbled on the docker-nginx-certbot Docker image, which nicely integrates the two projects.
data "ignition_systemd_unit" "nginx_proxy_unit" {
name = "nginx_proxy.service"
enabled = true
content = <<-CONFIG
[Unit]
Description= Nginx Proxy with Certbot
After=docker.service
Requires=docker.service
Requires=install_docker_network.service
[Service]
Restart=always
TimeoutStartSec=0
ExecStartPre=/usr/bin/docker pull staticfloat/nginx-certbot
ExecStartPre=-/usr/bin/docker stop nginxproxy
ExecStartPre=-/usr/bin/docker rm nginxproxy
ExecStart=/usr/bin/docker run \
--name nginxproxy \
--network unifi-network \
--restart=no \
-e CERTBOT_EMAIL=${var.certbot_email} \
-p 80:80 \
-p 443:443 \
-p 3478:3478/udp \
-p 6789:6789 \
-p 8080:8080 \
-v /var/log/nginx:/var/log/nginx \
-v /var/log/letsencrypt:/var/log/letsencrypt \
-v /mnt/unifi_controller_data/letsencrypt:/etc/letsencrypt:rw \
-v /mnt/unifi_controller_data/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /mnt/unifi_controller_data/nginx/conf.d:/etc/nginx/user.conf.d:ro \
-v /mnt/unifi_controller_data/nginx/streams.d:/etc/nginx/streams.d:ro \
staticfloat/nginx-certbot
[Install]
WantedBy=multi-user.target
CONFIG
}
Like with the user-defined network, we configure multiple Requires directives: one to make sure Docker is up and running, and another to make sure the user-defined network is created, since we use it to start our Nginx Docker container.
Next, we pull the staticfloat/nginx-certbot image, which unfortunately doesn’t publish tagged images, only latest. That’s inconvenient since I consider using the latest tag a bad code smell, but for now, it will do. We also add ExecStartPre commands to stop and remove any leftover container, just to be sure.
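If you want a bit more reproducibility despite the missing version tags, one option is to pin the image by digest instead. The digest below is a placeholder; you’d look up the real one with docker images --digests after a pull. A sketch:

ExecStartPre=/usr/bin/docker pull staticfloat/nginx-certbot@sha256:<digest>
...
ExecStart=/usr/bin/docker run \
  ...
  staticfloat/nginx-certbot@sha256:<digest>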
Finally, we configure the ExecStart directive with the command it takes to start our Nginx proxy container properly.
--network unifi-network attaches the Nginx container to the user-defined network we created, which is critical for communicating with the Unifi container later on.
-e CERTBOT_EMAIL=${var.certbot_email} sets an environment variable inside the container, which Certbot uses when retrieving new certificates. It’s important to configure this one env var; otherwise, Certbot won’t try to fetch any certificates.
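For reference, var.certbot_email is just a plain Terraform input variable; a declaration along these lines does the job (the exact definition lives in the repo):

variable "certbot_email" {
  type        = string
  description = "E-mail address Certbot registers with Let's Encrypt, used for expiry notices"
}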
-p 80:80 \
-p 443:443 \
-p 3478:3478/udp \
-p 6789:6789 \
-p 8080:8080 \
Next, we configure all the ports we need to expose to be able to run the Unifi controller. You can find a list of the ports and their purpose in the Unifi documentation. Please pay attention to port 3478, since we also specify the protocol: UDP instead of the default TCP.
-v /var/log/nginx:/var/log/nginx \
-v /var/log/letsencrypt:/var/log/letsencrypt \
-v /mnt/unifi_controller_data/letsencrypt:/etc/letsencrypt:rw \
-v /mnt/unifi_controller_data/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /mnt/unifi_controller_data/nginx/conf.d:/etc/nginx/user.conf.d:ro \
-v /mnt/unifi_controller_data/nginx/streams.d:/etc/nginx/streams.d:ro \
Lastly, we configure a couple of volume mounts. The first two entries save the log output from Nginx and Certbot (Let’s Encrypt) onto the file system of the host. The third mount saves the data from Certbot to our volume mount, so that, for example, when we rebuild our Droplet, the certificate is already present and Certbot won’t need to request a new one.
And the following three mounts configure Nginx, but I discuss them in more detail later on.
Configure Nginx
To configure Nginx, I’ve set up three separate files. First, we configure the main Nginx configuration. The main reason I needed to override the default setup is that it didn’t include a separate stream block, which is needed to load-balance the UDP connections.
data "ignition_file" "nginx_conf_file" {
filesystem = data.ignition_filesystem.unifi_controller_data_mount.name
path = "/nginx/nginx.conf"
mode = 0644
content {
content = <<-CONFIG
...
stream {
include /etc/nginx/streams.d/*.conf;
}
CONFIG
}
You can find the full configuration file in the repo.
Next, we configure the HTTP servers in a dedicated file. We use a dedicated file because the Certbot integration scripts of the image monitor /etc/nginx/conf.d/*.conf and make sure the needed certificates are requested from Let’s Encrypt.
We configure three servers:

- redirecting plain HTTP to HTTPS (port 80), sketched right after this list
- the main Unifi UI served over HTTPS (port 443)
- communication between Unifi devices and the controller (port 8080)
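As a taste of what these server blocks look like, here’s a minimal sketch of the plain-HTTP redirect. Treat it as illustrative; the real file in the repo also has to cooperate with the ACME challenge handling of the docker-nginx-certbot image.

server {
    listen 80;
    server_name ${var.hostname};

    # Send all plain HTTP traffic to the HTTPS server.
    return 301 https://$host$request_uri;
}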
Let’s go over the critical parts in this configuration:
...
ssl_certificate /etc/letsencrypt/live/${var.hostname}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/${var.hostname}/privkey.pem;
...
As you can see, the ssl_certificate and ssl_certificate_key are stored in a hostname-specific location. You must use an FQDN as the hostname because it is used to request the SSL certificate.
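The hostname itself comes from another input variable, and since Certbot requests the certificate for exactly this value, it has to be the full domain name rather than a short host name. Something like:

variable "hostname" {
  type        = string
  description = "Fully qualified domain name the controller is served on, e.g. unifi.example.com"
}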
The Unifi Controller uses WebSockets to provide status updates to the dashboard, and this WebSocket communication also happens over port 443. To make the WebSocket connections work, it is critical that you apply the following configuration.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    ...

    location / {
        ...
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
This Nginx blog post goes into more detail on how to upgrade the connections.
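To put the pieces together, the proxied location could look roughly like the sketch below. The upstream name unifi and port 8443 (the controller’s own HTTPS port) are assumptions on my part; the name has to match whatever you call the Unifi container on the unifi-network, and the real configuration is in the repo.

location / {
    # The container name doubles as a DNS name on the user-defined docker network.
    proxy_pass https://unifi:8443;

    # Needed for the WebSocket upgrade to work.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
}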
Also, a little precaution on the ignition_file paths: these are relative to the filesystem that’s being used, as you can see for the Nginx configuration.
data "ignition_file" "nginx_conf_file" {
filesystem = data.ignition_filesystem.unifi_controller_data_mount.name
path = "/nginx/nginx.conf"
...
}
So this file is saved at /mnt/unifi_controller_data/nginx/nginx.conf!
PS: Don’t forget to analyze your SSL configuration with the SSL Labs test because these settings are always evolving.
Check out the original Nginx Proxy for Unifi
I’ve only highlighted some small code snippets, but the complete code for the Nginx proxy setup can be found on GitHub. The code for the complete Unifi setup is available in the niels-s/unifi-terraform-example repo.
This post is part of a small series; go and read the next post to set up the Unifi Controller.