
Setting up ThinLinc on an AWS EC2 instance behind Traefik and Authelia

Jul 04, 2024
Written by: James Freeman

We are thrilled to welcome James Freeman as a guest blogger! The topic of this blog was decided by a vote on the ThinLinc Community.

James brings over two decades of experience in the technology sector, with a specialized focus on Ansible and Linux systems. He is the co-author of “Mastering Ansible” and has shared his extensive knowledge at numerous industry events, including AnsibleFest.


Thank you for reading this document – it’s a pleasure to be writing as a guest blogger for Cendio! This document, as voted for by the community on the forum, is an opinionated guide to setting up ThinLinc on an EC2 instance (on AWS), behind Traefik and Authelia.

I personally like opinionated documents – they form a complete worked example of how to do something that you can follow through yourself from start to finish to produce a working configuration, even if you have no prior experience. Granted, not everyone will want or use the exact configuration I’m creating here, and that’s fine – I hope that this guide will help you with your own project, and that you’ll be able to take what you need from it, learning a few things along the way.


The first question is: what are we trying to achieve here, and why? Since discovering ThinLinc, I have really come to appreciate it for many reasons, and I am finding that my clients love it too.

The setup we’re going to create here was actually born out of a personal requirement – to have a low-cost, cloud-based virtual desktop that I could log into from anywhere without any special apps or devices. This comes in particularly handy when I’m on guest WiFi networks, some of which I’ve found are incredibly restrictive when it comes to the ports they allow. If you’ve used ThinLinc before, you’ll know that it has an excellent HTML5 interface which you can use on any device including your phone or tablet. However, the downsides were that it runs on port 300/tcp (a non-standard port that I’ve found myself firewalled off from), and that, as this is going in front of something that I want to be highly secure, I have a personal preference to add MFA to the setup.

Thus, the goals I set out to achieve were:

  • Cloud-based “virtual desktop”
  • ThinLinc Web UI is accessible over the standard HTTPS port
  • TLS encryption with the option to integrate LetsEncrypt
  • Minimal to no changes to the default ThinLinc configuration

These are going to take some work, so that’s as far as we’ll take things in this article. However, you would be forgiven for thinking, “Well, ThinLinc already has a Web UI – why don’t I just change the port from 300 to 443? Or indeed set up an iptables redirect from port 300 to 443?”

These are entirely valid options – as I mentioned before, this is an opinionated way of doing things and there’s no right or wrong here. However, the goal of this initial piece of work is to build a framework so that we can:

  • Easily integrate LetsEncrypt so we don’t have to deal with validation errors in our web browser
  • Access via FQDN (required for removal of TLS errors)
  • Enable MFA in front of the ThinLinc login to protect our sensitive data
  • Share the HTTPS port with other web services in future (for me this is a big requirement – there are so many useful services you can self-host, and I find that a lot of the guest Wi-Fi that I use allows port 443, but blocks other “non-standard” ports)

As mentioned in the original set of goals, another big driver is to be able to do this without making significant changes to the ThinLinc configuration. We could of course heavily customise it, and again this isn’t wrong – but I find in the field that doing this becomes problematic when the time comes to upgrade – the closer you’ve kept your services to the developer’s intended configuration, the easier the upgrade path generally is.

With all that out of the way, let’s get into building this framework.

Pre-requisites and Assumptions

This document assumes you have a working knowledge of Linux and the shell, and are comfortable with setting up EC2 in AWS. In addition, you will need to complete the following pre-requisite steps before following this guide:

  1. Although you don’t have to follow this guide on AWS, I will provide AWS CLI commands to create the instance. If you want to follow this guide to the letter, you will need an AWS account (I’ll make sure everything I do can fit into the Free Tier)
  2. You will need to install the AWS CLI to run these specific commands – see the AWS CLI installation documentation.
  3. You will need to create administrator credentials for use with the AWS CLI tool – refer to the AWS IAM documentation to set up appropriate credentials.
  4. Configure the AWS CLI with your credentials by running aws configure.
  5. You will need an SSH terminal application for use once you’ve created your EC2 instance – there are no restrictions here on what you can use.
  6. A web browser that supports ThinLinc Web Access – see the requirements in the ThinLinc documentation.


The code and configuration given in this article are intended to provide you with a framework, a foundation to build on, and as such I’ve minimized the configuration as much as possible to keep it concise and simple. This setup is not intended for use “as-is” in a production environment, and you should take steps to further secure it, and validate it against your own security standards.

AWS EC2 instance setup

Before we can perform any of the actual host setup, we need to create an EC2 instance to perform the relevant steps on. Throughout this post, I’ll use the Frankfurt (eu-central-1) region, but this process should work in any region of your choice.

$ export AWS_DEFAULT_REGION=eu-central-1

I’m not going to get into DNS in this post, so the first step is to allocate ourselves an Elastic IP address so that we know where to find our host on the internet. Note that Elastic IPs might incur charges, so you can skip this step if you want to complete everything on the Free Tier, but you may have to reconfigure some things later in the event that your public IP changes:

$ ALLOCATION_ID=$(aws ec2 allocate-address --query 'AllocationId' --output text)

We’re also going to need to create an SSH key so that we can access our new instance:

$ aws ec2 create-key-pair --key-name ThinLincDemo --query 'KeyMaterial' --output text > ThinLincDemo.pem
$ chmod 0400 ThinLincDemo.pem

If you’ve already completed this and want to use your existing key, you’ll need the KeyName of your previously created key, which you can query with:

$ aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName' --output text

With this complete, we’ll now create a Security Group for use with our EC2 instance, that allows SSH and HTTPS only, from anywhere.

$ SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name ThinLincDemoSG --description "Security group for ThinLinc - allow SSH and HTTPS access" --output text)

$ aws ec2 authorize-security-group-ingress --group-id $SECURITY_GROUP_ID --protocol tcp --port 22 --cidr
$ aws ec2 authorize-security-group-ingress --group-id $SECURITY_GROUP_ID --protocol tcp --port 443 --cidr

Now we can spin up our EC2 instance – I’m going to use a t3.micro instance type which is within the Free Tier, and the latest AMI I can find for Ubuntu 22.04 LTS. I realise that Ubuntu 24.04 has been released at the time of writing, but official AMIs are not yet available so we’ll use the previous LTS release. This setup should work the same when Ubuntu 24.04 AMIs are officially released.

I’m using this command to look up the AMI ID that I want – the --owners flag specifies Canonical’s account which is how we’re filtering down to their official images:

$ aws ec2 describe-images --owners 099720109477 --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy*-amd64-server-2024*" --query 'Images[*].[ImageId,Name]' --output table

Once we’ve found our preferred AMI, we can launch the EC2 instance with this command:

$ INSTANCE_ID=$(aws ec2 run-instances --image-id ami-00975bcf7116d087c --count 1 --instance-type t3.micro --key-name ThinLincDemo --security-group-ids $SECURITY_GROUP_ID --query 'Instances[0].InstanceId' --output text)

Finally, if you’re using an Elastic IP address with this instance, you can associate it using this command:

$ aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOCATION_ID

If all has gone according to plan, you should now be able to connect to your instance on its public IP address:

$ ssh -i ThinLincDemo.pem ubuntu@$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
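As an aside, if you’ll be reconnecting frequently, an entry in your local ~/.ssh/config saves retyping the key path each time. The Host alias below is hypothetical, and the HostName is a placeholder for your own Elastic IP:

```
Host thinlincdemo
    HostName <your-elastic-ip>
    User ubuntu
    IdentityFile ~/ThinLincDemo.pem
```

With that in place, ssh thinlincdemo is all you need to type.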

Congratulations – that’s one of the most fundamental parts of this process complete!

Installing ThinLinc

Now that we have an EC2 instance running, let’s get ThinLinc installed. Proceed to the ThinLinc download page on the Cendio website to download the server package.

Then install the package according to the instructions in the ThinLinc documentation.

In brief summary, these are the commands I ran, but you should install the ThinLinc server according to your requirements:

# Copy the ThinLinc server ZIP file to the server
# (the filename below assumes ThinLinc 4.16.0 – adjust to match your download)
$ scp -i ThinLincDemo.pem tl-4.16.0-server.zip ubuntu@$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[*].Instances[*].PublicIpAddress' --output text):

# On the server, install ThinLinc server
$ sudo apt -y install unzip
$ unzip tl-4.16.0-server.zip
$ cd tl-4.16.0-server/
$ sh ./install-server

Next, to ensure that ThinLinc will respond on our public IP address, we’ll need to edit vsmagent.hconf and restart the vsmagent service as follows (this is the only change we’re making to the base ThinLinc configuration):

$ sudo sed -i "s/^agent_hostname=.*/agent_hostname=$(curl -s" /opt/thinlinc/etc/conf.d/vsmagent.hconf
$ sudo systemctl restart vsmagent.service

If we’re actually going to test out our ThinLinc setup at the end of this process, then we need one more fundamental part before we move on to configuring the web access – we need a graphical interface installed on our Linux host! As we’re running a micro instance here, I’m going to install a lightweight one, but you can replace this with your personal preference, especially if you have sufficient system resources.

$ sudo apt -y install xubuntu-desktop

WARNING! Even one of the lighter-weight desktop environments such as XFCE4 will not run well on a t2.micro or t3.micro instance so although you can use this Free Tier instance type for testing purposes, I don’t recommend it if you want to actively use the remote desktop environment – you would definitely want to go for a more powerful instance type.

Finally, you’ll need to set a password for the default user on your EC2 instance as this isn’t done by default for obvious reasons! The default user account is often ec2-user, but on the official Ubuntu images it is ubuntu, so I’ll set my password interactively using the following command:

$ passwd

Install Traefik

Traefik has become one of my favourite ways to proxy my web applications, and I’m fairly sure I’ve only just scratched the surface of what it can do. We’re going to install it here using Docker – after all, this is where its strength lies – and the procedure from here on in should, if you use Docker, be almost identical regardless of which flavour of Linux you use. This is the beauty of running applications in containers, after all.

I’m going to install Docker CE here. We can install Docker using Docker’s convenience script and a few simple commands as follows:

$ curl -fsSL -o
$ sh
$ sudo gpasswd -a $USER docker

You will need to log out and back into your SSH session to pick up the group change and run docker commands using your unprivileged user account.

Once you’ve done this, it’s time to start building our Traefik configuration. The Traefik configuration is divided into two parts:

  1. The static configuration – this is the startup configuration, and is read once on Traefik startup.
  2. The dynamic configuration – this configuration is, as it sounds, dynamic, and can be sourced from both a plain text configuration file and providers such as Docker. The latter is incredibly powerful because you can start up a Docker container after Traefik, and its routing/proxy configuration will be added to Traefik at runtime without impacting any of the other services it is already providing.

In our setup, we’re going to need both, and also two forms of dynamic configuration as certain configuration directives can only be read from a plain-text configuration file at this time. Don’t worry too much about these concepts – all will become clear as we build out our configuration.

To get started, let’s create a directory structure to contain our configuration files.

$ mkdir -p ~/traefik/{traefik-config,authelia-config}
$ cd ~/traefik

Now we’re going to create a Docker network on which to run our Traefik-related containers. This is a one-off command and only needs to be run once when you’re setting up a new host.

$ docker network create traefik_public

As we’re working with self-signed TLS certificates at this stage, we also need to create a new self-signed certificate and private key for Traefik to use on our public-facing endpoint. Note that I’m setting some subject details in the certificate, but these aren’t really necessary at this stage as we’re not setting up DNS in this example.

$ openssl req -x509 -newkey rsa:2048 -keyout traefik-config/tldemo.key -out traefik-config/tldemo.crt -days 3560 -nodes -subj "/C=UK/ST=London/O=tldemo/"
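If you want to sanity-check the certificate Traefik will serve, you can run the same command against a throwaway path and inspect the result – a quick check, not part of the setup itself:

```shell
# Generate a throwaway self-signed pair the same way as above (in /tmp so we
# don't touch the real traefik-config directory)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/tldemo.key -out /tmp/tldemo.crt \
  -days 3560 -nodes -subj "/C=UK/ST=London/O=tldemo"
# Print the subject and expiry so you can confirm the details took effect
openssl x509 -in /tmp/tldemo.crt -noout -subject -enddate
```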

Docker Compose Configuration

With the groundwork complete, let’s start building out our Docker configuration. I’m going to use Docker Compose for this example, as it provides a nice, easy-to-read definition of your container configuration which you can start, stop, and debug with simple commands, and which you can also commit to version control.

Let’s start with the following block of the file – we’ll break it down into chunks to help you understand what we’re creating:

$ cat <<EOF > docker-compose.yml
services:
  traefik:
    image: traefik:v3.0
    restart: unless-stopped
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
      - target: 8080
        published: 8080
        protocol: tcp
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik-config:/etc/traefik
    networks:
      - traefik_public
EOF

As you can see (if you’ve not come across one before), Docker Compose files are YAML-based. Breaking it down at a high level, this segment of the file is telling Docker the following:

  • We’re going to launch a new Docker Service called traefik which will use the traefik:v3.0 image from Docker Hub.
  • Publish ports 80, 443 and 8080/tcp on the host – thus Traefik is going to act as our web endpoint
    • Port 80 is blocked by our Security Group, but I’ve included it here for completeness. You might want to allow this if you are going to add HTTP to HTTPS redirection to this configuration.
    • Port 8080 is used for the Traefik management interface – for security reasons it makes sense to not open this port in our Security Group, but you can use SSH tunnelling to access the management interface if you want to see or troubleshoot your running configuration.
  • Mount the Docker socket file to the running container – this enables Traefik to read the dynamic configuration provided by other Docker Services.
  • Mount the traefik-config local directory we created above to /etc/traefik/ inside the container – part of our dynamic configuration will live here.
  • Connect the container to the traefik_public Docker network we created earlier.

With the fundamentals completed, we can now add our static configuration to the service definition – these take the form of command line switches we’re passing to Traefik:

$ cat <<EOF >> docker-compose.yml
    command:
      - --global.checkNewVersion=true
      - --global.sendAnonymousUsage=true
      - --api.dashboard=true
      - --api.insecure=true
      - --entryPoints.http.address=:80
      - --entryPoints.https.address=:443
      - --entryPoints.http.http.redirections.entryPoint.to=https
      - --entryPoints.http.http.redirections.entryPoint.scheme=https
      - --entryPoints.https.http.tls.certResolver=main
      - --providers.docker.endpoint=unix:///var/run/docker.sock
      - --providers.file.filename=/etc/traefik/config.yml
      - --experimental.plugins.subfilter.version=v0.5.0
      - --log.level=INFO
EOF

These configuration options tell Traefik to:

  • Check for new versions, and also send anonymous usage stats back to the publisher. Feel free to set these to false if you wish.
  • Enable the management interface on port 8080 and allow anonymous access – set the api.* parameters to false to disable this.
  • Define web entrypoints for HTTP and HTTPS on their respective normal port numbers.
  • Configure two dynamic configuration providers – the Docker socket file we mounted earlier, and the config.yml file which we’re going to create separately.
  • Add a plugin called subfilter to Traefik
  • Set the log level to INFO – turn this up to DEBUG if you’re experiencing issues.

The final piece of this service configuration is to define some service labels – this is how dynamic configuration is provided to the Docker daemon, to be read through its socket file:

$ cat <<EOF >> docker-compose.yml
    labels:
      - 'traefik.http.middlewares.authelia.forwardAuth.address=http://authelia:9091/api/verify?rd=https://$(curl --silent'
      - 'traefik.http.middlewares.authelia.forwardAuth.trustForwardHeader=true'
      - 'traefik.http.middlewares.authelia.forwardAuth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'
EOF


Although these look complex, all you need to know at this stage is that we are:

  • Telling Traefik where to forward its authentication requests to – we’ll create a separate service called authelia in a minute.
    • Note the shell expansion in this line – as we’re not using DNS, we need to put the external static IP address of the Docker host into the URL. There are a number of ways you can source it, and you can replace this string by hand if you wish.
  • Setting headers to support the authentication process.

The final part of our Docker Compose file looks like this:

$ cat <<EOF >> docker-compose.yml

  authelia:
    image: authelia/authelia
    restart: unless-stopped
    volumes:
      - ./authelia-config:/config
    networks:
      - traefik_public
    labels:
      - "traefik.enable=true"
      - ""
      - "traefik.http.routers.authelia.rule=PathPrefix(\`/authelia\`)"
      - "traefik.http.routers.authelia.entrypoints=https"
      - ""
      - "traefik.http.routers.authelia.service=authelia"

networks:
  traefik_public:
    external: true
EOF

Authelia is an excellent authentication service that you can run in combination with Traefik. Although what we’re doing initially here looks a bit pointless – putting one static login in front of another – the strength of Authelia is that you can build on the basic configuration we’re going to start with here to do things such as integrate with MFA providers, provide TOTP, SAML logins, and integrate with directory services like LDAP. In my full setup, I am using MFA on top of Authelia to provide an extra layer of protection to ThinLinc, again without having to make extensive configuration changes to it or my Linux install.

Thus this part of the file is:

  • Defining a second Docker Service called authelia, which will run the latest version of the authelia/authelia image from Docker Hub
  • Mounting the authelia-config directory we created earlier so that the container can read its configuration
  • Attaching the container to the traefik_public network
  • Adding dynamic configuration for Traefik in the labels section – here we are:
    • Enabling Traefik for this service and telling it to use the traefik_public network.
    • Telling Traefik that Authelia is hosted at https://<your-public-ip>/authelia – this makes it distinct from ThinLinc’s Web UI, and doesn’t overlap with any paths that it uses.
    • Telling Traefik to use its HTTPS entrypoint.
    • Telling Traefik that, behind the scenes, Authelia is listening on port 9091 – it will handle the forwarding for us.

Finally, at the bottom, we need a declaration of the traefik_public network that we created earlier.

Further Traefik dynamic configuration

Once the above is completed, we need to create the configuration files for both Traefik and Authelia. We’ll start with Traefik first. The configuration is a simple YAML file that lives in the directory we created earlier.

Again let’s break this down into chunks so that we can understand what we’re configuring. First off:

$ cat <<EOF > traefik-config/config.yml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /etc/traefik/tldemo.crt
        keyFile: /etc/traefik/tldemo.key
  certificates:
    - certFile: /etc/traefik/tldemo.crt
      keyFile: /etc/traefik/tldemo.key
      stores:
        - default
EOF


In case you were wondering why we’re configuring this here, and not using labels in the Docker Compose file: Traefik SSL certificate stores can only be defined via the file provider at this time.

Although quite verbose, this part of the configuration file is telling Traefik to create a TLS certificate store called default, which will contain our self-signed certificate that we created earlier. This will be served as our default certificate.

Next, we define the following – this is where the magic happens for ThinLinc, to forward our HTTPS traffic to the local Web UI running on port 300:

$ cat <<EOF >> traefik-config/config.yml

http:
  routers:
    thinlinc:
      entryPoints:
        - "https"
      rule: "PathPrefix(\`/\`)"
      middlewares:
        - authelia@docker
        - subfilterPort@file
      service: thinlinc

  serversTransports:
    tlTransport:
      insecureSkipVerify: true

  services:
    thinlinc:
      loadBalancer:
        serversTransport: tlTransport
        servers:
          - url: "https://$(ip -4 addr show docker0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'):300"
        passHostHeader: true
EOF


Again, the configuration is not as scary as it might first look. Here we are:

  • Creating a Traefik router called thinlinc.
    • This listens on the HTTPS entrypoint we created.
    • Hosts the Web UI on the root path of the web server – if we don’t do this, we need to start rewriting URLs, and that’s extra complexity we don’t want at this stage. When you add DNS to this setup, you can start defining subdomains for each service which makes the whole process much easier.
    • Telling Traefik to add two middlewares to this router – one is the authelia service which was defined in our Docker Compose file, and the other is defined later in this file.
  • We’re then telling Traefik to skip verification of the TLS certificate on the internal web server it’s proxying – without this, it won’t work with the self-signed certificate installed on the ThinLinc Web UI by default.
  • Finally we create a service called thinlinc (referenced from the router) which:
    • Sets up a loadbalancer to point to the backend URL of the ThinLinc Web UI.
      • Again note the shell expansion in the URL – we’re using this to dynamically populate the field with the local IP of the docker0 interface, as this is the interface on which Traefik will be able to reach the ThinLinc Web UI.
    • Uses our server transport that tells it to skip TLS verification.
    • Passes Host headers through.
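If you’re curious how that docker0 address lookup works, here’s the same grep applied to a canned line of ip -4 addr show output (the address shown is just an example):

```shell
# A representative line of `ip -4 addr show docker0` output
line="    inet brd scope global docker0"
# The lookbehind (?<=inet\s) keeps only the dotted-quad that follows "inet "
echo "$line" | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
# →
```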

The final part of the configuration file looks like this:

$ cat <<EOF >> traefik-config/config.yml

  middlewares:
    subfilterPort:
      plugin:
        subfilter:
          lastModified: true
          rewrites:
            - regex: ":300"
              replacement: ":443"
EOF

Here we’re defining a new middleware called subfilterPort, which uses the subfilter plugin we loaded in the static configuration. We’re using this to replace any instances of the string :300 with :443 in the host headers – without this, the ThinLinc login will work, but then the process will fail when it tries to redirect to port 300 which it expects to be running on. ThinLinc is unaware of this replacement – it is only made on the external side of the Traefik proxy.
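To make the substitution concrete, here is the equivalent rewrite performed with sed on a made-up redirect URL:

```shell
# The middleware rewrites any ":300" to ":443" on the way out of the proxy;
# this is the same substitution applied to an example URL
echo "https://myhost.example:300/main/" | sed 's/:300/:443/g'
# → https://myhost.example:443/main/
```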

Authelia Configuration

We’re almost ready to run our setup! One piece remains at this stage – the configuration file for Authelia. Let’s now build that up as before (and it’s YAML again so hopefully you’re getting used to it by now!). Let’s start and build it up in sections again:

$ cat <<EOF >authelia-config/configuration.yml
theme: light

# Server settings
server:
  address: 'tcp://:9091/authelia'

# Log settings
log:
  level: debug
  format: text

# Storage configuration
storage:
  encryption_key: 'a_very_important_secret'
  local:
    path: /config/db.sqlite3
EOF


This part of the configuration file is telling Authelia that:

  • It is to use its light theme.
  • The server is to listen on port 9091/tcp, and listen on the paths / (it always listens on this path) and /authelia (the PathPrefix we defined in our Traefik rules earlier).
  • It is to log at the debug level – you can turn this down later if you wish.
  • The server is to store its local configuration in an SQLite database in the folder we mounted earlier.
    • Authelia creates and manages this database for you – you don’t need to create it; just be aware that it’s there.
    • Be sure to change the encryption_key!
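On that last point, any long random string will do for the encryption_key – one simple way to generate a suitable value is:

```shell
# 32 random bytes, hex-encoded: a 64-character secret suitable for encryption_key
openssl rand -hex 32
```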

Now let’s build out the configuration and tell Authelia where to find its user database:

$ cat <<EOF >>authelia-config/configuration.yml

# User information
authentication_backend:
  password_reset:
    disable: true
  file:
    path: /config/users_database.yml
    password:
      algorithm: argon2
      iterations: 1
      memory: 1024
      parallelism: 8
      salt_length: 16
      key_length: 32
EOF


Here we’re telling Authelia to:

  • Read its user database from another local YAML file that we’ll create in a minute
  • Hash passwords using the argon2 algorithm with the parameters given
  • Disable password reset functionality, as we would need to set up e-mail notifications for this to be possible.

Now we’ll set up the access control rules and session settings – this is how Authelia knows what to allow or deny, and how long login sessions last for:

$ cat <<EOF >>authelia-config/configuration.yml

# Access control settings
access_control:
  default_policy: deny
  rules:
    - domain: "$(curl --silent"
      policy: one_factor

# Session settings
session:
  name: authelia_session
  expiration: 1h
  inactivity: 5m
  cookies:
    - domain: "$(curl --silent"
      authelia_url: 'https://$(curl --silent'
EOF


Most of this is fairly self-explanatory, but to ensure clarity:

  • Our default access control policy is to deny all users.
  • For the static IP address we’re using (again note the shell expansion), do not use MFA.
  • Sessions cookies will be created for our public IP (gathered as before).
  • Default timeouts for inactivity and session expiration are set to 5 minutes and 1 hour respectively.

The final chunk of the file looks like this:

$ cat <<EOF >>authelia-config/configuration.yml

# Regulation settings
regulation:
  max_retries: 3
  find_time: 2m
  ban_time: 5m

# Duo API settings
duo_api:
  disable: true

# TOTP settings
totp:
  disable: true

# Notifier settings
notifier:
  disable_startup_check: false
  filesystem:
    filename: '/config/notification.txt'
EOF

These settings are the remaining mandatory ones – mostly we’re setting sensible defaults as we’re not using any form of MFA/TOTP. Note the regulation section, which helps prevent brute-force attacks by banning users for ban_time if max_retries attempts are made within the find_time interval.

Also note that some form of notifier is required, and the simplest one to configure is a flat text file which again will live in our local mount point. This again is created and managed for you.

The very last piece of this puzzle before we can run our setup is to define a user account so that we can log in. I’m going to base mine on the default file that Authelia auto-creates at startup if you don’t otherwise create one, but feel free to create your own users and, obviously, more secure passwords!

$ cat <<'EOF' >authelia-config/users_database.yml
users:
  authelia:
    disabled: false
    displayname: "Test User"
    password: "$argon2id$v=19$m=32768,t=1,p=8$eUhVT1dQa082YVk2VUhDMQ$E8QI4jHbUBt3EdsU1NFDu4Bq5jObKNx7nBKSn1EYQxk"  # Password is 'authelia'
    groups:
      - admins
      - dev
EOF

You can define your own password hash interactively by running the following shell command:

$ docker run -it authelia/authelia:latest authelia crypto hash generate argon2

The above command was sourced from the Authelia documentation, where you can find lots more useful information about creating users and passwords to secure your installation.
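If you generate or edit hashes by hand, you can sanity-check that the string is a well-formed argon2id PHC hash before pasting it into users_database.yml – a quick format check, not a verification of the password itself:

```shell
# The hash from the example users_database.yml above
hash='$argon2id$v=19$m=32768,t=1,p=8$eUhVT1dQa082YVk2VUhDMQ$E8QI4jHbUBt3EdsU1NFDu4Bq5jObKNx7nBKSn1EYQxk'
# A valid argon2id PHC string has version, parameters, salt, and hash fields
echo "$hash" | grep -Eq '^\$argon2id\$v=[0-9]+\$m=[0-9]+,t=[0-9]+,p=[0-9]+\$[A-Za-z0-9+/]+\$[A-Za-z0-9+/]+$' && echo "hash format OK"
# → hash format OK
```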

Running your setup

Congratulations! If you made it here, you’ve created a fully working foundational setup to start proxying and securing your ThinLinc Web UI so that you can run it anywhere! If you want to dive straight in and start it up, simply run the following command from the same location as your docker-compose.yml file we created earlier:

$ docker compose up -d

The -d flag tells Docker to run the services in the background – in this mode, you can exit your terminal session and it will keep running (in fact it will start up even on reboot thanks to the restart: unless-stopped statements in our Docker Compose configuration file). Simply omit this flag if you want to run it interactively, and have the log messages scrolling on the screen (useful for debugging purposes).

If you’ve started it running in the background, you can still access the logs using:

$ docker compose logs -f

The -f flag tells the command to follow (tail) the logs – again incredibly useful for debugging – omit this if you just want to print the log messages up to the current point in time to the console.

Finally, you can shut your services down with the command:

$ docker compose down

That’s it – if all has gone well, you can access your new setup at https://<your-public-ip>. You should see it redirect you to the Authelia login page – enter your login credentials as specified in users_database.yml, and then you’ll get your familiar ThinLinc login page. Log in with your Linux credentials and you should get your desktop session!

On a final note, it probably goes without saying, but you will have noted extensive use of static IP addresses in configuration files. This enabled us to complete this setup without talking about DNS (which can come next!), but be aware that if your public IP address changes – or indeed the IP address of the docker0 interface – you will need to edit all the places in your configuration where the static IP address appears, and then restart the services.

I do hope you’ve found this helpful, and that it enhances your experience of working with ThinLinc!