We are thrilled to welcome James Freeman as a guest blogger! The topic of this blog was decided by a vote on the ThinLinc Community.
James brings over two decades of experience in the technology sector, with a specialized focus on Ansible and Linux systems. He is the co-author of “Mastering Ansible” and has shared his extensive knowledge at numerous industry events, including AnsibleFest.
Thank you for reading - it's a pleasure to be writing as a guest blogger for Cendio! This post, as voted for by the community on the forum, is an opinionated guide to setting up ThinLinc on an EC2 instance (on AWS), behind Traefik and Authelia.
I personally like opinionated documents - they form a complete worked example of how to do something that you can follow through yourself from start to finish to produce a working configuration, even if you have no prior experience. Granted, not everyone will want or use the exact configuration I'm creating here, and that's fine - I hope that this guide will help you with your own project, and that you'll be able to take what you need from it, learning a few things along the way.
The first question is, what are we trying to achieve here, and why? Since discovering ThinLinc, I have come to appreciate it for many reasons - and I'm finding that my clients love it too.
The setup we're going to create here was actually born out of a personal requirement - to have a low-cost, cloud-based virtual desktop that I could log into from anywhere without any special apps or devices. This comes in particularly handy when I'm on guest WiFi networks, some of which I've found are incredibly restrictive when it comes to the ports they allow. If you've used ThinLinc before, you'll know that it has an excellent HTML5 interface which you can use on any device including your phone or tablet. However, the downsides were that it runs on port 300/tcp (a non-standard port that I've found myself firewalled off from), and that, as this is going in front of something that I want to be highly secure, I have a personal preference to add MFA to the setup.
Thus, the goals I set out to achieve were:

- Serve the ThinLinc Web UI over HTTPS on the standard port 443/tcp, so that it's reachable even from restrictive guest networks.
- Put an additional layer of authentication (with the option of adding MFA) in front of it.
- Do all of this with minimal changes to the standard ThinLinc configuration.
These are going to take some work, so that's as far as we'll take things in this article. However, you would be forgiven for thinking, “Well, ThinLinc already has a Web UI - why don't I just change the port from 300 to 443? Or indeed set up an iptables redirect from port 300 to 443?”
These are entirely valid options - as I mentioned before, this is an opinionated way of doing things and there's no right or wrong here. However, the goal of this initial piece of work is to build a framework so that we can:

- Terminate all traffic on the standard HTTPS port, behind a single proxy.
- Layer additional security, such as MFA via Authelia, on top without touching ThinLinc itself.
- Extend the setup later - for example with DNS, proper certificates, or additional services - without reworking what we've built.
As mentioned in the original set of goals, another big driver is to be able to do this without making significant changes to the ThinLinc configuration. We could of course heavily customise it, and again this isn't wrong - but I find in the field that doing this becomes problematic when the time comes to upgrade - the closer you've kept your services to the developer's intended configuration, the easier the upgrade path generally is.
With all that out of the way, let's get into building this framework.
This document assumes you have a working knowledge of Linux and the shell, and are comfortable with setting up EC2 in AWS. In addition, you will need to complete the following pre-requisite steps before following this guide:

- Sign up for an AWS account, if you don't already have one (the Free Tier is sufficient for following along).
- Install the AWS CLI and configure it with credentials for your account.
The code and configuration given in this article are intended to provide you with a framework, a foundation to build on, and as such I've minimized the configuration as much as possible to keep it concise and simple. This setup is not intended for use "as-is" in a production environment, and you should take steps to further secure it, and validate it against your own security standards.
Before we can perform any of the actual host setup, we need to create an EC2 instance to perform the relevant steps on. Throughout this post, I'll use the Frankfurt (eu-central-1) region, but this process should work on any region of your choice.
export AWS_DEFAULT_REGION=eu-central-1
I'm not going to get into DNS in this post, so the first step is to allocate ourselves an Elastic IP address so that we know where to find our host on the internet. Note that these might incur charges so you can skip this step if you want to complete everything on the Free Tier, but you may have to reconfigure some things later in the event that your public IP changes:
ALLOCATION_ID=$(aws ec2 allocate-address --query 'AllocationId' --output text)
We're also going to need to create an SSH key so that we can access our new instance:
$ aws ec2 create-key-pair --key-name ThinLincDemo --query 'KeyMaterial' --output text > ThinLincDemo.pem
$ chmod 0400 ThinLincDemo.pem
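If you want to double-check that the key file is locked down correctly, you can inspect its permissions - a quick sketch using a throwaway file in /tmp rather than the real key:

```shell
# Create a stand-in key file and lock it down like the real one
touch /tmp/ThinLincDemo-sample.pem
chmod 0400 /tmp/ThinLincDemo-sample.pem
# SSH refuses private keys that are group- or world-readable,
# so the permissions should come back as exactly 400
stat -c '%a' /tmp/ThinLincDemo-sample.pem
```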
If you've already completed this and want to use your existing key, you'll need the KeyName of your previously created key, which you can query with:
$ aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName' --output text
With this complete, we'll now create a Security Group for use with our EC2 instance that allows SSH and HTTPS access only, from anywhere.
$ SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name ThinLincDemoSG --description "Security group for ThinLinc - allow SSH and HTTPS access" --output text)
$ aws ec2 authorize-security-group-ingress --group-id $SECURITY_GROUP_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id $SECURITY_GROUP_ID --protocol tcp --port 443 --cidr 0.0.0.0/0
Now we can spin up our EC2 instance - I'm going to use a t3.micro instance type which is within the Free Tier, and the latest AMI I can find for Ubuntu 22.04 LTS. I realise that Ubuntu 24.04 has been released at the time of writing, but official AMIs are not yet available so we'll use the previous LTS release. This setup should work the same when Ubuntu 24.04 AMIs are officially released.
I'm using this command to look up the AMI ID that I want - the --owners flag specifies Canonical's account which is how we're filtering down to their official images:
$ aws ec2 describe-images --owners 099720109477 --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy*-amd64-server-2024*" --query 'Images[*].[ImageId,Name]' --output table
Once we've found our preferred AMI, we can launch the EC2 instance with this command:
$ INSTANCE_ID=$(aws ec2 run-instances --image-id ami-00975bcf7116d087c --count 1 --instance-type t3.micro --key-name ThinLincDemo --security-group-ids $SECURITY_GROUP_ID --query 'Instances[0].InstanceId' --output text)
Finally, if you're using an Elastic IP address with this instance, you can associate it using this command:
$ aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOCATION_ID
If all has gone according to plan, you should now be able to connect to your instance on its public IP address:
$ ssh -i ThinLincDemo.pem ubuntu@$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
Congratulations - that's one of the most fundamental parts of this process complete!
Now that we have an EC2 instance running, let's get ThinLinc installed. Proceed to the download page to download the server package:
https://www.cendio.com/thinlinc/download/
Then install the package according to the instructions here:
https://www.cendio.com/resources/docs/tag/install_install.html
In brief summary, these are the commands I ran, but you should install the ThinLinc server according to your requirements:
# Copy the ThinLinc server ZIP file to the server
$ scp -i ThinLincDemo.pem tl-4.16.0-server.zip ubuntu@$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[*].Instances[*].PublicIpAddress' --output text):

# On the server, install ThinLinc server
$ sudo apt -y install unzip
$ unzip tl-4.16.0-server.zip
$ cd tl-4.16.0-server/
$ sudo sh ./install-server
Next, to ensure that ThinLinc will respond on our public IP address, we'll need to edit vsmagent.conf and restart the vsmagent service as follows (this is the only change we're making to the base ThinLinc configuration):
$ sudo sed -i "s/^agent_hostname=.*/agent_hostname=$(curl -s http://icanhazip.com)/g" /opt/thinlinc/etc/conf.d/vsmagent.conf
$ sudo systemctl restart vsmagent.service
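If you'd like to see what that sed expression does before pointing it at the real file, you can try it on a throwaway copy - here 203.0.113.10 is a placeholder standing in for the curl -s http://icanhazip.com lookup, and the sample file only imitates the relevant line of the real vsmagent.conf:

```shell
# Build a small sample config with the setting we're changing
printf 'agent_hostname=\nagent_port=904\n' > /tmp/vsmagent-sample.conf
# Replace agent_hostname with a placeholder public IP address
sed -i 's/^agent_hostname=.*/agent_hostname=203.0.113.10/g' /tmp/vsmagent-sample.conf
# Show the result - agent_hostname should now carry the placeholder IP
cat /tmp/vsmagent-sample.conf
```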
If we're actually going to test out our ThinLinc setup at the end of this process, then we need one more fundamental part before we move on to configuring the web access - we need a graphical interface installed on our Linux host! As we're running a micro instance here, I'm going to install a lightweight one, but you can replace this with your personal preference, especially if you have sufficient system resources.
$ sudo apt -y install xubuntu-desktop
WARNING! Even one of the lighter-weight desktop environments such as XFCE4 will not run well on a t2.micro or t3.micro instance so although you can use this Free Tier instance type for testing purposes, I don't recommend it if you want to actively use the remote desktop environment - you would definitely want to go for a more powerful instance type.
Finally, you'll need to set a password for the default user on your EC2 instance as this isn't done by default for obvious reasons! The default user account is often ec2-user, but on the official Ubuntu images it is ubuntu, so I'll set my password interactively using the following command:
$ passwd
Traefik has become one of my favourite ways to proxy my web applications, and I'm fairly sure I've only just scratched the surface of what it can do. We're going to install it here using Docker - after all, this is where its strength lies - and the procedure from here on in should, if you use Docker, be almost identical regardless of which flavour of Linux you use. This is the beauty of running applications in containers after all.
I'm going to install Docker CE here. We can install Docker using the convenient get-docker.sh script and a few simple commands as follows:
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh
$ sudo gpasswd -a $USER docker
You will need to log out and back into your SSH session to pick up the group change and run docker commands using your unprivileged user account.
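Once you've logged back in, you can confirm that the session has picked up the new group membership - after re-login, docker should appear in this list:

```shell
# Print the group memberships of the current session;
# after logging back in, 'docker' should be among them
id -nG
```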
Once you've done this, it's time to start building our Traefik configuration. The Traefik configuration is divided into two parts:

1. The static configuration - this is the startup configuration, and is read once on Traefik startup.
2. The dynamic configuration - this configuration is, as it sounds, dynamic and can be sourced from both a plain-text configuration file and providers such as Docker. The latter is incredibly powerful because you can start up a Docker container after Traefik, and its routing/proxy configuration will be added to Traefik at runtime without impacting any of the other services it is already providing.
In our setup, we're going to need both, and also two forms of dynamic configuration as certain configuration directives can only be read from a plain-text configuration file at this time. Don't worry too much about these concepts - all will become clear as we build out our configuration.
To get started, let's create a directory structure to contain our configuration files.
$ mkdir -p ~/traefik/{traefik-config,authelia-config}
$ cd ~/traefik
Now we're going to create a Docker network on which to run our Traefik-related containers. This is a one-off command and only needs to be run once when you're setting up a new host.
$ docker network create traefik_public
As we're working with self-signed TLS certificates at this stage, we also need to create a new self-signed certificate and private key for Traefik to use on our public-facing endpoint. Note that I'm setting a Common Name in the certificate, but this isn't really necessary at this stage as we're not setting up DNS in this example.
$ openssl req -x509 -newkey rsa:2048 -keyout traefik-config/tldemo.key -out traefik-config/tldemo.crt -days 3560 -nodes -subj "/C=UK/ST=London/O=tldemo/CN=tldemo.example.com"
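You can sanity-check the certificate you've generated with openssl - the sketch below creates a throwaway copy in /tmp (so it doesn't touch the real traefik-config directory) and then prints its subject and expiry date:

```shell
# Generate a throwaway certificate like the one above
openssl req -x509 -newkey rsa:2048 -keyout /tmp/tldemo.key -out /tmp/tldemo.crt \
  -days 3560 -nodes -subj "/C=UK/ST=London/O=tldemo/CN=tldemo.example.com" 2>/dev/null
# Print the subject and expiry to confirm the CN and lifetime
openssl x509 -in /tmp/tldemo.crt -noout -subject -enddate
```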
With the groundwork complete, let's start building out our Docker configuration. I'm going to use Docker Compose for this example, as it provides a nice, easy-to-read definition of your container configuration which you can start, stop and debug with simple commands, and which you can also commit to version control.
Let's start with the following block of the file - we'll break it down into chunks to help you understand what we're creating:
$ cat <<EOF > docker-compose.yml
services:
  traefik:
    image: traefik:v3.0
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
      - target: 8080
        published: 8080
        protocol: tcp
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik-config:/etc/traefik
    networks:
      - traefik_public
EOF
As you can see (if you've not come across one before), Docker Compose files are YAML-based. Breaking it down at a high level, this segment of the file is telling Docker the following:
- Create a service called traefik, which will use the traefik:v3.0 image from Docker Hub.
- Publish ports 80, 443 and 8080/tcp on the host - thus Traefik is going to act as our web endpoint.
- Mount the Docker socket read-only, so that Traefik can read dynamic configuration from the Docker daemon.
- Mount the traefik-config local directory we created above to /etc/traefik/ inside the container - part of our dynamic configuration will live here.
- Attach the container to the traefik_public Docker network we created earlier.

With the fundamentals completed, we can now add our static configuration to the service definition - these take the form of command line switches we're passing to Traefik:
$ cat <<EOF >> docker-compose.yml
command:
- --global.checkNewVersion=true
- --global.sendAnonymousUsage=true
- --api.dashboard=true
- --api.insecure=true
- --entryPoints.http.address=:80
- --entryPoints.https.address=:443
- --entryPoints.http.http.redirections.entryPoint.to=https
- --entryPoints.http.http.redirections.entryPoint.scheme=https
- --entryPoints.https.http.tls.certResolver=main
- --providers.docker.endpoint=unix:///var/run/docker.sock
- --providers.docker.watch=true
- --providers.file.filename=/etc/traefik/config.yml
- --providers.file.watch=true
- --experimental.plugins.subfilter.modulename=github.com/DirtyCajunRice/subfilter
- --experimental.plugins.subfilter.version=v0.5.0
- --log.level=INFO
EOF
These configuration options tell Traefik to:
These configuration options tell Traefik to:

- Check for new releases and send anonymous usage statistics - you can set these to false if you wish.
- Serve the dashboard API on port 8080 and allow anonymous access - set the api.* parameters to false to disable this.
- Listen on ports 80 and 443, redirecting all plain HTTP traffic to HTTPS.
- Read dynamic configuration from the Docker socket, and from the config.yml file which we're going to create separately.
- Add the experimental plugin called subfilter to Traefik.
- Log at INFO level - turn this up to DEBUG if you're experiencing issues.

The final piece of this service configuration is to define some service labels - this is how dynamic configuration is provided to Traefik via the Docker provider, read through the socket file we mounted earlier:
$ cat <<EOF >> docker-compose.yml
labels:
- 'traefik.http.middlewares.authelia.forwardAuth.address=http://authelia:9091/api/verify?rd=https://$(curl --silent http://icanhazip.com)/authelia'
- 'traefik.http.middlewares.authelia.forwardAuth.trustForwardHeader=true'
- 'traefik.http.middlewares.authelia.forwardAuth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email'
EOF
Although these look complex, all you need to know at this stage is that we are:

- Defining a forwardAuth middleware called authelia, which sends each request to the Authelia container for verification - we'll define that container under the name authelia in a minute.
- Trusting the forwarded headers from the proxy.
- Passing the Remote-User, Remote-Groups, Remote-Name and Remote-Email headers back in the authentication response.
The final part of our Docker Compose file looks like this:
$ cat <<EOF >> docker-compose.yml
  authelia:
    image: authelia/authelia
    restart: unless-stopped
    volumes:
      - ./authelia-config:/config
    networks:
      - traefik_public
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_public"
      - "traefik.http.routers.authelia.rule=PathPrefix(\`/authelia\`)"
      - "traefik.http.routers.authelia.entrypoints=https"
      - "traefik.http.services.authelia.loadbalancer.server.port=9091"
      - "traefik.http.routers.authelia.service=authelia"

networks:
  traefik_public:
    external: true
EOF

(Note the escaped backticks in the PathPrefix rule - without the backslashes, the shell would treat them as command substitution inside the heredoc.)
Authelia is an excellent authentication service that you can run in combination with Traefik. Although what we're doing initially here looks a bit pointless - putting one static login in front of another - the strength of Authelia is that you can build on the basic configuration we're going to start with here to do things such as integrate with MFA providers, provide TOTP, SAML logins, and integrate with directory services like LDAP. In my full setup, I am using MFA on top of Authelia to provide an extra layer of protection to ThinLinc, again without having to make extensive configuration changes to it or my Linux install.
Thus, in this part of the file we are:

- Creating a second service called authelia, which will run the latest version of the authelia/authelia image from Docker Hub.
- Mounting the authelia-config directory we created earlier so that the container can read its configuration.
- Attaching the container to the traefik_public network.
- Providing its dynamic Traefik configuration in the labels section - here we are:
  - Enabling Traefik for this container, on the traefik_public network.
  - Serving Authelia under the path https://<your-public-ip>/authelia - this makes it distinct from ThinLinc's Web UI, and doesn't overlap with any paths that it uses.
  - Telling Traefik that the Authelia service itself listens on port 9091 - it will handle the forwarding for us.

Finally, at the bottom, we need a declaration of the traefik_public network that we created earlier.
Once the above is completed, we need to create the configuration files for both Traefik and Authelia. We'll start with Traefik first. The configuration is a simple YAML file that lives in the directory we created earlier.
Again let's break this down into chunks so that we can understand what we're configuring. First off:
$ cat <<EOF > traefik-config/config.yml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: /etc/traefik/tldemo.crt
        keyFile: /etc/traefik/tldemo.key
  certificates:
    - certFile: /etc/traefik/tldemo.crt
      keyFile: /etc/traefik/tldemo.key
      stores:
        - default
EOF
In case you were wondering why we're configuring this here, and not using labels in the Docker Compose file, Traefik SSL stores can only be defined via the file provider at this time: https://doc.traefik.io/traefik/https/tls/
Although quite verbose, this part of the configuration file is telling Traefik to create a TLS certificate store called default, which will contain our self-signed certificate that we created earlier. This will be served as our default certificate.
Next, we define the following - this is where the magic happens for ThinLinc, to forward our HTTPS traffic to the local Web UI running on port 300:
$ cat <<EOF >> traefik-config/config.yml
http:
  routers:
    thinlinc:
      entrypoints:
        - "https"
      rule: "PathPrefix(\`/\`)"
      middlewares:
        - authelia@docker
        - subfilterPort@file
      service: thinlinc
  serversTransports:
    tlTransport:
      insecureSkipVerify: true
  services:
    thinlinc:
      loadBalancer:
        serversTransport: tlTransport
        servers:
          - url: "https://$(ip -4 addr show docker0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'):300"
        passHostHeader: true
EOF

(The backticks in the rule are escaped so that the shell doesn't interpret them inside the heredoc.)
Again, the configuration is not as scary as it might first look. Here we are:
- Creating a router called thinlinc, which listens on the https entrypoint, matches all paths, and applies the authelia and subfilterPort middlewares before handing traffic to the service called thinlinc.
- Defining a serversTransport called tlTransport, which skips verification of the self-signed certificate presented by the ThinLinc Web UI.
- Creating a service called thinlinc (referenced from the router) which:
  - Forwards traffic to the ThinLinc Web UI on port 300, at the IP address of the docker0 interface, as this is the interface on which Traefik will be able to reach the ThinLinc Web UI.
  - Passes the original Host header through to ThinLinc.

The final part of the configuration file looks like this:
$ cat <<EOF >> traefik-config/config.yml
  middlewares:
    subfilterPort:
      plugin:
        subfilter:
          lastModified: true
          filters:
            - regex: ":300"
              replacement: ":443"
EOF
Here we're defining a new middleware called subfilterPort, which uses the subfilter plugin we loaded in the static configuration. We're using this to replace any instances of the string :300 with :443 in the host headers - without this, the ThinLinc login will work, but then the process will fail when it tries to redirect to port 300 which it expects to be running on. ThinLinc is unaware of this replacement - it is only made on the external side of the Traefik proxy.
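As a rough illustration of the rewrite - with sed standing in for the plugin, and a made-up hostname - this is the kind of substitution being applied to lines such as ThinLinc's redirect header on their way back through the proxy:

```shell
# A line such as ThinLinc's redirect URL, as it might appear in a response;
# the same ':300' -> ':443' substitution the subfilter middleware performs
echo 'Location: https://tldemo.example.com:300/main/' | sed 's/:300/:443/g'
```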
We're almost ready to run our setup! One piece remains at this stage - the configuration file for Authelia. Let's now build that up as before (and it's YAML again so hopefully you're getting used to it by now!). Let's start and build it up in sections again:
$ cat <<EOF >authelia-config/configuration.yml
theme: light

# Server settings
server:
  address: 'tcp://:9091/authelia'

# Log settings
log:
  level: debug
  format: text

# Storage configuration
storage:
  encryption_key: 'a_very_important_secret'
  local:
    path: /config/db.sqlite3
EOF
This part of the configuration file is telling Authelia that:

- It should use the light theme.
- It should listen on port 9091/tcp, and listen on the paths / (it always listens on this path) and /authelia (the PathPrefix we defined in our Traefik rules earlier).
- It should log in text format at debug level - you can turn this down later if you wish.
- It should keep its state in a local SQLite database - in any real deployment, be sure to change the example encryption_key!

Now let's build out the configuration and tell Authelia where to find its user database:
$ cat <<EOF >>authelia-config/configuration.yml

# User information
authentication_backend:
  password_reset:
    disable: true
  file:
    path: /config/users_database.yml
    password:
      algorithm: argon2
      iterations: 1
      memory: 1024
      parallelism: 8
      salt_length: 16
      key_length: 32
EOF
Here we're telling Authelia to:

- Disable the password reset functionality.
- Read its users from a flat file at /config/users_database.yml - this lives in the local mount point we created earlier.
- Hash passwords using the argon2 algorithm, with the parameters given.
Now we'll set up the access control rules and session settings - this is how Authelia knows what to allow or deny, and how long login sessions last for:
$ cat <<EOF >>authelia-config/configuration.yml

# Access control settings
access_control:
  default_policy: deny
  rules:
    - domain: "$(curl --silent http://icanhazip.com)"
      policy: one_factor

# Session settings
session:
  name: authelia_session
  expiration: 1h
  inactivity: 5m
  cookies:
    - domain: "$(curl --silent http://icanhazip.com)"
      authelia_url: 'https://$(curl --silent http://icanhazip.com)/authelia'
EOF
Most of this is fairly self-explanatory, but to ensure clarity:

- The default policy is to deny all users.
- A rule then permits one-factor (username and password) logins against our public IP address.
- Sessions last for an hour, expire after five minutes of inactivity, and are tracked with a cookie scoped to our public IP address.

The final chunk of the file looks like this:
$ cat <<EOF >>authelia-config/configuration.yml

# Regulation settings
regulation:
  max_retries: 3
  find_time: 2m
  ban_time: 5m

# Duo API settings
duo_api:
  disable: true

# TOTP settings
totp:
  issuer: authelia.com

notifier:
  disable_startup_check: false
  filesystem:
    filename: '/config/notification.txt'
EOF
These settings are the remaining mandatory ones - mostly we're setting sensible defaults as we're not using any form of MFA/TOTP. Note the regulation section, which helps prevent brute force attacks by banning users for ban_time if max_retries attempts are made within the find_time interval.
Also note that some form of notifier is required, and the simplest one to configure is a flat text file which again will live in our local mount point. This again is created and managed for you.
The very last piece of this puzzle before we can run our setup is to define a user account so that we can log in. I'm going to base mine on the default file that Authelia auto-creates at startup if you don't otherwise create one, but feel free to create your own users and, obviously, more secure passwords!
$ cat <<'EOF' >authelia-config/users_database.yml
users:
  authelia:
    disabled: false
    displayname: "Test User"
    password: "$argon2id$v=19$m=32768,t=1,p=8$eUhVT1dQa082YVk2VUhDMQ$E8QI4jHbUBt3EdsU1NFDu4Bq5jObKNx7nBKSn1EYQxk" # Password is 'authelia'
    email: authelia@authelia.com
    groups:
      - admins
      - dev
EOF

(Note the quoted 'EOF' delimiter this time - it stops the shell from trying to expand the $ characters inside the password hash.)
You can define your own password hash interactively by running the following shell command:
$ docker run -it authelia/authelia:latest authelia crypto hash generate argon2
The above command was sourced from https://www.authelia.com/reference/guides/passwords/, and you can find lots more useful information about creating users and passwords there to secure your installation.
Congratulations! If you made it here, you've created a fully working foundational setup to start proxying and securing your ThinLinc Web UI so that you can run it anywhere! If you want to dive straight in and start it up, simply run the following command from the same location as your docker-compose.yml file we created earlier:
$ docker compose up -d
The -d flag tells Docker to run the services in the background - in this mode, you can exit your terminal session and it will keep running (in fact it will start up even on reboot thanks to the restart: unless-stopped statements in our Docker Compose configuration file). Simply omit this flag if you want to run it interactively, and have the log messages scrolling on the screen (useful for debugging purposes).
If you've started it running in the background, you can still access the logs using:
$ docker compose logs -f
The -f flag tells the command to follow (tail) the logs - again incredibly useful for debugging - omit this if you just want to print the log messages up to the current point in time to the console.
Finally, you can shut your services down with the command:
$ docker compose down
That's it - if all has gone well, you can access your new setup at https://<your-public-ip>. You should see it redirect you to the Authelia login page - enter your login credentials as specified in users_database.yml, and then you'll get your familiar ThinLinc login page. Log in with your Linux credentials and you should get your desktop session!
On a final note, you will have noticed the extensive use of static IP addresses in configuration files. This enabled us to complete this setup without talking about DNS (which can come next!), but be aware that if your public IP address changes, or indeed the IP address of the docker0 interface, you will need to edit every place in your configuration where the static IP address appears, and then restart the services.
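For example, you can check what the docker0 extraction in traefik-config/config.yml actually picks up - here a canned sample of `ip -4 addr show docker0` output is piped in, so you can see what the regex matches before comparing it against a live system:

```shell
# Extract the IPv4 address from ip-command-style output; the lookbehind
# matches only the address immediately following 'inet '
echo '    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0' \
  | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
```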
I do hope you've found this helpful, and that it enhances your experience of working with ThinLinc!