Configuring ThinLinc Servers in a Cluster

In this section, we will describe how to configure a ThinLinc cluster with multiple agent servers to allow load-balancing and redundancy.

Note

This section does not address configuration of high availability (HA). For information on configuring your ThinLinc cluster for high availability, see High Availability (HA).

A ThinLinc cluster consists of one master server (or multiple master servers in an HA configuration) with multiple agent servers grouped into subclusters. While ThinLinc in its simplest configuration may be run with both the master and the agent installed on the same machine, running ThinLinc in a cluster configuration offers several advantages:

  1. A cluster configuration allows automatic load-balancing of sessions across multiple agents

  2. Having multiple agents offers redundancy; for example, if one agent goes down or is taken out of service for maintenance, other agents are still available to handle user sessions

  3. A cluster configuration is scalable. Since most of the workload is taken up by the agents and not the master, adding more capacity to your ThinLinc installation is generally as easy as installing one or more new agent servers

Cluster Configuration

When configuring ThinLinc servers as a cluster, both the master server and the agents need to be configured. The master server needs configuration for subclusters (even if there is only one agent), and the agents need to know which server is the master for access control.

The configuration parameter /vsmagent/master_hostname must be set, for each agent that is included in a ThinLinc cluster, to the address of the master server for the cluster. This allows the master to communicate with and control the agent server.
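
As a minimal sketch, assuming the master is reachable at the hypothetical address tlmaster.example.com, the setting in vsmagent.hconf on each agent could look like this:

[/vsmagent]
master_hostname=tlmaster.example.com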

Additionally, the configuration parameter /webaccess/login_page should be set to the URL of the Web Access login page on the master server. This should be done for each agent that is included in the cluster, and is recommended since it ensures that users end up on the correct page after logging out of or disconnecting from Web Access.
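
Continuing the hypothetical example above, and assuming Web Access on the master listens on its default port (300), the setting in webaccess.hconf could look like this:

[/webaccess]
login_page=https://tlmaster.example.com:300/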

Once the master and agents within a cluster are configured, and the vsmagent, vsmserver and tlwebaccess services have been restarted, these ThinLinc servers will then function as a cluster.
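
On a systemd-based system, for example, the restarts could be performed as follows:

# On each agent:
sudo systemctl restart vsmagent tlwebaccess

# On the master:
sudo systemctl restart vsmserver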

Subclusters

A subcluster is a group of agents. At least one subcluster is always active, even in a single-server setup. Each subcluster can serve a specific purpose within the ThinLinc cluster. The dimension used for grouping can be chosen at will and could for example be location, project, or application.

ThinLinc ships with one default subcluster configuration, which is used for creating new sessions for any user. You may define as many subclusters as needed. Each subcluster can be associated with users and with user groups.

To associate a user with a subcluster, use either the users or the groups configuration parameter for the specific subcluster. See Parameters in /vsmserver/subclusters/ for details on these subcluster configuration parameters.
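
As a sketch, assuming a hypothetical subcluster named dev with the hypothetical agent tla03.eu.cendio.com, and assuming that users, like agents, takes a space-separated list, an explicit user association could look like this:

[/vsmserver/subclusters/dev]
agents=tla03.eu.cendio.com
users=alice bob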

If a subcluster has neither user nor group associations configured, it is used as the default subcluster. Users who do not belong to any subcluster will have their sessions created on the default subcluster. If a user is not associated with a subcluster and there is no default subcluster configured, the user will not be able to log on to the ThinLinc cluster.

Load balancing of new sessions is performed using the list of agents defined in the agents parameter within each subcluster. See Stopping new session creation on select agents in a cluster for details on preventing new sessions on some agents.

Note

The subcluster association rules limit the creation of new sessions and do not apply to already existing sessions. Thus, if the subcluster association is changed after session startup, the user is still able to reconnect to a session outside their configured subcluster. However, the next time this user creates a session, it will be created on the configured subcluster.

A subcluster is defined as a folder under the /vsmserver/subclusters configuration folder in vsmserver.hconf on the master. The folder name defines a unique subcluster name.

Here follows an example subcluster configuration with agents grouped by geographic location:

[/vsmserver/subclusters/Default]
agents=tla01.eu.cendio.com tla02.eu.cendio.com

[/vsmserver/subclusters/usa]
agents=tla01.usa.cendio.com tla02.usa.cendio.com
groups=ThinLinc_USA

[/vsmserver/subclusters/india]
agents=tla01.india.cendio.com tla02.india.cendio.com
groups=ThinLinc_India

During the selection process for which subcluster a new session is created on, the following rules apply:

  1. users has precedence over groups. This means that if a user belongs to a group that is associated with subcluster A, and the user is also specified in users for subcluster B, the user's session will be created on subcluster B (see the example after this list).

  2. groups has precedence over the default subcluster. This means that if a user belongs to a group that is associated with subcluster B, the user's session will be created on subcluster B and not on the default subcluster A.

  3. If a user neither belongs to an associated group nor is explicitly specified in users for a subcluster, the new session will be created on the default subcluster.
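
To illustrate rule 1, consider the following sketch, which extends the geographic example above with a hypothetical user alice who is a member of the ThinLinc_USA group but is also listed in users of a separate subcluster:

[/vsmserver/subclusters/usa]
agents=tla01.usa.cendio.com tla02.usa.cendio.com
groups=ThinLinc_USA

[/vsmserver/subclusters/special]
agents=tla03.usa.cendio.com
users=alice

Since users has precedence over groups, new sessions for alice are created on the special subcluster, while sessions for other members of ThinLinc_USA are created on the usa subcluster.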

Note

Try to avoid the following configurations, which result in undefined behavior for subclusters:

  1. Avoid having two subclusters without associated users and groups, i.e. two default subclusters. It is undefined which of them will be used as the default subcluster for users that are not associated with a specific subcluster.

  2. If a user is a member of two user groups which are used for two different subclusters, it is undefined which subcluster the new session will be created on.

  3. If a user is specified in users of two different subclusters, it is undefined which subcluster the new session will be created on.

Keeping agent configuration synchronized in a cluster

When multiple agents have been configured as part of a cluster, it is usually desirable to keep their configurations synchronized. Instead of making the same change separately on each agent, ThinLinc ships with the tool tl-rsync-all, which makes it easy to synchronize configuration changes across all agents in a cluster. See Commands on the ThinLinc Server for more information on how to use this tool.
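
As an illustrative sketch, assuming tl-rsync-all accepts the paths of the files to synchronize (see Commands on the ThinLinc Server for the exact syntax and options), a configuration file could be pushed from the master to the same path on every agent like this:

tl-rsync-all /opt/thinlinc/etc/conf.d/vsmagent.hconf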

The tl-rsync-all command should be run on the master server, since it is the master that knows which agents are in the cluster. For this reason, it is often useful to configure the master server as an agent as well, even if it will not generally be used to host user sessions. This allows the master to be used as a “template” agent, where configuration changes can be made and tested by an administrator before pushing them out to the rest of the agents in the cluster.

An example of how one might implement such a system is to configure the master server as an agent which only accepts sessions for a single administrative user. The steps to do this are as follows:

  1. Configure the master as an agent too. On a ThinLinc master, the vsmagent service should already have been installed during the ThinLinc installation process; make sure that this service is running.

  2. Create an administrative user, for example tladmin. Give this user administrative privileges if required, e.g. sudo access.

  3. Create a subcluster for the master server and associate the administrative user tladmin with it. See the following example:

    [/vsmserver/subclusters/Template]
    agents=127.0.0.1
    users=tladmin
    

    See Subclusters for details on subcluster configuration.

  4. Restart the vsmserver service.

In this way, configuration changes are never made on the agents themselves; rather, the changes are always made on the master server, and then tested by logging in as the tladmin user. If successful, these changes are then distributed to the agents using tl-rsync-all.

Stopping new session creation on select agents in a cluster

When, for example, a maintenance window is scheduled for some agent servers, it is sometimes desirable that no new sessions are started on that part of the cluster. It is possible to prevent new sessions from being started without affecting running sessions.

To stop new session creation on specific agents, those agents need to be removed from the agents configuration variable in the associated subcluster. Once the vsmserver service is restarted on the master server, new user sessions will not be created on the removed agents. Users with existing sessions can continue working normally and users with disconnected sessions will still be able to reconnect.
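
For example, to take the hypothetical agent tla02.eu.cendio.com from the Default subcluster above out of service, remove it from the agents list in vsmserver.hconf on the master:

[/vsmserver/subclusters/Default]
agents=tla01.eu.cendio.com

After the vsmserver service is restarted, new sessions are created only on tla01.eu.cendio.com, while existing sessions on tla02.eu.cendio.com remain reachable.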