A 32-bit Linux distribution, based on GLIBC 2.3.4 or greater. An i686 (or compatible) CPU with MMX and SSE support is required.
A 64-bit Linux distribution, based on GLIBC 2.5.1 or greater. An x86_64 (or compatible) CPU is required.
RPM or dpkg support
Libraries and commands from LSB 4.1, specifically those listed in the Core and Printing modules (except LSB-specific interfaces). In addition, "libX11" is required.
MIT Kerberos runtime libraries (libgssapi_krb5.so.2).
ss from iproute2
Python 2.4 or newer 2.X version
PyGTK 2.10.0 or newer
python-ldap (required when using ThinLinc LDAP tools)
CUPS (Common UNIX Printing System) (only required when using nearest printer or local printers, see Chapter 5, Printer Features)
An SSH (secure shell) server
Accurate time synchronization between all ThinLinc servers
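Some of the requirements above can be sanity-checked from a script. The sketch below is illustrative only and not part of ThinLinc: the helper names are invented here, and it covers just a few of the checks (glibc version, and the presence of the ss and sshd commands); the version numbers are the ones listed above.

```python
# Illustrative sketch: check a few of the requirements listed above.
# Function names are hypothetical, not part of ThinLinc.
import platform
import shutil

def version_at_least(found, minimum):
    """Compare dotted version strings numerically, e.g. '2.10' >= '2.9'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(found) >= to_tuple(minimum)

def check_host():
    """Return a dict mapping requirement descriptions to pass/fail."""
    libc_name, libc_version = platform.libc_ver()
    return {
        "glibc >= 2.3.4": libc_name == "glibc"
                          and version_at_least(libc_version, "2.3.4"),
        "ss from iproute2 available": shutil.which("ss") is not None,
        "SSH server (sshd) available": shutil.which("sshd") is not None,
    }
```

Note that a numeric comparison is needed here: a plain string comparison would incorrectly rank "2.9" above "2.10".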
As long as your platform fulfills the requirements above, ThinLinc should work as expected. As part of the quality assurance work for each release, ThinLinc is tested extensively on a few platforms. For this release of ThinLinc, the list of such platforms is:
Red Hat® Enterprise Linux Server 7
SUSE® Linux Enterprise Server 12
Ubuntu Desktop® 16.04 (64-bit)
Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and Windows Server 2016. Both 32- and 64-bit systems are supported.
The amount of computer resources needed to run a ThinLinc cluster varies greatly with the number of users, the server hardware, the application mix run by the users, and the type of users. The number of servers needed for a specific cluster cannot be estimated from a predefined table of facts. Instead, decisions should be based on benchmarks and experience.
Below, we give some guidance on what resources are needed, based on customer experience. With time and experience from your own cluster and application set, you will work out your own figures.
It is important to remember that the ThinLinc load balancing feature makes it easy to add another server when the need arises. Start out with a number of servers and add more as the load increases.
There are several types of resources needed in a ThinLinc cluster.
About 100MiB of disk space is needed for the software and data that are part of ThinLinc. Each active session also requires a very small amount of space (normally less than 100KiB) for session data and the session log. In addition to that, disk space must be available for the operating system, the applications users run, and logs.
The amount of CPU needed is very hard to estimate, as it depends entirely on the set of applications run by the users, on how active those users are, and on what response times the users will accept. A server that comfortably copes with 100 users running LibreOffice Calc, updating a spreadsheet now and then, will cope with considerably fewer concurrent users if they are accessing internet sites with streaming video.
When ThinLinc is used as a Windows Remote Desktop Server frontend, meaning that the only application run is rdesktop, experience shows the amount of CPU needed is around 50-100MHz per active user.
For a full desktop (KDE or Gnome) with typical office and internet applications (LibreOffice, Firefox, some graphics program) and users visiting multimedia-intensive web pages, the amount of CPU needed is somewhere between 150 and 300MHz per active user.
The CPU figures above are based on experience from customers running Intel Xeon 7140M (Netburst) CPUs. For other types of CPU, the figures should be adjusted accordingly.
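The per-user figures above can be turned into a rough capacity estimate by simple multiplication. The sketch below is only illustrative; the function name is invented here, and the 50-100MHz and 150-300MHz ranges are the ones quoted above for the reference CPU.

```python
# Rough CPU sizing from the per-user figures in this section.
# Illustrative sketch only; adjust figures for your own CPUs.
def cpu_needed_mhz(active_users, mhz_per_user):
    """Total CPU capacity needed, in MHz, for a given user count."""
    return active_users * mhz_per_user

# 100 active full-desktop users at the pessimistic 300MHz figure:
# 100 * 300 = 30000MHz, i.e. roughly ten 3GHz cores' worth of CPU.
full_desktop = cpu_needed_mhz(100, 300)
```

Such an estimate is only a starting point; as noted above, actual capacity should be validated with benchmarks on your own application mix.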
The amount of memory needed, just like the amount of CPU, is highly dependent on the application set and on how active the users are.
When ThinLinc is used as a Windows Remote Desktop Server frontend, with rdesktop being the only application run, experience shows that the amount of memory needed per user is 20-50MiB.
For a full desktop (KDE or Gnome), expect the need for 100-200MiB of memory per user, not including the memory required for individual applications.
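A memory estimate can be sketched the same way. The function below is illustrative only; the 20-50MiB and 100-200MiB base figures come from this section, while the per-user application allowance is a hypothetical parameter you would have to measure yourself, since the base figures exclude application memory.

```python
# Rough memory sizing from the per-user figures in this section.
# Illustrative sketch only; app_mib_per_user is a hypothetical
# allowance for the users' applications, which the base figures
# do not include.
def memory_needed_mib(active_users, mib_per_user, app_mib_per_user=0):
    """Total memory needed, in MiB, for a given user count."""
    return active_users * (mib_per_user + app_mib_per_user)

# 100 full-desktop users at 200MiB each, plus an assumed 300MiB of
# application memory per user: 100 * (200 + 300) = 50000MiB.
total = memory_needed_mib(100, 200, app_mib_per_user=300)
```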