
FastX Review: Remote Desktop Performance Tested

Feb 10, 2026

Over the past few years, we’ve watched GPU-intensive AI projects drive rapid growth at universities and research labs, with some systems expanding around 18% annually. Among other things, these teams need remote access that’s low-latency, scalable on shared resources, and optimized for remote visualization. FastX has built a name promising exactly that, but it’s hard to know how well it holds up once you move to real multi-user environments.

That’s not to say it can’t perform well. Still, we’d already been providing high‑performance remote Linux access for a decade when its first version launched in 2014, and the FastX reviews we keep getting from HPC centers and academic organizations that moved away from it are mixed at best.

Their own benchmarks are useful for setting expectations, but at Cendio, we believe that what really matters is how administrators and end‑users experience concurrency under daily loads. That’s the angle we’ll use here to evaluate FastX performance and see how it compares to ThinLinc and other remote desktop alternatives.

FastX overview: Features and use cases

What is FastX?

FastX is StarNet’s commercial X11-based remote desktop platform. It was originally built for Linux workstations running CAD and EDA tools and later pushed into HPC and research clusters. In that sense, StarNet shares some DNA with us. We’ve both been in the Linux industry long enough to know that generic remote desktop tools rarely survive contact with real research workloads. As you’ll see, though, their approach is quite different from ours.

[Image: FastX dashboard]

Key features

To be honest, FastX still requires wiring up more pieces than a full remote desktop platform would. That said, it has a few decent capabilities for the right setups, such as:

    • Protocol and rendering: Their proprietary protocol is designed to handle heavy 3D/visualization workloads (similar to how we use VirtualGL) to keep latency manageable on LAN and WAN.

    • Session persistence: This is at the center of our own work. FastX also supports persistent sessions, meaning any long-running simulations or rendering tasks don't die if the session disconnects.

    • Web and native clients: Users can log in through an HTML5 portal or install a desktop client on Windows, macOS, or Linux and connect over SSH.

    • Clustering support: Includes load-balancing for multi-node environments, though it requires configuring separate manager, license, and database components to achieve high availability.

Use cases

Standard VNC struggles the moment you point something like Synopsys or ANSYS at it over typical WAN or VPN connections. So, the same way organizations like Seagate use ThinLinc to enable 500+ engineers to access their CentOS-based environment without latency killing their productivity, FastX aims to work for similar scenarios.

You’ll find it, mainly, in:

    • HPC centers

    • EDA and engineering firms

    • Academic labs

FastX performance testing: Comprehensive review

As much as we’d like to point you to third-party benchmarks, they’re virtually non‑existent. In fact, most reviews come straight from StarNet’s own labs.

At Cendio, we are a small team that prioritizes transparency above all. We’ve dug through community threads, lab docs, and troubleshooting guides, as well as what our customers have told us after running FastX in production, to find out what its actual users think (both the good and the bad).

Responsiveness & user experience

FastX behaves the way you’d expect if you’re coming from SSH + X11. Brown University’s CS department, for example, notes “far less network lag than using X forwarding over regular SSH”. At this point that’s the baseline for any proper remote desktop platform, whether it’s FastX, Open OnDemand, ThinLinc, or any other.

The problem comes with session handling. Illinois Engineering warns that just closing the FastX window can leave sessions hanging and cause login issues later, and Utah’s CHPC documents cases where users have to clean up stale session files or fix their environment before they can get back in.
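The cleanup itself is usually mundane shell work. Here’s a rough sketch of the kind of thing those troubleshooting guides walk users through; the `fastx` process pattern and the cache path are our assumptions, not documented FastX internals, so check your site’s own docs for the real names:

```shell
#!/bin/sh
# Hypothetical stale-session cleanup sketch. The "fastx" process pattern
# and the cache path below are assumptions, not documented FastX paths.

# 1. Look for leftover session processes under your account:
pgrep -u "$USER" -a -f fastx || echo "no leftover fastx processes"

# 2. If a stale session blocks new logins, terminate it:
#    pkill -u "$USER" -f fastx

# 3. Some sites also have users clear cached session state, e.g.:
#    rm -rf "$HOME/.fastx_session_cache"   # hypothetical path
```

None of this is hard, but it’s exactly the kind of manual babysitting a session broker should be doing for you.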

GUI application performance and rendering quality

You’ll come across claims saying FastX is “faster” than anything else on the market. And yes, just like ThinLinc, it can indeed push 3D tools and visualization workloads from GPU‑backed nodes with VirtualGL. On some reseller material you’ll even see a benchmark saying it achieved 106% and 112% of local machine speed in rendering tests.

Now, where’s the independent dataset showing FastX routinely beating a local Linux desktop across real IDEs or scientific tools? Nowhere to be found. The only place those numbers live is StarNet’s own site, and no organization we’ve researched has managed to reproduce anything close in a real multi-user production environment.

Compression effectiveness and quality trade-offs

Here we find the same issue. There are further claims of big bandwidth savings (up to “7x less”), but there’s no independent data comparing FastX head-to-head with modern alternatives in real deployments. The way we see it, it’s safer to treat those numbers as marketing claims and run your own tests.
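Running your own test doesn’t need to be elaborate. A minimal sketch: sample /proc/net/dev on the client while a session is busy, once per candidate product, and compare the numbers yourself. The interface name is an assumption; point IFACE at the NIC actually carrying your remote desktop traffic.

```shell
#!/bin/sh
# Sample /proc/net/dev twice and report the average inbound rate over
# the window. Run on the client while a remote session is active.
IFACE="${IFACE:-lo}"   # assumption: replace with eth0, wlan0, etc.
SECS="${SECS:-5}"

rx_bytes() {
    # Receive-bytes counter for $IFACE (field 2 of its /proc/net/dev line)
    awk -v ifc="$IFACE:" '$1 == ifc { print $2 }' /proc/net/dev
}

START=$(rx_bytes)
sleep "$SECS"
END=$(rx_bytes)

echo "$START $END $SECS" | awk '{ printf "%.1f KiB/s inbound\n", ($2 - $1) / $3 / 1024 }'
```

Five minutes of this under a representative workload tells you more than any vendor percentage.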

Overall FastX strengths

FastX has proven good enough for some environments where the main goal is giving users a quick way into Linux GUIs from a browser, as long as the team is prepared to work with its quirks around session management.

Overall FastX limitations

What we've gathered from our research is that FastX performance depends heavily on how carefully you tune desktops and resource limits. That, coupled with a lack of transparency, makes it hard to predict how it will actually scale until you've already committed to the deployment.

FastX vs ThinLinc: Performance comparison

| Performance Criteria | FastX | ThinLinc |
|---|---|---|
| Desktop responsiveness | Good - feels near-local on strong connections | Excellent - feels local even over slower links |
| Bandwidth efficiency | Moderate - tunable compression | Very efficient, adaptive compression |
| Multi-user scalability | May struggle under heavy load | Proven to scale to thousands of users |
| Session stability | Reliable but may drop under stress | Excellent - persistent, auto-recovers cleanly |
| Graphics performance | Good OpenGL/3D support, may stutter under extreme load | Excellent GPU and visualization performance across the board |
| Setup complexity | Complex - needs tuning and config | Low - simple setup with strong documentation |
| Cost for 50 users | High - commercial per-user license | Cost-effective concurrent model |
| Audio quality | Basic redirection | Full-duplex, multi-device audio |

Areas where ThinLinc outperforms FastX

Superior network performance

FastX does a decent job with its adaptive compression, but when we built ThinLinc over 20 years ago, we wanted to handle the reality of researchers working outside campus. Over the past two decades, we’ve refined a pipeline that combines server-side rendering and hardware acceleration via VirtualGL with smart compression and optimization settings.

[Image: FastX customer review]

There’s the National Energy Research Scientific Computing Center (NERSC), which explicitly recommends ThinLinc for interacting with GUIs and visualization tools on their system, and lists it as the recommended way to run DDT and other X‑heavy tools over the network.

Purdue University tells users the same thing on Anvil and Gilbreth: ThinLinc “works very well over a high latency, low bandwidth, or off‑campus connection”.

We’d also like to thank a user from our community, Andre, who tested ThinLinc from weak clients (including a Raspberry Pi) over constrained links and found it pulled ahead as soon as latency increased or bandwidth narrowed.

Enhanced scalability

As we mentioned earlier, scaling your head node or cluster with FastX is possible, but you’d need to manage quite a few separate parts on top of your existing infrastructure. In contrast, ThinLinc’s master-agent architecture is designed to grow by just adding more agent nodes and letting the built-in load balancer and session broker do their job.
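For context, here’s roughly what that looks like on the ThinLinc side. The hostnames are placeholders, and the file path and parameter name reflect current defaults; check the admin guide for your version.

```ini
# /opt/thinlinc/etc/conf.d/vsmserver.hconf on the master node
[/vsmserver]
# Add an agent hostname here and restart vsmserver; the built-in
# load balancer starts placing new sessions on it.
terminalservers=agent1.example.com agent2.example.com agent3.example.com
```

One list of hostnames, no separate manager, license, or database components to keep in sync.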

Better user experience

[Image: FastX user experience]

While FastX has a decent web‑based workflow, we’re old‑school Linux developers at heart, so ThinLinc adheres closer to the Linux philosophy: do one thing and do it well. We offer native clients for Linux, Windows, and macOS, plus an HTML5 web client. As an administrator, you can choose between a web interface or CLI tools, either of which is fully documented.

Take, for example, Indiana University. We’ve worked with them since 2013 to provide researchers a persistent Linux desktop on top of their large compute clusters. Of course, it doesn’t replace the terminal, but it makes it much easier for young scientists to prep files and visualize data.

Cost-effectiveness

As of December 2025, ThinLinc is licensed per concurrent user session, with a free community license covering up to 10 concurrent users. FastX is currently offering concurrent session licenses as well, but its complexity can drive up indirect costs. It’s also harder to pin down without going through their quote system, and some advanced features for HPC use are tied to the more expensive tier.

Another thing to note is their free version (FreeFastX), which, unlike ThinLinc’s, is strictly limited to personal, single-user use, meaning you can’t use it to pilot a real small-team deployment without paying up front.

Alternative solutions beyond FastX

ThinLinc – Best overall FastX alternative

When you use ThinLinc, you’re getting an enterprise-grade remote desktop server built on top of the best open standards in the industry. Unlike FastX, which is purely proprietary, we wanted to create a platform that combines the reliability of the projects we maintain (like TigerVNC and noVNC) with the security and support HPC and research institutions need.


NoMachine – Performance-focused option

From time to time, we see smaller labs trying NoMachine for remote Linux access, mainly for performance on single-user setups. And to be fair, it can be a bit faster than FastX for lightweight work. Now, once you try to scale it across a cluster or serve multiple users from a centralized Linux node, it gets complicated and expensive.

X2Go – Open-source alternative

We can’t deny X2Go is a decent option if you need a free, quick way to push a lightweight Qt or MATE desktop over SSH. Still, it doesn’t handle heavy graphics or 3D acceleration as well. As one of our users put it: “Our researchers use SSH (sometimes with X11 forwarding) and/or X2Go. But ThinLinc just works better out of the box and gives fewer problems, so especially for the educational part this is really convenient.”

Moving from FastX

Ripping out a core piece of infrastructure like FastX can be overwhelming if you have hundreds of researchers who just want their MATLAB sessions to work. Moving to ThinLinc, though, is pretty simple. You can run it alongside your existing setup, let the performance speak for itself, and migrate at your own pace.

All you need to do is:

    • Download ThinLinc. It takes around 15 minutes. Reach out to our team for a free community license, which gives you access to all the features, then install it on a subset of nodes and have a pilot group run their actual workloads.

    • Prepare your users. Give them the basics: how to get the client, how to log in, and how session persistence works. We provide full documentation and user guides you can hand off directly.

    • Phase the rollout. Start small (maybe one lab or department) and expand as you confirm stability. Once you see ThinLinc handling the load correctly, you can shut down the old FastX service for good.
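Before inviting the pilot group, a quick sanity check on each node is worth the minute it takes. ThinLinc clients connect over SSH, so port 22 is the first thing to verify; the service names in the comment reflect ThinLinc defaults, so confirm them against the admin guide for your version.

```shell
#!/bin/sh
# Minimal pilot-node sanity check before pointing users at it.

listening() {
    # True if some local process listens on TCP port $1
    ss -tln 2>/dev/null | awk -v pat=":$1\$" '$4 ~ pat { ok = 1 } END { exit !ok }'
}

if listening 22; then
    echo "sshd is listening; clients can connect"
else
    echo "port 22 closed: fix sshd before inviting the pilot group"
fi

# On systemd hosts, also confirm the ThinLinc services themselves:
#   systemctl status vsmserver vsmagent
```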

FastX – Final verdict

FastX isn’t a bad tool. It has kept plenty of mid-size labs and engineering teams comfortable for years, and it still does, despite the layers of proprietary opacity typical of other commercial remote desktop software like Citrix and VMware.

As to its performance, we’ve seen it do the job reasonably well for lighter workloads. Whether it hits those "112% speed" claims is another matter entirely. The thing is, the moment you step into real concurrency (50, 100, 500 users hitting the cluster at once) you end up managing the tool almost as much as you manage the actual Linux environment.

We’ve watched that exact scenario play out at more research centers than we can count. Most of them eventually land on ThinLinc because it was built for that scale from the start, and is actually something we’ve been refining with each organization that came our way.

Try ThinLinc free to see how it stacks up against FastX in your own environment.

© 2025 Cendio