Bugzilla – Full Text Bug Listing
|Summary:||automatic lossless refresh|
|Product:||ThinLinc||Reporter:||Peter Åstrand <email@example.com>|
|Component:||VNC||Assignee:||Pierre Ossman <firstname.lastname@example.org>|
|Status:||CLOSED FIXED||QA Contact:||Bugzilla mail exporter <email@example.com>|
* Screen areas damaged by JPEG compression should automatically be replaced with copies with no noticeable damage
* "Real" updates should get priority over refresh updates (i.e. you should not get a worse update rate with this change)
* When regions change very often, for example during video playback, the server should not send lossless updates for that region until it has stopped changing
|Bug Depends on:||4915, 7158|
|Bug Blocks:||2751, 5106|
|Attachments:||Prototype patch which avoids ALR if regions are constantly changing|
We do not want to use JPEG unless necessary; we don't want JPEG artefacts with office applications, for example. One idea is to do a lossless refresh automatically after a while. Ideally, the server should only use JPEG compression on suitable subsets of the screen.
Bug 2927 covers automatic selection of encoding (and JPEG is already automatically selected). What we need here is:
- Automatic detection of areas with lots of updates
- Automatic lossless refresh once an area settles down
I have no idea what the detection code looks like, so the time estimate is a very wild guess.
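The detection part described above could be sketched as a sliding-window counter per screen region: a region that receives many updates in a short window is considered "busy" and is kept on lossy JPEG, while quiet regions become candidates for a lossless refresh. This is a minimal illustrative sketch; the class and parameter names are assumptions, not TigerVNC code.

```cpp
#include <cassert>
#include <chrono>
#include <deque>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

// Classify a region as "busy" when it receives at least `threshold`
// updates within the last `window` milliseconds. Busy regions should
// not yet receive a lossless refresh.
class HotRegionDetector {
public:
    explicit HotRegionDetector(
        unsigned threshold = 5,
        std::chrono::milliseconds window = std::chrono::milliseconds(500))
        : threshold_(threshold), window_(window) {}

    void recordUpdate(int regionId, Clock::time_point now = Clock::now()) {
        auto& times = updates_[regionId];
        times.push_back(now);
        trim(times, now);
    }

    bool isBusy(int regionId, Clock::time_point now = Clock::now()) {
        auto it = updates_.find(regionId);
        if (it == updates_.end())
            return false; // never updated: not busy
        trim(it->second, now);
        return it->second.size() >= threshold_;
    }

private:
    // Drop timestamps that have fallen out of the sliding window.
    void trim(std::deque<Clock::time_point>& times, Clock::time_point now) {
        while (!times.empty() && now - times.front() > window_)
            times.pop_front();
    }

    unsigned threshold_;
    std::chrono::milliseconds window_;
    std::unordered_map<int, std::deque<Clock::time_point>> updates_;
};
```

A region that stops receiving updates ages out of the window and automatically becomes eligible for a lossless refresh again.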
*** Bug 4350 has been marked as a duplicate of this bug. ***
This is a server-side-only change.
First attempt available here: https://github.com/CendioOssman/tigervnc/tree/alr
Probably needs more work with regard to tuning the update rate. The work done on bug 5719 and bug 4735 could probably be helpful.
Latest upstream code now respects available bandwidth.
The current implementation does not work very well in the following scenario:
* High bandwidth (~300 MB/s)
* Video playback
In this case, the server will send lossless updates for every video frame, which causes CPU and network usage to rise. The playback experience is also affected (more choppy).
Created an attachment (id=886): Prototype patch which avoids ALR if regions are constantly changing
I've added some diagnostics and determined that the problem is CPU resources, not bandwidth (as suspected). In my test case the server determines that it has 15 ms before the next update, yet it spends between 40 and 45 ms sending the lossless refresh. Of those, only 1-2 ms is spent waiting for things; the rest is CPU time. Not sure how best to adjust the update size so we don't overshoot here. Perhaps hard-code a conservative value for CPU throughput?
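The hard-coded CPU throttle suggested above could look something like this: given the time available before the next real update, cap the number of bytes of lossless refresh we attempt to encode. The constant and function names here are illustrative assumptions, not the values used in the actual fix.

```cpp
#include <cassert>
#include <cstddef>

// Conservative guess at how many bytes the CPU can encode per
// millisecond (assumption for illustration only).
constexpr std::size_t kCpuBytesPerMs = 100 * 1024;

// Cap the refresh size so encoding fits in the available time budget.
// Each update has a minimum cost, so require at least 1 ms of headroom.
std::size_t maxRefreshBytes(unsigned msAvailable) {
    if (msAvailable < 1)
        return 0;
    return kCpuBytesPerMs * static_cast<std::size_t>(msAvailable);
}
```

With a 15 ms budget as in the test case above, this would limit the refresh to roughly 1.5 MB of encoded data rather than letting it run 40-45 ms over.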
A hard-coded throttle to avoid CPU issues seems to do the trick. Some other issues have also been found and fixed:

Refresh size miscalculation
---------------------------
A mixup between pixels and bytes caused the system to over-estimate the appropriate refresh size. That was also part of the reason we got fps drops.

Attempts to send even if there was no room
------------------------------------------
The system tried to send a refresh even if it determined that there were 0 ms until the next "real" update. As each update has a minimum size, this could cause some issues. At least 1 ms needs to be available now.

Avoid unnecessary refreshes
---------------------------
A final version of the attached patch. The idea is to avoid sending a refresh that will be immediately overwritten, as the user will not see it anyway. As we can't tell the future, we approximate this by assuming that an area recently updated will soon see another update. We do the refresh once the area has been stable for a short while (100 ms). That's long enough to catch common cases like video, but short enough to feel "immediate".

Consider high quality JPEG to be "lossless"
-------------------------------------------
It's technically not "lossless" as some bits will vary from the "real" image. But users should not be able to see the difference, so let's try to make use of the extra compression that gives us.

More aggressive refresh when probing bandwidth
----------------------------------------------
The probing of bandwidth relies on the principle of overshooting a bit and seeing if we get delays. But the refresh tries to stay under the current bandwidth estimation, so it never sends enough data to improve the probe. That also means it gets stuck at an estimate much lower than what's actually available. Improve this by telling the refresh system there is more bandwidth as long as we are in the probing stage. Once the bandwidth has been properly determined we go back to a conservative guess.
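The "refresh size miscalculation" fix described above boils down to keeping the units straight: the bandwidth budget is in bytes, but the refresh region is selected in pixels, so the budget must be divided by the bytes-per-pixel of the pixel format. A minimal sketch of the corrected calculation (names and values are illustrative assumptions, not TigerVNC code):

```cpp
#include <cassert>
#include <cstddef>

// How many *pixels* of lossless refresh fit in the available time,
// given a bandwidth estimate in bytes per second. The original bug
// treated the byte budget as a pixel count, over-estimating the
// refresh size by a factor of bytesPerPixel.
std::size_t maxRefreshPixels(std::size_t bytesPerSecond,
                             unsigned msAvailable,
                             unsigned bytesPerPixel) {
    std::size_t budgetBytes = bytesPerSecond * msAvailable / 1000;
    return budgetBytes / bytesPerPixel; // pixels, not bytes
}
```

For example, with 4 MB/s of estimated bandwidth, a 10 ms window and 32-bit pixels, the budget is 40,000 bytes, i.e. 10,000 pixels, a quarter of what the pixels/bytes mixup would have allowed.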
I've tested this in different scenarios and I cannot see any real difference compared to 4.9.0 in bandwidth, CPU usage or perceived performance. I've tried:
* Unlimited bandwidth, no latency
* 4 Mbps, 50 ms
* Unlimited bandwidth, 150 ms
I tested glxgears, some different movies and general UI interaction (e.g. dragging a window around). Some positive effects from bug 4735 and bug 5719 could be seen, but in no case was the experience any worse. As for positive effects from this bug, they could only really be seen once I configured a lower JPEG quality. At that point the experience got much better on the networks with low bandwidth or high latency. Unlike 4.9.0, though, I got a nice, clear picture once the updates calmed down.
> * Screen areas damaged by JPEG compression should automatically be replaced with copies with no noticeable damage

Yup.

> * "Real" updates should get priority over refresh updates (i.e. you should not get a worse update rate with this change)

I cannot see any difference compared to 4.9.0, so this seems to work fine.

> * When regions are changed very often, for example with video playback, the server should not send lossless updates for that region until it has stopped changing

CPU and bandwidth are similar to 4.9.0, so this seems to work.