Tweaking Starlink - PART I

By Bl@ckbird on Monday 22 November 2021 10:00 - Comments (5)
Category: Networking

Optimizing Starlink for Consumers
When you want to increase your internet speed, you basically have three options:
  • Optimize your existing internet connection.
  • Bond multiple internet connections together.
  • Smart steering of your internet traffic along multiple internet connections.
Options 2 & 3 are more suitable for businesses and I’ll cover them in my next blogpost. I take Starlink as an example, but most techniques and ideas I mention here can also be used with other (high latency) internet and WAN connections.

What is a good internet connection?
The quality of an internet connection can be defined by:
  • Throughput (How many IP packets can you send/receive per second?)
  • Latency (How long do IP packets take to complete their journey?)
  • Jitter (How much variation is there in latency?)
  • Packet-loss (How many packets do not reach their destination and are lost in transit?)
Most gaming, voice and video applications use UDP packets. UDP uses a simple, connectionless communication model: UDP packets are just sent on their way. When packets are dropped, you may lose a video frame, but that’s OK: we humans don’t notice it.

When gaming, a high-latency internet connection can make the experience feel a bit sluggish. Unfortunately, UDP traffic can’t be optimized much, as it is already a lean, efficient protocol.

Latency of Starlink is about 40 to 50 ms, with peaks of up to 100 ms. Packet loss is between 0 and 5%, with peaks of up to 10%.

For comparison: when you use traditional satellite communication, latency is about 600 to 850 ms.
(A satellite in a geostationary orbit is 36,000 km above the Earth’s surface, and a full request/response round trip covers that distance four times.) Latency and throughput of Starlink are quite good, :) but not compared to an average broadband connection.
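As a back-of-the-envelope check (assuming the signal travels at roughly the speed of light, about 300,000 km/s), the light travel time alone already accounts for most of that latency:

\[
  t \approx \frac{4 \times 36{,}000\ \text{km}}{300{,}000\ \text{km/s}} \approx 0.48\ \text{s} = 480\ \text{ms}
\]

The remaining latency is largely modem processing and terrestrial routing overhead.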



Starlink uses radio signals to communicate with satellites at 550 km above the Earth’s surface. Water blocks radio signals very well, so you want to avoid trees (which contain water), poles, buildings or other obstructions that block a clear view of the sky. In this video you can see how.

Latency & Packet loss vs. Throughput
Most applications don’t use UDP but TCP. TCP is connection-oriented: a connection between client and server is established before data can be sent, and every few TCP packets need to be acknowledged to make sure the data has been transferred successfully.

This is OK when latency is low, but when latency is high, it has a significant impact on the maximum throughput you can get. Even a few percent of packet loss makes the situation considerably worse.

With the Mathis Equation, you can calculate how much throughput you can get out of a network (internet) connection. Googling the term gives more information on the topic, for example here.
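In its common simplified form (with MSS the maximum segment size, RTT the round-trip time and p the packet-loss probability), the Mathis Equation states:

\[
  \text{Throughput} \;\le\; \frac{\text{MSS}}{\text{RTT} \cdot \sqrt{p}}
\]

As a rough worked example: with MSS = 1460 bytes, RTT = 50 ms and p = 5%, a single TCP stream tops out at roughly (1460 × 8) / (0.05 × √0.05) ≈ 1 Mbit/s, no matter how fast the underlying link is.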

Optimizing TCP
Though you can’t bend the laws of physics, you can reduce the effects of high latency on TCP traffic (e.g. slow download speeds). To do this, you can replace the standard TCP congestion control algorithm (Reno or CUBIC) with TCP BBR.

You can do this by enabling TCP BBR on a server you control, for example the Wireguard VPS used in the tests below. Another way is using a Shadowsocks-libev proxy server on your VPS.
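For reference, here is a minimal sketch of what enabling BBR on such a Linux VPS could look like (assuming a kernel of 4.9 or newer with the tcp_bbr module available; it mirrors the usual sysctl settings, must run as root, and uses Python purely for illustration):

  # Switch the default TCP congestion control to BBR on a Linux server.
  # Equivalent to: sysctl -w net.core.default_qdisc=fq
  #                sysctl -w net.ipv4.tcp_congestion_control=bbr
  from pathlib import Path

  AVAILABLE = Path("/proc/sys/net/ipv4/tcp_available_congestion_control")
  CURRENT = Path("/proc/sys/net/ipv4/tcp_congestion_control")
  QDISC = Path("/proc/sys/net/core/default_qdisc")

  print("available algorithms:", AVAILABLE.read_text().strip())
  print("current algorithm:", CURRENT.read_text().strip())

  if "bbr" in AVAILABLE.read_text().split():
      QDISC.write_text("fq\n")      # BBR is usually paired with the fq queueing discipline
      CURRENT.write_text("bbr\n")
      print("switched to:", CURRENT.read_text().strip())
  else:
      print("tcp_bbr module not loaded; run 'modprobe tcp_bbr' first")

To make the change survive a reboot, you would add the same two settings to /etc/sysctl.conf.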

High latency reduces the throughput of TCP traffic, but this limit applies per TCP session. You can of course “stack” multiple TCP sessions on top of each other by using a download manager. (Who remembers Download Accelerator Plus? :) )

You can also enable multi-threaded downloads in Google Chrome, by going to:
chrome://flags/#enable-parallel-downloading
and enabling it. This will of course only optimize your downloads; all other traffic will not be optimized.
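The same multi-stream idea can also be scripted by hand for servers that support HTTP range requests. A rough sketch (the URL and stream count are placeholders, and it assumes the Python requests library is installed):

  # Download a file over several parallel TCP connections using HTTP Range
  # requests. Sketch only: assumes the server reports Content-Length and
  # honours Range headers.
  from concurrent.futures import ThreadPoolExecutor
  import requests

  URL = "https://example.com/some-large-file.iso"   # placeholder URL
  STREAMS = 8                                        # number of parallel TCP sessions

  def fetch_range(start: int, end: int) -> bytes:
      return requests.get(URL, headers={"Range": f"bytes={start}-{end}"}, timeout=120).content

  def parallel_download() -> bytes:
      size = int(requests.head(URL, allow_redirects=True, timeout=30).headers["Content-Length"])
      chunk = size // STREAMS
      ranges = [(i * chunk, size - 1 if i == STREAMS - 1 else (i + 1) * chunk - 1)
                for i in range(STREAMS)]
      with ThreadPoolExecutor(max_workers=STREAMS) as pool:
          parts = pool.map(lambda r: fetch_range(*r), ranges)
      return b"".join(parts)

  if __name__ == "__main__":
      data = parallel_download()
      print(f"downloaded {len(data)} bytes over {STREAMS} parallel TCP streams")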

Test Results
To test TCP BBR, I’ve configured a VPS with Wireguard and enabled TCP BBR. I used WANem as a WAN simulator; with WANem you can introduce different levels of latency and packet loss into your traffic. I’ve tested internet throughput under different scenarios with iPerf3:

Different Scenarios:
  • With and without VPN with TCP BBR enabled.
  • Single stream downloads / multi-stream downloads.
  • With 0 and 5% packet loss.
Different Latencies:
  • 0ms latency. (Just the normal internet latency, no additional latency introduced.)
  • 20ms latency. (Minimum latency of Starlink.)
  • 50ms latency. (Average latency of Starlink.)
  • 100ms latency. (Peak value latency of Starlink.)
  • 600ms latency. (Minimum latency of traditional satellite internet in geostationary orbit.)


Notes:
  • All tests were run against an iPerf3 server at iperf.par2.as49434.net
  • My internet connection is limited to 100/100 Mbps. (Fiber to the Home)
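As a rough illustration (not my exact test commands; the stream count and duration are just examples), the single-stream versus multi-stream scenarios can be scripted against that iPerf3 server like this:

  # Compare single-stream and multi-stream TCP throughput with iperf3.
  # Sketch only: requires the iperf3 binary to be installed.
  import json
  import subprocess

  SERVER = "iperf.par2.as49434.net"

  def run_iperf(parallel_streams: int, seconds: int = 30) -> float:
      """Run a reverse (download-direction) iperf3 test and return throughput in Mbps."""
      cmd = [
          "iperf3", "-c", SERVER,          # connect to the iperf3 server
          "-R",                            # reverse: server sends, client receives
          "-P", str(parallel_streams),     # number of parallel TCP streams
          "-t", str(seconds),              # test duration in seconds
          "-J",                            # JSON output for easy parsing
      ]
      result = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
      return result["end"]["sum_received"]["bits_per_second"] / 1e6

  if __name__ == "__main__":
      print(f"single stream: {run_iperf(1):.1f} Mbps")
      print(f"8 streams:     {run_iperf(8):.1f} Mbps")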
Performance Analysis
At the average latency of Starlink, using TCP BBR will increase throughput by between 30 and 100%. When you use multiple TCP streams (e.g. multiple download sessions), throughput will be a little higher still.

When you experience just 5% packet loss, using a VPN with TCP BBR will increase your throughput by 500 to 1700%!

Some Graphs
Downloading Ubuntu Server 20.04.3 LTS:
  • Latency 50 ms. Left: without VPN. Right: with VPN and TCP BBR enabled.
  • Latency 50 ms & 5% packet loss. Left: without VPN. Right: with VPN and TCP BBR enabled.


TL;DR
Although I’m not a fan of public VPN services from a security point of view, in my search for information I stumbled upon ProtonVPN.

They use TCP BBR to optimize your TCP traffic on high-latency internet connections (including Starlink). It’s the easiest way to optimize your Starlink connection without having to set up your own VPN server.

When you want to test ProtonVPN, make sure you get a paid subscription, as the free servers are often oversubscribed. They use Wireguard VPN, and the performance increase should be about the same as mentioned in my test results.

In my next blogpost, I’ll cover optimizing Starlink for businesses.
If you have any questions, leave a comment, or ping me a message. (Pun intended :) )


Comments


By Tweakers user IStealYourGun, Monday 22 November 2021 20:47

Unless you are an employee of evilcorp.inc, a public VPN is fine. They will never guarantee a completely anonymous connection, but at least they have a legal commitment, unlike your ISP or your VPS provider. Most of those companies actively monitor their systems and the traffic. In fact, my ISP even sells anonymous usage data.

Also, the guy is filming it from a hotel room. Unless he is using 4G, that is exactly the kind of location where you should always use a VPN. Not for anonymity, but for protection, because security on those systems is worthless.

By Michiel de Vries, Wednesday 24 November 2021 11:37

I would have added the difference in latency caused by light-speed travel and device overhead. You can't go any faster than light, and this adds a fixed latency to a connection (especially in high orbits).

Another scenario to consider is spiked packet loss: if you lose the complete connection for 5 seconds once every minute, which solution would be better?

By Ewald van Geffen, Wednesday 24 November 2021 17:36

Why is the VPN latency lower than without? That can't be true in practice, because the path from the receiver/ground station to the end destination is always shorter than taking a detour via your VPN server.

By Ewald van Geffen, Wednesday 24 November 2021 17:43

You'd also best place the TCP proxy with BBR locally, so that TCP BBR is used over the sat link and not CUBIC, with BBR only from the internet to the internet.

By Tweakers user Bl@ckbird, Thursday 25 November 2021 15:53

Ewald van Geffen wrote on Wednesday 24 November 2021 @ 17:36:
Why is the VPN latency lower than without? That can't be true in practice, because the path from the receiver/ground station to the end destination is always shorter than taking a detour via your VPN server.
Latency is the same, but the effect it has on the maximum throughput you can get is much smaller when you use TCP BBR.

This link explains it well:
https://atoonk.medium.com...tion-control-84c9c11dc3a9
Ewald van Geffen wrote on Wednesday 24 November 2021 @ 17:43:
You'd also best place the TCP proxy with BBR locally, so that TCP BBR is used over the sat link and not CUBIC, with BBR only from the internet to the internet.
No, that would not work:
TCP uses a congestion window on the sender side to do congestion avoidance.
Please refer to:
https://www.cs.umd.edu/users/suman/docs/711s97/node3.html
Note: the congestion control algorithm used for a TCP session is only locally relevant, so two TCP speakers can use different congestion control algorithms on each side of the same TCP session. In other words: the server (the sender) can enable BBR locally; there is no need for the client to be BBR-aware or to support BBR.
Please again refer to:
https://atoonk.medium.com...tion-control-84c9c11dc3a9
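As an aside: on Linux the congestion control algorithm can even be selected per socket, on the sending side only. A small sketch (assuming a Linux host with the tcp_bbr module loaded; non-root processes may only select algorithms listed in net.ipv4.tcp_allowed_congestion_control):

  # Per-socket congestion control on Linux: only the sender sets it, the
  # peer just sees ordinary TCP.
  import socket

  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
  print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # b'bbr\x00...'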
Michiel de Vries wrote on Wednesday 24 November 2021 @ 11:37:
I would have added the difference in latency caused by light-speed travel and device overhead. You can't go any faster than light, and this adds a fixed latency to a connection (especially in high orbits).

Another scenario to consider is spiked packet loss: if you lose the complete connection for 5 seconds once every minute, which solution would be better?
That would add another 40 scenarios (80 in total) to the test. In the end, the end-to-end latency between client and server determines the overall application performance. Whether it is Starlink or a transoceanic cable, the effect latency has on the maximum throughput you can get is the same.

TCP BBR only helps when packet loss is limited. When packet loss is 100% for 5 whole seconds, that connection is just gone. (It will time out.)

When you have a secondary internet connection available (e.g. xDSL or 4G/LTE), you can dynamically switch over with something like SD-WAN. (Please see my second post for that.) But a secondary internet connection and SD-WAN solutions are business solutions and not affordable for consumers.

[Comment edited on Thursday 25 November 2021 16:12]

