
It's Not The Heat, It's The Latency | FUTR Podcast


There is a lot in the news about SpaceX’s Starlink internet service and its ability to bring high-speed connectivity to remote locations around the world.

While the promise of satellite broadband connections sounds great, they have typically fallen short of what you can get from a wired connection.

One of the big challenges with satellite broadband is latency, and latency is often misunderstood. It is the time it takes a packet to traverse a network, and it can drastically impact your total throughput.

To understand this issue, let’s talk about a couple of major network protocols, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

TCP provides the mechanisms to control the flow of packets across a network. It allows you to recognize when packets get lost and retransmit them. It can identify when the network begins to get congested and throttle back. It does a great many very useful things. The problem is that, in order for all of this to happen, the sender needs to receive an acknowledgment, or ACK packet, from the receiver.

This is where the latency of a network comes in. As the round-trip time between endpoints grows, the amount of data that can flow across a TCP stream goes down. TCP gets hung up waiting for the ACK packets, and the transfer rates drop. This is why someone might put in a 10 Gb connection between Kansas and Jakarta and then be disappointed when they don’t get 10 Gb/s of throughput.
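To get a feel for the math, here is a quick back-of-the-envelope sketch in Python. The window size and round-trip times are illustrative assumptions, not measurements from any real link, but the idea is that a single TCP stream can move at most one window of data per round trip, so its throughput ceiling is roughly the window size divided by the RTT.

# Rough TCP throughput ceiling: at most one window of data per round trip.
# The window size and RTT values below are illustrative assumptions.
def tcp_throughput_ceiling(window_bytes, rtt_seconds):
    """Best-case throughput of a single TCP stream, in bits per second."""
    return window_bytes * 8 / rtt_seconds

window = 64 * 1024  # a classic 64 KB TCP window, no window scaling
for rtt_ms in (1, 20, 70, 600):
    bps = tcp_throughput_ceiling(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {bps / 1e6:6.1f} Mb/s per stream")

Run that and the 10 Gb pipe problem is obvious: at 1 ms of RTT the window supports hundreds of megabits per second, but at 600 ms it supports less than one, no matter how fat the pipe is. Window scaling and parallel streams help, but the round trip is still the gating factor.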

UDP, on the other hand, does not guarantee delivery the way TCP does; it sends data without needing to receive a confirmation. This type of protocol is useful for streaming data, where it doesn’t entirely matter if a packet gets lost here or there, and because of this it can move more data over high-bandwidth, high-latency connections. This is not to say that latency doesn’t still impact it, because it can cause delays, and latency’s brother, jitter, can really mess up communications.
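As a small illustration of the difference, here is a minimal UDP sender sketch in Python. The localhost address and port are just placeholders; nothing even needs to be listening on the other end, which is exactly the point.

import socket

# Fire-and-forget UDP: no connection, no ACKs, no retransmits.
# The destination address and port below are placeholders.
DEST = ("127.0.0.1", 9999)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    sock.sendto(f"frame {seq}".encode(), DEST)  # returns as soon as the datagram is handed off
sock.close()

# If any of these datagrams are dropped, the sender never finds out.
# That is acceptable for a voice or video frame, fatal for a file transfer.

The sender never waits for anything, so high latency doesn’t throttle it the way it throttles TCP; the cost is that lost packets simply vanish.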

You can think of jitter as variable latency, which is what causes the strange robotic sounds, chirps and dropouts on audio calls.
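To make that concrete, here is a small sketch in Python that measures jitter as the variation in packet inter-arrival times. The arrival timestamps are invented numbers for a stream that is supposed to deliver a packet every 20 ms.

# Jitter as variation in inter-arrival times. The timestamps are made up:
# packets sent every 20 ms, arriving with uneven delays.
arrival_ms = [0.0, 21.0, 39.5, 64.0, 80.2, 103.7]
expected_gap = 20.0  # the sender's packet interval, in ms

gaps = [b - a for a, b in zip(arrival_ms, arrival_ms[1:])]
jitter = sum(abs(g - expected_gap) for g in gaps) / len(gaps)
print(f"inter-arrival gaps (ms): {[round(g, 1) for g in gaps]}")
print(f"mean deviation from {expected_gap} ms: {jitter:.1f} ms of jitter")

A receiver has to buffer roughly that much audio to smooth playback out, and that jitter buffer adds its own delay on top of the latency you already have.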

So, getting back to satellite broadband, there are some latency limitations built in due to the speed of light. We often think of the speed of light as being very fast, but in the world of telecommunications, it can be a bottleneck.

There was a study put out by Carinthia University of Applied Sciences in Austria that found Starlink latency varied widely, from just under 30 ms to as much as two seconds. They found that on average, Starlink’s latency remained below 70 milliseconds, with a latency of 45 milliseconds achievable 77% of the time.

To put some perspective on things, the round-trip travel time for light over a fiber route between Chicago and New York is about 13.3 ms. I have seen fast wired connections that see about 18 ms, but more typically it is in the 20-24 ms range. Latency in satellite communications has historically been pretty high, in the 45 ms to 600 ms range.
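If you want to check that kind of number yourself, here is a quick sketch in Python. The route length is an assumption, roughly what a real Chicago-to-New-York fiber path covers, and light in glass fiber travels at only about two-thirds of its vacuum speed.

# Round-trip propagation delay over fiber, ignoring routers and queuing.
# The route length is an assumed figure for a Chicago to New York fiber path.
C_VACUUM_KM_S = 299_792                 # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47     # typical refractive index of glass fiber

route_km = 1_330                        # assumed fiber route length
rtt_ms = 2 * route_km / C_FIBER_KM_S * 1000
print(f"fiber round trip over {route_km} km: about {rtt_ms:.1f} ms")

Everything above that raw figure, the 18 to 24 ms you actually see on a ping, is added by routers, transponders, and fiber paths that are anything but straight.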

I know, tens of milliseconds doesn’t sound like much, but over billions of packets, it adds up. To give you some perspective on this, here is my voice with a 1 ms delay. Here is my voice with a 30 ms delay, and here is my voice with a 70 ms delay.

The speed-of-light travel time to low Earth orbit is typically between 2 and 27 ms, but the lower the satellite, the faster it needs to orbit. This means each satellite will be overhead for shorter periods of time, which brings challenges for things like signal variation, hand-offs between satellites and acquisition delays.
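Here is the same back-of-the-envelope arithmetic for satellites, again in Python. The altitudes are representative examples, and the calculation assumes a satellite straight overhead, so slant paths to a satellite low on the horizon would be somewhat longer.

# Minimum round-trip radio propagation time to a satellite directly overhead.
# Altitudes are representative examples, not a survey of any real constellation.
C_KM_S = 299_792  # speed of light, km/s

for name, altitude_km in [("Starlink-class LEO", 550), ("high LEO", 2_000), ("GEO", 35_786)]:
    rtt_ms = 2 * altitude_km / C_KM_S * 1000
    print(f"{name:>18} at {altitude_km:>6} km: at least {rtt_ms:6.1f} ms up and back")

Low orbits keep that floor down in the single-digit milliseconds, while a traditional geostationary hop starts at roughly 240 ms before the signal ever touches the ground network, which is why LEO constellations are interesting in the first place.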

Elon Musk has said their goal is to get the latency below 20 ms, which would be a pretty incredible feat, but still slower than a wired land connection or point-to-point microwave in most cases. For some perspective, a wired local link from 30 miles outside of Chicago to the Chicago NAP is typically under 1 ms.
