Tom explains why more bandwidth isn’t always the solution and why he’s always chiding people to not use WiFi to stream their video.
Featuring Tom Merritt.
A special thanks to all our supporters–without you, none of this would be possible.
Thanks to Kevin MacLeod of Incompetech.com for the theme music.
Thanks to Garrett Weinzierl for the logo!
Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit
Send us email to feedback@dailytechnewsshow.com
Episode Script
Hey folks, Tom Merritt here, host of Know a Little More. Long-time listeners know we put these episodes out in batches, AKA seasons, because it takes some work to dig down and research them. But not ALL of you are long-time listeners, meaning you haven’t heard all the episodes. Add to that the fact that things change, stuff gets updated, and facts become clearer. So we’ve decided to take this time between batches of new episodes to re-release some of the older ones. This episode is Latency vs. Bandwidth and was originally released 10/20/2020. We HAVE re-recorded it since the original episode to clarify a couple things.
Enough of the preamble. Let’s begin.
About Latency vs. Bandwidth
You have high bandwidth but your connection is still slow
Your friend has slower bandwidth but says something called latency is good?
And that’s why you die so much when you play them online?
Are you confused?
Don’t be.
Let’s help you know a little more about Latency vs. Bandwidth.
Bandwidth is often expressed as megabits or gigabits per second. Get a fast 500 Mbps connection! That number, 500 megabits per second, tells you how much data you can receive every second. It doesn’t mean you will, but you could.
Also that number isn’t actually your bandwidth so much as it is a measure of the available bandwidth’s optimum throughput. Throughput is the amount of data that is actually transferred over a given period of time.
So 500 Mbps is the highest throughput that the available bandwidth can deliver. But you may not see 500 Mbps of throughput in use, because throughput in practice is affected by a number of factors, and one of those factors is latency.
Latency is the time it takes for information to get from one point to another. It’s most often measured as a round trip. I make a request, say for a website. The request travels to a server, the server delivers the webpage, and that data shows up in my browser. That round-trip time is the latency. Latency is usually measured in milliseconds. One millisecond is a thousandth of a second. So 1000 milliseconds is one second. One second of latency is very very bad.
Oh and by the way, in most command line systems, the Ping command measures latency, so you sometimes hear the latency referred to as the ping rate.
There are lots of different ways to measure latency, like round-trip time or time to first byte, but all you really need to know to understand the concept of latency is that it’s, roughly speaking, the time from when you click the link to the time the page loads. It can also be the time from when you press the shoot button to when the enemy robot falls, but you’re getting the idea now.
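If you want to put a rough number on that yourself, here’s a minimal Python sketch. It just times how long it takes to open a connection to a server, which is close to one round trip. The host and port are placeholders, and this is no substitute for a proper ping, but it shows the idea.

```python
# Rough round-trip check: time how long it takes to open a TCP connection.
# Setting up a connection takes roughly one round trip, so the elapsed time
# is a ballpark latency number. Host and port here are just placeholders.
import socket
import time

def rough_rtt_ms(host="example.com", port=443):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # we only care how long it took to get connected
    return (time.perf_counter() - start) * 1000

print(f"Rough round trip: {rough_rtt_ms():.1f} ms")
```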
So back to bandwidth. Bandwidth is really not speed. It can have an effect on speed but it isn’t speed. Latency is closer to speed but it also isn’t actually speed. Throughput is your speed. It’s the amount of data that can be transferred over a given period of time. And your actual throughput is affected by latency and bandwidth.
Analogy time!
Think of bandwidth like your water pipe. If it’s big it can handle more water than if it’s small, right? But if the source isn’t sending a lot of water or sending the water slowly, it doesn’t matter how big the pipe is. A bigger pipe doesn’t mean more water arrives faster.
But a small pipe can slow down water delivery if more water is trying to be sent than the pipe can hold. Bandwidth only “speeds up” your connection insofar as it takes away the limiting factor of the pipe being too small.
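To make that concrete, here’s a back-of-the-envelope sketch in Python. The numbers are made up for illustration, but it shows why adding bandwidth stops helping once the pipe is no longer the limiting factor: past a certain point, the round trip is most of the wait.

```python
# Back-of-the-envelope: time to fetch a file is roughly the round-trip
# latency plus the time to push the bits through the pipe.
# All numbers below are illustrative, not measurements.

def transfer_time_ms(size_mb, bandwidth_mbps, latency_ms):
    transmission_ms = (size_mb * 8) / bandwidth_mbps * 1000  # megabits / rate
    return latency_ms + transmission_ms

# A 1 MB page with 80 ms of latency:
print(transfer_time_ms(1, 100, 80))  # 100 Mbps -> ~160 ms
print(transfer_time_ms(1, 500, 80))  # 500 Mbps -> ~96 ms; latency now dominates
```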
Bandwidth and latency affect each other and therefore affect your throughput and your perceived speed. But how?
Maybe a car analogy works better for this.
Latency is how fast the car can go, bandwidth is how many lanes are on the road. And throughput is the number of cars traveling on the road in a given time period.
And of course cars are the data.
If the car can only go 55 mph, it doesn’t matter if there are 6, 10, or 30 lanes; the car, a.k.a. your data, will only arrive as fast as 55 mph can get it there. That’s your latency measurement.
But let’s say the car can (legally) go 100 mph on the road, but the road has only one lane AND that lane is filled with other cars, not all of them going 100. That’s going to slow the car down. It won’t reach its top speed.
Latency can create bottlenecks, just like lots of slow cars on the freeway can cause traffic jams for the faster ones.
What’s that I hear? A thousand network admins asking me to add Jitter to this explanation? OK, but only because you can kill my network access if I don’t.
To be honest, jitter is not something you NEED to know about to understand Latency vs. Bandwidth, but it is another thing that can help explain odd behavior, and it will make the admins happy. And you always want your admins happy.
Jitter is the variability of latency over time. So you have a connection that swings from 100 milliseconds to 600, then to 486, then to 700, then back to 100 milliseconds.
Going back to our car analogy, jitter happens when there are too many cars on the road. Your 100 mph sports car can speed up to 70, but then has to slow down and almost stop; then the traffic moves and it speeds up again.
I mean, kind of. It’s actually different packets in a connection going at different rates because of varying network connections. But for our purposes, jitter is why you can’t just get ONE latency measurement that will stay the same. It’s also why your Skype call may suddenly drop out and then come back just fine. Any streaming data is sensitive to jitter. It’s kind of its own topic, but it’s good to know about.
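If you like seeing it as numbers, here’s a tiny Python sketch using the made-up swings from a moment ago. One simple way to express jitter is the average change between consecutive latency samples; real tools get fancier, but this captures the idea.

```python
# Jitter, expressed simply: the average change between consecutive
# latency samples. The samples are the made-up swings from above, in ms.
samples_ms = [100, 600, 486, 700, 100]

diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
jitter_ms = sum(diffs) / len(diffs)

print(f"Average jitter: {jitter_ms:.0f} ms")  # (500 + 114 + 214 + 600) / 4 = 357 ms
```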
OK back to Latency vs. Bandwidth
So why do ISPs advertise bandwidth?
Probably because it’s under their control. Latency can be affected by a lot of things outside the ISP’s control.
Distance affects latency. You probably know there is an upper speed limit on light. And light travels slower through cables than through a vacuum. Fiber optic cables impede the speed of light the least, slowing it from about 3.3 microseconds per kilometer in a vacuum to about 4.9 microseconds per kilometer in the fiber. But the point is data can only travel so fast, and therefore, even under optimal conditions, the farther away the source of your data is, the higher the latency is going to be. If the server you’re getting your data from is halfway around the world, the latency is just going to be longer.
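Here’s a quick Python sketch of just the distance part, using that roughly 4.9 microseconds per kilometer figure for fiber. The distances are illustrative, and real routes are longer and add router hops on top, but it shows why distance alone puts a floor under your latency.

```python
# Propagation delay through fiber at roughly 4.9 microseconds per km.
# Distances are illustrative; real paths are longer and add router hops.
FIBER_US_PER_KM = 4.9

def one_way_ms(distance_km):
    return distance_km * FIBER_US_PER_KM / 1000

for km in (100, 4000, 20000):  # nearby city, across a continent, halfway around the world
    print(f"{km:>6} km: ~{one_way_ms(km):.1f} ms one way, ~{2 * one_way_ms(km):.1f} ms round trip")
```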
But the distance isn’t just determined by how far away the server is. Some of this delay might be because of your ISP, such as satellite ISPs that have to send the data up to the satellite and back down to the ground again. Even if that server is next door, if you’re using a satellite ISP the data has to travel a long way. This is also why, with terrestrial ISPs, your distance from the node can affect your “speed,” since it takes longer for your request to get to the ISP, where it can be routed to where it’s going.
But even if you’re really close to the node, and the server is in a data center physically near you, other things can contribute to longer latency.
Routers along the way also affect latency. Here I’m not talking about your router; I’m talking about the routers in your ISP, and the internet exchange points and the data centers that your request and the resulting web page have to go through. Each router along the way takes time to analyze the packets, and sometimes to add information to help the packet find its way, or even to split the packets into smaller packets. Each router a packet has to go through, as well as switches and bridges, adds to the latency, and packets may not always take the route with the fewest routers. That depends on ISP agreements and who’s handling traffic, along with a lot of other considerations. This is one of the reasons Netflix tries to put servers inside an ISP’s operation: to eliminate those routes that can introduce latency. It’s also why slow Netflix connections were sometimes fixed by using a VPN. The VPN caused data to use a different route.
Netflix isn’t the only one. You may have heard of the term CDN. That stands for Content Delivery Network. A CDN caches data in multiple locations close to expected users to reduce the travel time of the data.
And depending on the type of data you’re asking for, other things can introduce latency. Is the data stored somewhere off the server? Time to access that storage adds to the latency.
Remember you don’t have a direct connection from your computer to the server. Your request goes through your router, your ISP’s node, then internet exchange points, transit providers, sometimes multiples of those, to the data center, possibly multiple servers to get to the right one, switches and bridges to take you there along the way, and maybe storage. When you think about all those paths, it’s kind of impressive that latency is measured in milliseconds, not minutes.
And that’s not just for a web page’s text, right? Each image on a web page is a separate request, not to mention scripts and ads and third-party plugins. The perceived speed of a web page loading is affected by lots of other stuff besides the latency, but if the latency is large for each of those elements, that page is going to take longer to load. And let’s not even get into what it takes to make a video stream appear coherent.
Another issue is congestion. The one-lane road problem. Or in Los Angeles, the no-matter-how-many-lanes road problem. Bandwidth is a constraint on your throughput. Very low latency may just get bogged down traveling through skinny “pipes,” or fat pipes that have a lot of users sending a lot of data through them at the same time. The “Game of Thrones” problem.
So what’s good latency?
Good is relative. Obviously the lower the better, but for most uses you won’t notice latency of less than 100 milliseconds, because most of your internet uses, web pages, email and so on, are not that sensitive to it. Gaming is one that is sensitive because you’re dealing with constant round-trip data. Video streaming is latency sensitive because you need all the packets in each frame to keep the video smooth. So 50, or even better 30, milliseconds of latency or less is desirable for gaming, and about 30-60 milliseconds for streaming video, though there are some tricks that can keep it smooth at higher latencies.
How do you improve latency?
Largely it’s in the hands of the people bringing you your data. ISPs are usually good at minimizing their effect on latency but changing ISPs could help in some cases.
More often, the source of your data, and whether they are using a CDN or otherwise keeping the transit routes short, will be the determining factor.
But you may be introducing latency on your end as well, and you can at least eliminate those factors. Your router can easily cause extra latency, so make sure your router is up to date with its firmware and check to see what its optimal capability is. Upgrading your router may help in certain specialty cases.
And plug in with ethernet. WiFi is so much better than it used to be, but it still adds some latency just by the nature of having to broadcast through the air and then convert back again, while ethernet sends the data straight on through.
So one more time. Bandwidth is not speed. Throughput is affected by available bandwidth and latency. And latency is hard to control.
But I hope this helps you understand why more bandwidth isn’t always the solution and why I’m always chiding people to not use WiFi to stream their video.
In other words I hope now you know a little more about Latency vs. Bandwidth.