What is bandwidth? What is latency?

Status
This thread has been Locked and is not open to further replies. Please start a New Thread if you're having a similar issue. View our Welcome Guide to learn how to use this site.

FriedrichBauer

Thread Starter
Joined
Mar 5, 2021
Messages
4
When first reading about bandwidth online, one often stumbles on articles such as this one that wish to emphasize that bandwidth is NOT the same as internet speed.

I'm sorry, but if I call my ISP tonight and tell them, "Double my bandwidth!" and I then decide to visit a random webpage, and the page loads twice as fast as before, that is faster internet. So to say that bandwidth has nothing to do with internet speed isn't just confusing, it just doesn't seem correct to me. It would have been better if authors such as the one above had stated, instead, that internet speed comes in more than one form and that it just depends on how you look at it. So, to clear up some of the confusion I might have, I'm going to ask several questions about bandwidth and latency, each of them numbered below.

1) Is the reason bandwidth isn't considered the same as internet speed due to the fact that real internet speed depends a lot on the amount of data being downloaded? For instance, suppose I get 100 Mbps download bandwidth from my ISP and I then decide to download a 50 Mb (6.25 MB) file from the internet. In this case, bandwidth would not be the limiting factor. Now suppose I called my ISP and told them to increase my download bandwidth from 100 Mbps to 200 Mbps. In this case, if I were to download the same 50 Mb (6.25 MB) file again, I wouldn't experience faster internet speeds. Is this the reason signing up for a higher bandwidth plan doesn't necessarily translate to higher internet speeds?

2) In its attempt to explain the concept of latency, the same article referenced above mentioned the term "ISP hub" in reference to traditional satellite internet. What exactly is an ISP "hub"? To be clear, in the article, a user's computer first communicates with a satellite, which then communicates with an ISP hub, and then presumably with a web server, and then--I'm assuming--back the other way from the web server to the ISP hub to the user's computer. Does the same thing also happen with cable internet? For instance, with cable internet, will a user's computer first have to communicate with the ISP's "hub" before being able to communicate with a web server? How can I find where my ISP hub is located?

3) To borrow an analogy from a different website, if you had a 5-lane highway, the speed limit on the highway would be the ping rate, and the number of lanes would be the bandwidth, with each car representing a certain amount of data, correct?

4) How come a server that is further away from me spatially has a lower ping rate than a server that is closer to me when doing an online speed test? Is the first server just able to respond to requests faster than the second server? Or does it have to do with the type of cables used as part of the ISP's infrastructure? To be clear, the first server in the online speed test is located in Boston and belongs to Comcast, whereas the second server belongs to a smaller ISP named after a nearby city where I live.

5) Why am I not getting a response when I ping some of the devices on my LAN using their private IP addresses, e.g. a smart TV or a smartphone?

6) How come the ping rate of a web server varies over the day? Does it have to do with server load?
 

zx10guy

Trusted Advisor
Spam Fighter
Joined
Mar 30, 2008
Messages
6,685
When you look at network performance, it falls into two categories: speed and bandwidth. Speed is obvious as it's how fast a specific datagram is sent from one end to another. Bandwidth is not so obvious and is many times confused with speed. Bandwidth is just capacity. In the end, the goal is to move data from one end to another in the quickest way possible.

When you increase speed, you're pushing data faster between endpoints, holding everything else the same. So if you have a 10 MB file, increasing your network speed from, say, 100 Mbps to 150 Mbps will result in that file getting to the destination in a shorter amount of time.
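The arithmetic behind this can be sketched in a few lines of Python. The file size and link speeds are the illustrative numbers from above; real downloads will be somewhat slower because of protocol overhead, TCP ramp-up, and so on:

```python
# Ideal transfer time = file size / link speed.
# A 10 MB file is 10 * 8 = 80 megabits.

def transfer_time_s(size_megabytes, speed_mbps):
    """Theoretical transfer time in seconds, ignoring all overhead."""
    size_megabits = size_megabytes * 8
    return size_megabits / speed_mbps

print(f"{transfer_time_s(10, 100):.2f} s at 100 Mbps")  # 0.80 s
print(f"{transfer_time_s(10, 150):.2f} s at 150 Mbps")  # 0.53 s
```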

Discussions about bandwidth come in when you can't increase the speed of transmission. What you can do instead is use technologies that let you move more data at the same speed than you could before. For a single point-to-point connection, the concept of jumbo frames enters the picture. The following will get very technical very fast, so I'm going to keep it as simple as possible.

Data moves across various network types in a structured unit called a frame. A frame has a defined maximum size in bytes and several components: a header, the data (payload) itself, and a trailing CRC (for error detection). Each of these sections has a defined size in bytes. You can look at your car as a similar analogy for a frame: the engine is a set size, the trunk is a set size, the interior cabin is a set size. For Ethernet, the standard maximum payload size is 1500 bytes. Jumbo frames raise that limit to something greater than 1500 bytes; the typical jumbo frame size is 9000 bytes, but it can be anything less, as it's configurable. All devices along the path must support jumbo frames, and some network devices support frames up to 12,000 bytes, which again every device in the path must support.

Getting back to the car analogy: say your standard frame is a regular passenger car used to move data, and then you switch to a big SUV. Both vehicles travel at the same speed, but the SUV moves more data per trip, which results in better performance and lower times to transmit the data.
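A rough way to see the jumbo-frame benefit numerically: every frame carries a fixed amount of overhead, so bigger payloads mean fewer frames and less overhead per byte moved. The sketch below assumes the common 18-byte figure for an untagged Ethernet header plus FCS and ignores the preamble and inter-frame gap, so treat it as an approximation:

```python
# Fixed per-frame overhead: 14-byte Ethernet header + 4-byte FCS.
OVERHEAD = 18

def efficiency(payload_bytes):
    """Fraction of each frame on the wire that is actual data."""
    return payload_bytes / (payload_bytes + OVERHEAD)

standard = efficiency(1500)  # standard Ethernet payload limit
jumbo = efficiency(9000)     # typical jumbo frame payload

print(f"standard frame efficiency: {standard:.4f}")
print(f"jumbo frame efficiency:    {jumbo:.4f}")
```

The difference looks small per frame, but the bigger win in practice is that fewer frames means fewer headers to process per second at each device along the path.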

The other technique is to aggregate connections to increase bandwidth that way. The previous analogy was over a single connection. To leverage the highway analogy, the above was a single-lane road: you can only move a set amount of data because you're limited to the capacity of that road, and a single car can only occupy any given place at any given time. Now expand the road to add an extra lane. You've doubled the capacity of that road, as two cars can now travel down it at the same time. Increase it again, and you get the picture. This technique is called link aggregation, or a LAG: grouping multiple connections into a single logical one. LAGs are also a point of confusion, as many people think that by combining connections the speed goes up. It does not. For example, say I aggregate two 1 Gbps Ethernet connections. Does this mean I now have a 2 Gbps connection? No. The data still travels at 1 Gbps, but now you can have two simultaneous transmissions.
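The LAG point--capacity doubles but per-flow speed does not--can be captured in a toy model (the link speed and link count are illustrative numbers, not anything from a real device):

```python
# A LAG multiplies capacity (concurrent flows), not per-flow speed:
# any single flow still rides one member link.

link_speed_gbps = 1.0
num_links = 2

aggregate_capacity_gbps = link_speed_gbps * num_links  # total capacity
single_flow_speed_gbps = link_speed_gbps               # one flow, one link

print(aggregate_capacity_gbps)  # 2.0
print(single_flow_speed_gbps)   # 1.0
```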

When you discuss bandwidth with ISPs, they're going to look at it as the amount of data they allow you to run through their network. This gets into discussions of data caps, traffic shaping, and quality of service. Data caps enter the picture because ISPs have to account for how much data is being pushed through their networks: heavy users consume more of the hardware than casual users, so to level the playing field, ISPs have started charging extra for those who exceed a certain amount of data usage a month. Traffic shaping and quality of service are almost the same: both come down to reducing your speed if you run certain types of traffic through the network or consume too much.

Latency is just a measure of the delay between data leaving the source and reaching the destination. Nothing is instantaneous in transmission; there is always a delay. To visualize the concept, look at a garden hose: when you turn on the water, it doesn't instantaneously come out the end of the hose. The longer the hose, the longer the delay until water comes out. That is latency, and it's why performance is slower when a server is on the other side of the globe. The same applies to sending data to and from satellites.
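You can put a lower bound on that delay with a back-of-the-envelope calculation, assuming light in fiber travels at roughly two-thirds of its vacuum speed (about 200 km per millisecond). The distances below are approximate great-circle figures; real routes are longer and add queuing and processing delay on top, so these are minimums, not predictions:

```python
# Propagation-only latency floor for a fiber path.
# Light in fiber: ~(2/3) * 300,000 km/s = ~200 km per millisecond.
SPEED_IN_FIBER_KM_PER_MS = 300_000 / 1000 * (2 / 3)

def min_rtt_ms(distance_km):
    """Theoretical minimum round-trip time over fiber, in ms."""
    one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_MS
    return 2 * one_way_ms

print(f"New York -> Paris  (~5,800 km): {min_rtt_ms(5800):.0f} ms min RTT")
print(f"New York -> Sydney (~16,000 km): {min_rtt_ms(16000):.0f} ms min RTT")
```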

Ping has been so misused it's not even funny, and it's been made worse by the gaming industry. The name comes from sonar pings (the expansion "Packet InterNet Groper" you sometimes see is a backronym). It's a tool to get a "hello" response back from a device you want to test, to see whether basic communication can be established. Ping tells you two things: yes, the device on the other end answered, and how much delay (latency) there was in getting a response back. That's it. Ping uses ICMP, one of the protocols in the TCP/IP suite (specifically, echo request and echo reply messages); traceroute is another tool that typically relies on ICMP.

The results of issuing a ping depend on a bunch of variables: whether the device even answers ICMP at all, how busy the end device is (a busy device delays its reply), and whether firewalls along the network path block ICMP. Lots of variables.

As to why a server that is closer than another would show higher latency in a ping test: it depends on whether that server is processing more than the other server, and on what network route your ping took to reach it. Even though the server may be close geographically, that does not mean your ping packet took the most direct and shortest route; it could have gone through several states before arriving at a server a few miles away. The route a packet takes on the Internet depends on yet more variables in route tables and route determination. Those decisions are based on traffic load on specific circuits, the speed of next-hop routers, and whether links are up or down, and they are made dynamically by routing protocols running on ISP routers, which all exchange routes using BGP (Border Gateway Protocol).
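For reference, the min/avg/max line the ping utility prints is just simple statistics over the individual round-trip times, as in this sketch (the RTT values are made-up sample data, not real measurements):

```python
# Reduce a set of ping round-trip times to the familiar min/avg/max.
rtts_ms = [26.1, 27.3, 25.8, 26.9, 26.2]  # hypothetical replies

stats = {
    "min": min(rtts_ms),
    "avg": sum(rtts_ms) / len(rtts_ms),
    "max": max(rtts_ms),
}
print(f"min/avg/max = {stats['min']}/{stats['avg']:.3f}/{stats['max']} ms")
```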
 
Joined
Jan 9, 2005
Messages
156
Whew!!! Excellent questions and terrific answers. Thanks to you both. I feel like copying and pasting this so I can read it often. (y)
 

FriedrichBauer

Thread Starter
Joined
Mar 5, 2021
Messages
4
When you increase speed, you're pushing data faster between endpoints, holding everything else the same. So if you have a 10 MB file, increasing your network speed from, say, 100 Mbps to 150 Mbps will result in that file getting to the destination in a shorter amount of time.
I think I finally understand how bandwidth works. So, for instance, to borrow your example, suppose I'm getting a download bandwidth of 100 Mbps from my ISP, and I wish to download a 10 MB file. 10 MB is the same as 80 Mb. Let's ignore, for simplicity's sake, the fact that real-world download speeds--such as those from online speed tests--will always be lower than the download bandwidth. So, in theory, I should get the file in 80 Mb x (1 second / 100 Mb) = 0.8 seconds. Now, suppose I call my ISP and ask them to increase my download bandwidth from 100 Mbps to 150 Mbps. In this case, if I were to download the same file again, I should get the file in 80 Mb x (1 second / 150 Mb) = 0.53 seconds.

Is my math correct?

If the answer is yes, then what is the reason increasing one's bandwidth doesn't necessarily translate to higher "speeds," as experienced by the end user? Is it because the speed difference isn't always, or is barely, perceptible, there not being a big difference between 0.8 seconds and 0.53 seconds? I really need to have this last question answered.

Regarding frames, you wrote:
Getting back to the car analogy. Say your standard frame is a regular passenger car to move data. But then you utilize a big SUV. Both vehicles are traveling at the same speed. But the use of the SUV will move more data which results in better performance and lower times to transmit the data.
So, if one person represents 10 MB, and an SUV can carry a maximum of 6 people vs. 4 people for a regular sedan, and the SUV is represented by a jumbo frame vs. a standard-sized frame for the sedan, that means the jumbo frame can carry 60 MB of data vs. 40 MB for a standard frame?

Also, when using an online speed test, what are they measuring exactly? What is the download speed from the online speed tests results? Is that the same as throughput?

What is the difference between initiating a multi-threaded vs. single-threaded connection on Ookla's speed test? Why should I care whether the speed test uses a multi-threaded or single-threaded connection?

According to this source:
Most large downloads [over the web] and streaming services operate over a single connection to the server, so it makes sense to measure the throughput available over a single connection. Personally this is my preferred test when comparing ISPs even though it may not show what the connection is capable of with multiple simultaneous connections.
Why would single-threaded connections be the preferred choice when comparing different ISP speeds? When comparing ISPs, isn't it preferable to find out what their real-world speeds are using a multi-threaded connection rather than a single-threaded one?

What about "regular" web traffic like accessing a webpage? Does accessing a webpage use a multi-threaded connection or does it use a single-threaded connection?

Lastly, according to this source:
Some connections cannot achieve full speed from a single connection, or have very unstable speeds, and multiple connections allow us to "fill" the available bandwidth more easily. This is particularly true on faster lines. Additionally, many modern applications don't use a single thread when transferring data, so it helps us produce a "real world" result.
But why would it matter whether the connection's speed was unstable when choosing between a single-threaded connection and a multi-threaded connection? How would this affect the test results of the speed test?
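One way to see why stream count matters is a toy model in which each TCP stream tops out at some per-stream ceiling below the line's capacity (both numbers below are hypothetical): a single-stream test reports the ceiling, while a multi-stream test can fill the line.

```python
# Toy model of single- vs multi-stream speed tests. Assume each TCP
# stream is limited to a per-stream ceiling (window size, loss
# recovery, instability) below the line's actual capacity.

line_capacity_mbps = 500
per_stream_ceiling_mbps = 180  # hypothetical single-stream limit

def measured_mbps(num_streams):
    """Throughput a speed test would report with this many streams."""
    return min(num_streams * per_stream_ceiling_mbps, line_capacity_mbps)

print(measured_mbps(1))  # single-threaded test: 180
print(measured_mbps(4))  # multi-threaded test: 500
```

Under this model, the single-stream result better matches a typical one-connection download, while the multi-stream result better matches what the line can carry in total.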

Also, I decided to run some tests by pinging web servers operated by three newspapers--The New York Times, Le Monde (the French equivalent of The New York Times), and Libération (a smaller French newspaper)--and was surprised by the results. I realize I should have run the tests at the same local time in France as here--they are 6 hours ahead of the U.S., in a different time zone, possibly with slightly different work schedules, which might affect the results--but I didn't feel like waiting until tomorrow to do it right, so I went ahead and ran them anyway.

Here were the results I got:
average latency for nytimes.com = 26.455 ms
average latency for lemonde.fr = 22.320 ms
average latency for liberation.fr = 115.809 ms

I was particularly surprised by the fact that the latency for the Le Monde server was actually lower than that for The New York Times. Does that mean the Le Monde server is using some kind of CDN (content delivery network) to reduce latency?

As for the results I got for the Libération newspaper, they were more in line with what I expected from a server operated overseas.
 
Last edited:
Joined
Jul 30, 2001
Messages
384
I've always thought of bandwidth vs speed like this. The speed of the car is just that, the speed. The bandwidth is how it delivers that speed as the number of cars increases.
 

Couriant

James
Moderator
Joined
Mar 26, 2002
Messages
40,592

Also, I decided to run some tests by pinging web servers operated by three newspapers--The New York Times, Le Monde (the French equivalent of The New York Times), and Libération (a smaller French newspaper)--and was surprised by the results. I realize I should have run the tests at the same local time in France as here--they are 6 hours ahead of the U.S., in a different time zone, possibly with slightly different work schedules, which might affect the results--but I didn't feel like waiting until tomorrow to do it right, so I went ahead and ran them anyway.

Here were the results I got:
average latency for nytimes.com = 26.455 ms
average latency for lemonde.fr = 22.320 ms
average latency for liberation.fr = 115.809 ms

I was particularly surprised by the fact that the latency for the Le Monde server was actually lower than that for The New York Times. Does that mean the Le Monde server is using some kind of CDN (content delivery network) to reduce latency?

As for the results I got for the Libération newspaper, they were more in line with what I expected from a server operated overseas.
To answer this: If LeMonde is the French New York Times (same parent company) then the latency/ping times are going to be (close to) the same result because they are on the same web server / IP subnet. At least it is when I do a lookup.

[attached screenshot of the lookup results]
 

FriedrichBauer

Thread Starter
Joined
Mar 5, 2021
Messages
4
To answer this: If LeMonde is the French New York Times (same parent company) then the latency/ping times are going to be (close to) the same result because they are on the same web server / IP subnet. At least it is when I do a lookup.
Maybe I'm misunderstanding what you wrote, but I think you might have misunderstood what I wrote. When I said that Le Monde was the French equivalent of The New York Times, I didn't mean that both newspapers are owned by the same company. The two are, in fact, completely separate entities. What I meant was that Le Monde is to France what The New York Times is to the U.S.--the paper of record--but they're completely different newspapers. I was just wondering what effect using a CDN would have on ping, and why I was getting similar latency numbers for both nytimes.com and lemonde.fr. So why would they be on the same web server if the two papers are separate entities? Is that how CDNs work?
 
Last edited:

FriedrichBauer

Thread Starter
Joined
Mar 5, 2021
Messages
4
Never mind on the CDN question. I got my answer. While it is difficult to confirm that a lower ping from an overseas server is the result of a CDN, it is very likely to be the case. I still need answers to the other questions in my second post, though.
 