Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency and what limits will we run in to as the technology progresses?
Ignoring any computation, I guess the fastest would depend on the medium that transports the data, and the limit there is the speed of light.
Theoretically, the latency between the streamer and viewers could be zero or near zero.
For playing games online, the minimum possible latency is the speed of light delay. We’re pretty much already at the limit for that one, and we’re even using a lot of pretty clever techniques to mitigate latency such as lag compensation.
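As a sketch of what lag compensation looks like conceptually - this is my own simplified 1-D illustration, not any particular engine's implementation - the server keeps a short history of positions and rewinds by the shooter's latency before validating a hit:

```python
import bisect

# Hypothetical sketch of server-side lag compensation: the server keeps a
# short history of each target's position and, when a shot arrives, rewinds
# to where the target was ~latency seconds ago before checking the hit.
class PositionHistory:
    def __init__(self):
        self.samples = []  # ordered list of (timestamp, position) tuples

    def record(self, t, pos):
        self.samples.append((t, pos))

    def position_at(self, t):
        """Most recent recorded position at or before time t (clamped to first sample)."""
        i = bisect.bisect_right(self.samples, (t, float("inf"))) - 1
        return self.samples[max(i, 0)][1]

def check_hit(shooter_latency, shot_time, aim_pos, target_history, tolerance=0.5):
    # Rewind the target to where the shooter actually saw it on their screen.
    rewound = target_history.position_at(shot_time - shooter_latency)
    return abs(rewound - aim_pos) <= tolerance

history = PositionHistory()
history.record(0.00, 10.0)   # target at x=10 at t=0
history.record(0.10, 12.0)   # target moved to x=12 at t=100 ms

# A shooter with 100 ms latency fires at t=0.10 aiming at x=10 - where the
# target was on their screen. Without rewinding, this shot would miss.
print(check_hit(0.1, 0.10, 10.0, history))  # True
```

This is also why it sometimes looks like you were hit behind cover: the server trusted the shooter's delayed view of the world.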
Ooh, we’re not at the speed of light as a limit yet, are we? Do you mean “point A to point B” on fibre, or do you actually mean full on “routed-over-the-internet”? Even with fibre (which is slower than the speed of light), you’re never going in a straight line. And, at least where I live, you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly.
For most of us, there is no difference though; you get what you get.
I live in a nice neighborhood but I won’t ever get fiber… we have underground utilities and this area is served by coaxial cable. There’s no way in hell they are digging up miles of streets to lay fiber; you get what you get.
My ISP latency is like 16-20ms but when sim racing it just depends on where the race server is (and where my competitors are). As someone on the US west coast, if I’m matched with folks in EU and some others in AUS/NZ, the server will likely be in EU and my ping will be > 200. My Aussie competitors will be dealing with 300-400.
It’s not impossible to share a track at those latencies, but for close racing or a competitive shooter… errrr that just doesn’t work.
The fact that I’m always at around 200ms for EU servers might be improved if we could run a single strand of fiber from my house to the EU server (37ms!), but there would still be switching delays, etc. So yeah, the speed of light is the limit, but to your point, there’s a lot of other stuff that adds overhead.
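For anyone curious where numbers like that come from, here’s a back-of-the-envelope sketch. The ~8,500 km west-coast-US-to-Europe distance and the 0.67 velocity factor are my assumptions, so you’ll get somewhat different figures depending on what you plug in:

```python
# Back-of-the-envelope one-way delay for a hypothetical direct fiber run.
# Assumed numbers: ~8,500 km great-circle distance, and light in glass
# travelling at roughly two-thirds of c (refractive index ~1.47).
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
VELOCITY_FACTOR = 0.67    # typical ballpark for optical fiber

def one_way_ms(distance_km, velocity_factor=VELOCITY_FACTOR):
    return distance_km / (C_VACUUM_KM_S * velocity_factor) * 1000

print(round(one_way_ms(8500), 1))        # ~42 ms one way through fiber
print(round(one_way_ms(8500, 1.0), 1))   # ~28 ms if the path were vacuum
```

And that’s one way over a perfectly straight strand - double it for a round trip, then add routing, switching, and queueing on top.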
Theoretically it doesn’t really matter whether your connection is fiber or copper. Electricity moves through copper at roughly the same speed as light moves through fiber. The advantages that fiber has over copper are that it can be run longer distances without needing boosting, and that you can run an absolute fuckton more end-to-end connections in the same diameter of cable. More connections means less contention - at least at one end of the pipe. The problem then moves to the ISP’s routers :)
I’d say that the chances are actually quite good that you’ll get fiber internet within the next 10 years. Whether or not it improves your internet connection is another question entirely!
Right on man, thanks for the additional context/info. Much appreciated!
Even with fibre (which is slower than the speed of light)
This only makes sense if you mean the speed of light in a vacuum. Yes, light in fiber travels at roughly two-thirds of c because of the glass’s refractive index (and it varies a bit depending on whether the fiber is single-mode or multi-mode and whether it has intentionally been doped), but signals in copper propagate at about the same fraction of c, so when comparing wired media against each other it’s not really worth splitting hairs (heh) over
There are additionally some delays added during signal processing (modulation and demodulation from the carrier to layer 3) but again this is so fast at this point it’s not really conceivably going to get much faster.
The bottleneck really is contention vs. throughput, rather than the media or modulation/demodulation slash encoding/decoding.
At least to the best of my knowledge!
you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly
That’s generally not how routing works - your packets might take different routes depending on different conditions. Just like how you might take a different road home if you know that there’s roadworks or if the schools are on holiday, it can be genuinely much faster for your packets to take a diversion to avoid, say, a router that’s having a bad day.
Routing protocols are very advanced and capable, taking many metrics into consideration for how traffic is routed. Under ideal conditions, yes, they’d take the physically shortest route possible, but in most cases, because electricity moves so fast, it’s better to take a route that’s hundreds of miles longer to avoid some router that got hacked and is currently participating in some DDoS attack.
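As a toy illustration of that “longer but cheaper” behaviour, here’s a tiny cost-based shortest-path sketch. The topology and link costs are completely made up, and real routers use protocols like OSPF and BGP with far richer metrics, but the principle is the same: lowest total cost wins, not fewest miles:

```python
import heapq

# Toy link-state routing: pick the path with the lowest total cost (e.g. a
# latency metric), not the fewest hops. Topology and costs are invented.
links = {
    "home":  {"isp_a": 2},
    "isp_a": {"congested_ix": 5, "detour_ix": 3},
    "congested_ix": {"server": 40},   # physically direct, but overloaded right now
    "detour_ix": {"far_ix": 4},
    "far_ix": {"server": 6},          # longer physically, lower total cost
    "server": {},
}

def cheapest_path(graph, src, dst):
    # Dijkstra's algorithm over the link costs.
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = cheapest_path(links, "home", "server")
print(cost, path)  # the detour wins: cost 15 via detour_ix/far_ix vs 47 direct
```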
I played on Google Stadia from day 1 until it got shut down. I mainly played racing games like F1 and GRID, with the occasional session in RDR2 or The Division 2. Latency was never a problem for me.
The main problem that occurred over and over in the community was people’s slow or broken internet connections at home, or their WiFi setups.
I would say the technology for cloud gaming is here today, but the home internet connections of a lot of people aren’t ready yet.
Many people don’t understand the continued importance of a home wired LAN. WiFi is, and probably always will be, a fraction of the performance of an ethernet connection.
Yes, I should have mentioned that I’ve always played via wired ethernet connection.
Of course! But anyone on WiFi is going to be subject to more lag, like you said.
WiFi is, and probably always will be, a fraction of the performance of an ethernet connection
In terms of bandwidth, sure, but not in terms of latency. In fact, theoretically, WiFi could be faster than Ethernet: WiFi uses radio waves, which travel faster through air than electrical signals do through copper or photons do through glass.
The limitation for WiFi is really at the physical layer - i.e. encoding/decoding. With that said, we do already have WiFi with transcoding fast enough to give sufficient performance for fast-paced gaming. While you’re totally correct that, at the moment, Ethernet is more capable in terms of bandwidth and latency, that’s not necessarily going to be true forever, and WiFi is good enough for most home use. The biggest issues are interference and attenuation - e.g. thick walls and sources of electromagnetic interference.
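To put numbers on how little the medium itself matters over home distances, a quick sketch - the velocity factors here are rough ballpark values I’m assuming, not exact specs:

```python
# Rough propagation delay over a 20 m run in a home, for three media.
C = 299_792_458  # speed of light in vacuum, m/s
media = {"radio (air)": 1.0, "twisted-pair copper": 0.65, "optical fiber": 0.67}

def propagation_ns(distance_m, velocity_factor):
    return distance_m / (C * velocity_factor) * 1e9

for name, vf in media.items():
    print(f"{name}: {propagation_ns(20, vf):.0f} ns")
# All come out under ~105 ns - utterly negligible next to encoding/decoding,
# queueing, and WAN latency, which are measured in milliseconds.
```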
Sure, good points. Even with in-home fiber (very unusual), latency of the medium is so equivalent as to be practically unmeasurable. I think, however, that the bigger factor is that it’s cheaper and easier to get a fast ethernet switch than a fast WiFi router; most WiFi routers don’t have particularly fast CPUs, or high-performance buses.
Honestly, though, I’m just guessing; I doubt any of this has as much of a latency impact as WAN factors. Bandwidth is where you’ll notice WiFi’s effects, and this can present as latency issues as systems struggle to get updates over a (relatively) narrow pipe.
Thanks for the response, it’s nice to chat with you :)
latency of the medium is so equivalent as to be practically unmeasurable
More or less, yup. There are some cool uses of RF to achieve very high bandwidth, low latency connections - 5G is a common example, but Wi-Fi 7 has a theoretical maximum speed of 46Gbps. While that’s still far behind the maximum speed of Ethernet (we have 400Gbps Ethernet in use, with 800Gbps in development), it’s catching up very fast. And since most households and businesses with copper cabling are using mostly CAT5e or CAT6a Ethernet (1Gbps and 10Gbps over 100m respectively), Wi-Fi will soon likely be faster than most copper Ethernet networks. It’s also very likely that 5G internet will all but supplant ADSL and VDSL connections in the coming years. I think twisted-pair copper cabling is following in the footsteps of coax :)
Even with in-home fiber
The minimum latency of a connection through fiber is about the same as (actually, slightly less than, but not enough to matter) the same connection made through copper. Signal propagation speed is not a benefit of fiber over copper - the benefits of fiber are that you can have many, many more connections in the same diameter of cable than with copper, it’s immune to electromagnetic interference, and it can run much further distances without needing signal boosting.
most WiFi routers don’t have particularly fast CPUs, or high-performance buses.
That’s one of the main issues, yeah - consumer grade electronics are usually total junk, especially the free routers provided by ISPs, but I’m also thinking of those absolutely horrible “gaming” Wi-Fi routers provided by the likes of ASUS - they have decent specs, but they’re just absolutely overloaded with features that gobble RAM and CPU. Dear consumer electronics manufacturers, please just let the router be a router, and let the Wi-Fi APs be Wi-Fi APs. Combine the router and the Wi-Fi AP if you must, but absolutely please stop suggesting that people can run a hundred services from routers. You should totally upsell that feature in a separate node appliance or something! Sorry, I got distracted.
it’s cheaper and easier to get a fast ethernet switch than a fast WiFi router
I agree, but I also don’t - most consumers don’t really know what a switch is or why they might need one. Most switches found in houses are either integrated with a router, power line adapter, or Wi-Fi access point. While a good switch is absolutely going to be much cheaper than a good Wi-Fi AP, most people wouldn’t really look to buy one. They might search for “Ethernet hub” on Amazon and luck into buying a decent switch, but I think most people think in terms of Wi-Fi these days, so it’s probably easier to get a Wi-Fi AP than a switch.
Also, just a minor nitpick: “fast Ethernet” is a little confusing as terminology, because that’s the marketing name for 100Mbps Ethernet connections (often indicated on network devices as FE) - so named because it was the successor to 10Mbps (regular) Ethernet. (Damn you, marketing people! I blame y’all for what you did to USB.) When we discuss this kind of thing, it’s clearer to say ‘high speed Ethernet’ or refer specifically to line speed (e.g. 10GbE) - unless we’re actually talking about 100Mbps Ethernet! Although even then it’s probably a bit confusing these days - I’d usually call it 10/100 Ethernet rather than Fast Ethernet, unless I was being really lazy (“yeah, just stick it in the f/e port”)
I doubt any of this has as much of a latency impact as WAN factors
It definitely can, but in a properly functioning network, I’d agree. If you have a faulty connection or a significant source of interference or impedance, that would be much more of an issue than anything else - otherwise, yeah, it’s going to be the Internet where most of the latency comes into play. I would estimate that probably 75% of people could get big improvements to their online experience by making changes to their home network, but at a certain point, yes, contention becomes the bottleneck, which is not so easily solved :)
I would say the technology for cloud gaming is here today, but the home internet connections of a lot of people aren’t ready yet.
You witness this a lot with video conferencing. People tell one person their audio/video is shitty, and that person just shrugs and says “yeah, I have bad internet.” In my head I’m screaming “Well, what have you tried?!” or “I see you sitting beside the refrigerator there!”
Those games are quite well matched with cloud streaming. An example of a game which isn’t suitable for cloud gaming would be a competitive FPS such as Rainbow Six Siege, where the additional delay imposed by the connection between the player and the game can be quite a significant disadvantage. The only way this would be low enough to become acceptable is if you live close enough to the host device that the latency is very low, or the host device is very close to the game server itself.
The speed of light, so around 43ms to the far side of the planet if you could go straight through - more like 67ms in practice, because you have to go around it rather than through the core. Servers already have to make retroactive calls (lag compensation), which is why it sometimes looks like you hit but then you didn’t.
Interestingly enough, Starlink has lower latency than wire despite the longer path because light travels slower than c through glass fiber.
The base limit is the speed of light/electricity: it takes a certain amount of time for a signal to travel, and this is your base latency. For example, it takes about 70ms for light to travel halfway round the world (it has to go around, not through). This can be improved by talking to servers that are closer to you and by taking more direct links, but it can’t be improved beyond the rules of physics.
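A quick sanity check on those figures (the Earth measurements are approximate, and this assumes vacuum speed - real fiber is slower still):

```python
# Sanity-checking the "~70 ms halfway round the world" figure.
C_KM_S = 299_792               # speed of light in vacuum, km/s
EARTH_CIRCUMFERENCE_KM = 40_075
EARTH_DIAMETER_KM = 12_742

around = EARTH_CIRCUMFERENCE_KM / 2 / C_KM_S * 1000   # along the surface
through = EARTH_DIAMETER_KM / C_KM_S * 1000           # if we could go through
print(f"around: {around:.0f} ms, through the core: {through:.0f} ms")
# -> around: 67 ms, through the core: 43 ms - and real fiber paths are
#    slower again, since light in glass travels at roughly two-thirds of c.
```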
On top of this you get really small amounts of processing delays as data is passed through various routers/computers on the way to the destination.
The real problem comes from congestion - if there is a lot of data being transferred between two destinations, the infrastructure between them might not be able to cope. This may result in messages being queued (causing a delay) or dropped (your controls don’t make it to the server!). To avoid this, the network will route your message via somewhere else with less demand, increasing the distance and delay (but spreading the load)
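A toy illustration of why that queueing hurts: if packets arrive even slightly faster than a link can forward them, the per-packet delay climbs steadily (all numbers here are invented for illustration):

```python
# Toy single-link queue: packets arrive every 1 ms but the link can only
# forward one every 1.25 ms, so the queue (and per-packet delay) grows.
ARRIVAL_INTERVAL = 1.0    # ms between packet arrivals
SERVICE_TIME = 1.25       # ms to transmit one packet

def queueing_delays(n_packets):
    delays = []
    link_free_at = 0.0
    for i in range(n_packets):
        arrival = i * ARRIVAL_INTERVAL
        start = max(arrival, link_free_at)     # wait if the link is busy
        link_free_at = start + SERVICE_TIME
        delays.append(link_free_at - arrival)  # total delay incl. queueing
    return delays

d = queueing_delays(20)
print(f"first packet: {d[0]:.2f} ms, twentieth: {d[-1]:.2f} ms")
# Delay climbs with every packet - the hallmark of an overloaded link.
```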
Unfortunately, if that overloaded cable is the one bringing data into your neighborhood, then there likely isn’t an alternative route. In the UK at least, we are (finally) building out a fibre-to-the-premises internet network that effectively fixes any local bottlenecks.
If you want to see where your latency is coming from, you can run a traceroute using various applications (or directly from the command line - traceroute on most systems, tracert on Windows). This will show you the latency to each router that your data travels through on its route to its destination.
Edit addition: for game streaming the network delays are added onto the natural delays of running the game (controls -> computer -> processing -> display/speakers).
The other big additional delay for streaming is that, in order to reduce the network load, the image is compressed and encoded before being sent to you (much more heavily than is done for your monitor cable).
This is a computationally intensive operation that can take a good few ms. The better the computers at either end, the faster it can be done. However, the big way forward here is hardware encoding/decoding: by using hardware made to do encoding/decoding and nothing else, this can be done much faster.
These encoders are commonly found on graphics cards and in the graphics portions of CPUs. As newer encoding formats are created and hardware encoders are built (and actually included), this area will become much faster.
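As a rough illustration, here’s a hypothetical glass-to-glass latency budget for 60fps streaming - every number below is an assumed ballpark for the sake of the arithmetic, not a measurement of any real service:

```python
# Hypothetical glass-to-glass latency budget for cloud game streaming at
# 60 fps. Every figure is an assumed ballpark, not a measurement.
budget_ms = {
    "input + game simulation": 16.7,   # one frame at 60 fps
    "render": 8.0,
    "hardware encode": 5.0,            # vs tens of ms for software encode
    "network (one way)": 20.0,
    "decode": 3.0,
    "display": 8.0,
}
total = sum(budget_ms.values())
print(f"total: {total:.1f} ms")
# Encode/decode is a real slice of the budget, which is why dedicated
# hardware encoders matter so much for streaming.
```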
Source: programmer with a computer science degree and a vague interest in networking.
On mobile, so sorry for bad editing.
I think we are constantly progressing in that field. One issue for latency was that controllers used to contact your device, and then the server. Now they can connect directly to the server. Things will improve, like it or not.
For right now, I think the biggest hurdle is with ISPs.
- Data caps can be quite common in many countries, essentially creating a huge limit on how much you can play (if at all).
- Most people’s router and access point hardware needs upgrading. A lot of the stock all-in-one routers from ISPs are really bad, creating a bottleneck before the data even reaches the servers.
Another hurdle I can see is companies profit-sharing. Everyone wants a large cut, so I’d expect multiple streaming options… and many failures, like what we’re seeing with the movie/series streaming model… just with games it’ll be soooo much worse.
I feel a lot of the responses here are talking about cloud gaming not game streaming.
Game streaming needs to be easier to do for it to become more popular. There’s a bunch of half baked solutions through different hardware and software when you could just physically move the hardware running the game in most cases.
Cloud gaming is a hard sell when the cost to play most games on your own hardware is really fucking cheap compared to most media. Like the QWERTY keyboard, people will stick with the traditional thing because they aren’t forced to change and it’s good enough.