by Dave Täht
I started writing this piece this morning to talk about two things: bandwidth, which is pretty well understood, and latency, which is not, in the context of getting better performance out of humanity's synergistic relationship with web-based applications.
The problem is the speed of light!
“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” (Richard Feynman)
Yesterday, I accidentally introduced a triangular routing situation on my network, which effectively put me on the moon, in time and space, relative to google. I was a good 3+ seconds away from their servers, where normally I’m about 70ms away.
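The "moon" comparison is not just a figure of speech; the arithmetic checks out. A quick back-of-envelope sketch, assuming the mean Earth–Moon distance of roughly 384,400 km and the vacuum speed of light:

```python
# Light-speed round trip to the Moon, using approximate constants:
# mean Earth-Moon distance ~384,400 km, speed of light ~299,792 km/s.
EARTH_MOON_KM = 384_400
LIGHT_KM_PER_S = 299_792

one_way_s = EARTH_MOON_KM / LIGHT_KM_PER_S
rtt_s = 2 * one_way_s
print(f"one-way: {one_way_s:.2f} s, round trip: {rtt_s:.2f} s")
# round trip comes out to about 2.56 s
```

So a 3+ second RTT really does put you at roughly lunar distance from the server, no matter how much bandwidth you have.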
It made clear the source of the latency problems I’d seen while travelling in Australia and in Nicaragua, where google’s servers (in 2008) were over 300ms and 170ms RTT, respectively.
Everybody outside the USA notices the KER… CHUNK of time lost between a click and the page’s response… and even inside the USA this sort of latency is a problem.
Programmers try really, really hard to mask latency. Web browsers spawn threads that do DNS lookups asynchronously, open connections to multiple sites simultaneously, and render as much of the page as possible while it is still streaming. For all that, the best most web sites can do is deliver their content in a little over half a second, and most are adding additional layers of redirects and graphical gunk that make matters worse. And all they are doing is trying to mask latency that is unavoidable.
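One of those masking tricks, resolving several hostnames concurrently rather than one after another, can be sketched in a few lines. This is a minimal illustration, not how any particular browser does it, and the hostnames are just placeholders:

```python
# Sketch of concurrent DNS resolution: total wall time is roughly
# the slowest single lookup, not the sum of all of them.
import socket
import time
from concurrent.futures import ThreadPoolExecutor

hosts = ["example.com", "example.org", "example.net"]  # illustrative

def resolve(host):
    """Resolve one hostname; return None on failure instead of raising."""
    try:
        return host, socket.gethostbyname(host)
    except socket.gaierror:
        return host, None

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    results = dict(pool.map(resolve, hosts))
elapsed = time.monotonic() - start

print(f"resolved {len(results)} names in {elapsed:.3f} s")
```

Parallelism hides some of the waiting, but each individual lookup still pays the full round-trip time; no amount of threading shortens the speed-of-light path.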