macbroadcast's blog

AMT Multicast for Resilient, Scalable Content Delivery

About AMT
The primary goal of Automatic Multicast without explicit Tunnels (AMT) is to foster the deployment of native IP multicast by enabling a potentially large number of nodes to connect to the existing multicast infrastructure. The protocol can be deployed in a few strategically placed network nodes and in user-installable software modules (pseudo device drivers and/or user-mode daemons) that reside beneath the socket API of end nodes' operating systems.

Archived Files:

Berkeley Scientists Discover an “Instant Cosmic Classic” Supernova
August 27, 2011, 7:03 am
Filed under: Uncategorized

A supernova discovered yesterday is closer to Earth — approximately 21 million light-years away — than any other of its kind in a generation. Astronomers believe they caught the supernova within hours of its explosion, a rare feat made possible with a specialized survey telescope and state-of-the-art computational tools.

The discovery of such a supernova so early and so close has energized the astronomical community, which is scrambling to observe it with as many telescopes as possible, including the Hubble Space Telescope.

Joshua Bloom, assistant professor of astronomy at the University of California, Berkeley, called it "the supernova of a generation." Astronomers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, who made the discovery, predict that it will be a target for research for the next decade, making it one of the most-studied supernovae in history.

Paul Vixie Explains How PROTECT IP Will Break The Internet
August 26, 2011, 12:21 pm
Filed under: Big Brother, Decentralization, DNS, globalchange, ipv6, linux, society



from the not-cool-folks dept

It's pretty difficult to question Paul Vixie's credibility when it comes to core internet infrastructure. Creator of a variety of key Unix and internet software, he's still best known for his work on BIND, "the most widely used DNS software on the internet." So you would think that when he and a few other core internet technologists spoke up about why PROTECT IP would break fundamental parts of the internet, people would pay attention. Tragically, PROTECT IP supporters, like the MPAA, appear to be totally clueless in arguing against Vixie. Their response is basically "it's fine to break the internet for evil rogue sites."

That, of course, is missing the point. It’s not that anyone’s worried about breaking the internet for those sites. It’s that it will break fundamental parts of the internet for everyone else as well. And… it will do this in a way that won’t make a dent in online infringement. Afterdawn sat down with Vixie who gave a clear and concise explanation of why PROTECT IP is a problem. The biggest issue is how it will impact DNSSEC, which adds encrypted signatures to DNS records to make sure that the IP address you’re getting is authentic. You want that. Without that, there are significant security risks. But PROTECT IP ignores that.

Explained simply, for DNSSEC to work, it needs to be able to route around errors. But the way PROTECT IP is written, routing around errors will break the law:

Say your browser, when it's trying to decide whether some web site is or is not your bank's web site, sees the modifications or hears no response. It has to be able to try some other mechanism like a proxy or a VPN as a backup solution rather than just giving up (or just accepting the modification and saying "who cares?"). Using a proxy or VPN as a backup solution would, under PROTECT IP, break the law.

And, of course, none of these DNS efforts will actually stop infringement. As the Afterdawn article notes: "Bypassing DNS filtering is trivially easy. All you need to do is configure your computer to use DNS servers outside the US which won't be affected by the law."
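That bypass really is trivial: a DNS query is just a small UDP packet you can aim at any resolver in any jurisdiction. A minimal stdlib-only sketch of building one by hand (the RFC 1035 wire format; the domain name is just an illustrative example):

```python
import random
import struct

def build_dns_query(name, qtype=1):
    """Build a minimal DNS query packet (RFC 1035) for the given name.

    qtype=1 asks for an A record. Send the result over UDP to port 53
    of any resolver you like -- which is exactly why DNS-based blocking
    is so easy to route around.
    """
    tid = random.randint(0, 0xFFFF)
    # Header: id, flags (0x0100 = recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records.
    header = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, class IN
    return header + question
```

This is a sketch, not a full resolver: no response parsing, no EDNS, no DNSSEC validation. It only illustrates that the "filter" sits entirely in which server you choose to ask.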

And while supporters of PROTECT IP insist that there’s nothing to worry about because it only impacts those “foreign websites,” that’s misleading in the extreme. PROTECT IP will impact a ton of US-based technology companies. First, if we have a less secure internet, that’s going to be a problem for obvious reasons. Additionally, the way the law works is that it puts a direct burden on US companies to figure out ways to block sites declared rogue (you know, like the Internet Archive and 50 Cent’s personal website), or face liability. This will increase both compliance and legal costs.

It's the bufferbloat, stupid
August 25, 2011, 8:54 pm
Filed under: Decentralization, globalchange, Hacking, howto, ipv6

CeroWrt – Debloating

I don't know if you have heard about bufferbloat yet; I posted a Google Tech Talk a few weeks ago regarding this issue.

I would recommend watching it to get a brief overview of what it is about. A few days ago I mentioned that there is an interesting project called CeroWrt, which claims to work on these network buffering issues.


Bufferbloat is a widespread problem present throughout the Internet, “end-to-end.” Debloating is a “work in progress” industry wide and will take years. Ultimately, all buffering/queuing in operating systems needs to be carefully managed and be automatically adaptive to the data transfer rates. All network routers (and operating systems!) should be running with AQM (e.g. algorithms such as RED) including home routers: unfortunately, existing algorithms such as RED are unlikely to work correctly in today’s home network environment.
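To make the AQM idea concrete, here is a minimal sketch of the classic RED (Random Early Detection) algorithm the quote refers to: drop packets probabilistically *before* the queue is full, based on a moving average of queue depth. The class name, parameter names, and threshold values are illustrative assumptions, not tuned recommendations (and, as noted above, stock RED tuning tends to misbehave on home links):

```python
import random

class RedQueue:
    """Toy RED queue: probabilistic early drop keyed to average queue depth."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th    # below this average depth: never drop
        self.max_th = max_th    # above this average depth: always drop
        self.max_p = max_p      # drop probability as avg reaches max_th
        self.weight = weight    # EWMA weight for the average queue size
        self.avg = 0.0
        self.queue = []

    def enqueue(self, pkt):
        """Try to enqueue pkt; return True if accepted, False if dropped."""
        # Track a moving average, so short bursts are tolerated but a
        # persistently full queue (bufferbloat) triggers early drops.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False
        elif self.avg >= self.max_th:
            drop = True
        else:
            # Drop probability ramps linearly between the two thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(pkt)
        return not drop
```

The early drops signal TCP senders to slow down while latency is still low, which is exactly what an unmanaged, oversized FIFO buffer fails to do.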

CeroWrt is the test platform for improved AQM algorithms. To achieve the lowest possible latencies under load across the high bandwidth variation of 802.11 and broadband, new AQM algorithms need testing, in addition to more complex changes in internal buffering; these will take time, and therefore debloating will be a work in progress for an extended period.

In the upstream direction, the bottleneck link may be adjacent to your home devices (e.g. your laptop on wireless) and in your operating system, outside of our control; you may therefore see problems copying from your home device upstream to the Internet and/or your home file server. Unfortunately, TCP acks can be stalled behind packets queued in a particular direction, so bufferbloat in one direction can result in bad performance (poor latency) in the other direction. If you run Linux, you can help with debloating by joining the ongoing debloat-testing effort. On other operating systems, you should contact your operating system vendor and complain. Be gentle (but insistent), however: before 2011, bufferbloat was not understood to be a general problem, and it will take time to overcome.

Note that bufferbloat only occurs in the device just before the bottleneck in a path. When fixes for bufferbloat are not available for the devices on either side of a bottleneck, a common strategy is therefore to move the bottleneck from a device which is badly bloated to one which is not: e.g. you might ensure that your wireless bandwidth is always greater than your broadband bandwidth, and use bandwidth shaping and QoS to mitigate the consequences of bufferbloat on that hop as best you can.
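One hedged sketch of that bottleneck-moving strategy on Linux, using the standard `tc` token bucket filter. The interface name and rate below are assumptions you would replace with your own (the rule of thumb is to shape to slightly under your measured uplink rate):

```shell
# Shape upstream traffic to just under the measured broadband rate, so
# the queue forms in this (manageable) device rather than in the bloated
# modem buffer downstream. tbf = token bucket filter.
tc qdisc add dev eth0 root tbf rate 900kbit burst 32kbit latency 400ms

# To remove the shaper again:
# tc qdisc del dev eth0 root
```

This does not fix bufferbloat; it relocates the queue to a device whose buffering you can control, which is often enough to keep interactive traffic usable under load.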


Check out Dave's blog


Bufferbloat: Dark Buffers in the Internet

by Jim Gettys, Kathleen Nichols | November 29, 2011

Topic: Networks



YouTube-Google+ Feature Lets Users Have Video-watching Party
August 20, 2011, 7:14 pm
Filed under: socialweb

Will this be a Facebook-killer feature?

Google+ users can now have a video-watching party on YouTube.

That means if you want to participate in making the latest viral video even more infectious, you can do it by combining the power of Google+’s Hangout and YouTube’s video-sharing functions.

Since the beginning of Google+, early adopters of the new social network have been raving about its Hangout feature, some even calling it G+’s “most interesting and useful feature.” Hangouts have been so well appreciated that soon after Google+ went live, Facebook made a counter move and announced its Skype-powered in-browser video chat service.

For Google+ users, taking advantage of the new YouTube feature is simple.


Social networks adoption lifecycle
August 20, 2011, 1:55 pm
Filed under: socialweb, society, Uncategorized

Great post about "Social Media Statistics" via

The diffusion of new technologies follows a classic normal distribution or "bell curve" and, according to Everett Rogers' studies (Diffusion of Innovations, 1962), anyone who engages with a given innovation fits into one of five categories: innovators (2.5% of the potential population of adopters), early adopters (13.5%), early majority (34%), late majority (34%), laggards (16%). Each of these groups has unique psychographic characteristics that cause people to be more or less likely to adopt a given technology at a particular point in time. To understand the model, focus on the area under the curve, which represents the percentage of adoption within each group.
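Rogers' percentages are not arbitrary: they are the areas under a standard normal curve, cut at one and two standard deviations from the mean adoption time. A quick stdlib-only check (rounding the exact areas gives Rogers' published figures):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Areas under the bell curve between cut points at -2, -1, 0, and +1
# standard deviations relative to the mean time of adoption.
segments = {
    "innovators":     phi(-2),            # more than 2 sd ahead of the mean
    "early adopters": phi(-1) - phi(-2),  # between 1 and 2 sd ahead
    "early majority": phi(0) - phi(-1),   # within 1 sd ahead of the mean
    "late majority":  phi(1) - phi(0),    # within 1 sd behind the mean
    "laggards":       1 - phi(1),         # more than 1 sd behind
}
```

The exact areas come out to roughly 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%, which Rogers rounds to the familiar 2.5 / 13.5 / 34 / 34 / 16 split.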


Who is Suing Who In the Mobile Patent Wars?
August 19, 2011, 7:04 pm
Filed under: globalchange, infografic, socialweb, society, Softwarepatents

Interesting infographic via Reuters