Friday, December 5, 2008

ITU Advocates Infrastructure Sharing

[It is interesting to see that the last redoubt of the monopoly telco – the ITU – is now advocating infrastructure sharing. Infrastructure sharing has been a fact of life wherever there is intense competition, as for example with cell phone towers in the USA. Few cellular companies now own their towers or the associated transmission gear. Instead they contract with companies like American Tower that specialize in operating and maintaining cellular towers and transmission gear for a number of carriers, while the carriers themselves focus on their core competency of providing cell phone services. Similarly, new last-mile business models such as “Homes with Tails”, advocated by Google policy staff, are another example where “condominium” sharing of infrastructure enables new business opportunities and significantly lowers costs for all competitors – large and small. Thanks to Frank Coluccio for this pointer – BSA]

ITU Advocates Infrastructure Sharing to Counter Investment Drought

In response to the global financial crisis, which may make it more difficult for investors to obtain financing for continuing network development, the International Telecommunication Union (ITU) is advocating infrastructure sharing as a means to continue the rapid rollout of network resources to under-served populations. In its newly published annual report, Trends in Telecommunication Reform 2008: Six Degrees of Sharing, the ITU examines the sharing of civil engineering costs in deploying networks, promoting open access to network support infrastructure (poles, ducts, conduits) and essential facilities (submarine cable landing stations and international gateways), as well as access to radio-frequency spectrum and end-user devices.


Additional discussion papers were commissioned by the ITU and written by various lawyers and regulators.

Legal and free: TV shows and movies over the Internet

[For those outside of the USA, we can only dream of such developments. Geo-blocking and regulation are preventing the rest of the world from accessing many of these services available in the USA. But they will come someday and will fundamentally change the business model for the cablecos. Someday consumers will rebel against bundling, channel relocation and force-fed cultural content. Thanks to Dewayne Hendricks for this pointer—BSA]

If you would like to learn why popular TV shows and movies are being made available legally on the Internet for free and how we can get them to our televisions, this interview is for you.
Our guest today is Will Richmond, the editor and publisher of VideoNuze, an online publication for broadband video decision-makers. VideoNuze concentrates on the emerging Internet Video industry. Will has prior experience in the CATV industry, which gives him a valuable perspective on how the sector reacts to developments. This is important because cable companies are the leading providers of broadband Internet access as well as the dominant networks delivering conventional television programs.
As noted in earlier podcasts, a number of promising websites are emerging that host, or index, advertising-supported Internet Video of popular TV shows and movies. Examples include Hulu, Fancast, Veoh, and AOL Video. They’re great for consumers because they are free to the viewer and completely legal.
In our analysis the emergence of such websites could prove to represent the “tipping point” at which consumers push hard enough to find ways to get Internet Video streams to display on their televisions. ABC, NBC, and CBS have all made popular shows available online. There are also hundreds of popular, or once popular, movies from major Hollywood studios available at the websites noted above.
As users get increasingly accustomed to sites like Hulu, they find that they like the convenience of on-demand viewing, personalization of selections, viral sharing of program recommendations, community commentary, email notifications of show postings, and the abundance of interesting programming. Heavy users are even dropping CATV or satellite service altogether. For example, Will’s research concludes that most subscribers would cut CATV service before they cut ISP (Internet Service Provider) service. This is particularly relevant given the current economic downturn.
However, Will’s research also concludes that cable networks like ESPN and AMC will be reluctant to provide shows to websites like Hulu. He reasons that they will decline to put at risk the traditional fees they collect from CATV operators.

How Canada Fought Bad Copyright Law: Showing Why Copyright Law Matters

[Great post and kudos to Michael Geist in leading the charge in Canada. I also highly recommend Michael’s blog -- BSA]

by Michael Masnick from the sit-back-and-watch dept on Thursday, December 4th, 2008 @ 4:10AM
You may recall, just about a year ago, there was suddenly a bunch of news over the possibility of Canada introducing its own version of the US's Digital Millennium Copyright Act (DMCA). To the surprise of both the entertainment industry (who helped craft the law) and the politicians who were pushing it, the opposition to this law was incredibly successful in getting its message out. Starting with calls on various blogs and Facebook groups, kicked off by law professor Michael Geist, the issue became a big one throughout the media. The politicians who promised the entertainment industry that they would pass this law tried to delay the introduction, assuming that the opposition, while loud, was thin and would fade away. They were wrong. The issue continued to get attention, and when the law was finally introduced, the opposition, across the board, was widespread and strong. It wasn't just a fringe issue among "internet activists." It was something that people from all over the economy saw as a fundamental issue worth fighting for.

But why?

For years, copyright (and wider intellectual property) law has been considered sort of inside baseball, something that only lawyers and the entertainment industry cared about. But that's been changing. There are a variety of reasons why this happened and why copyright is now considered a key issue by so many people in so many parts of the economy. Michael Geist has now put together a film that tries to examine that question. After first discussing how the issue became such a big deal, Geist interviews a number of Canadian copyfighters to get a sense of why copyright is an issue worth fighting about:
Not surprisingly, Geist has also made the movie available in a variety of different formats so people can do what they want with it, including remixing or re-editing it. There's the full version (seen above), an annotated version, a version for subtitling, or you can download the full movie via BitTorrent at either Mininova or Vuze. Unless, of course, you live somewhere where they claim that BitTorrent is evil and must be blocked.

Thursday, November 13, 2008

The incredible shrinking Internet

[Thanks to Harvey Newman for this pointer –BSA]

Ahead of the Curve | Tom Yager

November 05, 2008

Broadband throttling, caps, and content filtering aren't about sweeping pirates off the Internet


Time was, back in the bad old days of regulatory oversight, the feds told telephone companies that they had to extend service to residents outside prime socioeconomic territories. The FCC laid down the law that the local telcos with a regional monopoly on wire and fiber had to share access to that infrastructure with competing ISPs, and to make that happen, telcos were ordered to split their Internet and telephone service branches into separate business units.

Telcos complained loudly about cable TV providers offering VoIP phone service not subject to telco regulations, while at the same time flouting the very regulations they sought to impose on cable companies in the interest of fairness. Cable is hardly squeaky clean, with infamously poor customer service, wavering quality of coverage, prerequisites for cable TV plans, and leased equipment that adds $50 or more to the low monthly prices quoted in their ads.

Just as the FCC has cleared the way for use of narrow bands of analog TV spectrum for Internet traffic, DSL and cable are exacting payback in a manner that should be setting off bells (no pun) in Washington. We know about the tiered Internet, where commercial customers without top-end bundled services have their traffic shoved aside for use by elite subscribers, and for services sold by the carriers themselves. We read less about other threats that attract minimal attention from IT because they seem to affect only residential broadband users, and more specifically, that fraction of residential users that spoils the Internet for everybody with their peer-to-peer file sharing. Nobody can feel sorry for pirates who are getting their pegs clipped.

Changes being made under the rubric of fighting excessive use of bandwidth by that mythical 5 percent of broadband subscribers who use 90 percent of available bandwidth have nothing to do with freeing those ill-used bits for legitimate use. It's hard to get telcos and cable operators to explain why transfer caps, speed throttling, content filtering, and port blocking (supposedly necessary measures to avoid being bankrupted by 5 percent of 'net ne'er-do-wells) apply to all Internet subscribers. Unless, that is, you're paid in full for your provider's most expensive data offerings, or you happen to be a telecommunications provider.

The true motivations behind each of the new poisons being unleashed on U.S. broadband customers are easier to understand than the rhetoric held up to justify their application. Limits on the amount of data you're allowed to transfer each month prevent you from using VoIP, other than that provided by your local telco or cable operator, to escape usurious telephone tariffs. Bandwidth throttling, which may kick in when you've exceeded a (likely unstated) threshold for bytes per quarter-hour, turns your third-party VoIP calls into unintelligible mush. Telcos and cable operators not only make it tough to do business as a third-party VoIP provider, but they use bandwidth-imposed quality issues in their marketing to smear VoIP competitors.
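A quick back-of-the-envelope calculation shows why even mild throttling wrecks third-party VoIP. The figures below are illustrative assumptions, not taken from any carrier's plans: a standard G.711 call carries 64 kbit/s of audio payload, but per-packet protocol headers push the on-the-wire rate noticeably higher.

```python
# Back-of-the-envelope VoIP bandwidth check (illustrative figures):
# G.711 sends 64 kbit/s of audio, typically in packets carrying 20 ms
# (160 bytes) each, and every packet also carries RTP/UDP/IP/Ethernet
# headers.
PAYLOAD_KBPS = 64           # G.711 codec rate
PACKET_PAYLOAD_BYTES = 160  # 20 ms of G.711 audio per packet
HEADER_BYTES = 54           # RTP (12) + UDP (8) + IPv4 (20) + Ethernet (14)

# 8000 payload bytes per second / 160 bytes per packet = 50 packets/s
packets_per_sec = (PAYLOAD_KBPS * 1000 / 8) / PACKET_PAYLOAD_BYTES

# Each packet on the wire is payload + headers
wire_kbps = packets_per_sec * (PACKET_PAYLOAD_BYTES + HEADER_BYTES) * 8 / 1000

print(f"{wire_kbps:.1f} kbit/s per direction")  # 85.6 kbit/s
```

So a throttle that squeezes a subscriber much below roughly 90 kbit/s per direction is enough to degrade a third-party call, even though the nominal codec rate is only 64 kbit/s.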

Bandwidth capping and throttling also discourage companies that have in mind to offer business-related broadband services, including offsite storage, remote access to desktops, and rich Internet applications. Even if these companies are willing to pay for more bandwidth than they need, a large portion of their potential customer base is subject to restrictions that make using the service too risky. Broadband users will have to budget their Internet use without knowing in advance how much of their capacity a particular application uses. When customers exceed their caps or thresholds, they incur overage charges or involuntarily get kicked up to higher rate plans.

In the capped and throttled Internet, a fledgling distributor of independent films doesn't stand a chance, and neither does a provider of streaming content services. To make these businesses even less attractive, telcos and cable are toying with content filtering that identifies suspect data. This is, we're told, to put down a scourge that affects telecommunications and the entertainment industry equally: the transfer of movies, TV shows, and music.

It seems difficult to imagine why telcos and cable care about content -- these are the same people who insist that the technology doesn't exist to, say, keep businesses from having URL typos send workers and customers to porn sites -- until you understand that the next frontier for telecommunications is streaming and downloadable news and entertainment content. That explains why telcos and cable want to restrict the flow of digital content, but surely identifying commercial content is technologically impractical.

No user of an iPhone or T-Mobile G1 can believe this. Hosted services such as Shazam can take a random 15- to 30-second clip from a song, match it against a massive database, and identify the artist, tune, and CD with a link to iTunes or Amazon. Skimming a download stream, including the soundtracks of streaming and downloaded videos, will just as easily identify a film or TV show. That does make life tough for pirates. It also makes it difficult for legitimate content distributors in lines of business such as online movie rentals or Internet radio. It would certainly cut YouTube off at the knees, which would make a lot of big studios and record labels happy.
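The landmark-hashing idea behind services like Shazam can be sketched in a few lines. The sketch below is a toy illustration under simplified assumptions: the integer "features" stand in for spectrogram peak landmarks, and the song names and feature values are invented for the example. A real system extracts peaks from audio and hashes (peak, peak, time-offset) triples, but the matching principle is the same.

```python
def fingerprints(features, fan_out=3):
    """Hash pairs of nearby 'landmark' features, Shazam-style:
    each hash encodes (feature_a, feature_b, time_gap)."""
    hashes = set()
    for i, a in enumerate(features):
        for j in range(i + 1, min(i + 1 + fan_out, len(features))):
            hashes.add((a, features[j], j - i))
    return hashes

# Toy "database": song name -> fingerprint set (features are made up).
database = {
    "song_a": fingerprints([5, 9, 2, 7, 4, 8, 1, 6, 3, 9, 5, 2]),
    "song_b": fingerprints([1, 1, 4, 6, 2, 9, 9, 3, 7, 5, 8, 4]),
}

def identify(clip_features):
    """Match a short clip by counting overlapping fingerprint hashes."""
    clip = fingerprints(clip_features)
    scores = {name: len(clip & fp) for name, fp in database.items()}
    return max(scores, key=scores.get)

# A 5-feature excerpt from the middle of song_a is enough to identify it.
print(identify([7, 4, 8, 1, 6]))  # song_a
```

Because the hashes encode relative time gaps rather than absolute positions, a short excerpt from anywhere in the recording still matches, which is what makes skimming a stream feasible.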

What method, I wonder, would be established for identifying yourself as either duly licensed to distribute the content or covered by Fair Use?
Even if you could negotiate an indulgence with a major telco (you'd have to parley with each of them separately), the customers you're trying to reach run the risk of getting one of those MPAA or RIAA ransom notes. When the content filters kick in, that whole process could be streamlined: the letter could just be stuffed into the envelope along with your phone bill.

I put in that last bit for fun, because I realize that this approaches conspiracy theory. My gut tells me that the customer and business worst case that I've described approximates the future that telcos and cable operators have in mind. I only hope that the new administration and reshaped legislature will realize that pulling the economy out of a nosedive requires the protection of competition and openness of the Internet to ventures of all types and sizes.

Posted by Tom Yager on November 5, 2008 03:00 AM

Tuesday, October 7, 2008

A bold new vision for research networks in Europe

[A committee made up of NRENs, researchers and users has produced a very interesting white paper on the future of research networking in Europe. The paper articulates a vision of a network of NREN and pan-national virtual network peers made up of cross-border fiber and other direct interconnections, rather than relying exclusively on a traditional hierarchical network architecture. I particularly like the idea of a “network factory”, where the job of future network engineers will not be to build networks but to provide tools that let others create their own virtual networks on top of a common substrate made up of NRENs, cross-border links, campus networks and backbone links. Worth reading. Some excerpts from the paper—BSA]

To plan, provide and manage an advanced networking infrastructure interconnecting NRENs via a hybrid optical interconnect that includes GÉANT3 backbone links, direct Cross-Border-Fibres (CBF) and global connections.

To develop and support an agreed upon portfolio of multi-domain services enabling NREN Operators to manage secure and reliable networking solutions across the extended European Research Area and beyond, around the globe.

.. we are witnessing the emergence of a multitude of user-driven virtual networks, and this trend may be considered a new paradigm shift. GÉANT3 should evolve as a network factory, with the network itself being an entity capable of creating and hosting complex objects made of circuits and nodes, enabling community-oriented services. In summary, there will be a constant need to evaluate the significance of advanced technology without compromising the production quality of the GÉANT3 service portfolio.

Network infrastructures are becoming increasingly important with the deployment of services that deliver advanced capabilities to end users through the network. It is expected that the High Performance Computing community, including the growing Grid initiatives and fertilized by the emerging Cloud Computing concept, will continue to exploit the opportunities that access to high-speed networks brings.

Architecturally, the general adoption of Service Oriented Architectures (SOA) will allow the integration of services from many different providers creating a vibrant and competitive set of offerings built on top of the network.

R&E networking is by nature multi-domain, thus services must be established across confederate (loosely coupled) administrative domains: Campuses, NRENs and International interconnections. The latter include GÉANT3 backbone links, Cross-Border-Fibres (CBFs) and connections with global peers. Expedited, eventually automated multi-domain management is an area where our community will continue innovating, triggered by pressing end-user requirements and profiting from the web of trust amongst partner NRENs and their global peers.

Monday, August 25, 2008

Why TiVo and YouTube terrify the broadcasters and carriers

[Despite the availability of hundreds of channels, anybody who has tried to find anything interesting on TV these days, other than the Olympics, will cheer these developments. Personally I can't wait to ditch my cable TV subscription once I can get access to Hulu and a host of other video services over the Internet. Some excerpts from Lauren Weinstein's excellent Network Neutrality blog and GigaOm --BSA]

Why TiVo and YouTube Terrify ISPs

Greetings. TiVo is in the process of introducing a direct interface to
YouTube for their Series 3 and TiVo HD units. I saw it in operation for the
first time yesterday. It is seriously slick. You can browse YouTube on any
old connected TV, watching full-screen with surprisingly high quality,
completely acceptable resolution in most cases (apparently an H.264 codec is
in use).

TiVo has a variety of other broadband content facilities, including
downloading of movies, but the availability of the vast range of YouTube
content, along with the familiar search and "more like this"
features, strikes me as something of a sea change.

Suddenly now, there's always going to be something interesting to watch on
TV. Anyone who can't find anything up their alley on YouTube is most likely
either not trying or dead.

But if viewers are reduced to counting bits by draconian bandwidth caps,
such wonders will be nipped in the bud -- and that's apparently what the
large ISPs would like to see (unless they can get a piece of the action, of
course, in addition to subscriber fees). The sorts of convergence
represented by a broadband TiVo terrify ISPs whose income streams depend
on selling content as well as access.

If a critical mass of viewers becomes comfortable with the concept that
"bits are bits" -- whether they're coming from ISPs' own video services or
from outside Internet sources -- the ISPs' plans to cash in on content are
seriously threatened.

It's becoming increasingly clear that bandwidth caps are being eyed by ISPs
largely as a mechanism to "kill the competition" -- to limit the mass
migration of viewers from traditional program sources to the limitless
bounds of Internet content.

Lauren Weinstein

Can Online Video Support Its Next Generation?

Hayden Black is nice, funny, quotable and makes two critically acclaimed and
modestly popular web shows. He may not have a face for television, but that
hasn't stopped him from becoming the poster boy for a market of online video
producers that has a growing crowd of early-stage startups looking to meet
its needs.

Black, who has never signed an exclusive deal and whose shows - Goodnight
Burbank and Abigail's Teen Diary - are distributed on some 15 different
hosting sites, says he gets pitched at least once a week to try the services
of any number of new online video platforms, video converters, video ad
networks or analytics providers.

Multiple startups are building on what portals such as YouTube, Revver,
Vimeo and Veoh provide to serve people like Black, who are trying to build
an audience and a business around online content. These days, that could
mean anything from citizen journalism like The Uptake to an online
personality like iJustine or a TV network like MTV. Once such potential
customers create their content they need to distribute, organize and promote
it - things existing tools do, just not particularly well.

Earlier this year, Emeryville, Calif.-based TubeMogul raised $1.5 million
from Knight's Bridge Capital Partners, and it's currently trying to raise
more funding. New York City-based blip, a video portal that hosts
independent episodic shows and actively works to foster a community among
its creators, raised money from Ambient Sound Investments and Lauder
Partners last year and is also looking to raise more.

More recently, a crop of new, emerging competitors has been receiving small
chunks of funding as well. Episodic, which promises to be similar to blip,
but with richer web-based tools, raised $1.5 million from Granite Ventures.
Another, 750industries, barely has a web site up for its video marketing
service but was able to raise $1 million from Maples Investments and
Baseline Ventures.

Also notable is Trendessence, a bootstrapped startup founded and staffed by
current and former Stanford students that's currently in stealth mode. The
young company, which has built a platform for online video producers and
advertisers to find each other, has scored meetings with top advertisers
including Procter & Gamble, Unilever, Kraft and Motorola by promising it can
hook them up with the brave new world of online video producers. Other new
and newish players include Viddler (hosting), Zadby (product placement
marketplace), Castfire (hosting) and Video Breakouts (analytics).

The delusions of net neutrality

[Another excellent paper by the famous iconoclast Andrew Odlyzko. My only
comment is that I believe that soon there will be little or no revenue
opportunities in delivering video or other services over the Internet, with
or without deep packet inspection. The revenue opportunities of using QoS
and Deep Packet Inspection will pale against the revenue potential of the
"over the top" providers such as Hulu, Google, Skype etc. So how does a
carrier make a buck and pay for the infrastructure? As Andrew points out
customers want connectivity much more than content. My suggestion is to
bundle broadband with resale of energy, where it is in the carrier's
financial interest to give away the Internet and applications and make money
on connectivity –BSA]

The delusions of net neutrality

Andrew Odlyzko

School of Mathematics, University of Minnesota

Minneapolis, MN 55455, USA

odlyzko at

Revised version, August 17, 2008

Abstract. Service providers argue that if net neutrality is not enforced,
they will have sufficient incentives to build special high-quality channels that
will take the Internet to the next level of its evolution. But what if they do
get their wish, net neutrality is consigned to the dustbin, and they do build their
new services, but nobody uses them? If the networks that are built are the ones
that are publicly discussed, that is a likely prospect.

What service providers publicly promise to do, if they are given complete
control of their networks, is to build special facilities for streaming
movies. But there are two fatal defects to that promise. One is that movies are unlikely to offer all that much revenue. The other is that delivering movies in
real-time streaming mode is the wrong solution, expensive and unnecessary.

If service providers are to derive significant revenues and profits by
exploiting freedom from net neutrality limitations, they will need to engage in much
more intrusive control of traffic than just provision of special channels for
streaming movies.

Thursday, May 22, 2008

New OECD Broadband stats - Canada continues to drop

[The latest OECD broadband stats are now available, with some very interesting analysis of issues related to future FTTH networks. Canada used to be number 2 in the OECD broadband standings. It is now number 10. The CBC report provides a good summary of how Canada continues to lose its broadband edge, which should be an important lesson to other countries looking at the challenges of national broadband. Some excerpts from the OECD and CBC reports. Thanks to Sandy Liu for the CBC pointer--BSA]

Latest OECD Broadband standings: main findings and full report

The report notes that:

• Governments need to promote competition and give consumers more choices. They should encourage new networks, particularly upgrades to fibre-optic lines.

• Governments providing money to fund broadband rollouts should avoid creating new monopolies. Any new infrastructure built using government funds should be open access – meaning that access to that network is provided on non-discriminatory terms to other market participants.

The regulation of new broadband connections using fibre to the end user will likely be the subject of considerable debate in the next few years. The pressing question is whether fibre optic cables extending to homes, buildings and street curbs should be regulated in the same way as traditional copper telephone lines. As new fibre connections may fall outside existing regulatory frameworks, a re-evaluation of existing policies may be required. Regulators should consider whether network architectures still relying on portions of the historical copper telephone infrastructure should be treated differently from new all-fibre networks.

· Regulators and policy makers are increasingly concerned about fostering competition on next-generation broadband networks. Some are examining the functional separation of the dominant telecommunication provider into two units, one which handles the physical lines and the other which provides retail services over the lines as a way to ensure fair and non-discriminatory access to “last mile” infrastructure. The results of functional separation, particularly on investment, are still far from certain and warrant significant research. Regulators should actively consider other policy options at the same time, which may provide similar outcomes – such as requiring operators to share the internal wiring in buildings.

Governments need to actively look for ways to encourage investment in infrastructure. Civil costs (e.g. building roads, obtaining rights of way) are among the largest entry and investment barriers facing telecommunication firms. Governments should take steps to improve access to passive infrastructure (conduit, poles, and ducts) and co-ordinate civil works as an effective way to encourage investment. Access to rights-of-way should be fair and non-discriminatory. Governments should also encourage and promote the installation of open-access, passive infrastructure any time they undertake public works.

Governments should not prohibit municipalities or utilities from entering telecommunication markets. However, if there are concerns about market distortion, policy makers could limit municipal participation to only basic elements (e.g. the provision of dark fibre networks under open access rules).

Maintaining a level playing field, reducing anti-competitive practices in the face of strong network effects, and promoting consumer choice are crucial, particularly considering the increased use of walled-garden approaches as well as cross-industry mergers and acquisitions. With problems such as vertical integration, lock-in of consumers to certain standards, and poor access to certain content, an environment of contestable markets should be created where small and innovative players can compete. Further analysis of recent trends and impacts of concentration is also needed. When necessary, anti-trust and other policies have the means to restore competition.

· It will be crucial to monitor and analyse the new market structures of broadband software, service and content providers in the next few years. Governments have a lot of experience when it comes to ensuring efficient telecommunications markets. However, when it comes to broadband applications, services, software and content, this is mostly new territory. It is important in the coming years that policy makers understand the impacts of new broadband market structures and question whether current policy approaches for ensuring competition actually work.

Governments must intensify efforts to ensure there is sufficient R&D in the field of ICT, so that the economic, social and cultural effectiveness of broadband is guaranteed. The role of government and business in basic R&D may have to be reaffirmed. Any government neglect in this area should be monitored as well as examples of inadequate policy co-ordination, with the aim of increasing the efficiency of broadband-related R&D.

· Strengthening broadband research networks (grids), and facilitating international co-operation through such networks and collaborative research should be a policy priority.

* Denmark, the Netherlands, Iceland, Norway, Switzerland, Finland, Korea and Sweden lead the OECD with broadband penetration well above the OECD average, each surpassing the 30 subscribers per 100 inhabitants threshold.

Canada's global edge in broadband dwindling

Canada's early position as a global broadband internet leader continues to erode, with the country sliding in the latest subscription rankings from the Organization for Economic Co-operation and Development.

Canada had 8.6 million broadband subscribers as of December 2007, or about 26.6 per 100 inhabitants, enough to rank 10th among the 30 developed countries that make up the Organization for Economic Co-operation and Development. In the OECD's previous survey six months ago, Canada ranked ninth, while in 2002 it placed second behind South Korea.

Internet experts said the report painted a poor picture of the state of competition in Canada, where many people tend to have only one or two internet providers — usually a phone company versus a cable firm — to choose from. While ISPs fought vigorously for customers in the early part of this decade by offering enticing deals, which reflected Canada's early lead, they have become less competitive over the past few years.

"This reflects poorly on Canada's advancement in the information economy," said University of Ottawa internet law professor Michael Geist. "Canada remains woefully uncompetitive … We're getting a poor deal."

The average broadband connection in Canada, at about 7 Mbps, ranks below the OECD average. Canada also fared poorly on cost versus the speed provided, ranking 27th out of 30 at $28.14 U.S. for average broadband monthly price per advertised megabit per second.

Issues that are of concern to Canada, he said, are the download limitations imposed on subscribers — caps that have thus far not been introduced by ISPs in the United States. According to the report, download caps could hold a country's businesses back by limiting their online development.

"This may become an economic disadvantage in countries with relatively low bit caps, particularly as more high-bandwidth applications appear," the report said.

Typical limits on Canadian internet connections are 60 GB per month, with higher-end plans offering around 100 GB. In the U.S., ISPs currently give customers unlimited downloading, with Comcast, the nation's largest provider, considering a cap of 250 GB — more than quadruple the typical Canadian limit.
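Some rough arithmetic shows how tight those caps are for video. The stream bitrate below is an assumption (about 2 Mbit/s, typical of standard-definition Internet video of the period), not a figure from the OECD report:

```python
# How many hours of streaming video fit under a monthly transfer cap?
# (2 Mbit/s is an assumed bitrate for a 2008-era SD stream.)
def hours_under_cap(cap_gb, stream_mbps):
    cap_bits = cap_gb * 8 * 1000**3        # decimal GB, as ISPs count them
    return cap_bits / (stream_mbps * 1000**2) / 3600

canada_hours = hours_under_cap(60, 2)      # typical Canadian cap
comcast_hours = hours_under_cap(250, 2)    # proposed Comcast cap

print(f"60 GB cap:  {canada_hours:.0f} hours of 2 Mbit/s video per month")
print(f"250 GB cap: {comcast_hours:.0f} hours")
```

Under these assumptions a 60 GB cap allows roughly 67 hours of viewing a month, a bit over two hours a day, before overage charges kick in, which illustrates why caps weigh on high-bandwidth applications.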

Canada has also not benefited from regulations that allow smaller third-party ISPs to access the networks of large phone companies such as Bell Canada, a practice that has flourished in Europe, Reynolds said. A rule known as "local-loop unbundling" allows smaller ISPs to rent out portions of a large phone company's network, then attach their own equipment to provide customers with internet access.

The OECD report also noted that several countries are taking the lead in the next generation of broadband deployment — superfast fibre networks. About 40 per cent of Japan's broadband connections are fibre, with South Korea coming second at 34 per cent. Most of the OECD — 18 countries, including Canada — have not yet begun rolling out fibre.

Government 2.0: The Next Generation of Democracy

[Some excerpts from my opinion piece at Internet Evolution -- BSA]

Government 2.0: The Next Generation of Democracy

For years, experts predicted e-government would be a driving force for broadband usage. The assumption was that many government services, from dog tags to taxes, could be done electronically over the Internet. Despite modest success in this area, e-government hasn't pushed broadband usage. But now many pundits are starting to realize that the better value of e-government may be allowing the public to have greater input on government operations and processes.

Now, the goal is to use Web-based collaboration to "reinvent government." Well-known author and futurist Don Tapscott advances this concept in his new research project, Government 2.0: Wikinomics, Government & Democracy.

Wikis and Web 2.0 technologies have the potential to fundamentally transform the way we are governed and radically reshape political philosophy. Not since the days of John Locke, Thomas Hobbes, and Jean-Jacques Rousseau have we had such an opportunity, and the tools, to address many shortcomings of democratic society, especially the domination of special interests and lobbyists.

As Winston Churchill once commented, "Democracy is the worst form of government, but it is better than any other type of government we have ever tried." Government 2.0 represents an opportunity to redress the many shortcomings of the worst possible government we have -- democracy as we know it today.

[..] These are just some simple examples of the possibilities of applying Internet and Web 2.0 technologies to how we are governed. We are limited only by our imagination in optimizing the Internet as a revolutionary new tool to truly personalize democracy. For more musings on this subject, please see my blog, Democracy 2.0 – Next Generation Democracy.

Glasnost Internet: The threat of Transparency and Privacy to the Internet

[Around the world there is growing alarm at attempts by carriers, ostensibly for traffic management reasons, to install deep packet inspection equipment, which is now also being used for local web ad insertion and other activities. Network neutrality is thus increasingly also an issue of network privacy. Various organizations, including the prestigious Max Planck Institute, are developing tools so that consumers can discover whether their carrier is doing deep packet inspection and, hopefully, thwart these serious potential threats to consumer privacy. To my mind this issue will never disappear, because the fundamental problem is the current business model of limited competition and the presupposition that the carrier "owns" the last mile and is therefore free to do what it wishes with "its" network. I have long argued that to free ourselves of these threats to Internet privacy and freedom we need a new business model where the consumer "owns" the last mile and is free to connect to any service provider they wish at a neighbourhood carrier-neutral interconnect facility. Next generation Fiber to the Home architectures like CityNet and Burlington Vermont enable this type of capability. For more details see my blog on free fiber to the home. Some pointers from the NNSquad list, Slashdot and Gordon Cook's Arch-econ list - BSA]

"The goal of our Glasnost project is to make access networks, such as
residential cable, DSL, and cellular broadband networks, more
transparent to their customers."

Comcast, Cox Slow BitTorrent Traffic All Day

"A study by the Max Planck Institute for Software Systems found that Comcast and Cox Communications are slowing BitTorrent traffic at all times of day, not just peak hours. Comcast was found to be interrupting at least 30% of BitTorrent upload attempts around the clock. At noon, Comcast was interfering with more than 80% of BitTorrent traffic, but it was also slowing more than 60% of BitTorrent traffic at other times, including midnight, 3 a.m. and 8 p.m. Eastern Time in the U.S., the time zone where Comcast is based. Cox was interfering with 100% of the BitTorrent traffic at 1 a.m., 4 a.m. and 5 a.m. Eastern Time. Comcast spokeswoman Sena Fitzmaurice downplayed the results saying, 'P-to-p traffic doesn't necessarily follow normal traffic flows.'";409444582;pp;1;fp;16;fpid;1

Elude Your ISP's BitTorrent Blockade
"More and more ISPs are blocking or throttling traffic to the peer-to-peer file-sharing service, even if you are downloading copyright free content. Have you been targeted? How can you get around the restrictions? This PC World report shows you a number of tips and tools can help you determine whether you're facing a BitTorrent blockade and, if so, help you get around it."

Deep packet inspection under assault over privacy concerns

Add the Canadian Internet Policy and Public Interest Clinic (CIPPIC) to the list of groups concerned about the privacy implications of widespread deep packet inspection (DPI) by ISPs. CIPPIC has filed an official complaint with Canada's Privacy Commissioner, Jennifer Stoddart, asking her office to investigate Bell Canada's use of DPI (and we're flattered to be quoted as an expert source in the complaint). In addition, the group would welcome a wider investigation into possible DPI use at cable operators Rogers and Shaw, as well.

Charter to monitor surfing, insert its own targeted ads

Some excellent reports on investment and economics of FTTH

[In the last few weeks there have been several excellent reports on the investment and economics of FTTH networks. As well, the OECD should soon be releasing its annual broadband report on this subject. Gordon Cook's ARCH-Econ list is the source of many of these pointers--BSA]

All the presentations of the OECD workshop on FTTH are now available:

Of particular note is the presentation by Herman Wagter of CityNet in Amsterdam, which convincingly demonstrates how separating ownership of the fiber from the companies who deliver services allows for true facilities-based competition, where competitors can use different layer 0 technologies to deliver their services.

Benoit Felten also maintains an excellent blog on the various presentations that were given at the OECD workshop:

Hendrik Rood of Stratix Consulting has just released a very interesting report on FTTH developments in the Netherlands, which has one of the highest FTTH penetrations in the world. At the end of the first quarter of 2008 the Netherlands had 176 thousand FTTH connections.

Direct link to the Stratix paper:

Of particular note, the paper goes into considerable detail explaining the arrival of institutional investors (pension funds, etc.) taking a real estate approach to funding open network infrastructure, and the recognition by the incumbent operator KPN that this model may suit its business needs as well.

"Market entry by infrastructure facility providers with a Real Estate approach like independent Tower companies for mobile service providers and neutral data centre and telehouse facility owners, the development of FTTH in the Netherlands have shown the arrival of a new kind of market entrant: Real Estate financers investing in local loop networks.

The entry of real estate finance may act as a harbinger of a novel market structure with non-incumbents owning those infrastructure facilities with real estate characteristics. Their market arrival could have lasting consequences for regulatory policy of communications infrastructure.

As the Dutch market is now genuinely warming up to Fiber-to-the-Home, while the new open business models with real estate oriented investors are established, Stratix Consulting expects a new development stage with a run-for-the-market, where the market consists of local FTTH projects. Such a stage has happened before in the 1881-1900 period with telephony roll out and the 1960-1980 period of CATV network deployment.

Local loop economics indicates only one network per area to be feasible, in particular under the open network business models. With financiers stepping in and supply constraints visible in construction, we expect mounting citizenry pressure on municipalities and provinces to lure the projects to their area first, aiding constructors by facilitating community drives.

New ITIF Report: “Explaining International Broadband Leadership”

The executive summary does not do this report justice. There are dozens of hidden gems within the report, and I recommend reading it in its entirety. I was very pleased to see from the report's regression analysis that price has the strongest correlation with broadband penetration. This is something I have been claiming since a paper written for Scientific American way back in 1993! That paper demonstrated that, for a variety of telecommunication technologies (telephone, cable, PC), price was the single biggest factor determining adoption rates. Most people are surprised that the telephone took over 75 years to reach 50% penetration, but if you measure the price of telephony relative to average per capita income, you discover that historically it has been a very expensive technology. And what drives price? Competition!

"In a new report examining in depth broadband policies in 9 nations the Information Technology and Innovation Foundation concludes that while we shouldn’t look to other nations for silver bullets or assume that practices in one nation will automatically work in another, U.S. policymakers can and should look to broadband best practices in other nations. Learning the right lessons and emulating the right policies here will enable the United States to improve our broadband performance faster than in the absence of proactive policies. The report analyzes the extent to which policy and non-policy factors drive broadband performance, and how broadband policies related to national leadership, incentives, competition, rural access, and consumer demand affect national broadband performance. Based on these findings the report makes a number of recommendations to boost U.S. broadband performance.

Also included in the report are the updated 2008 ITIF Broadband Rankings, a composite measure of broadband penetration, speed and price among OECD countries. When these factors are considered together, the United States ranks 15th out of 30 OECD nations in broadband performance.

The executive summary can be accessed at

The full report can be accessed at"

FTTH allows teachers in Wyoming to teach English in South Korea

[A great example how broadband provided by FTTH allows new business models to evolve. From a pointer on Gordon Cook's list-BSA]

Broadband Enables Wyoming To Teach English to South Korea

I read a tremendous article found in Jim Baller's [] regular email newsletter earlier this week that highlights a number of interesting and important points.

It details an initiative where 150 teachers are going to be finding employment in Wyoming teaching South Koreans how to speak English.

Firstly, it's a tremendous example of the use of broadband as the teaching is conducted via videoconferencing.

Secondly, they specifically mention that what makes this possible is the fact that Powell, Wyoming, where the teachers will be located, is deploying a full fiber network with the capacity to enable high quality videoconferencing.

Thirdly, it's another example of how broadband enables the creation of new jobs that allow people to work from home.

Fourthly, it shows how there are businesses to be made catering to educational pursuits and not just entertainment related endeavors.

Fifthly, it shows how far ahead South Korea is in their use of broadband to enable better education.

Lastly, and unfortunately not necessarily a positive, it highlights how aggressively South Koreans are pursuing applications that can not only be a good business but also benefit society, as the money behind this comes not from the US but from a South Korean venture capitalist.

Whew, that's a lot of points hit in an article that's not much longer than this post, but there's simply no denying how many relevant points it touches upon.

But what I think I like about it most is that even though it's being funded and driven by South Koreans, it's still creating new jobs here in the US. It's jobs like these that will help us reverse the trend of outsourcing so that other countries can come to rely on the expertise, know how, and hard work of the American people.

And it's important to never forget that this is all possible only through the power of broadband.

Join the hunt to feed the world's hungry through broadband Internet

[Another good example of citizen science. Excerpts from NY Times article -- BSA]

Join the Hunt for Super-Rice

There is no quick fix to the world food crisis, but a project getting underway Wednesday could make a difference in the long run.

A team of researchers at the University of Washington is putting a genomics project on the World Community Grid in the computational search for strains of rice that have traits like higher yields, disease resistance and a wider range of nutrients.

The purpose is to hasten the pace of modern rice genetics, which since the 1960s has delivered a series of new strains, starting with higher-yielding semidwarf varieties, a breakthrough that was hailed as the Green Revolution.

But the demand — all those mouths to feed — keeps rising. Rice is the main staple food for more than half the world’s population. In Asia alone, more than two billion people get up to 70 percent of their dietary energy from rice.

The World Community Grid, begun in 2004, gives selected humanitarian scientific projects access to massive computing resources. It taps the unused computing cycles of nearly one million computers around the world — much like SETI@home, the best-known distributed computing effort, which claims it has harnessed more than 3 million PCs in the search for extraterrestrial life.

The World Community Grid places a small piece of software on your PC that taps your unused computing cycles and combines them with others to create a virtual supercomputer. Its equivalent computing power would make it the world’s third-largest supercomputer, according to I.B.M., which has donated the hardware, software and technical expertise for the project.
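The grid's basic pattern, carving one large computation into independent work units, distributing them, and aggregating the results, can be sketched in a few lines. This toy version runs its work units in local threads rather than on volunteer PCs, and the "scoring" function is a stand-in; real volunteer grids add redundancy and result validation, which are omitted here:

```python
from concurrent.futures import ThreadPoolExecutor

def work_unit(sequence: str) -> int:
    # Stand-in for an expensive job such as protein-structure scoring:
    # here we just count "GC" motifs in a sequence fragment.
    return sequence.count("GC")

def run_grid(sequences, workers=4):
    """Farm independent work units out to a worker pool, aggregate results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work_unit, sequences))

print(run_grid(["GCGC", "AAGC", "TTTT"]))  # 3
```

Because each unit is independent, the same pattern scales from a thread pool to a million donated PCs; only the dispatch layer changes.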

The grid will run a three-dimensional modeling program created by the computational biologists at the University of Washington to study the structures of the proteins that make up the building blocks of rice. Understanding the structures provides clues to their functions, interactions between the molecular parts and how certain desired traits are expressed.


Thursday, May 8, 2008

Collaborative feature film making over the Net

[Here is an interesting project reported on Slashdot. If you are a budding film maker you can join any number of collaborative film projects under development -- BSA]

collaborative film making

John Buckman from Magnatune clues us that the trailer for Iron Sky is available. We've been following the production for some time, as these are the same guys who brought us Star Wreck, the most successful feature-length Internet-distributed film of all time. That film was made by 3,000 people, has been downloaded 8 million times, is under a Creative Commons by-nd-nc license, and made good money both through DVD sales and through an eventual deal with Universal. Iron Sky is being made using Wreck-a-Movie — a collaborative film-making web site (also Creative Commons based) that grew out of the Star Wreck experience.
Wreck A Movie is a new way of creating film brought about by the power of the Internet to connect people and spread information.

Star Wreck Studios is a film studio that is specialized in blending the Internet and the film industry together by unleashing the creative potential of Internet communities, and changing the whole chain of filmmaking. Through the launch of its new online service, the company makes it possible to collaboratively produce professional quality A/V content of all types: from short films to feature films and to all distribution screens – from Internet and mobile to film theater.

Star Wreck Studios was established in February of 2007 by the creators of the world’s first feature-length collaborative Internet film, Star Wreck: In the Pirkinning which has more than 8 million global downloads. Star Wreck Studios is headquartered at the center of the global Internet film world in Tampere, Finland.

It all started with the Star Wreck phenomenon

Only a few years ago, it would have been impossible for a full-length science fiction film from Finland to become a phenomenon on the world stage. This niche film, with Hollywood-quality special effects and subtitles in 30 languages, all made in collaboration with an active, passionate community, was released in August of 2005, going against every industry standard, for only 15 000 €. With today's technology, people cooperating across the Internet, and now instant global distribution, the crew of 5 and 3,000 of their friends across the globe proved the old model of film making and watching is not the only way. The success opens the door for any niche film to get made and then seen by millions.

Based on the experience of creating the Star Wreck phenomenon, Star Wreck Studios has developed a Web platform that is designed to harness the power of passionate Internet communities for creating short films, documentaries, music videos, Internet flicks, full-length features, mobile films and more. Wreck-a-Movie is a social community, simple workflow and marketplace that builds communities around film productions. It helps get films done faster and at a considerably lower cost through crowd-sourced work on production tasks and online resourcing of expertise and corporate funding. The communities developed in production will also create a viral social marketing force that will get films seen through the hundreds of existing online and standard channels.

How to build a submarine cable

[Thanks to George McLaughlin for this pointer -- BSA]

PIPE Networks, who are building a cable from Sydney to Guam and interconnecting there with VSNL (the TYCO Guam-Japan spur) have set up a blog covering various aspects of the build.

Quite interesting, and a pretty open approach (typical of PIPE). Covers
also the terrestrial builds, snippets (with links to more detailed
discussions) on how repeaters work, horizontally drilled ducts, etc, etc
- comments/questions can be posted and get responded to......

Bevan Slattery wrote:
As you may be aware, we have launched our PIPE International website.
We've decided to do something a little different and that is undertake
the construction in an open and transparent manner.
We have included:
- a daily blog
- Progress table
- Discussion Forum
- Photo and video gallery
PPC-1 is an exciting development for competitive international
transmission into Australia and we have decided to go the 'open' route
in order to provide interested parties the ability to check out how
the system is progressing and to ask any questions you may have.
Anyway, thought you might be interested in how to build a submarine cable.

Friday, March 28, 2008

How Non-Net Neutrality Affects Businesses

[Michael Geist has been maintaining an excellent blog on the challenges of net neutrality. While this remains a hot topic in the US, despite Michael's best efforts it has barely caused a ripple in Canada. There is increasing evidence of the impact bit torrent throttling is having on business, competition and Canadian cultural policies, as for example CBC's attempt to distribute DRM-free video content through BitTorrent. As well, geo-blocking is preventing Canadians from watching NHL hockey games over the Internet, which will be distributed in the US by It is incredible that in the country where hockey is considered almost a religion, Canadians do not have the same rights and privileges as their American cousins to watch their national sport. The major ISPs who practice bit throttling and benefit from geo-blocking are also Canada's major distributors of video content via cable TV and satellite. As Michael Geist reports in his blog, "As cable and satellite companies seek to sell new video services to consumers, they simultaneously use their network provider position to lessen competition that seeks to deliver competing video via the Internet. This is an obvious conflict that requires real action from Canada's competition and broadcast regulators" --BSA]

Michael Geist's Blog

How Network Non-Neutrality Affects Real Businesses

Network neutrality leaped back into the headlines last month, when FCC commissioners held a public hearing at Harvard University to examine whether the commission should institute rules to regulate the way Internet service providers (ISPs) manage traffic on their networks. The panel heard from executives representing the two largest ISPs in the Northeast, Comcast and Verizon, along with Internet pundits, politicians and academics.

The hearing coincided with an increasing public awareness that Comcast and dozens of other ISPs (most of them cable TV companies) commonly use methods to throttle some forms of traffic on their networks. They do this to prevent their networks from becoming congested. These methods typically target peer-to-peer traffic from BitTorrent, a popular music and video file sharing program the ISPs say generates a third or more of their traffic.

Accordingly, BitTorrent has become the debate’s poster child, pushing much of the net neutrality debate into endless arguments over free speech, copyright law and what—if anything—should constitute “legal use” of the Net.

But there’s another side to this debate, one that gets far too little attention. In their attempt to limit BitTorrent and other peer-to-peer file sharing traffic, some ISPs have unwittingly caused collateral damage to other, unrelated businesses and their users. For example, some Web conferencing providers have seen their services slow to a crawl in some regions of the world because of poorly executed traffic management policies. Since ISPs often deny they use such practices, it can be exceedingly difficult to identify the nature of the problem in an attempt to restore normal service.

My company, Glance Networks, has first hand experience. Glance provides a simple desktop screen sharing service that thousands of businesses use to show online presentations and web demos to people and businesses worldwide. When a Glance customer hosts a session, bursts of high speed data are sent each time the person’s screen content changes. The Glance service forwards these data streams to all guests in the session, so they can see what the host sees. The streams need to flow quickly, so everyone’s view stays in sync.

One day a few years ago, our support line got a spate of calls from customers complaining that our service had suddenly slowed to a crawl. We soon realized the problem was localized to Canada, where nearly everyone gets their Internet service through one of just two ISPs. Sure enough, posts on blogs indicated that both of these ISPs had secretly deployed “traffic shaping” methods to beat back the flow of BitTorrent traffic. But the criteria their methods used to identify the streams were particularly blunt instruments that not only slowed BitTorrent, but many other high-speed data streams sent by their customers’ computers.

This experience illustrates why additional rules need to be imposed on ISPs. While we were working the problem, customers were understandably stuck wondering who was telling them the truth. Their ISP was saying “all is well” and that “nothing has changed”, both of which turned out to be wrong. But how were they to know? Their other Web traffic flowed normally. From their perspective, only our service had slowed.

Luckily, we quickly discovered that by changing a few parameters in our service, we were able to restore normal performance to our Canadian customers. But the Canadian ISPs were of no help. For over a year, they denied even using traffic shaping, let alone what criteria they used to single out “bad” traffic. We were forced to find our own “workaround” by trial and error.

And there’s the rub.

Imagine for a moment that regional phone companies were allowed to “manage their congestion” by implementing arbitrary methods that block a subset of phone calls on their network. People whose calls got blocked would be at a loss to know why some calls failed to connect, while others continued to go through normally. Such behavior would never be tolerated in our telephony market. Yet we allow ISPs to “manage their congestion” this way today.

In a truly open marketplace, we could expect market forces to drive bad ISPs out of the market. But most ISPs are monopolies, for good reason. Their infrastructure costs are enormous. The right to have a monopoly, however, must always be balanced by regulations that prevent abuse of that right.

Business and markets cannot thrive when ISPs secretly delay or discard a subset of their traffic. Networks need to be free of secret, arbitrary traffic management policies. Just because an ISP’s network suffers chronic congestion, that ISP cannot be allowed to selectively block arbitrary classes of traffic.


Meanwhile, FCC commissioners need to understand that arbitrary and secret traffic management policies have already impacted businesses unrelated to the peer-to-peer file sharing applications targeted by those policies. These are not hypothetical scenarios. The ongoing threat to legitimate Web services that businesses and consumers depend upon daily is real.

The FCC must impose rules that prevent ISPs from implementing such policies. ISPs that oversold capacity must respond with improved pricing plans, not traffic blocking policies. To let the status quo continue imperils legitimate users of the global information infrastructure that so many of us depend upon daily.

The Bell Wake-Up Call

For months, I've been asked repeatedly why net neutrality has not taken off as a Canadian political and regulatory issue. While there has been some press coverage, several high-profile incidents, and a few instances of political or regulatory discussion (including the recent House of Commons Committee report on the CBC), the issue has not generated as much attention in Canada as it has in the United States.

The reported impact of traffic shaping on CBC downloads highlights the danger that non-transparent network management practices pose to the CBC's fulfillment of its statutory mandate to distribute content in the most efficient manner possible. This should ultimately bring cultural groups like Friends of the CBC into the net neutrality mix. Moreover, it points to a significant competition concern. As cable and satellite companies seek to sell new video services to consumers, they simultaneously use their network provider position to lessen competition that seeks to deliver competing video via the Internet. This is an obvious conflict that requires real action from Canada's competition and broadcast regulators.

The Bell throttling practices also raise crucial competition issues.
The CRTC has tried to address limited ISP competition by requiring companies such as Bell to provide access to third-party ISPs that "resell" Bell service with regulated wholesale prices that lead to a measure of increased competition. Indeed, there are apparently about 100 companies that currently resell Bell access services. Many have made substantial investments in their own networks and have loyal customer bases that number into the tens of thousands.

Those same companies have expressed concern to Bell about the possibility that it might institute throttling and thereby directly affect their services. Until yesterday, Bell had sought to reassure the companies that this was not their plan. For example, in response to a question about network speeds to resellers, it told the CRTC in 2003 that:

Bell irks ISPs with new throttling policy

CBC To Release TV-Show via BitTorrent, For Free

CBC, Canada’s public television broadcaster has plans to release the
upcoming TV-show “Canada’s Next Great Prime Minister” for free via
BitTorrent. This makes CBC the first North-American broadcaster to
embrace the popular filesharing protocol.

According to an early report, high-quality copies of the show will be
published the day after it airs on TV, without any DRM restrictions.

CBC is not alone in this, European broadcasters, including the BBC,
are currently working on a next generation BitTorrent client that will
allow them to make their content available online. The benefit of
BitTorrent is of course that it will reduce distribution costs.

The popularity of movies and TV-shows on BitTorrent hasn’t gone
unnoticed. We reported earlier that some TV-studios allegedly use
BitTorrent as a marketing tool, and others leaking unaired pilots

 Blocks Canadians from NHL Games

Tuesday, March 11, 2008

GeoBlocking: Why Hollywood itself is a major cause of piracy

[From my opinion piece at Thinkernet --BSA]

GeoBlocking: Why Hollywood itself is a major cause of piracy

There is a lot of buzz in the popular press about video sites where you can legally download (for a fee) movies and TV shows across the Internet -- such as Apple’s iTunes and Amazon’s UnBox. Unfortunately, the content on most of these services is only available to U.S. citizens. The rest of the world cannot legally download this same content because of a practice called "geo-blocking."

Geo-blocking is a technique used by Hollywood distributors to block their content from being accessed by online viewers outside the U.S. The system identifies users by their IP address to determine if they reside in the U.S. or not. There are several sites that provide services to get around geo-blocking, but they tend to be cumbersome and slow -- and you need a degree in Geekology to use them properly.

Hollywood studios generally are keen on geo-blocking because they can extract more revenue from the traditional "windows" process of first distributing through theaters, rentals, pay per view, and finally on cable TV.

Geo-blocking is also a convenient arrangement for international cable companies and culture regulators. Both are terrified of the contrary implications: Their citizens can have free and open access to popular American culture bypassing their own regulatory controls and wallets.


Thursday, March 6, 2008


WASHINGTON D.C. - March 3, 2008 --, a new web site designed to help Internet users measure and gauge broadband availability, competition, speeds, and prices, on Monday announced the availability of a beta version of an Internet speed test at Through the release of the beta version, encourages testing and feedback of the technology in preparation for a national release.

The speed test seeks to allow consumers all across America to test their high-speed Internet connections to determine whether broadband providers are delivering the promised services. At, users can learn about local broadband availability, competition, speeds and service. By participating in the speed test and an anonymous online census questionnaire, users can greatly contribute to the nation's knowledge and understanding about the state of the nation's broadband competition and services.

"We believe the Broadband Census will provide vital statistics to the public and to policy makers about the true state of broadband in our country today," said Drew Clark, Executive Director of "By releasing a beta version of the speed test, we hope to encourage feedback from early adopters in the research and education community so that we can create an even more robust mechanism for collecting broadband data." is deploying the NDT (Network Diagnostic Tool), an open-source network performance testing system designed to identify computer configuration and network infrastructure problems that can degrade broadband performance. The NDT is under active development by the Internet2 community, an advanced networking consortium led by the research and education community. The NDT has been used by other broadband mapping endeavors, including the eCorridors Program at Virginia Tech, which is working to collect data of residential and small business broadband trends throughout the Commonwealth of Virginia.

"Internet2 supports its more than 300 member organizations in getting the best performance from their advanced network connections," said Gary Bachula, Internet2 vice president for external relations. "We are pleased that the Network Diagnostic Tool can play an important role in helping U.S. citizens and policy makers gain a better understanding of existing broadband services. This information will help consumers and policy makers make better decisions about future broadband services," said Bachula.

"The eCorridors Program endorses and supports the Broadband Census as a means of continuing the effort with the participation of key national players," said Brenda van Gelder, Program Director of eCorridors. Virginia Tech launched the first of its kind community broadband access map and speed test in July 2006. "We believe that mapping broadband along with these other factors can have significant political and economic impacts by providing the public a user-friendly, grassroots tool for maintaining oversight of available internet services, applications, bandwidth and pricing."

The NDT provides network performance information directly to a user by running a short diagnostic test between a Web browser on a desktop or laptop computer and one of several NDT servers around the country. The NDT software helps users get a reading on their network speed and also to understand what may be causing specific network performance issues.
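The core speed number such a test reports is simple even though NDT's deeper diagnostics are not: transfer a known amount of data, time it, and convert to megabits per second. A minimal sketch in Python (the function names and the idea of timing a plain HTTP download are illustrative assumptions, not how NDT itself is implemented):

```python
import time
import urllib.request

def throughput_mbps(num_bytes: int, elapsed_s: float) -> float:
    """Convert a byte count and elapsed time into megabits per second."""
    return (num_bytes * 8) / (elapsed_s * 1_000_000)

def measure_download(url: str, chunk_size: int = 64 * 1024) -> float:
    """Time a full download of `url` and return the observed rate in Mbit/s."""
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return throughput_mbps(total, time.monotonic() - start)

# e.g. 5 MB transferred in 2.5 seconds is a 16 Mbit/s connection
print(throughput_mbps(5_000_000, 2.5))  # prints: 16.0
```

NDT goes well beyond this by inspecting TCP state on the server side, which is how it can distinguish a slow access link from a misconfigured host.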

Congress and state government officials have all recently focused on the need for better broadband data. And the Federal Communications Commission last week called for greater transparency about the speeds and prices of service offered by broadband carriers.

Rep. Ed Markey, D-Mass., Chairman of the House Subcommittee on Telecommunications and the Internet, has introduced legislation that would provide the public with better broadband information. Markey's "Broadband Census of America Act," H.R. 3919, has passed the House of Representatives and is now before the Senate.

By allowing users to participate in collecting Broadband Census data, the site aims to build on these initiatives, and to provide consumers and policy-makers with timely tools for understanding broadband availability, adoption and competition.

Additionally, the Pew Internet & American Life Project has contracted with the Broadband Census to gather anonymized information about users' broadband experiences on the web site, and to incorporate those findings into Pew's 2008 annual broadband report.

"Connection speed matters greatly to people's online surfing patterns, but few home broadband users know how fast their on-ramp to cyberspace is," said John Horrigan, Associate Director for Research with the Pew Internet & American Life Project. "The Broadband Census will help fill a gap in understanding how evolving broadband networks influence users' online behavior."

The Broadband Census is made available under a Creative Commons Attribution-Noncommercial License. That means that the content on the site is available for all to view, copy, redistribute and reuse for FREE, provided that attribution is given to the Broadband Census, and that such use is for non-commercial purposes.

About the Broadband Census
Broadband Census LLC is organized as a Limited Liability Company in the Commonwealth of Virginia. Drew Clark is the principal member of Broadband Census LLC. To find out more about the organizations and individuals providing financial, technical, research or outreach support to the Broadband Census, please visit the web site.

About Pew Internet & American Life Project:
The Pew Internet & American Life Project produces reports that explore the impact of the internet on families, communities, work and home, daily life, education, health care, and civic and political life. The Project aims to be an authoritative source on the evolution of the internet through collection of data and analysis of real-world developments as they affect the virtual world.

About Virginia Tech e-Corridors Project:
eCorridors is an outreach program of Virginia Tech that was established in 2000. Its activities include telecommunications policy, communications infrastructure, research and other computing applications as well as community networks and economic development in a networked world. eCorridors is a primary means through which government, private sector industry and community stakeholders participate and collaborate with Virginia Tech researchers and IT professionals.

Drew Clark, Executive Director
Telephone: 202-580-8196

Microsoft's Google-killer strategy: Finally on the way?

[Microsoft's "cloud" strategy, which has been rumoured for some time, looks like it will soon come to fruition. Given that Microsoft has hired big eScience names like Tony Hey and Dan Reed, I suspect this cloud strategy will have a major impact on future cyber-infrastructure projects and will be a strong competitor to Amazon EC2. Thanks to Digg and Gregory Soo for these pointers--BSA]

From Digg news

"The new strategy will, I'm told, lay out a roadmap of moves across three major areas: the transformation of the company's portfolio of enterprise applications to a web-services architecture, the launch of web versions of its major PC applications, and the continued expansion of its data center network. I expect that all these announcements will reflect Microsoft's focus on what it calls "software plus services" - the tying of web apps to traditional installed apps - but they nevertheless promise to mark the start of a new era for the company that has dominated the PC age."

Microsoft to build Skynet, send Terminators back to 20th century to preempt Google....

Nick / Rough Type:
Rumor: Microsoft set for Vast data-center push — I've received a few more hints about the big cloud-computing initiative Microsoft may be about to announce, perhaps during the company's Mix08 conference in Las Vegas this coming week. ...The construction program will be "totally over the top," said a person briefed on the plan. The first phase of the buildout, said the source, will include the construction of about two dozen data centers around the world, each covering about 500,000 square feet or more (that's a total 12 million sq ft). The timing of the construction is unclear....

Excellent comments by David P Reed on Network Neutrality

[At the recent FCC hearings in Boston, David Reed, a professor at MIT, gave a very compelling argument regarding Comcast's efforts to throttle BitTorrent in terms of reasonable traffic management. My personal interpretation of David's comments is that cablecos and telcos have entered into a contract with users to provide access to the "Internet". The Internet is not a product or service developed exclusively by the cablecos or telcos for use and enjoyment by their customers, as for example traditional cell phone service. Since the Internet is a global service with its own set of engineering principles, guidelines and procedures, implicit in providing access to the Internet is, in essence, an unwritten contract to adhere to those recognized standards such as the end2end principle. No one questions the need for traffic management, spam control and other such services, but they should be done in a way that is consistent with the open and transparent engineering practices that are part and parcel of the contract with the user in providing access to the global Internet. -- BSA]

Cyber-infrastructure cloud tools for social scientists

[Here is a great example of using cyber-infrastructure cloud tools for social science applications. The NY Times project is typical of many social science projects where thousands of documents must be digitized and indexed. The cost savings compared to operating a cluster are impressive. Also, it is exciting to see the announcement from NSF promoting an industrial research partnership with Google and IBM on clouds. Thanks to Glen Newton for this pointer -- BSA]

Hadoop + EC2 + S3 = Super alternatives for researchers (& real people too!)

I recently discovered and have been inspired by a real-world and non-trivial (in space and in time) application of Hadoop (an open-source implementation of Google's MapReduce) combined with the Amazon Simple Storage Service (Amazon S3) and the Amazon Elastic Compute Cloud (Amazon EC2). The project was to convert pre-1922 New York Times articles-as-scanned-TIFF-images into PDFs of the articles:

4 TB of data loaded to S3 (TIFF images)
+ Hadoop (+ Java Advanced Imaging and various glue)
+ 100 EC2 instances
+ 24 hours
= 11M PDFs, 1.5 TB on S3
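The recipe above works because the job is embarrassingly parallel: each map task handles one article independently, reading its TIFF images from S3 and writing a PDF back. A sketch of what the map step could look like under Hadoop Streaming, in Python (the input format, S3 key layout, and output record are invented for illustration; the NYT's actual code used Java and the Java Advanced Imaging library):

```python
def map_line(line: str) -> str:
    """One input line per article: '<article_id>\\t<comma-separated TIFF keys>'.

    A real mapper would fetch the TIFFs from S3, merge them into a PDF,
    upload the PDF back to S3, and emit a status record for logging.
    Here we only compute the (hypothetical) output key and page count.
    """
    article_id, keys = line.rstrip("\n").split("\t")
    pages = len(keys.split(","))
    pdf_key = f"pdfs/{article_id}.pdf"  # hypothetical output key on S3
    return f"{article_id}\t{pdf_key}\t{pages} pages"

# Hadoop Streaming would feed lines like this on stdin, one per map record:
print(map_line("0001\tarticles/0001/p1.tif,articles/0001/p2.tif"))
```

Because each article is independent, Hadoop can scatter the input lines across the 100 EC2 instances and simply rerun any task that fails.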

Unfortunately, the developer (Derek Gottfrid) did not say how much this cost the NYT. But here is my back-of-the-envelope calculation (using the Amazon S3/EC2 FAQ):

EC2: $0.10 per instance-hour x 100 instances x 24hrs = $240
S3: $0.15 per GB-Month x 4500 GB x ~1.5/31 months = ~$33
+ $0.10 per GB of data transferred in x 4000 GB = $400
+ $0.13 per GB of data transferred out x 1500 GB = $195
Total: = ~$868

Not unreasonable at all! Of course this does not include the cost of bandwidth that the NYT needed to upload/download their data.
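The arithmetic is easy to double-check; here is the same back-of-the-envelope in Python, using the 2008 prices quoted above (the ~1.5/31 factor prorates a month of S3 storage down to roughly the day and a half the data sat there):

```python
# Reproduce the back-of-the-envelope AWS bill from the figures above
ec2        = 0.10 * 100 * 24          # $/instance-hour x 100 instances x 24 hrs
s3_storage = 0.15 * 4500 * 1.5 / 31   # $/GB-month x 4500 GB, prorated
xfer_in    = 0.10 * 4000              # $/GB transferred in x 4000 GB
xfer_out   = 0.13 * 1500              # $/GB transferred out x 1500 GB
total = ec2 + s3_storage + xfer_in + xfer_out
print(round(ec2), round(s3_storage), round(xfer_in), round(xfer_out), round(total))
# prints: 240 33 400 195 868
```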

I've known about MapReduce and Hadoop for quite a while now, but this is the first use outside of Google (MapReduce) and Yahoo (Hadoop), combined with Amazon services, where I've seen such a real problem solved so smoothly, and one that wasn't web indexing or a toy example.

As much of my work in information retrieval and knowledge discovery involves a great deal of space and even more CPU, I am looking forward to experimenting with this sort of environment (Hadoop, local or in a service cloud) for some of the more extreme experiments I am working on. And by using Hadoop locally, if the problem gets too big for our local resources, we can always buy capacity, as the NYT example shows, with a minimum of effort!

This is also something that various commercial organizations (and even individuals?) with specific high-CPU / high-storage / high-bandwidth compute needs should be considering (and transfers between S3 and EC2 are free). Of course, security and privacy concerns apply.

Breaking News:
NSF Teams w/ Google, IBM for Academic 'Cloud' Access

Feb. 25 -- Today, the National Science Foundation's Computer and Information Science and Engineering (CISE) Directorate announced the creation of a strategic relationship with Google Inc. and IBM. The Cluster Exploratory (CluE) relationship will enable the academic research community to conduct experiments and test new theories and ideas using a large-scale, massively distributed computing cluster.

In an open letter to the academic computing research community, Jeannette Wing, the assistant director at NSF for CISE, said that the relationship will give the academic computer science research community access to resources that would be unavailable to it otherwise.

"Access to the Google-IBM academic cluster via the CluE program will provide the academic community with the opportunity to do research in data-intensive computing and to explore powerful new applications," Wing said. "It can also serve as a tool for educating the next generation of scientists and engineers."

"Google is proud to partner with the National Science Foundation to provide computing resources to the academic research community," said Stuart Feldman, vice president of engineering at Google Inc. "It is our hope that research conducted using this cluster will allow researchers across many fields to take advantage of the opportunities afforded by large-scale, distributed computing."

"Extending the Google/IBM academic program with the National Science Foundation should accelerate research on Internet-scale computing and drive innovation to fuel the applications of the future," said Willy Chiu, vice president of IBM software strategy and High Performance On Demand Solutions. "IBM is pleased to be collaborating with the NSF on this project."

In October of last year, Google and IBM created a large-scale computer cluster of approximately 1,600 processors to give the academic community access to otherwise prohibitively expensive resources. Fundamental changes in computer architecture and increases in network capacity are encouraging software developers to take new approaches to computer-science problem solving. In order to bridge the gap between industry and academia, it is imperative that academic researchers are exposed to the emerging computing paradigm behind the growth of "Internet-scale" applications.

This new relationship with NSF will expand access to this research infrastructure to academic institutions across the nation. In an effort to create greater awareness of research opportunities using data-intensive computing, the CISE directorate will solicit proposals from academic researchers. NSF will then select the researchers to have access to the cluster and provide support to the researchers to conduct their work. Google and IBM will cover the costs associated with operating the cluster and will provide other support to the researchers. NSF will not provide any funding to Google or IBM for these activities.

While the timeline for releasing the formal request for proposals to the academic community is still being developed, NSF anticipates being able to support 10 to 15 research projects in the first year of the program, and will likely expand the number of projects in the future.

Information about the Google-IBM Academic Cluster Computing Initiative can be found online.

According to Wing, NSF hopes the relationship may provide a blueprint for future collaborations between the academic computing research community and private industry. "We welcome any comparable offers from industry that offer the same potential for transformative research outcomes," Wing said.


Source: National Science Foundation

Monday, February 25, 2008

More on The fallacy of bandwidth on demand

[An excellent rebuttal by Bill Johnston from ESnet, although I have not changed my opinion. I want to make one point clear - I am a big believer that there will be a huge demand for dedicated optical circuits; I just don’t see the need for bandwidth on demand or reservation, or fast optical switching for that matter, in order to optimize utilization of bandwidth -- BSA]

Needless to say, I don't agree with the views you expressed.

There are two uses for the suite of end-to-end (virtual) circuit tools that ESnet, Internet2, DANTE, and the European NRENs are developing that, among other things, provide schedulable bandwidth (which has a bit different flavor than BOD, but this is still what people frequently call it). These tools also provide the network engineers with very powerful traffic engineering capability.

Re: the rate of growth of big science traffic: The rate of growth - without any of the next generation of scientific instruments yet fully operational - is tracking very steadily at 10X every 4 years, resulting in an average of 10 Gb/s in our core by 2009-10 and 100 Gb/s by 2013-14. This growth has been consistent since 1990. However, this does not account for any “disruptive” use of the network by science that we have not seen in the past, and there are several of these showing up in the next several years. The LHC is the first of the big next-generation experiments, and even in the data system testing phase (which is now winding down as CERN gets ready for first operation later this year) it has already saturated both ESnet and Internet2 circuits in the US, necessitating a lot of mixed circuit + "cloud" traffic engineering. By 2008-09 the LHC is expected to be generating 20-30 Gb/s steady-state traffic (24 hrs/day, 9 mo./yr) into ESnet to the data centers and then out to Internet2, NLR, and the RONs to the analysis systems at universities.
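A 10X-every-4-years trend compounds at roughly 78 percent per year (10^(1/4) ≈ 1.78), which is what makes the 10 Gb/s (2009-10) to 100 Gb/s (2013-14) figures self-consistent. A small sketch of the extrapolation (treating 2010 as the base year is my assumption, read off the figures above):

```python
def projected_traffic_gbps(base_gbps: float, base_year: int, year: int,
                           tenfold_years: float = 4.0) -> float:
    """Extrapolate traffic that grows 10x every `tenfold_years` years."""
    return base_gbps * 10 ** ((year - base_year) / tenfold_years)

# 10 Gb/s in 2010, growing 10x per 4 years, reaches 100 Gb/s by 2014
print(projected_traffic_gbps(10, 2010, 2014))  # prints: 100.0
```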

There are several important science uses of the virtual circuit tools' capabilities:
1) One (and the reason for the far-flung collaboration noted above) is to manage bandwidth on long, diverse, international virtual circuits where there are frequently (nay, almost always) bandwidth issues somewhere along the multi-domain path. The circuit tools have already proved important where some level of guaranteed bandwidth is needed end-to-end between institutions in the US and Europe in order to accomplish the target science.

2) There are science applications - which we are quite sure will be showing up in the next year because we are tracking the construction of the instruments and the planned methodology for accomplishing the science - where remote scientists working on experiments with a real-time aspect will definitely require guaranteed bandwidth in order to ensure the success of the experiment. This use is scheduled and frequently of limited time duration, but requires guaranteed bandwidth in order to get instrument output to analysis systems and science feedback back to the instrument on a rigid time schedule. I won't go into details here, but there are a number of documented case studies that detail these needs. See, e.g., “Science Network Requirements (ppt)” and “Science-Driven Network Requirements for ESnet: Update to the 2002 Office of Science Networking Requirements Workshop Report - February 21, 2006”.

There are very practical reasons why we do not build larger and larger "clouds" to address these needs: we can't afford it. (To point out the obvious, the R&E networks in the US and Europe are large and complex and expensive to grow - the Internet2-ESnet network is 14,000 miles of fiber organized into six interconnected rings in order to provide even basic coverage of the US. The situation is more complex, though not quite as large in geographic scale, in Europe.) We have to drop down from L3 in order to use less expensive network equipment. As soon as you do that, you must have circuit management tools in order to go e2e, as many sites only have IP access to the network. Since this involves a hybrid circuit-IP network, you end up having to manage virtual circuits that stitch together the paths through different sorts of networks and provide ways of steering the large-scale science traffic into the circuits. Given this, providing schedulable, guaranteed bandwidth is just another characteristic of managing the e2e L1-L2-L3 hybrid VCs.

I have watched big science evolve over more than 35 years, and the situation today is very different than in the past. There is no magic in this - the root causes are a combination of 1) the same Moore's law that drives computer chip density is driving the density of the sensing elements in large detectors of almost all descriptions, and 2) very long device refresh times. The technology "refresh" time for large scientific instruments is much, much longer than in the computing world. This is mostly because the time to design and build large science instruments is of order 10-15 years, and the next generation of instruments just now starting to come on-line is the first to make use of modern, high-density chip technology in sensors (and therefore generates vastly more data than older instruments). (This is an "approximation," but my observation indicates it is true to first order.) This is why we are about to be flooded with data - and at a rate that, I believe, given the R&E network budgets, we will just barely be able to keep ahead of in terms of provisioning more and more lambdas each year. And keeping up with demand assumes 100G lambdas and associated deployed equipment by 2012-ish for essentially the same cost as 10G today.

End-to-end circuit management (including allocation of bandwidth as a potentially scarce resource) is rapidly becoming a reality in today’s R&E networks and is going to become more important in the next 5 years. Schedulable bandwidth allocation is one of several aspects of the VC management.


The fallacy of bandwidth on demand

[Some excerpts from my opinion piece at Internet Evolution -- BSA]

Around the world, many National Research and Education Networks (NRENs) are focusing on various bandwidth-on-demand schemes for the future Internet architecture that will be used primarily for big science and cyber-infrastructure applications. The assumption is that in the future, big science institutions will produce such volumes of data that this traffic alone will easily exceed the capacity of today’s optical networks. If you think you have heard this story before, you’re right.

These same arguments were used to justify the need for ISDN (Integrated Services Digital Network), ATM (Asynchronous Transfer Mode), GMPLS (Generalized Multiprotocol Label Switching), and QoS (Quality of Service). These technologies were also premised on the assumption that network capacity would never be able to keep up with demand. Ergo, you needed an “intelligent” network to anticipate the applications demand for bandwidth.

Once again, we are hearing the same old litany that we need optical switched networks and optical bandwidth on demand as networks will be unable to keep up with anticipated traffic growth, especially for big science.

The fact is, no evidence exists yet that big science traffic volumes, or for that matter Internet traffic volumes, are growing anywhere near what was forecast, even just a few short years ago.

Combined with the slow ramp-up of big science traffic volumes, the business case for optical bandwidth on demand is also challenged by new optical network technology, which will dramatically increase existing optical networks by several orders of magnitude. Vendors like Alcatel-Lucent (NYSE: ALU), Nortel Networks Ltd. (NYSE/Toronto: NT), Ciena Corp. (Nasdaq: CIEN), and Infinera Corp. (Nasdaq: INFN) are developing next-generation optical networks using a new technique called coherent optical technology.

This will allow in-field upgrades of existing optical networks to carry 100 Gigabits per wavelength using existing 10 Gigabit wavelength channels. The vendors expect that this can be accomplished without any changes to the core of the network, such as optical repeaters, fiber, etc. Research into coherent optical technology is only in its infancy; even more dramatic jumps in network capacity will probably be announced in the coming years.

The argument for bandwidth on demand is further undermined with the advent of the new commercial “cloud” services offered by Google (Nasdaq: GOOG), Amazon, and others. Increasingly, big science and computation will move to these clouds for convenience, cost, and ease of use. Just like clouds allow the creation of shared work spaces and obviate the need for users to email document attachments to all and sundry, they will also minimize the need to ship big science and data files back and forth across NRENs. All computation and data will be accessible locally, regardless of where the researcher is located.


As predicted, new tools to thwart traffic shaping by telcos and cablecos

[Will the telcos and cablecos ever learn? Implementing traffic shaping tools and trying to block BitTorrent and other applications will eventually backfire. Hackers are already working on tools to thwart such attempts. No one questions that carriers have the right to manage traffic on their networks. But using secretive techniques without informing users will guarantee that the carriers will be saddled with some sort of network neutrality legislation. Instead they should be focusing on traffic engineering techniques that enhance the user's P2P experience by establishing BitTorrent supernodes, etc. Thankfully, a consortium of ISPs and P2P companies has been created to come up with such solutions.--BSA]

[From a posting by Lauren Weinstein of NNSquad]

As predicted, P2P extensions to thwart ISP "traffic shaping" and "RST
injections" are in development. We can assume that ISPs will attempt
to deploy countermeasures, then the P2P folks will ratchet up another
level, and ... well, we may well end up with the Internet version of
the Cold War's wasteful and dangerous Mutually Assured Destruction
(MAD). There's gotta be a better way, folks.

"The goal of this new type of encryption (or obfuscation) is to prevent ISPs from blocking or disrupting BitTorrent traffic connections that span between the receiver of a tracker response and any peer IP-port appearing in that tracker response," according to the proposal. "This extension directly addresses a known attack on the BitTorrent protocol performed by some deployed network hardware."

[Thanks to Matt Petach for these notes from NANOG]

2008.02.18 Lightning talk #1
Laird Popkin, Pando networks
Doug Pasko, Verizon networks

P4P: ISPs and P2P

DCIA, the Distributed Computing Industry Association

P2P and ISPs
P2P market is maturing

digital content delivery is where things are heading;
content people are excited about p2p as disruptive
way to distribute content.
BBC doing production quality P2P traffic;
rapidly we're seeing huge changes, production
people are seeing good HD rollout.

Nascent P2P market pre 2007
Now, P2P has become a key part of the portfolio
for content delivery

P2P bandwidth usage
cachelogic slide, a bit dated, with explosion of
youtube, the ratio is sliding again the other way,
but it's still high.

Bandwidth battle
ISPs address P2P
upgrade network
deploy p2p caching
terminate user
rate limit p2p traffic

P2P countermeasures
use random ports

Fundamental problem; our usual models for managing
traffic don't apply anymore. It's very dynamic, moves
all over the place.
DCIA has P4P working group, goal is to get ISPs working
with the p2p community, to allow shared control of
the infrastructure.

Make tracking infrastructure smarter.

Pando, Verizon, and a bunch of other members.
There's companies in the core working group,
and many more observing.

Goal is to design a framework to allow ISPs and P2P
networks to guide connectivity to optimize traffic
flows, provide better performance and reduce network load.
P2P alone doesn't understand topology, and has no
idea of cost models and peering relationships.
So, goal is to blend business requirements together
with network topology.
Reduce hop count, for example.

Want an industry solution to arrive before a
regulatory pressure comes into play.

Drive the solution to be carrier grade, rather
than ad-hoc solutions.

P2P applications with P4P benefits
better performance, faster downloads
less impact on ISPs results in fewer restrictions

P4P enables more efficient delivery.
CDN model (central pushes, managed locations)
P2P, more chaotic, no central locations,
P2P+P4P, knowledge of ISP infrastructure, can
form adjacencies among local clients as much
as possible.

Traditional looking network management, but pushed
to network layer.

P4P goals
share topology in a flexible, controlled way;
sanitized, generalized, summarized set of information,
with privacy protections in place; no customer or user information out, without security concerns.

Needs to be flexible to be usable across many P2P
applications and architectures (trackers, trackerless)
Needs to be easy to implement, want it to be an open
standard; any ISP/P2P can implement it.

P4P architecture slide
P2P clients talk to Ptracker to figure out who to
talk to; Ptracker talks to Itracker to get guidance
on which peers to connect to which; so peers get told
to connect to nearby peers.

It's a joint optimization problem; minimize utilization
by P2P, while maximizing download performance.

At the end of this, the goal is for the customer to have a better experience.

Data exchanged in P4P; network maps go into Itracker,
provides a weight matrix between locations without
giving topology away.
Each PID has IP 'prefix' associated with it in the
matrix, has percentage weighting of how heavily
people in one POP should connect to others.
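To make the pTracker/iTracker exchange concrete, here is a toy sketch in Python of peer selection biased by such a weight matrix (the PID names, weights, and the simple weighted draw are all invented for illustration; the real P4P interface is more involved):

```python
import random

# Hypothetical iTracker output: for each source PID (an ISP-defined
# group of IP prefixes, e.g. a POP), percentage weights saying how
# heavily peers in each candidate PID should be preferred.
WEIGHTS = {
    "pop-east": {"pop-east": 70, "pop-west": 20, "other-isp": 10},
    "pop-west": {"pop-west": 70, "pop-east": 20, "other-isp": 10},
}

def pick_peers(my_pid, candidates_by_pid, k, rng=random):
    """Draw k peers, biased toward nearby PIDs per the weight matrix."""
    pool, weights = [], []
    for pid, peers in candidates_by_pid.items():
        for peer in peers:
            pool.append(peer)
            weights.append(WEIGHTS[my_pid].get(pid, 1))
    return rng.choices(pool, weights=weights, k=k)

# A client in pop-east will mostly be steered to pop-east peers:
candidates = {"pop-east": ["10.0.0.1", "10.0.0.2"], "other-isp": ["203.0.113.9"]}
print(pick_peers("pop-east", candidates, k=3, rng=random.Random(0)))
```

The point of the matrix is exactly what the slide says: the ISP can steer traffic toward cheap, local paths without ever revealing its actual topology or peering arrangements.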

Ran simulations on Verizon and Telefonica networks.

Zero dollars for the ISPs, using Yale modelling,
shows huge reduction in hop counts, cutting down
long haul drastically. Maps to direct dollar savings.

Results also good for P2P, shorter download times,
with 50% to 80% increases in download speeds
and reductions in download time.
This isn't even using caching yet.

P4PWG is free to join
monthly calls
mailing list
field test underway
mission is to improve
Marty Lafferty
Laird
Doug

Q: interface, mathematical model; why not have a
model where you ask the ISP for a given prefix, and
get back weighting. But the communication volume
between Ptracker and Itracker was too large for that
to work well; needed chatter for every client that
connected. The map was moved down into the Ptracker
so it can do the mapping faster as in-memory
operation, even in the face of thousands of mappings
per second.
The architecture here is one proof of concept test;
if there's better designs, please step forward and
talk to the group; goal is to validate the basic idea
that localizing traffic reduces traffic and improves performance. They're proving out, and then will start out the

Danny McPherson: when you do optimization, you will
end up with higher peak rates within the LAN or within
the POP; p2p isn't a big part of intradomain traffic,
as opposed to localized traffic, where it's 80-90%
of the traffic.
What verizon has seen is that huge amounts of P2P
traffic is crossing peering links.
What about Net Neutrality side, and what they might
be contributing in terms of clue factor to that?
It's definitely getting attention; but if they can
stem the vertical line, and make it more reasonable,
should help carriers manage their growth pressures.
Are they providing technical contributions to the
FCC, etc.? DCIA is sending papers to the FCC, and
is trying to make sure that voices are being heard
on related issues as well.

Q: Bill Norton, do the p2p protocols try to infer any topological data via ping tests, hop counts, etc.? Some do try; others use random peer connections; others try to reverse engineer network via traceroutes. One attempts to use cross-ISP links as much as possible, avoids internal ISP connections as much as possible. P4P is addition to existing P2P networks; so this information can be used by the network for whatever information the P2P network determines its goal is. Is there any motivation from the last-mile ISP to make them look much less attractive? It seems to actually just shift the balance, without altering the actual traffic volume; it makes it more localized, without reducing or increasing the overall level.

How are they figuring on distributing this information
from the Itracker to the Ptracker? Will it be via a
BGP feed? If there's a central tracker, the central
tracker will get the map information; for distributed
P2P networks, there's no good answer yet; each peer
asks Itracker for guidance, but would put heavy load
on the Itracker.
If everyone participates, it'll be like a global,
offline IGP with anonymized data; it's definitely
a challenge, but it's information sharing with a

Jeff--what stops someone from getting onto a tracker box, and maybe changing the mapping to shift all traffic against one client, to DoS them? This is aimed as guidance; it isn't meant to be an absolute override. The application will still have some intelligence built in. Goal will be to try to secure the data exchange and updates to some degree.
box, and maybe changing the mapping to shift all traffic against one client, to DoS them? This is aimed as guidance; isn't aimed to be the absolute override. application will still have some intelligence built in. Goal will be to try to secure the data exchange and updates to some degree.