Friday, March 28, 2008

How Non-Net Neutrality Affects Businesses

[Michael Geist has been maintaining an excellent blog on the challenges of net neutrality. While this remains a hot topic in the US, despite Michael's best efforts it has barely caused a ripple in Canada. There is increasing evidence of the impact that BitTorrent throttling is having on business, competition and Canadian cultural policy, as for example the CBC's attempt to distribute DRM-free video content through BitTorrent. As well, geo-blocking prevents Canadians from watching NHL hockey games over the Internet, even as those games are being distributed in the US by Hulu.com. It is incredible that in the country where hockey is considered almost a religion, Canadians do not have the same rights and privileges as their American cousins to watch their national sport. The major ISPs practicing BitTorrent throttling, and benefiting from geo-blocking, are also Canada's major distributors of video content via cable TV and satellite. As Michael Geist reports in his blog: "As cable and satellite companies seek to sell new video services to consumers, they simultaneously use their network provider position to lessen competition that seeks to deliver competing video via the Internet. This is an obvious conflict that requires real action from Canada's competition and broadcast regulators." --BSA]


Michael Geist's Blog
http://www.michaelgeist.ca/

How Network Non-Neutrality Affects Real Businesses http://www.xconomy.com/2008/03/24/how-network-non-neutrality-affects-real-businesses/


Network neutrality leaped back into the headlines last month, when FCC commissioners held a public hearing at Harvard University to examine whether the commission should institute rules to regulate the way Internet service providers (ISPs) manage traffic on their networks. The panel heard from executives representing the two largest ISPs in the Northeast, Comcast and Verizon, along with Internet pundits, politicians and academics.

The hearing coincided with an increasing public awareness that Comcast and dozens of other ISPs (most of them cable TV companies) commonly use methods to throttle some forms of traffic on their networks. They do this to prevent their networks from becoming congested. These methods typically target peer-to-peer traffic from BitTorrent, a popular music and video file sharing program the ISPs say generates a third or more of their traffic.

Accordingly, BitTorrent has become the debate’s poster child, pushing much of the net neutrality debate into endless arguments over free speech, copyright law and what—if anything—should constitute “legal use” of the Net.

But there’s another side to this debate, one that gets far too little attention. In their attempt to limit BitTorrent and other peer-to-peer file sharing traffic, some ISPs have unwittingly caused collateral damage to other, unrelated businesses and their users. For example, some Web conferencing providers have seen their services slow to a crawl in some regions of the world because of poorly executed traffic management policies. Since ISPs often deny they use such practices, it can be exceedingly difficult to identify the nature of the problem in an attempt to restore normal service.

My company, Glance Networks, has first-hand experience. Glance provides a simple desktop screen sharing service that thousands of businesses use to show online presentations and web demos to people and businesses worldwide. When a Glance customer hosts a session, bursts of high-speed data are sent each time the person’s screen content changes. The Glance service forwards these data streams to all guests in the session, so they can see what the host sees. The streams need to flow quickly, so everyone’s view stays in sync.
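
The forwarding pattern described above is easy to sketch. Below is a minimal, hypothetical fan-out relay in Python; it is not Glance's actual implementation (whose protocol, ports and framing are not public), just an illustration of a host connection whose length-prefixed screen-update bursts are copied to every connected guest.

# Minimal fan-out relay sketch (hypothetical; not Glance's real protocol).
# A "host" connects on port 9000 and sends length-prefixed bursts; each burst
# is forwarded verbatim to every "guest" connected on port 9001.
import asyncio
import struct

guests = set()  # StreamWriter objects for connected guests

async def handle_guest(reader, writer):
    guests.add(writer)
    try:
        await reader.read()                        # wait until the guest disconnects
    finally:
        guests.discard(writer)
        writer.close()

async def handle_host(reader, writer):
    try:
        while True:
            header = await reader.readexactly(4)   # 4-byte big-endian length prefix
            (length,) = struct.unpack(">I", header)
            payload = await reader.readexactly(length)
            for g in list(guests):                 # fan each burst out to every guest
                g.write(header + payload)
            await asyncio.gather(*(g.drain() for g in list(guests)),
                                 return_exceptions=True)
    except asyncio.IncompleteReadError:
        pass                                       # host disconnected
    finally:
        writer.close()

async def main():
    await asyncio.start_server(handle_host, "0.0.0.0", 9000)   # host side
    await asyncio.start_server(handle_guest, "0.0.0.0", 9001)  # guest side
    await asyncio.Event().wait()                               # serve forever

if __name__ == "__main__":
    asyncio.run(main())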

One day a few years ago, our support line got a spate of calls from customers complaining that our service had suddenly slowed to a crawl. We soon realized the problem was localized to Canada, where nearly everyone gets their Internet service through one of just two ISPs. Sure enough, posts on blogs indicated that both of these ISPs had secretly deployed “traffic shaping” methods to beat back the flow of BitTorrent traffic. But the criteria their methods used to identify the streams were particularly blunt instruments that not only slowed BitTorrent, but many other high-speed data streams sent by their customers’ computers.

This experience illustrates why additional rules need to be imposed on ISPs. While we were working the problem, customers were understandably stuck wondering who was telling them the truth. Their ISP was saying “all is well” and that “nothing has changed”, both of which turned out to be wrong. But how were they to know? Their other Web traffic flowed normally. From their perspective, only our service had slowed.

Luckily, we quickly discovered that by changing a few parameters in our service, we were able to restore normal performance to our Canadian customers. But the Canadian ISPs were of no help. For over a year, they denied even using traffic shaping, let alone what criteria they used to single out “bad” traffic. We were forced to find our own “workaround” by trial and error.
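
The article never says which parameters were changed, so the following is purely illustrative and not Glance's actual workaround. One plausible class of fix against a blunt, burst-triggered shaper is to pace outgoing data so it never presents as a sustained high-speed stream; the Python sketch below shows that idea, with the rate ceiling and chunk size as made-up numbers.

# Illustrative only: pacing outbound data below an assumed shaper trigger rate.
# The parameters Glance actually changed were never disclosed; the numbers here
# (512 KB/s ceiling, 16 KB chunks) are hypothetical.
import socket
import time

PACE_BYTES_PER_SEC = 512 * 1024   # hypothetical ceiling, below the shaper's trigger
CHUNK = 16 * 1024                 # hypothetical chunk size

def paced_send(sock: socket.socket, data: bytes) -> None:
    """Send data, sleeping between chunks so the average rate stays under the ceiling."""
    sent = 0
    start = time.monotonic()
    while sent < len(data):
        chunk = data[sent:sent + CHUNK]
        sock.sendall(chunk)
        sent += len(chunk)
        # Time that *should* have elapsed at the target rate, minus actual elapsed time.
        sleep_for = sent / PACE_BYTES_PER_SEC - (time.monotonic() - start)
        if sleep_for > 0:
            time.sleep(sleep_for)

# Usage (hypothetical endpoint):
# with socket.create_connection(("relay.example.com", 9000)) as s:
#     paced_send(s, screen_update_bytes)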

And there’s the rub.

Imagine for a moment that regional phone companies were allowed to “manage their congestion” by implementing arbitrary methods that block a subset of phone calls on their network. People whose calls got blocked would be at a loss to know why some calls failed to connect, while others continued to go through normally. Such behavior would never be tolerated in our telephony market. Yet we allow ISPs to “manage their congestion” this way today.

In a truly open marketplace, we could expect market forces to drive bad ISPs out of the market. But most ISPs are monopolies, for good reason. Their infrastructure costs are enormous. The right to have a monopoly, however, must always be balanced by regulations that prevent abuse of that right.

Business and markets cannot thrive when ISPs secretly delay or discard a subset of their traffic. Networks need to be free of secret, arbitrary traffic management policies. Just because an ISP’s network suffers chronic congestion, that ISP cannot be allowed to selectively block arbitrary classes of traffic.

[..]

Meanwhile, FCC commissioners need to understand that arbitrary and secret traffic management policies have already impacted businesses unrelated to the peer-to-peer file sharing applications targeted by those policies. These are not hypothetical scenarios. The ongoing threat to legitimate Web services that businesses and consumers depend upon daily is real.

The FCC must impose rules that prevent ISPs from implementing such policies. ISPs that oversold capacity must respond with improved pricing plans, not traffic blocking policies. To let the status quo continue imperils legitimate users of the global information infrastructure that so many of us depend upon daily.

The Bell Wake-Up Call http://www.michaelgeist.ca/content/view/2787/125/

For months, I've been asked repeatedly why net neutrality has not taken off as a Canadian political and regulatory issue. While there has been some press coverage, several high-profile incidents, and a few instances of political or regulatory discussion (including the recent House of Commons Committee report on the CBC), the issue has not generated as much attention in Canada as it has in the United States.

[...]
The reported impact of traffic shaping on CBC downloads highlights the danger that non-transparent network management practices pose to the CBC's fulfillment of its statutory mandate to distribute content in the most efficient manner possible. This should ultimately bring cultural groups like Friends of the CBC into the net neutrality mix. Moreover, it points to a significant competition concern. As cable and satellite companies seek to sell new video services to consumers, they simultaneously use their network provider position to lessen competition that seeks to deliver competing video via the Internet. This is an obvious conflict that requires real action from Canada's competition and broadcast regulators.

The Bell throttling practices also raise crucial competition issues.
The CRTC has tried to address limited ISP competition by requiring companies such as Bell to provide access to third-party ISPs that "resell" Bell service with regulated wholesale prices that lead to a measure of increased competition. Indeed, there are apparently about 100 companies that currently resell Bell access services. Many have made substantial investments in their own networks and have loyal customer bases that number into the tens of thousands.

Those same companies have expressed concern to Bell about the possibility that it might institute throttling and thereby directly affect their services. Until yesterday, Bell had sought to reassure the companies that this was not their plan. For example, in response to a question about network speeds to resellers, it told the CRTC in 2003 that:

Bell irks ISPs with new throttling policy http://www.theglobeandmail.com/servlet/story/RTGAM.20080325.wgtinternet26/BNStory/Technology/home


CBC To Release TV-Show via BitTorrent, For Free

CBC, Canada’s public television broadcaster, has plans to release the upcoming TV show “Canada’s Next Great Prime Minister” for free via BitTorrent. This makes CBC the first North American broadcaster to embrace the popular file-sharing protocol.

According to an early report, high-quality copies of the show will be published the day after it airs on TV, without any DRM restrictions.

CBC is not alone in this: European broadcasters, including the BBC, are currently working on a next-generation BitTorrent client that will allow them to make their content available online. The benefit of BitTorrent is, of course, that it reduces distribution costs.

The popularity of movies and TV shows on BitTorrent hasn’t gone unnoticed. We reported earlier that some TV studios allegedly use BitTorrent as a marketing tool, and that others intentionally leak unaired pilots.

[snip]

Hulu.com Blocks Canadians from NHL Games http://www.michaelgeist.ca/content/view/2776/196/

Tuesday, March 11, 2008

GeoBlocking: Why Hollywood itself is a major cause of piracy

[From my opinion piece at Thinkernet --BSA]

GeoBlocking: Why Hollywood itself is a major cause of piracy

http://www.internetevolution.com/author.asp?section_id=506&doc_id=147793&

There is a lot of buzz in the popular press about video sites where you can legally download (for a fee) movies and TV shows across the Internet -- such as Apple’s iTunes and Amazon’s UnBox. Unfortunately, the content on most of these services is only available to U.S. residents. The rest of the world cannot legally download this same content because of a practice called "geo-blocking."

Geo-blocking is a technique used by Hollywood distributors to block their content from being accessed by online viewers outside the U.S. The system identifies users by their IP address to determine if they reside in the U.S. or not. There are several sites that provide services to get around geo-blocking, but they tend to be cumbersome and slow -- and you need a degree in Geekology to use them properly.
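
For illustration only, here is roughly what such an IP-based check looks like in Python, using the third-party geoip2 library against a MaxMind GeoLite2 country database. Real distributors use commercial geolocation data and additional checks; the database file name and the serve_video wrapper below are assumptions.

# Sketch of IP-based geo-blocking (assumes a GeoLite2-Country database file is
# available locally; real services use commercial databases and extra checks).
import geoip2.database   # pip install geoip2
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")   # placeholder path

def is_us_viewer(ip_address: str) -> bool:
    """Return True if the client IP geolocates to the United States."""
    try:
        return reader.country(ip_address).country.iso_code == "US"
    except geoip2.errors.AddressNotFoundError:
        return False     # unknown addresses get blocked, as most geo-blockers do

def serve_video(client_ip: str) -> str:
    if not is_us_viewer(client_ip):
        return "HTTP 403: This content is not available in your region."
    return "HTTP 200: streaming video..."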

Hollywood studios generally are keen on geo-blocking because they can extract more revenue from the traditional “windows” process of distributing first through theaters, then rentals and pay-per-view, and finally on cable TV.

Geo-blocking is also a convenient arrangement for international cable companies and culture regulators. Both are terrified of the alternative: their citizens having free and open access to popular American culture, bypassing their regulatory controls and their wallets.

[..]

Thursday, March 6, 2008

Tool to measure true broadband speed & competition etc

BROADBANDCENSUS.com LAUNCHES BETA VERSION OF INTERNET SPEED TEST; NEW WEB SITE AIMS TO PROVIDE DATA ABOUT BROADBAND TO PUBLIC

WASHINGTON D.C. - March 3, 2008 -- BroadbandCensus.com, a new web site designed to help Internet users measure and gauge broadband availability, competition, speeds, and prices, on Monday announced the availability of a beta version of an Internet speed test at www.broadbandcensus.com. Through the release of the beta version, BroadbandCensus.com encourages testing and feedback of the technology in preparation for a national release.

The speed test seeks to allow consumers all across America to test their high-speed Internet connections to determine whether broadband providers are delivering the promised services. At BroadbandCensus.com, users can learn about local broadband availability, competition, speeds and service. By participating in the speed test and an anonymous online census questionnaire, users can greatly contribute to the nation's knowledge and understanding about the state of the nation's broadband competition and services.

"We believe the Broadband Census will provide vital statistics to the public and to policy makers about the true state of broadband in our country today," said Drew Clark, Executive Director of BroadbandCensus.com. "By releasing a beta version of the speed test, we hope to encourage feedback from early adopters in the research and education community so that we can create an even more robust mechanism for collecting broadband data."

BroadbandCensus.com is deploying the NDT (Network Diagnostic Tool), an open-source network performance testing system designed to identify computer configuration and network infrastructure problems that can degrade broadband performance. The NDT is under active development by the Internet2 community, an advanced networking consortium led by the research and education community. The NDT has been used by other broadband mapping endeavors, including the eCorridors Program at Virginia Tech, which is working to collect data on residential and small business broadband trends throughout the Commonwealth of Virginia.

"Internet2 supports its more than 300 member organizations in getting the best performance from their advanced network connections," said Gary Bachula, Internet2 vice president for external relations. "We are pleased that the Network Diagnostic Tool can play an important role in helping U.S. citizens and policy makers gain a better understanding of existing broadband services. This information will help consumers and policy makers make better decisions about future broadband services," said Bachula.

"The eCorridors Program endorses and supports the Broadband Census as a means of continuing the effort with the participation of key national players," said Brenda van Gelder, Program Director of eCorridors. Virginia Tech launched the first of its kind community broadband access map and speed test in July 2006. "We believe that mapping broadband along with these other factors can have significant political and economic impacts by providing the public a user-friendly, grassroots tool for maintaining oversight of available internet services, applications, bandwidth and pricing."

The NDT provides network performance information directly to a user by running a short diagnostic test between a Web browser on a desktop or laptop computer and one of several NDT servers around the country. The NDT software helps users get a reading on their network speed and also to understand what may be causing specific network performance issues.
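
NDT speaks its own diagnostic protocol to Internet2 test servers, but the core measurement idea is simple to sketch. The Python snippet below (not NDT itself) estimates downstream throughput by timing the download of a known file; the test URL is a placeholder.

# Rough download-throughput measurement (illustrative; not the NDT protocol).
# TEST_URL is a placeholder; point it at any large file on a nearby server.
import time
import urllib.request

TEST_URL = "http://speedtest.example.net/10MB.bin"   # placeholder URL

def measure_download_mbps(url: str = TEST_URL) -> float:
    start = time.monotonic()
    total_bytes = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)   # megabits per second

if __name__ == "__main__":
    print(f"Approximate downstream speed: {measure_download_mbps():.1f} Mbps")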

Congress and state government officials have all recently focused on the need for better broadband data. And the Federal Communications Commission last week called for greater transparency about the speeds and prices of service offered by broadband carriers.

Rep. Ed Markey, D-Mass., Chairman of the House Subcommittee on Telecommunications and the Internet, has introduced legislation that would provide the public with better broadband information. Markey's "Broadband Census of America Act," H.R. 3919, has passed the House of Representatives and is now before the Senate.

By allowing users to participate in collecting Broadband Census data, BroadbandCensus.com aims to build on these initiatives, and to provide consumers and policy-makers with timely tools for understanding broadband availability, adoption and competition.

Additionally, Pew Internet & American Life Project has contracted with BroadbandCensus.com to gather anonymized information about users' broadband experiences on the web site, and to incorporate those findings into Pew's 2008 annual broadband report.

"Connection speed matters greatly to people's online surfing patterns, but few home broadband users know how fast their on-ramp to cyberspace is," said John Horrigan, Associate Director for Research with the Pew Internet & American Life Project. "BroadbandCensus.com will help fill a gap in understanding how evolving broadband networks influence users' online behavior."

BroadbandCensus.com is made available under a Creative Commons Attribution-Noncommercial License. That means that the content on BroadbandCensus.com is available for all to view, copy, redistribute and reuse for FREE, provided that attribution is given to BroadbandCensus.com and that such use is for non-commercial purposes.

About Broadband Census.com:
Broadband Census LLC is organized as a Limited Liability Company in the Commonwealth of Virginia. Drew Clark is the principal member of Broadband Census LLC. To find out more about the organizations and individuals providing financial, technical, research or outreach support to the Broadband Census, please visit BroadbandCensus.com. For more information: http://www.broadbandcensus.com/

About Pew Internet & American Life Project:
The Pew Internet & American Life Project produces reports that explore the impact of the internet on families, communities, work and home, daily life, education, health care, and civic and political life. The Project aims to be an authoritative source on the evolution of the internet through collection of data and analysis of real-world developments as they affect the virtual world. For more information: http://www.pewinternet.org/

About Virginia Tech e-Corridors Project:
eCorridors is an outreach program of Virginia Tech that was established in 2000. Its activities include telecommunications policy, communications infrastructure, research and other computing applications as well as community networks and economic development in a networked world. eCorridors is a primary means through which government, private sector industry and community stakeholders participate and collaborate with Virginia Tech researchers and IT professionals. For more information: http://www.ecorridors.vt.edu/

Contact:
Drew Clark, Executive Director
BroadbandCensus.com
E-mail: drew@broadbandcensus.com
Telephone: 202-580-8196

Microsoft's Google-killer strategy: Finally on the way?


[Microsoft's "cloud" strategy, which has been rumoured for some time, looks like it will soon come to fruition. Given that Microsoft has hired big eScience names like Tony Hey and Dan Reed, I suspect this cloud strategy will have a major impact on future cyber-infrastructure projects and will be a strong competitor to Amazon EC2. Thanks to Digg and Gregory Soo for these pointers. --BSA]


From Digg news http://digg.com/

http://www.news.com/8301-10787_3-9883909-60.html

"The new strategy will, I'm told, lay out a roadmap of moves across three major areas: the transformation of the company's portfolio of enterprise applications to a web-services architecture, the launch of web versions of its major PC applications, and the continued expansion of its data center network. I expect that all these announcements will reflect Microsoft's focus on what it calls "software plus services" - the tying of web apps to traditional installed apps - but they nevertheless promise to mark the start of a new era for the company that has dominated the PC age."

Microsoft to build Skynet, send Terminators back to 20th century to preempt Google....

Nick / Rough Type:
Rumor: Microsoft set for Vast data-center push — I've received a few more hints about the big cloud-computing initiative Microsoft may be about to announce, perhaps during the company's Mix08 conference in Las Vegas this coming week. ...The construction program will be "totally over the top," said a person briefed on the plan. The first phase of the buildout, said the source, will include the construction of about two dozen data centers around the world, each covering about 500,000 square feet or more (that's a total of 12 million sq ft). The timing of the construction is unclear....

Excellent comments by David P Reed on Network Neutrality



[At the recent FCC hearings in Boston, David Reed, a professor at MIT, gave a very compelling argument on whether Comcast's efforts to throttle BitTorrent constitute reasonable traffic management. My personal interpretation of David's comments is that cablecos and telcos have entered into a contract with users to provide access to the "Internet". The Internet is not a product or service developed exclusively by the cablecos or telcos for the use and enjoyment of their customers, as, for example, traditional cell phone service is. Since the Internet is a global service with its own set of engineering principles, guidelines and procedures, implicit in providing access to the Internet is, in essence, an unwritten contract to adhere to those recognized standards, such as the end-to-end principle. No one questions the need for traffic management, spam control and other such services, but they should be done in a way that is consistent with the open and transparent engineering practices that are part and parcel of the contract with the user in providing access to the global Internet. -- BSA]


http://www.reed.com/dpr/docs/Papers/Reed%20FCC%20statement.pdf

Cyber-infrastructure cloud tools for social scientists

[Here is a great example of using cyber-infrastructure cloud tools for social science applications. The NY Times project is typical of many social science projects where thousands of documents must be digitized and indexed. The cost savings compared to operating a cluster are impressive. It is also exciting to see the announcement from NSF promoting an industrial research partnership with Google and IBM on clouds. Thanks to Glen Newton for this pointer. -- BSA]


http://zzzoot.blogspot.com/2008/02/hadoop-ec2-s3-super-alternatives-for.html

Hadoop + EC2 + S3 = Super alternatives for researchers (& real people too!)

I recently discovered and have been inspired by a real-world and non-trivial (in space and in time) application of Hadoop (the open-source implementation of Google's MapReduce) combined with the Amazon Simple Storage Service (Amazon S3) and the Amazon Elastic Compute Cloud (Amazon EC2). The project was to convert pre-1922 New York Times articles-as-scanned-TIFF-images into PDFs of the articles:

Recipe:
4 TB of data loaded to S3 (TIFF images)
+ Hadoop (+ Java Advanced Imaging and various glue)
+ 100 EC2 instances
+ 24 hours
= 11M PDFs, 1.5 TB on S3
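
The NYT job used Hadoop with Java Advanced Imaging and custom glue; as a rough sketch of the shape of such a job, here is a much-simplified Hadoop Streaming mapper in Python. It assumes each input line is an S3 key naming one scanned TIFF and that boto3 and Pillow are installed on the worker nodes; the bucket names are placeholders.

#!/usr/bin/env python
# Simplified Hadoop Streaming mapper (illustrative; not the actual NYT code).
# Each stdin line is assumed to be an S3 key for one scanned TIFF; the mapper
# converts it to PDF and writes the result back to S3.
import io
import sys

import boto3
from PIL import Image

SRC_BUCKET = "nyt-scanned-tiffs"     # placeholder bucket name
DST_BUCKET = "nyt-article-pdfs"      # placeholder bucket name

s3 = boto3.client("s3")

def convert(tiff_key: str) -> str:
    obj = s3.get_object(Bucket=SRC_BUCKET, Key=tiff_key)
    image = Image.open(io.BytesIO(obj["Body"].read()))
    pdf_buf = io.BytesIO()
    image.convert("RGB").save(pdf_buf, format="PDF")        # one-page TIFF -> PDF
    pdf_key = tiff_key.rsplit(".", 1)[0] + ".pdf"
    s3.put_object(Bucket=DST_BUCKET, Key=pdf_key, Body=pdf_buf.getvalue())
    return pdf_key

if __name__ == "__main__":
    for line in sys.stdin:
        key = line.strip()
        if key:
            # Emit "input-key <TAB> output-key" so the job output records the mapping.
            print(f"{key}\t{convert(key)}")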


Unfortunately, the developer (Derek Gottfrid) did not say how much this cost the NYT. But here is my back-of-the-envelope calculation (using the Amazon S3/EC2 FAQ):

EC2: $0.10 per instance-hour x 100 instances x 24hrs = $240
S3: $0.15 per GB-Month x 4500 GB x ~1.5/31 months = ~$33
+ $0.10 per GB of data transferred in x 4000 GB = $400
+ $0.13 per GB of data transferred out x 1500 GB = $195
Total: = ~$868

Not unreasonable at all! Of course this does not include the cost of bandwidth that the NYT needed to upload/download their data.
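
For anyone who wants to check or adapt the estimate, the arithmetic above is trivially reproduced; the snippet below just restates it using the 2008 price points quoted in the calculation and arrives at the same ~$868.

# Reproducing the back-of-the-envelope estimate above (2008 AWS price points).
ec2 = 0.10 * 100 * 24                  # $0.10/instance-hour x 100 instances x 24 h  = $240
s3_storage = 0.15 * 4500 * (1.5 / 31)  # $0.15/GB-month x 4500 GB x (1.5/31) month   ~  $33
transfer_in = 0.10 * 4000              # $0.10/GB x 4 TB transferred in               = $400
transfer_out = 0.13 * 1500             # $0.13/GB x 1.5 TB transferred out            = $195
total = ec2 + s3_storage + transfer_in + transfer_out
print(f"Estimated cost: ${total:.0f}")   # -> Estimated cost: $868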

I've known about MapReduce and Hadoop for quite a while now, but this is the first use outside of Google (MapReduce) and Yahoo (Hadoop), combined with Amazon services, where I've seen such a real problem solved so smoothly, and one that wasn't web indexing or a toy example.

As much of my work in information retrieval and knowledge discovery involves a great deal of space and even more CPU, I am looking forward to experimenting with this sort of environment (Hadoop, local or in a service cloud) for some of the more extreme experiments I am working on. And by using Hadoop locally, if the problem gets too big for our local resources, we can always buy capacity as in the NYT example with a minimum of effort!

This is also something that various commercial organizations (and even individuals?) with specific high-CPU / high-storage / high-bandwidth compute needs (oh, and transfers between S3 and EC2 are free) should be considering. Of course, security and privacy concerns apply.


Breaking News:
NSF Teams w/ Google, IBM for Academic 'Cloud' Access

Feb. 25 -- Today, the National Science Foundation's Computer and Information Science and Engineering (CISE) Directorate announced the creation of a strategic relationship with Google Inc. and IBM. The Cluster Exploratory (CluE) relationship will enable the academic research community to conduct experiments and test new theories and ideas using a large-scale, massively distributed computing cluster.

In an open letter to the academic computing research community, Jeannette Wing, the assistant director at NSF for CISE, said that the relationship will give the academic computer science research community access to resources that would be unavailable to it otherwise.

"Access to the Google-IBM academic cluster via the CluE program will provide the academic community with the opportunity to do research in data-intensive computing and to explore powerful new applications," Wing said. "It can also serve as a tool for educating the next generation of scientists and engineers."

"Google is proud to partner with the National Science Foundation to provide computing resources to the academic research community," said Stuart Feldman, vice president of engineering at Google Inc. "It is our hope that research conducted using this cluster will allow researchers across many fields to take advantage of the opportunities afforded by large-scale, distributed computing."

"Extending the Google/IBM academic program with the National Science Foundation should accelerate research on Internet-scale computing and drive innovation to fuel the applications of the future," said Willy Chiu, vice president of IBM software strategy and High Performance On Demand Solutions. "IBM is pleased to be collaborating with the NSF on this project."

In October of last year, Google and IBM created a large-scale computer cluster of approximately 1,600 processors to give the academic community access to otherwise prohibitively expensive resources. Fundamental changes in computer architecture and increases in network capacity are encouraging software developers to take new approaches to computer-science problem solving. In order to bridge the gap between industry and academia, it is imperative that academic researchers are exposed to the emerging computing paradigm behind the growth of "Internet-scale" applications.

This new relationship with NSF will expand access to this research infrastructure to academic institutions across the nation. In an effort to create greater awareness of research opportunities using data-intensive computing, the CISE directorate will solicit proposals from academic researchers. NSF will then select the researchers to have access to the cluster and provide support to the researchers to conduct their work. Google and IBM will cover the costs associated with operating the cluster and will provide other support to the researchers. NSF will not provide any funding to Google or IBM for these activities.

While the timeline for releasing the formal request for proposals to the academic community is still being developed, NSF anticipates being able to support 10 to 15 research projects in the first year of the program, and will likely expand the number of projects in the future.

Information about the Google-IBM Academic Cluster Computing Initiative can be found at www.google.com/intl/en/press/pressrel/20071008_ibm_univ.html.

According to Wing, NSF hopes the relationship may provide a blueprint for future collaborations between the academic computing research community and private industry. "We welcome any comparable offers from industry that offer the same potential for transformative research outcomes," Wing said.

-----

Source: National Science Foundation