Monday, February 22, 2010

The Flattening Internet Topology: Natural Evolution, Unsightly Barnacles or Contrived Collapse?

[Here is another excellent paper on the evolving Internet that has also documented the changes to the Internet topology as many companies deploy content centric networks using distributed computing and storage. As I mentioned in my paper, this evolution could have profound implications for offloading large data volumes from 3G/4G networks. Content centric networks are much easier to deploy than traditional end-to-end networks. A possible business scenario is for a company like Google and/or Microsoft to partner with Boingo as the customer facing mobile data network, using a "virtual" 3G/4G network (like Virgin Mobile's) as the fallback delivery mode in areas where there is no Wi-Fi or white space coverage. Such a network would be an ideal partner for R&E networks and community networks, easily extending coverage to students and faculty off campus and delivering wireless data at a fraction of the price of today's mobile networks. Most R&E networks already peer with these content networks.
Thanks to Martin Arlitt for this pointer - BSA]

*The Flattening Internet Topology: Natural Evolution, Unsightly Barnacles or Contrived Collapse?*

/Gill, Phillipa; Arlitt, Martin; Li, Zongpeng; Mahanti, Anirban/

HPL-2008-47

*Keyword(s):* Internet, topology, content providers, private WAN, measurement

*Abstract:* In this paper we collect and analyze traceroute measurements to show that large content providers (e.g., Google, Microsoft, Yahoo!) are deploying their own wide-area networks, bringing their networks closer to users, and bypassing Tier-1 ISPs on many paths. This trend, should it continue and be adopted by more content providers, could flatten the Internet topology, and may result in numerous other consequences to users, Internet Service Providers (ISPs), content providers, and network researchers.

*Publication Info:*
Presented and published at Passive and Active Measurement Conference, Cleveland, Ohio, April 2008

http://www.hpl.hp.com/techreports/2008/HPL-2008-47.pdf
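
To make the paper's measurement approach concrete, here is a minimal sketch (not the authors' actual tooling) of how one might gather the same kind of evidence: run traceroute toward a content provider and map each responding hop to its origin AS using Team Cymru's IP-to-ASN whois service, then check whether the path crosses Tier-1 transit ASes or stays within the provider's own network. It assumes a Unix-like host with the traceroute and whois command-line tools installed; the destination is just an example.

import re
import subprocess

def traceroute_hops(dest):
    """Return the responding hop IPs reported by 'traceroute -n dest'."""
    out = subprocess.run(["traceroute", "-n", dest],
                         capture_output=True, text=True, timeout=120).stdout
    body = "\n".join(out.splitlines()[1:])   # skip the "traceroute to ..." header
    return re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", body)

def origin_asn(ip):
    """Look up the origin ASN for an IP via Team Cymru's whois service."""
    out = subprocess.run(["whois", "-h", "whois.cymru.com", f" -v {ip}"],
                         capture_output=True, text=True, timeout=30).stdout
    data = [l for l in out.splitlines() if l and not l.startswith("AS ")]
    return data[-1].split("|")[0].strip() if data else "?"

if __name__ == "__main__":
    dest = "www.google.com"   # example destination, not taken from the paper
    for hop in traceroute_hops(dest):
        print(hop, "AS" + origin_asn(hop))

Comparing such hop-to-AS listings across many vantage points and destinations is, in spirit, the analysis the paper carries out at much larger scale.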

Tuesday, February 16, 2010

A personal perspective on the evolving Internet and Research and Education Networks

[Recently I was commissioned to write a paper as part of a submission to the FCC Network Neutrality hearings. But after writing the paper I started to realize it might have broader implications beyond network neutrality. In many ways it represents a synthesis of a number of seemingly unrelated topics that I have been working on over the past decade, from Green IT and user controlled networks to Citizen Science. As always, comments, criticisms and suggestions are most welcome –BSA]


A personal perspective on the evolving Internet and Research and Education Networks
http://docs.google.com/Doc?docid=0ARgRwniJ-qh6ZGdiZ2pyY3RfMjc3NmdmbWd4OWZr&hl=en


Abstract
Over the past few years the Internet has evolved from an “end-to-end” telecommunications service to a distributed computing, information and content infrastructure. In the academic community this evolving infrastructure is often referred to as cyber-infrastructure or eInfrastructure. This evolution in the Internet was predicted by Van Jacobson several years ago and now seems readily evident from recent studies such as the Arbor report, which indicates that the bulk of Internet traffic is carried over this type of infrastructure rather than over a general purpose routed Internet/optical network. This evolving Internet is likely to have profound impacts on Internet architectures and business models in both the academic and commercial worlds. Increasingly, traffic will be “local”, with connectivity to the nearest cloud, content distribution network and/or social network gateway at a local Internet Exchange (IX) point. Network topology and architecture will increasingly be driven by the needs of the applications and content rather than by general purpose infrastructure connecting users and devices. As a consequence, the need to deploy IPv6 addressing or an ID/locator split may be superseded by DNS-type solutions such as layer 7 XML routing. This new Internet will more easily allow the deployment of low carbon infrastructure and will create new challenges and opportunities in terms of last mile ownership and network neutrality. Customer owned networks and tools like User Controlled LightPath (UCLP) and Reverse Passive Optical Networks (RPON) may become more relevant in allowing users to connect directly to the distributed content and application infrastructure at a nearby IX. Research and Education (R&E) networks may play a critical leadership role in developing new Internet business strategies for zero carbon mobile Internet solutions, interconnecting next generation multi-channel RF mobile devices to this infrastructure using “white space” and Wi-Fi spectrum. Perhaps ultimately these developments will lay the foundation for a National Public Internet (NPI), where R&E networks deploy transit and/or Internet exchanges in smaller communities and cities to support distribution of content and information from not-for-profit organizations to the general public.
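
As a purely illustrative aside on the “connect to the nearest gateway” idea in the abstract, the short sketch below times a TCP connection to a few candidate content/cloud gateways and picks the lowest-latency one. The hostnames are placeholders, not real IX-hosted services.

import socket
import time

CANDIDATES = ["cache1.example.net", "cache2.example.net", "cache3.example.net"]

def connect_latency(host, port=80, timeout=2.0):
    """Return the TCP connect time to host:port in seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

if __name__ == "__main__":
    timed = [(connect_latency(h), h) for h in CANDIDATES]
    reachable = [(t, h) for t, h in timed if t is not None]
    if reachable:
        best_t, best_h = min(reachable)
        print(f"nearest gateway: {best_h} ({best_t * 1000:.1f} ms)")
    else:
        print("no gateway reachable")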


------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Monday, February 15, 2010

FCC and Seybold got it wrong on wireless networks - new revenue opportunities for R&E networks

The FCC and Seybold claim we face serious challenges in terms of wireless spectrum and congestion. With a traditional end-to-end network this may be true. But most Internet applications don’t require an end-to-end network; they just need to deliver data locally, as we are already seeing on wireline networks. According to the Arbor study, over 50% of Internet traffic is delivered by hypergiants that bypass traditional end-to-end Internet providers and deliver their services locally at IXs. The next step is to deliver this data directly to mobile Internet devices, bypassing 3G/4G networks.

http://www.fiercewireless.com/story/seybolds-take-will-ipad-choke-wireless-networks/2010-02-09#ixzz0fYtZSosw

I have argued that the big battle for network neutrality will be fought not only over today’s broadband networks but also over tomorrow’s wireless cell phone Internet networks. Instead of debating NN over the last mile, the real fight will be over the last inch (open devices) and the last tower (interconnecting to existing 3G/4G systems).

R&E networks can make a preemptive strike in this future battleground by offering free wireless cell phone data service to all students (and to open access community networks), using next generation wireless Internet devices with multiple RF receivers, white space spectrum, and tools like Google Voice or Skype.
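
Purely as an illustration of the Wi-Fi/white-space-first, 3G/4G-fallback delivery model described above, the sketch below tries each local radio path in order of preference and falls back to the cellular path only if nothing closer can reach a nearby content cache. The interface source addresses and the test host are hypothetical placeholders.

import socket

# Preferred paths first; source addresses are examples only.
INTERFACES = [
    ("wifi",        "192.168.1.23"),
    ("white-space", "10.44.0.7"),
    ("3g-4g",       "100.64.10.5"),
]
TEST_HOST = ("cache.example.net", 80)   # a nearby cache at the local IX (placeholder)

def reachable_via(source_ip, dest=TEST_HOST, timeout=2.0):
    """Return True if dest answers a TCP connection bound to source_ip."""
    try:
        with socket.create_connection(dest, timeout=timeout,
                                      source_address=(source_ip, 0)):
            return True
    except OSError:
        return False

def pick_path():
    """Return the first preferred interface that can reach the cache, else None."""
    for name, src in INTERFACES:
        if reachable_via(src):
            return name
    return None

if __name__ == "__main__":
    print("delivering over:", pick_path() or "no path available")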

I hope to issue a paper on this topic shortly.
Bill

Friday, February 5, 2010

Scientists Given Free Access to Cloud

[Another example of why content peering at major IXs will be critical for the future of scientific research. More and more research projects need access to the global Internet community for crowdsourcing, cloud computing, distribution of educational video and citizen science. Excerpts from the NY Times and Dan Reed’s blog – BSA]

http://www.nytimes.com/2010/02/05/science/05cloud.html?hpw

U.S. Scientists Given Access to Cloud
The National Science Foundation and the Microsoft Corporation have agreed to offer American scientific researchers free access to the company’s new cloud computing service.

A goal of the three-year project is to give scientists the computing power to cope with exploding amounts of research data. It uses Microsoft’s Windows Azure computing system, which the company recently introduced to compete with cloud computing services from companies like Amazon, Google, I.B.M. and Yahoo. These cloud computing systems allow organizations and individuals to run computing tasks and Internet services remotely in relatively low-cost data centers.

Neither Microsoft nor the foundation was willing to place a dollar amount on the agreement, but Dan Reed, the corporate vice president for technology strategy and policy at Microsoft, said that the company was prepared to invest millions of dollars in the service and that it could support thousands of scientific research programs.

Access to the service will come in grants from the foundation to new and continuing scientific research. Microsoft executives said they planned eventually to make the new service global.
[…]
Simplicity of use is one Microsoft goal. Programming modern cloud systems for full efficiency has been difficult. The company is trying to overcome this difficulty by creating a variety of software tools for scientists, said Ed Lazowska, a University of Washington computer scientist who works with the Microsoft researchers.

Dr. Lazowska said the explosion of data being collected by scientists had transformed the staffing needs of the typical scientific research program on campus from a half-time graduate student one day a week to a full-time employee dedicated to managing the data. He said such exponential growth in cost was increasingly hampering scientific research.

http://www.hpcdan.org/reeds_ruminations/2010/02/innovation-via-client-plus-cloud-microsoft-nsf-partnership.html

Innovation Via Client Plus Cloud: Microsoft-NSF Partnership

Today, February 4, Microsoft and the U.S. National Science Foundation (NSF) announced a collaborative project where Microsoft will offer individual researchers and research groups (selected through NSF's merit review process) free access to advanced client-plus-cloud computing. Our focus is on empowering researchers via intuitive and familiar client tools whose capabilities extend seamlessly in power and scope via the cloud.

I am very excited about this, as it is the fruit of nearly two years of planning and collaboration across Microsoft product and research teams, as well as many discussions with researchers, university leaders and government agencies. As part of this project, a technical computing engagement team, led by Dennis Gannon and Roger Barga, will work directly with NSF-funded researchers to port, extend and enhance client tools for data analysis and modeling. We also appreciate the support of the Microsoft Dreamspark, Technical Computing, Windows Azure, Azure Dallas, Public Sector, education and evangelism (DPE) teams, among others, to build and deliver this capability.

21st Century Innovation

The brief history of computing is replete with social and technological inflection points, when a set of quantitative and qualitative technology changes led to new computing modalities. I believe we are now at such an inflection point in computing-mediated discovery and innovation, enabled by four social and technical trends:
• Massive, highly efficient cloud infrastructures, driven by search and social networking demands
• Explosive data growth, enabled by inexpensive sensors and high-capacity storage
• Research at the interstices of multiple disciplines, conducted by distributed, virtual teams
• Powerful, popular and easy-to-use client tools that facilitate data analysis and modeling

The first two of these are well documented, and I have written about them before. (See Beyond the Azure Blue and Language Shapes Behavior: Our Poor Cousin Data.) The late Jim Gray also lectured and wrote perspicuously about data-intensive scientific discovery, which he called The Fourth Paradigm. As a logical complement to theory, experiment and computation, the fourth paradigm is based on extracting insight from the prodigious amounts of social, business and scientific data now stored in facilities whose scale now dwarfs all previous computing capabilities.

Climate change and its environmental, economic and social implications; genetics, proteomics, lifestyle, environment, health care and personalized medicine; economics, global trade, social dynamics and security – these are all complex, multidisciplinary problems whose exploration and understanding depend critically on expertise from diverse disciplines with differing cultures and reward metrics.
As our research problems rise in complexity, the enabling tools must rise commensurately in power while retaining simplicity. I have seen far too many multidisciplinary projects founder on the rocks of infrastructure optimization and complexity, when they should have focused on simplicity, familiarity and ease of use.

As Fred Brooks once remarked, "We must build tools so powerful that full professors want to use them, and so simple that they can." Simplicity really, really matters. It is for this reason that Excel spreadsheets, high-level scripting languages and domain-specific client toolkits are now the lingua franca of multidisciplinary innovation, the harbingers of invisibility.

Invisible Simplicity
[..]
Sadly, our technical computing experiences have been dominated and shaped by a focus on technology and infrastructure, rather than empowerment and simplicity. We talk routinely of data and software repositories, toolkits and packages; of cyberinfrastructure and technology roadmaps. In our technological fascination, it is all too easy to lose sight of the true objective. Infrastructure exists to enable. If it is excessively complex, cumbersome or difficult to use, its value is limited. The mathematician Richard Hamming's admonition remains apt: "The purpose of computing is insight, not numbers."

Empowering the Majority

To address 21st century challenges, we must democratize access to data and computational models, recognizing that the computing cognoscenti are, by definition, the minority of those who can and should benefit from computing-mediated innovation and discovery. Instead, we must enfranchise the majority, those who do not and will not use the low-level technologies – clusters, networks, file systems and databases – but wish to ask and answer complex questions. Remember, most researchers do not write low-level code; nor should they need to.

I believe we must focus on human productivity, not cyberinfrastructure, and leverage popular and intuitive client tools that hide infrastructural idiosyncrasies. This means deploying tools like Excel that can manipulate data of arbitrary size, reaching into the cloud to access petabytes of data and executing massive computations for epidemiological analysis as easily as one might balance a checkbook. It means coupling multiple, domain-specific toolkits via a script of a dozen lines, and launching a parametric microclimate study as easily as one searches the web. Perhaps more importantly, it means rethinking public and private sector partnerships for innovation, identifying and leveraging core competencies.
It all comes back to simplicity and invisibility. Technical computing can and should be an invisible intellectual amplifier, as easy to use as any other successful consumer technology. Now is the time; and Microsoft is committed to making it a reality. We look forward to working with the community.
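
To illustrate the “script of a dozen lines” idea from the excerpt above, here is a hypothetical sketch of a parametric sweep fanned out over many workers. concurrent.futures stands in for the cloud back end, and run_model() is a placeholder for a call into a real domain-specific toolkit.

from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_model(temperature, humidity):
    """Placeholder for a real microclimate model invocation."""
    return {"T": temperature, "H": humidity,
            "score": temperature * 0.1 + humidity * 0.01}

if __name__ == "__main__":
    temperatures = range(10, 35, 5)     # example parameter ranges
    humidities = range(20, 100, 20)
    params = list(product(temperatures, humidities))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_model, *zip(*params)))
    best = max(results, key=lambda r: r["score"])
    print(f"{len(results)} runs, best: {best}")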


------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro