Monday, February 25, 2008
More on The fallacy of bandwidth on demand
[An excellent rebuttal by Bill Johnston from ESnet, although I have not changed my opinion. I do want to make one point clear - I am a big believer that there will be a huge demand for dedicated optical circuits; I just don't see the need for bandwidth on demand or reservation, or fast optical switching for that matter, in order to optimize utilization of bandwidth -- BSA]
Needless to say, I don't agree with your views expressed
below.
There are two uses for the suite of end-to-end (virtual) circuit tools that ESnet, Internet2, DANTE, and the European NRENs are developing that, among other things, provide schedulable bandwidth (which has a bit different flavor than BOD, but this is still what people frequently call it). These tools also provide the network engineers with very powerful traffic engineering capability.
Re: the rate of growth of big science traffic: The rate of growth - without any of the next generation of scientific instruments yet fully operational - is tracking very steadily at 10X every 4 years, resulting in an average of 10 Gb/s in our core by 2009-10 and 100 Gb/s by 2013-14. This growth has been consistent since 1990. However, it does not account for any “disruptive” use of the network by science that we have not seen in the past, and there are several of these showing up in the next several years. The LHC is the first of the big next-generation experiments, and even in the data system testing phase (which is now winding down as CERN gets ready for first operation later this year) it has already saturated both ESnet and Internet2 circuits in the US, necessitating a lot of mixed circuit + "cloud" TE. By 2008-09 the LHC is expected to be generating 20-30 Gb/s steady-state traffic (24 hrs/day, 9 mo./yr) into ESnet to the data centers and then out to Internet2, NLR, and the RONs to the analysis systems at universities.
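As a quick sanity check on those figures: 10X every 4 years works out to roughly 1.78X per year, which is what takes ~10 Gb/s in 2009-10 to ~100 Gb/s by 2013-14. Here is a small illustrative calculation (the growth rate and baseline are the numbers quoted above; the year-by-year values are only a sketch):

    # Illustrative projection of average core traffic, assuming the
    # 10X-every-4-years growth rate and the ~10 Gb/s 2009 baseline
    # quoted above; intermediate values are approximations.
    BASE_YEAR, BASE_GBPS = 2009, 10.0
    ANNUAL_FACTOR = 10 ** (1 / 4)   # 10X per 4 years ~= 1.78X per year

    for year in range(BASE_YEAR, BASE_YEAR + 9):
        projected = BASE_GBPS * ANNUAL_FACTOR ** (year - BASE_YEAR)
        print(f"{year}: ~{projected:.0f} Gb/s average core traffic")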
There are several important science uses of the virtual circuit tools' capabilities:
1) One (and the reason for the far-flung collaboration noted above) is to manage bandwidth on long, diverse, international virtual circuits where there are frequently (nay, almost always) bandwidth issues somewhere along the multi-domain path. The circuit tools have already proved important where some level of guaranteed bandwidth is needed end-to-end between institutions in the US and Europe in order to accomplish the target science.
2) There are science applications - which we are quite sure will be showing up in the next year because we are tracking the construction of the instruments and the planned methodology for accomplishing the science - where remote scientists working on experiments with a real-time aspect will definitely require guaranteed bandwidth in order to ensure the success of the experiment. This use is scheduled and frequently of limited time duration, but requires guaranteed bandwidth in order to get instrument output to analysis systems and science feedback back to the instrument on a rigid time schedule. I won't go into details here, but there are a number of documented case studies that detail these needs. See, e.g., “Science Network Requirements (ppt)” and “Science-Driven Network Requirements for ESnet: Update to the 2002 Office of Science Networking Requirements Workshop Report - February 21, 2006” at http://www.es.net/hypertext/requirements.html.
There are very practical reasons why we do not build larger and larger "clouds" to address these needs: we can't afford it. (To point out the obvious, the R&E networks in the US and Europe are large, complex, and expensive to grow - the Internet2-ESnet network is 14,000 miles of fiber organized into six interconnected rings in order to provide even basic coverage of the US. The situation is more complex, though not quite as large in geographic scale, in Europe.) We have to drop down from L3 in order to use less expensive network equipment. As soon as you do that you must have circuit management tools in order to go e2e, as many sites only have IP access to the network. Since this involves a hybrid circuit-IP network, you end up having to manage virtual circuits that stitch together the paths through different sorts of networks and provide ways of steering the large-scale science traffic into the circuits. Given this, providing schedulable, guaranteed bandwidth is just another characteristic of managing the e2e L1-L2-L3 hybrid VCs.
I have watched big science evolve over more than 35 years and the situation today is very different from the past. There is no magic in this - the root causes are a combination of 1) the fact that the same Moore's law that drives computer chip density is also driving the density of the sensing elements in large detectors of almost all descriptions, and 2) very long device refresh times. The technology "refresh" time for large scientific instruments is much, much longer than in the computing world. This is mostly because the time to design and build large science instruments is of order 10-15 years, and the next generation of instruments just now starting to come on-line is the first to make use of modern, high-density chip technology in sensors (and therefore generates vastly more data than older instruments). (This is an "approximation," but my observation indicates it is true to first order.) This is why we are about to be flooded with data - and at a rate that I believe, given the R&E network budgets, we will just barely be able to keep ahead of in terms of provisioning more and more lambdas each year. And keeping up with demand assumes 100G lambdas and associated deployed equipment by 2012-ish for essentially the same cost as 10G today.
End-to-end circuit management (including allocation of bandwidth as a potentially scarce resource) is rapidly becoming a reality in today’s R&E networks and is going to become more important in the next 5 years. Schedulable bandwidth allocation is one of several aspects of the VC management.
Bill
The fallacy of bandwidth on demand
[Some excerpts from my opinion piece at Internet Evolution -- BSA]
http://www.internetevolution.com/author.asp?section_id=506&doc_id=146268&
Around the world, many National Research and Education Networks (NRENs) are focusing on various bandwidth-on-demand schemes for the future Internet architecture that will be used primarily for big science and cyber-infrastructure applications. The assumption is that in the future, big science institutions will produce such volumes of data that this traffic alone will easily exceed the capacity of today’s optical networks. If you think you have heard this story before, you’re right.
These same arguments were used to justify the need for ISDN (Integrated Services Digital Network), ATM (Asynchronous Transfer Mode), GMPLS (Generalized Multiprotocol Label Switching), and QoS (Quality of Service). These technologies were also premised on the assumption that network capacity would never be able to keep up with demand. Ergo, you needed an “intelligent” network to anticipate the applications' demand for bandwidth.
Once again, we are hearing the same old litany that we need optical switched networks and optical bandwidth on demand as networks will be unable to keep up with anticipated traffic growth, especially for big science.
The fact is, no evidence exists yet that big science traffic volumes, or for that matter Internet traffic volumes, are growing anywhere near what was forecast, even just a few short years ago.
[...]
Combined with the slow ramp-up of big science traffic volumes, the business case for optical bandwidth on demand is also challenged by new optical network technology, which will dramatically increase the capacity of existing optical networks by several orders of magnitude. Vendors like Alcatel-Lucent (NYSE: ALU), Nortel Networks Ltd. (NYSE/Toronto: NT), Ciena Corp. (Nasdaq: CIEN), and Infinera Corp. (Nasdaq: INFN) are developing next-generation optical networks using a new technique called coherent optical technology.
This will allow in-field upgrades of existing optical networks to carry 100 Gigabits per second per wavelength using existing 10 Gigabit wavelength channels. The vendors expect that this can be accomplished without any changes to the core of the network, such as optical repeaters, fiber, etc. Research into coherent optical technology is only in its infancy; even more dramatic jumps in network capacity will probably be announced in the coming years.
The argument for bandwidth on demand is further undermined with the advent of the new commercial “cloud” services offered by Google (Nasdaq: GOOG), Amazon, and others. Increasingly, big science and computation will move to these clouds for convenience, cost, and ease of use. Just like clouds allow the creation of shared work spaces and obviate the need for users to email document attachments to all and sundry, they will also minimize the need to ship big science and data files back and forth across NRENs. All computation and data will be accessible locally, regardless of where the researcher is located.
[..]
As predicted, new tools to thwart traffic shaping by telcos and cablecos
[Will the telcos and cablecos ever learn? Implementing traffic shaping tools and trying to block BitTorrent and other applications will eventually backfire. Hackers are already working on tools to thwart such attempts. No one questions that carriers have the right to manage traffic on their networks. But using secretive techniques without informing users will guarantee that the carriers will be saddled with some sort of network neutrality legislation. Instead, they should be focusing on traffic engineering techniques that enhance users' P2P experience by establishing BitTorrent supernodes, etc. Thankfully, a consortium of ISPs and P2P companies has been created to come up with such solutions. --BSA]
[From a posting by Lauren Weinstein of NNSquad]
As predicted, P2P extensions to thwart ISP "traffic shaping" and "RST
injections" are in development. We can assume that ISPs will attempt
to deploy countermeasures, then the P2P folks will ratchet up another
level, and ... well, we may well end up with the Internet version of
the Cold War's wasteful and dangerous Mutually Assured Destruction
(MAD). There's gotta be a better way, folks.
"The goal of this new type of encryption (or obfuscation) is to
prevent ISPs from blocking or disrupting BitTorrent traffic
connections that span between the receiver of a tracker response and any peer IP-port appearing in that tracker response,
according to the proposal.
This extension directly addresses a known attack on the BitTorrent protocol performed by some deployed network hardware."
http://torrentfreak.com/bittorrent-devs-introduce-comcast-busting-encryption-080215/
[Thanks to Matt Petach for these notes from NANOG]
2008.02.18 Lightning talk #1
Laird Popkin, Pando networks
Doug Pasko, Verizon networks
P4P: ISPs and P2P
DCIA, Distributed Computing Industry Association
P2P and ISPs
P2P market is maturing
digital content delivery is where things are heading;
content people are excited about p2p as disruptive
way to distribute content.
BBC doing production quality P2P traffic;
rapidly we're seeing huge changes, production
people are seeing good HD rollout.
Nascent P2P market pre 2007
Now, P2P has become a key part of the portfolio
for content delivery
P2P bandwidth usage
CacheLogic slide, a bit dated; with the explosion of
YouTube, the ratio is sliding back the other way,
but it's still high.
Bandwidth battle
ISPs address P2P
upgrade network
deploy p2p caching
terminate user
rate limit p2p traffic
P2P countermeasures
use random ports
Fundamental problem; our usual models for managing
traffic don't apply anymore. It's very dynamic, moves
all over the place.
DCIA has P4P working group, goal is to get ISPs working
with the p2p community, to allow shared control of
the infrastructure.
Make tracking infrastructure smarter.
Partnership
Pando, Verizon, and a bunch of other members.
There are companies in the core working group,
and many more observing.
Goal is to design a framework to allow ISPs and P2P
networks to guide connectivity to optimize traffic
flows, provide better performance and reduce network
impact.
P2P alone doesn't understand topology, and has no
idea of cost models and peering relationships.
So, goal is to blend business requirements together
with network topology.
Reduce hop count, for example.
Want an industry solution to arrive before
regulatory pressure comes into play.
Drive the solution to be carrier grade, rather
than ad-hoc solutions.
P2P applications with P4P benefits
better performance, faster downloads
less impact on ISPs results in fewer restrictions
P4P enables more efficient delivery.
CDN model (central pushes, managed locations)
P2P, more chaotic, no central locations,
P2P+P4P, knowledge of ISP infrastructure, can
form adjacencies among local clients as much
as possible.
Traditional looking network management, but pushed
to network layer.
P4P goals
share topology in a flexible, controlled way;
sanitized, generalized, summarized set of information,
with privacy protections in place; no customer or user information out, without security concerns.
Need to be flexible to be usable across many P2P
applications and architectures (trackers, trackerless)
Needs to be easy to implement, want it to be an open
standard; any ISP/P2P can implement it.
P4P architecture slide
P2P clients talk to Ptracker to figure out who to
talk to; Ptracker talks to Itracker to get guidance
on which peers to connect to which; so peers get told
to connect to nearby peers.
It's a joint optimization problem; minimize utilization
by P2P, while maximizing download performance.
At the end of this, the goal is for the customer to have a better experience; the customer gets to be happier.
Data exchanged in P4P; network maps go into Itracker,
provides a weight matrix between locations without
giving topology away.
Each PID has an IP 'prefix' associated with it in the
matrix, with a percentage weighting of how heavily
people in one POP should connect to others.
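A minimal sketch of how a pTracker might apply such a weight matrix when answering a peer request is shown below; the PID labels, weights, and data layout are illustrative assumptions, not the actual P4P message format:

    import random

    # Hypothetical iTracker-style map: WEIGHTS[src_pid][dst_pid] is the
    # preference for peers in src_pid connecting to peers in dst_pid.
    WEIGHTS = {
        "pop-east": {"pop-east": 0.7, "pop-west": 0.2, "external": 0.1},
        "pop-west": {"pop-west": 0.7, "pop-east": 0.2, "external": 0.1},
    }

    # The pTracker's view of the swarm, with peers grouped by PID.
    SWARM = {
        "pop-east": ["10.1.0.5:6881", "10.1.2.9:6881"],
        "pop-west": ["10.2.0.7:6881"],
        "external": ["203.0.113.4:6881"],
    }

    def select_peers(client_pid, count):
        """Return up to `count` peers, preferring PIDs the iTracker weights highly."""
        scored = []
        for pid, peers in SWARM.items():
            weight = WEIGHTS.get(client_pid, {}).get(pid, 0.05)  # small default
            for peer in peers:
                # Weighted random score: heavily weighted PIDs tend to rank
                # first, but some cross-PID diversity is preserved.
                scored.append((random.random() * weight, peer))
        scored.sort(reverse=True)
        return [peer for _, peer in scored[:count]]

    print(select_peers("pop-east", 3))

The point is simply that the tracker biases, rather than dictates, which peers a client sees, which matches the "guidance, not override" framing later in the talk.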
Ran simulations on Verizon and Telefonica networks.
Zero dollars for the ISPs, using Yale modelling,
shows huge reduction in hop counts, cutting down
long haul drastically. Maps to direct dollar
savings.
Results also good for P2P, shorter download times,
with 50% to 80% increases in download speeds
and reductions in download time.
This isn't even using caching yet.
P4PWG is free to join
monthly calls
mailing list
field test underway
mission is to improve
Marty Lafferty (marty@dcia.org)
Laird (laird@pando.com)
Doug (doug.pasko@verizon.com)
Q: interface, mathematical model; why not have a
model where you ask the ISP for a given prefix, and
get back weighting. But the communication volume
between Ptracker and Itracker was too large for that
to work well; needed chatter for every client that
connected. The map was moved down into the Ptracker
so it can do the mapping faster as an in-memory
operation, even in the face of thousands of mappings
per second.
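To illustrate the in-memory mapping step described in that answer, here is a toy prefix-to-PID lookup of the kind a pTracker might keep locally (the prefixes and PID names are made up; this is not the actual P4P data format):

    import ipaddress

    # Toy local copy of an iTracker map: IP prefix -> PID (POP/location label).
    PREFIX_TO_PID = {
        ipaddress.ip_network("10.1.0.0/16"): "pop-east",
        ipaddress.ip_network("10.2.0.0/16"): "pop-west",
    }

    def pid_for(address):
        """Longest-prefix match of a peer address against the local map."""
        ip = ipaddress.ip_address(address)
        matches = [net for net in PREFIX_TO_PID if ip in net]
        if not matches:
            return "external"
        return PREFIX_TO_PID[max(matches, key=lambda net: net.prefixlen)]

    print(pid_for("10.1.42.7"))    # -> pop-east
    print(pid_for("203.0.113.4"))  # -> external

Keeping this table in the pTracker's memory turns per-client classification into a local lookup rather than a round trip to the iTracker.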
The architecture here is one proof of concept test;
if there are better designs, please step forward and
talk to the group; goal is to validate the basic idea
that localizing traffic reduces traffic and improves performance. They're proving out, and then will start out the
Danny McPherson: when you do optimization, you will
end up with higher peak rates within the LAN or within
the POP; p2p isn't a big part of intradomain traffic,
as opposed to localized traffic, where it's 80-90%
of the traffic.
What verizon has seen is that huge amounts of P2P
traffic is crossing peering links.
What about Net Neutrality side, and what they might
be contributing in terms of clue factor to that
issue?
It's definitely getting attention; but if they can
stem the vertical line, and make it more reasonable,
should help carriers manage their growth pressures
better.
Are they providing technical contributions to the
FCC, etc.? DCIA is sending papers to the FCC, and
is trying to make sure that voices are being heard
on related issues as well.
Q: Bill Norton: do the P2P protocols try to infer any topological data via ping tests, hop counts, etc.? Some do try; others use random peer connections; others try to reverse engineer the network via traceroutes. One attempts to use cross-ISP links as much as possible and avoids internal ISP connections as much as possible. P4P is an addition to existing P2P networks, so this information can be used by the P2P network for whatever goals it determines. Is there any motivation from the last-mile ISP to make them look much less attractive? It seems to actually just shift the balance, without altering the actual traffic volume; it makes traffic more localized, without reducing or increasing the overall level.
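As a toy illustration of the latency-probing heuristic mentioned in that question, a client can rank candidate peers by measured round-trip time, here approximated by TCP connect time (the peer addresses are placeholders, and real clients use more elaborate measurements):

    import socket
    import time

    # Placeholder candidate peers (TEST-NET addresses, illustration only).
    CANDIDATE_PEERS = [("192.0.2.10", 6881), ("198.51.100.20", 6881), ("203.0.113.30", 6881)]

    def probe_rtt(host, port, timeout=1.0):
        """Return TCP connect time in seconds as a rough distance proxy, or None."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    ranked = []
    for host, port in CANDIDATE_PEERS:
        rtt = probe_rtt(host, port)
        if rtt is not None:
            ranked.append((rtt, f"{host}:{port}"))

    ranked.sort()  # nearest-looking peers first
    for rtt, peer in ranked:
        print(f"{peer}: {rtt * 1000:.1f} ms")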
How are they figuring on distributing this information
from the Itracker to the Ptracker? Will it be via a
BGP feed? If there's a central tracker, the central
tracker will get the map information; for distributed
P2P networks, there's no good answer yet; each peer
asks Itracker for guidance, but would put heavy load
on the Itracker.
If everyone participates, it'll be like a global,
offline IGP with anonymized data; it's definitely
a challenge, but it's information sharing with a
benefit.
Jeff--what stops someone from getting onto a tracker
box, and maybe changing the mapping to shift all traffic against one client, to DoS them? This is intended as guidance; it isn't meant to be an absolute override. The application will still have some intelligence built in. The goal will be to try to secure the data exchange and updates to some degree.
European Research Project to Shape Next Generation Internet TV
[From Dewayne Hendricks list--BSA]
[Note: This item comes from friend Charles Brown. DLH]
European Research Project to Shape Next Generation Internet TV
Brussels, 19 February 2008 - P2P-Next, a pan-European conglomerate of
21 industrial partners, media content providers and research
institutions, has received a €14 million grant from the European
Union. The grant will enable the conglomerate to carry out a research
project aiming to identify the potential uses of peer-to-peer (P2P)
technology for Internet Television of the future. The partners,
including the BBC, Delft University of Technology, the European
Broadcasting Union, Lancaster University, Markenfilm, Pioneer and VTT
Technical Research Centre of Finland, intend to develop a Europe-wide
“next-generation” internet television distribution system, based on
P2P and social interaction.
P2P-Next statement:
“The P2P-Next project will run over four years, and plans to conduct a
large-scale technical trial of new media applications running on a
wide range of consumer devices. If successful, this ambitious project
could create a platform that would enable audiences to stream and
interact with live content via a PC or set top box. In addition, it is
our intention to allow audiences to build communities around their
favourite content via a fully personalized system.
This technology could potentially be built into VOD services in the
future and plans are underway to test the system for broadcasting the
2008 Eurovision Song Contest live online.
We will have an open approach towards sharing results. All core
software technology will be available as open source, enabling new
business models. P2P-Next will also address a number of outstanding
challenges related to content delivery over the internet, including
technical, legal, regulatory, security, business and commercial
issues.”
The Future of Internet and the role of R&E networks
[Olivier Martin has put together a very good overview paper on the challenges facing the future of the Internet and the ongoing evolution and role of the R&E networks in that evolution-- BSA]
http://www.ictconsulting.ch/reports/NEC2007-OHMartin.doc
Abstract
After a fairly extensive review of the state of the commercial and Research & Education (aka academic) Internet, the issues behind the still-hypothetical IPv4 to IPv6 migration will be examined in detail. A short review of the ongoing efforts to re-design the Internet in a clean-slate approach will then be made. This will include the National Science Foundation (NSF) funded programs such as FIND (Future Internet Network Design) [1] and GENI (Global Environment for Network Innovations) [2], the European Union (EU) Framework Program 7 (FP7), but also more specific architectural proposals such as the publish/subscribe (pub/sub) paradigm and Data Oriented Network Architecture (DONA) [3].
[...]
What is slightly surprising is that, despite the fact that the need for on-demand, i.e. switched, circuits has not been clearly established, somewhat overdue efforts are being spent on developing various Bandwidth on Demand (BoD) middleware in Europe and North America, e.g. Autobahn, DRAGON, ESLEA, JIT, OSCARS, etc. Fortunately, the DICE (DANTE, Internet2, CANARIE, and ESnet) Control Plane working group is actively developing an Inter-Domain Controller (IDC) protocol, based on ESnet’s OSCARS technology. “As a result of both the DRAGON and DICE collaborations, Internet2 has recently released an early version of a turn-key dynamic networking solution, called the “DCN (Dynamic Circuit Network) Software Suite”, which includes IDC software and a modified version of the DRAGON software. Deployed as a set of web services, IDC software ensures that networks with different equipment, network technology, and allocation models can work together seamlessly to set up optical circuits”.
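To make the idea of a schedulable end-to-end circuit concrete, here is a minimal sketch of the kind of reservation such a service handles; the field names, endpoint names, and encoding are illustrative assumptions, not the actual OSCARS/IDC web-service interface:

    from datetime import datetime, timedelta, timezone

    # Hypothetical, simplified circuit reservation: guaranteed bandwidth
    # between two endpoints for a fixed time window.
    start = datetime(2008, 6, 1, 2, 0, tzinfo=timezone.utc)

    reservation = {
        "source": "site-a-border.example.net",       # placeholder endpoints
        "destination": "site-b-border.example.net",
        "bandwidth_mbps": 5000,                       # guaranteed rate
        "start_time": start.isoformat(),
        "end_time": (start + timedelta(hours=8)).isoformat(),
        "vlan": "any",                                # let each domain choose
    }

    def submit(request):
        """Stand-in for the inter-domain controller call: validate and echo."""
        assert request["bandwidth_mbps"] > 0
        assert request["start_time"] < request["end_time"]
        print("would request circuit:", request)

    submit(reservation)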
[...]
7 Tentative Conclusions
The Internet has ossified. A clean-slate re-implementation is unlikely in the medium to long term (i.e. 7-10 years). However, some new ideas may find their way into the current Internet. The most urgent problem is to solve the explosion of the routing tables, which is endangering the growth and the stability of the Internet, but this should be fairly easy to solve as the number of actors, i.e. suppliers of core Internet routers, is fairly small (e.g. Cisco, Juniper).
The next most urgent problem is the exhaustion of the IPv4 address space. Strangely enough, this is not seen as a high-priority item by many major ISPs! However, IPv6 looks unavoidable some day, if one adopts the “conventional” view that all Internet-capable devices, e.g. mobile phones, home appliances, RFIDs, etc., must be directly accessible - but is this really desirable or even sound? NAT-like solutions, even if considered “kludges”, are therefore very likely to flourish and even to slow down considerably, if not prevent, the deployment of IPv6. This process should culminate with the standardization of NATs by the IETF. Last but not least, one cannot exclude the possibility that IANA will allow the RIRs to go to an IPv4 “trading model”, thus considerably extending the lifetime of IPv4 and also facilitating the migration to IPv6 by granting much-needed additional time. An ongoing problem is the proliferation of security threats and the associated “degeneracy” of the Internet, but the time horizons of the clean-slate Internet architects and the Internet Service Providers are so different that one must be prepared to continue living with it!
More programmable network devices, e.g. routers and multiplexers, should become available; however, “Active Networks” technology is unlikely to take hold.
Affordable last-mile broadband access, including campus networks, will remain a very challenging and fast-evolving area of technology.
EU and IBM launch research initiative for Cloud Computing
http://www.gridtoday.com/grid/2102294.html
IBM, EU Launch Research Initiative for Cloud Computing
Expanding its cloud computing initiative, IBM today announced that it is leading a joint research initiative of 13 European partners to develop technologies that help automate the fluctuating demand for IT resources in a cloud computing environment.
The 17-million-Euro EU-funded initiative, called RESERVOIR -- Resources and Services Virtualization without Barriers -- will explore the deployment and management of IT services across different administrative domains, IT platforms and geographies. This cloud computing project aims to develop technologies to support a service-based online economy, where resources and services are transparently provisioned and managed.
Cloud computing is an emerging approach to shared infrastructure in which large pools of systems are linked together to provide IT services. The need for such environments is fueled by dramatic growth in connected devices, real-time data streams, and the adoption of service-oriented architectures and Web 2.0 applications, such as mashups, open collaboration, social networking and mobile commerce. Continuing advances in the performance of digital components have resulted in a massive increase in the scale of IT environments, driving the need to manage them as a unified cloud.
IBM, which has been researching technologies related to cloud computing for more than a decade, kicked off a companywide cloud computing initiative in 2007 across its server, software, services, and R&D units. In November, IBM unveiled plans for "Blue Cloud," a series of cloud computing offerings that will allow corporate data centers to operate more like the Internet through improved organization and simplicity.
"You can think of cloud computing as the Internet operating system for business and RESERVOIR as pioneering technologies that will enable people to access the cloud of services in an efficient and cost effective way," said Dr. Yaron Wolfsthal, senior manager of system technologies at the IBM Research Lab in Haifa, Israel.
To support the seamless delivery of services to consumers regardless of demand or available computing resources, RESERVOIR will investigate new capabilities for the deployment of commercial service scenarios that cannot currently be supported. These capabilities will be made possible by developing new virtualization and grid technologies.
For example, RESERVOIR could be used to simplify the delivery of online entertainment.
As the distribution of television shows, movies and other videos moves to the Web, the RESERVOIR project would work to enable a network of service providers to host the different media. Using cloud computing technology, the broadcasters can join forces to reach a service cooperation contract that enables them to tap into advanced services including content distribution, load balancing, and overlay networking across different platforms in different countries.
Any time additional services or infrastructure are needed, they could be rapidly supplied through the cloud by one of the various RESERVOIR-powered sites. For example, if there is large demand for a show hosted by a particular site, it could dynamically 'hire' additional servers and services from other sites that are not being used.
The IBM Haifa Research Lab will lead this computing project and the consortium of partners from academia and industry to pursue this effort. Research partners on this initiative from across academia and industry include Elsag Datamat, CETIC, OGF.eeig standards organization, SAP Research, Sun Microsystems, Telefónica Investigación y Desarrollo, Thales, Umea University, University College of London, Universidad Complutense de Madrid, University of Lugano and University of Messina.
RESERVOIR will be built on open standards to create a scalable, flexible and dependable framework for delivering services in this cloud computing model. The technologies developed by this project are expected to serve IBM, partners and customers in the development of modern datacenters with quantified and significant improvements in service delivery productivity, quality, availability and cost.