Tuesday, March 20, 2007

Network Architectures for High End Science and Research

[Bill Johnston, ESnet Department Head and Senior Scientist at Lawrence Berkeley National Laboratory, recently gave an excellent presentation at the ON*Vector workshop on the future demands of high end science and research on networks. His conclusions and predictions for future network architectures very much echo our own experience: we are seeing traffic flows from a relatively small number of research projects dwarf the general IP traffic that emanates from our universities and research sites. The CANARIE network architecture, and that of other high end networks like ESnet, is now increasingly being optimized to handle these flows. The question being asked is whether these new traffic patterns reflect the particular needs of the high end science and research community, or whether they are also a precursor to what we will eventually see in the larger global Internet. Some excerpts from his slide deck -- BSA]

http://dsd.lbl.gov/~wej/ESnet/files/LHC Networking(Photonics-2007-02-26).v2.ppt

On ESnet more than 50% of the traffic is now generated by the top 100 sites — large scale science increasingly dominates all ESnet traffic. While total traffic is increasing exponentially, both the size of the peak flows and the number of large flows are increasing.

On ESnet most large data transfers are now done by parallel / Grid data movers:
• In June 2006, 72% of the hosts generating the top 1000 flows were involved in parallel data movers (Grid applications)
• This is the most significant traffic pattern change in the history of ESnet
• This has implications for the network architecture that favor path multiplicity and route diversity

Increasingly these large flows are exhibiting circuit-like behaviour. Observed over one year, the duration of such a workflow / “circuit” is about three months.
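[To make the "top 100 sites" statistic concrete, here is a minimal, purely illustrative sketch of how one might compute the share of traffic carried by the busiest sites from exported flow records. The record format and numbers are invented; this is not ESnet's actual analysis tooling. -- BSA]

```python
# Illustrative only: compute the fraction of total bytes generated by the top-N sites.
from collections import defaultdict

def top_site_share(flow_records, n=100):
    """flow_records is an iterable of (site, bytes) pairs; return the top-N share."""
    bytes_by_site = defaultdict(int)
    for site, nbytes in flow_records:
        bytes_by_site[site] += nbytes
    total = sum(bytes_by_site.values())
    top_n = sorted(bytes_by_site.values(), reverse=True)[:n]
    return sum(top_n) / total if total else 0.0

# Invented example: a handful of large science flows dwarf many small campus flows.
records = [("lhc-tier1", 9_000_000), ("fusion-lab", 4_000_000), ("climate-ctr", 3_000_000)]
records += [(f"campus-{i}", 50_000) for i in range(200)]
print(f"Top 3 sites carry {top_site_share(records, n=3):.0%} of the traffic")
```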

[As a consequence, the new Science Data Network being deployed by ESnet..]

Science Data Network (SDN) core for:
– Provisioned, guaranteed bandwidth circuits to support large, high-speed science data flows
– Less expensive router/switches
– Initial configuration targeted at LHC, which is also the first step to the general configuration that will address all SC requirements
– Can meet other unknown bandwidth requirements by adding lambdas

Probably no more than one lambda will ever be needed to carry production IP traffic

The ethical implications of identity management, web services, grids etc

[Here is an excellent report by UNESCO on the ethical implications of identity management, web services, grids, the semantic web, RFID and other emerging ICT technologies. One of the major underpinnings of these new technologies is the need for identity management systems. As this report points out, there are some very significant and troubling ethical issues with respect to identity management systems in terms of privacy, security and access to public information. I am constantly amazed at the rush of university CIOs to embrace identity management systems (IdM). There is no technology more easily abused than IdM when it comes to keeping track of pesky protestors or those who might have alternate points of view. To my mind universities should be at the vanguard of fighting the deployment of such technologies, or at least of finding alternate solutions to today's typical IdM platforms. Thanks to Walter Stewart for this pointer -- BSA]

http://unesdoc.unesco.org/images/0014/001499/149992E.pdf

Friday, March 9, 2007

Excellent paper on FTTH architectures

[Here is an excellent paper on the pros and cons of various FTTH architectures. I fully agree with the authors that home run fiber with Ethernet is the optimal solution. Not only does home run fiber provide unlimited bandwidth and greater flexibility - it also enables innovative business models such as customer owned fiber and layer 1 peer-to-peer interconnects between subscribers. The only thing missing in this report is FTTH architectures for initial low take-up deployments (less than 10%). Virtually all FTTH analysis assumes greenfield deployments with close to 100% take-up. I think there are some new innovative "slot" and micro-conduit technologies that will substantially reduce the cost of deploying FTTH in existing neighbourhoods, even with initially low take-up -- BSA]


http://www.cisco.com/application/pdf/en/us/guest/netsol/ns547/c654/cdccont_0900aecd805df841.pdf


Canadian companies to finance and build Palo Alto FTTh project


[After many delays and cancellations Palo Alto is proceeding with its long anticipated Fiber to the Home project. The Royal Bank of Canada, one of Canada's largest banks, in partnership with 180 Connect, will be providing the financing. PacketFront from Sweden will be providing the network technology and 180 Connect will be doing the engineering. It is good to see the private sector, rather than the municipality, undertake the financing and construction of this open access network. 180 Connect is nominally a Canadian company - sadly, it is a story often repeated in this country: a small successful company starts in Alberta, eventually moves to the USA, but leaves its brass plate and public listing in Canada for tax purposes. All the management and staff of 180 Connect now reside in the USA. Excerpts from Palo Alto Online -- BSA]


http://www.paloaltoonline.com/news/show_story.php?id=4645


PA council pushes ahead with broadband
By 5-1 vote, council opts to work with 180 Connect firm to build and operate fiber-based network


In a long-awaited decision, the Palo Alto City Council voted 5-1 Monday night to pursue negotiations with Idaho-based 180 Connect Network Services, Inc. to develop a high-speed broadband fiber network for the city.

The project is expected to cost about $41 million, the bulk of which 180 Connect would be expected to finance. 180 Connect has said it would be working with two partners: PacketFront, Inc, with international experience in open-access broadband networks, and the Royal Bank of Canada’s Capital Market.

When the city officially announced its intention to develop a broadband network last year (after discussing the matter for nearly a decade), only two companies came forward with formal proposals.

180 Connect’s proposal adheres to the city’s requirements, Yeats said, but the company has a poor financial record and is facing two lawsuits.

In addition, its estimate of public interest in subscribing to the service may be unrealistically high, Yeats said.

Yeats told the council he does not know of any city in California that is successfully operating a high-speed fiber network.

Several members of the public addressed the council, volunteering their time to help improve the city’s project.

Vice Mayor Larry Klein agreed a committee could be formed, but then rescinded his offer when he learned it would be subject to the state’s open meeting law. Instead, Mayor Yoriko Kishimoto indicated she may put together a less formal mayor’s committee of lay experts from the community to advise the council during negotiations.

The council action will require city staff to postpone or eliminate other projects, City Manager Frank Benest said. He said he will prepare a report to inform the council how its decision will affect other city work.



Wednesday, March 7, 2007

Broadband 2.0 - UK regulator says fiber needed in the last mile

[It is nice to see that a couple of regulators in Europe "get it". Thanks to Dirk van der Woude for this pointer ---BSA]



http://www.ofcom.org.uk/research/technology/overview/ese/lastmile/

Executive Summary
The investigation considers whether there is a way forward to offering economic, ubiquitous broadband wireless access, given that previous solutions have had marginal business cases. The report time scale covers the next 10-20 years. The focus is fixed access, i.e. the local loop; mobile access is specifically excluded from the scope.

The first specific question to answer is: What is the future last mile wireless broadband requirement?

This really is a key question over the long time scale under consideration. We believe that the last mile requirement will increasingly be one in which there is a convergence of the services and platforms providing communications and entertainment to the home. We note that High Definition (HD) displays and services are set to play an increasing role in this future. Whilst we cannot predict the exact future HD services, we can take HDTV as a proxy - future requirements can then be estimated over the next 10-20 years. It was found that whilst video codecs have typically improved two-fold every five years, this fails to take into account two things: firstly, users' quality demands will increase; secondly, the amount of coding gain for a given codec depends on the quality and resolution of the source; at the highest quality and resolution, less coding gain is available. In conclusion, 10-15Mb/s of bandwidth is likely to be required, per channel, for HD services in 10-20 years time.

At first sight it may appear that the present-day ADSL service is close to what is required by HD services. This could not be further from the truth. In fact, examining a typical ADSL service advertised at 'up to' 8Mb/s reveals two immediate problems:
• the bandwidth of 8Mb/s may only be available at up to 2 miles from the exchange. Only 20% of customers live this close. At five miles from the exchange, the rate will have fallen, perhaps to only 2Mb/s or even 512kb/s
• the present day ADSL service is a contended service. BT Wholesale provide two contention levels: 20:1 and 50:1. Even a home user close to the exchange, who may access an 8Mb/s peak rate, may get only 160kb/s when the system is fully loaded.
Hence present day contended ADSL is unsuited to deliver HDTV or even standard definition TV.

In fact the requirements for HD services - at least 10Mb/s of streaming - are so vastly different from what contended ADSL presently provides that we have termed the future bandwidth requirement 'Broadband 2.0', relative to today's 'Broadband 1.0'. This issue is summarised in Figure 1.
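[As a quick back-of-the-envelope illustration of the contention arithmetic above — using only the figures quoted in the report — here is a short Python sketch comparing a fully loaded 'up to 8Mb/s' ADSL line with the 10Mb/s-plus HD requirement. -- BSA]

```python
# Back-of-the-envelope check of the report's figures: an 'up to 8 Mb/s' ADSL
# line shared at BT Wholesale's 20:1 or 50:1 contention ratios, compared with
# the ~10 Mb/s per channel the report estimates HD services will need.
ADSL_PEAK_MBPS = 8.0
HD_REQUIREMENT_MBPS = 10.0  # lower bound quoted in the report

for contention in (20, 50):
    worst_case = ADSL_PEAK_MBPS / contention  # fully loaded shared backhaul
    print(f"{contention}:1 contention -> {worst_case * 1000:.0f} kb/s per user, "
          f"{HD_REQUIREMENT_MBPS / worst_case:.0f}x short of an HD stream")
# 50:1 gives the 160 kb/s figure quoted above.
```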

One obvious question then arises - can wireless address the needs of Broadband 2.0? It would have to do so at a competitive cost, which means preferring self install indoor systems and minimizing base station numbers, perhaps by working at the lower frequencies of the UHF band. But before evaluating specific wireless technology approaches, benchmarking against access technologies in other countries was performed, with the following results:
1. It was quickly apparent that countries leading on bandwidth to the home are all using some form of fibre system. Whilst Japan/Korea are doing this with government sponsorship, Verizon and AT&T in the US have recently begun fibre roll-outs on a purely commercial basis. This is a watershed development for fibre in the local loop.
2. Interest in fibre is high in the EU too, but some operators have halted their roll-out plans due to the absence of an FCC-style forbearance on fibre unbundling within the EU.
3. Benchmarking against upcoming wireless standards showed these were biased towards small screen mobile content delivery, i.e. they are not attempting to address the challenge of the Broadband 2.0 requirements for delivery of HD services to the home.
Based on the requirements identified, the cost drivers and benchmarking, three fresh approaches to the physical technology are proposed. These are:
• Mesh and multihopping systems
• UHF/TV band working
• hybrid schemes with fibre or Gb/s 'wireless-fibre'
It was also appropriate to consider fresh approaches for
• licensing, including the licence mix
• creating a nationally tetherless last mile
• ubiquitous access, based on peering approaches
The subsequent evaluation of the technology approaches began by looking generally at the capacity-coverage trade-off involved in all point to multipoint wireless systems. We also looked in detail at WiMAX and 802.22 capacity planning. This provides a profound, if not entirely unanticipated result - the practical, economic capability of wireless, while adequate to provide today's Broadband 1.0, is very clearly inadequate for the very much more demanding Broadband 2.0. The capacity shortfall is about two orders of magnitude. For example, to provide even only an SDTV-capable uncontended streaming capacity to all subscribers would need 50x more base station resource than is needed to provide Broadband 1.0. This would either require 50x more spectrum allocation or 50x more base stations would need to be deployed. To provide HD services, this factor becomes 500x.
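[The 50x/500x factors above follow from a simple capacity-dimensioning argument: for a cell of fixed aggregate capacity, the base-station (or spectrum) resource needed scales with the busy-hour rate each subscriber must be able to draw. The sketch below illustrates the calculation with invented per-subscriber rates; the report's own planning parameters are what yield its 50x and 500x figures. -- BSA]

```python
# Generic capacity-dimensioning sketch, not the report's own model.
# All per-subscriber rates below are illustrative assumptions.

def scaling_factor(target_per_sub_mbps, baseline_per_sub_mbps):
    """How many times more cell capacity the target service needs than the baseline."""
    return target_per_sub_mbps / baseline_per_sub_mbps

BASELINE_MBPS = 0.16  # assumed busy-hour rate per subscriber for contended Broadband 1.0
for name, rate_mbps in [("uncontended SDTV stream", 3.0), ("uncontended HD stream", 12.0)]:
    factor = scaling_factor(rate_mbps, BASELINE_MBPS)
    print(f"{name}: ~{factor:.0f}x more base-station resource (spectrum or sites)")
```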
Applying this finding first to UHF and then to mesh working, in both cases we conclude that wireless cannot be expected to provide Broadband 2.0 in a cost-effective manner. It was noted further that our sister project also supports this view for frequencies over 30GHz.

Having thus concluded that neither today's contended ADSL nor wireless can provide Broadband 2.0, attention must focus on what could - and whether wireless has any contributing part to play within that solution. The Broadband 2.0 solution must be based on fibre, which must in future reach further into the access network, and potentially all the way to the customer premises. Fibre can solve the contention issues by increasing back haul capacity, and can solve the last mile issue by acting as a point to point solution alone, or as a feeder to DSL distribution technologies - thus effectively reducing the length of DSL lines required. These findings are summarised by the broadband decision tree in Figure 2.

In addition we note that the desire to provide ubiquitous broadband access to the UK will probably be best met by a peering arrangement between legacy and future, fixed and mobile devices, rather than attempting to design a single last mile access scheme.

To find the Economic Value of wireless last mile access to the UK, we built on an earlier analysis based on increasing the range of base stations. We propose a counterfactual of the status quo and a factual consisting of:
• fibre based access for urban customers at Broadband 2.0 level
• wireless based access for rural customers at Broadband 1.0 level

The resulting net benefit for wireless thus comes from rural customers alone and is estimated as an upper bound figure of £54M, which is relatively small. Further, from a social perspective we point out that there exists a clear danger of creating a new digital divide - those who can access applications which run only over Broadband 2.0 versus those who cannot.

In conclusion, this report has found that:
• the future needs of fixed broadband will be driven by a convergence of the services and platforms providing communications and entertainment to the home, and in particular the use of HD displays and services. This demands access to streaming content at 10Mb/s and above. This is so far in excess of what today's contended ADSL systems can support, that we have termed it Broadband 2.0
• an increase in back haul capability will be needed to support Broadband 2.0, irrespective of the access method used
• wireless cannot realistically compete with fibre for the provision of Broadband 2.0 over the whole of the last mile
• coverage of fibre may be below 100%, leaving some scope for wireless based Broadband 1.0 systems, probably in rural areas

Nonetheless, within Broadband 2.0, wireless does have application:
1. as a last mile feeder element, using Gb/s wireless as a fibre replacement
2. within the home, e.g. 802.11n

Finally, the key recommendations of this report are:
1. Fibre should be the foundation of a Broadband 2.0 capability for the UK.
2. In order to avoid a new digital divide, deployment of fibre would ideally extend to rural areas, although this may not be attractive as a wholly private venture.
3. In order to facilitate Broadband 1.0 in rural areas, spectrum should be made available at suitable frequencies, for example (i) within the UHF TV bands by re-allocation or sharing; or (ii) by sharing of underused cellular or military spectrum at UHF.
4. With respect to DSO spectrum, market forces are unlikely to promote rural broadband access, so an alternative approach may need to be considered.
5. In licence exempt spectrum, where technology neutrality is desired, both codes of practice and polite protocols should be pursued in preference to application specific bands.
6. Given that home wireless usage is likely to increase and the traffic is likely to move over to mainly streaming or real-time, it would seem appropriate to reconsider the likely amount of licence exempt spectrum required, given that some estimations performed recently have considered only bursty data traffic.
7. Both service and platform convergences are key trends in the broadband future. In other words, the distinction between fixed, portable and mobile devices and services is becoming increasingly blurred. Whilst this report has concentrated on fixed wireless broadband, we recommend that future studies enable an integrated evaluation of technology, licensing and spectrum considerations for broadband wireless.

Structural separation on cellular networks?


[To date, cellular network architectures can be characterized as vertical silos where there is very strong coupling between the network architecture and services. 3G systems today are the quintessential example of "walled gardens". However, slowly we are starting to see some chinks in that armour with the new cell phones that integrate WiFi and 3G, and new services such as FON, muni WiFi and pico GSM. This will gradually enable the structural separation of the network from services, which in my opinion will be the defining characteristic of 4G systems. This structural separation will enable an explosion of new wireless services and applications, similar to what we saw happen with the Internet. The Internet was the first network architecture that separated the network and applications from the underlying infrastructure. You would think there would be a lesson here. But despite the evidence and data from financial analysts and economists, the telecom industry still seems to be wedded to the traditional vertical silo model with such technologies as IMS, IPsphere and NGN. Some excerpts from Om Malik's column -- BSA]


http://gigaom.com/2007/03/06/bt-to-invest-in-fon/


BT to Invest in FON?


FON, the Spanish share-your-Wi-Fi services company, is close to announcing a new round of funding that could total as much as 10 million euros (a shade over $13 million).

While some of its existing investors – Index Ventures, Skype and Google - are coming back with more cash, the word from telecom circles in Europe is that British Telecom is going to invest in the wireless router company.

FON is the latest start-up by Spanish telecom entrepreneur, Martin Varsavsky, who has made a name for being a thorn in the side of incumbents. When I visited BT last summer while reporting a story for Business 2.0, the senior management of the British incumbent carrier was pretty bullish on the whole notion of dual-mode phones and municipal wireless.

Dual-mode phones are phones that have Wi-Fi and cellular capabilities in one single handset, and in an ideal world there is a seamless handoff between the two networks once you are in range of a Wi-Fi network.

While this does seem like a distant dream, eventually it will happen. Nokia, for instance, is pretty confident that most of its mobile phones in the near future will have Wi-Fi capabilities built into them. BT can offer voice (and other services) over Wi-Fi when in range of a FON node, and when out of range it can switch to the Vodafone network for cellular access.
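[A minimal sketch of the handset-side policy being described — prefer a known Wi-Fi hotspot, fall back to cellular — under the assumption of a simple SSID check; real dual-mode handover (e.g. UMA or SIP based) is considerably more involved. -- BSA]

```python
# Toy bearer-selection policy for a dual-mode handset; purely illustrative.
def choose_bearer(visible_ssids, trusted_hotspots, cellular_available=True):
    """Carry the call over Wi-Fi if a trusted (e.g. FON) hotspot is in range."""
    if set(visible_ssids) & set(trusted_hotspots):
        return "wifi"
    return "cellular" if cellular_available else "no service"

print(choose_bearer(["FON_cafe", "home-ap"], ["FON_cafe"]))  # -> wifi
print(choose_bearer(["random-ap"], ["FON_cafe"]))            # -> cellular
```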

T-Mobile USA is trying something similar up in the U.S. Northwest, though it is through a customer’s own Wi-Fi network. Free.fr of France has also been playing with a similar idea and has bundled FON-like software in its residential gateways.

Too expensive to meter: The influence of transaction costs on communications


[Another excellent paper by the well-known iconoclast and debunker Andrew Odlyzko and his co-author David Levinson. In this paper they refute many of the arguments for fine-scale charging, which underlies the architecture of the IP Multimedia Subsystem (IMS), Next Generation Networks (NGN) and the old bugaboo QoS. Further substance to this argument can be found in the article below from the NY Times on charging for WiFi access at your favourite coffee shop. Thanks to Andrew Odlyzko and Dave Macneil for these pointers. --BSA]

Too expensive to meter: The influence of transaction costs in transportation and communication

http://www.dtc.umn.edu/~odlyzko/doc/metering-expensive.pdf

Abstract. Technology appears to be making fine-scale charging (as in tolls on roads that depend on time of day or even on current and anticipated levels of congestion) increasingly feasible. And such charging appears to be increasingly desirable, as traffic on roads continues to grow, and costs and public opposition limit new construction. Similar incentives towards fine-scale charging also appear to be operating in communications and other areas, such as electricity usage. Standard economic theory supports such measures, and technology is being developed and deployed to implement them. But their spread is not very rapid, and prospects for the future are uncertain. This paper presents a collection of sketches, some from ancient history, some from current developments, that illustrate the costs that charging imposes. Some of those costs are explicit (in terms of the monetary costs to users, and the costs of implementing the charging mechanisms). Others are implicit, such as the time or the mental processing costs of users. These argue that the case for fine-scale charging is not unambiguous, and that in many cases it may be inappropriate.

[...]


From the NY Times, March 4, 2007
Digital Domain
What Starbucks Can Learn From the Movie Palace
By RANDALL STROSS

WI-FI service is quickly becoming the air-conditioning of the Internet age, enticing customers into restaurants and other public spaces in the same way that cold "advertising air" deliberately blasted out the open doors of air-conditioned theaters in the early 20th century to help sell tickets.

Today, hotspots are the new cold spots.

Starbucks became the most visible Wi-Fi-equipped national chain when it began offering the service in 2002. Now, at more than 5,100 stores, Starbucks offers Internet access "from the comfort of your favorite cozy chair."

Before you pop open your laptop, however, you need to pull out your credit card. Starbucks and its partner, T-Mobile, charge $6 an hour for the "pay as you go" plan.

Metering and charging for a service, of course, is the prerogative of any business owner in a free market. One will always find entrepreneurs willing to try new ways to profit by erecting tollbooths in front of facilities that had been freely accessible.

In the past, this took the form of coin-operated locks on bathroom stalls. (You may have first encountered these at a moment when you were least ready to praise the inventor's ingenuity.)

Today, the outer frontier of pricing innovation can be found at the Dallas-Fort Worth International Airport, where some electrical outlets are accompanied by a small sign: "To Activate Pay $2 at Kiosk."

The restaurants' predecessors, the movie theater owners of almost a century ago, understood that not every amenity, every service, every offering must have a separate price tag attached.

Panera Bread, which has more than 900 Wi-Fi-equipped sandwich and bakery stores, has set itself apart from its contemporaries by upholding the old-fashioned spirit of those bygone theater owners who never stinted in their efforts to make public space inviting.

The grand movie palaces did not have to show the revenue-enhancing potential of an ornamental gold cornice or plaster pilaster. So, too, at Panera Bread, where its fireplaces do not have to demonstrate a monetary payback to justify their place in the stores.

Neither does Wi-Fi. Neil Yanofsky, Panera's president, said that no cost accounting had been done on its service, which is free. The rationale relates to ambience: "We want our customers to stay and linger."

A Panera cafe does half of its business at lunchtime - there is little lingering then. But before and after the lunch rush, the restaurant addresses what it refers to internally as "the chill-out business," which constitutes a not-insignificant 15 to 20 percent of its revenue.

Panera has no interest in rushing these customers out - the longer they stay, the greater the likelihood that resistance to the aroma of freshly baked muffins will crumble. Free, unmetered Wi-Fi is one way the restaurant sends an unambiguous signal: Stay as long as you like.

[..]

Free fiber to the home in France


[While North America continues to debate the pros and cons of net neutrality the rest of the world is racing to solve the real problem that underlies the debate of network neutrality - and that is solving the bandwidth bottleneck in the last mile on an open, competitive, non-discriminatory basis.

A good example is Iliad in France, which has signed a deal with the Paris municipal government to do a city-wide roll-out of FTTH. In buildings where they install fiber, Iliad will offer a free introductory service which will include low speed Internet access, emergency telephone services and basic cable. For 30 euros a month customers will get 50 Mbps Internet service, free telephony within France and an HDTV cable TV package.

Iliad represents a growing trend where new innovative companies are able to identify and capitalize on new horizontal business models for FTTH, rather than the traditional vertical monopoly of the telcos and cablecos. This reflects a similar trend we are seeing in the UK, Amsterdam, Burlington VT, and elsewhere, where companies are specializing solely in the physical transport of FTTH and letting other companies deliver cable TV, telephony and Internet services - many of which will be free, such as Inuk, Joost, etc.

It is also good to see that Iliad recognizes PON as a captive technology and instead is building a home run architecture. Thanks to Robert Shaw for this pointer -- BSA]


http://www.iliad.fr/finances/2006/FREE_09-2006.pdf

French Telecom Regulator's excellent comments on Fiber to the Home


[Mme Gauthey is a member of the board of ARCEP - the French telecom regulator. I am told she knows just about everything there is to know about next generation networks and is the person ensuring continued competition on FTTH in France. Her comments in this article reflect an excellent understanding of the roles of regulators, governments, municipalities and carriers in terms of FTTH. I wholeheartedly agree with everything she says, especially in relation to passive infrastructure and PON. Although PON is called "passive optical networking" it is anything but. As a layer 1 technology it is a recipe to ensure customer "lock-in" with a carrier. (With home run fiber, however, it can serve a useful purpose at layer 2.) Some excerpts from the Global Telecoms Business article. Thanks to Dirk van der Woude for this pointer --BSA]


http://www.cisco.com/web/FR/pub_sector/newsletter/numero7/7_questions.html

Fibre to the home

Operators around the world are installing or at least considering fibre to the home. Gabrielle Gauthey, a board member of French regulator ARCEP, considers network strategies and the different investment models needed to make fibre infrastructure viable. There is a risk of creating new monopolies in local fibre infrastructure, she warns.
Fibre, a breakthrough but a danger of a new monopoly

Gabrielle Gauthey: operators can share a passive optical network, ducts and dark fibre, but that risks creating local monopolies.

We are on the eve of an evolution that is essential, and somewhat revolutionary in the history of telecommunications — the transition from broadband to very high-speed broadband, made possible by the use of fibre in the access network.

This transition will have very important consequences for industry, operators, local governments, as well as for the development of the knowledge economy and the competitiveness of our companies. What are the stakes, the opportunities, but also the risks of this coming evolution? What role can government take in guiding and facilitating this transition?

Stakes and risks of deploying fibre
Before raising the question of government involvement, we need to weigh the stakes of fibre access networks for our country. We are talking about the local loop of tomorrow, which doubtlessly will eventually replace the copper access network. But we are no longer living in the same conditions of the 1970s, when the copper local loop was installed and financed through a state-run monopoly. So it is necessary to devise other investment models, and to anticipate the potential risks they represent for broadband.
Initial assessments show that the cost of deploying a nationwide FTTH network would require a total investment of several tens of billions of euros, spread out over more than 10 years. It is quite unlikely that one operator could alone build such a nationwide project in a reasonable timeframe.

Passive infrastructure accounts for the bulk — 70% to 80% — of network deployment costs. Civil engineering costs are particularly onerous — more than 50% in urban areas — and so are costs related to cabling buildings. But the cost of fibre is low, and the cost of active equipment will continue to decrease with mass deployments.

Based on the assessments, profitability could be reached not only in very dense zones, but also in cities with a medium density, only if there is a high degree of passive network sharing.
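[A small illustration of why this matters, using the cost shares quoted above; the euro figures are invented for the example, but the split between a shareable passive layer and a per-operator active layer is the point. -- BSA]

```python
# Illustrative FTTH cost split: passive infrastructure (ducts, civil work,
# fibre) is 70-80% of the cost and can be shared; each operator still pays
# for its own active equipment. Euro figures are invented for illustration.
TOTAL_COST_PER_HOME_EUR = 1000.0  # assumed all-in cost per home passed
PASSIVE_SHARE = 0.75              # mid-point of the 70-80% range quoted above

def cost_per_operator(n_operators_sharing):
    """Per-home cost borne by each operator when the passive layer is shared."""
    passive = TOTAL_COST_PER_HOME_EUR * PASSIVE_SHARE
    active = TOTAL_COST_PER_HOME_EUR * (1 - PASSIVE_SHARE)
    return passive / n_operators_sharing + active

for n in (1, 2, 4):
    print(f"{n} operator(s) sharing the passive layer: "
          f"{cost_per_operator(n):.0f} euros per home passed")
```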
Sharing passive infrastructure appears to be the key to removing entry barriers and favouring an economical deployment of high-speed broadband. There are two ways of implementing this sharing: either by using existing infrastructure or by co-investment and/or coordination when networks are to be built. It would make sense for the first operator installing fibre to use ducts big enough, and in sufficient numbers, to accommodate the fibres of other operators.
Which investment model?
It is necessary to understand that the passive network involves a long-term investment with a long-term rate of return — more than 20 years. This can pose a problem for the private operator who derives its profit from an active network and needs to turn a short-term profit, in three to five years.

Operators must control and own their active network equipment because it's only at the active level that operators can differentiate themselves and be competitive. However, private operators can easily share the passive network — ducts and dark fibre.

The ways of implementing network sharing can lead to very different investment models, which might risk in some cases recreating monopolies, even local ones.

In one model, the operator is vertically integrated and installs a closed network, or a slightly open network with only resale offers. This is the preferred model of incumbents in the US.

In another model, long-term investors, which are associated if necessary with local governments, adopt an open-access model from the start, selling passive network capacity without necessarily becoming themselves operators.

The passive network is made available to operators that want to sell very high-speed services. These operators install their active equipment upstream just as operators did with unbundled copper networks to deliver DSL services. This model is used in northern Europe and also by some local government projects in the US.

There are basically two major types of network architectures: point-to-point and PON, or passive optical network. PON is a point-to-multipoint — tree-and-branch — architecture in which the same operator manages all the active equipment. Point-to-point networks, on the other hand, allow several operators to install their own equipment on dark fibre and at a subscriber's house. Whatever the choice of architecture may be, it is of major importance that it should allow the sharing of passive infrastructure and the implementation by competitors of their own active equipment, in order not to reduce the competitiveness of the market.

What actions can government take?
It's clear that government — at the central and local level — has a decisive role to play in facilitating the upgrade of the future local loop. Central and local government authorities must first reduce the entry barriers for all players by encouraging the sharing of civil work and the cabling of buildings.

By taking an interest in the digital development of their jurisdictions some years ago, authorities — in departements for the most part, but also in municipal conurbations — discovered how constructive their management of passive infrastructure could be. Their role until now has mainly involved the connection via fibre of central offices or distribution frames to allow all operators to reach, on a non-discriminatory basis, the unbundled local loop and business parks.

A number of local governments have begun to turn their attention to the access network, where they can play an important role in encouraging the sharing of civil work. First, they have the important task of gathering information about civil work and the existing telecommunication networks in their territories. Who is better placed than the local governments to collect this important geographic information? Secondly, they must govern these public assets well and be particularly vigilant that the public ownership of certain infrastructure remains public and accessible to operators. Lastly, the local governments are the best placed to encourage deployments, by offering their rights of way at a fair price, and also by requiring operators to install networks jointly and even to build in reserve capacity for third-party operators.

Some have already decided to go beyond a simple policy of leasing ducts and have launched public access networks, similar to those of their US or European counterparts, such as in Vienna and Amsterdam. Their aim is to accelerate the participation of operators, without recreating local monopolies, by lowering the investment burden of private operators in the sharable, non-discriminatory part of the network.

Even though deployments are just beginning, it is unrealistic to think that all operators have an equal start. France Telecom has a large amount of spare duct capacity, dating from when it was a state monopoly, which the operator can use to significantly reduce its FTTH deployment costs. This situation makes it possible for these ducts to be made available, on a transparent, non-discriminatory and cost-oriented basis, to all operators deploying very high-speed broadband.

www.globaltelecomsbusiness.com


HEAnet and i2CAT demonstrate network virtualization of pure optical links with UCLP


HEAnet, Ireland’s National Education and Research Network, CTVR and Catalonia’s i2CAT Demonstrate First-Ever User-Controlled Provisioning of Pure Optical Links On-Demand Using Network Virtualization

Networks Dynamically Reconfigure Themselves to Economically Connect Large Data Flows

Dublin, Ireland/Hayward, CA – February 21, 2007 – A consortium of network innovators proved today that a user, without prior knowledge of a network or an understanding of how to configure it, could create their own private optical network on demand, via a web page and Web Services interface.

Additionally, the consortium’s members demonstrated the ability to automatically detect and re-route large data flows between countries along more optimal paths to improve customer service and provide greater network efficiency.

These announcements were made today by HEAnet, Ireland’s National Research and Education Network (NREN); Trinity College Dublin’s Centre for Telecommunications Value Chain Research (CTVR); Barcelona’s i2CAT, a non-profit organization that promotes research and innovation for advanced Internet technology; and Glimmerglass, developer of optical switching solutions at the core of the new optical Internet.

Using User Controlled LightPath (UCLP) software to control two Glimmerglass Intelligent Optical Switches via a web interface, a HEAnet user created a Gigabit Ethernet circuit from the i2CAT facility in Barcelona to Trinity College in Dublin, using a link from GÉANT over a L2 MPLS Virtual Leased Line network. In this way, the group demonstrated an agile optical Internet in which users and communities can define and determine network connections and desired bandwidth on demand as needed.
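[For readers unfamiliar with what "Web Services provisioning" looks like from the user side, here is a conceptual sketch of a request for an end-to-end circuit. The endpoint, operation and parameter names are hypothetical — this is not the actual UCLPv1.5 interface. -- BSA]

```python
# Conceptual sketch only: lightpath provisioning exposed as a web service.
# The URL, request fields and response format below are hypothetical.
import requests  # assumes the 'requests' package is installed

PROVISIONING_ENDPOINT = "https://uclp.example.net/services/lightpath"  # hypothetical

def request_circuit(source, destination, bandwidth_mbps, auth_token):
    """Ask the provisioning service for an end-to-end circuit between two endpoints."""
    payload = {
        "source": source,              # e.g. "i2CAT-Barcelona-GE1"
        "destination": destination,    # e.g. "CTVR-Dublin-GE4"
        "bandwidth_mbps": bandwidth_mbps,
        "auth_token": auth_token,
    }
    response = requests.post(PROVISIONING_ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["circuit_id"]

# Example (not runnable against a real service):
# circuit_id = request_circuit("i2CAT-Barcelona-GE1", "CTVR-Dublin-GE4", 1000, "token")
```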

“This is the first successful demonstration of using optical switches with UCLPv1.5 software,” said Eoin Kenny, project manager, HEAnet. “It’s important because, previous to this, UCLPv1.5 software had only been used with traditional SDH/SONET transmission or Ethernet equipment. This demonstration enabled a user to automatically request an optical link as part of a complete end-to-end Gigabit Ethernet circuit from i2CAT in Barcelona to CTVR in Dublin.”

In addition to i2CAT developing the UCLPv1.5 software to control the Glimmerglass all-optical switches, CTVR was able to demonstrate how its IP flow software could create pure optical links on demand using UCLP’s Web Services and the Glimmerglass switches, by detecting IP flows which were then switched to alternative optical paths. This technique is often referred to as optical IP switching (OIS).

Optical IP switching is a pioneering technique developed at CTVR that can be embedded in IP routers. It analyzes and correlates IP packets, and if IP flows appear with specific characteristics the router establishes an optical cut-through path between its upstream and downstream neighbors, requesting the upstream node to place all the packets belonging to the flow into the new path. The newly generated trail bypasses the IP layer of the router, as the packets transparently flow from the upstream to the downstream neighbor.
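[The essence of optical IP switching can be sketched in a few lines: meter bytes per flow at the router and, when a flow crosses a threshold, request a cut-through optical path so that subsequent packets bypass the IP layer. The threshold and the path-setup call below are placeholders, not CTVR's actual implementation. -- BSA]

```python
# Simplified flow-detection trigger for optical IP switching; illustrative only.
from collections import defaultdict

FLOW_BYTES_THRESHOLD = 500_000_000  # assumed: promote flows larger than ~500 MB

flow_bytes = defaultdict(int)
cut_through_flows = set()

def setup_cut_through(flow_id):
    """Placeholder for the call that would provision an optical path (e.g. via UCLP)."""
    cut_through_flows.add(flow_id)
    print(f"cut-through optical path established for flow {flow_id}")

def on_packet(src, dst, length_bytes):
    flow_id = (src, dst)
    if flow_id in cut_through_flows:
        return  # packets for this flow already bypass the IP layer
    flow_bytes[flow_id] += length_bytes
    if flow_bytes[flow_id] > FLOW_BYTES_THRESHOLD:
        setup_cut_through(flow_id)
```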

About HEAnet
HEAnet is Ireland’s national education and research network, providing high quality Internet services to students and staff in Irish universities, institutes of technology and the educational community, including primary and post-primary schools. HEAnet today is one of the largest Internet Service Providers in the country, providing a high-speed national network with direct connectivity to other academic and research networks in Ireland, Europe, the USA and the rest of the world.

About CTVR
The Centre for Telecommunications Value-Chain Research (CTVR) is an Irish government initiative established in 2004. It brings together a team of 100 researchers operating in 8 Universities and third level institutes working on key problems in wireless and optical networking. The centre aims to carve out an international leadership position in industry-guided research, which redefines key elements of telecommunications systems, architectures and networks, and the value chains used to design, build, market and service them.



About i2CAT
i2CAT is a non-profit Foundation aimed at fostering research and innovation supporting advanced Internet technology. Based in Barcelona, Spain, i2CAT promotes deployment of services and wideband applications from private and public research companies supporting the Catalunya region. The i2CAT model aims to make Internet research and innovation accessible to the whole of society through collaboration between the public sector, businesses and research groups within universities and the educational world.

About Glimmerglass
Glimmerglass is developing solutions at the core of the new optical Internet. The company’s Intelligent Optical Switches cost-effectively create, monitor and protect advanced communication services. Glimmerglass products manage physical-layer fiber connections that carry IP over DWDM, 10 Gigabit Ethernet, 40 Gigabit SONET/SDH, FTTx, video, RF over fiber and more. System operators of commercial networks, mission-critical defense systems, advanced optical testing facilities and high-performance research networks worldwide rely on Glimmerglass to remotely and automatically configure optical fiber. Visit www.glimmerglass.com.

###
Glimmerglass is a trademark of Glimmerglass Networks, Inc.
All other trademarks and service marks are the property of their respective holders.

About UCLP
The User Controlled LightPath (UCLP) program was initiated by CANARIE with additional funding from Cisco, and software development was undertaken by a consortium of partners including the Communications Research Centre, University of Ottawa, Université du Québec à Montréal, Inocybe, i2CAT, HEAnet and Solana Networks, amongst others. For more details on UCLP please see www.uclp.ca, www.uclpv2.ca, and www.uclpv2.com. Commercial versions of UCLP are under development; see, for example, www.inocybe.ca.

Cyber-infrastructure for undersea instruments


[An excellent example of the use of cyber-infrastructure: a Service Oriented Architecture using web services and workflow for the management and control of undersea instruments and networks. Of particular note is that Alcatel, which is building the NEPTUNE undersea backbone fiber network, will also be using SOA for the management and control of the network facilities. Because both the network and the instruments can now be represented as web services, exciting new possibilities open up in terms of providing user control of data, instruments and networks. For more examples please see www.uclp.ca and for commercial products please see http://www.inocybe.ca/ -- BSA]

http://www.neptunecanada.ca/news/documents/NC_Newsletter_2007Feb14.pdf

In September 2005, DMAS was awarded a grant from the CANARIE Intelligent Infrastructure Program (CIIP) for a project called: “Toward a Service Oriented Architecture and Workflow Management for VENUS and NEPTUNE”. The goals of this project were to provide these two high profile Canadian Cabled Ocean Observatories with an integrated scientific instruments management system, the capability to deliver event information to users, as well as integrated access to distributed compute and data resources through the use of innovative technologies.

The initial draft of the Service-Oriented Architecture (SOA) was proposed by IBM Canada, one of the largest R&D investors in Canada. This architecture was later refined by the Data Management Archive System (DMAS) team. It is worth mentioning that this is also the approach taken by Alcatel for the interaction with the NEPTUNE backbone.

This style of information systems architecture enables the NEPTUNE Canada and VENUS DMAS to be built by combining loosely coupled and interoperable services. The IBM Enterprise Service Bus (ESB), with its underlying messaging system WebSphere MQ, provides features such as point-to-point data delivery and publish-and-subscribe messaging to implement the DMAS SOA.

A wide range of Web Services has been developed by the DMAS team and it will continue to add more, especially in the area of instrument control and monitoring. One of the most interesting Web Services offered is the device service, which allows remote interaction with the instruments under water using either a web page on the NEPTUNE Canada web site or custom applications written by scientists or engineers. The Science Instrument Interface Modules (SIIM) - now called Junction Boxes - can be controlled from a simple web interface allowing authorized engineers to open or close a port to which instruments are connected. Other Web Services deliver observatory metadata or sample data to users.
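[To give a flavour of what instrument control as a web service can look like from the user side, here is a hedged sketch of a client call to open or close a junction box port. The service URL and operation names are hypothetical, not the actual DMAS interface. -- BSA]

```python
# Hypothetical client for a device web service; illustrative only.
import requests  # assumes the 'requests' package is installed

DEVICE_SERVICE = "https://dmas.example.org/device-service"  # hypothetical URL

def set_junction_box_port(box_id, port, enabled, credentials):
    """Ask the device service to open (enabled=True) or close a junction box port."""
    response = requests.post(
        f"{DEVICE_SERVICE}/junction-box/{box_id}/port/{port}",
        json={"enabled": enabled},
        auth=credentials,  # only authorized engineers may switch ports
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example (not runnable against a real service):
# set_junction_box_port("JB-07", 3, True, ("engineer", "secret"))
```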

Over the last few months, the DMAS team has developed a number of new features, some of which are already in production while others are still in the ‘incubator’ and will be released in the coming weeks. Karen Tang, one of the DMAS developers, has built an example of a scientific workflow using the Kepler software, a product built on top of the Ptolemy II system of the University of California, Berkeley. The example, illustrated below, performs an oxygen sensor data analysis using DMAS Web Services. Darry Bidulock, another team member, has just completed the new gallery to show the VENUS camera images and the hydrophone data. This allows scientists around the world to view pictures, movies and spectra, or listen to sound recorded by instruments, with a delay of just a few minutes. The user interface ‘à la YouTube’ is very easy to use and will be the basis for all displayable products of NEPTUNE Canada and VENUS. These products will also soon be available using RSS feeds.

Yigal Rachman, our data acquisition developer, is now working on a top of the line data acquisition framework (DAF) which should be in place this summer. The DAF presents a range of new challenges such as the support for thousands of sensors and the direct access by engineers and scientists using Web Services.

All these new features would not have been possible without the support of the CANARIE project, which helped us build the underlying infrastructure. As a result of CANARIE’s support, the newest technologies available in the IT world, and the work accomplished in 2005–06, NEPTUNE Canada will establish a leading position in Big Science.


Addressing the Future Internet

For more information on this item please visit my blog at
http://billstarnaud.blogspot.com/
-------------------------------------------

[Excellent commentary by Geoff Huston on the future of the Internet -- BSA]

http://www.circleid.com/posts/addressing_the_future_internet/

Addressing the Future Internet
Feb 09, 2007
Posted by Geoff Huston

The National Science Foundation of the United States and the Organisation for Economic Co-operation and Development held a joint workshop on January 31, 2007 to consider the social and economic factors shaping the future of the Internet. The presentations and position papers from the Workshop are available online.

Is Internet incrementalism a sufficient approach to answer tomorrow’s needs for communications? Can we get to useful outcomes by just allowing research and industry to make progressive marginal piecemeal changes to the Internet’s operational model? That’s a tough question to answer without understanding the alternatives to incrementalism. It’s probably an equally hard question to attempt to phrase future needs outside of the scope of the known Internet’s capabilities. It’s hard to quantify a need for something that simply has no clear counterpart in today’s Internet. But maybe we can phrase the question in a way that does allow some forms of insight on the overall question. One form of approach is to ask: What economic and social factors are shaping our future needs and expectations for communications systems?

This question was the theme of a joint National Science Foundation (NSF) and Organisation for Economic Co Operation and Development (OECD) workshop, held on the 31st January of this year. The approach taken for this workshop was to assemble a group of technologists, economists, industry, regulatory and political actors and ask each of them to consider a small set of specific questions related to a future Internet.

Thankfully, this exercise was not just another search for the next “Killer App”, nor a design exercise for IP version 7. It was a valuable opportunity to pause and reflect on some of the sins of omission in today’s Internet and ask why, and reflect on some of the unintended consequences of the Internet and ask if they were truly unavoidable consequences. Was spam a necessary outcome of the Internet’s model of mail delivery? Why has multi-lingualism been so hard? Is network manageability truly a rather poor afterthought? Why has Quality of Service proved to be a commercial failure? Can real time applications sit comfortably on a packet switched network that is dominated by rate adaptive transport applications? Why are trust and identity such difficult concepts in this particular model of networking? How did we achieve this particular set of outcomes with this particular Internet framework? Can we conceive of a different Internet model where different outcomes would’ve happened as naturally?

[snip]


-------------------------------------
To SUBSCRIBE:
send a blank e-mail message to
news-join@canarie.ca

To UNSUBSCRIBE:
send a blank email message to
news-leave@canarie.ca
-------------------------------------

These news items and comments are mine alone and do not necessarily reflect those of the CANARIE board or management.

-----------
Bill.St.Arnaud@canarie.ca
Bill.St.Arnaud@gmail.com
www.canarie.ca/~bstarn
skype: pocketpro
SkypeIn: +1 614 441-9603

International Grid battles Malaria



[Excerpts from www.gridtoday.com -- BSA]

From Sheffield to Singapore, International Grid Battles Malaria


Malaria kills more than one million people each year, most of them
young children living in Africa. Now physicists in the UK have shared
their computers with biologists from countries including France and
Korea in an effort to combat the disease. Using an international
computing Grid spanning 27 countries, scientists on the WISDOM project
analysed an average of 80,000 possible drug compounds against malaria
every hour. In total, the challenge processed over 140 million
compounds, with a UK physics Grid providing nearly half of the
computing hours used.

The computers are all part of EGEE (Enabling Grids for E-sciencE),
which brings together computing Grids from different countries and
disciplines. Up to 5000 computers were used simultaneously, generating a total of 2000 GB (2,000,000,000,000 bytes) of useful data.

Most of the UK's contribution came from GridPP, a computing Grid
funded by the Particle Physics and Astronomy Research Council and
built to process data from the world's largest particle physics
accelerator, due to be turned on later this year in Geneva. Professor
Tony Doyle, the GridPP Project Leader, explains, "Although our Grid
was built to analyse particle physics data, when we have spare
capacity we're able to share it with other scientists worldwide. In
this case, we're happy to have contributed more than two million hours
of computer time to help find drugs against malaria."

This challenge of the international WISDOM (World-wide In Silico
Docking On Malaria) initiative ran between 1 October and 31 January.
Its analysis of possible docking arrangements between drug compounds
and target proteins of the malaria parasite will greatly speed up the
search for drugs against malaria. WISDOM uses in silico docking, where
computers calculate the probability that molecules will dock with a
target protein. This lets researchers rule out the vast majority of
potential drugs, so they can concentrate on the most promising
compounds in laboratory tests. As well as speeding up the screening
process, this reduces the cost of developing new drugs to treat
diseases.
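[The screening step itself is conceptually simple: rank the library by docking score and keep only a small fraction for the wet lab. Here is a toy illustration with random stand-in scores rather than real docking output. -- BSA]

```python
# Toy virtual-screening shortlist; scores are random stand-ins, not docking results.
import random

def shortlist(scored_compounds, keep_fraction=0.001):
    """Return the top-scoring compounds, discarding the vast majority."""
    ranked = sorted(scored_compounds, key=lambda item: item[1], reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

library = [(f"compound-{i}", random.random()) for i in range(100_000)]
hits = shortlist(library)
print(f"{len(hits)} of {len(library)} compounds forwarded to laboratory testing")
```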

"The impact of WISDOM goes much beyond malaria," declared Doman Kim,
Director of the Bioindustry and Technology Institute at Jeonnam
National University in Korea. "The method developed can be extended to
all diseases and this opens exciting industrial perspectives. Until
now, the search for new drugs in the academic sector was done at a
relatively small scale whereas the WISDOM approach allows a systematic
inquiry of all the potentially interesting molecules."


A second computing challenge targeting avian flu in April and May 2006
has significantly raised the interest of the biomedical research
community. Laboratories in France, Italy, Venezuela and South Africa
proposed targets for the second challenge against neglected diseases.
The WISDOM researchers plan a further data challenge on avian flu
later in 2007.

In addition to the computing power of the EGEE Grid (of which GridPP
is a part), AuverGrid, EELA, EUChinaGRID, EUMedGRID and South East
Asia Grid all provided additional resources. The Embrace and
BioinfoGRID projects are contributing to the development of a virtual,
in silico screening pipeline that will allow researchers to select,
for any given target protein, the most active molecules out of the
millions of compounds commercially available.



Internet to (almost) die in 2007



[Here are a couple of interesting articles in Forbes and the Wall St Journal based on a Deloitte & Touche prediction that global traffic will exceed the Internet's capacity as soon as this year. I think there is general consensus that we are going to see an explosion of video traffic over the Internet because of exciting new applications such as Joost, Inuk, YouTube, BitTorrent, Zudeo, etc. and this will create a traffic jam over our existing "dirt road" last mile networks of DSL and cable (although in theory cable will have a delayed reaction to this oncoming exaflood of traffic because of its greater bandwidth capacity). The need for higher speed optical networks in the last mile is ever more compelling. But the big question is how to pay for it. There are those on the right who argue that network neutrality requirements will inhibit carriers from making the necessary investments to build fiber to the home, and that broadband should therefore be completely deregulated in order for the carriers to prioritize certain types of premium traffic. Those on the left argue that municipalities should be building fiber to the home, as broadband should be a basic infrastructure like roads and sewers.

I think they are both wrong. If our experience in Ottawa is any guide, cities do not move on Internet time when it comes to broadband deployment. One of our condominium fiber contractors has been waiting over 9 months (and is still waiting) for a right of way permit to build low cost fiber to homes and businesses in the Ottawa area. In any event the cost of building fiber networks to all citizens would be prohibitive for most municipalities, which are already struggling under incredible debt and tax loads. However, cities can play a critical role by renting access to conduit, as for example in Montreal and Barcelona.

On the other hand I don't believe there will be any demand or business case for prioritized traffic (with or without network neutrality) that will justify the business case of fiber to the home for the carriers, despite Verizon's rosy FiOS predictions. With new services like Inuk, Joost and AppleTV it is increasingly unlikely that the telco or cableco will be able to capture any portion of the video distribution value chain. That is why several Wall St analysts have called for the carriers to recognize this reality and to specialize in only providing the transport infrastructure. But turning themselves into razor-thin commodity transport providers will provide little incentive or business case to deploy fiber to the home.

As such it is my belief that, whether you are from the right or the left, we must find entirely new business models that will enable the financing of the next generation of last mile networks in an open, affordable way. An unfettered private sector is the best way to develop such new business models and services. Some exciting possibilities are starting to emerge, such as customer owned and controlled fiber (which has been extremely successful with schools, hospitals and businesses), Green Broadband, and in far off New Zealand initiatives like CityLink and Inspired Networks in Palmerston. I am sure there are other possibilities - but we need both research programs such as GENI and FIRE to develop new architectures, and clever business people not tied to the myopic, backward looking carrier world to explore these new business models. Some excerpts -- BSA]




http://www.forbes.com/2007/01/30/info-traffic-jams-oped-cx_pk_0131network.html?partner=yahootix

Commentary
Information Super Traffic Jam
Phil Kerpen 01.31.07, 6:00 AM ET

WASHINGTON, D.C. -