Thursday, January 31, 2008
Infinera Offers Bandwidth Virtualization
[It is exciting to see that the concepts of "virtualization" and "Infrastructure as a Service (IaaS)" are starting to take hold in the telecommunications world. Virtualization of computers and networks will also go a long way toward addressing the challenges of CO2 emissions by the ICT industry. Companies like Inocybe (www.inocybe.ca) have also extended and generalized the concept of network virtualization to all types of networks and services. Excerpts from www.convergedigest.com--BSA]
Infinera Offers Bandwidth Virtualization over Optical Transport Infinera is promoting the concept of "Bandwidth Virtualization" over optical transport networks based on its digital circuit technology as a means to meet the needs for a faster, more responsive Internet.
In a traditional optical network using Wavelength Division Multiplexing (WDM), each service that a service provider sells is typically linked to a specific wavelength, which is installed and turned up after a customer commits to purchase that service. Each service to be sold typically requires pre-planning, engineering activities and testing at time of installation, and there is often a significant delay between customer commitment to purchase and turn-up of the corresponding service. Service providers introducing new services, such as 40 Gbps and 100 Gbps services, must frequently re-engineer or overbuild their WDM networks to support the new services, creating long cycles between end-user service requests and service delivery, as well as inefficient network utilization, operational complexity, and the need for additional capital outlays.
Infinera said Bandwidth Virtualization overcomes these challenges and accelerates operators' speed of service provisioning by decoupling the service layer in the network from the underlying optical transmission layer.
Bandwidth Virtualization is enabled by an Infinera Digital Optical Network using high-capacity photonic integrated circuit technology on every route in the optical network, and integrates sub-wavelength digital switching with end-to-end software intelligence. This provides operators with a readily available pool of WDM bandwidth to meet immediate service requests, and allows new services to be deployed over the same infrastructure. The transmission layer can be configured to support any service simply by installing a service interface module at each of the two service endpoints and activating new end-to-end services using software rather than via hardware-based re-engineering of network resources.
Bandwidth Virtualization also yields significant operational benefits for service providers. By deploying hundreds of Gigabits of capacity at initial installation and being able to turn up additional services with digital plug-and-play ease, service providers can operate their network with smaller skilled engineering teams and at lower cost than on traditional WDM networks.
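To make the decoupling described above concrete, here is a minimal sketch of the idea in Python. The class and method names are hypothetical, not Infinera's actual management interface: transport capacity is deployed once as a shared pool, and individual services are then turned up in software by drawing on that pool rather than by engineering a new wavelength.

    # Illustrative model of bandwidth virtualization: services draw on a
    # pre-deployed pool of transport capacity instead of dedicated wavelengths.
    # All names here are hypothetical, not an actual Infinera API.

    class TransportPool:
        def __init__(self, total_gbps):
            self.total_gbps = total_gbps
            self.allocated_gbps = 0

        def available(self):
            return self.total_gbps - self.allocated_gbps

        def activate_service(self, name, rate_gbps):
            """Turn up a service in software if pooled capacity exists."""
            if rate_gbps > self.available():
                raise RuntimeError(f"{name}: insufficient pooled capacity; "
                                   "the transport layer must be expanded")
            self.allocated_gbps += rate_gbps
            return f"{name}: {rate_gbps} Gbps activated from the shared pool"

    # 400 Gbps deployed up front; services of any rate are turned up on demand.
    pool = TransportPool(total_gbps=400)
    print(pool.activate_service("enterprise-10G", 10))
    print(pool.activate_service("wholesale-40G", 40))
    print(f"Remaining pooled capacity: {pool.available()} Gbps")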
Sunday, January 20, 2008
Amazing SOA (Service Oriented Architecture) Demo
[For anybody who is doing SOA or Web 2.0 development I strongly encourage them to take a look at this web site. Lots of useful information and SOA, Web 2.0 and Mashups - and an amazing demo to show that it is all real. Thanks to Benoit Pirenne for this pointer. Some excerpts from the Adobe web site -- BSA]
http://www.adobe.com/devnet/livecycle/articles/soa_gis.html
Very good article on marrying GIS with SOA; see the demo of the bike
race at http://www.adobetourtracker.com/max/Simulator.html
The emergence of the Internet in the mid 1990s as a platform for data distribution and the advent of structured information have revolutionized our ability to deliver rich information to any corner of the world on demand. The move towards Service Oriented Architecture (SOA) infrastructure has been essential to the development of modern distributed systems. Service Oriented Architecture is an architectural paradigm and discipline used to build infrastructures that enable those with needs (consumers) and those with capabilities (providers) to interact via services though they are from disparate domains of ownership.
The evolution of software systems over the last two decades has resulted in a migration to a common set of design patterns that are often referred to as Web 2.0, or the maturing of the Internet. This migration is being enabled and accelerated by the move to Service Oriented Architecture, also known as SOA. This evolution path of software systems architecture is documented in [...]
Many developers building geospatial applications today embrace some of the core patterns of Web 2.0 such as the Mashup, Software as a Service, and Rich Internet Application patterns. (These patterns are documented in the O'Reilly book Web 2.0 Architecture Patterns, J. Governor, D. Hinchcliffe, D. Nickull – ISBN 0596514433.) All of these patterns rely on a fundamental change in the software model used to architect multi-tiered systems. This core change is the migration to SOA as an evolution of the old Client-Server model. The old Client-Server model was the cornerstone of the first iteration of the Internet (roughly 1994-2000). It was largely implemented by web servers and browsers with idempotent request-response message exchange patterns. In the subsequent evolution of the Internet (2002-2007), the model has changed and now SOA has become the de facto standard for application architects and developers. As shown in Figure 1-2, the "server" component of the Client-Server model has been replaced with a services tier which enables capabilities to be consumed via the Internet, using a standardized set of protocols and technologies, by client applications for the benefit of the end user.
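A deliberately tiny sketch of that consumer/provider split follows: the provider exposes a capability behind a standard protocol (plain HTTP here) and the consumer relies only on the service contract, not on the provider's internals. The /route endpoint and its JSON payload are invented for the example and are not part of the Adobe demo.

    # Minimal consumer/provider sketch of the services-tier pattern described above.
    # The /route service and its JSON payload are invented for illustration.
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RouteService(BaseHTTPRequestHandler):
        def do_GET(self):
            # The provider publishes a capability behind a standard protocol (HTTP).
            body = json.dumps({"rider": "42", "km_completed": 87.5}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the demo output quiet

    server = HTTPServer(("127.0.0.1", 8080), RouteService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The consumer knows only the service contract, not the provider's internals.
    with urllib.request.urlopen("http://127.0.0.1:8080/route") as resp:
        print(json.load(resp))

    server.shutdown()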
Google to Host Terabytes of Open-Source Science Data
[From a posting by Randy Burge on Dewayne Hendricks list --BSA]
Google to Host Terabytes of Open-Source Science Data
By Alexis Madrigal
January 18, 2008 | 2:23:21 PM
Categories: Dataset, Research
Sources at Google have disclosed that the humble domain,
http://research.google.com, will soon provide a home for terabytes of open-source scientific
datasets. The storage will be free to scientists and access to the
data will be free for all. The project, known as Palimpsest and first
previewed to the scientific community at the Science Foo camp at the
Googleplex last August, missed its original launch date this week, but
will debut soon.
Building on the company's acquisition of the data visualization
technology, Trendalyzer, from the oft-lauded, TED presenting Gapminder
team, Google will also be offering algorithms for the examination and
probing of the information. The new site will have YouTube-style
annotating and commenting features.
The storage would fill a major need for scientists who want to openly
share their data, and would allow citizen scientists access to an
unprecedented amount of data to explore. For example, two planned
datasets are all 120 terabytes of Hubble Space Telescope data and the
images from the Archimedes Palimpsest, the 10th century manuscript
that inspired the Google dataset storage project.
UPDATE (12:01pm): Attila Csordas of Pimm has a lot more details on the
project, including a set of slides that Jon Trowbridge of Google gave
at a presentation in Paris last year. WIRED's own Thomas Goetz also
mentioned the project in his fantastic piece on freeing dark data.
One major issue with science's huge datasets is how to get them to
Google. In this post by a SciFoo attendee over at
business|bytes|genes|molecules, the collection plan was described:
(Google people) are providing a 3TB drive array (Linux RAID5). The
array is provided in a “suitcase” and shipped to anyone who wants to
send their data to Google. Anyone interested gives Google the file
tree, and they SLURP the data off the drive. I believe they can extend
this to a larger array (my memory says 20TB).
You can check out more details on why hard drives are the preferred
distribution method at Pimm. And we hear that Google is hunting for
cool datasets, so if you have one, it might pay to get in touch with
them.
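As a footnote on the "file tree" that gets handed over with the drive: a minimal sketch of how a lab might build such a manifest (paths, sizes, checksums) before shipping appears below. The manifest format and the /mnt/shipping_drive path are assumptions for illustration, not Google's actual ingestion spec.

    # Build a simple manifest (path, size, checksum) for a directory tree
    # before shipping a drive. The manifest format is hypothetical.
    import csv
    import hashlib
    from pathlib import Path

    def sha256_of(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def write_manifest(root, out_csv="manifest.csv"):
        root = Path(root)
        with open(out_csv, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["relative_path", "bytes", "sha256"])
            for p in sorted(root.rglob("*")):
                if p.is_file():
                    writer.writerow([p.relative_to(root), p.stat().st_size,
                                     sha256_of(p)])

    if __name__ == "__main__":
        write_manifest("/mnt/shipping_drive")  # example mount point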
The Economist on Broadband - Open up those highways
[From the prestigious Economist. I couldn't agree more. Thanks to Dave Macneil for this pointer -- BSA]
Broadband
Open up those highways
Jan 17th 2008 | TOKYO
From The Economist print edition
Rapid internet services are a boon. But not all regulators understand them
IN ERAS past, economic success depended on creating networks that could shift people, merchandise and electric power as efficiently and as widely as possible. Today's equivalent is broadband: the high-speed internet service that has become as vital a tool for producers and distributors of goods as it is for people plugging into all the social and cultural opportunities offered by the web.
Easy access to cheap, fast internet services has become a facilitator of economic growth and a measure of economic performance. No wonder, then, that statistics show a surge in broadband use, especially in places that are already prosperous. The OECD, a rich-country club, says the number of subscribers in its 30 members was 221m last June-a 24% leap over a year earlier. But it is not always the most powerful economies that are most wired. In Denmark, the Netherlands and Switzerland, over 30% of inhabitants have broadband. In America, by contrast, the proportion is 22%, only slightly above the OECD average of just under 20%.
In terms of speed, Japan leads the world. Its average advertised download speed is 95 megabits per second. France and Korea are ranked second and third, but are less than half as fast, and the median among OECD countries is not much more than a tenth. America's average speed is supposed to be a bit above the median, but most users find that it isn't, or that the faster speeds are vastly more expensive. A New Yorker who wants the same quality of broadband as a Parisian has to pay around $150 more per month.
What accounts for the differences among rich countries? Two or three years ago demography was often cited: small, densely populated countries were easier to wire up than big, sparsely inhabited ones. But the leaders in broadband usage include Canada, where a tiny population is spread over a vast area. The best explanation, in fact, is that broadband thrives on a mix of competition and active regulation, to ensure an open contest.
A lack of competition-boosting oversight is one reason for the poor record of the United States (and indeed for New Zealand, another unexpected laggard). Most Americans have a choice of only two broadband providers, either a telecoms or a cable operator. This virtual duopoly suits both sorts of provider, and neither has raced to offer its customers faster access. In some American states, prices have risen; in most other countries they have dropped.
In theory, America's 1996 Telecoms Act obliged operators to rent out their lines to rivals; in practice, a regulatory decision and then a court ruling (in 2003 and 2004 respectively) have made it easy for operators to keep competitors out. The supposed aim of these decisions was to force new firms to build their own infrastructure, instead of piggybacking on facilities set up by older outfits. But new entrants have found it hard to join the fray.
In any event, those American rulings may have been based on a faulty idea of how competition works in this area. As Taylor Reynolds, an OECD analyst, puts it, innovation usually comes in steps: newcomers first rent space on an existing network, to build up customers and income. Then they create new and better infrastructure, as and when they need it.
In France, for example, the regulator forced France Télécom to rent out its lines. One small start-up firm benefited from this opportunity and then installed technology that was much faster than any of its rivals'. It won so many customers that other operators had to follow suit. In Canada, too, the regulator mandated line-sharing, and provinces subsidised trunk lines from which smaller operators could lease capacity to provide service.
In South Korea, where half the population lives in flats, each block owns its own internal cabling and allows rival operators to put their equipment in the basement; each tenant then chooses which to use. In Japan, politicians put pressure on the dominant operator, NTT, to connect people's homes by high-speed fibre lines. And this week the communications ministry indicated that it will make NTT open those fibre connections to rivals.
As broadband grows more popular, the political mood may change in many countries. At present, consumers are often misled by the speeds that operators promise to deliver. Soon regulators can expect to face pressure to ensure truth in advertising, as well as to promote easier access.
Pressure will also come to correct another problem: most operators cap the amount of traffic users may send and receive each month, and nearly all provide far less speed for sending than for receiving. In other words, broadband doesn't really offer a two-way street. This will matter more as users turn into creators of content, from videos to blogs, and ask to be treated with due respect.
Friday, January 18, 2008
Democratization of Hollywood- film making for the masses
[Not only does Hollywood have to worry about the challenges of distributing movies over the Internet; a new breed of developers is also using open source tools to develop full-length feature movies, with a cast of thousands, for a fraction of the price of traditional processes. The posting from Slashdot showing how four young film makers recreated the battle of Omaha Beach is amazing. In a very short while I can see the Internet being dominated by professional-looking, blockbuster movies and TV shows made by individuals or very small teams of amateurs. Some excerpts from Slashdot. Thanks to Rollie Cole for the pointer on Animation for the Masses--BSA]
Filming an Invasion without extras http://slashdot.org/article.pl?sid=08/01/14/172228
"Kevin Kelly has an interesting blog post on how a World War II D-Day invasion was staged in a few days with four guys and a video camera using batches of smaller crowds replicated computationally to produce very convincing non-repeating huge crowds. Filmmakers first used computer generated crowds about ten years ago and the technique became well known in the Lord of the Rings trilogy but now crowds can be generated from no crowds at all — just a couple of people. 'What's new is that the new camera/apps are steadily becoming like a word processor — both pros and amateurs use the same one,' says Kelly. 'The same gear needed to make a good film is today generally available to amateurs — which was not so even a decade ago. Film making gear is approaching a convergence between professional and amateur, so that what counts in artistry and inventiveness.'"
Animation for the Masses
Adobe is developing software to let home users create movie-quality 3-D graphics. http://www.technologyreview.com/Infotech/19344/
Computer-generated effects are becoming increasingly realistic on the big screen, but these animations generally take hours to render. Now, Adobe Systems, the company famous for tools like Photoshop and Acrobat Reader, is developing software that could bring the power of a Hollywood animation studio to the average computer and let users render high-quality graphics in real time. Such software could be useful for displaying ever-more-realistic computer games on PCs and for allowing the average computer user to design complex and lifelike animations.
Open Source for the Big Screen http://linux.slashdot.org/article.pl?sid=08/01/15/2011232
http://en.wikipedia.org/wiki/Elephants_Dream
Elephants Dream is a computer-generated short film that was produced almost completely using open source software, except for the modular sound studio Reaktor and the cluster that rendered the final production, which ran Mac OS X. It premiered on March 24, 2006, after about 8 months of work. The project was jointly funded by the Blender Foundation and the Netherlands Media Art Institute. The film's purpose is primarily to field test, develop and showcase the capabilities of open source software, demonstrating what can be done with such tools in the field of organizing and producing quality content for films.
The film's content was released under the Creative Commons Attribution license, so that viewers may learn from it and use it however they please (provided attribution is given). The DVD set includes NTSC and PAL versions of the film on separate discs, a high-definition video version as a computer file, and all the production files.
The film was released for download directly and via BitTorrent on the Official Orange Project website on May 18, 2006, along with all production files.
Did Apple just kill Cable TV?
[As I mention in my Internet Evolution posting, the Internet is like a slow-moving glacier, inexorably and relentlessly crushing the carrier walled gardens. Innovation at the edge continues unabated. The latest Apple announcements are just further examples of this inevitability. The smart carriers will realize that they will no longer be the customer-facing organization. That role will belong to Google, Apple, Facebook and a host of other entrepreneurial companies. From a posting on Dewayne Hendricks list--BSA]
Carrier walls are a tumblin' down* http://www.internetevolution.com/author.asp?section_id=506&doc_id=142971
*with apologies to John Cougar Mellencamp
http://www.pjentrepreneur.com/category/consumer-internet/
Last night, as I watched Steve Jobs announce movie rentals on iTunes and re-launch the Apple TV, it dawned on me that Apple has just driven a stake into the heart of the cable TV industry. The speed of cable TV’s demise will depend on how fast Apple can get films and TV shows from all over the world on iTunes.
[...]
Using the Apple TV box hooked up to your flat screen TV monitor, you can watch any content from movies to TV shows to YouTube videos, Flickr photos, video podcasts, your own video clips, anything you want.
So why should anyone continue to pay money every month to a cable company (and rent a set top box) to watch the same movies and TV shows that are on iTunes? Cable does not give you access to YouTube, video podcasts and other content on the Internet. You can’t watch your cable company’s offerings on your iPod or laptop while you are in an airplane.
[..]
What gets me really excited is that iTunes could be the repository of films and TV shows that we never see on cable, in the cinema, or in our video rental stores: older films, movies made by independent film makers in different countries, TV shows in other parts of the world, and documentaries. Just look at the video and audio podcast offerings on iTunes. They even have iTunes University where you can view physics and English literature lectures given in top universities in the US.
When I watched Steve Jobs give a demo on how easy it is to rent and download a film, I’d say people-friendly video on demand is here.
Monday, January 14, 2008
Some excellent comments by Dan Reed on academic high performance computing
[It is astounding how much money granting councils continue to spend funding the purchase of stand-alone computers and clusters by university researchers. Much of this computation could be more easily done on virtual machines or clouds, as Dan Reed points out. Not only will this save significant dollars, but it will also be a first step in addressing the challenges of global warming caused by the power and cooling required for these machines. As well, universities can save money by avoiding the significant costs of the physical infrastructure needed to host these facilities. Some excerpts from Dan Reed's commentary in HPCwire--BSA]
www.HPCwire.com
[...]
Outsourcing: Perhaps It Is Time?
In late November, I briefed the NSF OCI advisory committee on the
PCAST report. The ensuing discussion centered on the rising academic
cost of operating research computing infrastructure. The combination
of rising power densities in racks and declining costs for blades
means computing and storage clusters are multiplying across campuses
at a stunning rate. Consequently, every academic CIO and chief
research officer (CRO) I know is scrambling to coordinate and
consolidate server closets and machine rooms for reasons of
efficiency, security and simple economics.
This prompted an extended discussion with the OCI advisory committee
about possible solutions, including outsourcing research
infrastructure and data management to industrial partners. Lest this
seem like a heretical notion, remember that some universities have
already outsourced email, the lifeblood of any knowledge-driven
organization. To be sure, there are serious privacy and security
issues, as well as provisioning, quality of service and pricing
considerations. However, I believe the idea deserves exploration.
Computing Clouds
All of this is part of the still ill-formed and evolving notion of
cloud computing, where massive datacenters host storage farms and
computing resources, with access via standard web APIs. In a very real
sense, this is the second coming of Grids, but backed by more robust
software and hardware of enormously larger scale. IBM, Google, Yahoo,
Amazon and my new employer -- Microsoft -- are shaping this space,
collectively investing more in infrastructure for Web services than we
in the computational science community spend on HPC facilities.
I view this as the research computing equivalent of the fabless
semiconductor firm, which focuses on design innovation and outsources
chip fabrication to silicon foundries. This lets each group -- the
designers and the foundry operators -- do what they do best and at the
appropriate scale. Most of us operate HPC facilities out of necessity,
not out of desire. They are, after all, the enablers of discovery, not
the goal. (I do love big iron dearly, though, just like many of you.)
In the facility-less research computing model, researchers focus on
the higher levels of the software stack -- applications and
innovation, not low-level infrastructure. Administrators, in turn,
procure services from the providers based on capabilities and pricing.
Finally, the providers deliver economies of scale and capabilities
driven by a large market base.
This is not a one size fits all solution, and change always brings
upsets. Remember, though, that there was a time (not long ago) when
deploying commodity clusters for national production use was
controversial. They were once viewed as too risky; now they are the
norm. Technologies change, and we adapt accordingly.
[...]
There's Grid in them thar clouds
[Ian Foster has posted an excellent commentary in his blog on the relationship between grids and clouds. As well, Grid Today reported that on Dec. 6 there was a session on "Utility Computing, Grids, and Virtualization" at a meeting of the Network Centric Operations Industry Consortium in St. Petersburg, Fla. There were some excellent presentations on the relationship between clouds, grids, virtualization, SOA, etc.--BSA]
http://www.gridtoday.com/grid/2008160.html
The session was co-hosted by the Open Grid Forum. The proceedings of the session are available at http://colab.cim3.net/cgi-bin/wiki.pl?ServerVirtualization.
Ian Foster commentary http://ianfoster.typepad.com/blog/2008/01/theres-grid-in.html
....
the problems are mostly the same in cloud and grid. There is a common need to be able to manage large facilities; to define methods by which consumers discover, request, and use resources provided by the central facilities; and to implement the often highly parallel computations that execute on those resources. Details differ, but the two communities are struggling with many of the same issues.
Unfortunately, at least to date, the methods used to achieve these goals in today’s commercial clouds have not been open and general purpose, but instead been mostly proprietary and specialized for the specific internal uses (e.g., large-scale data analysis) of the companies that developed them. The idea that we might want to enable interoperability between providers (as in the electric power grid) has not yet surfaced. Grid technologies and protocols speak precisely to these issues, and should be considered. ... In building this distributed “cloud” or “grid” (“groud”?), we will need to support on-demand provisioning and configuration of integrated “virtual systems” providing the precise capabilities needed by an end-user. We will need to define protocols that allow users and service providers to discover and hand off demands to other providers, to monitor and manage their reservations, and arrange payment. We will need tools for managing both the underlying resources and the resulting distributed computations. We will need the centralized scale of today’s cloud utilities, and the distribution and interoperability of today’s grid facilities.
Some of the required protocols and tools will come from the smart people at Amazon and Google. Others will come from the smart people working on grid. Others will come from those creating whatever we call this stuff after grid and cloud. It will be interesting to see to what extent these different communities manage to find common cause, or instead proceed along parallel paths.
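To make the call for on-demand provisioning of integrated "virtual systems" a bit more concrete, here is a hedged sketch of what a provider-neutral provisioning request might carry. The field names are invented for illustration; they are not an OGF or vendor standard.

    # Hypothetical provider-neutral provisioning request for a "virtual system",
    # of the kind the grid and cloud communities are both described as needing.
    # All field names are invented for illustration only.
    import json
    import uuid
    from datetime import datetime, timedelta, timezone

    def build_request(cpu_cores, memory_gb, storage_tb, hours, budget_usd):
        start = datetime.now(timezone.utc)
        return {
            "request_id": str(uuid.uuid4()),
            "resources": {"cpu_cores": cpu_cores,
                          "memory_gb": memory_gb,
                          "storage_tb": storage_tb},
            "reservation": {"start": start.isoformat(),
                            "end": (start + timedelta(hours=hours)).isoformat()},
            "payment": {"max_usd": budget_usd},
            # A provider that cannot satisfy the request could hand it off to a
            # peer, mirroring interconnection in the electric power grid.
            "allow_handoff": True,
        }

    print(json.dumps(build_request(256, 1024, 20, hours=48, budget_usd=500), indent=2))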
Wednesday, January 9, 2008
Excellent paper on Network Neutrality by Andrew Odlyzko
[Another excellent researched and well thought out paper on the subject of Network Neutrality by Andrew Odlyzko--BSA]
"Network neutrality, search neutrality, and the never-ending conflict between efficiency and fairness in markets,"
http://www.dtc.umn.edu/~odlyzko/doc/net.neutrality.pdf
"providing morefunding for current operators is likely to be wasteful, in that it would either be pocketed as extra profit, or spent in wasteful ways. The one thing that has been well documented (...) is that established service providers are terrible at innovation in services. Their core expertise is in widespread delivery of basic connectivity, and they, and
their suppliers, have done well in innovating there, introducing DSL, cable modems, wire-less transmission technologies, DWDM, and so on. But they have
failed utterly in end-user services.
(...)
In conclusion, the basic conclusion is that for pervasive infrastructure services that are crucial for the functioning of society, rules about allowable degrees of discrimination are needed, and those rules will often have to be set by governments. For telecommunications, given current trends in demand and in rate and sources of innovation, it appears to be better for society not to tilt towards the operators, and instead to stimulate innovation on the network by others by enforcing net neutrality.
But this would likely open the way for other players, such as Google, that emerge from that open and competitive arena as big winners, to become choke points. So it would be wise to prepare to monitor what happens, and be ready to intervene by imposing neutrality rules on them when necessary.
"Network neutrality, search neutrality, and the never-ending conflict between efficiency and fairness in markets,"
http://www.dtc.umn.edu/~odlyzko/doc/net.neutrality.pdf
"providing morefunding for current operators is likely to be wasteful, in that it would either be pocketed as extra profit, or spent in wasteful ways. The one thing that has been well documented (...) is that established service providers are terrible at innovation in services. Their core expertise is in widespread delivery of basic connectivity, and they, and
their suppliers, have done well in innovating there, introducing DSL, cable modems, wire-less transmission technologies, DWDM, and so on. But they have
failed utterly in end-user services.
(...)
In conclusion, the basic conclusion is that for pervasive infrastructure services that are crucial for the functioning of society, rules about allowable degrees of discrimination are needed, and those rules will often have to be set by governments. For telecommunications, given current trends in demand and in rate and sources of innovation, it appears to be better for society not to tilt towards the operators, and instead to stimulate innovation on the network by others by enforcing net neutrality.
But this would likely open the way for other players, such as Google, that emerge from that open and competitive arena as big winners, to become choke points. So it would be wise to prepare to monitor what happens, and be ready to intervene by imposing neutrality rules on them when necessary.
Tuesday, January 8, 2008
The IT department is dead - Nicholas Carr
[Another provocative book by Nicholas Carr. In general I agree with him, except that I think his vision is too narrowly limited to utility computing. Web 2.0, SOA and IaaS (Infrastructure as a web service) will be additional services offered by clouds operated by Google, Amazon and so forth. And as I have often stated in this blog, I believe elimination or reduction of CO2 emissions may also be the critical enabler, as opposed to the putative business-case benefits of virtualization and web services. The classic example is cyber-infrastructure applications, where uptake by university researchers has been disappointing (and is one of the themes at the upcoming MardiGras conference on distributed applications).
http://www.mardigrasconference.org/
Carr uses the analogy of how companies moved from owning and operating their own generating equipment to purchasing power off the grid. This was only possible once a robust and wide-scale transmission grid was deployed by utilities. However, this is the component that is missing in Carr's vision with respect to utility computing and virtualization - we do not yet have the communications networks, especially in the last mile, to realize this vision.
And finally, while the traditional IT techie may disappear, I can see a new class of skills being required of "orchestration engineers" who will help users and researchers build solutions, APNs and workflows from the variety of virtual services available from companies like Google, Amazon, IBM, etc. Some excerpts from the Network World article
-- BSA]
http://www.networkworld.com/news/2008/010708-carr-it-dead.html?t51hb
The IT department is dead, and it is a shift to utility computing that will kill this corporate career path. So predicts Nicholas Carr in his new book, The Big Switch: Rewiring the World from Edison to Google.
Carr is best known for a provocative Harvard Business Review article entitled "Does IT Matter?" Published in 2003, the article asserted that IT investments didn't provide companies with strategic advantages because when one company adopted a new technology, its competitors did the same.
With his new book, Carr is likely to engender even more wrath among CIOs and other IT pros.
"In the long run, the IT department is unlikely to survive, at least not in its familiar form," Carr writes. "It will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud. Business units and even individual employees will be able to control the processing of information directly, without the need for legions of technical people."
Carr's rationale is that utility computing companies will replace corporate IT departments much as electric utilities replaced company-run power plants in the early 1900s.
Carr explains that factory owners originally operated their own power plants. But as electric utilities became more reliable and offered better economies of scale, companies stopped running their own electric generators and instead outsourced that critical function to electric utilities.
Carr predicts that the same shift will happen with utility computing.
Carr cites several drivers for the move to utility computing. One is that computers, storage systems, networking gear and most widely used applications have become commodities.
He says even IT professionals are indistinguishable from one company to the next. "Most perform routine maintenance chores — exactly the same tasks that their counterparts in other companies carry out," he says.
Carr points out that most data centers have excess capacity, with utilization ranging from 25% to 50%. Another driver to utility computing is the huge amount of electricity consumed by data centers, which can use 100 times more energy than other commercial office buildings.
"The replication of tens of thousands of independent data centers, all using similar hardware, running similar software, and employing similar kinds of workers, has imposed severe economic penalties on the economy," he writes. "It has led to the overbuilding of IT assets in every sector of the economy, dampening the productivity gains that can spring from computer automation."
Carr embraces Google as the leader in utility computing. He says Google runs the largest and most sophisticated data centers on the planet, and is using them to provide services such as Google Apps that compete directly with traditional client/server software from vendors such as Microsoft.
"If companies can rely on central stations like Google's to fulfill all or most of their computing requirements, they'll be able to slash the money they spend on their own hardware and software — and all the dollars saved are ones that would have gone into the coffers of Microsoft and the other tech giants," Carr says.
[..]
Monday, January 7, 2008
Nintendo Wii to promote fiber deployment in Japan
[There is speculation that the new Nintendo WiiWare service titles will be potentially huge, which will either require an external hard drive for the Wii or mean that Nintendo expects users to download individual games every time they want to play them. Thanks to Lee Doerksen for this pointer -- BSA]
http://www.joystiq.com/2007/11/29/nintendo-promotes-fiber-optic-internet-in-japan/
Nintendo is anticipating that its WiiWare service is going to be huge. So huge, in fact, that they're doing their best to make sure Japanese gamers have the internet muscle to handle it. Reuters reports that Nintendo is partnering with broadband provider Nippon Telegraph and Telephone Corp, in order to promote and increase the use of high-speed, fiber-optic internet connections in the country.
Video game maker Nintendo Co Ltd and telecoms operator NTT's regional units said they would cooperate in Japan to promote the broadband Internet access of Nintendo's hot-selling Wii game console.
Driving Wii's Web connection is important for Nintendo since the creator of game characters Mario and Zelda plans to launch a new service next March called WiiWare, which allows users of the Wii to purchase new game software titles via the Internet.
For Nippon Telegraph and Telephone Corp, which aims to boost subscribers to its fiber-optic Web connection service as a new long-term source of growth, a growing user base for the Wii represents an attractive pool of potential customers.
[..]
More on massive new investment required for Internet
[Scott Bradner makes some important additional comments on the Nemertes Research report in his regular column in Network World-- BSA]
Internet overload: painting tomorrow something like today
Nemertes Research report on a future clogged Internet may not tell whole story
http://www.networkworld.com/columnists/2007/113007bradner.html
[...]
The report does not pull a Metcalfe and predict an Internet collapse. It does, however, say that the Internet broadband-access networks will not keep up with future demand and, thus, users will be slowed. It does not mention that many broadband Internet subscribers are seeing slowdowns today because of the low speed and oversubscribing of current access networks.
My second-biggest problem with the report is that it fails to take into account the wide differences in Internet-access speed and cost across the world. The average download speed in the United States is less than 2Mbps compared with more than 60Mbps in Japan. The report fails to point out that the United States’ definition of broadband is one of the slowest and most expensive to be found in the major industrialized countries in terms of dollars per Mbps -- more than ten times as expensive as in Japan and even more expensive than in Portugal. It calls for spending a lot more money on access infrastructure over the next few years, but does not hint at how it might be paid for. Already U.S. broadband Internet service is too expensive for a lot of people, and the Nemertes report does not factor that into its projected growth in users and demand.
My biggest problem with the report, however, is that its authors seem to think that the only possible Internet-access future comes from the traditional telecommunications carriers. It ignores (or at least I could not find any mention of these) non-carrier solutions, such as muni or neighborhood Wi-Fi, and mentions Google’s potential entry into the wireless-access business only in passing.
[...]
Massive new investment required for Internet to maintain today's performance
[If this researcher's analysis is correct, the biggest challenge will be finding the $137 billion required to upgrade the access networks. I very much doubt that governments at any level are prepared to underwrite such an investment. Telcos and cablecos face similar challenges in making such an investment, given the market uncertainty created by "over the top" application and service providers (with or without network neutrality). Hence one alternative approach may be http://green-broadband.blogspot.com. From a posting on Dewayne Hendricks' list -- BSA]
[Note: This item comes from reader Matt Oristano. DLH]
Internet not growing fast enough, researchers say
By Shamus McGillicuddy, News Writer
12 Dec 2007 | SearchSMB.com
According to new research, demand for Internet usage will start to outpace the capacity of the Internet's access points. This potential crunch could spell trouble for CIOs.
The Nemertes Research Group Inc. in Mokena, Ill., examined revenues and expenditures of Internet infrastructure companies and the changing pace of demand for bandwidth to determine this. Nemertes president Johna Till Johnson said her firm isn't predicting a cataclysmic failure of the Internet. Instead, the quality of service will start to drag for businesses, stifling innovation and slowing the speed of transactions and communications.
Johnson explained that the pace of investment in the infrastructure that connects individuals and businesses to the Internet is falling behind demand. Nemertes projects that by 2010, for instance, the average amount of time it takes to download a YouTube video will jump from 10 seconds to as much as two minutes.
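As a hypothetical illustration (not taken from the report) of what that projection implies: for a fixed-size clip, download time scales inversely with effective access throughput, so stretching the same video from 10 seconds to two minutes corresponds to roughly a twelvefold drop in usable edge bandwidth. The clip size below is an assumption.

# Hypothetical illustration of the projection above. The clip size is
# assumed; only the 10-second and two-minute figures come from the article.

clip_size_megabits = 80.0      # assumed ~10 MB short YouTube clip

time_today_s = 10.0
time_projected_s = 120.0

throughput_today = clip_size_megabits / time_today_s          # ~8 Mbps effective
throughput_projected = clip_size_megabits / time_projected_s  # ~0.67 Mbps effective

print(f"effective throughput today:     {throughput_today:.2f} Mbps")
print(f"effective throughput projected: {throughput_projected:.2f} Mbps")
print(f"implied degradation: {throughput_today / throughput_projected:.0f}x")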
"It's that last mile, the pipes at the edges. The access circuit," she
said. "That would be cable for consumers, DSL. For businesses they are
telephony circuits, and increasingly they're wireless. Wireless is one
of the key limiting factors. The technologies for wireless broadband
access exist, but they are not deployed widely."
Johnson likened the problem to a series of broad interstate highways.
Traffic is flowing smoothly on the multilane highways, but the narrow
access ramps that lead to those highways are clogged with cars waiting
to get on.
Nemertes estimates that it will take as much as $137 billion in global
infrastructure investment in the next three to five years to prevent
significant service declines, including $42 billion to $45 billion in
North America alone. Nemertes research shows that service providers
plan to spend just 60% to 70% of that total required global investment.
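A quick back-of-the-envelope check of the funding gap implied by those figures; only the $137 billion requirement and the 60% to 70% planned-spend range come from the article, and the shortfall range is my own arithmetic.

# Funding gap implied by the Nemertes figures quoted above.

required_global_b = 137.0                          # USD billions, next 3-5 years
planned_fraction_low, planned_fraction_high = 0.60, 0.70

gap_high = required_global_b * (1 - planned_fraction_low)    # ~$55B shortfall
gap_low = required_global_b * (1 - planned_fraction_high)    # ~$41B shortfall

print(f"planned spend: ${required_global_b * planned_fraction_low:.0f}B "
      f"to ${required_global_b * planned_fraction_high:.0f}B")
print(f"implied global shortfall: ${gap_low:.0f}B to ${gap_high:.0f}B")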
This could spell trouble for companies that conduct critical communication and business transactions and processes via the Internet. Slower or interrupted service means lost productivity.
Midmarket CIOs should pay attention
Johnson said large enterprises should do all right in this environment because they can afford their own direct access circuit to the Internet if the need arises. But smaller enterprises and midmarket companies will be vulnerable.
"If you're one of those lucky people who can get a big fat pipe into the Internet cloud, none of this matters," she said. "Where this gets fuzzy is the trend we've been seeing the past five years in our research, where enterprises are getting more and more distributed with branch offices. Some of these branches are small facilities and their staff is highly distributed. These are not places where you'll be getting fat pipes. And if you're moving toward telecommuting you're relying on the lowest common denominator of access to your employees. That's a bad idea. You've got a telecommuter waiting hours for an accounting dump of sales figures. They're not getting access to the data they need in a timely fashion."
Johnson said midmarket CIOs should pay close attention to their connectivity in the coming years. And they should advocate for infrastructure investment.
Johnson said government policymakers may have to make a choice to ensure that capacity doesn't become an issue. She said some believe the Internet is as primal a right as access to basic utilities, and the government should guarantee access for everyone. Others believe the free market should determine how investment is made in the infrastructure. Either way, Johnson said some sort of comprehensive policy is needed to encourage more investment in the infrastructure.
"We've been told we're fear-mongering," Johnson said. "No one wants to look at this problem. I don't know what the answer is. I don't know what the right policy is. They need to come up with that."
Bruce Mehlman, co-chairman of the Washington, D.C.-based Internet Innovation Alliance, said he agrees with most of Nemertes' findings, "but we're a bit more optimistic."
Mehlman said he thinks investors and innovators will rise to the challenge that Nemertes has uncovered and make sure that bandwidth will meet demand.
"As I see what [Nemertes] is saying, the usage curve is steeper than what most people appreciate. To maximize the value of the Internet in 2010, the capacity curve has to be made steeper than is currently being made," Mehlman said. "That can be done through investment and innovation. This is akin to the Y2K scenario, where you could have an issue, but prudent planning and investment mean it goes off with a whimper. I see this as a call to investment, to help network providers and policymakers recognize the magnitude of what needs to get done."
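Mehlman's "usage curve versus capacity curve" point can be sketched with a toy compound-growth model. Every starting value and growth rate below is hypothetical, chosen only to show how a steeper demand curve eventually crosses a flatter capacity curve.

# Toy model of the usage-vs-capacity argument: if demand grows faster
# than access capacity, demand eventually overtakes capacity. All
# starting values and growth rates are HYPOTHETICAL, for illustration.

capacity = 100.0          # arbitrary units of edge capacity in year 0
demand = 60.0             # arbitrary units of offered load in year 0
capacity_growth = 0.20    # assumed 20% per year
demand_growth = 0.50      # assumed 50% per year

for year in range(1, 8):
    capacity *= 1 + capacity_growth
    demand *= 1 + demand_growth
    status = "OK" if demand <= capacity else "congested"
    print(f"year {year}: capacity {capacity:6.1f}, demand {demand:6.1f} -> {status}")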