Wednesday, June 27, 2007

Impact of Telecom and Internet on economic growth

[Some excerpts from Business Week article--BSA]

Telecom: Back From The Dead

All those YouTube videos and MySpace pages zipping back and forth on the Net have revived the telecom industry—and charged up the economy

In those taken-for-granted wires, cables, and computers lies a remarkable tale of resurrection. Seven years ago the communications business, made up of companies providing everything from phones to computer networks to routers and switches, was laid low by the worst collapse to hit a U.S. industry since the Great Depression.

Over the past year, however, the telecom industry has roared back to life. Credit a steady rise in appetite for broadband Internet connections, which enable easy consumption of watch-my-cat video clips, iPod music files, and such Web-inspired services as free Internet phoning. Indeed, this year broadband adoption among U.S. adults is expected to cross the important threshold of 50%.

About half of the Internet's transmission capacity was going unused in 2002. Today that pipeline has almost doubled in size, and yet the unused portion is down to about 30%. As a result, the price that companies pay for bandwidth in some parts of the U.S. is on the rise after six years of declines.
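A back-of-the-envelope sketch shows what those utilization figures imply for traffic actually carried (2002 capacity is normalized to 1; the percentages are the article's round numbers):

```python
# Back-of-the-envelope check of the article's utilization figures.
# Normalize 2002 transmission capacity to 1.0 unit.

capacity_2002 = 1.0
used_2002 = capacity_2002 * 0.50      # "about half ... going unused"

capacity_now = capacity_2002 * 2.0    # "almost doubled in size"
used_now = capacity_now * 0.70        # unused portion down to ~30%

growth_in_traffic = used_now / used_2002
print(f"Traffic carried grew roughly {growth_in_traffic:.1f}x")  # ~2.8x
```

In other words, traffic carried nearly tripled even as total capacity doubled, which is consistent with bandwidth prices firming.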

But telecom's revival has implications way beyond Wall Street. A dollar spent on telecom infrastructure produces an outsize impact on the U.S. economy as a whole. Indeed, a growing body of research has found that telecom investment plays a vital role in stimulating economic growth and productivity--more so than money spent on roads, electricity, or even education. Communication assets generate massive benefits by slashing the cost of doing business across the economy.

A 2001 paper in the American Economic Review, written by Lars-Hendrik Röller of Berlin's Social Science Research Center and Leonard Waverman of the London Business School, concluded that the spread of land-based telecommunications networks in 21 developed nations accounted for one-third of the increase in economic output between 1970 and 1990. Other studies suggest fiber-optic and wireless networks provide their own special jolt to the economies of rich and poor nations alike. "Out of the ashes of the tech crisis we got a world-class, spanking-new communications network," says Mark Zandi, chief economist for Moody's Corp. (MCO). "That has been key to outsized productivity gains ever since."

The $900 billion industry looks far different than it did in 2000. The balance of power has shifted toward Web upstarts such as YouTube and MySpace that barely registered seven years ago. The Bell phone companies, meanwhile, have consolidated and are furiously developing services they hope will let them capitalize on the billions they're investing to build speedy new networks.

It's not clear, though, how much of the value flowing from those networks will be captured by the Bell companies themselves. The big phone companies don't have a history of developing game-changing technologies in a competitive arena. "They've got a high hill to climb," says William E. Kennard, a former Federal Communications Commission chairman who is now managing director of Carlyle Group, a large private equity firm that has purchased some telecom assets.

Online video barely existed in 2000. Today, fully one-third of all Internet traffic comes from Web videos, The Landlord included. Thanks to bandwidth-hungry services such as YouTube, global Internet traffic from 2003 to 2006 grew at a compounded annual rate of 75%, according to TeleGeography. "When you compound those numbers, I don't care how much inventory you have, it's going to disappear off the shelf," says Level 3 CEO James Crowe.
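TeleGeography's growth figure compounds quickly; a minimal sketch, treating 2003 to 2006 as three compounding intervals:

```python
# Cumulative effect of 75% compound annual growth over 2003-2006.
rate = 0.75
years = 3  # 2003 -> 2006, three compounding intervals
multiplier = (1 + rate) ** years
print(f"Traffic multiplied by about {multiplier:.1f}x over the period")  # ~5.4x
```

At that pace traffic multiplies more than fivefold in three years, which is Crowe's point about inventory disappearing off the shelf.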

If the old telecom world was dominated by bloated regional monopolies, the new world is a competitive mosh pit stocked with sinewy players. That's reflected in how much more productive the industry has become. While telecom revenues are now 19% higher than they were in 2000, that money supports just 1.1 million workers, down nearly 30% from boom-era levels. "It has gotten unrelentingly competitive in every area: broadband, land line, and wireless," says AT&T's new CEO, Randall Stephenson.

For the big carriers such as AT&T, Verizon, and Qwest, the main challenge is to slow defections of traditional land-line customers while producing faster revenue growth in new markets such as wireless, Internet service, pay TV, and advertising. The carriers must overcome their reputation for being "dumb pipes" and prove they can fill their networks with innovative bundles of products and services that strike a chord with customers--all while battling cable operators, which are poaching millions of phone customers, and fending off or making peace with aggressive new entrants such as Google and Apple (AAPL).

Never Email Anyone Over 30

From Andrew McAfee's blog


A while back I wrote a post speculating about the collaboration technologies today’s college students will expect to use when they enter the workforce. I guessed that today’s collegians will want to continue their use of social networking tools on the job—that they won’t consider these tools to be only suitable for ‘play time,’ but rather as important (integral?) parts of their day. More recently, I wrote a couple posts about Facebook, the social networking site that’s become wildly popular on many college campuses and is now penetrating the rest of society.

Frank Gilbane recently used Facebook itself to gather data about young people’s expectations for collaborationware. He made use of Facebook’s polling feature, which lets a member ask a single question, then specify the desired number of respondents and their demographics (gender, age, location, etc.). Facebook advertises the poll only to members who match these demographics, then summarizes responses as they come in and presents them to the asker. It looks like a nifty feature, and I might well use it myself.
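The polling mechanics described above — ask one question, restrict respondents by demographics, tally answers as they arrive — can be sketched as a toy. All member records, option names, and the `run_poll` helper below are invented for illustration; this is not Facebook's API:

```python
# Toy sketch of demographic-targeted polling, loosely modeled on the
# Facebook poll feature described above. All data below is invented.

def run_poll(members, question_options, age_range, limit):
    """Show the poll only to members in the age range; tally answers."""
    lo, hi = age_range
    eligible = [m for m in members if lo <= m["age"] <= hi][:limit]
    tally = {opt: 0 for opt in question_options}
    for member in eligible:
        tally[member["answer"]] += 1
    return tally

members = [
    {"age": 19, "answer": "text messaging"},
    {"age": 22, "answer": "social networking sites"},
    {"age": 23, "answer": "email"},
    {"age": 30, "answer": "email"},          # outside 18-24, excluded
]
options = ["email", "text messaging", "social networking sites"]
print(run_poll(members, options, age_range=(18, 24), limit=500))
# -> {'email': 1, 'text messaging': 1, 'social networking sites': 1}
```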

Gilbane asked "Which collaboration technologies will you use most in your job in two years?" He first asked 25-34 year olds, then 18-24 year olds (500 of each).

The largest difference, and a statistically significant one, is that the younger crowd has less faith that email will continue to dominate. As a group, the 18-24 year olds plan to make more use of text messaging (a channel technology) and social networking sites (primarily a platform technology, although Facebook does allow communication over private channels). Interestingly, they seem less enthusiastic about instant messaging than does the older set.

Gilbane’s findings don’t result from a rigorously constructed and administered survey, but I still think they have validity. They correspond tightly with stories and anecdotes I’ve been hearing, from many quarters, about the generational shift in technology use. Evidence is mounting that younger people don’t think of the Internet as a collection of content that other people produce for them to consume. Instead, they think about it as a dynamic, emergent, and peer-produced repository to which they’re eager to contribute.

Will corporate Intranets be ready for them? Should they be?

Science acknowledges breakthrough potential of lightpath technology


Today five research projects in the Netherlands were awarded the main prize in the “Enlighten Your Research” lightpath competition organised by SURFnet and NWO. The winning scientists receive a lightpath and the sum of 20,000 euros for integrating the use of the lightpath into their research.

The winners are:

• Distribution of radiology images in the NELSON lung cancer screening study – University Medical Center Groningen and University Medical Center Utrecht, with P.M.A. van Ooijen from Groningen as the main team member.

• Lightpath for the high-throughput genome-wide analyses in Amyotrophic Lateral Sclerosis (ALS) – University Medical Center Utrecht and the University of California, Los Angeles, with Utrecht scientist Dr. J. Veldink as the main team member.

• Intelligent CCTV monitoring at Arke Stadium – Submitted by Telematica Instituut, University of Twente, Twente Regional Police, FC Twente, NDIX (Netherlands-German Internet Exchange) and T Xchange. The main team member for this lightpath project is Dr. ir. W. Teeuw from Telematica Instituut.

• Electron microscopy using lightpaths – Leiden University Medical Center, Technische Universiteit Eindhoven and SARA Computing and Networking Services. Leiden researcher Dr. J. Valentijn will use lightpaths to share measuring instruments by connecting three research institutes.

• Remote High-Resolution Visualisation of Climate Data using Pixel Streaming – The proposal involves a lightpath between Utrecht University and SARA Computing and Networking Services in Amsterdam, with Prof. H. Dijkstra of the ESSENCE project at Utrecht University as the main team member.

The winning proposals come from different scientific disciplines and are characterised by a high degree of social relevance. The submissions for this first edition of the “Enlighten Your Research” lightpath competition show that scientists are aware of the added value of lightpaths for their research, for instance to share scarce resources such as measuring equipment. The scientists expect that their lightpaths will contribute significantly to accelerating or improving their research.

The “Enlighten your Research” lightpath competition was organised by SURFnet in collaboration with NWO with the aim of promoting the use of lightpaths for scientific research. Lightpaths are a new feature of the hybrid SURFnet6 network and are characterised by high transmission speeds and reliability, a low and very constant network latency and a high degree of security. These properties make lightpaths extremely suitable for scientific research and offer researchers in the Netherlands the possibility of moving in new directions.

Further details and the jury report are available at

About SURFnet
SURFnet enables breakthrough education and research. We develop and operate the national SURFnet6 network and provide innovative services in the areas of security, authentication and authorisation, group communication and video. Over 750,000 academics, staff and students in higher education and research in the Netherlands have daily access to the Internet using SURFnet6. SURFnet is a partner in SURF, the organisation for innovative ICT facilities in which academic universities, universities of applied sciences and research institutions collaborate at the national and international levels.

About NWO
NWO (Netherlands Organisation for Scientific Research) supports the future of academic endeavour in the Netherlands. Together with academics, (inter)national organisations and companies, NWO funds and develops top quality research programmes. About 4,500 researchers working at universities and (NWO) institutes are funded by an NWO subsidy. The core budget amounts to over 400 million euros annually.

Carriers need to surf the Web 2.0 wave

[Interesting article on how carriers need to embrace Web 2.0 technologies. But Web 2.0 is so antithetical to the carrier business model, it would be like asking the Pope to convert to Islam. Thanks to Frank Coluccio for this pointer. Some excerpts from original Lightreading article- BSA]

Carriers' next generation service platforms need to go beyond the closed IMS (IP Multimedia Subsystem) and extend into the open Web 2.0 world if they're to develop the applications their customers will demand, according to a group of senior operator CTOs talking here in Chicago.

BT Group plc (NYSE: BT; London: BTA) CTO Matt Bross agrees that incorporating the potential of Web 2.0 mashups -- the creation of a new service by combining two existing applications -- and allowing third-party developers to create applications to run on carrier networks is the only way forward.

"... the innovation genie is out of the bottle. We need to do more mashups, and we need to connect together for innovation," stated Bross. "We're moving towards a real-time global innovation model ... [and] moving from a closed to an open model. It's a big challenge" for carriers.

"The emergence of Web 2.0 mashups from the IT world may put a new spin on telco service creation that could ultimately render an emerging generation of IMS-based applications obsolete."

Tuesday, June 19, 2007

Amazing mashup tool of Flickr photos to produce 3D images

[This is an amazing demo and talk (as are most talks at TED) demonstrating the power of the participative web through mashups of pictures across the Internet. It is a clear demonstration of the network effect and the visual semantic web --BSA]

Using photos of oft-snapped subjects (like Notre Dame) scraped from around the Web, Photosynth (based on Seadragon technology) creates breathtaking multidimensional spaces with zoom and navigation features that outstrip all expectation. Its architect, Blaise Aguera y Arcas, shows it off in this standing-ovation demo. Curious about that speck in corner? Dive into a freefall and watch as the speck becomes a gargoyle. With an unpleasant grimace. And an ant-sized chip in its lower left molar. "Perhaps the most amazing demo I've seen this year," wrote Ethan Zuckerman, after TED2007. Indeed, Photosynth might utterly transform the way we manipulate and experience digital images.

Why new network business models and architectures are critical

[The following web sites point to some of the challenging problems facing users and application providers in trying to deliver innovative solutions over the current broadband last mile Internet architecture. Despite all the advances in Internet technology, Moore's law, routers, optical switches, etc., the current Internet is still riding on an infrastructure that was designed over 100 years ago in the case of DSL/telephone and over 40 years ago in the case of cable. Not only is this tree-and-branch architecture outdated, the resulting business model is fundamentally shaped and distorted by the design assumptions in that architecture.

The Internet architecture for the long haul and enterprise markets has been radically reshaped over the past decade because of rigorous competition. Many large enterprises multi-home to several competitive providers and also do their own direct remote peering because of the availability of dark fiber in most metro markets. As a result companies with new competitive business models largely dominate the enterprise Internet market such as Level3, Cogent, Equinix, etc.

In the last mile it is a different story.

Cablecos and Telcos are now wrestling with the data deluge of Internet video distribution over their networks whether P2P or HTTP. Many have quietly and surreptitiously implemented various policing and shaping mechanisms to limit the growth of this traffic. Although it is applied on a non-discriminatory basis in terms of the customer or the provider, most cablecos and telcos still don't understand the broader implications in terms of network neutrality and are surprised when the public takes umbrage at such tactics. This is especially true when they block VoIP over cell phones.

The final frontier is the last mile architecture to our neighborhoods and homes. I am pleased to see that Australia is taking a proactive step in this direction with their recent $AUS 2 billion broadband announcement and the formation of an open committee to specify the FTTN architecture.

I have always argued that the university/research community needs to take a more proactive role in both the research and, more importantly, the deployment of alternative last mile architectures. GENI (and the European equivalent PAN) is a critical step on the technology research side. But we also need new examples of actual deployments and business models. I know of a couple of universities that plan to build virtual 3G wireless networks so that students will be free to integrate WiFi and pico-cells with their regular cell phones and also free to develop new applications and services without getting permission from the underlying wireless provider. A couple of institutions are also exploring innovative last mile broadband networks for their universities and dormitories, to be liberated from the bandwidth-limiting tyranny of either the service provider or the campus CIO -- BSA]

From Dewayne Hendricks' list
From: Ken DiPietro

Will carriers spoil the online video party?

BBC story on how the Internet is being overloaded

From Dewayne Hendricks list
Early adopters want Wi-Fi mobile phones, T-Mobile wants to kill VOIP


Research firm In-Stat reports that there’s a substantial market niche
for Wi-Fi enabled mobile phones:

"A recent survey of US early adopters found that almost half of those
respondents who plan to replace their cell phones want Wi-Fi
capability. To meet the growing demand, there is an avalanche of
dual-mode phones in the pipeline. By the end of this year, the Wi-Fi
Alliance will have certified more than 100 different models of
Wi-Fi/cellular phones . . . widespread Wi-Fi deployment and the
variety of Wi-Fi/cellular handsets offers Wi-Fi/cellular-based
systems a significant head-start in the market,” says Allen Nogee,
In-Stat Principal Analyst. “Other technologies, such as WiMAX and
Ultra Wideband, are also poised to enter the handset market, but
Wi-Fi fills a unique niche that WiMAX and UWB cannot match.”

An alternate architecture for university dormitory networks

Australia announces $AUS 2 billion national broadband plan

[Details are still sketchy, but this is an amazing development. I hope the FTTN architecture planned for the cities is not based around a closed, out-of-date telco architecture. --BSA]


Australian Prime Minister John Howard on Monday announced a 2.0 billion Australian dollar (1.68 billion US) plan to provide fast and affordable Internet access across the vast country.

Howard said Optus, the Australian offshoot of Singapore telco Singtel, had been awarded a 958-million-dollar contract to build a broadband network in the bush with rural finance company Elders.

The joint venture, known as OPEL, would contribute a further 900 million US dollars to provide broadband of at least 12 megabits per second by June 2009.

"What we have announced today is a plan that will deliver to 99 percent of the Australian population very fast and affordable broadband in just two years' time," Howard said.

An expert group will also develop a bidding process for the building of a fibre-to-the-node (FTTN) broadband network, funded solely by private companies, in major cities.

Communications Minister Helen Coonan said wireless was the best option for rural Australia because it was impossible to install cables which would reach every farm and property across the country.

"It's been specially developed for rural and regional areas, where (with) fixed broadband you've got to actually run a fibre optic," she said.

Senator Coonan said the broadband speed of 12 megabits per second could "scale up" to very fast speeds as the technology evolved.

"It will be able to go much faster, up to 70 megabits a second and of course our new high-speed fibre network will be able to go up to 50," she said.

But the opposition Labor Party attacked the plan, saying it was too little, too late ahead of this year's election and provided country people with a second-rate service.

"The government proposes a two-tier system -- a good system for the cities, they say, and a second-rate system for rural and regional Australia," Labor leader Kevin Rudd said.

Labor has proposed spending 4.7 billion US dollars to build a national fibre optic network which would cover 98 percent of the population.

The National Party, which is part of Howard's ruling Liberal/National coalition, welcomed the proposal but said it would continue to push for FTTN technology in regional areas.

Nationals Senator Barnaby Joyce said the fact that Australia was a vast country with a small population meant it would always be playing catch-up with other countries when it came to broadband.

"We'll always be catching up, always, because we are 20 million people in a country (the size) of the United States without Alaska," he said.

© 2006 AFP

The future of network research, computing science and eScience

[Some very compelling and highly recommended presentations on computing science and network research. From a posting on Dave Farber's IPer list -- BSA]

Begin forwarded message:

From: Ed Lazowska


As you've previously posted on IP, the Computing Community Consortium was established by NSF (under a cooperative agreement with the Computing Research Association) to engage the computing research community in formulating and articulating some longer-range and more compelling visions for the field.

At this week's Federated Computing Research Conference, there were six CCC-related talks. Slides are online and might be of interest to IP-ers.

The talks were:

Ed Lazowska, "Introducing the Computing Community Consortium"

Christos Papadimitriou, "The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences"

Bob Colwell, "Computer Architecture Futures 2007"

Randal Bryant, "Data-Intensive Supercomputing: Taking Google-Style Computing Beyond Web Search"

Scott Shenker, "We Dream of GENI: Exploring Radical Network Designs"

Ed Lazowska, "Computer Science: Past, Present, and Future" (FCRC closing keynote)


Companies use participatory web to solve toughest research problems

[From a posting on Dewayne Hendrick's list -- BSA]


The two teenagers were short of nearly everything when they kick-
started their Chicago T-shirt business seven years ago. Jake Nickell
and Jacob DeHart each chipped in $500. They ran it out of Nickell's
apartment since DeHart still lived with his mother. For shipping,
they enlisted friends to carry the shirts to the post office.

But they had a killer design team: the Web. They solicited designs
from thousands of Internet users and then had them vote on which to
manufacture. Outsourcing design work to the Web's mass audience has
built the company, now called Threadless, into one of the country's
hottest T-shirt retailers, with estimated annual revenue of about $15
million.

In a similar fashion, Fortune 500 companies such as Procter & Gamble,
Dow AgroSciences and General Mills now turn to the Internet to solve
some of their thorniest research problems. They post them on a Web
site called InnoCentive, which links up companies and scientists,
promising a reward often worth tens of thousands of dollars in
exchange for the best answer.

From quirky Internet start-ups to industrial titans, companies are
increasingly outsourcing segments of their business to sources in
cyberspace -- much as they began shifting production overseas a
generation earlier. This process, known as crowdsourcing, means that
work once done in-house, from design and research to information-
related services and customer support, can now be farmed out, tapping
new expertise, cutting costs and freeing company employees to do what
they do best.

The trend is gaining pace as corporate executives embrace the
openness of the Web. Analysts said the promising gains in
productivity will ultimately benefit the wider economy.

"It's a way to access the distributed knowledge that is out there on
the Web," said Karim R. Lakhani, a professor at Harvard Business
School who has studied the trend. "You can now basically focus on
your core business."

This approach exploits the vast human wisdom and expertise available
via the Internet. But crowdsourcing is less of a collaborative
endeavor than a means of finding individuals with the right skills
for the right price.

Companies are still sorting through a raft of new challenges. While
executives worry about sharing too much proprietary information with
outside contractors, lawyers wrestle with concerns over who owns the
rights to contributions from the crowd. Managers are also evaluating
how to assure quality control and are assessing which tasks are best
suited for outsourcing to the Web.

Publishers that once hired their own photographers are turning to
sites like iStockphoto, which offers nearly 1.8 million images shot
by thousands of amateurs, available royalty-free for as little as a
dollar per picture or $5 for a video clip. Elance, a site offering
freelance services, said its writers complete about 300 jobs a week
for an average $500 each. Feedback for the writers is posted on the
Elance site along with their credentials, which in some cases are
verified by an outside company.

One of the Web's premier sellers, Amazon.com, has become a broker for
the Internet labor force. Amazon's Mechanical Turk service enables
"requesters" to post tasks online and facilitates payment once
they're finished.

Using Amazon's two-year-old service, one company finds Internet
users to collect images of products and related information for its
catalogue, according to Peter Cohen, director of Mechanical Turk.
(The program is named for the ploy of an 18th-century Hungarian
nobleman who built a turbaned mannequin and claimed it was a
mechanical automaton capable of beating anyone in chess. Hidden
inside was an actual chess master.) Another company, which makes
games, turned to Amazon to hire people who can write trivia questions
and then verify the answers. Cohen said more than 100,000 people have
performed work through Mechanical Turk since it was introduced in
late 2005.

For Threadless, the Internet has been part of the company's fiber
from its founding.


Amazon S3 for Science Grids: A viable solution?

Amazon S3 for Science Grids

A team of researchers from the University of South Florida and the University of British Columbia has written a very interesting paper, Amazon S3 for Science Grids: A Viable Solution?

In this paper the authors review the features of Amazon S3 in depth, focusing on the core concepts, the security model, and data access protocols. After characterizing science storage grids in terms of data usage characteristics and storage requirements, they proceed to benchmark S3 with respect to data durability, data availability, access performance, and file download via BitTorrent. With this information as a baseline, they evaluate S3's cost, performance, and security functionality.
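One of the benchmarked properties, availability, reduces to simple bookkeeping over repeated request outcomes. A hypothetical sketch of that bookkeeping (the probe log below is invented; the paper's numbers came from requests against the live S3 service):

```python
# Estimate service availability from a log of probe outcomes,
# in the spirit of the paper's availability benchmark.
# True = the GET request succeeded; False = it failed or timed out.

def availability(probe_log):
    """Fraction of successful probes, as a percentage."""
    if not probe_log:
        raise ValueError("no probes recorded")
    return 100.0 * sum(probe_log) / len(probe_log)

# Invented example log: 998 successes out of 1000 probes.
log = [True] * 998 + [False] * 2
print(f"Observed availability: {availability(log):.2f}%")  # 99.80%
```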

They conclude by observing that many science grid applications don't actually need all three of S3's most desirable characteristics -- high durability, high availability, and fast access. They also have some interesting recommendations for additional security functionality and some relaxing of limitations.

I do have one small update to the information presented in the article! Since the article was written, we have announced that S3 is now storing 5 billion objects, not the 800 million mentioned in section II.

Monday, June 11, 2007

Copyright Silliness on Campus

[Some excerpts from Washington Post article posted on Dave Farber's IPer list-- BSA]


Copyright Silliness on Campus
By Fred von Lohmann
Wednesday, June 6, 2007; A23

As universities are pressured to punish students and install
expensive "filtering" technologies to monitor their computer
networks, the entertainment industry has ramped up its student
shakedown campaign. The Recording Industry Association of America has
targeted more than 1,600 individual students in the past four months,
demanding that each pay $3,000 for file-sharing transgressions or
face a federal lawsuit. In total, the music and movie industries have
brought more than 20,000 federal lawsuits against individual
Americans in the past three years.

History is sure to judge harshly everyone responsible for this absurd
state of affairs. Our universities have far better things to spend
money on than bullying students. Artists deserve to be fairly
compensated, but are we really prepared to sue and expel every
college student who has made an illegal copy? No one who takes
privacy and civil liberties seriously can believe that the
installation of surveillance technologies on university computer
networks is a sensible solution.

It's not an effective solution, either. Short of appointing a
copyright hall monitor for every dorm room, there is no way digital
copying will be meaningfully reduced. Technical efforts to block file-
sharing will be met with clever countermeasures from sharp computer
science majors. Even if students were completely cut off from the
Internet, they would continue to copy CDs, swap hard drives and pool
their laptops.

Already, a hard drive capable of storing more than 80,000 songs can
be had for $100. Blank DVDs, each capable of holding more than a
first-generation iPod, now sell for a quarter apiece. Students are
going to copy what they want, when they want, from whom they want.

So universities can't stop file-sharing. But they can still help
artists get paid for it. How? By putting some cash on the bar.

Universities already pay blanket fees so that student a cappella
groups can perform on campus, and they also pay for cable TV
subscriptions and site licenses for software. By the same token, they
could collect a reasonable amount from their students for "all you
can eat" downloading.

The recording industry is already willing to offer unlimited
downloads with subscription plans for $10 to $15 per month through
services such as Napster and Rhapsody. But these services have been a
failure on campuses, for a number of reasons, including these: They
don't work with the iPod, they cause downloaded music to "expire"
after students leave the school, and they don't include all the music
students want.

The only solution is a blanket license that permits students to get
unrestricted music and movies from sources of their choosing.

At its heart, this is a fight about money, not about morality. We
should have the universities collect the cash, pay it to the
entertainment industry and let the students do what they are going to
do anyway. In exchange, the entertainment industry should call off
the lawyers and lobbyists, leaving our nation's universities to focus
on the real challenges facing America's next generation of leaders.

How much value does the Internet deliver to consumers? An economist's view

[From a posting on Dewayne Hendricks list -- BSA]
[Note: This item comes from reader Charles Jackson. DLH]

From: "Charles Jackson"

Interesting paper:

Valuing Consumer Products by the Time Spent Using Them: An
Application to the Internet
by Austan Goolsbee, Peter J. Klenow

NBER Working Paper No. 11995
Issued in February 2006
NBER Program(s): EFG IO PR

---- Abstract -----

For some goods, the main cost of buying the product is not the price
but rather the time it takes to use them. Only about 0.2% of consumer
spending in the U.S., for example, went for Internet access in 2004
yet time use data indicates that people spend around 10% of their
entire leisure time going online. For such goods, estimating price
elasticities with expenditure data can be difficult, and, therefore,
estimated welfare gains highly uncertain. We show that for time-
intensive goods like the Internet, a simple model in which both
expenditure and time contribute to consumption can be used to
estimate the consumer gains from a good using just the data on time
use and the opportunity cost of people's time (i.e., the wage). The
theory predicts that higher wage internet subscribers should spend
less time online (for non-work reasons) and the degree to which that
is true identifies the elasticity of demand. Based on expenditure and
time use data and our elasticity estimate, we calculate that consumer
surplus from the Internet may be around 2% of full-income, or several
thousand dollars per user. This is an order of magnitude larger than
what one obtains from a back-of-the-envelope calculation using data
from expenditures.
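To see the scale of the claim, a crude back-of-envelope can be run in a few lines. Under a linear-demand approximation, consumer surplus at the observed point is roughly total expenditure divided by twice the price elasticity, where "expenditure" for a time-intensive good must include the opportunity cost of time. The figures below (wage, leisure hours, money spend, elasticity) are illustrative assumptions, not the paper's estimates:

```python
# Illustrative back-of-envelope in the spirit of Goolsbee-Klenow:
# value Internet use at the opportunity cost of time, then apply a
# linear-demand surplus triangle, CS ~= expenditure / (2 * |elasticity|).
# All numbers below are assumed for illustration only.

def internet_consumer_surplus(wage_per_hour, leisure_hours_per_week,
                              online_share_of_leisure,
                              money_spend_per_week, elasticity):
    """Weekly consumer surplus from Internet use (linear-demand triangle)."""
    time_cost = wage_per_hour * leisure_hours_per_week * online_share_of_leisure
    full_expenditure = time_cost + money_spend_per_week
    return full_expenditure / (2 * abs(elasticity))

weekly_cs = internet_consumer_surplus(
    wage_per_hour=20.0,            # assumed wage
    leisure_hours_per_week=100.0,  # assumed leisure budget
    online_share_of_leisure=0.10,  # ~10% of leisure online (from the abstract)
    money_spend_per_week=2.0,      # access fees are tiny (~0.2% of spending)
    elasticity=1.6,                # assumed demand elasticity
)
print(round(52 * weekly_cs))       # annualized: several thousand dollars
```

With these assumed inputs the annual figure lands in the low thousands of dollars per user, which is the order of magnitude the abstract reports.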

Citizen Science: 'Push-Button' Climate Modeling Now Available


WEST LAFAYETTE, Ind., June 6 -- A tool used by scientists to create climate models is about to become easier to use and available to a much wider audience.

A new Web-enhanced version of the most commonly used climate-modeling system will allow many more scientists -- and even curious students -- to test theories about the planet's climate.

Matt Huber, an assistant professor of earth and atmospheric sciences at Purdue University, says the Community Climate System Model is already used by thousands of scientists, and the results from their models often make headlines around the world.

"This new tool makes climate modeling available to a much wider audience," Huber says. "This allows us to get science done at the push of a button. Now we have a 'turn-key' climate model."

The new climate modeling TeraGrid service tool was announced Wednesday (June 6) at the annual meeting of TeraGrid users in Madison, Wis.

Huber says this tool will allow many more people to become involved with climate modeling and to ask "what if?" questions.

"Our hope is to roll this out to a broader community," he says. "Researchers on the cutting edge of science can use this tool, but so can high school students who want to run their own climate models. They will generate equal output."

The Community Climate System Model, known to many scientists as CCSM, is actually a collection of interconnected modeling systems. The climate system model contains separate climate models using data from the atmosphere, oceans, land surfaces and ice fields and then brings the models together in yet another system known as a coupler.

Carol X. Song, senior research scientist in Purdue's Office of the Vice President for Information Technology and principal investigator for the Purdue TeraGrid project, says researchers currently have to enter climate modeling information using UNIX command lines and know how to optimize the system to get accurate results.

"It can take days or more to get someone up to speed on how to use the modeling system, and that assumes they already know how to enter instructions in command lines," Song says. "With our new climate portal, that's all Web-enabled. All the user has to do is fill in fields on a Web form."

However, even with the easier-to-use Web interface, most users would be unable to run their models without access to powerful computing resources, which the new portal also provides.

"These simulations are very resource-intensive because they require a large amount of computer cycles and data storage," Song says. "We have connected this system to the resources of the National Science Foundation's TeraGrid so that the computing resources will be available."

The simulations are currently being run on an IBM DataStar computer at the San Diego Supercomputer Center. The post-processing of the simulation data is done on Purdue's distributed computing system, known as a Condor pool. Both institutions are part of the NSF TeraGrid.

Huber says climate models can be sensitive to underlying issues related to getting the multiple systems to work together.

"Optimizing the Community Climate System Model is difficult," he says. "It is possible by changing the optimization for the model to show global warming or global cooling when that isn't what the data really shows. Obviously you don't want that because, with climate modeling, everybody cares about the answer. This new system does the optimization for the user, so the modelers can concentrate on their climate models and not on system optimization."

Another benefit to using the new climate-modeling portal is that users don't have to be experts at using the TeraGrid.

"One of the problems with doing science on the grid is that sometimes you ask, 'Where did my data go?'" Huber says. "With this system you don't have to track it down. This system automates a whole series of steps and also manages and archives the data."

Lan Zhao, a Purdue research scientist and architect of the Purdue earth science portals, including the CCSM portal, says development of additional portals for other scientific disciplines will now be quicker.

"We developed many generic, configurable components for this portal that can be used in other portals, which means new portals can be created rapidly and not from scratch," Zhao says.

Song says she is proud to be a part of the team that developed the new climate modeling tool.

"It's a very nice piece of work, and we're very excited to offer this new resource," she says. "The Rosen Center for Advanced Computing is a computing research group within Information Technology at Purdue, and this is what we are about. We connect the computing hardware with the needs of researchers."

The Community Climate System Model was developed by the National Center for Atmospheric Research and is currently funded by the National Science Foundation and the U.S. Department of Energy.

Social Networking tools for researchers and scientists
Newsroom > Web Article
Calit2 Launches Research Intelligence Portal and Web 2.0 Tools to Assist Researchers

San Diego, CA, May 4, 2007 -- Treemapping. Tag clouds. Mashups. Bug boxes. The jargon takes some getting used to, but a new Internet portal deployed by Calit2 offers the latest in Web 2.0-type technologies designed to help the institute's scientists and engineers find new funding opportunities and research partners.

The UC San Diego division of Calit2 today announced the beta release of the Research Intelligence Portal, which promises to "aggregate to inform." The site's tools offer information and insight that go well beyond what faculty members traditionally have relied upon to learn about available grants and collaborators for new research initiatives.

"We wanted to take some of the lessons learned from commercial business intelligence efforts and then apply them to the business of a research university," says project leader Jerry Sheehan, manager of government relations for the UCSD division. "This portal is a living experiment in how data mining, visualization and Web 2.0 technologies can be used to support the research endeavor."

UCSD Vice Chancellor for Research Art Ellis has been using an alpha version of the portal for several months. "For scholars, research intelligence tools will help create partnerships, locate resources, and identify emerging areas of opportunity," says Ellis. "In short, these tools have the potential to create completely new paradigms for conducting research."

The portal relies on best-of-breed technologies culled from open-source software, code developed in Calit2, and commercial programs, notably for the blog and treemapping sections of the site. [For a glossary of Web 2.0 buzzwords, see bottom of article.]

"We are doing real-time business analytics along with content and knowledge management in a participatory way," explains Sheehan. The portal is broken into four main sections:

+ Grant funding (Updated Daily). New and ongoing solicitations from the federal government, with funding opportunities by agency. Weekly maps of new funding are also available.

+ Industrial partners (Updated Daily). Profiles of 72 Calit2 industry partners with up-to-date breaking news (e.g., 18 articles were posted on May 1), and interactive maps that geo-locate where each partner is headquartered. Users can also sign up to get news via email or RSS feed, and can search the portal's backlog of partner data.

+ Research interests (Updated by Users). This section offers a snapshot of Calit2 research, based on keyword data from the websites and grant abstracts of 218 faculty and staff at UC San Diego. The software uses keyword analysis to generate tag clouds -- making it easier for a user to locate a researcher with specific expertise. Likewise, the user can get a glimpse of Calit2's overall research emphasis by choosing to look at 50 or 100 keyword tags at a glance. Separately, a treemap provides an interactive, visual representation of the 311 federal, peer-reviewed research grants awarded in the past four years to Calit2-affiliated faculty. Users can dig down to details on the smallest grant, while also capturing the bigger picture (e.g., NIH edges out NSF as Calit2's top provider of project funding, while the Department of Defense ranks #3).

+ Developer's Blog. This section offers site analytics; a bug box that encourages users to report any problems with the site; and a live-chat section that is usually staffed during business hours (Pacific time). Also on the site: a video primer on subscribing to Really Simple Syndication (RSS) feeds.
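Since the portal delivers both its funding alerts and its partner news over RSS, it is worth noting how little code a subscriber-side script needs. The sketch below parses a minimal RSS 2.0 feed using only the Python standard library; the feed content is invented for illustration:

```python
# Parse a minimal RSS 2.0 feed and list item titles, as a portal
# subscriber's script might. The feed content below is made up.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Funding Opportunities</title>
    <item><title>NSF: Cyberinfrastructure</title>
          <link>http://example.org/1</link></item>
    <item><title>NIH: Imaging Informatics</title>
          <link>http://example.org/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))
```

The same three-line parsing core would work against any RSS 2.0 channel, which is exactly why the portal can expose so many of its sections as feeds.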

"The Research Intelligence portal grew out of my need to see and interact with the universe of research associated with Calit2," says institute director Larry Smarr. "The beauty of these Web 2.0 tools is that, as more members of the Calit2 research community begin to participate in the creation of new content, the value of the portal increases -- not just to individual researchers, but also to the institute as a whole."

Primary funding for development of the portal came from the UCSD Division of Calit2. "It is critical that Calit2 remain on the cutting edge of new Web technologies and trends such as social networking," says division director Ramesh Rao, an electrical and computer engineering professor in UCSD's Jacobs School of Engineering. "Calit2 is fundamentally interdisciplinary, and the Research Intelligence portal will help researchers make new connections to their counterparts around campus."

In the beta release, the faculty information is limited to the 218 researchers affiliated with Calit2 on the UCSD campus. Work is underway to extend the coverage of the portal to include all Calit2-affiliated faculty at UC Irvine. Already, Smarr's office has provided supplemental funding to allow developers to beef up corporate data on the site to include Calit2 industry partners of the UC Irvine and UC San Diego divisions alike.

Members of both campuses can benefit from the portal's aggregation of federal funding opportunities and breaking news on the institute's industrial partners.

Calit2's Jerry Sheehan, who leads the project, outlines in a short video what the Research Intelligence portal offers researchers and Calit2's industry partners. According to Sheehan, the site is "tapping the emerging generation of web technologies to enable the collective intelligence of Calit2 researchers to be brought to bear in examining new research opportunities and collaborations."

The Research Strengths component of the portal uses software to analyze each investigator's research interests by culling through publicly available abstracts of grants or other available documents, as well as one web page affiliated with each researcher. All of the text is run through a web application programming interface (API) used by Yahoo! on its searches, as well as the Keyword Extractor and Analyzer (KEA) algorithm, which takes a text or a web page URL as input and spits out keywords. The result: a high degree of accuracy in selecting a weighted series of keywords that reflect each UCSD researcher's interests. With those keywords, the portal can then automatically search through new funding opportunities and feed the pertinent ones to specific researchers via Really Simple Syndication (RSS) or email.
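The pipeline described above can be approximated in miniature. The sketch below is emphatically not the KEA algorithm or the Yahoo! API the portal uses; it weights keywords by plain term frequency and then scores invented funding solicitations against a researcher's profile, just to show the matching idea:

```python
# Toy keyword weighting and funding-alert matching. This is NOT the
# KEA algorithm or the Yahoo! search API -- just a term-frequency
# sketch of the same extract-then-match idea.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "for", "on", "with"}

def weighted_keywords(text, top_n=5):
    """Return the top_n (word, weight) pairs, ignoring stopwords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

def relevance(solicitation, keywords):
    """Sum the researcher's keyword weights that appear in a solicitation."""
    text = solicitation.lower()
    return sum(weight for word, weight in keywords if word in text)

profile = ("sensor networks for climate sensing; wireless sensor "
           "deployment; climate data networks")
kw = weighted_keywords(profile)

calls = ["NSF call on wireless sensor networks",
         "NEH call on medieval literature"]
best = max(calls, key=lambda c: relevance(c, kw))
print(best)
```

A real deployment would replace the frequency counter with a trained extractor and push the top-scoring solicitations out over RSS or email, as the article describes.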

More accurate keywords make for more relevant funding alerts, so the beta version lets Calit2-affiliated researchers edit their profiles and keywords, and export any of this data to their web pages or social bookmarking sites. In the next round of tool development, users will also be able to upload their own documents and web pages and get back appropriate keywords for submission. "Even with the best algorithms in the world, the accuracy of extracting key words will always benefit from the active participation of the researcher," says Sheehan. "The foundation of this portal is that it is based on the participatory Web 2.0 model, and as more researchers upload content, the more valuable the entire service will be for that researcher as well as for all users of the site."

The portal uses RSS as a way to deal with data, and blogs as a framework for content management, with an underpinning of open-source databases such as MySQL. Interactive chat features are already integrated into the system, and developers plan to add more such features in the future. "The site is like a Swiss army knife," adds Sheehan. "Most users can use it, but a lot of tools may take some explanation, which is one reason for embedding the instructional videos and help features."

The underlying database structure and service-oriented integration of the portal was developed by staff programmer-analyst Arindam Ganguly, a recent UCSD computer science graduate, in collaboration with Calit2's Software & Systems Architecture & Integration (SAINT) lab, led by CSE professor Ingolf Krueger. SAINT program analyst Yonghui Chen worked on the KEA key word extraction, treemapping and some RSS functions, while graduate student To-ju Huang helped automate the process of information gathering, filtering and populating.

UCSD vice chancellor for research Art Ellis says that the Calit2 portal will help the campus position itself as an innovator, especially in front of the funding agencies that account for the bulk of federal support for research. "We live in an age where technologies such as databases and visualization tools are allowing us to track the creation and diffusion of knowledge -- essentially in real time and across the globe," notes Ellis, who joined UC San Diego last September from the National Science Foundation, where he directed the agency's Division of Chemistry. "The development of research intelligence tools is being supported by a variety of federal agencies and campuses that recognize their potential for managing their investment portfolios."

"Our hope is that in the next four months we will get input from beta users so we can see how useful the portal is," says Sheehan. At that point, the developers will decide whether to move into full-scale production. Before then, they will be closely tracking the number of users who register for the system and then take the trouble to modify their key research words.

Sheehan's principal message to Calit2 researchers: "The key is your participation."

Do You Speak Web 2.0?

New Web technologies have triggered an explosion in Internet-related vocabulary. Following are short descriptions based on definitions posted on Wikipedia.

Blog. Short for 'web log' - a website where entries are made and displayed in reverse chronological order, typically providing commentary or news on a particular subject or from a specific perspective.

Bug box. Website feature that encourages users to submit information on bugs or glitches in the site's software or tools.

Mashup. Website or web application that combines content from more than one source.

RSS. Really Simple Syndication (RSS 2.0) is a web feed format used to publish frequently updated digital content.

Social bookmarking. User-stored lists of Internet resources that are available to a network through user-defined keywords (tags). Services such as Connotea also offer rating, commenting, citations, reviews, web annotation and other innovations.

Social networks. Virtual online communities - such as MySpace and Facebook - which grow as newcomers invite their own personal network contacts to join; large organizations are creating private social networks, known as Enterprise Relationship Management.

Tags. A type of metadata involving the association of descriptors with objects; frequently appear as keywords.

Tag cloud. Visual depiction of content tags used on a website, with more frequently used tags depicted in a larger font.

Treemap. Interactive method for displaying information about entities with a hierarchical relationship, in a "space-constrained" environment (e.g., a computer monitor).

Web 2.0. Second-generation, Web-based communities, tools and services that facilitate collaboration and sharing among users.

The challenges for traditional software as it moves to the web

[From Dewayne Hendricks list -- BSA]

[Note: This item comes from friend John McMullen. DLH]

From: "John F. McMullen"

From the New York Times -- technology/05compute.html?

Competing as Software Goes to Web
Can two bitter rivals save the desktop operating system?
by John Markoff

In the battle between Apple and Microsoft, Bertrand Serlet and Steven
Sinofsky are the field generals in charge of competing efforts to
ensure that the PC's basic software stays relevant in an increasingly
Web-centered world.

The two men are marshaling their software engineers for the next
encounter, sometime in 2009, when a new generation of Macintosh and
Windows operating systems is due. Their challenge will be to avoid
refighting the last war and to prevent finding themselves outflanked
by new competitors.

Many technologists contend that the increasingly ponderous PC-bound
operating systems that currently power 750 million computers,
products like Microsoft's Windows Vista and Apple's soon-to-be-released
Mac OS X Leopard, will fade in importance.

In this view, software will be a modular collection of Web-based
services accessible by an array of hand-held consumer devices and
computers and will be designed by companies like Google and Yahoo
and quick-moving start-ups.

"The center of gravity and the center of innovation has moved to the
Web, where it used to be the PC desktop," said Nova Spivack, chief
executive and founder of Radar Networks, which is developing a Web
service for storing and organizing information.

Faced with that changing dynamic, Apple and Microsoft are expected to
develop operating systems that will increasingly reflect the
influence of the Web. And if their valuable turf can be preserved, it
will largely reflect the work of Mr. Serlet and Mr. Sinofsky, veteran
software engineers with similar challenges but contrasting management
styles.


Amazon EC2 For Scientific Processing

[Amazon EC2 is generating a lot of buzz in scientific computing circles as it makes distributed computing platform projects a lot easier to implement, as opposed to federating disparate resources from different management domains. Some also claim that EC2 is considerably cheaper for a researcher than what it would cost to provide power and cooling to most HPC computing clusters. Thanks to Richard Ackerman for this pointer-- BSA]

Amazon EC2 For Scientific Processing

Mike Cariaso was kind enough to set up the Meetup in Bethesda for my upcoming trip to Washington, DC. Mike has done some pretty cool work with Amazon EC2, setting up the mpiBLAST tool to run on EC2.

MPI, short for Message Passing Interface, is a standard for coordinating processing across the nodes of a supercomputing cluster or grid. MPICH2 is a popular implementation of MPI.

BLAST is the primary bioinformatics tool used to query genome sequences against an established database, or to match one sequence against another. The primary BLAST tool is run as an online service by the National Institutes of Health.

Running BLAST over MPI lets BLAST run on a processing grid; this variant is called mpiBLAST.
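The core trick in mpiBLAST is database segmentation: the sequence database is split into segments, each node searches its segment independently, and the hits are merged. The toy sketch below illustrates that decomposition in ordinary Python, with substring matching standing in for real BLAST alignment and a sequential loop standing in for MPI ranks:

```python
# Toy illustration of mpiBLAST-style database segmentation: split the
# sequence database into segments, search each independently, merge
# the hits. Substring matching stands in for real BLAST alignment,
# and the segments are searched in a loop rather than over MPI.

def segment(database, n_segments):
    """Split a list of (name, sequence) records into roughly equal parts."""
    size = -(-len(database) // n_segments)  # ceiling division
    return [database[i:i + size] for i in range(0, len(database), size)]

def search_segment(query, records):
    """Return names of records containing the query (stand-in for BLAST)."""
    return [name for name, seq in records if query in seq]

db = [("seq1", "ACGTACGT"), ("seq2", "TTTTGGGG"),
      ("seq3", "ACGTTTTT"), ("seq4", "GGGGACGT")]

hits = []
for part in segment(db, n_segments=2):   # each part could go to one node
    hits.extend(search_segment("ACGT", part))
print(sorted(hits))
```

Because each segment is searched with no knowledge of the others, the work parallelizes cleanly, which is what makes the approach a good fit for EC2 instances as well as traditional clusters.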

Mike's work builds on that of Peter Skomoroch, who did the work needed to get MPICH2 running on Amazon EC2. Peter documented his work in a very informative set of blog posts:

* On-Demand MPI Cluster with Python and EC2 (part 1 of 3)
* MPI Cluster with Python and Amazon EC2 (part 2 of 3)
* Amazon EC2 Considered Harmful

That last post doesn't actually reference EC2, but it is entertaining nonetheless. Part 2 ends with a parallel fractal calculation running on 5 EC2 instances!

By the way, I'm very interested in hearing about more academic and scientific uses of EC2. Please feel free to post a comment.

-- Jeff;

Open Source - the Ignorance of Crowds

[There have been several good articles and discussions on various lists about the value of open source programming, especially its inherent limitations. I agree with most critics about the challenges open source faces on large software projects. But to my mind the debate is becoming increasingly irrelevant. The problem is not the pros and cons of open source, but large monolithic programming projects themselves. Web services, Web 2.0, etc. are slowly eliminating the need for such architectural approaches to solving large complex problems. Instead, programmers and computer scientists are recognizing that much of this work can be incorporated into standalone web services linked across the network. The modules can be developed independently by small teams of open source developers, and the same module can be re-used by many different applications. The value no longer resides in the software itself but in how you mash up these services to create new innovative solutions. Thanks to Frank Coluccio and Andrew Odlyzko for these pointers -- BSA]

The Ignorance of Crowds
by Nicholas G. Carr
Issue 47 | Summer 2007

The open source model can play an important role in innovation, but know its limitations. Ten years ago, on May 22, 1997, a little-known software programmer from Pennsylvania named Eric Raymond presented a paper at a technology conference in Würzburg, Germany. Titled "The Cathedral and the Bazaar," the paper caused an immediate stir, and its renown has only grown in the years since. It is now widely considered one of the seminal documents in the history of the software industry.

Continued at:

Open Source Software Development as a Special Type of Academic Research

A Second Look at the Cathedral and Bazaar



[From a posting on Dewayne Hendricks list --BSA]

[SOURCE: Internet Innovation Alliance]

Working on the premise that informed policy makers make the right
decisions when they have the most accurate and current information at
their fingertips, the Internet Innovation Alliance has attempted to
gather the relevant data on high speed Internet access in one single
location. The report covers Internet basics and access speeds,
demographics of Internet users, broadband deployment in the US and
abroad, the consumer and economic benefits of broadband, and the
growth in Internet traffic. All in less than 30 pages -- what a bargain.

Health Care Grid using web services to be deployed in Saskatchewan

[Excerpts from article-- BSA]

On The Go Technologies Group, .... has received an order
for a turnkey DICOM archive solution to be deployed within Saskatchewan's
provincial health care region. The order is significant and
unprecedented as it represents the first of its kind in Canada.

Acuo Technologies' DICOM Services Grid software delivers 21st century
image management features and performance. The AcuoMed Image Manager
is a secure, open-system software solution for transporting, storing,
tracking and retrieving digital images across an entire DICOM
network. The enabling open systems software solution, constructed on a
collaborative and extensible grid computing model, facilitates an
infrastructure built on a services-oriented architecture and
virtualizes and replicates storage assets.

More on Platform Architect as a Network-centric Strategy

[More comments from David Reed on platform architects -- BSA]

Interesting. But it's mostly been said before, quite coherently - read
the book by Annabelle Gawer and Michael Cusumano entitled Platform
Leadership, from Harvard Business School Press.

As vp and chief scientist at Lotus, I was lucky to be involved directly
in most of the cases discussed in that book - Intel, for example,
recognized early that its value proposition was intimately entangled with a
network of other companies, many of whom were linked only indirectly to
Intel (e.g. Lotus, which had *zero* customer or supplier contracts with
Intel, and little need to talk to them). Thus taking a leadership role
in shaping that network's evolution was going to be crucial, and Intel
stepped into the role - very actively working with people with
responsibilities like mine in distant parts of the network to define the
evolution of the PC platform. (e.g. Intel led a variety of software
architecture standards efforts that increased the success of PCs, their
OSs and their applications - often acting as Switzerland between
tensely competing companies that also had to cooperate - IBM and Compaq,
Lotus and Microsoft Applications). This contrasted directly with Apple's
very constrained notion of a vertically integrated stack, and no network
to speak of (if you also did apps or hardware for PCs, Apple basically
told you, e.g. us at Lotus, to go to hell, presuming that they could
dictate their platform's direction alone).

The failure to grasp that one's platform was embedded in a network of
relationships that had non-trivial network effects was a failing
strategy, over the long run. IMO, it explains a lot of what has
happened in the computing industry. It's not a full explanation, but
ignore it at your peril. That said, the more macho and self-centered
CEOs and management teams ignore it all the time. The more the CEO (or
POTUS, for that matter) is surrounded by glorifying sycophants, the more
likely the value of the network is discounted by that person.

This, by the way, is part of where the inspiration for Reed's Law came from.
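For readers who have not met it, Reed's Law holds that the value of a group-forming network scales with the number of possible subgroups, roughly 2^n, dwarfing Sarnoff's linear and Metcalfe's quadratic estimates as n grows. A quick comparison of the three scaling laws:

```python
# Compare broadcast (Sarnoff, ~n), pairwise (Metcalfe, ~n^2) and
# group-forming (Reed, ~2^n) estimates of network value.
def sarnoff(n):
    return n                      # one broadcaster, n receivers

def metcalfe(n):
    return n * (n - 1) // 2       # possible pairwise links

def reed(n):
    return 2 ** n - n - 1         # subgroups of two or more members

for n in (10, 20, 30):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```

Even at thirty members the group-forming term is already nine orders of magnitude beyond the pairwise count, which is why Reed argues that networks enabling group formation create value the firm-centric view misses.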

Platform Architect as a Network-centric Strategy

[From N. Venkat Venkatrama's blog -- BSA]

Platform Architect as a Network-centric Strategy

My work in recent years has focused on strategies from a network-centric point of view. Here are some core differences when we shift the frame from a firm-centric view to a network-centric view.

1. Instead of looking at a corporation as a portfolio of products (single-business firms) or portfolio of businesses (multi-business firms), look at corporations as a portfolio of capabilities leveraging a portfolio of relationships (within and across different organizational entities, including customers). Co-creation with complementary corporations and customers emerges as a useful frame to think about strategic choices.

2. Recognize the role of the Internet as the new infrastructure to architect business models that deliver superior value to customers (and by extension, the shareholders).

Viewed this way, there is a symbiotic relationship between business and IT strategies. Each needs the other; neither alone is sufficient for winning performance.
So, what new business models emerge when viewed from a network-centric point of view? One that I have been intrigued by for some time is what I label the platform architect. The term platform has become widely used (and perhaps even abused) in business writings today. But the logic of architecting a platform reflects the essence of strategic thinking in network-centric terms. Let me outline some essential ideas.

1. The automotive industry may have created the first product platform whereby automakers such as GM or Ford designed ways to share a set of components common to different automobiles. It helped in streamlining production and efficient procurement of components and subsystems from vendors. In the automotive industry, product platforms are also labeled as vehicle architecture that specifies how the various components and subsystems fit together. This is an example of proprietary or closed platform architecture. GM's vehicle architecture differed from Ford's or Chrysler's.

2. The computer industry transformation allows us to go beyond single-company, proprietary closed architecture to a more multi-company, relatively open architecture.

Understanding this shift in the computer industry allowed researchers to introduce key concepts, notably the design rules and modular operators of Carliss Baldwin and Kim Clark. The six modular operators introduced in their book are a useful way to understand the shift in the computer industry. These are:

Splitting - Modules can be made independent.
Substituting - Modules can be substituted and interchanged.
Excluding - Existing Modules can be removed to build a usable solution.
Augmenting - New Modules can be added to create new solutions.
Inverting - The hierarchical dependencies between Modules can be rearranged.
Porting - Modules can be applied to different contexts.
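A toy sketch can make the operators concrete: if a design is a mapping from module names to interchangeable functions, several of the operators reduce to simple edits of that mapping. The module names and behaviors below are invented for illustration, not drawn from Baldwin and Clark's book:

```python
# Toy sketch of modular design: a "system" is a dict of named modules,
# and several of Baldwin & Clark's operators become dictionary edits.
# Module names and behaviors are invented for illustration.

def run(system, value):
    """Apply each module in order -- the design in action."""
    for name, fn in system.items():
        value = fn(value)
    return value

base = {
    "clean": str.strip,            # splitting: each piece is independent
    "shout": str.upper,
}

# Substituting: swap one module for another with the same interface.
substituted = dict(base, shout=str.lower)

# Augmenting: add a new module to create a new solution.
augmented = dict(base, exclaim=lambda s: s + "!")

# Excluding: remove a module and still have a usable system.
excluded = {k: v for k, v in base.items() if k != "shout"}

print(run(base, "  hi "))         # HI
print(run(substituted, "  Hi "))  # hi
print(run(augmented, "  hi "))    # HI!
print(run(excluded, "  hi "))     # hi
```

The point of the sketch is that none of these edits required touching the other modules, which is exactly the property that let the computer industry move from closed, single-company architectures to open, multi-company ones.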

This shift also allows us to understand the importance of platform leaders. Annabelle Gawer and Michael Cusumano show how Intel, Microsoft and Cisco (among others) drive innovation in the computer industry through platform leadership. Indeed, a significant part of the Microsoft antitrust case is based on ideas of network effects and 'divided technical leadership'--a term introduced by Professor Tim Bresnahan in an unpublished working paper (available here). The essential point is to recognize that a platform need not be under the control of a single firm but could be designed by the coordinated efforts of multiple different firms, each leading in one or more layers of the stack (as shown in the figure above). Thus, we refer to the dominant architecture of the personal computer as the Wintel (Windows-Intel) architecture. Coordination across complementary capabilities, through different types of relationships, is an essential requirement of crafting strategies in a network era. A platform architect is one who is capable of coordinating different components into a coherent architecture that creates direct and indirect network effects.

3. So, does the idea of platform architecture have any role beyond the computer industry? To the extent that network effects may be operating, platform architecture will be important. To the extent that business models are being created on a global networked infrastructure, platform architecture will be critical. We are beginning to see attempts by companies to architect platforms in music (e.g., Apple iPod/iTunes and Microsoft Zune), photography (Yahoo-Flickr, Google Picasa), e-retailing (Amazon, eBay), social networking (MySpace, Facebook), media (YouTube) and so on. I have developed some preliminary maps of some of these shifts that I use in my presentation on network views of strategy. In an intriguing post, JP Rangaswami of BT even suggested that American Idol be viewed as a platform and enumerated a set of criteria that are useful for exploring the idea of platforms beyond the technology sector.
So, are you really a platform architect?

One--platform architecture is essentially network-centric. It involves coordination of modules (or sub-components) designed and delivered by different independent entities. So, while we may credit Apple for shifting the music industry to the network era, its success depends equally on the broader ecosystem that it has orchestrated. Microsoft--despite succeeding in architecting the Wintel platform--has been unable to win with Zune thus far. Similarly, GM OnStar is a relatively closed (GM-centric) telematics platform that has not been adopted by any non-GM automakers, so it cannot yet be considered a network-centric strategy; Microsoft, by contrast, is initially partnering with Ford Motor Company to launch a new dashboard OS. When Microsoft extends this experiment to deliver this functionality to non-Ford cars, it will become a platform architect in the same spirit as the Windows and Office platforms.

Behind every successful platform is a vibrant ecosystem of complementary capabilities.

Two--platform architects earn revenue in two ways. There is direct payment from the users of the platform (e.g., Windows Vista or Office 2007, Sony PS3 or Xbox) or indirect payment from other parties involved in business transactions (e.g., advertising in the case of Google, or transaction fees in the case of Visa, Amazon or eBay). The choice depends on network characteristics and customer propensity to pay for different features under different conditions.

Thinking through the location of the cash register is critical to a successful platform-architect strategy.

Three--the scope of platform architecture straddles multiple industry boundaries. A single-company proprietary platform like the automotive vehicle architecture is a firm-centric strategy. In contrast, a platform architect, viewed from a network-centric point of view, operates across industries. Is Google a search engine or an advertising platform? Is Microsoft a software company or something else? Is Amazon an e-tailer or e-commerce platform or something even broader? What makes the platform architect strategy challenging is that it defies traditional industry boundaries.

Myopic demarcation of platform scope is likely to lead to failed strategies.

Four--platforms evolve and morph. Networks by definition are connected and dynamic. The dynamic nature is by virtue of the addition of new nodes and new linkages. New entities and new relationships change the scope of platforms. Amazon's success is partly due to its ability to evolve its definition of business scope and adapt to new functionality and market demand (look at Amazon's launch of DRM-free online music downloads in different music formats as an alternative to iTunes). Competitive moves also redefine the scope of platforms. When Google acquired YouTube, it put in motion a chain reaction of new connections (e.g., NBC and MySpace joining together). RIM announced that its popular BlackBerry software will be available on competing phones that run Microsoft's mobile OS. In some cases, the implications are more emergent (than designed), as in the case of mash-ups using APIs from different entities (the Facebook platform releasing its APIs is the latest). For an updated matrix of mashups, see here.

Defending successful platforms calls for proactively recognizing the likely evolutionary shifts and being in a position to capture value.

Posted by N. Venkatraman at 8:20 AM