[A presentation I highly recommend is one given by Ian Foster at a recent D-Grid meeting. There are many important concepts in that presentation that are well worth investigating further. These new concepts are often referred to as Web 2.0; others call it the participatory web, and still others call it user-controlled processes - but essentially it is a new way of thinking about science and business processes. The first generation of grids and eScience involved a strong coupling between the application and the underlying computational infrastructure. The second generation, built around Web 2.0 and SOA, will empower many more users to deploy service-oriented science, where people create services, independent of any specific computational infrastructure, that anyone can discover. Third-party users can then compose these services with workflow tools to create new functions, which can in turn be published as new services to the global community. Users can now decouple their applications from the computational infrastructure and select best-of-breed facilities anywhere in the world - including commercial services such as Amazon EC2 --BSA]
http://www-fp.mcs.anl.gov/~foster/Talks/Scaling%20eScience%20IBERGrid.pdf
Also highly recommended is Ian Foster's blog at http://ianfoster.typepad.com/blog/
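As a concrete illustration of this service-composition pattern, the following minimal Python sketch chains two independently hosted web services into a new workflow that could itself be republished as a service. The endpoint URLs and JSON fields are hypothetical placeholders, not real services.

# A minimal sketch of the "service-oriented science" pattern described above:
# two independently hosted services are composed into a new workflow, which
# could itself be republished as a service. The endpoint URLs and JSON fields
# are hypothetical placeholders, not real services.
import json
import urllib.request

def call_service(url: str, payload: dict) -> dict:
    """POST a JSON payload to a web service and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def composed_workflow(dataset_id: str) -> dict:
    # Step 1: a third-party data service, discovered independently of any
    # particular computational infrastructure.
    data = call_service("https://example.org/data-service", {"dataset": dataset_id})
    # Step 2: an analysis service hosted elsewhere (for example on a commercial
    # facility such as Amazon EC2) consumes the first service's output.
    return call_service("https://example.net/analysis-service", {"records": data})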
Tuesday, May 22, 2007
First International Conference on Networks for Grid Applications
http://www.gridnets.org
The GridNets conference series is an annual international meeting that provides a focused and highly interactive forum where researchers and technologists have the opportunity to present and discuss leading research, developments, and future directions in the Grid networking area. The objective of this event is to serve both as the premier conference presenting the best Grid networking research and as a forum where new concepts can be introduced and explored.
The previous events in this series were GridNets 2004 in San Jose (USA), GridNets 2005 in Boston (USA) and GridNets 2006 in San Jose (USA). All of these events were successful in attracting high-quality papers and wide international participation. From the first event through the third, we were known as the GridNets Workshop, affiliated with the IEEE BroadNets conference series. For this, our fourth event, we will convene our first meeting as a conference, and in Europe. The proceedings will be published by ACM and will be available through the ACM Digital Library. The best papers will be considered for publication in a special section of Elsevier Future Generation Computer Systems (FGCS) - The International Journal of Grid Computing: Theory, Methods and Application.
Grid developers and practitioners are increasingly realising the importance of efficient network support. Entire classes of applications would benefit greatly from network-aware Grid middleware able to effectively manage the network resource in terms of scheduling, access and use. Conversely, the particular requirements of Grid applications provide stimulating drivers for challenging new research towards the development of Grid-aware networks.
Cooperation between Grid middleware and the network infrastructure, driven by a common control plane, is a key factor in effectively empowering the global Grid platform for the execution of network-intensive applications, which require massive data transfers, very fast and low-latency connections, and stable, guaranteed transmission rates. Large e-science projects, as well as industrial and engineering applications for data analysis, image processing, multimedia or visualisation, to name just a few, are awaiting efficient Grid network support. They would be boosted by a global Grid platform enabling end-to-end dynamic bandwidth allocation, broadband and low-latency access, interdomain access control, and network performance monitoring capabilities.
Scope
-----
* Network architectures and technologies for grids
* Integration of advanced optical networking technologies into the grid environment
* End-to-end lightpath provisioning software systems and emergent standards
* The network as a first-class grid resource: network resource information publication, brokering and co-scheduling with other Grid resources
* Interaction of the network with distributed data management systems
* Network monitoring, traffic characterisation and performance analysis
* Inter-layer interactions: optical layer with higher-layer protocols, integration among layers
* Experience with pre-production optical network infrastructures and exchange points
* Peer-to-peer network enhancements applied to the Grid
* Network support for wireless and ad hoc Grids
* Data replication and multicasting strategies, and novel data transport protocols
* Fault tolerance and self-healing networks
* Security and scalability issues when connecting a large number of sites within a virtual organization VPN
* Simulations
* New concepts and requirements which may fundamentally reshape the evolution of networks
Logical IP network service utilizing UCLP
FOR IMMEDIATE RELEASE: MONDAY, May 14, 2007
MANTICORE project Phase I: The specification of a logical IP network service utilizing UCLP
HEAnet and the i2CAT Foundation have started a new collaboration framework around the recently released UCLPv2 system, in order to identify and advance new services and network architectures.
MANTICORE (Making APN Network Topologies on Internet COREs) is a project proposal to define the guidelines and specifications of a logical IP network, providing a Web Services-based system that offers APN (Articulated Private Networks), LR-WS (Logical Router Web Service) and Peering Web Services to end users, allowing them to create their own logical IP networks across the physical network. The project consists of two phases. The first phase will focus on defining the specifications of what is understood as a logical IP network Web Service with APN and LR-WS functionality under the scope of UCLPv2. These initial ideas will be peer reviewed by external reviewers, and there will be a tentative check on real IP equipment. During the second phase, the architecture and specifications defined in the first phase will be rigorously implemented.
MANTICORE extends the existing User Controlled LightPath (UCLPv2) technology by adding the ability to manage logical IP networks, and the equipment that supports them, by means of Web Services. This opens up the possibility of configuring end-to-end logical IP networks - routers as well as links - independently of the underlying network topology. The project aims to define its requirements on an open basis, so that they can later be implemented on any manufacturer's devices. Major vendors will therefore be invited to participate.
The LR-WS and Peering-WS specifications cover managing the physical devices and presenting an abstract logical router to the UCLPv2 middleware. Initially, these interfaces will use an XML API and NETCONF to control both the logical devices and the physical host equipment. The objective is to provide an interface for devices that support XML APIs.
The second component involved in the study is the architecture of the IP Network-WS, which represents the IP services offered by the UCLPv2 middleware to the end user. This component represents a device-independent IP service and its associated configurations, and exposes abstract IP network services to end users, hiding equipment-specific and topology-specific details.
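To make the abstraction more concrete, here is a minimal Python sketch of the kind of interface a Logical Router Web Service might expose: a device-independent object that emits a NETCONF-style edit-config message for one IP interface. The class name, method and XML schema are hypothetical illustrations, not the MANTICORE specification.

# A minimal sketch of the kind of abstraction MANTICORE describes: a Logical
# Router Web Service (LR-WS) that hides the physical device behind a simple
# interface and talks to it with a NETCONF-style <edit-config> message. The
# class, method names and XML elements are hypothetical, not the project's
# actual specification.
import xml.etree.ElementTree as ET

class LogicalRouterWS:
    """Hypothetical device-independent view of one logical router."""

    def __init__(self, router_id: str):
        self.router_id = router_id

    def build_interface_config(self, ifname: str, address: str) -> str:
        """Return a NETCONF-like edit-config payload for one IP interface."""
        rpc = ET.Element("rpc", {"message-id": "101"})
        edit = ET.SubElement(rpc, "edit-config")
        ET.SubElement(edit, "target").append(ET.Element("candidate"))
        config = ET.SubElement(edit, "config")
        router = ET.SubElement(config, "logical-router", {"id": self.router_id})
        iface = ET.SubElement(router, "interface", {"name": ifname})
        ET.SubElement(iface, "ip-address").text = address
        return ET.tostring(rpc, encoding="unicode")

# End users would compose several such logical routers and links into their
# own logical IP network (an APN), independent of the underlying topology.
print(LogicalRouterWS("lr-42").build_interface_config("ge-0/0/1", "10.0.0.1/30"))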
MANTICORE's use-case definition and requirements gathering (the first phase of the project) is underway and will be evaluated by international researchers with expertise in new architectures based on Logical Router and Peering Web Service functionality. This assessment will validate the completeness of the project specifications defined during the first phase.
Before the second phase of the project starts, a number of organisations will be contacted to gauge their interest in participating in the implementation work, during which a lab for experimental testing will be set up at, for instance, HEAnet or i2CAT premises. If you are interested in actively taking part in this second phase, please let us know.
About HEAnet
HEAnet is a world-class provider of high-quality Internet services to Ireland's Universities, Institutes of Technology, and the research and educational community, including primary and post-primary schools. HEAnet uses technology that places Ireland amongst the leading countries of the world.
HEAnet plays an essential role in facilitating leading edge research involving national and international collaborations between Irish researchers and students and their colleagues across the globe. It also enables leading researchers to live in Ireland and develop their ideas here.
About i2CAT
i2CAT is a non-profit Foundation aimed at fostering research and innovation in advanced Internet technology. Based in Barcelona, Spain, i2CAT promotes the deployment of services and broadband applications from private and public research companies supporting the Catalunya region. The i2CAT model aims to make Internet research and innovation accessible to the whole of society through collaboration between the public sector, businesses and research groups within universities and the educational world. i2CAT is one of the original developers of UCLPv2. (www.i2cat.net)
Contact People
HEAnet: Victor Reijs (victor.reijs@heanet.ie)
i2CAT: Sergi Figuerola (sergi.figuerola@i2cat.net)
Democracy, Internet and Broadband Last Mile Networks - Worth reading
[Thanks to Hud Croasdale for this pointer. Definitely worth reading regardless of your political orientation-- BSA]
http://www.time.com/time/nation/article/0,8599,1622015,00.html
Wednesday, May. 16, 2007
Book Excerpt: The Assault on Reason
By Al Gore
Not long before our nation launched the invasion of Iraq, our longest-serving Senator, Robert Byrd of West Virginia, stood on the Senate floor and said: "This chamber is, for the most part, silent—ominously, dreadfully silent. There is no debate, no discussion, no attempt to lay out for the nation the pros and cons of this particular war. There is nothing. We stand passively mute in the United States Senate."
Why was the Senate silent?
In describing the empty chamber the way he did, Byrd invited a specific version of the same general question millions of us have been asking: "Why do reason, logic and truth seem to play a sharply diminished role in the way America now makes important decisions?" The persistent and sustained reliance on falsehoods as the basis of policy, even in the face of massive and well-understood evidence to the contrary, seems to many Americans to have reached levels that were previously unimaginable.
A large and growing number of Americans are asking out loud: "What has happened to our country?" People are trying to figure out what has gone wrong in our democracy, and how we can fix it.
To take another example, for the first time in American history, the Executive Branch of our government has not only condoned but actively promoted the treatment of captives in wartime that clearly involves torture, thus overturning a prohibition established by General George Washington during the Revolutionary War.
It is too easy—and too partisan—to simply place the blame on the policies of President George W. Bush. We are all responsible for the decisions our country makes. We have a Congress. We have an independent judiciary. We have checks and balances. We are a nation of laws. We have free speech. We have a free press. Have they all failed us? Why has America's public discourse become less focused and clear, less reasoned? Faith in the power of reason—the belief that free citizens can govern themselves wisely and fairly by resorting to logical debate on the basis of the best evidence available, instead of raw power—remains the central premise of American democracy. This premise is now under assault.
American democracy is now in danger—not from any one set of ideas, but from unprecedented changes in the environment within which ideas either live and spread, or wither and die. I do not mean the physical environment; I mean what is called the public sphere, or the marketplace of ideas.
It is simply no longer possible to ignore the strangeness of our public discourse. I know I am not alone in feeling that something has gone fundamentally wrong. In 2001, I had hoped it was an aberration when polls showed that three-quarters of Americans believed that Saddam Hussein was responsible for attacking us on Sept. 11. More than five years later, however, nearly half of the American public still believes Saddam was connected to the attack.
At first I thought the exhaustive, nonstop coverage of the O.J. Simpson trial was just an unfortunate excess—an unwelcome departure from the normal good sense and judgment of our television news media. Now we know that it was merely an early example of a new pattern of serial obsessions that periodically take over the airwaves for weeks at a time: the Michael Jackson trial and the Robert Blake trial, the Laci Peterson tragedy and the Chandra Levy tragedy, Britney and KFed, Lindsay and Paris and Nicole.
While American television watchers were collectively devoting 100 million hours of their lives each week to these and other similar stories, our nation was in the process of more quietly making what future historians will certainly describe as a series of catastrophically mistaken decisions on issues of war and peace, the global climate and human survival, freedom and barbarity, justice and fairness. For example, hardly anyone now disagrees that the choice to invade Iraq was a grievous mistake. Yet, incredibly, all of the evidence and arguments necessary to have made the right decision were available at the time and in hindsight are glaringly obvious.
Those of us who have served in the U.S. Senate and watched it change over time could volunteer a response to Senator Byrd's incisive description of the Senate prior to the invasion: The chamber was empty because the Senators were somewhere else. Many of them were at fund-raising events they now feel compelled to attend almost constantly in order to collect money—much of it from special interests—to buy 30-second TV commercials for their next re-election campaign. The Senate was silent because Senators don't feel that what they say on the floor of the Senate really matters that much anymore—not to the other Senators, who are almost never present when their colleagues speak, and certainly not to the voters, because the news media seldom report on Senate speeches anymore.
Our Founders' faith in the viability of representative democracy rested on their trust in the wisdom of a well-informed citizenry, their ingenious design for checks and balances, and their belief that the rule of reason is the natural sovereign of a free people. The Founders took great care to protect the openness of the marketplace of ideas so that knowledge could flow freely. Thus they not only protected freedom of assembly, they made a special point—in the First Amendment—of protecting the freedom of the printing press. And yet today, almost 45 years have passed since the majority of Americans received their news and information from the printed word. Newspapers are hemorrhaging readers. Reading itself is in decline. The Republic of Letters has been invaded and occupied by the empire of television.
Radio, the Internet, movies, cell phones, iPods, computers, instant messaging, video games and personal digital assistants all now vie for our attention—but it is television that still dominates the flow of information. According to an authoritative global study, Americans now watch television an average of 4 hours and 35 minutes every day—90 minutes more than the world average. When you assume eight hours of work a day, six to eight hours of sleep and a couple of hours to bathe, dress, eat and commute, that is almost three-quarters of all the discretionary time the average American has.
In the world of television, the massive flows of information are largely in only one direction, which makes it virtually impossible for individuals to take part in what passes for a national conversation. Individuals receive, but they cannot send. They hear, but they do not speak. The "well-informed citizenry" is in danger of becoming the "well-amused audience." Moreover, the high capital investment required for the ownership and operation of a television station and the centralized nature of broadcast, cable and satellite networks have led to the increasing concentration of ownership by an ever smaller number of larger corporations that now effectively control the majority of television programming in America.
In practice, what television's dominance has come to mean is that the inherent value of political propositions put forward by candidates is now largely irrelevant compared with the image-based ad campaigns they use to shape the perceptions of voters. The high cost of these commercials has radically increased the role of money in politics—and the influence of those who contribute it. That is why campaign finance reform, however well drafted, often misses the main point: so long as the dominant means of engaging in political dialogue is through purchasing expensive television advertising, money will continue in one way or another to dominate American politics. And as a result, ideas will continue to play a diminished role. That is also why the House and Senate campaign committees in both parties now search for candidates who are multimillionaires and can buy the ads with their own personal resources.
When I first ran for Congress in 1976, I never took a poll during the entire campaign. Eight years later, however, when I ran statewide for the U.S. Senate, I did take polls and like most statewide candidates relied more heavily on electronic advertising to deliver my message. I vividly remember a turning point in that Senate campaign when my opponent, a fine public servant named Victor Ashe who has since become a close friend, was narrowing the lead I had in the polls. After a detailed review of all the polling information and careful testing of potential TV commercials, the anticipated response from my opponent's campaign and the planned response to the response, my advisers made a recommendation and prediction that surprised me with its specificity: "If you run this ad at this many 'points' [a measure of the size of the advertising buy], and if Ashe responds as we anticipate, and then we purchase this many points to air our response to his response, the net result after three weeks will be an increase of 8.5% in your lead in the polls."
I authorized the plan and was astonished when three weeks later my lead had increased by exactly 8.5%. Though pleased, of course, for my own campaign, I had a sense of foreboding for what this revealed about our democracy. Clearly, at least to some degree, the "consent of the governed" was becoming a commodity to be purchased by the highest bidder. To the extent that money and the clever use of electronic mass media could be used to manipulate the outcome of elections, the role of reason began to diminish.
As a college student, I wrote my senior thesis on the impact of television on the balance of power among the three branches of government. In the study, I pointed out the growing importance of visual rhetoric and body language over logic and reason. There are countless examples of this, but perhaps understandably, the first one that comes to mind is from the 2000 campaign, long before the Supreme Court decision and the hanging chads, when the controversy over my sighs in the first debate with George W. Bush created an impression on television that for many viewers outweighed whatever positive benefits I might have otherwise gained in the verbal combat of ideas and substance. A lot of good that senior thesis did me.
The potential for manipulating mass opinions and feelings initially discovered by commercial advertisers is now being even more aggressively exploited by a new generation of media Machiavellis. The combination of ever more sophisticated public opinion sampling techniques and the increasing use of powerful computers to parse and subdivide the American people according to "psychographic" categories that identify their susceptibility to individually tailored appeals has further magnified the power of propagandistic electronic messaging that has created a harsh new reality for the functioning of our democracy.
As a result, our democracy is in danger of being hollowed out. In order to reclaim our birthright, we Americans must resolve to repair the systemic decay of the public forum. We must create new ways to engage in a genuine and not manipulative conversation about our future. We must stop tolerating the rejection and distortion of science. We must insist on an end to the cynical use of pseudo-studies known to be false for the purpose of intentionally clouding the public's ability to discern the truth. Americans in both parties should insist on the re-establishment of respect for the rule of reason.
And what if an individual citizen or group of citizens wants to enter the public debate by expressing their views on television? Since they cannot simply join the conversation, some of them have resorted to raising money in order to buy 30 seconds in which to express their opinion. But too often they are not allowed to do even that. MoveOn.org tried to buy an ad for the 2004 Super Bowl broadcast to express opposition to Bush's economic policy, which was then being debated by Congress. CBS told MoveOn that "issue advocacy" was not permissible. Then, CBS, having refused the MoveOn ad, began running advertisements by the White House in favor of the president's controversial proposal. So MoveOn complained, and the White House ad was temporarily removed. By temporarily, I mean it was removed until the White House complained, and CBS immediately put the ad back on, yet still refused to present the MoveOn ad.
To understand the final reason why the news marketplace of ideas dominated by television is so different from the one that emerged in the world dominated by the printing press, it is important to distinguish the quality of vividness experienced by television viewers from the "vividness" experienced by readers. Marshall McLuhan's description of television as a "cool" medium—as opposed to the "hot" medium of print—was hard for me to understand when I read it 40 years ago, because the source of "heat" in his metaphor is the mental work required in the alchemy of reading. But McLuhan was almost alone in recognizing that the passivity associated with watching television is at the expense of activity in parts of the brain associated with abstract thought, logic, and the reasoning process. Any new dominant communications medium leads to a new information ecology in society that inevitably changes the way ideas, feelings, wealth, power and influence are distributed and the way collective decisions are made.
As a young lawyer giving his first significant public speech at the age of 28, Abraham Lincoln warned that a persistent period of dysfunction and unresponsiveness by government could alienate the American people and that "the strongest bulwark of any government, and particularly of those constituted like ours, may effectively be broken down and destroyed—I mean the attachment of the people." Many Americans now feel that our government is unresponsive and that no one in power listens to or cares what they think. They feel disconnected from democracy. They feel that one vote makes no difference, and that they, as individuals, have no practical means of participating in America's self-government. Unfortunately, they are not entirely wrong. Voters are often viewed mainly as targets for easy manipulation by those seeking their "consent" to exercise power. By using focus groups and elaborate polling techniques, those who design these messages are able to derive the only information they're interested in receiving from citizens—feedback useful in fine-tuning their efforts at manipulation. Over time, the lack of authenticity becomes obvious and takes its toll in the form of cynicism and alienation. And the more Americans disconnect from the democratic process, the less legitimate it becomes.
Many young Americans now seem to feel that the jury is out on whether American democracy actually works or not. We have created a wealthy society with tens of millions of talented, resourceful individuals who play virtually no role whatsoever as citizens. Bringing these people in—with their networks of influence, their knowledge, and their resources—is the key to creating the capacity for shared intelligence that we need to solve our problems.
Unfortunately, the legacy of the 20th century's ideologically driven bloodbaths has included a new cynicism about reason itself—because reason was so easily used by propagandists to disguise their impulse to power by cloaking it in clever and seductive intellectual formulations. When people don't have an opportunity to interact on equal terms and test the validity of what they're being "taught" in the light of their own experience and robust, shared dialogue, they naturally begin to resist the assumption that the experts know best.
So the remedy for what ails our democracy is not simply better education (as important as that is) or civic education (as important as that can be), but the re-establishment of a genuine democratic discourse in which individuals can participate in a meaningful way—a conversation of democracy in which meritorious ideas and opinions from individuals do, in fact, evoke a meaningful response.
Fortunately, the Internet has the potential to revitalize the role played by the people in our constitutional framework. It has extremely low entry barriers for individuals. It is the most interactive medium in history and the one with the greatest potential for connecting individuals to one another and to a universe of knowledge. It's a platform for pursuing the truth, and the decentralized creation and distribution of ideas, in the same way that markets are a decentralized mechanism for the creation and distribution of goods and services. It's a platform, in other words, for reason. But the Internet must be developed and protected, in the same way we develop and protect markets—through the establishment of fair rules of engagement and the exercise of the rule of law. The same ferocity that our Founders devoted to protect the freedom and independence of the press is now appropriate for our defense of the freedom of the Internet. The stakes are the same: the survival of our Republic. We must ensure that the Internet remains open and accessible to all citizens without any limitation on the ability of individuals to choose the content they wish regardless of the Internet service provider they use to connect to the Web. We cannot take this future for granted. We must be prepared to fight for it, because of the threat of corporate consolidation and control over the Internet marketplace of ideas.
The danger arises because there is, in most markets, a very small number of broadband network operators. These operators have the structural capacity to determine the way in which information is transmitted over the Internet and the speed with which it is delivered. And the present Internet network operators—principally large telephone and cable companies—have an economic incentive to extend their control over the physical infrastructure of the network to leverage control of Internet content. If they went about it in the wrong way, these companies could institute changes that have the effect of limiting the free flow of information over the Internet in a number of troubling ways.
The democratization of knowledge by the print medium brought the Enlightenment. Now, broadband interconnection is supporting decentralized processes that reinvigorate democracy. We can see it happening before our eyes: As a society, we are getting smarter. Networked democracy is taking hold. You can feel it. We the people—as Lincoln put it, "even we here"—are collectively still the key to the survival of America's democracy.
More on ABC streaming HD video - "Move" over Joost and YouTube
[Some excerpts from a Forbes magazine article. Move is another nail in the coffin for traditional video delivery systems like cable TV and IPTV. Move is the technology being used by ABC to deliver its HD content. Thanks to Quentin Hardy, the author of this article, for the pointer -- BSA]
http://members.forbes.com/forbes/2007/0521/072.html
A handful of show producers are trading in the Web's old flavor of video delivery for newer technologies that deliver clear and smooth streamed images.
A company called Move Networks in American Fork, Utah, is at the forefront of this next evolution. Move does for video what voice over Internet networks did for telephone calls: it breaks the video up into bits and efficiently reorganizes them over the network, so there is no need for the special computer servers and dedicated transmission lines required by streams using Flash.
Move executives say they handle more than a million full episode streams a week, and viewership has doubled every month. Move says it is now delivering as much as 200 terabytes, or 200 trillion bytes, of streams per day.
Move gets bits from the closest storage cache (similar to technology from Web video giant Akamai) and brings them back to the screen at the best streaming rate based on the network's traffic load. It uses standard Internet protocols, which means it can take advantage of the many server farms around the world that offer up Web pages. Both ABC and Fox say there is nothing else that can stream at this scale.
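The adaptive, cache-aware approach described above can be sketched roughly as follows in Python: pick the nearest cache, then choose each segment's bitrate from the throughput just measured. The cache hosts, bitrate ladder and URL scheme are invented for illustration; this is not Move Networks' actual protocol.

# A rough sketch of chunk-based, cache-aware streaming: fetch small video
# segments from the nearest cache and pick each segment's bitrate from the
# throughput just observed. All hosts, paths and numbers are hypothetical.
import time
import urllib.request

CACHES = ["https://cache-east.example.com", "https://cache-west.example.com"]
BITRATES_KBPS = [500, 1200, 2500, 5000]  # hypothetical quality ladder

def nearest_cache() -> str:
    """Pick the cache with the lowest measured round-trip time."""
    def rtt(host: str) -> float:
        start = time.monotonic()
        try:
            urllib.request.urlopen(host + "/ping", timeout=2).read()
        except OSError:
            return float("inf")
        return time.monotonic() - start
    return min(CACHES, key=rtt)

def choose_bitrate(measured_kbps: float) -> int:
    """Highest rung of the ladder that fits the measured throughput."""
    fitting = [b for b in BITRATES_KBPS if b <= measured_kbps]
    return max(fitting) if fitting else BITRATES_KBPS[0]

def fetch_segment(cache: str, index: int, bitrate: int) -> tuple[bytes, float]:
    """Download one segment and return (data, measured throughput in kbps)."""
    url = f"{cache}/video/segment-{index}-{bitrate}k.bin"
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=10).read()
    elapsed = max(time.monotonic() - start, 1e-6)
    return data, (len(data) * 8 / 1000) / elapsed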
ABC to offer primetime TV shows in HighDefinition over Internet
[This confirms the growing trend of studios and networks moving their content over to the Internet. The annoying thing is that it is only available in the US, unless you set up a proxy server, and that it is streaming rather than a download. From a DIGG posting -- BSA]
http://www.multichannel.com/article/CA6442478.html
Full episodes of some popular ABC primetime shows will debut this summer in HD on the Web.
Disney-ABC Television Group claimed to be the first major television programmer to stream HD video online, at 1280-by-720 resolution. In a test the broadcaster expects to launch in early July, ABC.com’s HD channel will feature a limited amount of content from such series as Lost, Desperate Housewives, Grey's Anatomy and Ugly Betty. When ABC launches its new season in September, the site will offer a “more robust” HD lineup.
"This is all about innovating and creating ‘what's next’ to give consumers the best experience as they watch our content, regardless of viewing platform," Disney-ABC Television Group executive vice president of digital media Albert Cheng said in a prepared statement.
In addition to HD content, ABC.com's new broadband-video player will include national news and local content. The player will be “geo-targeted,” meaning that it can identify the area from which an Internet user is accessing the site and serve up corresponding content and ads.
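Geo-targeting of this kind can be illustrated with a small Python sketch that maps a viewer's IP address to a market and falls back to a national feed; the IP ranges and market names are made up for illustration.

# A small sketch of the "geo-targeting" idea described above: map the
# viewer's IP address to a market and serve matching local content and ads.
# The IP ranges and market names are invented for illustration.
import ipaddress

MARKETS = {
    ipaddress.ip_network("203.0.113.0/24"): "new-york",
    ipaddress.ip_network("198.51.100.0/24"): "los-angeles",
}

def market_for(client_ip: str) -> str:
    addr = ipaddress.ip_address(client_ip)
    for network, market in MARKETS.items():
        if addr in network:
            return market
    return "national"  # fall back to the national feed

# e.g. choose the ad rotation and local-news module for this viewer
print(market_for("203.0.113.45"))  # -> "new-york"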
Why Web 2.0 needs Grid Computing
[Excellent article in Grid Today. I fully agree with Paul Strong that traditional grids have long been associated with academic high-performance computing, with a relatively small number of jobs on a limited number of resources. Web 2.0 and web services bring in a whole new range of complexity and scaling issues, where there may be hundreds of millions of services being invoked across thousands of distributed servers. Some excerpts from Grid Today --BSA]
If one were asked to cite companies whose datacenters epitomize the idea of Grid 2.0, he could do a lot worse than to point to any of the Internet giants that have taken advantage of grid technologies to forever transform the ways in which we shop, access information, communicate, and just about every other aspect of our lives. Companies like Google, Yahoo!, Amazon and eBay set the standard because they are using their massive, distributed computing infrastructures not only to host applications and store data, but also to host countless services, both internal and external, and handle hundreds of millions to billions of transactions every single day -- in real time.
Paul Strong takes on just this topic, discussing grid computing's expansion from being solely an HPC technology to being the basis for the distributed platforms necessary to make the Web 2.0 world run.
Historically, the term "grid computing" has been intimately associated with the high-performance and technical computing community. The term was, of course, coined by some leading members of that community and, unsurprisingly, to many it has been almost completely defined within the context of this specific type of use. Yet when you look closely at what grid computing actually is predicated upon, it becomes apparent that the notion of grid computing is far more universally applicable than perhaps many people think. Indeed, one could make the assertion that grids are the integrated platforms for all network-distributed applications or services, whether they are computationally or transactionally intensive.
Grids are about leveraging the network to scale: to solve problems or achieve throughput in a way that is just not possible using individual computers -- at least not computers in the traditional sense. It doesn't matter what your class of workload is: you can do so much more on a network. Your ability to scale is limited only by the number of resources you and your collaborators can connect together. Scaling is often achieved through division of workload and replication/duplication, which potentially yields the very desirable side effects of great resilience and, at least for appropriately implemented transactional applications, almost continuous availability.
In the Web 2.0 world, one has to use the network to scale. Individual servers cannot handle the transaction rates required of business on the Internet. The volume of data stored and manipulated by the likes of Google, eBay, Yahoo! and Amazon cannot fit into single instances or clusters of traditional databases or file systems. You have to do something different. Similarly, manipulating and presenting this data, or the services based on it, to hundreds of millions of consumers cannot be achieved without harnessing tens of thousands of computers, load balancers, firewalls and so forth. All of these applications or services treat the network as the platform, hence the assertion that infrastructures, such as eBay’s, are, in fact, grids.
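The scale-out pattern described here - divide the workload by key and replicate each item across a few commodity nodes - can be illustrated with a short Python sketch. The node count and replication factor are arbitrary, and real systems at this scale use far more elaborate schemes (consistent hashing, rebalancing, and so on).

# A minimal illustration of dividing workload across many commodity nodes by
# hashing each key, and replicating each item onto a few nodes so the loss of
# one machine does not lose data or availability. Node names and the
# replication factor are arbitrary placeholders.
import hashlib

NODES = [f"server-{i:04d}" for i in range(10_000)]
REPLICAS = 3

def nodes_for_key(key: str) -> list[str]:
    """Deterministically map a key to REPLICAS distinct nodes."""
    digest = int(hashlib.sha256(key.encode("utf-8")).hexdigest(), 16)
    start = digest % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

# Every item (a user session, a listing, an index shard) lands on a small,
# predictable subset of the 10,000-node "platform".
print(nodes_for_key("item:1234567"))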
While these Internet behemoths are at the extreme end of things, almost all application developers and datacenters are using similar techniques to scale their services. SOA and the like are just the latest milestones in the long journey from monolithic, server-centric applications to ever more fine-grained, agile, network-distributed services. As more and more instances of this class of application are deployed, and deployed on typically larger and larger numbers of small commodity components (small servers, cheap disks, etc.) instead of large multi-processor servers or complex storage arrays, the platform moves inexorably toward being the network itself, rather than the server. In fact, for most datacenters, it moves toward being a grid.
Just like a traditional operating system, [the grid] maps workload onto resources in line with policy. Unlike a traditional operating system, however, the resources are no longer just processors, memory and IO devices. Instead, they are servers, operating system instances, virtual machine monitors, disk arrays, load balancers, switches, database instances, application servers and so forth. The workload has shifted from being relatively simple binaries and executables to being distributed business services or complex simulations, and the policies are moving from simple scheduling priorities and the like to service-level objectives. This meta-operating system is realized today by various pieces of software, ranging from what we think of as traditional grid middleware to systems- and enterprise-management software.
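A toy Python sketch of this "meta-operating system" idea follows: workloads are matched to resources according to policy, where a resource is a whole server rather than a CPU and the policy is a service-level objective rather than a scheduling priority. All names and numbers are illustrative, not any particular middleware's API.

# A toy policy-based placement loop: match a workload's resource needs and
# service-level objective against coarse-grained resources (whole servers).
# Everything here is a simplified illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    name: str
    free_cpu_cores: int
    free_memory_gb: int
    typical_latency_ms: int  # observed service latency from this resource

@dataclass
class Workload:
    name: str
    cpu_cores: int
    memory_gb: int
    latency_slo_ms: int  # the service-level objective for this service

def place(workload: Workload, resources: list[Resource]) -> Optional[Resource]:
    """Pick a resource that satisfies the workload's needs and its SLO."""
    candidates = [
        r for r in resources
        if r.free_cpu_cores >= workload.cpu_cores
        and r.free_memory_gb >= workload.memory_gb
        and r.typical_latency_ms <= workload.latency_slo_ms
    ]
    if not candidates:
        return None  # a real grid would queue the work or provision capacity
    # prefer the resource with the most headroom left
    chosen = max(candidates, key=lambda r: (r.free_cpu_cores, r.free_memory_gb))
    chosen.free_cpu_cores -= workload.cpu_cores
    chosen.free_memory_gb -= workload.memory_gb
    return chosen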
The trouble with this massive scale-out approach is that management becomes a serious problem. What do you do when you find yourself managing 15,000-plus servers or an application with 15,000 instances? These environments become exponentially more complex as you add new elements. One new component or class of component might mean tens or hundreds of new relationships or interactions with the thousands of elements already in the infrastructure. Indeed, a 10,000-server infrastructure might actually comprise hundreds of thousands of managed components and relationships. Components can be as simple as servers or as complex as business workflow, and the relationships between all of these must be understood.
This is becoming the next big challenge, especially for commercial organizations. They must be able to map the value of the services delivered to the underlying platform elements and their costs. They must be able to detect infrastructure failures and understand how these impact the business services, or capture breaches of service-level objectives for business processes and trace these to problems within individual software or hardware components. They must be agile, able to make changes in a small part of their infrastructure and to predict or prevent any negative impact on the rest. With massively distributed, shared platforms -- i.e., grids -- this is extremely difficult. Clearly, the organizations that feel this pain the most are probably those with the greatest scale, such as the big Internet businesses.
So, where grid was once perhaps thought of as the context for one class of application, one can reasonably assert that it is, in fact, the universal context for network distributed applications or services. Grids are, in fact, a general-purpose platform and the context for commercial workloads within the enterprise. And where once the focus was almost exclusively on delivering scale, as the techniques and technologies that enable this mature, the focus has to shift to managing the resulting, enormously complex grid systems -- both the services and the platforms on which they run.
Engineering Virtual Organizations - NSF CyberInfrastructure Program
[NSF has just launched an exciting new program called Engineering Virtual Organizations. Those who are interested in applying to CANARIE's Network Enabled Platforms program should read this solicitation closely: it is almost identical in its requirements to the CANARIE program. Most importantly, Canadian research teams are eligible to apply for funding to the CANARIE program to join or participate in any US or European virtual organization as described in the NSF solicitation. -- BSA]
Engineering Virtual Organization Grants (EVO)
Program Solicitation
NSF 07-558
Engineering Virtual Organization (EVO) Grants
Synopsis of Program:
The primary purpose of this solicitation is to promote the development of Virtual Organizations (VO's) for the engineering community (EVOs). A VO is created by a group of individuals whose members and resources may be dispersed globally, yet who function as a coherent unit through the use of cyberinfrastructure (CI). EVOs will extend beyond small collaborations and individual departments or institutions to encompass wide-ranging, geographically dispersed activities and groups. This approach has the potential to revolutionize the conduct of science and engineering research, education, and innovation. These systems provide shared access to centralized or distributed resources, such as community-specific sets of tools, applications, data, and sensors, and experimental operations, often in real time.
With the access to enabling tools and services, self-organizing communities can create VOs to facilitate scientific workflows; collaborate on experiments; share information and knowledge; remotely operate instrumentation; run numerical simulations using shared computing resources; dynamically acquire, archive, e-publish, access, mine, analyze, and visualize data; develop new computational models; and deliver unique learning, workforce-development, and innovation tools. Most importantly, each VO design can originate within a community and be explicitly tailored to meet the needs of that specific community. At the same time, to exploit the full power of cyberinfrastructure for a VO's needs, research domain experts need to collaborate with CI professionals who have expertise in algorithm development, systems operations, and application development.
This program solicitation requests proposals for two-year seed awards to establish EVOs. Proposals must address the EVO organizing principle, structure, shared community resources, and research and learning goals; a vision for organizing the community, including international partners; a vision for preparing the CI components needed to enable those goals; a plan to obtain and document user requirements formally; and a project management plan for developing both a prototype implementation and a conceptual design of a full implementation. These items will be used as criteria for evaluation along with the standard NSF criteria of Intellectual Merit and Broader Impacts. Within the award size constraints, the prototype implementation should provide proof of concept with a limited number of its potential CI features. Successful proposals should expect to demonstrate the benefits of a fully functional EVO and how it will catalyze both large and small connections, circumventing the global limitations of geography and time zones.
I. INTRODUCTION
Cyberinfrastructure (CI) is having a transformative effect on engineering practice, science and education. The National Science Foundation (NSF) has been active in developing CI and advancing its use. Numerous resources are available that describe these activities:
* Report of the NSF Blue-Ribbon Panel on Cyberinfrastructure
* NSF Cyberinfrastructure Council Vision document
* NSF-sponsored workshops, several focused on engineering CI
Among its other investments in CI, NSF has catalyzed the creation of VOs as a key means of aiding access to research resources, thus advancing science and its application. Researchers working at the frontiers of knowledge and innovation increasingly require access to shared, world-class community resources spanning data collections, high-performance computing equipment, advanced simulation tools, sophisticated analysis and visualization facilities, collaborative tools, experimental facilities and field equipment, distributed instrumentation, sensor networks and arrays, mobile research platforms, and digital learning materials. With an end-to-end system, VOs can integrate shared community resources, including international resources, with an interoperable suite of software and middleware services and tools and high-performance networks. This use of CI can then create powerful transformative and broadly accessible pathways for scientific and engineering VOs to accelerate research outcomes into knowledge, products, services, and new learning opportunities.
Initial engineering-focused VOs (EVOs) have demonstrated the potential for this approach. Examples of EVOs involving significant engineering communities are the George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES), the Collaborative Large-scale Engineering Analysis Network for Environmental Research (now called the WATERS network), the National Nanofabrication Users Network, and the Network for Computational Nanotechnology and its nanoHUB.org portal.
Other engineering communities can benefit from extending this model: organizing as VOs; exploiting existing CI tools, rapidly putting them to use; and identifying new CI opportunities, needs, and tools to reach toward their immediate and grand-challenge goals. These activities must be driven by the needs of participating engineers and scientists, but collaboration with information scientists is vital to build in the full power of CI capabilities.
Creation of VOs by engineering communities will revolutionize how their research, technical collaborations, and engineering practices are developed and conducted. EVOs will accelerate both research and education by organizing and aiding shared access to community resources through a mix of governance principles and cyberinfrastructure.
II. PROGRAM DESCRIPTION
This program solicitation requests proposals for two-year seed awards with three key elements: (1) establishing an engineering virtual organization, (2) deploying its prototype EVO implementation, and (3) creating a conceptual design of its full implementation. Proposals are encouraged from engineering communities that can provide documentary evidence of strong community support and interest in developing an EVO enabled by CI, potentially including international participants. The CI conceptual design should draw upon: (1) articulated research and education goals of a research community to advance new frontiers, (2) advances made by other scientific and engineering fields in establishing and operating VOs and their associated CI, (3) commercially available CI tools and services, and (4) CI tools and services emerging from current federal investments.
Proposals must address the following topics:
*
EVO structure and justification: Vision and mission; organizing and governing structure; members and recruitment; end users; stakeholders; and shared community resources (e.g., experimental facilities, observatories, data collections), their associated service providers, and access / allocation methods. Identify frontier research and education goals of the EVO, including compelling research questions and the potential for broad participation. EVOs will extend beyond small collaborations and individual departments or institutions to encompass wide-ranging, geographically dispersed activities and groups.
*
[...]
Web services, grids and UCLP for control of synchrotron beam lines
[Excellent presentation at the recent EPICS meeting in Germany on the use of web services, UCLP and grids for controlling synchrotron beam lines and distributing the data to researchers across Canada and Australia. Thanks to Elder Mathias for this pointer -- BSA]
ftp://ftp.desy.de/pub/EPICS/meeting-2007/epics-Mar2007.pdf
Good example of using UCLP on the GEANT2 network in Europe
[An excellent use case of UCLP on the GEANT2 network in Europe has been prepared by engineers from i2Cat, who will present it at the upcoming TERENA Networking Conference. The paper is based on an example application that aims to demonstrate UCLP's use for National Research and Education Networks in Europe (which are connected to the GEANT2 network).
UCLP is often confused with various bandwidth-on-demand and bandwidth-reservation systems. Personally, I am not a believer in such traditional circuit-switched approaches to networking. Besides the inevitable call-blocking problems associated with such architectures, the cost of optical transponders is dropping dramatically, so it is much easier to provision several nailed-up parallel IP routed networks than to deal with the high OPEX costs of managing a circuit-switched environment.
UCLP is a provisioning and configuration tool that allows edge organizations and users to configure and provision their own IP networks and do their own direct point-to-point peering. The traditional network approach, whether in the commercial world or the R&E world, is to have a central, hierarchical, telco-like organization manage all the relationships between the edge-connected networks and organizations. UCLP, on the other hand, is an attempt to extend the Internet end-to-end principle to the physical layer. It is ideally suited for condominium wavelength or fiber networks where a multitude of networks and organizations co-own and co-manage a network infrastructure (see the sketch after the links below).
Thanks to Sergei Figuerola for this pointer -- BSA]
UCLP case study on GEANT2 network
(http://tnc2007.terena.org/programme/presentations/show.php?pres_id=99)
For open source downloading of UCLP: http://www.uclp.ca/index.php?option=com_content&task=view&id=52&Itemid=77
The community edition will be available soon: http://www.inocybe.ca/
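UCLP itself is exposed through web services, but the snippet below is not the UCLP API; it is a generic, hypothetical sketch of the pattern it enables, in which an edge organization composes a point-to-point lightpath out of network elements it co-owns by calling a provisioning service. All class names, endpoints and port identifiers are invented:

# Hypothetical sketch only: this is NOT the real UCLP API. It illustrates the
# general pattern of an edge organization composing a point-to-point lightpath
# from network resources it co-owns, via a provisioning web service.
from dataclasses import dataclass

@dataclass
class LightpathSegment:
    switch: str    # identifier of a co-owned optical switch or cross-connect
    in_port: str
    out_port: str

class ProvisioningClient:
    """Stand-in for a SOAP/REST client to a hypothetical provisioning web service."""
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def create_lightpath(self, name, segments):
        # A real client would issue web-service calls here; this sketch only logs the intent.
        for seg in segments:
            print(f"[{self.endpoint}] {name}: cross-connect "
                  f"{seg.switch} {seg.in_port} -> {seg.out_port}")
        return f"lightpath:{name}"

if __name__ == "__main__":
    client = ProvisioningClient("https://provisioning.example.org/ws")  # illustrative endpoint
    path_id = client.create_lightpath("dept-A-to-partner-B", [
        LightpathSegment("campus-oxc-1", "port-3", "port-7"),
        LightpathSegment("regional-oxc-4", "port-1", "port-9"),
    ])
    print("provisioned", path_id)

The point of the sketch is the inversion of control: the edge organization, not a central telco-like operator, decides how its co-owned switches are cross-connected.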
SOA, Web 2.0 and Open source for education and student services
[It is exciting to see that a number of organizations and universities are starting to recognize the power of open source development combined with SOA and Web 2.0 for new educational tools and student services.
Just as the participatory web is transforming business practices and customer relationships in the corporate world, we are now seeing the same technologies having transformative impacts on educational and back-office tools at schools and universities.
Of particular interest is the Kuali student service system, which will deliver a new-generation student system that will be developed through the Community Source process, delivered through service-oriented methodologies and technologies, and sustained by an international community of institutions and firms. Student systems are the most complex example of the various enterprise systems used by colleges and universities around the world. They are also closer to the core academic mission of these institutions than any other ‘administrative’ system; after all, students are their core business. Flexible student systems, combined with imaginative business policies and processes, can be a source of comparative advantage for an institution.
Imagine a student service system where modules and services can be distributed across multiple institutions yet appear as a seamless service integrated through a web portal using web-service workflow tools. Databases, courseware repositories and computational tasks can also be distributed across multiple institutions and linked together with workflow by either the instructor or the student (a small illustrative sketch follows the links below).
For more details on the Kuali project please see http://rit.mellon.org/projects/kuali-student-british-columbia/ks-rit-report-2007.doc/view
For other open source, web service educational projects please see http://os4ed.com/component/option,com_frontpage/Itemid,1/
also
http://www.miller-group.net/
--BSA]
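As a small illustration of the idea in the note above, the sketch below strings together three stand-in services, imagined as being hosted at different institutions, into one enrolment workflow. It is not based on the Kuali Student design, and every function and identifier is invented:

# Hypothetical sketch (not the Kuali Student architecture): composing student
# services hosted at different institutions into one workflow behind a portal.
# All functions below are invented stand-ins for web-service calls.

def fetch_transcript(student_id):
    # stand-in for a call to institution A's records service
    return {"student": student_id, "credits": 42}

def check_prerequisites(transcript, course):
    # stand-in for a call to institution B's course-catalogue service
    return transcript["credits"] >= 30

def register(student_id, course):
    # stand-in for a call to the home institution's registration service
    return f"{student_id} registered in {course}"

def enrolment_workflow(student_id, course):
    """A simple orchestration an instructor or student could assemble in a portal."""
    transcript = fetch_transcript(student_id)
    if not check_prerequisites(transcript, course):
        return f"{student_id} lacks prerequisites for {course}"
    return register(student_id, course)

if __name__ == "__main__":
    print(enrolment_workflow("s123456", "PHYS-301"))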
Commercial versions of UCLP available
[A couple of companies are now marketing commercial versions of UCLP. It is expected that other companies will be making announcements of commercial versions as well.
UCLP (User Controlled Lightpaths) is a provisioning and configuration tool that allows end users, such as enterprises or individual high-performance users, to set up and configure their own IP networks for applications such as remote peering and deploying private IP networks with virtual routers, switches, etc.
It is an ideal tool for an organization that controls and manages its own fiber network or condominium fiber network. It can be used to allow individual departments to configure their own LAN networks within a larger campus network, or to establish direct IP connectivity with an external network independent of the default connection through the campus border router.
Most versions of UCLP use web services and grid technology, so the network can be seen as an extension of the grid application or web service for virtualized or autonomous network-grid applications. -- BSA]
MRV is a supplier of optical line drivers, optical cross connects and WDM gear for organizations that have acquired their own fiber.
http://www.mrv.com/megavision http://www.mrv.com/product/MRV-NM-MVWEB/
ftp://ftp.mrv.com/pub/software/megavision/MegaVision_UCLP_Application.pdf
Inocybe is a network management software company in Montreal that also offers training, consultation and installation services for UCLP: www.inocybe.ca
Solana is a network management software company in Ottawa: www.uclpv2.com
Other UCLP resources
www.uclp.ca
www.uclpv2.ca
Citizen Science with Google Earth, mashups and web services for environmental applications
[Here is an excellent example of the power of mashups and web services, using tools like Google Earth, for environmental applications. Thanks to Richard Ackerman's blog -- BSA]
Richard Ackerman's blog: http://scilib.typepad.com/science_library_pad/2007/05/google_earth_an.html
Workshop
http://www.niees.ac.uk/events/GoogleEarth/
The recent emergence of new "geobrowsing" technologies such as Google Earth, Google Maps and NASA WorldWind presents exciting possibilities for environmental science. These tools allow the visualization of geospatial data in a dynamic, interactive environment on the user's desktop or on the Web. They are low-cost, easy-to-use alternatives to the more traditional heavyweight Geographical Information Systems (GIS) software applications. Critically, it is very easy for non-specialists to incorporate their own data into these visualization engines, allowing for the very easy exchange of geographic information. This exchange is facilitated by the adoption of common data formats and services: this workshop will introduce these standards, focussing particularly on the Open Geospatial Consortium's Web Map Service and the KML data format used in Google Earth and other systems. A key capability of these systems is their ability to visualize simultaneously diverse data sources from different data providers, revealing new information and knowledge that would otherwise have been hidden. Such "mashups" have been the focus of much recent attention in many fields that relate to geospatial data: this workshop will aim to establish the true usefulness of these technologies in environmental science.
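As a small illustration of how easily non-specialists can feed their own data into these geobrowsers, the snippet below writes a minimal KML file that Google Earth and other KML-aware tools can display; the station names and CO2 values are invented:

# Minimal KML generation: turn a few (invented) observation points into a file
# that Google Earth and other KML-aware geobrowsers can display.
observations = [
    ("Station A", -123.36, 48.43, "CO2: 384 ppm"),
    ("Station B",  -75.70, 45.42, "CO2: 381 ppm"),
]

placemarks = "\n".join(
    f"""  <Placemark>
    <name>{name}</name>
    <description>{desc}</description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>"""
    for name, lon, lat, desc in observations
)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
{placemarks}
</Document>
</kml>"""

with open("observations.kml", "w") as f:
    f.write(kml)
print("wrote observations.kml")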
Weather Forecasts Without Boundaries using Grids, P2P and Web services
[Excerpts from www.GridToday.com article -- BSA]
Grid Enables Weather Forecasts Without Boundaries
The results obtained from SIMDAT, a European research and development project, are increasingly in demand from European and international meteorological services and are likely to become acknowledged worldwide. SIMDAT Meteo is working to establish a Virtual Global Information System Centre (VGISC) for the national meteorological services of France, Germany and the United Kingdom based on grid technology to be used within the World Meteorological Organization Information System (WIS) to provide cost-effective and user-friendly services. VGISC offers a unique meteorological database integrating a variety of data and providing secure, reliable and convenient access via the Internet. It is targeted toward operational services and research in the domains of meteorology, hydrology and the environment.
The VGISC software developed by the SIMDAT Meteo project partners, led by the European Centre for Medium-Range Weather Forecasts, will offer meteorological communities worldwide immediate, secure and convenient access to various data and analysis services, as well as a user-friendly platform for storage of meteorological data. VGISC will thus enable the fast exchange of data for numerical weather forecasts, disaster management and research -- independent of national frontiers and beyond organizational boundaries.
The infrastructure of this new system will be based on a mesh network of peers and meteorological databases. Messages are interchanged using algorithms based on mobile telephony technologies and metadata synchronization on a journalized file system. The grid technology is based on Open Grid Services Architecture Data Access and Integration (OGSA-DAI), which is founded on Web service and Web technology concepts. In addition, standard protocols such as the Open Archives Initiative (OAI) are used to synchronize and integrate existing archives and databases, as well as to extend interoperability. Furthermore, VGISC will be a test bed for the ISO 19115 metadata standard by handling complex data in real time.
The SIMDAT project is Europe’s contribution to the infrastructure technology of the emerging WIS as the World Meteorological Organization (WMO) modernizes and enhances its long-standing Global Telecommunications System (GTS), an international network for exchanging mainly meteorological data and warnings in real time. In addition, the new system will provide access for all environmental communities worldwide, whereas GTS only allows access for the present national weather services of the member states.
The opportunities for the new VGISC technology are excellent, as VGISC is not only of interest within Europe: the national meteorological services of Australia, China, Japan and Korea and the Russian Federation’s National Oceanographic Centre have already deployed the SIMDAT software and are collaborating actively with the European partners. The software deployment is followed by an increasing number of meteorological centers, and new meteorological datasets from Asia, Australia, Europe and the United States are steadily being added to the portal.
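The article mentions the Open Archives Initiative protocol as one of the standards used to synchronize archives and databases. The snippet below is a generic OAI-PMH harvesting sketch, not SIMDAT or VGISC code, and the endpoint URL is a placeholder:

# Generic OAI-PMH harvesting sketch. OAI-PMH is the standard metadata protocol
# referred to above; the endpoint below is a placeholder, not a VGISC address.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

def list_record_titles(base_url, metadata_prefix="oai_dc"):
    """Fetch one page of records from an OAI-PMH repository and yield their titles."""
    query = urllib.parse.urlencode({"verb": "ListRecords",
                                    "metadataPrefix": metadata_prefix})
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        tree = ET.parse(resp)
    for record in tree.iterfind(".//oai:record", NS):
        title = record.find(".//dc:title", NS)
        if title is not None:
            yield title.text

if __name__ == "__main__":
    # Placeholder endpoint -- substitute a real OAI-PMH repository URL.
    for t in list_record_titles("https://example.org/oai"):
        print(t)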
Why research into the future of the Internet is critical
[I highly recommend taking a look at the YouTube video being distributed by the FTTH Council and also listening to the radio interview with Jon Crowcroft listed below. The FTTH Council video is ideal for politicians and policy makers, as it provides a very high-level perspective on the Internet and its current challenges. It offers compelling evidence of the coming "exaflood" of data that will hit the Internet, largely due to the distribution of video. They rightly argue that today's Internet is incapable of supporting this tsunami of data, particularly in the last mile, and that new network architectures and business models are required.
Jon Crowcroft's interview is quite interesting in that he claims that heat loading at data centers will make distribution of video through traditional client-server models impossible, and that peer-to-peer (P2P) will be the only practical way of distributing such content. As testament to that, there is an explosion of new companies gearing up to deliver video and other content via P2P, including Joost, Vudu and others. This is why many argue that the traditional telco NGN architecture with IPTV is doomed to failure. But P2P imposes its own sort of problems on today's Internet architectures, as witnessed by the attempts of many service providers to limit P2P traffic or ban it outright.
If P2P is indeed going to be the major mode of delivery of data, especially for video, then we need to explore new Internet architectures. As Van Jacobson has pointed out, too much network research treats the Internet as a traditional telecommunications medium of "channels" from A to B. As a result, Internet research, especially in the optical and network world, is largely about topology optimization, network layers, reliability, redundancy and so on.
Universities, to my mind, should be at the forefront of exploring ways to build and deploy a new Internet, not only for the research community but for the global community as well. But sadly many universities are also trying to restrict P2P traffic, and in some cases act as the snarling guard dog for the RIAA and MPAA. Students at universities are the early adopters of new technology (much more so than their ageing professors); rather than discouraging their behaviour, we should see them as an opportunity to test and validate new Internet architectures and services. See my presentation at Net@Edu for some thoughts on this topic.
P2P may fundamentally reshape our thinking about network architectures, for example by enabling end users to do their own traffic engineering and network optimization to reach the closest P2P node or transit exchange point (a minimal sketch of this idea follows below).
Thanks to Olivier Martin for the pointer to YouTube Video and Dewayne Hendricks for a multitude of other pointers -- BSA]
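As a minimal sketch of the end-user traffic engineering mentioned above, the snippet below picks the "closest" of several candidate peers by measuring TCP connection set-up time. The peer host names are invented, and a real P2P client would use richer metrics than a single probe:

# Illustrative sketch: pick the "closest" peer by a rough round-trip-time
# measurement, a simple form of end-user traffic engineering.
# The peer addresses are invented.
import socket
import time

def rtt(host, port=80, timeout=1.0):
    """Rough RTT estimate: time to open (and close) a TCP connection."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")   # unreachable peers sort last
    return time.monotonic() - start

def closest_peer(peers):
    return min(peers, key=rtt)

if __name__ == "__main__":
    candidate_peers = ["peer1.example.net", "peer2.example.net", "peer3.example.net"]
    print("closest peer:", closest_peer(candidate_peers))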
FTTH Council Video
http://www.youtube.com/watch?v=c4988qaCvvM
Jon Crowcroft Interview http://blogs.guardian.co.uk/podcasts/2007/04/science_weekly_for_april_30.html
A good article on the current challenges of P2P running on today's networks http://blogs.nmss.com/communications/2007/04/realworld_p2p_j.html
A very interesting read of why IPTV is doomed to failure: http://www.theregister.co.uk/2007/04/22/iptv_services/
Ohio University bans P2P
http://gizmodo.com/gadgets/home-entertainment/ohio-university-bans-all-p2p-activity-riaa-cackles-maniacally-255525.php
Van Jacobson's talk http://lists.canarie.ca/pipermail/news/2007/000374.html
How Universities can play a leading role in Next Generation Internet http://www.canarie.ca/canet4/library/recent/Net_Edu_Phoenix_Feb_5_2007.ppt
and http://www.canarie.ca/canet4/library/recent/Terena_Feb_22_2007.ppt
This Internet TV program is brought to you by ...
Joost, the Internet television service being developed by the founders of Skype, has lined up several blue-chip advertisers, including United Airlines, Microsoft, Sony Electronics and Unilever, as it prepares for its introduction.
NY times article on Vudu and its P2P plans http://www.nytimes.com/2007/04/26/business/media/26adco.html?ex=1335240000&en=0f79f30914fff112&ei=5090&partner=rssuserland&emc=rss
Wednesday, May 2, 2007
The futility of DRM
[Once again we are seeing an open rebellion against the attempts by the MPAA and RIAA, under the DMCA, to censor and control the publication of keys for HD-DVD discs. When will these guys ever learn that DRM will never work in large-scale distribution of content? They continue to want to protect a failed business model through flawed DRM technologies, lawyers and take-down orders, rather than develop new, innovative marketing strategies. A classic example is that most of the Internet movie download services like NetFlix, iTV, etc. are only available in the US because of the Byzantine marketing restrictions on distribution of content, where licensing is done on the basis of country and mode of distribution. In the content industry this is known as "windows," where you need a separate license for each location and mode of distribution, as well as having to negotiate the plethora of overlapping rights claims, i.e. broadcast, cable, streaming, download, etc. Clever kids outside of the US have already figured out the use of proxy services to get around these idiotic restrictions. And the studios are wondering why unauthorized P2P movie downloads are so popular. Duh. It is easier to accuse your potentially biggest customers of theft than to question your own idiotic and antiquated business models -- BSA]
http://news.bbc.co.uk/2/hi/technology/6615047.stm
Citizen Science, Cyber-infrastructure and Carbon Dioxide Emissions
[Here are a couple of interesting sites demonstrating the power of grids, cyber-infrastructure, platforms and citizen science to measure the emission and absorption of carbon dioxide. The NOAA "CarbonTracker" seeks volunteers to provide carbon dioxide measurements from around the globe and then uses that information, integrated with a number of databases and computational models, to assimilate the data and the effects of forest fires, the biosphere, ocean absorption, etc. A similar project is the Canadian SAFORAH, which has many objectives, one of which is to measure the amount of carbon dioxide absorbed by Canadian forests. This cyber-infrastructure project also supports studies of bird habitat across Canada. It uses Globus Toolkit v.4 at all of the SAFORAH participating sites. Currently, four Canadian Forestry Centres, located in Victoria, British Columbia; Corner Brook, Newfoundland; Edmonton, Alberta; and Laurentian Québec, are operationally connected to the SAFORAH data grid. SAFORAH offers Grid-enabled OGC services which are used to increase interoperability of EO data between SAFORAH and other geospatial information systems. The Grid-enabled OGC services consist of the following main components: Grid-enabled Web Map Service (GWMS), Grid-enabled Web Coverage Service (GWCS), Grid-enabled Catalog Service for Web (GCSW), Grid-enabled Catalog Service Federation (GCSF), Control Grid Service (CGS), and the Standard Grid Service Interfaces and OGC Standard User Interfaces. Thanks to Erick Cecil and Hao Chen -- BSA]
For more information on SAFORAH please see
www.saforah.org
For more information on carbon tracker please see www.esrl.noaa.gov/gmd/ccgg/carbontracker/
A tool for Science, and Policy
CarbonTracker as a scientific tool will, together with long-term monitoring of atmospheric CO2, help improve our understanding of how carbon uptake and release from land ecosystems and oceans are responding to a changing climate, increasing levels of atmospheric CO2 (the CO2 fertilization effect) and other environmental changes, including human management of land and oceans. The open access to all CarbonTracker results means that anyone can scrutinize our work, suggest improvements, and profit from our efforts. This will accelerate the development of a tool that can monitor, diagnose, and possibly predict the behavior of the global carbon cycle, and the climate that is so intricately connected to it.
CarbonTracker can become a policy support tool too. Its ability to accurately quantify natural and anthropogenic emissions and uptake at regional scales is currently limited by a sparse observational network. With enough observations though, it will become possible to keep track of regional emissions, including those from fossil fuel use, over long periods of time. This will provide an independent check on emissions accounting, estimates of fossil fuel use based on economic inventories, and generally, feedback to policies aimed at limiting greenhouse gas emissions. This independent measure of effectiveness of any policy, provided by the atmosphere (where CO2 levels matter most!) itself is the bottom line in any mitigation strategy.
CarbonTracker is intended to be a tool for the community and we welcome feedback and collaboration from anyone interested. Our ability to accurately track carbon with more spatial and temporal detail is dependent on our collective ability to make enough measurements and to obtain enough air samples to characterize variability present in the atmosphere. For example, estimates suggest that observations from tall communication towers (>200m) can tell us about carbon uptake and emission over a radius of only several hundred kilometers. The map of observation sites shows how sparse the current network is. One way to join this effort is by contributing measurements. Regular air samples collected from the surface, towers or aircraft are needed. It would also be very fruitful to expand use of continuous measurements like the ones now being made on very tall (>200m) communications towers. Another way to join this effort is by volunteering flux estimates from your own work, to be run through CarbonTracker and assessed against atmospheric CO2. Please contact us if you would like to get involved and collaborate with us!
CarbonTracker uses many more continuous observations than previously taken. The largest concentration of observations for now is from within North America. The data are fed into a sophisticated computer model with 135 ecosystems and 11 ocean basins worldwide. The model calculates carbon release or uptake by oceans, wildfires, fossil fuel combustion, and the biosphere and transforms the data into a color-coded map of sources and storage "sinks." One of the system's most powerful assets is its ability to detect natural variations in carbon uptake and release by oceans and vegetation, which could either aid or counteract societies' efforts to curb fossil fuel emissions on a seasonal basis.
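As a toy illustration of the kind of bookkeeping such a system performs (this is not CarbonTracker code), the snippet below aggregates a few invented point flux estimates onto a coarse latitude/longitude grid and labels each cell as a net source or sink:

# Toy illustration (not CarbonTracker itself): aggregate point CO2 flux
# estimates onto a coarse latitude/longitude grid and label each cell.
# Observations and values are invented; positive flux = release, negative = uptake.
from collections import defaultdict

observations = [
    (49.2, -123.1, -0.8),   # uptake (e.g. forest)
    (45.4,  -75.7,  1.2),   # release (e.g. fossil fuel combustion)
    (48.4, -123.4, -0.3),
]

CELL = 10  # degrees per grid cell

def cell_of(lat, lon):
    return (int(lat // CELL) * CELL, int(lon // CELL) * CELL)

totals = defaultdict(float)
for lat, lon, flux in observations:
    totals[cell_of(lat, lon)] += flux

for (lat0, lon0), net in sorted(totals.items()):
    label = "source" if net > 0 else "sink"
    print(f"cell ({lat0:+}, {lon0:+}): net flux {net:+.2f} -> {label}")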
Collaboration on Future Internet Architectures
Call for Research Collaboration on Future Internet Architectures in Partnership with the US NSF FIND Program
Background
The Internet's unquestionable success at embodying a single global architecture has also led over the decades of its operation to unquestionable difficulties with regard to support for sound operation and some types of functionality as well as raising issues about security and robustness. Recently the international network research community has focused on developing fresh perspectives on how to design and test new architectures for coherent, global data networks that overcome these difficulties and enable a healthy robust Future Internet.
As a reflection of this growing community interest, there has been international interest in rethinking the Internet to meet the needs of the 21st century. In the United States, the National Science Foundation (NSF) has announced a focus area for networking research called FIND, or Future Internet Design. The agenda of this focus area is to invite the research community to take a long-range perspective, and to consider what we want our global network of 10 or 15 years from now to be, and how to build networks that meet those future requirements. (For further information on the FIND program, see NSF solicitation 07-507.) The research funded by FIND aims to contribute to the emergence of one or more integrated visions of a future network. (See www.nets-find.net for information about the funded research projects.)
A vital part of this effort concerns fostering collaboration and consensus-building among researchers working on future global network architectures. To this end, NSF has created a FIND Planning Committee that works with NSF to organize a series of meetings among FIND grant recipients structured around activities to identify and refine overarching concepts for networks of the future. As part of the research we leave open the question of whether there will be one Internet or several virtualized Internets.
A broader community
Because there is a broad set of efforts with similar goals supported by other agencies, industry, and nations, NSF sees significant value in researchers in the FIND program participating in collaboration and consensus-building with other researchers, in academia and industry in the US and particularly internationally, who share like-minded visions. We believe that such visions of future global networks would greatly benefit from global participation and that testing and deploying these networks require global participation.
NSF would like to do its share in helping to create a global research community centered on working toward future global network architectures by inviting researchers interested in such collaboration to participate in FIND activities. We hope that other national and international groups will invite FIND participants to work with their researchers as well.
The FIND meetings are organized for the benefit of those already actively working in this area, or for those who have specific intellectual contributions they are prepared to make in support of this kind of research. These meetings are not informational meetings for people interested in learning about the problem, or for those preparing to submit proposals to NSF.
Invitee Selection
Since the efficacy of FIND meetings is in part a function of their size and coherence, we are asking researchers or individuals engaged in activities in support of research to submit short white papers describing themselves and how their work or intellectual contribution is relevant to future global internet architectures. Based on the FIND planning committee's evaluation of how the described work or contribution would contribute to a vision of the future, researchers will be invited to join the FIND meetings and other events, as overall meeting sizes and logistics permit. The white papers should not focus on implementing large-scale infrastructure projects.
The evaluation of the white papers will focus on certain criteria that are listed below, along with expectations regarding what external participation entails. Naturally, interested parties should take these considerations into account as they write their white papers, and include information in their papers sufficient to allow the FIND planning committee to evaluate the aptness of their participation. Please try to limit your white paper to 2 pages.
* In a few sentences, please describe your relevant work, and its intended impact. When possible, include as an attachment (or a URL) a longer description of your work, which if you wish can be something prepared for another purpose (e.g. an original funding proposal or a publication). It will help to limit the supporting material to 15 pages or fewer.
* Please summarize in the white paper the ways you see your contributions as being compatible with the objectives of FIND (the URL for the FIND solicitation is included above). Contributions that accord with the FIND program will generally be based on a long-term vision of future networking, rather than addressing specific near-term problems, and framed in terms of how they might contribute to an overall architecture for a future network.
* Since the FIND meetings have been organized for the benefit of researchers who have already been funded and are actively pursuing their research, research described in white papers should already be supported. Please describe the means you have available to cover your FIND-related activities: the source of funds, their duration, and (roughly) the supported level of effort. Unfortunately, NSF lacks additional funds to financially support your participation in the meetings, so you must be prepared to cover those costs as well.
* If you have submitted a FIND research proposal to the current NeTS solicitation, you should not submit a white paper here based on that research. You should provisionally hold June 27-28, 2007, the dates of the next meeting, because if selected for funding, you will be invited to attend the June meeting. The selection will be made in early June.
* As one of the goals of FIND is to develop an active community of researchers who work increasingly together over time towards coherent, overall architectural visions, we aim for external participants to likewise become significantly engaged. To this end, you should anticipate (and have resources for) participating in FIND project meetings (three per year) in an active, sustained fashion.
* Invitations are for individuals, not organizations, so individuals, not organizations, should submit white papers.
* We view the research as pre-competitive, so your research must not be encumbered by intellectual property restrictions that prevent you from fully discussing your work and its results with the other participants.
Your white paper (and the supporting description of current research or other relevant contributions) will be read by members of the research community, so do not submit anything that you would not reveal to your peers. (White papers are not viewed as formal submissions to NSF.)
Timing and submission
You may submit a white paper at any time during the FIND program. The papers we receive will be reviewed before each scheduled FIND PI meeting. Meetings are anticipated to occur approximately three times a year, in March, June/July and October/November. The next FIND meeting is scheduled for June 27-28, 2007 in the Washington, D.C. area. Priority in consideration for that meeting will be given to white papers that are received by Friday, May 14th, 2007.
Send your white paper to Darleen Fisher
Will cable companies offer low cost cell phone service with Wifi peering?
[Many cable companies in North America have been struggling with the idea of how to get into the lucrative cell phone business, but they are daunted by the high cost of deploying a cell tower infrastructure. The recent Time Warner announcement with FON points to one possible model, where customers will be encouraged to operate open WiFi access spots from their homes and businesses. Although the story in the NYT is being pitched as Time Warner allowing users to share access with their modem, the real opportunity is that users of the new WiFi-enabled cell phones will have an inexpensive and widespread low-cost cell phone network, provided by their cable company at a fraction of the cost of deploying a normal cellular phone network. The revenue opportunities of cell phones are significantly higher than those of selling basic broadband, and it is not too hard to see that it would be in the cable company's interest to offer free broadband if customers agree to operate a FON open WiFi spot. New WiFi peering tools like the one developed at Technion will allow the range to be considerably extended -- BSA]
Time Warner broadband deal to allow users to share access
NY Times
By The Associated Press
In a victory for a small Wi-Fi start-up called Fon, Time Warner will let its home broadband customers turn their connections into public wireless access spots, a practice shunned by most Internet service providers in the United States.
For Fon, which has forged similar agreements with service providers across Europe, the deal will bolster its credibility with American consumers. For Time Warner, which has 6.6 million broadband subscribers, the move could help protect the company from an exodus as free or inexpensive municipal wireless becomes more readily available.
http://www.networkworld.com/news/2007/041907-wi-fi-software-routers.html?nwwpkg=alphadoggs
Free Wi-Fi software nixes need for routers
Wireless software from university can be downloaded at no cost
Researchers are making available software they say can be used to link nearby computers via Wi-Fi without a router and that someday could be used by cell phone users to make free calls.
Technion-Israel Institute of Technology scientists say their WiPeer software (available as a no-cost download here) can be used to link computers that are within 300 feet of each other inside buildings, or up to more than 900 feet apart outdoors.
Next up is extending the software to work with cell phones so that callers can bypass operators and talk to nearby people.
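To make the router-less idea concrete, here is a minimal peer-discovery sketch in Java: one side broadcasts a hello message over UDP and the other listens for it, assuming the machines have already joined the same ad hoc Wi-Fi network. This is not WiPeer's actual API or protocol (neither is documented in the article); the class name, port number and message text below are purely illustrative.

// Illustrative only: generic peer discovery over UDP broadcast, assuming the
// machines already share the same ad hoc Wi-Fi network. This is NOT WiPeer's
// API or protocol; the port number and message text are made up for the sketch.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class AdHocPeerDiscovery {
    private static final int PORT = 8888;              // hypothetical discovery port
    private static final String HELLO = "PEER_HELLO";  // hypothetical announce message

    // Broadcast our presence to every host on the ad hoc subnet.
    static void announce() throws Exception {
        DatagramSocket socket = new DatagramSocket();
        socket.setBroadcast(true);
        byte[] payload = HELLO.getBytes("UTF-8");
        DatagramPacket packet = new DatagramPacket(payload, payload.length,
                InetAddress.getByName("255.255.255.255"), PORT);
        socket.send(packet);
        socket.close();
    }

    // Listen for announcements from nearby peers and print their addresses.
    static void listen() throws Exception {
        DatagramSocket socket = new DatagramSocket(PORT);
        byte[] buffer = new byte[256];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);
            String message = new String(packet.getData(), 0, packet.getLength(), "UTF-8");
            if (HELLO.equals(message)) {
                System.out.println("Found peer at " + packet.getAddress().getHostAddress());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0 && "listen".equals(args[0])) {
            listen();
        } else {
            announce();
        }
    }
}

Run the program with the argument "listen" on one machine and with no arguments on another; a tool like WiPeer would layer file sharing, chat or, eventually, voice on top of a discovery step of this kind.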
Grid Portals and Web 2.0 for cyber-infrastructure and platforms
[For most scientific users, portals will be the most common way to interface with grids and cyber-infrastructure platforms. A good example of a cyber-infrastructure platform portal is the Eucalyptus project, where architectural collaborators can interact and link together various web services and workflows such as rendering grids, network web services for HDTV, etc. The following IBM site provides a good tutorial on how to build a portal with Web 2.0 tools, WSRF, etc. -- BSA]
Eucalyptus portal http://iit-iti.nrc-cnrc.gc.ca/projects-projets/eucalyptus_e.html
IBM portal development
http://www-128.ibm.com/developerworks/grid/library/gr-stdsportal3/index.html
Built on top of grid middleware, grid portals act as gateways to the grid because they smooth the learning curve of using the grid. In the first of this three-part "Development of standards-based grid portals" series, we give an overview of grid portals, focusing on today's standards-based (JSR 168 and Web Services for Remote Portlets (WSRP) V1.0) second-generation grid portals. In Part 2, we develop three portlets to illustrate how a grid portal can be built using JSR 168-compliant portlets. And here in Part 3, we discuss the application of WSRP and the future of grid portals.
Today, grid portals play an important role as resource and application gateways in the grid community. Most of all, grid portals provide researchers with a familiar UI via Web browsers, which hides the complexity of computational and data grid systems. In this three-part series, we gave a general review of portals and discussed first- and second-generation grid portals. We built three grid portlets that demonstrate how a basic grid portal can be constructed using JSR 168-compliant portlets. We illustrated how these grid portlets are reused through WSRP and considered the future of grid portal development.
JSR 168 and WSRP V1.0 are two specifications that aim to solve interoperability issues between portlets and portlet containers. In particular, today's grid portals are service-oriented. On one hand, portals are acting as service clients to consume traditional data-centric Web services. On the other hand, portals are providing presentation-centric services so federated portals can be easily built.
With basic grid-related functions like proxy manager and job submission successfully implemented, advanced grid portals today are aimed at the integration of complex applications, including visualisation and workflow systems. Web 2.0 techniques were presented, and Ajax was recommended for portal development to make grid portals more interactive and attractive to users. In the future, grid portals should also aim to include existing Web applications and, as security techniques become more developed, credential delegation will play an important role in the federation and sharing of grid services.
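As a concrete illustration of the portlet model the series describes, here is a minimal JSR 168 portlet sketch in Java. It is not code from the IBM articles; it only renders a static HTML fragment in VIEW mode, standing in for something like a job-submission or proxy-manager portlet, and it would still need a portlet.xml deployment descriptor and a JSR 168-compliant portal container to run.

// A minimal JSR 168 portlet: renders a static HTML fragment in VIEW mode.
// A real grid portlet would render job lists or credential-management forms here.
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class HelloGridPortlet extends GenericPortlet {

    // Called by the portlet container when the portlet is rendered in VIEW mode.
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        // Set the content type before obtaining the writer, as JSR 168 requires.
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // A portlet writes a fragment, not a full page; the portal assembles the page.
        out.println("<p>Hello from a minimal grid portlet.</p>");
        out.println("<p>A real portlet would show job status or proxy credentials here.</p>");
    }
}

The key design point is that a portlet emits a markup fragment rather than a whole page; the portal aggregates fragments from many portlets into a single page, and WSRP extends the same contract across the network so that the fragment can be produced by a remote, federated portal.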
Cyber-Infrastructure, Platforms, grids & web services for emergency response
[The Open Geospatial Consortium - www.opengeospatial.org - has a great video and web site demonstrating the use of cyber-infrastructure and platform technologies such as web services, workflows, grids and networks for emergency response applications. They have deployed a test bed demonstrating the use of these tools in response to a chemical warehouse fire in the San Diego area. I highly recommend that anyone interested in attending CANARIE's Platforms workshop visit this site and watch the video. It will give you a good overall view of the type of middleware platforms we are looking to fund and deploy under the upcoming CANARIE network enabled platforms program. Thanks to Steve Liang for this pointer -- BSA]
http://sensorweb.geoict.net/
And here is a multimedia (Flash) version of the demo:
http://www.opengeospatial.org/pub/www/ows3/index.html
This is a movie of a Sensor Web for a disaster management application. http://sensorweb.geoict.net/Assets/SWEClient_004.avi
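For readers curious about what sits behind such a demo, the OGC Sensor Web Enablement suite includes the Sensor Observation Service (SOS), which clients query over plain HTTP. Below is a minimal Java sketch of a KVP-encoded GetCapabilities request; the endpoint URL is a placeholder rather than the demo's actual service, and a real client would go on to parse the returned XML and issue GetObservation requests for specific sensors and time ranges.

// A minimal SOS client sketch: sends a KVP-encoded GetCapabilities request
// and prints the XML capabilities document. The endpoint below is a
// placeholder, not the actual service used in the OGC demo.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SosCapabilitiesClient {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://example.org/sos";  // placeholder SOS endpoint
        URL url = new URL(endpoint + "?service=SOS&request=GetCapabilities");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // The response is an XML document describing the service's offerings:
        // which sensors it exposes, what they observe, and over what time ranges.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
        conn.disconnect();
    }
}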
IT-based Innovation in the 21st Century
Stanford EE Computer Systems Colloquium
4:15PM, Wednesday, Apr 25, 2007
HP Auditorium, Gates Computer Science Building B01
http://ee380.stanford.edu[1]
Topic: IT-based Innovation in the 21st Century
Speaker: Irving Wladawsky-Berger
Vice President, Technical Strategy and Innovation
IBM
About the talk:
Advances in information technologies, combined with open standards, and especially the Internet, are helping us build a global infrastructure with the potential to transform business, society and its institutions, and our personal lives, not unlike the impact that steam power had in ushering in the Industrial Revolution in generations past. The resulting environment is characterized by collaborative innovation and access to information on an unprecedented scale. It holds the promise of helping us apply engineering disciplines, tools and processes to the design and management of highly complex systems, including businesses and organizations, as well as making applications much more user-friendly through the use of highly visual, interactive interfaces.
About the speaker:
(photo) Dr. Irving Wladawsky-Berger is responsible for identifying emerging technologies and marketplace developments critical to the future of the IT industry, and organizing appropriate activities in and outside IBM in order to capitalize on them. In conjunction with that, he leads a number of key innovation-oriented activities and formulates technology strategy and public policy positions in support of them. As part of this effort, he is also responsible for the IBM Academy of Technology and the company's university relations office.
Dr. Wladawsky-Berger's role in IBM's response to emerging technologies began in December 1995 when he was charged with formulating IBM's strategy in the then emerging Internet opportunity, and developing and bringing to market leading-edge Internet technologies that could be integrated into IBM's mainstream business. He has led a number of IBM's company-wide initiatives including Linux, IBM's Next Generation Internet efforts and its work on Grid computing. Most recently, he led IBM's on demand business initiative.
He joined IBM in 1970 at the Thomas J. Watson Research Center where he started technology transfer programs to move the innovations of computer science from IBM's research labs into its product divisions. After joining IBM's product development organization in 1985, he continued his efforts to bring advanced technologies to the marketplace, leading IBM's initiatives in supercomputing and parallel computing including the transformation of IBM's large commercial systems to parallel architectures. He has managed a number of IBM's businesses, including the large systems software and the UNIX systems divisions.
Dr. Wladawsky-Berger is a member of the University of Chicago Board of Governors for Argonne National Laboratory and of the Technology Advisory Council for BP International. He was co-chair of the President's Information Technology Advisory Committee, as well as a founding member of the Computer Science and Telecommunications Board of the National Research Council. He is a Fellow of the American Academy of Arts and Sciences. A native of Cuba, he was named the 2001 Hispanic Engineer of the Year.
Dr. Wladawsky-Berger received an M.S. and a Ph.D. in physics from the University of Chicago.
Dr. Wladawsky-Berger maintains a personal blog[2], which captures observations, news and resources on the changing nature of innovation and the future of information technology.
Contact information:
Irving Wladawsky-Berger
IBM
Embedded Links:
[ 1 ] http://ee380.stanford.edu
[ 2 ] http://irvingwb.typepad.com/