Tuesday, May 22, 2007

Engineering Virtual Organizations - NSF CyberInfrastructure Program

[NSF has just launched an exciting new program called Engineering Virtual Organizations. Those who are interested in applying to CANARIE's Network Enabled Platforms program should read this solicitation closely, as its requirements are almost identical to those of the CANARIE program. Most importantly, Canadian research teams are eligible to apply for funding to the CANARIE program to join or participate in any US or European virtual organization as described in the NSF solicitation. -- BSA]


Engineering Virtual Organization Grants (EVO)

Program Solicitation
NSF 07-558


Engineering Virtual Organization (EVO) Grants

Synopsis of Program:

The primary purpose of this solicitation is to promote the development of Virtual Organizations (VOs) for the engineering community (EVOs). A VO is created by a group of individuals whose members and resources may be dispersed globally, yet who function as a coherent unit through the use of cyberinfrastructure (CI). EVOs will extend beyond small collaborations and individual departments or institutions to encompass wide-ranging, geographically dispersed activities and groups. This approach has the potential to revolutionize the conduct of science and engineering research, education, and innovation. These systems provide shared access to centralized or distributed resources, such as community-specific tools, applications, data, sensors, and experimental operations, often in real time.

With access to enabling tools and services, self-organizing communities can create VOs to facilitate scientific workflows; collaborate on experiments; share information and knowledge; remotely operate instrumentation; run numerical simulations using shared computing resources; dynamically acquire, archive, e-publish, access, mine, analyze, and visualize data; develop new computational models; and deliver unique learning, workforce-development, and innovation tools. Most importantly, each VO design can originate within a community and be explicitly tailored to meet the needs of that specific community. At the same time, to exploit the full power of cyberinfrastructure for a VO's needs, research domain experts need to collaborate with CI professionals who have expertise in algorithm development, systems operations, and application development.

This program solicitation requests proposals for two-year seed awards to establish EVOs. Proposals must address the EVO organizing principle, structure, shared community resources, and research and learning goals; a vision for organizing the community, including international partners; a vision for preparing the CI components needed to enable those goals; a plan to obtain and document user requirements formally; and a project management plan for developing both a prototype implementation and a conceptual design of a full implementation. These items will be used as criteria for evaluation along with the standard NSF criteria of Intellectual Merit and Broader Impacts. Within the award size constraints, the prototype implementation should provide proof of concept with a limited number of its potential CI features. Successful proposals should expect to demonstrate the benefits of a fully functional EVO and how it will catalyze both large and small connections, circumventing the global limitations of geography and time zones.


I. INTRODUCTION

Cyberinfrastructure (CI) is having a transformative effect on engineering practice, science and education. The National Science Foundation (NSF) has been active in developing CI and advancing its use. Numerous resources are available that describe these activities:

* Report of the NSF Blue-Ribbon Panel on Cyberinfrastructure
* NSF Cyberinfrastructure Council Vision document
* NSF-sponsored workshops, several focused on engineering CI

Among its other investments in CI, NSF has catalyzed the creation of VOs as a key means of aiding access to research resources, thus advancing science and its application. Researchers working at the frontiers of knowledge and innovation increasingly require access to shared, world-class community resources spanning data collections, high-performance computing equipment, advanced simulation tools, sophisticated analysis and visualization facilities, collaborative tools, experimental facilities and field equipment, distributed instrumentation, sensor networks and arrays, mobile research platforms, and digital learning materials. With an end-to-end system, VOs can integrate shared community resources, including international resources, with an interoperable suite of software and middleware services and tools and high-performance networks. This use of CI can then create powerful transformative and broadly accessible pathways for scientific and engineering VOs to accelerate research outcomes into knowledge, products, services, and new learning opportunities.

Initial engineering-focused VOs (EVOs) have demonstrated the potential for this approach. Examples of EVOs involving significant engineering communities are the George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES), the Collaborative Large-scale Engineering Analysis Network for Environmental Research (now called the WATERS network), the National Nanofabrication Users Network, and the Network for Computational Nanotechnology and its nanoHUB.org portal.

Other engineering communities can benefit from extending this model: organizing as VOs; exploiting existing CI tools, rapidly putting them to use; and identifying new CI opportunities, needs, and tools to reach toward their immediate and grand-challenge goals. These activities must be driven by the needs of participating engineers and scientists, but collaboration with information scientists is vital to build in the full power of CI capabilities.

Creation of VOs by engineering communities will revolutionize how their research, technical collaborations, and engineering practices are developed and conducted. EVOs will accelerate both research and education by organizing and aiding shared access to community resources through a mix of governance principles and cyberinfrastructure.

II. PROGRAM DESCRIPTION

This program solicitation requests proposals for two-year seed awards with three key elements: (1) establishing an engineering virtual organization, (2) deploying its prototype EVO implementation, and (3) creating a conceptual design of its full implementation. Proposals are encouraged from engineering communities that can provide documentary evidence of strong community support and interest in developing an EVO enabled by CI, potentially including international participants. The CI conceptual design should draw upon: (1) articulated research and education goals of a research community to advance new frontiers, (2) advances made by other scientific and engineering fields in establishing and operating VOs and their associated CI, (3) commercially available CI tools and services, and (4) CI tools and services emerging from current federal investments.

Proposals must address the following topics:

* EVO structure and justification: Vision and mission; organizing and governing structure; members and recruitment; end users; stakeholders; and shared community resources (e.g., experimental facilities, observatories, data collections), their associated service providers, and access/allocation methods. Identify frontier research and education goals of the EVO, including compelling research questions and the potential for broad participation. EVOs will extend beyond small collaborations and individual departments or institutions to encompass wide-ranging, geographically dispersed activities and groups.

* [...]

Web services, grids and UCLP for control of synchrotron beam lines


[Excellent presentation at the recent EPICS meeting in Germany on the use of web services, UCLP and grids for controlling synchrotron beam lines and distributing the data to researchers across Canada and Australia. Thanks to Elder Mathias for this pointer -- BSA]

ftp://ftp.desy.de/pub/EPICS/meeting-2007/epics-Mar2007.pdf


Good example of using UCLP on Geant2 network in Europe

An excellent use case of UCLP on the GEANT2 network in Europe has been prepared by engineers from i2Cat, who will present it at the upcoming TERENA Networking Conference. The paper is based on an example application that aims to demonstrate UCLP's usefulness for the National Research and Education Networks in Europe that are connected to the GEANT2 network.

[UCLP is often confused with various bandwidth-on-demand and bandwidth-reservation systems. Personally, I am not a believer in such traditional circuit-switched approaches to networking. Besides the inevitable call-blocking problems associated with such architectures, the cost of optical transponders is dropping dramatically, so it is much easier to provision several nailed-up parallel IP routed networks than to deal with the high OPEX costs of managing a circuit-switched environment.

UCLP is a provisioning and configuration tool that allows edge organizations and users to configure and provision their own IP networks and do their own direct point-to-point peering. The traditional network approach, whether in the commercial world or the R&E world, is to have a central, hierarchical, telco-like organization manage all the relationships between the edge-connected networks and organizations. UCLP, on the other hand, is an attempt to extend the Internet end-to-end principle to the physical layer. It is ideally suited for condominium wavelength or fiber networks where a multitude of networks and organizations co-own and co-manage a network infrastructure.
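To make the edge-controlled provisioning idea concrete, here is a minimal sketch of what a point-to-point lightpath request might look like when composed by the edge organization itself. The operation name, field names, and endpoint are invented for illustration; the actual UCLP web-service interface differs.

```python
# Hypothetical sketch in the spirit of UCLP: the edge organization,
# not a central telco-like authority, composes its own provisioning
# request. All names below are invented for illustration.

def build_lightpath_request(src_port, dst_port, bandwidth_mbps):
    """Compose a point-to-point lightpath provisioning request."""
    return {
        "operation": "createLightpath",   # invented operation name
        "source": src_port,               # e.g. an optical cross-connect port
        "destination": dst_port,
        "bandwidth_mbps": bandwidth_mbps,
    }

req = build_lightpath_request("ottawa-oxc1:port3", "chicago-oxc2:port7", 1000)
# A real deployment would POST this to the organization's own UCLP
# web service; no central provisioning authority is involved.
```

The point of the sketch is the locus of control: the request originates at, and is authorized by, the edge organization that co-owns the infrastructure.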

Thanks to Sergei Figuerola for this pointer -- BSA]

UCLP case study on GEANT2 network
(http://tnc2007.terena.org/programme/presentations/show.php?pres_id=99)

For open source downloading of UCLP: http://www.uclp.ca/index.php?option=com_content&task=view&id=52&Itemid=77

The community edition will be available soon: http://www.inocybe.ca/


SOA, Web 2.0 and Open source for education and student services

[It is exciting to see that a number of organizations and universities are starting to recognize the power of open source development combined with SOA and Web 2.0 for new educational tools and student services.

Just as the participatory web is transforming business practices and customer relationships in the corporate world, we are now seeing the same technologies having transformative impacts on educational and back-office tools at schools and universities.

Of particular interest is the Kuali student service system which will deliver a new generation student system that will be developed through the Community Source process, delivered through service-oriented methodologies and technologies, and sustained by an international community of institutions and firms. Student systems are the most complex example of the various enterprise systems used by colleges and universities around the world. They are also closer to the core academic mission of these institutions than any other ‘administrative’ system; after all, students are their core business. Flexible student systems, combined with imaginative business policies and processes, can be a source of comparative advantage for an institution.

Imagine a student service system where modules and services can be distributed across multiple institutions but appear as a seamless service integrated through a web portal using web-service workflow tools. Databases, courseware repositories and computational tasks can also be distributed across multiple institutions and linked together with workflow by either the instructor or the student.
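The cross-institutional workflow idea can be sketched in a few lines: two stub "services" (in practice, SOAP or REST endpoints hosted at different universities) are chained by a workflow function. The service names, student record, and repository URL below are all invented for illustration.

```python
# Stub services standing in for web-service endpoints at two
# different institutions; a real SOA deployment would call these
# over the network. All data here is invented.

def registration_service(student_id):
    """Stub for institution A's registration service."""
    return {"student": student_id, "courses": ["ENG101", "CS200"]}

def courseware_service(course):
    """Stub for institution B's courseware repository (hypothetical URL)."""
    return f"https://repo.example.edu/{course}"

def enrolment_workflow(student_id):
    """Chain the services: look up courses, then fetch courseware links."""
    record = registration_service(student_id)
    return {c: courseware_service(c) for c in record["courses"]}

links = enrolment_workflow("s123456")
```

The workflow layer is what makes the distributed modules appear seamless to the student: each step can live at a different institution without the portal user ever noticing.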

For more details on the Kuali project please see http://rit.mellon.org/projects/kuali-student-british-columbia/ks-rit-report-2007.doc/view

For other open source, web service educational projects please see http://os4ed.com/component/option,com_frontpage/Itemid,1/

also

http://www.miller-group.net/

--BSA]

Commercial versions of UCLP available

[A couple of companies are now marketing commercial versions of UCLP. It is expected that other companies will be making announcements of commercial versions as well.

UCLP (User Controlled Lightpaths) is a provisioning and configuration tool that allows end users, such as enterprises or individual high-performance users, to set up and configure their own IP networks for applications such as remote peering and deploying private IP networks with virtual routers, switches, etc.

It is an ideal tool for an organization that controls and manages its own fiber network or condominium fiber network. It can be used to allow individual departments to configure their own LAN networks within a larger campus network, or to establish direct IP connectivity with an external network independent of the default connection through the campus border router.

Most versions of UCLP use web services and grid technology so the network can be seen as an extension of the grid application or web service for virtualized or autonomous network-grid applications. -- BSA]

MRV is a supplier of optical line drivers, optical cross connects and WDM gear for organizations that have acquired their own fiber.

http://www.mrv.com/megavision http://www.mrv.com/product/MRV-NM-MVWEB/
ftp://ftp.mrv.com/pub/software/megavision/MegaVision_UCLP_Application.pdf

Inocybe is a network management software company in Montreal that also offers training, consultation and installation services for UCLP: www.inocybe.ca

Solana is a network management software company in Ottawa www.uclpv2.com

Other UCLP resources
www.uclp.ca
www.uclpv2.ca


Citizen Science with Google Earth, mashups and web service for environmental applications


[Here is an excellent example of the power of mashups, web services using tools like Google Earth for environmental applications. Thanks to Richard Ackerman's blog--BSA]

Richard Ackerman's blog: http://scilib.typepad.com/science_library_pad/2007/05/google_earth_an.html

Workshop
http://www.niees.ac.uk/events/GoogleEarth/


The recent emergence of new "geobrowsing" technologies such as Google Earth, Google Maps and NASA WorldWind presents exciting possibilities for environmental science. These tools allow the visualization of geospatial data in a dynamic, interactive environment on the user's desktop or on the Web. They are low-cost, easy-to-use alternatives to the more traditional heavyweight Geographical Information Systems (GIS) software applications. Critically, it is very easy for non-specialists to incorporate their own data into these visualization engines, allowing for very easy exchange of geographic information. This exchange is facilitated by the adoption of common data formats and services: this workshop will introduce these standards, focussing particularly on the Open Geospatial Consortium's Web Map Service and the KML data format used in Google Earth and other systems.

A key capability of these systems is their ability to simultaneously visualize diverse data sources from different data providers, revealing new information and knowledge that would otherwise have been hidden. Such "mashups" have been the focus of much recent attention in many fields that relate to geospatial data: this workshop will aim to establish the true usefulness of these technologies in environmental science.
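The "easy for non-specialists" claim is worth illustrating: a complete KML layer for these geobrowsers can be a few lines of XML. The sketch below generates a minimal single-placemark KML document; the placemark name and coordinates are arbitrary examples.

```python
# Generate a minimal KML document of the kind Google Earth and
# NASA WorldWind ingest: one named point placemark.

def build_placemark_kml(name, lon, lat):
    """Return a minimal KML string with one placemark (lon, lat in degrees)."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

kml = build_placemark_kml("Sample observation site", -75.7, 45.4)
# Saved as sample.kml and opened in a geobrowser, this displays the
# point; emailing or web-hosting such files is how simple "mashup"
# layers are exchanged between non-specialists.
```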


Weather Forecasts Without Boundaries using Grids, P2P and Web services

[Excerpts from www.GridToday.com article -- BSA]

Grid Enables Weather Forecasts Without Boundaries


The results obtained from SIMDAT, a European research and development
project, are increasingly in demand from European and international
meteorological services and are likely to become acknowledged
worldwide. SIMDAT Meteo is working to establish a Virtual Global
Information System Centre (VGISC) for the national meteorological
services of France, Germany and the United Kingdom based on grid
technology to be used within the World Meteorological Organization
Information System (WIS) to provide cost-effective and user-friendly
services. VGISC offers a unique meteorological database integrating a
variety of data and providing secure, reliable and convenient access
via the Internet. It is targeted toward operational services and
research in the domains of meteorology, hydrology and the environment.



The VGISC software developed by the SIMDAT Meteo project partners, led
by the European Centre for Medium-Range Weather Forecasts, will offer
meteorological communities worldwide immediate, secure and convenient
access to various data and analysis services, as well as a
user-friendly platform for storage of meteorological data. VGISC will
thus enable the fast exchange of data for numerical weather forecasts,
disaster management and research -- independent of national frontiers
and beyond organizational boundaries.

The infrastructure of this new system will be based on a mesh network
of peers and meteorological databases. Messages are interchanged using
algorithms based on mobile telephony technologies and metadata
synchronization on a journalized file system. The grid technology is
based on Open Grid Services Architecture Data Access and Integration
(OGSA-DAI), which is founded on Web service and Web technology
concepts. In addition, standard protocols such as Open Archive
Initiative (OAI) are used to synchronize and integrate existing
archives and databases, as well as to extend interoperability.
Furthermore, VGISC will be a test bed for the ISO 19115 metadata
standard by handling complex data in real-time.
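The Open Archives Initiative protocol mentioned above (OAI-PMH) does its synchronization through plain HTTP requests that a harvester issues against each repository. This sketch only builds such a request URL: the verb and parameters are standard OAI-PMH, but the VGISC-like base URL is hypothetical.

```python
# Build an OAI-PMH harvesting request of the kind used to keep
# distributed archives synchronized. The endpoint is invented;
# the query parameters are standard OAI-PMH.
from urllib.parse import urlencode

def build_oai_request(base_url, verb="ListRecords",
                      metadata_prefix="oai_dc", from_date=None):
    """Compose an OAI-PMH request; from_date enables incremental sync."""
    params = {"verb": verb, "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date  # only records changed since this date
    return f"{base_url}?{urlencode(params)}"

url = build_oai_request("https://vgisc.example.org/oai",  # hypothetical
                        from_date="2007-05-01")
# A real harvester would fetch this URL, parse the XML response, and
# repeat with the returned resumptionToken until the sync completes.
```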

The SIMDAT project is Europe’s contribution to the
infrastructure technology of the emerging WIS as the World
Meteorological Organization (WMO) modernizes and enhances its
long-standing Global Telecommunications System (GTS), an international
network for exchanging mainly meteorological data and warnings in
real-time. In addition, the new system will provide access for all
environmental communities worldwide whereas GTS only allows access for
the present national weather services of the member states.

The opportunities for the new VGISC technology are excellent as VGISC
is not only of interest within Europe: The national meteorological
services of Australia, China, Japan, Korea and the Russian
Federation’s National Oceanographic Centre have already deployed
the SIMDAT software and are collaborating actively with the European
partners. The software deployment is followed by an increasing number
of meteorological centers and new meteorological datasets from Asia,
Australia, Europe and the United States are steadily being added to
the portal.


Why research into the future of the Internet is critical

[I highly recommend taking a look at the YouTube video being distributed by the FTTH Council and also listening to the radio interview with Jon Crowcroft listed below. The FTTH Council video is ideal for politicians and policy makers as it provides a very high-level perspective of the Internet and its current challenges. It offers compelling evidence of the coming "exaflood" of data that will hit the Internet, largely due to the distribution of video. They rightly argue that today's Internet is incapable of supporting this tsunami of data, particularly in the last mile, and that new network architectures and business models are required.

Jon Crowcroft's interview is quite interesting in that he claims that heat loading at data centers will make distribution of video through traditional client-server models impossible, and that peer-to-peer (P2P) will be the only practical way of distributing such content. As testament to that, there is an explosion of new companies gearing up to deliver video and other content via P2P, including Joost, Vudu, etc. This is why many argue that the traditional telco NGN architecture with IPTV is doomed to failure. But P2P imposes its own sort of problems on today's Internet architectures, as witnessed by the attempts of many service providers to limit P2P traffic or ban it outright.

If P2P is indeed going to be the major mode of delivery of data, especially for video, then we need to explore new Internet architectures. As Van Jacobson has pointed out, too much network research is focused on the Internet as a traditional telecommunications medium of "channels" from A to B. As a result, Internet research, especially in the optical and network world, is largely about topology optimization, network layers, reliability, redundancy, etc.

Universities, to my mind, should be at the forefront of exploring ways to build and deploy a new Internet, not only for the research community but for the global community as well. But sadly many universities are also trying to restrict P2P traffic, and in some cases act as the snarling guard dog for the RIAA and MPAA. Students at universities are the early adopters of new technology (much more so than their ageing professors). Rather than discouraging their behaviour, we should see them as an opportunity to test and validate new Internet architectures and services. See my presentation at Net@Edu for some thoughts on this topic.

P2P may fundamentally reshape our thinking about network architectures, for example by enabling the end user to do their own traffic engineering and network optimization to reach the closest P2P node or transit exchange point.
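The "reach the closest P2P node" idea reduces to a simple loop: probe each candidate peer and pick the one with the lowest round-trip time. The sketch below measures TCP connect time as a crude RTT proxy; the peer names and numbers in the example are made up.

```python
# End-user traffic optimization in miniature: probe candidate peers,
# pick the nearest. Peer addresses and RTT figures are invented.
import socket
import time

def probe_rtt(host, port, timeout=1.0):
    """Measure TCP connect time to a peer; return None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_closest(rtts):
    """Choose the peer with the smallest measured RTT (None = unreachable)."""
    reachable = {peer: rtt for peer, rtt in rtts.items() if rtt is not None}
    return min(reachable, key=reachable.get) if reachable else None

# With measurements in hand (made-up numbers here), selection is trivial:
best = pick_closest({"peer-a": 0.042, "peer-b": 0.011, "peer-c": None})
```

A real client would refresh these probes periodically, since the "closest" peer changes as peers join, leave, and congest.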

Thanks to Olivier Martin for the pointer to YouTube Video and Dewayne Hendricks for a multitude of other pointers -- BSA]


FTTH Council Video
http://www.youtube.com/watch?v=c4988qaCvvM

Jon Crowcroft Interview http://blogs.guardian.co.uk/podcasts/2007/04/science_weekly_for_april_30.html

A good article on the current challenges of P2P running on today's networks http://blogs.nmss.com/communications/2007/04/realworld_p2p_j.html

A very interesting read of why IPTV is doomed to failure: http://www.theregister.co.uk/2007/04/22/iptv_services/

Ohio university bans P2P
http://gizmodo.com/gadgets/home-entertainment/ohio-university-bans-all-p2p-activity-riaa-cackles-maniacally-255525.php

Van Jacobson's talk http://lists.canarie.ca/pipermail/news/2007/000374.html

How Universities can play a leading role in Next Generation Internet http://www.canarie.ca/canet4/library/recent/Net_Edu_Phoenix_Feb_5_2007.ppt
and http://www.canarie.ca/canet4/library/recent/Terena_Feb_22_2007.ppt


This Internet TV program is brought to you by ...
Joost, the Internet television service being developed by the
founders of Skype, has lined up several blue-chip advertisers,
including United Airlines, Microsoft, Sony Electronics and Unilever,
as it prepares for its introduction.




NY times article on Vudu and its P2P plans http://www.nytimes.com/2007/04/26/business/media/26adco.html?ex=1335240000&en=0f79f30914fff112&ei=5090&partner=rssuserland&emc=rss










Wednesday, May 2, 2007

The futility of DRM

[Once again we are seeing an open rebellion against the attempts by the MPAA and RIAA, under the DMCA, to censor and control the publication of keys for HD-DVD discs. When will these guys ever learn that DRM will never work for large-scale distribution of content? They continue to want to protect a failed business model through flawed DRM technologies, lawyers and take-down orders, rather than develop new, innovative marketing strategies. A classic example is that most of the Internet movie download services, like NetFlix, iTV, etc., are only available in the US because of the Byzantine marketing restrictions on distribution of content, where licensing is granted on the basis of country and mode of distribution. In the content industry this is known as "windows": you need a separate license for each location and mode of distribution, as well as having to negotiate the plethora of overlapping rights claims (broadcast, cable, streaming, download, etc.). Clever kids outside of the US have already figured out the use of proxy services to get around these idiotic restrictions. And the studios are wondering why unauthorized P2P movie downloads are so popular. Duh. It is easier to accuse your potentially biggest customers of theft than to question your own idiotic and antiquated business models. -- BSA]

http://news.bbc.co.uk/2/hi/technology/6615047.stm

Citizen Science, Cyber-infrastructure and Carbon Dioxide Emissions


[Here are a couple of interesting sites demonstrating the power of grids, cyber-infrastructure, platforms and citizen science to measure the emission and absorption of carbon dioxide. The NOAA "CarbonTracker" seeks volunteers to provide carbon dioxide measurements from around the globe and then integrates that information with a number of databases and computational models to assimilate the data and the effects of forest fires, the biosphere, ocean absorption, etc. A similar project is the Canadian SAFORAH, which has many objectives, one of which is to measure the amount of carbon dioxide absorbed by Canadian forests. This cyber-infrastructure project also supports studies of bird habitat across Canada. It uses Globus Toolkit v.4 at all of the SAFORAH participating sites. Currently, four Canadian Forestry Centres, located in Victoria, British Columbia; Corner Brook, Newfoundland; Edmonton, Alberta; and Laurentian, Québec, are operationally connected to the SAFORAH data grid. SAFORAH offers Grid-enabled OGC services which are used to increase interoperability of EO data between SAFORAH and other geospatial information systems. The Grid-enabled OGC services consist of the following main components: Grid-enabled Web Map Service (GWMS), Grid-enabled Web Coverage Service (GWCS), Grid-enabled Catalog Service for Web (GCSW), Grid-enabled Catalog Service Federation (GCSF), Control Grid Service (CGS), and the Standard Grid Service Interfaces and OGC Standard User Interfaces. Thanks to Erick Cecil and Hao Chen -- BSA]
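The Grid-enabled Web Map Service follows the standard OGC WMS interface, in which a map image is requested with a single GetMap URL. This sketch builds such a URL; the server address and layer name are hypothetical, while the query parameters are the standard WMS 1.1.1 ones.

```python
# Compose an OGC WMS GetMap request, the kind of call a client makes
# to a (Grid-enabled or plain) Web Map Service. Endpoint and layer
# name are invented for illustration.
from urllib.parse import urlencode

def build_getmap_url(base_url, layer, bbox, width=512, height=512):
    """Build a WMS 1.1.1 GetMap URL; bbox is (minx, miny, maxx, maxy)."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

url = build_getmap_url("https://saforah.example.org/wms",  # hypothetical
                       "forest_carbon", (-140.0, 48.0, -52.0, 70.0))
# Fetching this URL would return a rendered PNG of the requested layer,
# which is what lets other geospatial systems interoperate with the grid.
```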


For more information on SAFORAH please see
www.saforah.org

For more information on carbon tracker please see www.esrl.noaa.gov/gmd/ccgg/carbontracker/

A tool for Science, and Policy
CarbonTracker as a scientific tool will, together with long-term monitoring of atmospheric CO2, help improve our understanding of how carbon uptake and release from land ecosystems and oceans are responding to a changing climate, increasing levels of atmospheric CO2 (the CO2 fertilization effect) and other environmental changes, including human management of land and oceans. The open access to all CarbonTracker results means that anyone can scrutinize our work, suggest improvements, and profit from our efforts. This will accelerate the development of a tool that can monitor, diagnose, and possibly predict the behavior of the global carbon cycle, and the climate that is so intricately connected to it.

CarbonTracker can become a policy support tool too. Its ability to accurately quantify natural and anthropogenic emissions and uptake at regional scales is currently limited by a sparse observational network. With enough observations though, it will become possible to keep track of regional emissions, including those from fossil fuel use, over long periods of time. This will provide an independent check on emissions accounting, estimates of fossil fuel use based on economic inventories, and generally, feedback to policies aimed at limiting greenhouse gas emissions. This independent measure of effectiveness of any policy, provided by the atmosphere (where CO2 levels matter most!) itself is the bottom line in any mitigation strategy.

CarbonTracker is intended to be a tool for the community and we welcome feedback and collaboration from anyone interested. Our ability to accurately track carbon with more spatial and temporal detail is dependent on our collective ability to make enough measurements and to obtain enough air samples to characterize variability present in the atmosphere. For example, estimates suggest that observations from tall communication towers (>200m) can tell us about carbon uptake and emission over a radius of only several hundred kilometers. The map of observation sites shows how sparse the current network is. One way to join this effort is by contributing measurements. Regular air samples collected from the surface, towers or aircraft are needed. It would also be very fruitful to expand use of continuous measurements like the ones now being made on very tall (>200m) communications towers. Another way to join this effort is by volunteering flux estimates from your own work, to be run through CarbonTracker and assessed against atmospheric CO2. Please contact us if you would like to get involved and collaborate with us!

CarbonTracker uses many more continuous observations than previously taken. The largest concentration of observations for now is from within North America. The data are fed into a sophisticated computer model with 135 ecosystems and 11 ocean basins worldwide. The model calculates carbon release or uptake by oceans, wildfires, fossil fuel combustion, and the biosphere and transforms the data into a color-coded map of sources and storage "sinks." One of the system's most powerful assets is its ability to detect natural variations in carbon uptake and release by oceans and vegetation, which could either aid or counteract societies' efforts to curb fossil fuel emissions on a seasonal basis.




Collaboration on Future Internet Architectures


Call for Research Collaboration on Future Internet Architectures in Partnership with the US NSF FIND Program

Background
The Internet's unquestionable success at embodying a single global architecture has, over the decades of its operation, also led to unquestionable difficulties: support for sound operation and some types of functionality has suffered, and issues have been raised about security and robustness. Recently the international network research community has focused on developing fresh perspectives on how to design and test new architectures for coherent, global data networks that overcome these difficulties and enable a healthy, robust Future Internet.
As a reflection of this growing community interest, there has been international interest in rethinking the Internet to meet the needs of the 21st century. In the United States, the National Science Foundation (NSF) has announced a focus area for networking research called FIND, or Future Internet Design. The agenda of this focus area is to invite the research community to take a long-range perspective, to consider what we want our global network of 10 or 15 years from now to be, and to ask how to build networks that meet those future requirements. (For further information on the FIND program, see NSF solicitation 07-507.) The research funded by FIND aims to contribute to the emergence of one or more integrated visions of a future network. (See www.nets-find.net for information about the funded research projects.)

A vital part of this effort concerns fostering collaboration and consensus-building among researchers working on future global network architectures. To this end, NSF has created a FIND Planning Committee that works with NSF to organize a series of meetings among FIND grant recipients structured around activities to identify and refine overarching concepts for networks of the future. As part of the research we leave open the question of whether there will be one Internet or several virtualized Internets.

A broader community
Because there is a broad set of efforts with similar goals supported by other agencies, industry, and nations, NSF sees significant value in researchers in the FIND program participating in collaboration and consensus-building with other researchers, in academia and industry in the US and particularly internationally, who share like-minded visions. We believe that such visions of future global networks would greatly benefit from global participation and that testing and deploying these networks require global participation.

NSF would like to do its share in helping to create a global research community centered on working toward future global network architectures by inviting researchers interested in such collaboration to participate in FIND activities. We hope that other national and international groups will invite FIND participants to work with their researchers as well.

The FIND meetings are organized for the benefit of those already actively working in this area, or for those who have specific intellectual contributions they are prepared to make in support of this kind of research. These meetings are not informational meetings for people interested in learning about the problem, or for those preparing to submit proposals to NSF.


Invitee Selection
Since the efficacy of FIND meetings is in part a function of their size and coherence, we are asking researchers or individuals engaged in activities in support of research to submit short white papers describing themselves and how their work or intellectual contribution is relevant to future global internet architectures. Based on the FIND planning committee's evaluation of how the described work or contribution would contribute to a vision of the future, researchers will be invited to join the FIND meetings and other events, as overall meeting sizes and logistics permit. The white papers should not focus on implementing large-scale infrastructure projects.

The evaluation of the white papers will focus on certain criteria that are listed below, along with expectations regarding what external participation entails. Naturally, interested parties should take these considerations into account as they write their white papers, and include information in their papers sufficient to allow the FIND planning committee to evaluate the aptness of their participation. Please try to limit your white paper to 2 pages.

* In a few sentences, please describe your relevant work, and its
intended impact. When possible, include as an attachment (or a URL) a longer description of your work, which if you wish can be something prepared for another purpose (e.g. an original funding proposal or a publication). It will help to limit the supporting material to 15 pages or fewer.
* Please summarize in the white paper the ways you see your
contributions as being compatible with the objectives of FIND (the URL for the FIND solicitation is included above). Contributions that accord with the FIND program will generally be based on a long-term vision of future networking, rather than addressing specific near-term problems, and framed in terms of how they might contribute to an overall architecture for a future network.
* Since the FIND meetings have been organized for the benefit of
researchers who have already been funded and are actively pursuing their research, research described in white papers should already be supported. Please describe the means you have available to cover your FIND-related activities: the source of funds, their duration, and
(roughly) the supported level of effort. Unfortunately, NSF lacks additional funds to financially support your participation in the meetings, so you must be prepared to cover those costs as well.
* If you have submitted a FIND research proposal to the current
NeTS solicitation, you should not submit a white paper here based on that research. You should provisionally hold June 27-28, 2007, the dates of the next meeting, because if your proposal is selected for funding, you will be invited to attend the June meeting. The selection will be made in early June.
* As one of the goals of FIND is to develop an active community of
researchers who work increasingly together over time towards coherent, overall architectural visions, we aim for external participants to likewise become significantly engaged. To this end, you should anticipate (and have resources for) participating in FIND project meetings (three per year) in an active, sustained fashion.
* Invitations are for individuals, not organizations, so
individuals, not organizations, should submit white papers.
* We view the research as pre-competitive, so your research must
not be encumbered by intellectual property restrictions that prevent you from fully discussing your work and its results with the other participants.
Your white paper (and the supporting description of current research or other relevant contributions) will be read by members of the research community, so do not submit anything that you would not reveal to your peers. (White papers are not viewed as formal submissions to NSF.)

Timing and submission

You may submit a white paper at any time during the FIND program. The papers we receive will be reviewed before each scheduled FIND PI meeting. Meetings are anticipated to occur approximately three times a year, in March, June/July and October/November. The next FIND meeting is scheduled for June 27-28, 2007 in the Washington D.C. area. Priority in consideration for that meeting will be given to white papers that are received by Friday, May 14th, 2007.
Send your white paper to Darleen Fisher and Allison Mankin for coordination.

Will cable companies offer low cost cell phone service with Wifi peering?

[Many cable companies in North America have been struggling with how to get into the lucrative cell phone business, but they are daunted by the high cost of deploying a cell tower infrastructure. The recent Time Warner announcement with FON points to one possible model, where customers will be encouraged to operate open WiFi access spots from their homes and businesses. Although the story in the NYT is pitched as Time Warner allowing users to share access through their modems, the real opportunity is that users of the new WiFi-enabled cell phones will have a widespread, low-cost cell phone network, provided by their cable company at a fraction of the cost of deploying a normal cellular phone network. The revenue opportunities of cell phones are significantly higher than those of selling basic broadband, and it is not too hard to see that it would be in the cable company's interest to offer free broadband if customers agree to operate a FON open WiFi spot. New WiFi peering tools like the one developed at Technion will allow the range to be considerably extended -- BSA]

Time Warner broadband deal to allow users to share access
NY Times
By The Associated Press

In a victory for a small Wi-Fi start-up called Fon, Time Warner will
let its home broadband customers turn their connections into public
wireless access spots, a practice shunned by most Internet service
providers in the United States.

For Fon, which has forged similar agreements with service providers
across Europe, the deal will bolster its credibility with American
consumers. For Time Warner, which has 6.6 million broadband
subscribers, the move could help protect the company from an exodus
as free or inexpensive municipal wireless becomes more readily
available.


http://www.networkworld.com/news/2007/041907-wi-fi-software-routers.html?nwwpkg=alphadoggs


Free Wi-Fi software nixes need for routers
Wireless software from university can be downloaded at no cost

Researchers are making available software they say can be used to link nearby computers via Wi-Fi without a router and that someday could be used by cell phone users to make free calls.

Technion-Israel Institute of Technology scientists say their WiPeer software (available as a no-cost download here) can be used to link computers that are within 300 feet of each other inside buildings, or up to more than 900 feet apart outdoors.

Next up is extending the software to work with cell phones so that callers can bypass operators and talk to nearby people.
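WiPeer's own protocol is not described in the article, but the core idea behind router-less linking - peers discovering each other by exchanging small beacons over the local wireless segment - can be sketched in a few lines of Python. The beacon format, port number, and peer names below are all invented for illustration; a real ad-hoc tool would broadcast on the Wi-Fi interface rather than send to loopback.

```python
import json
import socket

DISCOVERY_PORT = 50007  # hypothetical port for this sketch

def make_beacon(name):
    """Encode a small JSON discovery beacon that a peer would broadcast."""
    return json.dumps({"peer": name, "service": "chat"}).encode()

def parse_beacon(data):
    """Decode a beacon back into (peer name, advertised service)."""
    msg = json.loads(data.decode())
    return msg["peer"], msg["service"]

# Demonstrate over loopback: one socket listens, another announces itself.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", DISCOVERY_PORT))
recv.settimeout(2)

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(make_beacon("alice"), ("127.0.0.1", DISCOVERY_PORT))

data, addr = recv.recvfrom(1024)
peer, service = parse_beacon(data)

send.close()
recv.close()
```

Everything beyond this - routing between peers that cannot hear each other directly, and the range extension mentioned above - is where the real research effort lies.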

Grid Portals and Web 2.0 for cyber-infrastructure and platforms

[For most scientific users, portals will be the most common way to interface with grids and cyber-infrastructure platforms. A good example of a platform cyber-infrastructure portal is the Eucalyptus project, where architectural collaborators can interact and link together various web services and workflows, such as rendering grids, network web services for HDTV, etc. The following IBM site provides a good tutorial on how to build a portal with Web 2.0 tools, WSRP, etc. -- BSA]

Eucalyptus portal http://iit-iti.nrc-cnrc.gc.ca/projects-projets/eucalyptus_e.html


IBM portal development

http://www-128.ibm.com/developerworks/grid/library/gr-stdsportal3/index.html

Built on top of grid middleware, grid portals act as gateways to the grid because they smooth the learning curve of using the grid. In the first of this three-part "Development of standards-based grid portals" series, we give an overview of grid portals, focusing on today's standards-based (JSR 168 and Web Services for Remote Portlets (WSRP) V1.0) second-generation grid portals. In Part 2, we develop three portlets to illustrate how a grid portal can be built using JSR 168-compliant portlets. And here in Part 3, we discuss the application of WSRP and the future of grid portals.


Today, grid portals play an important role as resource and application gateways in the grid community. Most of all, grid portals provide researchers with a familiar UI via Web browsers, which hides the complexity of computational and data grid systems. In this three-part series, we gave a general review of portals and discussed first- and second-generation grid portals. We built three grid portlets that demonstrate how a basic grid portal can be constructed using JSR 168-compliant portlets. We illustrated how these grid portlets are reused through WSRP and considered the future of grid portal development.

JSR 168 and WSRP V1.0 are two specifications that aim to solve interoperability issues between portlets and portlet containers. In particular, today's grid portals are service-oriented. On one hand, portals are acting as service clients to consume traditional data-centric Web services. On the other hand, portals are providing presentation-centric services so federated portals can be easily built.

With basic grid-related functions like proxy management and job submission successfully implemented, advanced grid portals today are aimed at the integration of complex applications, including visualisation and workflow systems. Web 2.0 techniques were presented, and Ajax was recommended for portal development to make grid portals more interactive and attractive to users. In the future, grid portals should also aim to include existing Web applications and, as security techniques become more developed, credential delegation will play an important role in the federation and sharing of grid services.
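The split the article describes - portals acting as clients of data-centric web services on one side, while exposing presentation-centric fragments for aggregation on the other - can be illustrated with a small Python sketch. (JSR 168 portlets are Java classes, so this is an analogy to the pattern, not the actual portlet API; the job-status service, field names, and job id are all invented.)

```python
def fetch_job_status(job_id, stub=None):
    """Data-centric side: a real portlet would call a grid job-monitoring
    web service here; the stub argument stands in for that remote call."""
    if stub is not None:
        return stub
    raise NotImplementedError("wire this to a real grid service endpoint")

def render_job_portlet(job_id, status):
    """Presentation-centric side: the HTML fragment a portlet hands back
    to the portal container for aggregation into the full page."""
    rows = "".join(f"<li>{k}: {v}</li>" for k, v in sorted(status.items()))
    return f"<div class='job-portlet'><h4>Job {job_id}</h4><ul>{rows}</ul></div>"

# The portal page assembles fragments like this one from many portlets.
fragment = render_job_portlet(
    "job-42",
    fetch_job_status("job-42", stub={"state": "RUNNING", "host": "node17"}),
)
```

WSRP takes this one step further: the rendered fragment itself is served remotely, so a federated portal can embed a portlet hosted by another organization without redeploying its code.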

Cyber-Infrastructure, Platforms, grids & web services for emergency response

[The Open Geospatial Consortium - www.opengeospatial.org - has a great video and web site demonstrating the use of cyber-infrastructure platform technologies such as web services, workflows, grids and networks for emergency response applications. They have deployed a test bed demonstrating the use of these tools in response to a chemical warehouse fire in the San Diego area. I highly recommend that anyone interested in attending CANARIE's Platforms workshop visit this site and watch the video. It will give you a good overall view of the type of middleware platforms we are looking to fund and deploy under the upcoming CANARIE network enabled platforms program. Thanks to Steve Liang for this pointer -- BSA]

http://sensorweb.geoict.net/

And here is a multimedia (Flash) presentation of the demo:
http://www.opengeospatial.org/pub/www/ows3/index.html

This is a movie of a Sensor Web for a disaster management application. http://sensorweb.geoict.net/Assets/SWEClient_004.avi
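Behind a demo like this, sensor-web clients typically talk to OGC Sensor Observation Services. As a rough sketch of what one such request looks like, here is how a client might assemble a key-value GetObservation URL in Python; the endpoint, offering id, and property URN are placeholders, and real SOS deployments may instead require an XML POST body.

```python
from urllib.parse import urlencode

def sos_get_observation_url(endpoint, offering, observed_property):
    """Build a key-value-pair GetObservation request for an OGC Sensor
    Observation Service. Parameter names follow the SOS convention; the
    concrete endpoint and identifier values here are invented."""
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation_url(
    "http://example.org/sos",            # hypothetical service endpoint
    "WAREHOUSE_FIRE_SENSORS",            # hypothetical offering id
    "urn:ogc:def:property:OGC:temperature",
)
```

The appeal of standardizing at this level is that the same client code can pull readings from any compliant sensor network, which is exactly what an emergency-response workflow spanning multiple agencies needs.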

IT-based Innovation in the 21st Century


Stanford EE Computer Systems Colloquium
4:15PM, Wednesday, Apr 25, 2007
HP Auditorium, Gates Computer Science Building B01
http://ee380.stanford.edu[1]


Topic: IT-based Innovation in the 21st Century

Speaker: Irving Wladawsky-Berger
Vice President, Technical Strategy and Innovation
IBM

About the talk:

Advances in information technologies, combined with open standards, and especially the Internet, are helping us build a global infrastructure with the potential to transform business, society and its institutions, and our personal lives, not unlike the impact that steam power had in ushering in the Industrial Revolution in generations past. The resulting environment is characterized by collaborative innovation and access to information on an unprecedented scale. It holds the promise to help us apply engineering disciplines, tools and processes to the design and management of highly complex systems, including businesses and organizations, as well as to make applications much more user-friendly through the use of highly visual, interactive interfaces.

Slides:

There are no downloadable slides for this talk at this time.

About the speaker:

Dr. Irving Wladawsky-Berger is responsible for identifying emerging technologies and marketplace developments critical to the future of the IT industry, and organizing appropriate activities in and outside IBM in order to capitalize on them. In conjunction with that, he leads a number of key innovation-oriented activities and formulates technology strategy and public policy positions in support of them. As part of this effort, he is also responsible for the IBM Academy of Technology and the company's university relations office.

Dr. Wladawsky-Berger's role in IBM's response to emerging technologies began in December 1995 when he was charged with formulating IBM's strategy in the then emerging Internet opportunity, and developing and bringing to market leading-edge Internet technologies that could be integrated into IBM's mainstream business. He has led a number of IBM's company-wide initiatives including Linux, IBM's Next Generation Internet efforts and its work on Grid computing. Most recently, he led IBM's on demand business initiative.

He joined IBM in 1970 at the Thomas J. Watson Research Center where he started technology transfer programs to move the innovations of computer science from IBM's research labs into its product divisions. After joining IBM's product development organization in 1985, he continued his efforts to bring advanced technologies to the marketplace, leading IBM's initiatives in supercomputing and parallel computing including the transformation of IBM's large commercial systems to parallel architectures. He has managed a number of IBM's businesses, including the large systems software and the UNIX systems divisions.

Dr. Wladawsky-Berger is a member of the University of Chicago Board of Governors for Argonne National Laboratories and of the Technology Advisory Council for BP International. He was co-chair of the President's Information Technology Advisory Committee, as well as a founding member of the Computer Sciences and Telecommunications Board of the National Research Council. He is a Fellow of the American Academy of Arts and Sciences. A native of Cuba, he was named the 2001 Hispanic Engineer of the Year.

Dr. Wladawsky-Berger received an M.S. and a Ph.D. in physics from the University of Chicago.

Dr. Wladawsky-Berger maintains a personal blog[2], which captures observations, news and resources on the changing nature of innovation and the future of information technology.

Contact information:

Irving Wladawsky-Berger
IBM

Embedded Links:
[ 1 ] http://ee380.stanford.edu
[ 2 ] http://irvingwb.typepad.com/