Tuesday, July 10, 2007
Forget about Big Brother -- Watch out for Big Entertainment
[Some excerpts from Gibbs Backspin column-- BSA]
http://www.networkworld.com/columnists/2007/070907backspin.html?rlh=0709gibbs1&
Last week I discussed the doublethink and newspeak of “the Campaign to Protect America,” an initiative launched by the Coalition Against Counterfeiting and Piracy, as well as the shameful strong-arm bullying tactics of the Recording Industry Association of America.
Here’s the worst-case scenario: Consumer PCs would, by law, be directly monitored by ISPs to ensure compliance, and the legal consequences for any attempt to circumvent monitoring would make the punishment for murder look like a slap on the wrist.
... in Australia there is an example of a real foray by Big Entertainment into the lives of consumers. The customers of an Australian ISP, Exetel, have all audio and video content in their accounts automatically deleted every night. Exetel has been doing this for over a year and their customers are informed when they sign up that this will happen.
But what really matters is why the company is doing this: According to Exetel’s FAQ, the reason for the nightly purge isn’t anything as sensible as space conservation, but what it says is its “hard approach to copyright issues.”
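Part of what makes this worrying is how little machinery such a policy requires. Below is a minimal sketch of what a nightly purge job might look like; the storage layout, file extensions, and schedule are my assumptions for illustration and reflect nothing of Exetel's actual implementation.

```python
#!/usr/bin/env python
"""Hypothetical sketch of a nightly audio/video purge.
Paths and extensions are illustrative assumptions only."""
import os

MEDIA_EXTENSIONS = {".mp3", ".wav", ".wma", ".avi", ".mpg", ".mov", ".wmv"}
ACCOUNT_ROOT = "/srv/customer_accounts"  # assumed storage layout

def purge_media(root):
    """Walk every account directory and delete audio/video files."""
    deleted = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in MEDIA_EXTENSIONS:
                os.remove(os.path.join(dirpath, name))
                deleted += 1
    return deleted

if __name__ == "__main__":
    # Scheduled nightly, e.g. from cron: 0 3 * * * /usr/local/bin/purge_media.py
    print("Deleted %d media files" % purge_media(ACCOUNT_ROOT))
```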
AT&T is on record that it plans to develop and deploy mechanisms for finding and removing copyrighted material from its network. If AT&T does do such a thing, then it is certain that every other major ISP, like the lemmings they are, will follow suit and the consequences will be tremendous.
For a start, the RIAA’s campaign to prosecute people it believes to be infringing on members’ copyrights will escalate, because it can demand that the ISPs inform it of any infringements they discover. The proceeds from this extortion will ensure enough cash flow to keep the RIAA’s legal machine in top gear.
International Workshop on Scientific Instruments and Sensors on the Grid
[For those of you who are planning on deploying engineering virtual organizations, national platforms or large sensor and instruments networks this might be a useful workshop. Here are some additional pointers on instruments for the grid for those who may be interested. Thanks to Herve Guy for this pointer --BSA]
Grid enabled instruments middleware
www.inocybe.ca
NSF Engineering Virtual Organizations http://www.nsf.gov/pubs/2007/nsf07558/nsf07558.htm
CANARIE's Network Enabled Platforms Workshop http://www.canarie.ca/conferences/platforms/index.html
Second International Workshop on Scientific Instruments and
Sensors on the Grid
Held in conjunction with ISSNIP 2007
Melbourne, Australia
December 3-6, 2007 http://www.issnip.org/2007/
Grids for computation and storage are mature and are key components of e-Science. Data for supporting or falsifying theories comes from experiments that involve making observations with real instruments and sensors and from simulation (in silico experiments). Efforts are currently underway to integrate instruments and sensors with computing and storage Grids to create a complete fabric for conducting e-Science, from observation to publication. The aim of this workshop is to explore the integration of instruments and sensors into computing and storage grids.
Grid-enabling instruments and sensors encompasses themes of remote access by humans and software agents, expanded availability of instruments by a broader possibly non-traditional audience, integration of instruments into software-driven analytical processes and workflows, and infrastructure for autonomic event detection and response on many scales and in many settings.
As Grids co-evolve with Web 2.0 technologies, SOA and existing Grid approaches need to be re-thought. This presents special challenges and opportunities for instruments and sensor Grids.
Many benefits accrue from sensors and instruments as grid actors. Utilization and throughput of existing facilities may be improved through remote access. Instruments can be elements in an experimental workflow involving planning, observing, analysing, discovering and publishing, and sensing and computing are brought closer together to support the coupling of simulation and observation in real time (e.g. nowcasting in environmental and earth sciences and structural testing in civil engineering). In these applications the sensor network can evolve with the model, supporting rapid testing and modification of theories and optimised parameterisation of models.
Beyond these high-level benefits, some more mundane ones include the possibility of automatically acquiring high-grade, detailed metadata with observations that can be used to diagnose problems in the instrument or experiment. High-quality metadata also extends the usefulness of data, increasing the possibility of re-use in some later study. Investments in instrumentation can be made more efficiently knowing that research infrastructure could be universally accessible to the community, and that one aspect of return on investment, duty cycle or utilization, can be easily measured.
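As a concrete illustration of that last point, the sketch below shows one way an instrument reading could be bundled automatically with provenance metadata. The class and field names are assumptions for the example, not any particular middleware's API.

```python
"""Minimal sketch: capture detailed instrument metadata automatically
with each observation, so the data can be diagnosed and re-used later.
All names here are illustrative assumptions."""
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    value: float
    instrument_id: str        # provenance recorded with every reading
    firmware: str
    calibration_date: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class Sensor:
    def __init__(self, instrument_id, firmware, calibration_date):
        self.instrument_id = instrument_id
        self.firmware = firmware
        self.calibration_date = calibration_date

    def read(self, raw_value):
        """Bundle the raw reading with provenance metadata."""
        return Observation(raw_value, self.instrument_id,
                           self.firmware, self.calibration_date)

obs = Sensor("diffractometer-01", "2.4.1", "2007-05-12").read(42.0)
```

Every observation then carries enough context to diagnose instrument problems or to justify re-use of the data in a later study.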
This workshop will address these issues and challenges, as they encompass both real and virtual instruments and sensors. We are soliciting papers on topics including, but not limited to, the following:
* Integration of instruments and sensors into Grids
* Protocols and service models for Grid-enabled instruments and sensors
* The role of Web 2.0 technologies and APIs in instrument-driven science
* A discussion or survey of extant models from Defence or industry that may be useful in developing a reusable approach to Grid-enabling sensors and instruments for e-Science
* Interoperability and compatibility of Grid-enabled instrumentation and applications
* Representation and control of instruments and sensors including safety issues
* Remote access to instrumentation and sensors
* Virtual organization and security issues
* Data management and metadata issues related to real-time data sources
* Social, administrative, and financial issues of Grid-enabled instruments and sensors
* Agent-based and autonomic computing for instrument and sensor networks
Organization
General Chair
Donald F. McMullen, Indiana University
Organizing/Program Committee
Kenneth Chiu, State University of New York (SUNY) at Binghamton, USA
Simon Coles, University of Southampton, UK
Geoffrey Fox, Indiana University, USA
Jeremy Frey, University of Southampton, UK
Fang-Pang Lin, National Center for High-Performance Computing, Taiwan
Marlon Pierce, Indiana University, USA
Peter Turner, University of Sydney, Australia
Enterprise 2.0 and web 2.0 in 5 minutes, "the machine is us/ing us..."
[For those interested in impact of web 2.0 technologies in research and the enterprise I recommend the following.
Some good presentations at Enterprise 2.0 http://www.enterprise2conf.com/2007/presentations/
Also Hans Rosling's talk at TED on integrating and visualizing open data with web services and other tools is highly recommended http://tedblog.typepad.com/tedblog/2006/06/hans_rosling_on.html
Thanks to Markus Buckhorn for this pointer -- BSA]
For your amusement...
This is a clever little video clip by an anthropologist discussing (while demonstrating) some Web 1.0 and Web 2.0 concepts. Ideal for those net-geners who need to learn things in 5-minute chunks...
http://www.youtube.com/watch?v=NLlGopyXT_g
Far more entertaining and musical than some of the other 'explanations' I have seen :-)
Cheers,
Markus
New National Academies reports - cybersecurity, dependable systems, and privacy
[From Dave Farber's Iper list-- BSA]
Subject: 3 new National Academies reports - on cybersecurity,
dependable systems, and privacy
Dave--IPers may be interested in the following three reports that have recently been released by CSTB:
* Toward a Safer and More Secure Cyberspace, which examines the vulnerabilities of the Internet, offers a strategy for future research aimed at countering cyber attacks, and lays out a broad research agenda that includes traditional, problem-specific studies as well as the unconventional ideas necessary to combat current and future cybersecurity threats;
* Software for Dependable Systems: Sufficient Evidence?, which discusses the meaning of dependability in a software and systems context, illustrates how the growing use and complexity of software necessitates a different approach to ensuring dependability, and recommends an evidence-based approach to achieving justifiable confidence in and greater dependability of software; and
* Engaging Privacy and Information Technology in a Digital Age, which examines how threats to privacy are evolving, ongoing information technology trends, and how society can balance the interests of individuals, businesses, and government in ways that promote privacy reasonably and efficiently.
All can be viewed at www.cstb.org by following the "National Academies Press" link and then the "Full Text" link under Read.
Thanks/Jon
================
Jon Eisenberg, Ph.D.
Director
Computer Science and Telecommunications Board, The National Academies
jeisenbe@nas.edu
202-664-1235 (find-me-anywhere)
202-334-2605 (backup--main office)
500 Fifth Street NW, Keck 959, Washington, DC 20001
Wednesday, July 4, 2007
Convergence of Web 2.0, Cyber-infrastructure and Next Generation Internet
[One of the more exciting projects related to next generation Internet is the "Living The Future" program at MIT. To me this program exemplifies some key features that are necessary to address the challenges of developing a clean-slate Internet architecture. First and foremost, the program is intended to engage students and members of the Cambridge community in the project. This is quite a bit different from most other next generation Internet initiatives, which are pure academic research projects with little or no user input or market validation. To my mind some of the best ideas on the future of the Internet can come from students. In most cases, they are the first to adapt to new ideas such as Web 2.0, P2P etc. As well, a study by University of Toronto researchers demonstrated that students were the primary vector for popularizing the Internet outside the university/academic community. Without engagement of early adopters, like students, I think most next generation Internet projects will be doomed to failure.
One of the more interesting papers on the "Living the Future" web site is one by Dirk Trossen, which shows the potential linkages between Internet 1.0 enabling the creation of Web 1.0, which in turn enabled Web 2.0. Web 2.0 may in turn enable the creation of the next generation Internet. One example of this type of thinking is CANARIE's UCLP project, where Web 2.0 tools such as web services, workflows and mashups allow end users to orchestrate and architect their own Internet networks. But the potential of using Web 2.0 to let users compose their own network services and solutions, in both the wireless and wired spaces, has barely been tapped, as the "Living the Future" project exemplifies. -- BSA]
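To make the UCLP example above more concrete, here is a hedged sketch of the kind of Web 2.0 orchestration being described: a user composing a network by calling provisioning web services. The endpoint, resource names, and payload are entirely hypothetical; they illustrate the pattern, not UCLP's real API.

```python
"""Hypothetical sketch of user-controlled network orchestration in the
UCLP style. The service endpoint and payload format are invented for
illustration only."""
import json
import urllib.request

BASE = "https://uclp.example.org/api"  # hypothetical provisioning service

def create_lightpath(src, dst, bandwidth_mbps):
    """Ask the (hypothetical) service for a point-to-point lightpath."""
    payload = json.dumps({"source": src, "destination": dst,
                          "bandwidth_mbps": bandwidth_mbps}).encode()
    req = urllib.request.Request(BASE + "/lightpaths", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A mashup-style workflow: chain two segments into one end-to-end network.
# segment_a = create_lightpath("ottawa", "toronto", 1000)
# segment_b = create_lightpath("toronto", "chicago", 1000)
```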
Web 2.0 and Next Generation Internet http://cfp.mit.edu/events/slides/jan06/Dirk-Trossen.pdf
Living the Future Web site http://viral.media.mit.edu/wiki/tiki-index.php?page=LTF
University students played critical role in diffusion of the Internet to the global community http://www.news.utoronto.ca/bin6/060222-2074.asp
How students at university dormitories can play leading role in developing next generation Internet http://www.canarie.ca/canet4/library/customer/Customer_Owned_Network_for_university_dorm.ppt
The Effects of Broadband Deployment on Output and Employment
[This seems to be a reasonably well researched paper on the effects of broadband deployment on output and employment. Many previous studies on this topic tended to involve a lot of hand waving. I am a big believer in reduced regulation and that a truly competitive private sector market is the best solution to accelerate the rollout of broadband. However I remain skeptical of the intended purpose of the study when I see that the authors regularly consult for the big RBOCs and their conclusion is that "new regulatory policies not reduce investment incentives for these carriers". Many countries that are ahead of the US in terms of broadband penetration have concluded that some variant of structural separation is essential to promote broadband. I think private-sector-led structural separation initiatives, not necessarily driven by government regulation, are where things will naturally end up. As an example see http://www.canarie.ca/canet4/library/customer/Green_Broadband.ppt
Thanks to Theodore Stout and Ross MacLeod for this pointer -- BSA]
http://www3.brookings.edu/views/papers/crandall/200706litan.pdf
Effects of Broadband Deployment on Output and Employment
The Power of Broadband
A new Brookings Institution study, by researchers Robert Crandall, William Lehr, and Robert Litan, takes a hard look at the economic effects of broadband deployment. While most casual observers would suspect that broadband deployment is a good thing, there are remarkably few empirical studies that detail its effects on output and employment. This new research indicates that broadband deployment does indeed have positive economic impacts. In fact, for every one percent increase in a state’s broadband penetration rates, employment increases at a rate of 0.2 to 0.3 percent per year. If these figures are aggregated to the national level, we find that this increase could lead to an additional 300,000 new jobs per year. Based on these early findings, the researchers recommend that state policymakers be more aggressive in terms of promoting competition in broadband services. This competition will help reduce costs, improve services, and further hasten deployment efforts.
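A quick back-of-the-envelope check of that aggregation is easy to run. The total-employment baseline below is my own assumption (roughly the size of US employment in 2007), not a figure taken from the study.

```python
"""Sanity-check the study's national aggregation. US_EMPLOYMENT is an
assumed 2007 baseline, not a number from the paper."""
US_EMPLOYMENT = 137_000_000  # assumption: approx. US nonfarm employment, 2007

for growth_rate in (0.002, 0.003):  # the study's 0.2% to 0.3% per year
    jobs = US_EMPLOYMENT * growth_rate
    print("%.1f%% of %d = %d jobs/year" % (growth_rate * 100, US_EMPLOYMENT, jobs))

# 0.2% -> ~274,000 and 0.3% -> ~411,000 jobs per year, a range that
# brackets the roughly 300,000 new jobs per year quoted above.
```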
Access the June 2007 Brookings Institution paper by Robert Crandall, William Lehr and Robert Litan.
"The finding of the strong link between broadband use and state-level employment has important policy implications, both on the demand-side and the supply-side. In particular, these results suggest that all levels of government should follow policies that encourage broadband competition, which will lead to lower prices and hence greater use. It should be noted, however, that increased use will require an expansion of supply, specifically greater investment by service providers in broadband infrastructure, which already is facing capacity constraints as new applications, such as video streaming, become ever more popular. It is critical, therefore, that new regulatory policies not reduce investment incentives for these carriers."
The relationship between Web 2.0 and Broadband last mile
[Excerpts from Gigacom article -- BSA]
http://gigaom.com/2007/07/01/france-web20/
And now the French Web 2.0 Wave
France leads Europe in its enthusiasm for Web 2.0 startups, an industry that has doubled in size across the continent since 2005. According to Dow Jones/Venture One data, French start-ups raised close to $40 million in venture capital in 2006, accounting for 40% of the total dollars invested in the category across Europe last year ($101 million), and nearly double the money invested in British Web2.0 companies. What are the forces behind this French 2.0 wave?
The easy availability of super-speed connections has made video an essential part of French Internet life. France’s leading video-sharing platform, Dailymotion, already gets 35 million unique visitors per month. Dailymotion is the #2 video site behind YouTube, with France making up a big component of its traffic. According to ComScore, French consumers spend a greater percentage of their total hours online viewing streaming video (13%) than do consumers in the UK (10%), in Germany (9%) – or even the United States (6%)!
As successful as they’ve been in the consumer space, French startups are now busy taking Web2.0 plays into the enterprise: BlueKiwi, which just raised $5.4 million (€ 4 million) from Soffinova, specializes in Web2.0 software solutions (a mix of blogs, wikis and other social networking platforms) for big corporations such as Danone, Dassault and the French postal services.
A Good Summary of Grid Projects and Developments
[IBM has put together a good list of grid projects around the world -- BSA]
http://www.ibm.com/developerworks/grid/library/gr-gridorgs/index.html
Edna Nerona (edna@legacystudios.biz), Consultant
26 Jun 2007
Previously, we gave you a "Recommended reading list for grid developers" and "A starter set of open source tools for grid developers." Now we've compiled a list of some of the production projects and organizations that are shaping the future of grid computing. This article provides a comprehensive list of current projects in such diverse areas as cancer research, astronomy, and physics, just to name a few. We also cover tool kits, security, and data management. These were taken from a variety of online sources to introduce programmers, administrators, and new users to specific information and projects related to using, deploying, and developing grid infrastructure.
Production grid organizations
This track will introduce you to production grid organizations, the problems they are solving, and how they're influencing grid technologies. Production grid deployments fall into various categories of grids: general-purpose grids, scientific and community grids, nationwide grids, regional grids, and university grids.
National and international general-purpose grids
Distributed European Infrastructure for Supercomputing Applications
As a consortium of leading national supercomputing centers, the Distributed European Infrastructure for Supercomputing Applications (DEISA) deploys and operates a secure production-quality distributed supercomputing environment. By enhancing and reinforcing European capabilities in high-performance computing, the research infrastructure facilitates scientific discoveries across a variety of science and technology fields. DEISA is built on a deep integration of existing national high-end platforms, coupled by a dedicated network and supported by innovative system and grid software.
DutchGrid
Established in 2000, DutchGrid has many successful integrated efforts and initiatives that span a range of scientific collaborations. As an open platform for academic and research grid computing, DutchGrid provides globally recognized identity certificates to grid users in the Netherlands. The DutchGrid CA is fully project-neutral. Any not-for-profit researcher and academic user can obtain personal and server or host certificates for use with grid applications.
Enabling Grids for E-science
The Enabling Grids for E-science (EGEE) project brings together scientists and engineers from more than 90 institutions in 32 countries worldwide to provide a seamless grid infrastructure for e-science that is available to scientists. The EGEE grid consists of more than 30,000 CPUs available to users 24 hours a day, seven days a week, in addition to about 5 petabytes (5 million gigabytes) of storage, and maintains 30,000 concurrent jobs on average. Having such resources available changes the way scientific research takes place. EGEE is a four-year project funded by the European Commission.
Grid5000
The purpose of the Grid5000 project is to provide a highly reconfigurable, controllable, and monitorable experimental grid platform that grid researchers can use as a testbed for experiments in all the software layers, from the network protocols up to the applications. Grid5000 brings together nine geographically distributed sites in France, featuring 5,000 CPUs: Bordeaux, Grenoble, Lille, Lyon, Nancy, Orsay, Rennes, Sophia-Antipolis, and Toulouse.
LA Grid
Pronounced "lah grid," the LA Grid is the first-ever comprehensive computing grid to connect faculty, students, and researchers from institutions across the United States, Latin America, and Spain to collaborate on complex industry applications for business and societal needs in the context of health. In addition to universities, LA Grid has partnered with industries worldwide, enhancing innovations in many areas, including healthcare, life sciences and hurricane disaster, life sciences, and disaster mitigation.
Open Science Grid
The Open Science Grid (OSG) is a distributed computing infrastructure for scientific research. The OSG consortium's unique alliance of universities, national laboratories, scientific collaborations, and software developers brings petascale computing and storage resources into a shared uniform cyberinfrastructure.
TeraGrid
TeraGrid is an open scientific discovery infrastructure funded by the National Science Foundation. Combining leadership-class resources at nine partner sites, TeraGrid creates an integrated, persistent computational resource. Interconnected via a high-speed, gigabits-per-second dedicated national network, TeraGrid provides more than 150 teraflops of computing power, nearly 2 petabytes of rotating storage, numerous scientific data collections, specialized data analysis and visualization resources, and scientific gateways and user portals that simplify access to these valuable resources.
Scientific and community grids
AstroGrid
AstroGrid is an open source project built to create a working Virtual Observatory (VO) for U.K. and international astronomers. Funded by the U.K. government, AstroGrid works closely with other VO projects worldwide through the International Virtual Observatory Alliance (IVOA). As a leading member of this community, AstroGrid provides internationally recognized interface standards that are emerging to promote scientific integration of astronomical data and processing resources worldwide.
cancer Biomedical Informatics Grid
The cancer Biomedical Informatics Grid (caBIG) is a voluntary network or grid connecting individuals and institutions to enable the sharing of data and tools, creating a worldwide source of cancer research. The goal is to speed the delivery of innovative approaches for the prevention and treatment of cancer. The infrastructure and tools created by caBIG also have broad utility outside the cancer community. caBIG is being developed under the leadership of the National Cancer Institute's Center for Bioinformatics.
International Virtual Data Grid Laboratory
The International Virtual Data Grid Laboratory (iVDGL) is a global data grid that will serve forefront experiments in physics and astronomy. Its computing, storage, and networking resources in the United States, Europe, Asia, and South America provide a unique laboratory that will test and validate grid technologies at international and global scales. Sites in Europe and the United States will be linked by a multigigabit-per-second transatlantic link funded by the European DataTAG project.
World Community Grid
World Community Grid's mission is to create the world's largest public computing grid to tackle projects that benefit humanity. The success of the World Community Grid depends upon individuals collectively contributing their unused computer time to change the world for the better. World Community Grid is making technology available only to public and not-for-profit organizations to use in humanitarian research that might otherwise not be completed due to the high cost of the computer infrastructure required in the absence of a public grid.
Worldwide Large Hadron Collider Computing Grid
The Worldwide Large Hadron Collider (LHC) Computing Grid is designed to handle the unprecedented quantities of data that will be produced by experiments at CERN's LHC from 2007 onward. The computational requirements of the experiments that will operate at the LHC are enormous. Some 12-14 petabytes of data will be generated each year, the equivalent of more than 20 million CDs. Analyzing this data will require the equivalent of 70,000 of today's fastest PCs. The LHC Computing Grid will meet these needs by deploying a worldwide computational grid, integrating the resources of scientific computing centers spread across Europe, the United States, and Asia into a global virtual computing service.
U.S. regional grids
Northwest Indiana Computational Grid
Northwest Indiana Computational Grid (NWICG) is a partnership of researchers and educators from Purdue University-West Lafayette, Purdue University-Calumet, and the University of Notre Dame. With a focus on national science and research initiatives, NWICG creates cyberinfrastructure that supports the solution of breakthrough-level problems and enables continuing world-class advances in the underlying technologies of high-performance computing. The three universities are developing a scalable, high-speed, high-bandwidth, science-driven computational grid for Northwest Indiana in collaboration with the Department of Energy's Argonne National Laboratory.
SURAGrid
Southeastern Universities Research Association (SURA) is a consortium of organizations collaborating and combining resources to help bring grid technology to the level of seamless, shared infrastructure. The SURAgrid focuses on direct access to a rich set of distributed capabilities for participating research and education communities. SURAgrid promotes the development of contributed resources, project-specific tools and environments, highly specialized access, and gateways to national and international cyberinfrastructure.
Texas Internet Grid for Research and Education
The mission of the Texas Internet Grid for Research and Education (TIGRE) project is to create a computational grid that brings together computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. By integrating massive computing power and enhancing the computational capabilities available to Texas researchers in academia, government, and industry, TIGRE hopes to aid in the advancement of biomedicine, energy and the environment, aerospace, materials science, agriculture, and information technology.
Open source grid projects
These grid projects cover a diverse set of areas, ranging from grid infrastructure toolkits and middleware toolkits to data tools, security, and more. The following represent some fast-moving production grid projects and tools. Visit these sites often to keep up to date on how they are leading the progress in grid technology.
Grid infrastructure projects
Open source grid infrastructure projects that can help you set up your own grid.
Globus Toolkit
Open source software developed by the Globus Alliance. The Globus Alliance is an international collaboration that conducts R&D to create fundamental grid technologies. The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability.
Berkeley Open Infrastructure for Network Computing
Berkeley Open Infrastructure for Network Computing (BOINC) is a software platform for projects, like distributed.net and SETI@home, that use millions of volunteer computers as a parallel supercomputer. Source code is available for the platform, and interested C++ developers are encouraged to help develop the platform code. BOINC is currently supported on Windows®, Linux®, UNIX®, and Mac OS X. CPU platform requirements may vary among project clients using BOINC.
Uniform Interface to Computing Resources
Uniform Interface to Computing Resources (UNICORE) offers a ready-to-run grid system, including client and server software. UNICORE makes distributed computing and data resources available in a seamless and secure way in intranets and the Internet. The UNICORE design focuses on several core principles: seamless access to heterogeneous environments, security, site autonomy, a powerful GUI client that provides ease of use, and quick-start bundles that allow for simple installation.
Grid middleware projects
The following projects have successfully provided U.S. and international projects with the advanced tools to easily access numerous grid functionalities, such as computation, visualization, and storage resources. You can interact with various grids or have one customized to work with your own grid.
gLite
gLite is the next generation middleware for grid computing, born from the collaborative efforts of more than 80 people in 12 academic and industrial research centers as part of the EGEE Project. gLite provides a bleeding-edge best-of-breed framework for building grid applications tapping into the power of distributed computing and storage resources across the Internet.
National Research Grid Initiative
The National Research Grid Initiative (NAREGI), in Japan, focuses on the research and development of grid middleware so that a large-scale computing environment can be implemented for widely distributed, advanced research and education.
Ninf-G
Ninf is a Japanese project developing programming middleware which enables users to access various resources, such as hardware, software, and scientific data on the grid with an easy-to-use interface. Ninf-G is open source software that supports development and execution of grid-enabled applications using Grid Remote Procedure Call (GridRPC) on distributed computing resources.
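For readers unfamiliar with GridRPC, the sketch below mimics its call pattern (initialize, bind a handle to a remote function, call, finalize) in Python, purely for illustration. Ninf-G's real API is a C library; these names only mirror its shape, and the "remote" call here is simulated locally.

```python
"""Illustration of the GridRPC call pattern that Ninf-G implements.
Python stand-in only -- the real API is C, and the remote call is
simulated locally to keep the sketch self-contained."""

class FunctionHandle:
    """Binds a function name to a handle, in the spirit of
    GridRPC's grpc_function_handle_init()."""
    _registry = {"vector_sum": sum}  # stand-in for a remote server's exports

    def __init__(self, host, func_name):
        self.host = host                      # where the call would run
        self.func = self._registry[func_name]

    def call(self, *args):
        # Real GridRPC marshals the arguments, ships them to the remote
        # resource, and blocks for the result; here we just run locally.
        return self.func(*args)

def grpc_initialize(config_file):
    """Placeholder for reading server definitions (cf. grpc_initialize)."""

def grpc_finalize():
    """Placeholder for releasing library state (cf. grpc_finalize)."""

grpc_initialize("client.conf")
handle = FunctionHandle("compute.example.org", "vector_sum")
print(handle.call([1, 2, 3]))  # -> 6
grpc_finalize()
```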
NorduGrid
NorduGrid middleware, also known as Advanced Resource Connector (ARC), is an open source software solution distributed under the GPL license, enabling production-quality computational and data grids. ARC provides a reliable implementation of the fundamental grid services, such as information services, resource discovery and monitoring, job submission and management, brokering and data management, and resource management. Most of these services are provided through the security layer of the GSI. The middleware builds upon standard open source solutions like OpenLDAP, OpenSSL, SASL and Globus Toolkit (GT) libraries.
OGSA-DAI
The OGSA-DAI project focuses on the development of middleware to assist with the access and integration of data from separate sources through the grid. The project works closely with the Globus, OMII-Europe, NextGRID, SIMDAT, and BEinGRID projects, ensuring that the OGSA-DAI software works in a variety of grid environments.
ProActive
ProActive is the Java™ grid middleware library (with open source code under LGPL license) for parallel, distributed, and multithreaded computing. With a reduced set of simple primitives, ProActive provides a comprehensive API to simplify the programming of grid computing applications, distributed on LAN, on clusters of workstations, or on Internet grids.
Security projects
To protect vital infrastructure and information, security is a constantly evolving requirement of grid computing. These projects represent some of the cutting-edge security standards and implementations of grid security solutions.
GridShib
An NSF-funded project between NCSA and the University of Chicago, GridShib integrates federated authorization infrastructure (Shibboleth) with grid technology (the Globus Toolkit) to provide attribute-based authorization for distributed scientific communities.
Grid User Management System
The Grid User Management System (GUMS) is a Grid Identity Mapping Service. Identity mapping is necessary when a site's resources do not use grid credentials natively, but instead use a different mechanism to identify users, such as UNIX accounts or Kerberos principals.
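The core idea is small enough to show in a few lines. Below is a toy illustration of DN-to-account mapping; the mapping table and policy are invented for the example, and GUMS itself is a Java service with its own configuration format, not this API.

```python
"""Toy illustration of grid identity mapping: translate a certificate
distinguished name (DN) into a local UNIX account. The table below is
invented for the example; it is not GUMS's format."""

GRID_MAPFILE = {
    # certificate DN                                  -> local account
    "/DC=org/DC=doegrids/OU=People/CN=Alice Example": "alice",
    "/DC=org/DC=doegrids/OU=People/CN=Bob Example":   "cmsgroup",  # group account
}

def map_identity(dn):
    """Return the local account for a DN, or refuse unknown identities."""
    try:
        return GRID_MAPFILE[dn]
    except KeyError:
        raise PermissionError("no local mapping for " + dn)

print(map_identity("/DC=org/DC=doegrids/OU=People/CN=Alice Example"))
```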
PRIvilege Management and Authorization
PRIvilege Management and Authorization (PRIMA) is a system that provides enhanced grid security. PRIMA is a comprehensive grid security model and system. In PRIMA, a privilege is a platform-independent, self-contained representation of a fine-grain right. PRIMA achieves platform independence of privileges by externalizing fine-grain access rights to resource objects from the resource's internal representation.
Resource management and scheduling
An essential component of grids is to manage and schedule jobs across resources. These projects demonstrate a few strategies.
Community Scheduler Framework
Community Scheduler Framework (CSF) is an open source implementation of an OGSA-based metascheduler. It supports the emerging WS-Agreement specification and the Globus Toolkit's GRAM service. CSF fills in gaps in the existing resource management picture and is integrated with Platform LSF and Platform Multicluster. The CSF open source project is included in the Globus Toolkit V4.0 release.
Special Priority and Urgent Computing Environment
High-performance modeling and simulation are playing a driving role in decision-making and prediction. For time-critical emergency support applications, such as severe weather prediction, flood modeling, and influenza modeling, late results can be useless. A specialized infrastructure is needed to provide computing resources quickly, automatically, and reliably. Special Priority and Urgent Computing Environment (SPRUCE) is a system to support urgent or event-driven computing on traditional supercomputers and distributed grids.
Grid resource monitoring
Monitoring resources and applications is key to the success of grids. Through an easy-to-use interface, these sophisticated tools help users gather, catalog, and monitor various types of resources. Moreover, systems administrators are also able to monitor the health of their grids. These evolving grid projects list a few of the open source options.
GridCat
GridCat is a high-level grid cataloging system using status dots on geographic maps, as well as a catalog. The maps help debug site troubles. The catalog contains information on site readiness, along with much other valuable per-site information, to help with job submission and job scheduling for application users and grid scheduler developers. GridCat tries to present the grid site in its simplest status representation.
Gridscape II
Gridscape II is a customized portal component that can be used on its own or plugged in to complement existing grid portals. Gridscape II manages the gathering of information from arbitrary heterogeneous and distributed sources and presents it together seamlessly within a single interface. It leverages the Google Maps API in order to provide a highly interactive user interface. Gridscape II is simple and easy to use, providing a solution for those who don't wish to invest heavily in developing their own monitoring portals from scratch, and also for those who want something easy to customize.
Storage and data management
From open source high-performance file systems to seamless access of data from heterogeneous environments, the following projects bring together and optimize a variety of storage and data management solutions. This track emphasizes storing, managing, and moving data across resources and connecting data resources over a network.
Lustre
The Lustre File System, a high-performance open source file system from Cluster File Systems Inc., is a distributed file system that eliminates the performance, availability, and scalability problems present in many traditional distributed file systems. Lustre is a highly modular next-generation storage architecture that combines established open standards, the Linux operating system, and innovative protocols into a reliable, network-neutral data storage and retrieval solution. Providing high I/O throughput in clusters and shared-data environments, Lustre also provides independence from the location of data on the physical storage, protection from single points of failure, and fast recovery from cluster reconfiguration and server or network outages.
NeST
NeST is a software network storage device providing secured storage allocation for a specific time period. The size and duration of allocation units or lots are negotiated between NeST and the user or application. These lots can also be expanded in size, extended in time and/or subdivided into a hierarchy. Plus, NeST offers access control lists for lot and file access. NeST offers multiple protocol interfaces, including its internal Chirp, HTTP and GSI-FTP.
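The "lot" abstraction is easy to picture with a small sketch: an allocation with a negotiated size and lifetime that can later be expanded or extended. The class and method names below are mine for illustration, not NeST's protocol.

```python
"""Sketch of NeST's lot concept: a storage allocation with a negotiated
size and duration. Names are illustrative, not NeST's own."""
from datetime import datetime, timedelta

class Lot:
    def __init__(self, size_mb, duration):
        self.size_mb = size_mb
        self.expires = datetime.utcnow() + duration

    def expand(self, extra_mb):
        self.size_mb += extra_mb   # grow the allocation in place

    def extend(self, extra):
        self.expires += extra      # lengthen its lifetime

lot = Lot(size_mb=500, duration=timedelta(days=7))
lot.expand(250)
lot.extend(timedelta(days=3))
```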
SAMGrid
SAMGrid is a general data handling system designed to be a key device for experiments with large (petabyte-size) data sets and widely distributed production and analysis facilities. The components now in production provide a versatile set of services for data transfer, data storage, and process bookkeeping on distributed systems.
UberFTP
Building upon the technologies of GridFTP, UberFTP is the first interactive GridFTP-enabled FTP client. The basic GridFTP client is not interactive and allows only one file transfer at a time. UberFTP provides interactive tools that work much like the popular NCFTP tool. It supports GSI authentication, parallel data channels, and third-party transfers.
Conclusion
Grid computing is one of the most exciting technologies having a powerful effect on the way we solve complex problems and share diverse resources. In addition to cancer research and physics, it also has great influence on security and authentication, discovery, monitoring, information services, data management, resource management, and scheduling.
http://www.ibm.com/developerworks/grid/library/gr-gridorgs/index.html
Edna Nerona (edna@legacystudios.biz), Consultant, Consultant
26 Jun 2007
Previously, we gave you a "Recommended reading list for grid developers" and "A starter set of open source tools for grid developers." Now we've compiled a list of some of the production projects and organizations that are shaping the future of grid computing. This article provides a comprehensive list of current projects in such diverse areas as cancer research, astronomy, and physics, just to name a few. We also cover tool kits, security, and data management. These were taken from a variety of online sources to introduce programmers, administrators, and new users to specific information and projects related to using, deploying, and developing grid infrastructure.
Production grid organizations
This track will introduce you to production grid organizations, the problems they are solving, and how they're influencing grid technologies. Production grid deployments fall into various categories of grids: general-purpose grids, scientific and community grids, nationwide grids, regional grids, and university grids.
National and international general-purpose grids
Distributed European Infrastructure for Supercomputing Applications
As a consortium of leading national supercomputing centers, the Distributed European Infrastructure for Supercomputing Applications (DEISA) deploys and operates a secure production-quality distributed supercomputing environment. By enhancing and reinforcing European capabilities in high-performance computing, the research infrastructure facilitates scientific discoveries across a variety of science and technology fields. DEISA uses a deep integration of existing national high-end platforms, with a dedicated network and support by innovative system and grid software.
DutchGrid
Established in 2000, DutchGrid has many successful integrated efforts and initiatives that span a range of scientific collaborations. As an open platform for academic and research grid computing, DutchGrid provides globally recognized identity certificates to grid users in the Netherlands. The DutchGrid CA is fully project-neutral. Any not-for-profit researcher and academic user can obtain personal and server or host certificates for use with grid applications.
Enabling Grids for E-science
The Enabling Grids for E-science (EGEE) project brings together scientists and engineers from more than 90 institutions in 32 countries worldwide to provide a seamless grid infrastructure for e-science that is available to scientists. The EGEE grid consists of more than 30,000 CPUs available to users 24 hours a day, seven days a week, in addition to about 5 petabytes (5 million gigabytes) of storage, and maintains 30,000 concurrent jobs on average. Having such resources available changes the way scientific research takes place. EGEE is a four-year project funded by the European Commission.
Grid5000
The purpose of the Grid5000 project is a highly reconfigurable, controllable, and monitorable experimental grid platform grid researchers can use as a testbed for experiments in all the software layers between the network protocols and up to applications. Grid5000 brings together nine sites geographically distributed in France, featuring 5,000 CPUs. These areas include Bordeaux, Grenoble, Lille, Lyon, Nancy, Orsay, Rennes, Sophia-Antipolis, and Toulouse.
LA Grid
Pronounced "lah grid," the LA Grid is the first-ever comprehensive computing grid to connect faculty, students, and researchers from institutions across the United States, Latin America, and Spain to collaborate on complex industry applications for business and societal needs in the context of health. In addition to universities, LA Grid has partnered with industries worldwide, enhancing innovations in many areas, including healthcare, life sciences and hurricane disaster, life sciences, and disaster mitigation.
Open Science Grid
The Open Science Grid (OSG) is a distributed computing infrastructure for scientific research. The OSG consortium's unique alliance of universities, national laboratories, scientific collaborations, and software developers brings petascale computing and storage resources into a shared uniform cyberinfrastructure.
TeraGrid
TeraGrid is an open scientific discovery infrastructure funded by the National Science Foundation. Combining leadership-class resources at nine partner sites, TeraGrid creates an integrated, persistent computational resource. Interconnected via a high-speed gigabits-per-second dedicated national network, TeraGrid provides more than 150 teraflops of computing power and nearly 2 petabytes of rotating storage, numerous scientific data collections, specialized data analysis tools, scientific gateways, and user portals to simplify access to valuable resources, and visualization resources.
Scientific and community grids
AstroGrid
AstroGrid is an open source project built to create a working Virtual Observatory (VO) for U.K. and international astronomers. Funded by the U.K. government, AstroGrid works closely with other VO projects worldwide through the International Virtual Observatory Alliance (IVOA). As a leading member of this community, AstroGrid provides internationally recognized interface standards that are emerging to promote scientific integration of astronomical data and processing resources worldwide.
cancer Biomedical Informatics Grid
The cancer Biomedical Informatics Grid (caBIG) is a voluntary network or grid connecting individuals and institutions to enable the sharing of data and tools, creating a worldwide source of cancer research. The goal is to speed the delivery of innovative approaches for the prevention and treatment of cancer. The infrastructure and tools created by caBIG also have broad utility outside the cancer community. caBIG is being developed under the leadership of the National Cancer Institute's Center for Bioinformatics.
International Virtual Data Grid Laboratory
The International Virtual Data Grid Laboratory (iVDGL) is a global data grid that will serve forefront experiments in physics and astronomy. Its computing, storage, and networking resources in the United States, Europe, Asia, and South America provide a unique laboratory that will test and validate grid technologies at international and global scales. Sites in Europe and the United States will be linked by a multigigabit-per-second transatlantic link funded by the European DataTAG project.
World Community Grid
World Community Grid's mission is to create the world's largest public computing grid to tackle projects that benefit humanity. The success of the World Community Grid depends upon individuals collectively contributing their unused computer time to change the world for the better. World Community Grid is making technology available only to public and not-for-profit organizations to use in humanitarian research that might otherwise not be completed due to the high cost of the computer infrastructure required in the absence of a public grid.
Worldwide Large Hadron Collider Computing Grid
The Worldwide Large Hadron Collider (LHC) Computing Grid is designed to handle the unprecedented quantities of data that will be produced by experiments at CERN's LHC from 2007 onward. The computational requirements of the experiments that will operate at the LHC are enormous. Some 12-14 petabytes of data will be generated each year, the equivalent of more than 20 million CDs. Analyzing this data will require the equivalent of 70,000 of today's fastest PCs. The LHC Computing Grid will meet these needs by deploying a worldwide computational grid, integrating the resources of scientific computing centers spread across Europe, the United States, and Asia into a global virtual computing service.
U.S. regional grids
Northwest Indiana Computational Grid
Northwest Indiana Computational Grid (NWICG) is a partnership of researchers and educators from Purdue University-West Lafayette, Purdue University-Calumet, and the University of Notre Dame. With a focus on national science and research initiatives, NWICG creates cyberinfrastructure that supports the solution of breakthrough-level problems and enabling continuing world-class advances in the underlying technologies of high-performance computing. They are developing a scalable, high-speed, high-bandwidth, science-driven computational grid for Northwest Indiana across the three universities in collaboration with the Department of Energy's Argonne National Laboratories.
SURAGrid
Southeastern Universities Research Association (SURA) is a consortium of organizations collaborating and combining resources to help bring grid technology to the level of seamless, shared infrastructure. The SURAgrid focuses on direct access to a rich set of distributed capabilities for participating research and education communities. SURAgrid promotes the development of contributed resources, project-specific tools and environments, highly specialized access, and gateways to national and international cyberinfrastructure.
Texas Internet Grid for Research and Education
The mission of the Texas Internet Grid for Research and Education (TIGRE) project is to create a computational grid that brings together computing systems, storage systems, and databases, visualization laboratories and displays, and even instruments and sensors across Texas. By enhancing the computational capabilities for Texas researchers in academia, government, and industry by integrating massive computing power, TIGRE hopes to aid in the advancement of biomedicine, energy and the environment, aerospace, materials science, agriculture, and information technology.
Open source grid projects
These grid projects cover a diverse set of areas, ranging from grid infrastructure toolkits, middleware toolkits, data tools, security, and more. The following represent some fast moving projection grid projects and tools. Visit these sites often to keep up to date on how they are leading the progress in grid technology.
Grid infrastructure projects
Open source grid infrastructure projects that can help you set up your own grid.
Globus Toolkit
Open source software developed by the Globus Alliance. The Globus Alliance is an international collaboration that conducts R&D to create fundamental grid technologies. The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability.
Berkeley Open Infrastructure for Network Computing
Berkeley Open Infrastructure for Network Computing (BOINC) is a software platform for projects, like distributed.net and SETI@home, that use millions of volunteer computers as a parallel supercomputer. Source code is available for the platform, and interested C++ developers are encouraged to help develop the platform code. BOINC is currently supported on Windows®, Linux®, UNIX®, and Mac OS X. CPU platform requirements may vary among project clients using BOINC.
Uniform Interface to Computing Resources
Uniform Interface to Computing Resources (UNICORE) offers a ready-to-run grid system, including client and server software. UNICORE makes distributed computing and data resources available in a seamless and secure way in intranets and the Internet. The UNICORE design focuses several core principles: seamless access to heterogeneous environments, security, site autonomy, a powerful GUI clients that provides ease of use, and quick start bundles that allow for simple installation.
Grid middleware projects
The following projects have successfully provided U.S. and international projects with the advanced tools to easily access numerous grid functionalities, such as computation, visualization, and storage resources. You can interact with various grids or have one customized to work with your own grid.
gLite
gLite is next-generation middleware for grid computing, born from the collaborative efforts of more than 80 people in 12 academic and industrial research centers as part of the EGEE Project. gLite provides a bleeding-edge, best-of-breed framework for building grid applications that tap into the power of distributed computing and storage resources across the Internet.
National Research Grid Initiative
The National Research Grid Initiative (NAREGI), in Japan, focuses on the research and development of grid middleware so that a large-scale computing environment can be implemented for widely distributed, advanced research and education.
Ninf-G
Ninf is a Japanese project developing programming middleware that enables users to access various resources, such as hardware, software, and scientific data, on the grid with an easy-to-use interface. Ninf-G is open source software that supports the development and execution of grid-enabled applications using Grid Remote Procedure Call (GridRPC) on distributed computing resources.
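A minimal client sketch, assuming the GGF GridRPC API that Ninf-G implements; the configuration file name and the remote function name are placeholders for whatever a real deployment registers, and the grpc_call argument list follows the remote function's interface definition:

// Sketch of a synchronous GridRPC client call (GGF GridRPC API).
// "client.conf" and "example/multiply" are invented placeholders.
#include <cstdio>
#include "grpc.h"

int main(int argc, char** argv) {
    char config[] = "client.conf";
    char funcname[] = "example/multiply";

    if (grpc_initialize(config) != GRPC_NO_ERROR) {
        std::fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    // Bind a handle to a remote function registered on a Ninf-G server.
    grpc_function_handle_t handle;
    grpc_function_handle_default(&handle, funcname);

    double a = 6.0, b = 7.0, result = 0.0;
    // Synchronous remote call; blocks until the server returns the result.
    if (grpc_call(&handle, a, b, &result) == GRPC_NO_ERROR)
        std::printf("remote result: %f\n", result);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}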
NorduGrid
NorduGrid middleware, also known as the Advanced Resource Connector (ARC), is an open source software solution distributed under the GPL that enables production-quality computational and data grids. ARC provides a reliable implementation of the fundamental grid services, such as information services, resource discovery and monitoring, job submission and management, brokering, data management, and resource management. Most of these services are provided through the security layer of the Grid Security Infrastructure (GSI). The middleware builds upon standard open source solutions like OpenLDAP, OpenSSL, SASL, and the Globus Toolkit (GT) libraries.
OGSA-DAI
The OGSA-DAI project focuses on the development of middleware to assist with the access and integration of data from separate sources through the grid. The project works closely with the Globus, OMII-Europe, NextGRID, SIMDAT, and BEinGRID projects, ensuring that the OGSA-DAI software works in a variety of grid environments.
ProActive
ProActive is a Java™ grid middleware library (with open source code under the LGPL license) for parallel, distributed, and multithreaded computing. With a reduced set of simple primitives, ProActive provides a comprehensive API to simplify the programming of grid computing applications distributed on LANs, on clusters of workstations, or on Internet grids.
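ProActive itself is a Java library; the following C++ sketch is not ProActive's API but illustrates the underlying active-object idea it is built on: method calls return immediately, and results arrive through futures (ProActive calls blocking on a not-yet-available result "wait-by-necessity"):

// Conceptual illustration of the active-object/future pattern using
// std::async. This is NOT ProActive's interface, just the idea in C++.
#include <future>
#include <iostream>

// A "service" whose methods we want to invoke asynchronously, the way an
// active object's methods are invoked on a remote JVM in ProActive.
struct MatrixService {
    long multiplyTrace(long n) {            // stand-in for expensive work
        long trace = 0;
        for (long i = 0; i < n; ++i) trace += i * i;
        return trace;
    }
};

int main() {
    MatrixService service;

    // The call returns immediately with a future; the caller keeps working
    // while the "remote" computation proceeds on another thread.
    std::future<long> pending =
        std::async(std::launch::async, &MatrixService::multiplyTrace,
                   &service, 100000L);

    std::cout << "call issued, doing other work...\n";

    // Touching the result blocks until it is ready (wait-by-necessity).
    std::cout << "trace = " << pending.get() << '\n';
    return 0;
}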
Security projects
To protect vital infrastructure and information, security is a constantly evolving requirement of grid computing. These projects represent some of the cutting-edge security standards and implementations of grid security solutions.
GridShib
An NSF-funded collaboration between NCSA and the University of Chicago, GridShib integrates federated authorization infrastructure (Shibboleth) with grid technology (the Globus Toolkit) to provide attribute-based authorization for distributed scientific communities.
Grid User Management System
The Grid User Management System (GUMS) is a Grid Identity Mapping Service. Identity mapping is necessary when a site's resources do not use grid credentials natively, but instead use a different mechanism to identify users, such as UNIX accounts or Kerberos principals.
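As a toy illustration of what identity mapping means (this is not GUMS's actual interface, and all names below are invented), the sketch resolves a certificate distinguished name to a local account the site's batch system understands:

// Toy illustration of grid identity mapping: a certificate distinguished
// name (DN) is looked up and resolved to a local UNIX account.
// All entries are invented; GUMS's real service is policy-driven.
#include <iostream>
#include <map>
#include <string>

int main() {
    const std::map<std::string, std::string> dnToAccount = {
        {"/DC=org/DC=examplegrid/CN=Alice Researcher", "grid_alice"},
        {"/DC=org/DC=examplegrid/CN=Bob Analyst",      "grid_bob"},
    };

    const std::string dn = "/DC=org/DC=examplegrid/CN=Alice Researcher";
    auto it = dnToAccount.find(dn);
    if (it != dnToAccount.end())
        std::cout << dn << " -> local account " << it->second << '\n';
    else
        std::cout << "no mapping for " << dn << "; request rejected\n";
    return 0;
}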
PRIvilege Management and Authorization
PRIvilege Management and Authorization (PRIMA) is a comprehensive grid security model and system that provides enhanced grid security. In PRIMA, a privilege is a platform-independent, self-contained representation of a fine-grain right. PRIMA achieves platform independence of privileges by externalizing fine-grain access rights to resource objects from the resource's internal representation.
Resource management and scheduling
An essential function of grids is managing and scheduling jobs across resources. These projects demonstrate a few strategies.
Community Scheduler Framework
Community Scheduler Framework (CSF) is an open source implementation of an OGSA-based metascheduler. It supports the emerging WS-Agreement specification and the Globus Toolkit's GRAM service. CSF fills in gaps in the existing resource management picture and is integrated with Platform LSF and Platform Multicluster. The CSF open source project is included in the Globus Toolkit V4.0 release.
Special Priority and Urgent Computing Environment
High-performance modeling and simulation are playing a driving role in decision-making and prediction. For time-critical emergency support applications, such as severe weather prediction, flood modeling, and influenza modeling, late results can be useless. A specialized infrastructure is needed to provide computing resources quickly, automatically, and reliably. Special Priority and Urgent Computing Environment (SPRUCE) is a system to support urgent or event-driven computing on traditional supercomputers and distributed grids.
Grid resource monitoring
Monitoring resources and applications is key to the success of grids. Through easy-to-use interfaces, these sophisticated tools help users gather, catalog, and monitor various types of resources, and they let systems administrators monitor the health of their grids. The following projects represent a few of the open source options.
GridCat
GridCat is a high-level grid cataloging system that uses status dots on geographic maps as well as a catalog. The maps help debug site troubles. The catalog contains information on site readiness, along with much other valuable per-site information, to help application users and grid scheduler developers with job submission and scheduling. GridCat aims to present each grid site in its simplest status representation.
Gridscape II
Gridscape II is a customized portal component that can be used on its own or plugged in to complement existing grid portals. Gridscape II manages the gathering of information from arbitrary, heterogeneous, and distributed sources and presents it seamlessly within a single interface. It leverages the Google Maps API to provide a highly interactive user interface. Gridscape II is simple and easy to use, providing a solution for those who don't wish to invest heavily in developing their own monitoring portals from scratch, as well as for those who want something easy to customize.
Storage and data management
From open source high-performance file systems to seamless access of data from heterogeneous environments, the following projects bring together and optimize a variety of storage and data management solutions. This track emphasizes storing, managing, and moving data across resources and connecting data resources over a network.
Lustre
The Lustre File System, a high-performance open source file system from Cluster File Systems Inc., is a distributed file system that eliminates the performance, availability, and scalability problems present in many traditional distributed file systems. Lustre is a highly modular next-generation storage architecture that combines established open standards, the Linux operating system, and innovative protocols into a reliable, network-neutral data storage and retrieval solution. Providing high I/O throughput in clusters and shared-data environments, Lustre also provides independence from the location of data on the physical storage, protection from single points of failure, and fast recovery from cluster reconfiguration and server or network outages.
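Because Lustre presents a standard POSIX interface, applications need no special client library; ordinary file I/O against a Lustre mount point works unchanged. A small sketch, assuming a hypothetical /mnt/lustre mount point:

// Ordinary file I/O works unchanged on a Lustre mount because Lustre
// presents a POSIX interface; /mnt/lustre is an assumed mount point,
// not a Lustre convention.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    const std::string path = "/mnt/lustre/experiment/results.txt";

    std::ofstream out(path);   // data is striped transparently across
    if (!out) {                // object storage servers by the file system
        std::cerr << "cannot open " << path << '\n';
        return 1;
    }
    out << "run 42: energy=1.375\n";
    out.close();

    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) std::cout << line << '\n';
    return 0;
}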
NeST
NeST is a software network storage device that provides secure storage allocation for a specific time period. The size and duration of allocation units, or lots, are negotiated between NeST and the user or application. These lots can also be expanded in size, extended in time, and/or subdivided into a hierarchy. In addition, NeST offers access control lists for lot and file access, and it supports multiple protocol interfaces, including its internal Chirp protocol, HTTP, and GSI-FTP.
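As a rough model of the lot abstraction (invented types, not NeST's API or negotiation protocol), a lot can be thought of as an allocation with a size and an expiry that can be grown, extended, or subdivided into child lots:

// Toy model of NeST's "lot" abstraction; all types here are invented.
#include <iostream>
#include <vector>

struct Lot {
    long bytes;              // allocated size
    long expiresAt;          // epoch seconds when the allocation lapses
    std::vector<Lot> children;

    void grow(long extraBytes)  { bytes += extraBytes; }
    void extend(long extraSecs) { expiresAt += extraSecs; }

    // Carve a child lot out of this one, shrinking the parent.
    Lot& subdivide(long childBytes) {
        bytes -= childBytes;
        children.push_back({childBytes, expiresAt, {}});
        return children.back();
    }
};

int main() {
    Lot lot{10000000, 1700000000, {}};
    lot.grow(5000000);       // expand in size
    lot.extend(86400);       // extend in time by a day
    Lot& child = lot.subdivide(2000000);
    std::cout << "parent bytes: " << lot.bytes
              << ", child bytes: " << child.bytes << '\n';
    return 0;
}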
SAMGrid
SAMGrid is a general data handling system designed to be a key device for experiments with large (petabyte-size) data sets and widely distributed production and analysis facilities. The components now in production provide a versatile set of services for data transfer, data storage, and process bookkeeping on distributed systems.
UberFTP
Building upon the technologies of GridFTP, UberFTP is the first interactive GridFTP-enabled FTP client. The basic GridFTP client is not interactive and allows only one file transfer at a time. UberFTP provides interactive tools that work much like the popular NcFTP tool. It supports GSI authentication, parallel data channels, and third-party transfers.
Conclusion
Grid computing is one of the most exciting technologies shaping the way we solve complex problems and share diverse resources. Beyond applications such as cancer research and physics, it is also driving advances in security and authentication, discovery, monitoring, information services, data management, resource management, and scheduling.