[At one time cyber-infrastructure was seen as the integration of high performance computing (HPC) clusters, databases, instruments and networks for the enabling of eScience. Increasingly, several countries such as the Netherlands and organizations such as MIT and SARA are ahead of the curve, looking at clouds as an alternative to campus HPC clusters, as evidenced by the pointers below. The dream of integrating HPC and networks may soon come to an end as researchers move to cloud computing for convenience and cost, especially if universities move to assess energy and cooling costs to users of HPC clusters BSA]
Innovation in Computing and Information Technology for Sustainability
The Computer Science and Telecommunications Board (CSTB) of the National Academies is holding a workshop to examine the role of innovation in computing and information technology for sustainability. The workshop is being organized by the National Academies Committee on Computing Research for Environmental and Societal Sustainability as part of an ongoing study sponsored by the National Science Foundation.
Computing has many potential green applications, including improving energy conservation, enhancing energy management, reducing carbon emissions across many sectors, improving environmental protection (including mitigation of and adaptation to climate change), and increasing awareness of environmental challenges and responses. A public workshop will survey sustainability challenges, current research initiatives, results from previously held topical workshops, and related industry and government development efforts in these areas. The workshop will feature presentations and discussions exploring research themes and specific research opportunities that could advance sustainability objectives while also yielding advances in computer science. It will also consider research modalities, with a focus on applicable computational techniques and long-term research that might be supported by the National Science Foundation, and with an emphasis on problem- or user-driven research. A report of the workshop will be prepared by the study committee and published by the National Academies Press.
StarCluster Brings HPC to the Amazon Cloud
Setting up an HPC cluster in the cloud can be a daunting task for new users looking to utilize the cloud to run their HPC applications.
Learning the ins and outs of the infrastructure as a service (IaaS) model in addition to configuring and installing a typical HPC system is not an easy task.
To use the cloud effectively, users need to be able to automate the process of requesting and configuring new resources, and to terminate resources when they are no longer required without losing data. These concerns can be a challenge even for advanced users and require some level of cloud programming to get right.
In an effort to improve this situation, the Software Tools for Academics and Researchers (STAR) group at MIT has created an open-source project called StarCluster that allows anyone to create and manage their own HPC clusters hosted on Amazon's Elastic Compute Cloud (EC2) without needing to be a cloud expert.
StarCluster is open-source software and can be downloaded for free from the StarCluster website http://web.mit.edu/starcluster or from the Python Package Index (PyPI) at http://pypi.python.org/pypi/StarCluster.
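To give a flavor of the workflow StarCluster automates, here is a minimal sketch of its configuration file and command-line usage, based on the project's documentation. The key name, cluster template name, and AMI ID below are placeholders, not values from this article.

```
# ~/.starcluster/config -- minimal sketch (placeholder credentials and IDs)

[aws info]
AWS_ACCESS_KEY_ID = <your-access-key>
AWS_SECRET_ACCESS_KEY = <your-secret-key>

[key mykey]
KEY_LOCATION = ~/.ssh/mykey.rsa

[cluster smallcluster]
KEYNAME = mykey
CLUSTER_SIZE = 2
NODE_INSTANCE_TYPE = m1.small
NODE_IMAGE_ID = <starcluster-ami-id>
```

With a config like this in place, `starcluster start smallcluster` launches and configures the cluster on EC2, `starcluster sshmaster smallcluster` logs into the master node, and `starcluster terminate smallcluster` shuts everything down so you stop paying for instances.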
Microsoft announces HPC cloud service
HPC stands for High-Performance Computing. That's the politically correct acronym for what we used to call supercomputing.
As Microsoft laid it out in an e-mail, there are three specific areas of focus:
Cloud: Bringing technical computing power to scientists, engineers and analysts through cloud computing to help ensure processing resources are available whenever they are needed: reliably, consistently and quickly. Supercomputing work may emerge as a killer app for the cloud.
Easier, consistent parallel programming: Delivering new tools that will help simplify parallel development from the desktop to the cluster to the cloud.
Powerful new tools: Developing powerful, easy-to-use technical computing tools that will help significantly speed discovery. This includes working with customers and industry partners on innovative solutions that will bring our technical computing vision to life.
Trust me that this is indeed powerful stuff. As Hilf told me in a brief interview: "We've been doing HPC Server and selling infrastructure and tools into supercomputing, but there's really a much broader opportunity. What we're trying to do is democratize supercomputing, to take a capability that's been available to a fraction of users to the broader scientific computing."
In some sense, what this will do is open up what can be characterized as "supercomputing light" to a very broad group of users. There will be two main classes of customers who take advantage of this HPC-class access. The first will be those who need to augment their available capacity with access to additional, on-demand "burst" compute capacity.
The second group, according to Hilf, "is the broad base of users further down the pyramid. People who will never have a cluster, but may want to have the capability exposed to them in the desktop."
What's your take? Let me know by leaving a comment below or e-mailing me directly at email@example.com.
SARA Opens Gate for HPC Cloud Researchers
Researchers in the Netherlands are being granted the opportunity to take part in a grand HPC experiment over the coming year as the limits of BiG Grid are pushed into the cloud. If the full test is a success, it could prompt a significant number of similar efforts in coming years from other national and international grid and research organizations.