Monday, May 31, 2010

Will SIM cards replace Eduroam, Shibboleth, etc.? - another reason for 5G R&E networks

[One of the enduring challenges of the information age is authentication and authorization. A lot of research and development at universities and research networks is devoted to investigating single sign-on and other authentication technologies. However, a much simpler solution may already be available using the SIM card in the user’s cell phone or iPad. As these devices increasingly become the most common (and nearly ubiquitous) way users access networks and content, using SIMs for authentication and authorization starts to make a lot of sense. The next generation of SIMs will support on-board web services, allowing for complex interactions such as on-line banking. There is no reason why such SIM web services could not be used for a universal single sign-on authentication and authorization system. For those applications that require it, you could even add sophisticated multi-layer authentication technology such as rolling time-synchronized ID numbers sent as text messages. Smart phones and iPads could also be programmed to communicate via Bluetooth to authenticate any nearby client PC that needs access to a distant server. For many applications that need only “good enough” security, the SIM card may provide the holy grail: never having to remember a password or log in to a site. Nobody “logs on” to their cell phone. Device sign-on (as long as it is tied to a certified and verifiable account) is sufficient for most applications, as opposed to individual human authentication.
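The "rolling time-synchronized ID numbers" mentioned above are essentially the HOTP algorithm (RFC 4226) applied to a time-based counter, the approach used by hardware tokens and later standardized as TOTP. As a minimal sketch, assuming a secret shared between the SIM (or its operator) and the verifier, the code for the current 30-second window can be derived like this; the secret and timestamp values are illustrative only:

```python
import hashlib
import hmac
import struct


def rolling_code(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-synchronized one-time code (HOTP over a time counter)."""
    counter = timestamp // step                # both sides agree on the time window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


secret = b"shared-sim-secret"
code = rolling_code(secret, 1275000000)
print(code)  # same value on device and server within the same 30-second window
```

Because both ends derive the code independently from the shared secret and the clock, nothing sensitive is sent over the air beyond the short-lived code itself.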

The big challenge is that access to the SIM is controlled by the network operators. That is why, in my opinion, it is crucial that university R&E networks operate their own virtual national 3G/4G wireless network integrated with campus WiFi facilities. Several WiFi manufacturers also plan to support SIM technology to provide seamless authentication. Many universities offer a separate Eduroam SSID, but a simpler solution would be to use SIM authentication provided by a national R&E 5G network.

I still believe that all public sector institutions should provide open access WiFi. If airports (which have considerable security concerns) can do it, I can’t see why universities should not be able to make the same offer. Abuse of the open network can be controlled by traffic throttling and layer 7 policies. Of course, all public hotspots should be powered entirely by renewable energy using 400 Hz or Power over Ethernet – hence the 5G network. SIM authentication would allow users more bandwidth, use of more robust hotspots, and access to databases, libraries, etc. Here are some useful pointers - BSA]
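The traffic throttling mentioned above can be as simple as a per-client token bucket at the hotspot gateway: open-access clients get a modest sustained rate with a small burst allowance, while SIM-authenticated clients get a larger one. A minimal sketch of the logic (the rates and costs are illustrative, not a real deployment):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter, the kind of per-client policy
    a campus hotspot gateway could apply to throttle abusive traffic."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # tokens (e.g. packets or KB) replenished per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst       # bucket starts full
        self.last = 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


tb = TokenBucket(rate=10, burst=5)            # 10 units/sec sustained, bursts of 5
print([tb.allow(now=0.0) for _ in range(6)])  # sixth request in the burst is refused
```

In practice the same effect is achieved at line rate with the kernel's traffic-control machinery (e.g. Linux `tc` with a token bucket filter), but the accounting is exactly this.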

Overview of SIM cards
Wi-Fi: It Just Keeps Going and Growing
By Stacey Higginbotham May. 14, 2010, 12:30pm PDT
As Wi-Fi approaches its 25th birthday, innovations based on the technology just keep on coming. A company called Anyfi Networks today launched a product that gives a Wi-Fi network the basic properties of a cell phone network, which means a user could move from hotspot to hotspot without losing coverage or having to authenticate on the network again. Another startup, Compiled Networks, offers technology with the same effect.
Other innovations are detailed in a nice article over at Network World, including ones we’ve covered like the combination of Wi-Fi and wireless HD video transfer technology using spectrum in the 60GHz band and the peer-to-peer network technology known as Wi-Fi Direct. The story also lays out how Wi-Fi can be used for unified mesh networks — similar to the technology that Anyfi and Compiled are currently trying to offer through proprietary efforts.
Such mesh networks make Wi-Fi that much more competitive with cellular networks, because they extend its range — and the user experience is seamless. Plus, as Wi-Fi is embedded into more and more smartphones, the need for seamless and ubiquitous Wi-Fi grows with it. For example, my child’s pediatrician has installed Wi-Fi in her office within the last two months, precisely because it’s something that parents kept requesting while they waited.

If Wi-Fi networks can be linked using standard (in other words, cheaper and interoperable) technology, it becomes harder to run through the limited gigabytes or megabytes in your high-cost cellular data plans, which is a good thing for consumers and possibly good for carriers whose networks are overloaded. For an example of the mesh Wi-Fi future, visit New York to see the Comcast, Time Warner Cable and Cablevision shared Wi-Fi in action. When it comes to cost, it’s hard to beat Wi-Fi. Thanks to anticipated updates to the standard, when it comes to coverage and the user experience, it may be hard not to choose Wi-Fi.

twitter: BillStArnaud
skype: Pocketpro

Thursday, May 20, 2010

Moving beyond cyber-infrastructure - greening and moving HPC into the cloud

[At one time cyber-infrastructure was seen as the integration of high performance computing (HPC) clusters, databases, instruments and networks for the enabling of eScience. Increasingly, countries such as the Netherlands and organizations like MIT and SARA are ahead of the curve, looking at clouds as an alternative to the HPC clusters on campus, as evidenced by the pointers below. The dream of integrating HPC and networks may soon come to an end as researchers move to cloud computing for convenience and cost, especially if universities move to assess energy and cooling costs to users of HPC clusters. BSA]

Innovation in Computing and Information Technology for Sustainability

Computer Science and Telecommunications Board (CSTB) of the National Academies is holding a workshop to examine the role of innovation in computing and information technology for sustainability. The workshop is being organized by the National Academies Committee on Computing Research for Environmental and Societal Sustainability as part of an ongoing study sponsored by the National Science Foundation.

Workshop Goals

Computing has many potential green applications, including improving energy conservation, enhancing energy management, reducing carbon emissions in many sectors, improving environmental protection (including mitigation of and adaptation to climate change), and increasing awareness of environmental challenges and responses. A public workshop will survey sustainability challenges, current research initiatives, results from previously held topical workshops, and related industry and government development efforts in these areas. The workshop will feature presentations and discussions that explore research themes and specific research opportunities that could advance sustainability objectives while also yielding advances in computer science. It will also consider research modalities, with a focus on applicable computational techniques and long-term research that might be supported by the National Science Foundation, and with an emphasis on problem- or user-driven research. A report of the workshop will be prepared by the study committee and published by National Academies Press.

StarCluster Brings HPC to the Amazon Cloud

Setting up an HPC cluster in the cloud can be a daunting task for new users looking to run their HPC applications there. Learning the ins and outs of the infrastructure-as-a-service (IaaS) model, in addition to configuring and installing a typical HPC system, is not an easy task.

In order to use the cloud effectively, users need to be able to automate the process of requesting and configuring new resources, and to terminate resources when they're no longer required without losing data. These concerns can be a challenge even for advanced users and require some level of cloud programming to get right.
In an effort to improve this situation, the Software Tools for Academics and Researchers (STAR) group at MIT has created an open-source project called StarCluster that allows anyone to create and manage their own HPC clusters hosted on Amazon's Elastic Compute Cloud (EC2) without needing to be a cloud expert.

StarCluster is open-source software and can be downloaded for free from the StarCluster website or from the Python Package Index (PyPI).
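As a sketch of what this looks like in practice: StarCluster is driven by an INI-style configuration file (by default `~/.starcluster/config`). The section names below follow its documented format, while the credentials, key path, cluster size, and instance type are placeholder values:

```ini
[global]
DEFAULT_TEMPLATE = smallcluster

[aws info]
AWS_ACCESS_KEY_ID = <your access key>
AWS_SECRET_ACCESS_KEY = <your secret key>
AWS_USER_ID = <your user id>

[key mykey]
KEY_LOCATION = ~/.ssh/mykey.rsa

[cluster smallcluster]
KEYNAME = mykey
CLUSTER_SIZE = 2
NODE_INSTANCE_TYPE = m1.small
```

With a configuration like this in place, the request/configure/terminate cycle described above collapses to a few commands (`starcluster start`, `starcluster sshmaster`, `starcluster terminate`), with StarCluster handling EC2 provisioning, shared storage, and job scheduler setup behind the scenes.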

Microsoft announces HPC cloud service

HPC stands for High-Performance Computing. That's the politically correct acronym for what we used to call supercomputing.

As Microsoft laid it out in an e-mail, there are three specific areas of focus:

Cloud: Bringing technical computing power to scientists, engineers and analysts through cloud computing to help ensure processing resources are available whenever they are needed: reliably, consistently and quickly. Supercomputing work may emerge as a killer app for the cloud.

Easier, consistent parallel programming: Delivering new tools that will help simplify parallel development from the desktop to the cluster to the cloud.

Powerful new tools: Developing powerful, easy-to-use technical computing tools that will help significantly speed discovery. This includes working with customers and industry partners on innovative solutions that will bring our technical computing vision to life.

Trust me that this is indeed powerful stuff. As Hilf told me in a brief interview: "We've been doing HPC Server and selling infrastructure and tools into supercomputing, but there's really a much broader opportunity. What we're trying to do is democratize supercomputing, to take a capability that's been available to a fraction of users to the broader scientific computing."

In some sense, what this will do is open up what can be characterized as "supercomputing light" to a very broad group of users. There will be two main classes of customers who take advantage of this HPC-class access. The first will be those who need to augment their available capacity with access to additional, on-demand "burst" compute capacity.

The second group, according to Hilf, "is the broad base of users further down the pyramid. People who will never have a cluster, but may want to have the capability exposed to them in the desktop."


SARA Opens Gate for HPC Cloud Researchers

Researchers in the Netherlands are being granted the opportunity to take part in a grand HPC experiment over the coming year as the limits of BiG Grid are pushed into the cloud. If the full test is a success, this could mean that there will be a significant number of similar efforts in coming years from other national and international grid and research organizations.




The changing traffic of R&E networks and cyber-infrastructure

[As I pointed out several months ago in my paper on the Future of R&E networks, traffic patterns are rapidly changing on all Internet networks, including R&E networks. Increasingly, traffic is becoming more local as these networks start to peer directly with large content, application and cloud providers at major IXs. A good example of this change in traffic patterns is the recent press release issued by the R&E network in the province of Ontario in Canada, which has seen traffic volumes triple over the last 3 years. The overwhelming majority of that traffic is exchanged with over 40 content, cloud and application providers such as Google, Limelight, etc. Only about 16% of the traffic goes to the national Internet backbone operated by CANARIE.

As cloud services for apps and research become more popular, this traffic volume is expected to increase. For researchers, clouds are still largely seen as a one-to-one replacement of physical clusters with virtual equivalents, but this is only the thin edge of the wedge. The real growth in clouds as a replacement for cyber-infrastructure will happen with next-generation research applications that break down this limiting relationship between compute cycles and applications. This will be especially evident in the wireless sensor space and in using mobile phones and pads as scientific instruments integrated with clouds. Please see my presentation on this subject at:

As Ed Lazowska points out in one of his blogs, only a small percentage of cyber-infrastructure research applications need high performance computing facilities. Most of these applications can easily use clouds.

With the advent of the iPhone and iPad, a large percentage of traffic is now moving to WiFi networks. Many public WiFi networks now report that 50% of their traffic comes from mobile devices such as Android phones, iPhones and iPads. These volumes are expected to increase dramatically over the next few years. The proximity of content and application servers becomes more critical for wireless devices, as data throughput can fall off dramatically if there is any congestion on the wireless network. Most likely a major future role for R&E networks will be to partner with various content and application providers to host their distributed infrastructure as close as possible to the end user. In addition to peering at a small number of IXs, hosting caching, content and cloud gateways/servers as near as possible to the institutions will significantly improve performance, particularly for wireless access. A major demand for lightpaths may come not from supporting research applications directly, but from enabling this widely distributed content, application and cloud infrastructure across a research network's service area. Access to this infrastructure will also enhance the performance and accessibility of education and research services offered by R&E institutions to users around the world.

ORON press release on tripling of traffic

Personal perspective on Future of R&E networks

Tuesday, May 11, 2010

Scientists Explain Why Computers Crash But We Don't

From ACM TechNews, Monday, May 10, 2010

Yale Scientists Explain Why Computers Crash But We Don't
Yale University (05/03/10) Hathaway, Bill

Yale University researchers have described why computers tend to malfunction more than living organisms by analyzing the control networks in both an E. coli bacterium and the Linux operating system. Both systems are arranged in hierarchies, but with some key differences in how they achieve operational efficiencies. The molecular networks in the bacteria are arranged in a pyramid, with a limited number of master regulator genes at the top that control a wide base of specialized functions. The Linux operating system is set up more like an inverted pyramid, with many different top-level routines controlling a few generic functions at the bottom. This organization arises because software engineers tend to save money and time by building on existing routines rather than starting systems from scratch, says Yale professor Mark Gerstein. "But it also means the operating system is more vulnerable to breakdowns because even simple updates to a generic routine can be very disruptive," Gerstein says.
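The structural difference Gerstein describes can be made concrete by counting fan-out (how many targets a node controls) and fan-in (how many callers depend on a node) in a toy graph. The node names below are invented for illustration: the "pyramid" has a few regulators each controlling many targets, while the "inverted pyramid" has many routines all depending on a few generic functions, which is exactly why updating one of those generic functions is so disruptive:

```python
from collections import defaultdict

# Toy control graphs as edge lists (controller -> controlled).
# E. coli style: two master regulators, each driving many specialized functions.
pyramid = [("masterA", t) for t in ("f1", "f2", "f3", "f4")] + \
          [("masterB", t) for t in ("f5", "f6", "f7", "f8")]
# Linux style: many top-level routines all calling a few generic functions.
inverted = [(c, "generic1") for c in ("r1", "r2", "r3", "r4")] + \
           [(c, "generic2") for c in ("r5", "r6", "r7", "r8")]


def max_fan(edges):
    """Return (max fan-out, max fan-in) over all nodes in the graph."""
    fan_out, fan_in = defaultdict(int), defaultdict(int)
    for src, dst in edges:
        fan_out[src] += 1
        fan_in[dst] += 1
    return max(fan_out.values()), max(fan_in.values())


print(max_fan(pyramid))   # (4, 1): few regulators, each controlling many targets
print(max_fan(inverted))  # (1, 4): many routines depending on a few generic functions
```

In the inverted case, a change to `generic1` ripples up to every routine with an edge into it; in the pyramid, a change to one specialized function affects only itself.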