[There has been a lot of buzz about clouds and their future potential for research, energy savings and many other applications. But one of the most important features of clouds, I think, is that they lower the barriers to innovation. With clouds you can use as little or as much of a resource as you need. This provides great flexibility in terms of developing new applications and making them quickly available to a large user base. Many universities are now starting to take advantage of clouds for research and ICT support services. This trend is likely to accelerate as new open source applications such as the Kuali Student System start to be deployed; they are ideally suited to running on clouds with shared access by many institutions and students.
Below are several more new cloud applications for e-mail and content distribution. The many universities that continue to maintain their own e-mail services and other applications are unlikely to be able to keep up with the new services and applications that will come from the cloud. The writing is on the wall. Already many students use Facebook and texting as their main means of communication, as opposed to outdated e-mail.
Many universities are also using clouds and content distribution networks for delivery of their education and research content. Neptune Canada for example uses Akamai to deliver the video of the launch of its undersea network.
What does all this mean for the University Computing Services?
Traditionally they were the keepers of the gate, maintaining the mainframe computers, servers, network services and applications.
But as more and more services and applications move to the cloud, OUTSIDE the campus, the need to maintain physical systems will decline. On the other hand, building and maintaining collaborative applications such as Kuali that use clouds will become increasingly important. Obviously network connectivity and bandwidth will be critical, especially interfacing to the multitude of cloud and content distribution networks that are now being deployed by companies like Akamai, Google, Limelight, Microsoft and others. R&E networks will have an important role in hosting these various cloud and content nodes as close as possible to connected institutions, to minimize latency.
Although there still remain many issues with clouds in terms of privacy, security and interoperability, the trend is clear. Much like the PC, which in its early days was snuck onto campus without the approval of computing services, I suspect the same thing will happen with clouds.
Their ability to reduce costs, enable innovation and create solutions without prior approval will be very tempting for many research departments and users alike. BSA]
For more thoughts on this subject please see my paper:
A personal perspective on the future of R&E networks
http://billstarnaud.blogspot.com/2010/02/personal-perspective-on-evolving.html
Kuali Student System
http://student.kuali.org/
How the Cloud will revolutionize e-mail
http://www.readwriteweb.com/archives/ready_for_gmail_mashups_google_adds_oauth_to_imap.php
You may or may not be excited by the acronyms OAuth and IMAP/SMTP, but the combination of them all together is very exciting news. Google Code Labs announced this afternoon that it has just enabled 3rd party developers to securely access the contents of your email without ever asking you for your password. If you're logged in to Gmail, you can give those apps permission with as little as one click.
What does that mean? It means mashups based on the actual emails in your inbox. If you've given a 3rd party app secure access to your Twitter account, then you'll be familiar with the user experience. The first example out of the gate is a company called Syphir, which lets you apply all kinds of complex rules to your incoming mail and then lets you get iPhone push notification for your smartly filtered mail.
Backup service Backupify will announce tomorrow morning that it is leveraging the new technology to back up your Gmail account, as well.
People are often wary about the idea of giving outside services access to their email, and well they should be. OAuth is designed to make that safe to do. Combined with the IMAP/SMTP email retrieval protocols, it gives an app a way to ask Gmail for access to your information. Gmail pops up a little window and says, in effect: "this other app wants us to give it your info - if you can prove to us that you are who they say you are (by entering your password with Gmail, and Gmail alone) - then we'll vouch for you and give them the info." The 3rd party app never sees your password and can have its access revoked at any time. You can read more about OAuth, how it was developed and how it works, on the OAuth website.
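To make the mechanism concrete, here is a minimal sketch of how a third-party app presents an OAuth token, rather than a password, to Gmail's IMAP endpoint. This uses Google's SASL XOAUTH2 scheme (the successor to the original OAuth-over-IMAP mechanism the article describes); the email address and token are placeholder assumptions.

```python
import imaplib

def xoauth2_response(user, access_token):
    """Build the SASL XOAUTH2 initial client response.

    Wire format: "user=<email>\\x01auth=Bearer <token>\\x01\\x01"
    (imaplib base64-encodes it before sending).
    """
    return f"user={user}\x01auth=Bearer {access_token}\x01\x01".encode()

def open_inbox(user, access_token):
    # The app authenticates with only the OAuth access token --
    # the user's password never leaves Gmail.
    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.authenticate("XOAUTH2",
                      lambda _challenge: xoauth2_response(user, access_token))
    conn.select("INBOX", readonly=True)
    return conn

# The SASL string can be inspected without any network traffic
# ("alice@example.com" and "ya29.token" are placeholders):
print(xoauth2_response("alice@example.com", "ya29.token"))
```

Revoking the app's access simply invalidates the token on Google's side; the app has nothing else to fall back on, which is the whole point of the design.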
Why is this so exciting? Because it means that the application we all spend so much time in, where so much of our communication goes on and where you can find some of our closest work and personal contacts - can now have value-added services built on top of it by a whole world of independent developers, without your having to give them your email password.
That's the kind of thing that the data portability paradigm is all about. It's the opposite of lock-in and seeks to allow users to take their data securely from site to site, using it as the foundation for fabulous new services. Google says it is working with Yahoo!, Mozilla and others to develop an industry-wide standard way to combine OAuth and IMAP/SMTP.
http://www.datacenterknowledge.com/archives/2010/03/18/google-boosts-peering-to-save-on-bandwidth/
Is Google's Network Morphing Into a CDN?
Google has dramatically increased its use of peering over the past year, and has also accelerated deployment of local caching servers at large ISPs, making the company's network resemble a content distribution network (CDN) such as Akamai.
The latest information about Google's network structure has emerged from an analysis by Arbor Networks, which has revived debates about Google's bandwidth costs, a topic we've examined several times here at DCK. There's a discussion of YouTube's bandwidth bills today at Slashdot, while Stacey at GigaOm focused on Google's famed infrastructure advantage.
Expanded Use of Caching Servers
Arbor's Craig Labovitz also provides some interesting detail on Google's caching strategy. "Over the last year, Google deployed large numbers of Google Global Cache (GGC) servers within consumer networks around the world," Labovitz writes. Anecdotal discussions with providers suggest that more than half of all large consumer networks in North America and Europe now have a rack or more of GGC servers.
This has, in effect, made Google's network look a lot like CDNs such as Akamai or Limelight Networks, which have caching servers at ISPs around the globe. The Google caching servers allow large ISPs to serve Google content from the edge of their network, reducing backbone congestion and traffic on peering connections.
This has a telescoping benefit on bandwidth savings - Google can use the peering connections to reduce its transit costs, and the local caching to further reduce its peering traffic. For more on Google's peering philosophy and practice, see this 2008 document (PDF).
Microsoft Also Adopts CDN Architecture
Google isn't the only Internet titan that has restructured its network to adopt CDN practices. In 2008 Microsoft began building its own CDN, known as the Edge Content Network.
Both companies are preparing for the tidal wave of video-driven data described by Brian Lillie of Equinix in his keynote last week at Data Center World. Lillie said Internet traffic growth is being driven by the development of mobile apps for the iPhone, Blackberry, Android phones and other mobile devices. As these new apps bring a universe of everyday tasks into the palms of users' hands, usage is accelerating along with the data traffic streaming across global networks.
How peering is changing the shape of the Internet
http://www.boingboing.net/2010/03/02/how-peering-is-chang.html?utm_source=twitterfeed&utm_medium=twitter
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro