Sunday, March 28, 2010

85% of research computing can be done using clouds

Take a look at this great presentation by Ed Lazowska, who is the Bill & Melinda Gates Chair in Computer Science & Engineering, Director of the eScience Institute, and Chair of the Computing Community Consortium at the University of Washington. His institute recently analyzed the computing needs of all the researchers at the University of Washington and concluded that 85% of research computing could easily be done using clouds; only some very specialized research and computational scientists need dedicated high-performance computers. I believe these findings will have profound implications for funding councils, university energy costs and R&E networks:

1. Funding councils face an insatiable demand in research grant applications for more computers and clusters. If most of this research could be done using clouds, a lot of research dollars would be redirected to the actual science instead of buying expensive hardware. Of course, money must now be found to pay for the cloud services, but since clouds are used only as required, the grant money isn't tied up for years in the hardware procurement process.

2. Computing and ICT represent anywhere from 30-50% of the energy consumption at a typical research university. If a large percentage of research computing could move to clouds, this would make a huge dent in a university's energy bill.

3. As I mentioned previously, if R&E networks charged fees based on an institution's energy bill and then offered free cloud services in partnership with various cloud providers, it would be a powerful incentive for researchers and institutions to move in the direction of clouds. The service is not technically free, but is bundled as part of the R&E network costs, which are tied to the institution's energy consumption. If an institution reduces its energy costs, it still has access to the cloud services.

4. Following the above logic, cloud services could be included in the indirect costs of research, much like energy is today.
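The fee model in points 3 and 4 is simple enough to illustrate in a few lines. A minimal sketch, where the 10% rate and the dollar figures are purely illustrative, not real R&E network numbers:

```python
# Sketch of the billing model in point 3: the institution's network
# fee is pegged to its energy bill, so cutting energy use also cuts
# the network bill while cloud access stays bundled in.
# The 10% rate is purely illustrative, not a real R&E network rate.
def network_fee(annual_energy_bill, rate=0.10):
    return annual_energy_bill * rate

before = network_fee(2_000_000)   # hypothetical $2M energy bill
after = network_fee(1_500_000)    # after reducing energy consumption
print(f"Fee drops from ${before:,.0f} to ${after:,.0f}")
```

The point of the design is the incentive: any energy saving flows straight through to the network bill without renegotiating the cloud bundle.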

Ed Lazowska gave his talk at the recent CENIC conference. Thanks to BCnet for this pointer.

It is included below, and I think it is important to view.

"Santa Claus takes care of the power, cooling and space at public universities."



twitter: BillStArnaud
skype: Pocketpro

Saturday, March 27, 2010

The Role of Student-Led Innovation in "Killer Apps" for Broadband Networks

Tom Kalil and Aneesh Chopra of the White House Office of Science and Technology Policy have proposed a new initiative to spur student innovation to develop "killer apps" for broadband networks. The initiative would involve companies, universities, and students and would take advantage of high-capacity networks such as Internet2 and National LambdaRail. Kalil and Chopra are seeking feedback for their ideas.

The White House blog post is located at

The Role of Student-Led Innovation in “Killer Apps” for Broadband Networks
Posted by Tom Kalil and Aneesh Chopra on March 25, 2010 at 10:37 AM EDT
Students have contributed (pdf) some of the most important advances in information and communications technologies—including data compression, interactive computer graphics, Ethernet, Berkeley Unix, the spreadsheet, public key cryptography, speech recognition, Mosaic, and Google.
Today, with the right kind of support, students can play the role of innovators again—by leading the way in the development of broadband applications. In the same way that Mosaic and Google drove demand for today’s Internet, new applications could drive demand for a gigabit/second Internet and 4G wireless. Indeed, a key component of the Federal Communications Commission’s recently released National Broadband Plan is the development of new broadband applications.
Now is the time to launch an initiative that would cultivate, with student involvement, such a wave of innovation. Although it’s impossible to predict what the next generation of applications will be, universities, companies, and students could work together under such an initiative, which would serve as a sort of “Petri dish” where new ideas could incubate and grow. This initiative could be led by the private sector, encourage multi-campus and even global collaboration, build on investments already made in high-speed research networks such as Internet2 and National LambdaRail, and take advantage of a growing number of grants from the Department of Commerce’s Broadband Technology Opportunities Program (BTOP).
The initiative could have a number of elements, including:
• Campus-based incubators for the development of broadband applications, with access to high-speed networks, cutting-edge peripherals, software development kits, and cloud computing services.
• Relevant courses that encourage multidisciplinary teams of students to design and develop broadband applications.
• Competitions that recognize compelling applications developed by students. Some existing competitions that could serve as models include Google’s Android Developer Challenge, Microsoft’s Imagine Cup, and the FCC-Knight Foundation’s “Apps for Inclusion” competition.
Let us know what you think of this idea. You can send us e-mail at
Tom Kalil is Deputy Director for Policy in the White House Office of Science and Technology Policy
Aneesh Chopra is U.S. Chief Technology Officer and Associate Director for Technology in the White House Office of Science and Technology Policy

twitter: BillStArnaud
skype: Pocketpro

Tuesday, March 23, 2010

Australian govt sets out ICT carbon reduction targets

[Kudos to the Australian government on this important step. I estimate such a Green ICT strategy would save up to $100 million for the Canadian government and over $1 billion for the US government (more details to follow). All governments are under serious financial pressure to cut costs – a green ICT strategy is low-hanging fruit and does not require cutting jobs or pay. Relocating data centres to low-carbon/low-energy sites is an important first step.]

Australian govt sets out ICT carbon reduction targets

Australia’s Finance Minister, Lindsay Tanner, has reportedly laid out a target to cut roughly 13% of the carbon emissions from its data centre operations over the next five years.

According to this report by ITwire, Tanner told a conference at CeBIT that the Australian government is the largest data centre operator in the country - larger than the country’s four big banks combined.

The goal is to reduce today's estimated 300,000 tonnes of annual emissions by 40,000 tonnes per year within five years, Tanner said.
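The reported target is easy to sanity-check against the "roughly 13%" figure in the headline; the quick arithmetic below uses only the two totals cited above:

```python
# Quick check of the reported target: a 40,000-tonne annual cut
# against today's estimated 300,000 tonnes of annual emissions.
current_emissions_t = 300_000   # tonnes CO2 per year (cited estimate)
target_reduction_t = 40_000     # tonnes CO2 per year, within five years

reduction_pct = 100 * target_reduction_t / current_emissions_t
print(f"Target reduction: {reduction_pct:.1f}%")
```

That works out to about 13.3%, consistent with the "roughly 13%" reported.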

Under a 15-year data centre strategy announced by Tanner, all departments and agencies will have to measure and report the energy consumption of their data centres and ICT infrastructure annually.

Tanner added that future government procurement of data centres will place major weight on the ‘green credentials’ of the site and infrastructure. The locations of data centres, as well as other contributing factors such as free air cooling and access to telecommunications and power infrastructure, would also play key parts in the decision-making process. The new procurement parameters will come into effect in the second half of the year.
twitter: BillStArnaud
skype: Pocketpro

The Rise of Research-driven Cloud Computing

[Of course, another big advantage of moving to cloud computing is the reduction in energy costs at a university. A modern computing data center can represent a significant fraction of a university’s energy and carbon costs. If more researchers used cloud computing, this would have a big impact on the direct and indirect costs of computing and energy at a university. For those R&E networks that are planning to experiment with bundling network services as a component of the energy bill, adding cloud computing to the service bundle increases the attraction of this approach – BSA]

To Space and Beyond: The Rise of Research-driven Cloud Computing

I remember attending the inaugural GridWorld conference in 2006 and hearing Argonne National Laboratory’s Ian Foster discuss the possible implications of the newly announced Amazon EC2 on the world of grid computing that he helped create. Well, 2010 is upon us, and some of the implications Foster pondered at GridWorld have become clear, among them: For many workloads, the cloud appears to be replacing the grid. This point is driven home in a new GigaOM Pro article by Paul Miller, in which he looks at how space agencies are using the cloud to do work that likely would have had the word “grid” written all over it just a few short years ago.

Miller cites a particularly illustrative case with the European Space Agency, which is utilizing Amazon EC2 for the data-processing needs of its Gaia mission, set to launch in 2012. The 40GB per night that Gaia will generate would have cost $1.5 million using local resources (read “a grid” or “a cluster”), but research suggests it could cost in the $500,000 range using EC2. The demand for cost savings and flexibility isn’t limited to astronomy research, either.
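The cost comparison Miller cites can be summarized with the two totals from the article; the derived savings below are just arithmetic on those cited figures:

```python
# The two totals below are the figures cited in the article; the
# derived savings are just arithmetic on them.
local_cost = 1_500_000   # USD, local resources ("a grid" or "a cluster")
cloud_cost = 500_000     # USD, rough estimate on Amazon EC2

savings = local_cost - cloud_cost
pct = 100 * savings / local_cost
print(f"Estimated savings: ${savings:,} (~{pct:.0f}%)")
```

Even as a rough estimate, a roughly two-thirds cost reduction is the kind of number that gets a mission planner's attention.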

Research organizations that need sheer computing power on demand are looking at EC2 as the means for attaining it. Several prominent examples come from the pharmaceutical industry, where companies like Amylin and Eli Lilly have publicly embraced the cloud, as has the research-driven Wellcome Trust Sanger Institute. A related case study comes from CERN’s Large Hadron Collider project, which is using EC2’s capabilities as a framework for upgrading its worldwide grid infrastructure. So high is demand for cloud resources, in fact, that even high-performance computing software vendors, such as Univa UD (which Foster co-founded), are building tools to let research-focused customers run jobs on EC2.
Unlike HPC-focused grid software, however, the cloud opens up doors beyond crunching numbers.


twitter: BillStArnaud
skype: Pocketpro

Saturday, March 20, 2010

SURFnet lays groundwork for next generation 5G wireless network

SURFnet is probably the world's most advanced R&E network and has continued to show leadership in a variety of new fields, from cyber-infrastructure to optical networks. Now, in their most recent funding program, GigaPort3, they have announced plans to integrate the wireless and wireline worlds for their users. Here are some additional pointers to articles discussing why we need to integrate wireless and wireline networks through mobile data offload, given the huge data volumes on wireless networks. Skype Access is a good example of how this technology might evolve. Since R&E networks are the only independent organizations that operate backbone networks, and are run by the same people who brought you the Internet in the first place, they are well positioned to pioneer these new technology concepts.

Details on 5G next generation wireless networks

Kees Neggers from SURFnet writes:

In 2010 we will start a technology assessment and scenario-building exercise on how we will integrate the new wireless and wireline worlds for our users as seamlessly as possible.

Here are a few phrases from the activity descriptions for 2010:


The ambition is for students, teachers and researchers to be among the first users world-wide of a nationwide fixed-wireless seamless high speed network no later than 2015. This will allow access to the network and related resources independent of device, time and location. The vision of the "anywhere anytime" paradigm is that the end-users are provided with enhanced capabilities for sharing, collaboration and social interaction, independent of their location, interaction medium and time of day. The integration of portable devices in the communication infrastructure also offers opportunities for interactive teaching processes that are independent of location by quickly and easily bringing together people and data.


GigaPort3 intends to implement seamless network connectivity at virtually every location in The Netherlands, for its users, whether they are connected to the fixed network or to a wireless network infrastructure. For the development of the required wireless services SURFnet will seek partnerships with operators and suppliers of wireless networks.

The main goal for this project is to explore the business, legal and technological aspects for SURFnet to support a nationwide fixed-wireless seamless high speed network.

This project will focus on getting a better understanding of available wireless network technologies, various use cases and business cases. Aspects like heterogeneous network access, mobile applications and wireless testbeds will be studied as well.

4G speeds: hype vs. reality (CNN)

Wireless Broadband Disruption WiMAX, LTE or Wi-Fi

LTE versus WiMAX is a standard topic in the press and at conferences, as if something disruptive was happening or might happen. Wrong! WiMAX and LTE are technical variations on the same business model providing similar services. If we're looking for disruption, we need to catch up on what's happening with Wi-Fi.

Today, WiMAX is ahead of LTE, but only for greenfield deployments. All GSM operators will adopt LTE so, by 2015-2020, there will be billions of LTE devices sold each year. WiMAX will survive as a service platform alternative, but for the same services and business models as LTE.

Wi-Fi is a very different story. There are no carriers. Individuals, corporations, communities: anyone who's interested buys their own infrastructure and deploys it wherever they want. Carriers are still needed for Internet connectivity, but otherwise, Wi-Fi infrastructure is a completely different beast.

First, Wi-Fi and freemium go together. Business models range from completely free to retail sponsorship (your local coffee shop), community sharing (the FON network) and/or bundling with other services (e.g. Verizon adds Boingo to FiOS subscriptions). Yes, a few paid hotspot services remain, but they are a small part of the Wi-Fi ecosystem.


The most important result of Wi-Fi's ownership model has been widespread adoption, leading to lower prices and ever more adoption. Projections are that there will be more than a billion Wi-Fi chips per year by 2011, with Wi-Fi showing up in all smart phones and all manner of other devices.

Finally, Wi-Fi has technology leadership. 4G leverages orthogonal frequency division multiplexing (OFDM) and multiple input multiple output, aka MIMO. But Wi-Fi adopted OFDM in versions 802.11a (in 1999) and 802.11g (in 2003), allowing Wi-Fi to achieve 54 Mbps operation. And Wi-Fi adopted MIMO with 802.11n (draft in 2007). Today, 11n devices ship in high volumes, use 2.4 GHz or 5 GHz spectrum and provide 100-300 Mbps. New Wi-Fi silicon will deliver as much as 600 Mbps, and beamforming antennas will increase range and allow dramatically more wireless connections in the same area.
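The MIMO figures above follow from simple spatial-stream scaling. The sketch below assumes roughly 150 Mbps per stream (a 40 MHz channel with short guard interval); real rates vary with coding and channel conditions, so treat these as ballpark numbers, not spec values:

```python
# Peak PHY rate scaling with MIMO spatial streams, assuming roughly
# 150 Mbps per stream (40 MHz channel, short guard interval). These
# are ballpark figures, not spec values.
per_stream_mbps = 150

for streams in (1, 2, 4):
    print(f"{streams} stream(s): ~{streams * per_stream_mbps} Mbps")
# four streams reaches the ~600 Mbps cited for new Wi-Fi silicon
```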

With consumer devices and access to more spectrum than either WiMAX or LTE, Wi-Fi can deliver more megabits per second per dollar. Expect to see both fixed and mobile carriers include free Wi-Fi access in their subscription bundles as Wi-Fi trumps femtocells. Conventional operators are not going away but, over the next decade, it's Wi-Fi that will shake up business models and drive disruption.

Skype Access: pay for WiFi conveniently via Skype

As a longtime Skype user, I was pleased to see that Skype now allows me to pay for Wi-Fi access in places that charge for it (such as airports) without going through the trouble of getting out the credit card, typing in the details, and authenticating via a cumbersome login screen. This is very handy when you are in an airport or another location where you don't have a lot of time and you don't want to pay the hourly rate because you only need 10 minutes to check email.
The service is called Skype Access.

All you do is select the Wi-Fi network you want to connect to, and if it is a network that works with Skype, Skype will pop up a message asking if you want to pay using your Skype credit. You can view the Wi-Fi networks that work with Skype Access and check out the rates.



twitter: BillStArnaud
skype: Pocketpro

Thursday, March 18, 2010

R&E networks as unified community anchor networks for national broadband

Further to my testimony at the FCC workshop on broadband, I am pleased to see that one of the recommendations of the FCC broadband task force is that research and education networks serve as anchor facilities. The FCC plan outlines the goal of providing community anchor institutions such as libraries, schools and hospitals with one Gigabit per second (Gbps) connections, as well as support for the development of a “Unified Community Anchor Network” (UCAN) that could be built leveraging existing non-profit research and education networks like Internet2 and NLR and their partner regional networks.

Not only will this provide these institutions with critical Internet bandwidth, but R&E networks could also provide innovative business models and architectures that reduce an institution's overall energy, wireless, travel and related costs. I have blogged in the past that basing an institution's Internet bill on its energy consumption, and perhaps its travel budget, would provide an incentive to reduce energy use and travel using tools such as video conferencing, collaborative tools, etc.

Community anchor institutions can also play a critical role in stimulating competitive broadband in their community by hosting carrier-neutral transit or peering exchanges such as those deployed in New Zealand, British Columbia and Norway. The New Zealand network KAREN also provides detailed instructions on how a community anchor can deploy a carrier-neutral IX. Well worth a look.

My FCC Testimony on R&E networks as anchor facilities

New Zealand Open Standards for Building a Community IX

NLR and Internet2 Applaud FCC’s One Gbps Connectivity Goal For Community Anchor Institutions

Thursday, March 11, 2010

FCC launches tool for consumers to measure broadband performance

[From Google Policy Blog. Check out FCC's site. Great to see a regulator representing interests of consumers -- BSA]

Posted by Vint Cerf, Chief Internet Evangelist

Internet users deserve to be well-informed about the performance of their broadband connections, and good data is the foundation of sound policy. So I'm excited to see that the FCC has launched a "beta" consumer broadband test on today. The site provides access to two third-party measurement tools, and is "the FCC's first attempt at providing consumers real-time information about the quality of their broadband connection."

One of the tests is provided through Measurement Lab (M-Lab), the open server platform that a group of researchers and other organizations created with our help last year. The FCC allows users to run Network Diagnostic Tool (NDT) -- an open source tool developed by Internet2 -- and see their estimated download and upload speeds. They can also see the estimated latency and jitter of the connection test between the user's computer and an M-Lab server.
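At its core, a download-speed test like the one NDT runs is measuring bytes moved over elapsed time. The sketch below is NOT the NDT tool itself, just an illustration of that measurement; the test URL is a placeholder you would replace with a real measurement server:

```python
# A minimal sketch of what a download-speed test measures: bytes
# transferred over elapsed time, converted to megabits per second.
# This is NOT the NDT tool itself, just an illustration; the URL
# below is a placeholder, not a real server.
import time
import urllib.request

def to_mbps(num_bytes, seconds):
    """Convert a transfer of num_bytes over seconds to megabits/s."""
    return (num_bytes * 8) / (seconds * 1_000_000)

def measure_download(url):
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    return to_mbps(len(data), time.monotonic() - start)

# Example (placeholder URL):
# print(f"{measure_download('http://example.com/test.bin'):.1f} Mbps")
```

NDT goes further, estimating latency and jitter and diagnosing bottlenecks, but the basic quantity reported to the user is this throughput figure.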

Since M-Lab launched, a number of partners have joined to add new tools, improve the platform, and make the data more accessible. One of M-Lab's core goals is to help advance network research, and we're thrilled to have the FCC contribute to this effort as well. All M-Lab test results are made open and publicly available so that researchers can build on and learn from the data without restriction. By pointing users to this tool, the FCC is contributing to this open pool of broadband data. (Note that as part of these tests, the FCC asks users to submit their addresses; to be clear, M-Lab is not collecting any of this information.)

The FCC has also said that the forthcoming National Broadband Plan will recommend different measures to improve broadband transparency. As we stated in previous comments, we think it's important to consider the complementary ways it can use multiple measurement and data collection methodologies, and we look forward to seeing what else the Plan recommends.

For now, you can head over to to try out this first step.

Posted By Google Public Policy Blog to Google Public Policy Blog at 3/11/2010 01:45:00 PM


Wireless National R&E Test Bed (WiNTeB) - NSF workshop

[While I think a wireless national testbed is an excellent idea, I think it should be built in parallel with a "production" 5G wireless R&E network as described in my earlier posts. Students and "non-professional" researchers can come up with a lot of creative applications if given the freedom to do so -- BSA]

NSF Workshop on a Wireless National Test Bed (WiNTeB)

May 5-6, 2010, at the Hilton Hotel, 950 North Stafford Street, Arlington, Virginia. There is a current and growing need for a Wireless National scale Test Bed (WiNTeB). WiNTeB could support research in application areas such as sensor nets and healthcare.

Possible WiNTeB Applications & Approach: start with relatively simple and constrained experiments with application software in end-user devices, moving on from there as technical and operational procedures to protect the underlying networks are proven:

• Experimental applications that run on a range of existing devices
• Heterogeneous networks - combining cellular, wireless broadband and GENI wired media
• Medical research involving large numbers of people with bio sensors tied to their cell phones
• Sensor nets
• Mobile applications split into a client side and a service side running in a compute cloud
• Overlay/mixed reality involving user interaction with synthesized environments
• Swarm behavior and co-ordination of actions between users across large gaps of time and space
• Smart grid
• Collaborative networking
• Dynamic spectrum allocation
• Ubiquitous computing infrastructure
• RAN & backhaul, as technical and operational network safeguards are proven

One possible way of achieving this is by creating a non-profit Mobile Virtual Network Operator (MVNO) that would contract with wireless service providers to obtain access for researchers to national scale networks. This MVNO structure is currently well established in commercial applications, but has not been used to date for research infrastructure. WiNTeB will:

• Extend the limits of scope, geographic extent, size, and meaningfulness of research results
• Lower the costs of large-scale experiments
• Democratize research by making testing facilities available to a broader cross-section of academia, industry and government
• Provide recent graduates the experience and knowledge needed to be productive in the larger wireless industry
• Improve network robustness, reliability, and security
• Accelerate innovation leading to advanced services for the public, industry, civil and military government
• Improve US industrial competitiveness

Workshop Goal: To develop a common understanding amongst the stakeholders of what the benefits and challenges are in building WiNTeB. Stakeholders include:

• Academic researchers interested in using WiNTeB for applications research, such as public health studies involving large numbers of people using cell phones equipped with bio sensors
• Academic researchers interested in using WiNTeB to explore how to improve wireless networks
• Wireless service providers who might provide facilities for WiNTeB under MVNO contracts
• Wireless equipment, software, semiconductor, and component companies who might want to perform experiments on WiNTeB
• Representatives of other government and industry organizations sponsoring research that might benefit from the availability of WiNTeB

The workshop will result in a report that will frame and guide efforts to create WiNTeB, including recommendations to research agencies.

Workshop Attendance: to be limited to approximately 40 invited participants, selected on the basis of a one-page PDF description of themselves and why they should be part of the workshop, sent with "WiNTeB Workshop Application" in the subject line to by March 29. Confirmation invitations will be returned by April 5. A few places may be kept for late applicants. NSF funding for the workshop will not cover travel.

Additional Information:

Workshop Organizers: Mark Cummings (Kennesaw State University) and James Kempf (Ericsson Labs USA), with support from Chip Elliott and Aaron Falk (BBN/GENI). NSF funding pending.


Tuesday, March 9, 2010

More on building a "5G" wireless R&E cellphone network

I would like to thank the many people who responded to my original posting on this subject and provided additional information. As I mentioned in my previous blog, R&E networks are ideally positioned to deploy these next generation wireless mobile networks, as they have the extensive bandwidth needed to backhaul data from wireless hotspots at universities and open access community networks. Such a service can also generate valuable revenue for both the R&E networks and universities, while saving students and researchers outrageous roaming, texting and data fees. There are probably many issues still to be sorted out, but I think this is once again an excellent opportunity for R&E networks to show leadership, as they did with the original deployment of the Internet. Most incumbent cell phone companies will have little interest in deploying such networks, as it would clearly undermine their existing revenue stream. So the only organizations that have the necessary technical skills, motivation and wherewithal to deploy wireless R&E networks are the same ones who first brought you the Internet. Eliminating ridiculous cell phone charges will hopefully also stimulate new applications and services for wired and wireless networks. It is also rumored that both Cisco and Google will be releasing products and solutions later this year.

5G wireless networks should not be confused with a similar cellphone standard called UMA. Several universities have installed Unlicensed Mobile Access (UMA), sometimes integrated with unified messaging services. UMA is said to provide roaming and handover between GSM, UMTS, Bluetooth and 802.11 networks. It is largely focused on voice roaming as opposed to data roaming applications. But it is essentially a telco cell phone standard that is hideously complex and not recommended for R&E networks.

The new standards under development come from the data side of networking, particularly the IEEE 802.11 Ethernet standards track. The other assumption is that universities will not have to deploy any voice server or PBX, as all applications will be hosted in the cloud (presumably a zero-carbon data center). This is a big change for most university telecom staff, but it follows the trend of hosted applications such as e-mail, blogs and websites, which are all moving to clouds hosted by commercial providers. It is expected that applications like Facebook and Buzz will soon be integrated with cloud-based voice services, making the old stand-alone voice cell phone service an anachronism.

For those who are interested, there are two new IEEE Wi-Fi standards in development that will enable the first 5G networks to be deployed later this year. Essentially, these new standards can be considered Eduroam on steroids. A key assumption to note is that the mobile cellphone is essentially a mobile Internet data device. Supporting traditional mobile cell phone protocols and hand-offs would make the idea untenable. Instead, voice is seen as just another IP application.

802.21 is an emerging IEEE standard. It supports algorithms enabling seamless handover between networks of the same type as well as handover between different network types, also called media-independent handover (MIH) or vertical handover. The standard provides information to allow handing over to and from cellular, GSM, GPRS, WiFi, Bluetooth, 802.11 and 802.16 networks through different handover mechanisms.

802.21 will allow roaming between 802.11 networks and 3G cellular networks. A cellular phone user in the midst of a call should be able to enter an 802.11 hotspot and be seamlessly handed off from a GSM network to the 802.11 network, and back again when leaving the hotspot.
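The handover decision described above can be caricatured as preference-ordered network selection. This toy policy is purely illustrative; the real Media Independent Handover framework is far richer, but the decision step it enables looks roughly like this:

```python
# Toy vertical-handover policy: prefer the fastest network in range,
# fall back to cellular otherwise. The preference order is assumed
# for illustration; real 802.21 MIH is far richer than this.
PREFERENCE = ["wifi", "wimax", "3g", "gsm"]  # fastest first

def pick_network(available):
    """available: set of network types currently in range."""
    for net in PREFERENCE:
        if net in available:
            return net
    raise RuntimeError("no network in range")

print(pick_network({"gsm", "wifi"}))  # entering a hotspot: picks wifi
print(pick_network({"gsm"}))          # leaving it: back to gsm
```

What 802.21 actually standardizes is the information and event services that make such a decision possible across heterogeneous link layers.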

IEEE 802.11u is a proposed amendment to the IEEE 802.11-2007 standard to add features that improve interworking with external networks.

IEEE 802.11 currently assumes that a user is pre-authorized to use the network. IEEE 802.11u covers the cases where the user is not pre-authorized. A network will be able to allow access based on the user's relationship with an external network (e.g. hotspot roaming agreements), or indicate that online enrollment is possible, or allow access to a strictly limited set of services such as emergency services (client to authority and authority to client).

From a user perspective, the aim is to improve the experience of a traveling user who turns on a laptop in a hotel many miles from home. Instead of being presented with a long list of largely meaningless SSIDs, the user could be presented with a list of networks, the services they provide, and the conditions under which the user could access them.
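That selection step could look something like the sketch below: each network advertises its services and roaming partners, and the client filters on its own agreements instead of guessing from SSIDs. The field names and data are invented for illustration and are not taken from the 802.11u amendment:

```python
# Each network advertises services and roaming partners; the client
# filters on its own roaming agreements instead of guessing from
# SSIDs. All field names and data are invented for illustration.
networks = [
    {"ssid": "HotelGuest", "services": ["internet"], "roaming": []},
    {"ssid": "CampusNet", "services": ["internet", "voip"], "roaming": ["eduroam"]},
    {"ssid": "EmergencyOnly", "services": ["emergency"], "roaming": []},
]

def usable(net, my_agreements=frozenset({"eduroam"})):
    """True if the network honors one of our roaming agreements."""
    return bool(set(net["roaming"]) & my_agreements)

print([n["ssid"] for n in networks if usable(n)])  # ['CampusNet']
```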


Wednesday, March 3, 2010

More on new revenue opportunities for R&E and open access networks - building next generation 5G wireless network

I have long argued that R&E networks not only have an important role in helping universities and scientists pursue their research objectives, but can also play a critical leadership role in defining new business, health care, and education opportunities for society at large. It was the university R&E networks that first brought us the Internet, and they have played an important catalyst role in defining new low-cost optical network architectures and in the development of open access networks by connecting anchor institutions and facilitating transit exchanges, among other initiatives. As I mentioned in my previous blog, I think they can now also play an important role in helping universities and schools reduce their energy costs and CO2 footprint.

Another important area where R&E networks can play a new leadership role is the development of next generation wireless networks, which I have labeled “5G” networks. The fifth “G” stands for green. The idea is based on an original concept developed by David Reed at MIT to deploy a national wireless R&E network. The green bits are my embellishment of his original idea.

The ability of R&E and open access networks to now provide wireless services has been enabled by the evolution and flattening of the Internet, as documented by John Markoff in his NY Times article and as I outlined in my paper originally commissioned for submission to the FCC as part of the Network Neutrality hearings. The development of a content-centric network with local connectivity makes delivery of a low-cost 5G network possible, because you no longer have to build an extensive backhaul network.

As noted recently by many pundits, the big challenge facing today’s wireless 3G networks is data overload. Many industry experts are talking about “mobile offload” to deal with the huge data volumes now being carried on wireless networks. With mobile offload, large data streams from a wireless device (or tower) would be redirected through the nearest WiFi or “white space” (the old UHF TV spectrum) base station wherever possible. This offloads the data from the congested 3G/4G networks, which often have only microwave backhaul links.

Demand for mobile wireless access to the Internet is expected to explode in the next few years. Increasingly, most university students and researchers are using wireless devices such as iPhones and Blackberrys, or connecting their laptops with 3G sticks, as their primary interface to the Internet. As well, mobile and wireless devices are being used in research fields such as environmental, geological, agricultural and forestry sensing. These mobile sensor networks need low-cost, high-bandwidth connectivity back to the host research institution. In addition, many universities are offloading student (and some research) applications like e-mail to Google and others, so primary access to those applications is not through the institution’s LAN, but directly over the Internet. Increasingly, the primary form of electronic communication for many young people is not e-mail but texting and Facebook, which again need not be carried over the university LAN. It is not only students who are changing their Internet habits: wireless access to social networking tools like Twitter and LinkedIn is becoming increasingly important to many researchers. Wireless access to clouds and a variety of cyber-infrastructure services will also become critically important.

What I propose is that R&E and open access networks become “5G” wireless networks, where the primary connection for wireless devices is a transparent connection to nearby WiFi and/or “white space” base stations at R&E-connected institutions. WiFi mesh radio networks can significantly extend coverage off campus, especially if deployed in partnership with open access fiber networks. If no WiFi or white space connection is available, the device falls back to the default backup 3G/4G network. It is proposed that all participating WiFi and white space base stations be powered solely by micro wind mills, solar panels and/or micro hydro. If there is no energy for the WiFi or white space base station, the wireless device would transparently connect to the much slower default backup 3G/4G network.
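The fallback rule described above reduces to a simple decision: use a green-powered WiFi or white space base station when one is in range and has energy, otherwise drop to the 3G/4G backup. A minimal sketch, with all names illustrative:

```python
# Pick a link per the proposed offload rule: a green-powered WiFi or
# white space base station if one is in range and currently has
# energy, otherwise the slower default 3G/4G backup network.
# All names here are illustrative, not from any real device stack.
def choose_link(stations):
    """stations: list of (kind, has_energy) tuples for stations in range."""
    for kind, has_energy in stations:
        if kind in ("wifi", "whitespace") and has_energy:
            return kind
    return "3g/4g"  # default backup network

print(choose_link([("wifi", False), ("whitespace", True)]))  # whitespace
print(choose_link([("wifi", False)]))                        # 3g/4g
```

The interesting part of the proposal is that "has energy" is a real runtime condition here, since the base stations would run solely on wind, solar or micro hydro.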

R&E networks and their partner institutions would be the customer-facing service provider to students and faculty, perhaps also in partnership with the 3G/4G network provider. They would charge users a small fee for this service, significantly less than what students now pay to the cell phone companies with their concerted practice of outrageous charges for texting and roaming. The R&E network would contract with an existing 3G/4G carrier to be the default service provider where there is no WiFi or white space signal from a participating institution, for example when students are off campus. This business model already exists with what are called virtual network cell phone providers, like Virgin. The R&E network would also provide a voice gateway to the PSTN for voice calls carried over the WiFi or white space connections.

Understandably, most of the incumbent cell phone companies would not be keen on this model, but in jurisdictions where there is competition from both domestic and international players, I suspect some new entrants might be interested in this business model.

Some of the issues that need to be addressed are:

(a) Where do we find wireless devices that will automatically and seamlessly switch from an authorized WiFi or white space node to 3G/4G? How do you extend 3G/4G authentication and signaling to the WiFi nodes over the R&E network? How do you integrate with SIMs and RADIUS?

(b) How do you provide seamless mobile services between WiFi and 3G/4G? Should Skype or Google Voice be the default voice application? Can this be done transparently and seamlessly?

(c) How do we co-manage WiFi and white space base stations between the R&E network and participating institutions? (We can do this with most WiFi base stations.)

(d) Will cell phone companies build devices for this type of market? Will WiFi companies build appropriate green-powered base stations that can interoperate with 3G/4G data?

Your feedback on this idea is most welcome.