Wednesday, April 28, 2010

Cook Report on "Building a National Knowledge Infrastructure"

http://www.cookreport.com/


Building a National Knowledge Infrastructure: How the Dutch Did It Right When So Many Others Got It Wrong

In 1965, French film director Jean-Luc Godard made a film noir called
Alphaville about a grim future in which all the world’s infrastructure
is controlled by a computer. Imagine that. Towards the end of
Alphaville, the hero pulls the computer’s plug. Suddenly nothing
works.
The world goes blind. People feel their way along walls. In 1965, infrastructure controlled by a computer was science fiction. And today? Today, lives and economies depend on Information and Communications Technology (ICT) infrastructure more every day. The quality of the national ICT infrastructure increasingly defines the quality of life as well as opportunity for the future.

It used to be the United States, with its Internet initiative and “intelligent networks,” that claimed the leadership position. Now that’s the science fiction. Today it is the Netherlands that has consistently taken the initiative, made the investments and delivered the goods.

The Dutch have developed blazingly fast hybrid optical networks of
hitherto unimagined efficiency and cost effectiveness.

They have taken the technique of user-controlled lightpaths developed by CANARIE in Canada and incorporated it, on a robust platform of Web services and collaborative e-science middleware, into optical network technology. So their networks can not only transmit huge data files from one side of the world to the other almost instantaneously, but increasingly also support the collaborative and multi-disciplinary practices that are being described as “the 4th paradigm”: data-intensive science.


Building a National Knowledge Infrastructure

Unlike the United States — where private interests have walled off and Balkanized much of the Internet — the Dutch are committed to collaboration both inside and outside the country. They have been a proactive force for international collaboration with scientists and network specialists in the EU, the United States and elsewhere through their Global Lambda Integrated Facility, or GLIF.

And the more you learn, the more it is apparent that the Netherlands
is building the electronic network and knowledge infrastructure on
which the economy of the 21st century will be based. This report will
explain their continuing technology direction -- a direction that on
its own level is quite impressive.

So why is this happening in a country of only 17 million people, small and crowded, with one of the highest population densities in the world? Why is this not happening in the United States or Canada or China or Brazil? Those are all big countries, rich in resources, that are supposed to “own the future.” This is an important question.

We can begin with what anybody who has worked in technology knows. The real problems with technology are not, typically, the technology. The real problems have to do with intentions, models, governance, implementation, resourcing, and user support. It’s not the technology, it’s what you do with it. And that, in turn, depends upon what you have intended to do with the technology.

So when we ask the question, why are the Dutch doing it right when so many other countries got it wrong, the answer begins with the fact that they have better intentions and a more inclusive process. In demonstrating admirably farsighted planning and negotiated discussion among their stakeholders, the Dutch are leading the world in making the ICT technology transition.

This is the change described by Carlota Perez that is common to all technology revolutions. Speculative or finance capital must no longer predominate in the support and development of ICT. Society must shift to the use of productive capital. In other words, it must use infrastructure money to install these ICT resources in society and treat them as knowledge infrastructure. They become the logical follow-on to roads and highways, canals, railroads, electric grids, airports, water and sewage systems and electric plants. In the end they are simply an integral part of the basic infrastructure of an advanced and civilized capitalist nation.


So, once again, it is not the technology so much as it is the thoughtful and careful way that technology policy is determined. It is the way that policy is turned into reality. It is the way in which the Dutch continue to push edges in a never-ending pursuit of technical excellence and extreme performance. And it is the counter-intuitive strategy in which the geek values of open source software and collaborative networks are harvested to create public/private partnerships that evolve into business opportunities and a dynamic and competitive national economy.

John Hagel and John Seely Brown recently updated their sobering 2009 Shift Index: Measuring the Forces of Long-Term Change, which reveals that in the U.S., despite an economic focus on private good rather than public good, American businesses, including telcos, now earn 75% less return on assets than they did in 1965. One of the reasons, according to John Hagel, speaking at the 2009 SuperNova Conference in San Francisco, is that businesses have not been able to rationalize the disruptive advances in technology which (thank you, Moore’s Law) never stop advancing.

The severity of events is made worse by the doctrine that the independence of the private carrier is sacrosanct. The problem is that the direction of technology has been moving ever more rapidly toward the creation of network capability of almost limitless abundance. In contrast to this technology push, the investment pull of the rules by which the share-owner corporation is governed demands that management take the opposite course and establish a regime based on scarcity: measured usage, constricted bandwidth, constricted user freedom and charging the user the very maximum that traffic will bear. The privatized carrier becomes a predator that feeds on the society that, in theory, it serves. The result is an enormous gap between the technological capabilities attainable with state-of-the-art optical network technology and the reality enforced by share-owner networks in their respective societies.

In the Netherlands, the ICT infrastructure ecosystem does have a strategy. It works for them. And it works for us. And that is to embrace those disruptive advances and use the improved productivity for the public good.
The COOK Report contends that the Netherlands, through the good fortune of rather unique circumstances over the past decade, has been able to articulate a vision whereby it begins to treat its information and communication technology investment as an investment in public infrastructure rather than a private, share-owner-determined enterprise.

The nation’s telecommunications infrastructure is treated as a public rather than a private good. Partly this is for the purposes of science and research. But it has allowed the country to develop world-leading technology that operates alongside the share-owner-maintained voice network of KPN as well as the networks of the MSOs (cable TV companies) of the Netherlands.

In trying to understand the emergence of the Netherlands as a leader in ICT infrastructure, again, this is not just about the network technology or the network applications. Rather, the key lesson to take away is that the Dutch network and research effort has been built with exceptional attention paid to the economic impact of the network as infrastructure that contributes to the Dutch national interest.

And beyond that is another, parallel story. It’s the story of how history and economic circumstance shaped the character and confidence of the Dutch. It tells what can happen when a nation, toughened by centuries of challenges that would (literally) sink most countries, learns how to work together to leverage technology at the infrastructure level and overcome impossible odds. It’s a great story, and it starts in Chapter II.

Contents

I. How the Dutch Did It Right: When so many others got it wrong

II. The Netherlands National ICT Research Infrastructure: History, character, policy and pragmatism

III. The Direction of ICT Infrastructure in the Netherlands in Early 2010: An interview with Wim Liebrand, Kees Neggers and Cees de Laat

IV. The Ascent of e-Science: Promise of the 4th paradigm

V. Making E-science Work: The Middleware Solution. An interview with Bob Hertzberger

VI. Growing E-Science Domains for The Netherlands: Roadmap for a next generation of science

VII. Potential Customers with Global Agendas: An interview with David Zakim, MD

VIII. How a Progressive ICT Infrastructure Benefits the Economy: The innovation engine as open fabric

IX. SURF as an Economic “Midwife” for Technology Transfer: An interview with Hans Dijkman, Kees Neggers and Bob Hertzberger

X. Coming to Conclusions: Realizing the benefits of a re-usable infrastructure

XI. Re-thinking American Infrastructure: What has to change




------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Monday, April 26, 2010

Alberta innovation: cap and trade and next generation broadband

[Many are surprised to discover that Canada is the biggest oil exporter to the US, shipping almost double the oil that the US imports from Saudi Arabia. Most of this oil unfortunately comes from the Alberta tar sands – which are one of the largest emitters of CO2 in North America.

Alberta is as large as Texas and, not unlike Texas, is generally perceived as a right-wing, conservative, oil-rich province. But despite this reputation and its production of dirty oil, Alberta has been a world leader in deploying cap and trade and in building a province-wide broadband network. Alberta implemented one of the first cap and trade programs in the world (albeit an intensity-based system as opposed to a true cap and trade system) years before most people had even heard of the expression “cap and trade”. Alberta also deployed SuperNet – one of the first government-sponsored open access networks to provide Internet service throughout Alberta.

One of the biggest issues facing Alberta is the potential for the US Congress to pass a national cap and trade program. If this happens, the Canadian government has publicly committed to implementing a matching program in Canada to ensure there are no trade distortions between Canada and the US with respect to the cost of carbon. The TD Bank and the Pembina Institute estimate that this will cost Alberta anywhere between $40 and $70 billion in carbon offsets, which it will need to purchase from the rest of Canada and/or internationally in order to comply with these programs. This will be a huge transfer of wealth out of Alberta in order to comply with a North American cap and trade program. It underlines the problem with many proposed cap and trade systems: they can cause huge regional variances and disparities in terms of money flows.

While I believe anthropogenic warming is a real and present danger to this planet, I am not a big fan of cap and trade or carbon taxes. Cap and trade systems have worked extremely well in eliminating sulfur dioxide pollution, but those have been narrowly defined, relatively small-scale markets. CO2 cap and trade is a much larger beast, as it touches so many industry sectors. Many CO2 abatement strategies are also very suspect, and many CO2 cap and trade systems have already been tainted with scandal and dubious claims of CO2 reduction. Combined with such huge regional financial disparity in terms of its cost, I think cap and trade will be a difficult sell in Canada, as anywhere else in the world. The same issue lies with carbon taxes – although they are more likely to distribute the pain equitably, nobody wants more taxes disappearing into the maws of government (even though most governments claim such taxes will be revenue neutral – we have all heard that line before).

There are now several proposals for alternatives to cap and trade such as “cap and dividend” and “cap and reward”. Jim Hansen has also come out in favour of a scheme similar to cap and dividend called the “People’s Climate Stewardship Act”, which is very similar to the Cap and Dividend bill now before Congress. In both cases there is an effective carbon tax and cap, but the revenues are turned over directly to consumers, who are then free to spend the money on reducing their energy bills. A variant of “cap and dividend” is “cap and reward”, where the money raised from a carbon tax and cap is also handed over to consumers, but they can only spend it on activities that further reduce their carbon footprint. Such activities may include next generation broadband, tele-working, distance education, downloading virtual goods over the Internet and so on. Cap and reward will hopefully create a virtuous circle of carbon reduction in all walks of life.
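To make the difference between the two schemes a little more concrete, here is a toy calculation in Python. All of the numbers (fee per tonne, covered emissions, household count) are hypothetical placeholders chosen only to illustrate the flow of money; they are not estimates from this post or from the TD Bank/Pembina work.

# All numbers below are hypothetical placeholders, chosen only to show the
# flow of money - they are not estimates from this post or from TD/Pembina.
CARBON_FEE_PER_TONNE = 30.0      # assumed carbon fee, $/tonne CO2
COVERED_EMISSIONS_T = 200e6      # assumed tonnes of CO2 under the cap
HOUSEHOLDS = 1.4e6               # assumed number of households

revenue = CARBON_FEE_PER_TONNE * COVERED_EMISSIONS_T
per_household = revenue / HOUSEHOLDS
print(f"returned to each household: ${per_household:,.0f} per year")

# Cap and dividend: the same amount comes back as unrestricted cash.
dividend = {"cash": per_household}

# Cap and reward: the same amount comes back as credits that can only be
# spent on carbon-reducing goods and services.
reward = {"low_carbon_credits": per_household,
          "eligible_spending": ["next generation broadband", "tele-working",
                                "distance education", "virtual goods"]}

print(dividend)
print(reward)

The sketch simply shows that both schemes return the same value to households; cap and reward differs only in restricting what the money can be spent on.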

Alberta’s cap and trade intensity program is also running into many of the same problems as other cap and trade programs, in that it is having a difficult time finding well-qualified projects that will reduce carbon in a measurable and verifiable way. I think Alberta has the opportunity to once again show world leadership by adopting a province-wide cap and reward program as an alternative solution. Much in the same way that Alberta deployed North America’s first cap and trade system and the first government-funded province-wide open access network, it could once again set the mark by deploying the world’s first cap and reward system. Rather than waiting for the inevitable cap and trade bill to come out of Congress, whether it is this year or 10 years from now, Alberta could make a pre-emptive strike by implementing a cap and reward program where the proceeds going to consumers could be used for the purchase of low-carbon goods and services produced in Alberta. This would provide Alberta’s industry and education sectors with new revenue opportunities and demonstrate an alternative approach to addressing the global challenge of CO2 emissions.

For example, Alberta operates Canada’s only open university – Athabasca University. Its course programs and degrees could be offered for free in exchange for the offset dollars earned by families under a cap and reward system. Clearly distance education over the Internet has a very small carbon footprint. Alberta has also been a leader, through its provincial R&E network Cybera, in deploying advanced cyber-infrastructure, clouds and grids. It also operates one of the nodes on the Greenstar network – the world’s first zero-carbon Internet. Again, these low-carbon activities, as well as related industry projects, could be funded under a cap and reward program.

But most importantly, Alberta needs to address the challenge of deploying a next generation broadband network. SuperNet was a wonderful achievement for getting broadband deployed to rural areas, but it does not address the challenge of building high-speed, open access, competitive broadband in the urban centers. A “cap and reward” system could easily pay for such a network deployment. Many of the energy companies in Alberta that would need to collect the carbon fees already have extensive fiber networks. This could be a loss-leader opportunity for them to spend the money on their customers’ behalf in building a next generation open access fiber-to-the-home network.

Of course, all these low-carbon activities need to be properly quantified to prove that they genuinely reduce CO2 emissions. Organizations like the Canadian Standards Association and ClimateCheck, amongst others, are now developing the necessary standards for the ICT sector to enable a successful cap and reward program.

Despite its reputation as a right-wing conservative province, Alberta has a unique opportunity to use its oil wealth on solutions that do not penalize the province in terms of CO2 emissions, but instead create new opportunities for its businesses and education sectors by promoting a low-carbon society through a cap and reward program – BSA]

------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Sunday, April 25, 2010

Enabling Innovation with next generation wireless 5G Internet + clouds - technical details

[As I have blogged several times, I believe there is huge potential to create a platform for innovation through the integration of next generation wireless Internet with mobile cloud applications. I recently gave a talk on this subject at an OttawaU WiSense seminar - http://www.slideshare.net/bstarn/ottawa-u-deploying-5g-networks

The new 5G wireless networks have two major features – they can be powered solely by renewable energy and they enable tight integration with clouds. The WiFi nodes can be powered solely by renewable energy using power over Ethernet or 400 Hz multiplexed power on existing 60 Hz power lines. With overlapping coverage and the use of 3G/4G as a backup, this type of arrangement is quite feasible. An early example of this architecture is the Green Star Network (http://www.greenstarnetwork.com/). The Internet applications that will run on mobile devices will largely be supported by clouds and deeply distributed content networks, since consumer mobile devices will not have the computing power to support advanced applications, such as using the mobile device as a sensor. Some technical details on what a 5G network would look like, along with application examples, are given in my talk above. Also see my paper on the integration of 5G networks and clouds at http://docs.google.com/Doc?docid=0ARgRwniJ-qh6ZGdiZ2pyY3RfMjc3NmdmbWd4OWZr&hl=en


Integrating WiFi with 3G/4G networks is not a new idea. But in the past, WiFi networks were seen as a second cousin to the carrier’s mobile network, and even if the WiFi network provided the local service, the end-to-end connection was operated and controlled by the carrier. With 5G networks the exact opposite arrangement is possible, where the enterprise WiFi network is the controlling network and only uses 3G/4G as backup and fill-in when there is no WiFi service. Several companies are already in this space, such as BelAir Networks (www.belair.com) and Stoke (www.stoke.com).
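As a rough illustration of the “WiFi is the controlling network, 3G/4G is only the fill-in” idea, here is a small Python sketch of the kind of uplink-selection policy a device or gateway might apply. The interface names and the signal threshold are hypothetical, and a real implementation would live in the connection manager rather than in a script like this.

import subprocess

WIFI_IF = "wlan0"        # hypothetical enterprise WiFi interface
CELL_IF = "wwan0"        # hypothetical 3G/4G backup interface
MIN_SIGNAL_DBM = -75     # hypothetical "good enough" WiFi signal threshold


def wifi_signal_dbm(interface):
    """Read the current WiFi signal level (dBm) via 'iw'; None if unavailable."""
    try:
        out = subprocess.run(["iw", "dev", interface, "link"],
                             capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    for line in out.splitlines():
        if "signal:" in line:                      # e.g. "signal: -58 dBm"
            return float(line.split("signal:")[1].split()[0])
    return None                                    # not associated with any AP


def choose_uplink():
    """Prefer the enterprise WiFi; fall back to 3G/4G only when it is weak or gone."""
    signal = wifi_signal_dbm(WIFI_IF)
    if signal is not None and signal >= MIN_SIGNAL_DBM:
        return WIFI_IF      # WiFi is the controlling network
    return CELL_IF          # cellular is just the fill-in


if __name__ == "__main__":
    print("routing traffic over", choose_uplink())

The design point it illustrates is simply that the cellular link is consulted only when the enterprise WiFi is absent or too weak, which is the reverse of the carrier-controlled model described above.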

I believe R&E networks can be early leaders in this market, as they have high-capacity backbone networks and their clients, such as universities and schools, have extensive WiFi networks. Some institutions have deployed an early progenitor of a 5G network with tools like Eduroam. In countries with competitive telecom markets there are now a number of virtual mobile wholesale service providers who would be willing to provide the backup 3G/4G network. A good example is Harbinger’s proposed global wholesale LTE service http://gigaom.com/2010/03/27/harbinger-lte-network/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+OmMalik+(GigaOM)&utm_content=Google+Feedfetcher


Analysts have always wondered why Cisco has not entered the 3G/4G market. It has clearly dominated so many other aspects of telecom and the Internet; the one obvious missing market segment in Cisco’s strategy is mobile. I suspect that Cisco is also planning a WiFi over 3G/4G strategy. It makes sense. Data networks are a completely different beast than voice networks, and Cisco has never dabbled in voice network technology. No wonder: today’s mobile networks are very complex, using arcane and convoluted voice protocols. As more and more mobile traffic becomes Internet based, a new, more data-efficient, far less complex wireless Internet architecture is required. I would not be surprised if we see some major announcements from Cisco and Google in this space later this year. Cisco brings its WiFi network expertise to the table, and Google brings the cloud infrastructure plus Android to support the applications.


Japan is already moving in this direction as noted in an e-mail exchange with Herman Wagter on Gordon Cook’s ARCH-ECON list:

“The demand for ubiquitous broadband experience in Japan and its effects is a lesson we should learn from and prepare for.

I understand Softbank is subsidizing customers to get as many picocells connected to home-broadband-wirelines as possible, to offload as much traffic as possible to residential lines. This strategy is much faster and flexible (add where the demand is) than building more masts. In urban areas, where the demand is the highest it is hard to add cell towers (negotiations, permits, costs) and easy to add picocells to residential lines.

In a couple of blogposts I recently have elaborated on a realistic and cheap architecture, using existing technology, to support various kinds of use of the broadband line other than Internet access, like these picocells. Broadband as a utility, taking virtualization one step further.

http://www.dadamotive.com/2010/04/gigabit-society-broadband-as-a-utility.html
http://www.dadamotive.com/2010/04/broadband-as-a-utility-2.html
http://www.dadamotive.com/2010/04/broadband-as-a-utility-for-all-technologie”


Finally, here are some excerpts from a blog post about the innovation potential of clouds:

Startups, cloud computing, and the freedom to innovate
http://www.zapthink.com/2010/04/21/startups-cloud-computing-and-the-freedom-to-innovate/
Cloud computing is grabbing a lot of headlines these days. As we have seen with SOA in the past, there is a lot of confusion of what cloud computing is, a lot of resistance to change, and a lot of vendors repackaging their products and calling it cloud-enabled. While many analysts, vendors, journalists, and big companies argue back and forth about semantics, economic models, and viability of cloud computing, startups are innovating and deploying in the cloud at warp speed for a fraction of the cost. This begs the question, “Can large organizations keep up with the pace of change and innovation that we are seeing from startups?”

Innovate or Die
Unlike large well established companies, startups don’t have the time or money to debate the merits of cloud computing. In fact, a startup will have a hard time getting funded if they choose to build data centers, unless building data centers is their core competency. Startups are looking for two things: Speed to market and keeping the burn rate to a minimum. Cloud computing provides both. Speed to market is accomplished by eliminating long procurement cycles for hardware and software, outsourcing various management and security functions to the cloud service providers, and the automation of scaling up and down resources as needed. The low burn rate can be achieved by not assuming all of the costs of physical data centers (cooling, rent, labor, etc.), only paying for the resources you use, and freeing up resources to work on core business functions.
I happen to be a CTO of a startup. For us, without cloud computing, we would not even be in business. We are a retail technology company that aggregates digital coupons from numerous content providers and automatically redeems these coupons in real time at the point of sale when customers shop. To provide this service, we need to have highly scalable, reliable, and secure infrastructure in multiple locations across the nation and eventually across the globe. The amount of capital required to build these datacenters ourselves and hire the staff to manage them is at least ten times the amount we are spending to build our 100% cloud based platform. There are a handful of large companies who own the paper coupon industry. You would think that they would easily be the leaders in the digital coupon industry. These highly successful companies are so bogged down in legacy systems and have so much invested in on-premise data centers that they just cannot move fast enough and build the new digital solutions cheaply enough to compete with a handful of startups that are racing to sign up all the retailers for this service.
Oh the irony of it all! The bigger companies have a ton of talent, well established data centers and best practices, and lots of capital. Yet the cash strapped startups are able to innovate faster, cheaper, and produce legacy free solutions that are designed specifically to address a new opportunity driven by increased mobile usage and a surge in the redemption rates of both web and mobile coupons due to economic pressures.
My story is just one use case where we see startups grabbing accounts that used to be a honey pot for larger organizations. Take a look at the innovation coming out of the medical, education, home health services, and social networking areas to name a few and you will see many smaller, newer companies providing superior products and services at lower cost (or free) and quicker to market. While bigger companies are trying to change their cultures to be more agile, to do “more with less”, and to better align business and IT, good startups just focus on delivery as a means of survival.
Legacy systems and company culture can be boat anchors
Startups get to start with a blank sheet of paper and design solutions to specifically take advantage of cloud computing whether they leverage SaaS, PaaS, or IaaS services or a combination of all three. For large companies, the shift to the cloud is a much tougher undertaking. First, someone has to sell the concept of cloud computing to senior management to secure funding to undertake a cloud based initiative. Second, most companies have years of legacy systems to deal with. Most, if not all of these systems were never designed to be deployed or to integrate with systems deployed outside of an on-premise data center. Often the risk/reward for reengineering existing systems to take advantage of the cloud is not economically feasible and has limited value for the end users. If it is not broke don’t fix it! Smarter companies will start new products and services in the cloud. This approach makes more sense but there are still issues like internal resistance to change, skill gaps, outdated processes/best practices, and a host of organizational challenges that can get in the way. Like we witnessed with SOA, organization change management is a critical element for successfully implementing any disruptive technology. Resistance to change and communication silos can and will kill these types of initiatives. Startups don’t have these issues, or at least they shouldn’t. Startups define their culture from inception. The culture for most startups is entrepreneurial by nature. The focus is on speed, low cost, results.
Large companies also have tons of assets that are depreciating on the books and armies of people trained on how to manage stuff on-site. Many of these companies want the benefits of the cloud without giving up the control that they are used to having. This often leads them down an ill-advised path to build private clouds within their datacenter. To make matters worse, some even use the same technology partners that supply their on-premise servers without giving the proper evaluation to the thought-leading vendors in this space. When you see people arguing about the economics of the cloud, this is why. The cloud is economically feasible when you do not procure and manage the infrastructure on-site. With private clouds, you give up much of the benefits of cloud computing in return for control. Hybrid clouds offer the best of both worlds but even hybrids add a layer of complexity and manageability that may drive costs higher than desired. We see that startups are leveraging the public cloud for almost everything. There are a few exceptions where, due to customer demands, certain data are kept at the customer site or in a hosted or private cloud, but that is the exception not the norm.
The Zapthink Take
Startups will continue to innovate and leverage cloud computing as a competitive advantage while large, well established companies will test the waters with non-mission critical solutions first. Large companies will not be able to deliver at the speed of startups due to legacy systems and organizational issues, thus conceding to startups for certain business opportunities. Our advice is that larger companies create a separate cloud team that is not bound by the constraints of the existing organization and let them operate as a startup. Larger companies should also consider funding external startups that are working on products and services that fit into their portfolio. Finally, large companies should also have their merger and acquisition department actively looking for promising startups for strategic partnerships, acquisitions, or even buy to kill type strategies. This strategy allows larger companies to focus on their core business while shifting the risks of failed cloud executions to the startup companies.




------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Wednesday, April 21, 2010

Enabling innovation for small business through clouds and R&E networks

[It is generally recognized around the world that small and medium-sized businesses, particularly in the ICT sector, are a major source of innovation and job creation. Governments and economists are always looking for ways to support and enable this sector because of its critical contribution to the economy. The two webcasts below clearly demonstrate how clouds will enable greater innovation in this sector, as they significantly reduce the startup costs for small businesses and immediately provide access to a global market. The Singapore government estimates that a successful startup only needs $50k because of the availability of clouds. It is not only small ICT businesses that benefit: companies in other sectors, such as materials science, can also significantly reduce their upfront capital costs for computers and specialized equipment because of clouds and R&E networks. A good example given in the webcast is how it is possible for a materials science company to start in a garage, because it can access all the computing resources it needs over the cloud and, if it has access to an R&E network, can reach a network-connected scanning electron microscope at a university. This is where R&E networks can play a critical role. Although most R&E networks cannot carry commercial traffic, they usually can carry R&E traffic for commercial organizations. Getting bandwidth to a cloud provider can be expensive, especially if you have large data sets or significant computation requirements. By connecting to an R&E network a small business can get the necessary bandwidth to the cloud that is generally not available from the commercial carriers. Since the traffic is for R&E purposes, and is not a connection between the small business and a commercial client, it generally does not violate the AUP of most R&E networks. Some R&E networks are also looking to become value-added resellers of cloud services to both universities and small businesses in order to obtain a revenue stream from providing this service. Other potential revenue opportunities exist in bundling cloud services with energy consumption and CO2 reduction. The other big field of innovation for small businesses will be the integration of clouds with mobile applications. Tim O’Reilly’s presentation gives a good overview of the potential for this market. Thanks to Jaap van Till for the pointer – BSA]

Collaborative Innovation and a Pull Economy

http://ecorner.stanford.edu/authorMaterialInfo.html?mid=2425
What can extreme surfing and World of Warcraft teach the enterprise? Independent Co-Chairman of the Deloitte Center for the Edge and former Xerox PARC Chief Scientist John Seely Brown holds them up as examples of the power of frequent benchmarking and full industry info-share. He also uses them to show how the core ecosystem can be made stronger by sharing knowledge gathered from learning on the edge. In addition, Seely Brown touches upon his theory of a monumental economic shift from a push to a pull economy, as laid out in his 2010 book, The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion.



The Internet of Things – Clouds + Social Networks + Mobile
http://www.readwriteweb.com/archives/tim_oreilly_explains_the_internet_of_things.php

The Internet of Things is the idea of a web of data provided by things like real-world devices and sensors. The Internet of Things offers a whole new world of opportunities for improved decision making, innovative services and (unfortunately) social surveillance. It's loaded with implications to consider.

------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Tuesday, April 20, 2010

iPhone slowing down the Internet - desperate need for 5G R&E networks

[There are several reports of how Internet traffic is being slowed down by the huge growth in mobile data traffic from devices like the iPhone. Mobile data now exceeds mobile voice in terms of traffic volume. As I mentioned in previous blogs, I think this is a fantastic opportunity for research networks to demonstrate global leadership and create a new environment for innovation. Many of today’s R&E networks were originally established because of concern that traffic volumes would overwhelm the Internet of that time. The conventional thinking then was that we needed QoS to address the shortage of bandwidth. But a few networks such as CANARIE and SURFnet pioneered a different concept of customer-owned fiber networks using new high-capacity DWDM equipment. This strategy was so successful that we now have enormous capacity on the R&E networks. It is time now for R&E networks to address the challenge of the lack of capacity on wireless networks as well. Once again the carriers are looking at QoS and other restrictive practices. Mobile data handoff to the closest R&E network node using 802.11u is one possible approach. Once again SURFnet is in the lead and has an active program to look at integrating their student and faculty wireless solution with their nationwide network – BSA]
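To make the handoff idea a little more concrete, here is a small Python sketch of one way a client or gateway could pick the R&E network node to offload to: probe a list of candidate gateways and choose the one with the lowest round-trip time. The hostnames are placeholders, not real SURFnet or CANARIE nodes, and a production 802.11u deployment would use the standard's own network discovery and selection machinery rather than ad hoc probing like this.

import socket
import time

# Placeholder gateway hostnames - not real SURFnet or CANARIE nodes.
CANDIDATE_NODES = [
    ("gw-amsterdam.example.net", 443),
    ("gw-utrecht.example.net", 443),
    ("gw-groningen.example.net", 443),
]


def rtt_seconds(host, port, timeout=1.0):
    """Approximate the RTT to a node with a TCP connect; None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None


def closest_node(nodes):
    """Return the reachable node with the lowest measured RTT, or None."""
    measured = [(rtt_seconds(host, port), host) for host, port in nodes]
    reachable = [(rtt, host) for rtt, host in measured if rtt is not None]
    return min(reachable)[1] if reachable else None


if __name__ == "__main__":
    print("offload mobile data via:", closest_node(CANDIDATE_NODES))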


For more information on 5G wireless networking
http://billstarnaud.blogspot.com/2010/03/surfnet-lays-ground-work-for-next.html
http://billstarnaud.blogspot.com/2010/03/more-on-new-revenue-opportunities-for-r.html


Akamai report finds slower internet due to mobile growth
http://www.telecompaper.com/news/SendArticle.aspx?u=False

South Korea's average 'Net speed plunges 24%, iPhone blamed
http://arstechnica.com/apple/news/2010/04/south-koreas-average-net-speed-plunges-24-iphone-blamed.ars?utm_source=microblogging&utm_medium=arstch&utm_term=Main%20Account&utm_campaign=microblogging

In the course of three months during 2009, South Korea's average Internet connection speed dropped by a dramatic 24 percent. Think about the magnitude of the decline here: one of the world's most wired countries suddenly sees its overall Internet speeds reduced by a quarter over a few months while similarly positioned countries like Sweden, the Netherlands, and Hong Kong all saw speed increases.
What happened? Blame it on the iPhone.
According to Akamai's recent State of the Internet report, South Korea's bizarre Internet slowdown can largely be traced to the introduction of the iPhone in that country in November 2009. Akamai saw an explosion of unique IP addresses associated with a particular mobile operator (apparently KT, formerly known as Korea Telecom) soon after the phone's launch, indicating broad new iPhone usage.
Unfortunately, this particular mobile provider is slow. "As the average observed connection speed for this mobile provider was a fraction of that observed from wireline connections in South Korea," says the report, "we believe that this launch was likely responsible for the significant drop in South Korea's average observed connection speed in the fourth quarter [of 2009]."
That's... a lot of slow iPhones (well, slow iPhone service, at least). Still, despite a massive drop in average access speeds, Korea remains number one on the worldwide list, with an average of 11.7Mbps. The US, if you were wondering, is at 22nd place with 3.8Mbps.


------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Monday, April 19, 2010

Dis-intermediation of the university via open courseware -- NYTimes: An Open Mind

[Some excerpts from NYtimes – BSA]

http://www.nytimes.com/2010/04/18/education/edlife/18open-t.html?pagewanted=1

Open courseware is a classic example of disruptive technology, which, loosely defined, is an innovation that comes along one day to change a product or service, often standing an industry on its head. Craigslist did this to newspapers by posting classified ads for free. And the music industry got blindsided when iTunes started unbundling songs from albums and selling them for 99 cents apiece.

Some imagine a situation in which the bulk of introductory course materials are online, as videos or interactive environments; students engage with the material when convenient and show up only for smaller seminars. “In an on-demand environment, they’re thinking,

Mr. Schonfeld sees still more potential in “unbundling” the four elements of educating: design of a course, delivery of that course, delivery of credit and delivery of a degree. “Traditionally, they’ve all lived in the same institutional setting.” Must all four continue to live together, or can one or more be outsourced?

Edupunks — the term for high-tech do-it-yourself educators who skirt traditional structures — are piloting wiki-type U’s that stitch together open course material from many institutions and combine it with student-to-student interaction. In September, Neeru Paharia, a doctoral student at Harvard Business School, and four others from the open education field started up Peer 2 Peer University, a tuition-free, nonprofit experiment financed with seed money from the Hewlett and Shuttleworth foundations.

Ms. Paharia doesn’t speak the same language as traditional educators: P2PU “runs” courses. It doesn’t “offer” them. There are currently 16 courses, in subjects as diverse as behavioral economics, music theory, cyberpunk literature and “managing election campaigns” (and all with a Creative Commons license that grants more freedom of use than a standard copyright). Several hundred people are taking classes, Ms. Paharia says.

P2PU’s mission isn’t to develop a model and stick with it. It is to “experiment and iterate,” says Ms. Paharia, the former executive director of Creative Commons. She likes to talk about signals, a concept borrowed from economics. “Having a degree is a signal,” she says. “It’s a signal to employers that you’ve passed a certain bar.” Here’s the radical part: Ms. Paharia doesn’t think degrees are necessary. P2PU is working to come up with alternative signals that indicate to potential employers that an individual is a good thinker and has the skills he or she claims to have — maybe a written report or an online portfolio.

David Wiley, associate professor of instructional psychology and technology at Brigham Young University, is an adviser to P2PU. For the past several years, he has been referring to “the disaggregation of higher education,” the breaking apart of university functions. Dr. Wiley says that models like P2PU address an important component missing from open courseware: human support. That is, when you have a question, whom can you ask? “No one gets all the way through a textbook without a dozen questions,” he says. “Who’s the T.A.? Where’s your study group?”

“If you go to M.I.T. OpenCourseWare, there’s no way to find out who else is studying the same material and ask them for help,” he says. At P2PU, a “course organizer” leads the discussion but “you are working together with others, so when you have a question you can ask any of your peers. The core idea of P2PU is putting people together around these open courses.”

A similar philosophy is employed by Shai Reshef, the founder of several Internet educational businesses. Mr. Reshef has used $1 million of his own money to start the University of the People, which taps open courses that other universities have put online and relies on student interaction to guide learning; students even grade one another's papers.

The focus is business administration and computer science, chosen because they hold promise for employment. He says he hopes to seek accreditation, and offer degrees.

Mr. Reshef’s plan is to “take anyone, anyone whatsoever,” as long as they can pass an English orientation course and a course in basic computer skills, and have a high school diploma or equivalent. The nonprofit venture has accepted, and enrolled, 380 of 3,000 applicants, and is trying to raise funds through microphilanthropy — “$80 will send one student to UoPeople for a term” — through social networking.

A decade has passed since M.I.T. decided to give much of its course materials to the public in an act of largesse. The M.I.T. OpenCourseWare Initiative helped usher in the “open educational resources” movement, with its ethos of sharing knowledge via free online educational offerings, including podcasts and videos of lectures, syllabuses and downloadable textbooks. The movement has also helped dislodge higher education from its brick-and-mortar moorings.

If the mission of the university is the creation of knowledge (via research) and the dissemination of knowledge (via teaching and publishing), then it stands to reason that giving that knowledge away fits neatly with that mission. And the branding benefits are clear.
The Open University, the distance-learning behemoth based in England, has vastly increased its visibility with open courses, which frequently show up in the Top 5 downloads on Apple’s iTunes U, a portal to institutions’ free courseware as well as marketing material. The Open University’s free offerings have been downloaded more than 16 million times, with 89 percent of those downloads outside the U.K., says Martin Bean, vice chancellor of the university. Some 6,000 students started out with a free online course before registering for a paid online course.

Carnegie Mellon’s Open Learning Initiative is working with teams of faculty members, researchers on learning and software engineers to develop e-courses designed to improve the educational experience. So far there are 10 complete courses, including logic, statistics, chemistry, biology, economics and French, which cost about $250,000 each to build. Carnegie Mellon is working with community colleges to build four more courses, with the three-year goal of 25 percent more students passing when the class is bolstered by the online instruction.

The intended user is the beginning college student, whom Dr. Smith describes as “someone with limited prior knowledge in a college subject and with little or no experience in successfully directing his or her own learning.”

It works like this: Virtual simulations, labs and tutorials allow for continuous feedback that helps the student along. The student’s progress is tracked step by step, and that information is then used to make improvements to the course. Several studies have shown that students learn a full semester’s worth of material in half the time when the online coursework is added. More students stick with the class, too. “We now have the technology that enables us to go back to what we all know is the best educational experience: personalized, interactive engagement,” Dr. Smith says.

------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Tuesday, April 6, 2010

Will Clouds make University Computing Services obsolete?

[There has been a lot of buzz about clouds and their future potential for research, energy savings and many other applications. But I think one of the most important features of clouds is that they lower the barriers to innovation. With clouds you can use as little or as much of a resource as you need. This provides great flexibility in developing new applications and making them quickly available to a large user base. Many universities are now starting to take advantage of clouds for research and ICT support services. This trend is likely to accelerate as new open source applications such as the Kuali Student system start to be deployed, which are ideally suited to running on clouds with shared access by many institutions and students.

Below are several more new cloud applications for e-mail and content distribution. The many universities that continue to maintain their own e-mail services and other applications are unlikely to be able to keep up with the new services and applications that will come from the cloud. The writing is on the wall. Already many students use Facebook and texting as their main means of communication, as opposed to outdated e-mail.
Many universities are also using clouds and content distribution networks for delivery of their education and research content. Neptune Canada, for example, uses Akamai to deliver the video of the launch of its undersea network.

What does all this mean for the University Computing Services?

Traditionally they were the keepers of the gate, maintaining the mainframe computers, servers, network services and applications.
But as more and more services and applications move to the cloud, OUTSIDE the campus, the need to maintain physical systems will decline. On the other hand, building and maintaining collaborative applications such as Kuali that use clouds will become increasingly important. Obviously network connectivity and bandwidth will be critical, especially interfacing with the multitude of cloud and content distribution networks that are now being deployed by companies like Akamai, Google, Limelight, Microsoft and others. R&E networks will have an important role in hosting these various cloud and content nodes as close as possible to connected institutions, to minimize latency and delay.

Although there still remain many issues with clouds in terms of privacy, security and interoperability, the trend is clear. Much like the PC, which in the early days snuck onto the campus without the approval of computing services, I suspect the same thing will happen with clouds.
Their ability to reduce costs, enable innovation and create solutions without prior approval will be very tempting for many research departments and users alike. – BSA]

For more thoughts on this subject please see my paper:

A personal perspective on the future of R&E networks

http://billstarnaud.blogspot.com/2010/02/personal-perspective-on-evolving.html

Kuali Student System

http://student.kuali.org/

How the Cloud will revolutionize e-mail

http://www.readwriteweb.com/archives/ready_for_gmail_mashups_google_adds_oauth_to_imap.php

You may or may not be excited by the acronyms OAuth and IMAP/SMTP, but the combination of them all together is very exciting news. Google Code Labs announced this afternoon that it has just enabled 3rd party developers to securely access the contents of your email without ever asking you for your password. If you're logged in to Gmail, you can give those apps permission with as little as one click.

What does that mean? It means mashups based on the actual emails in your inbox. If you've given a 3rd party app secure access to your Twitter account, then you'll be familiar with the user experience. The first example out of the gate is a company called Syphir, which lets you apply all kinds of complex rules to your incoming mail and then lets you get iPhone push notification for your smartly filtered mail.
Backup service Backupify will announce tomorrow morning that it is leveraging the new technology to back up your Gmail account, as well.

People are often wary about the idea of giving outside services access to their email, and well they should. OAuth is designed to make that safe to do. Combined with the IMAP/SMTP email retrieval protocols, it gives an app a way to ask Gmail for access to your information. Gmail pops up a little window and says "this other app wants us to give it your info - if you can prove to us that you are who they say you are (just give Gmail your password) - then we'll go vouch for you and give them the info." The 3rd party app never sees your password and can have its access revoked at any time. You can read more about OAuth, how it was developed and how it works, on the OAuth website.

Why is this so exciting? Because it means that the application we all spend so much time in, where so much of our communication goes on and where you can find some of our closest work and personal contacts - can now have value-added services built on top of it by a whole world of independent developers, without your having to give them your email password.

That's the kind of thing that the data portability paradigm is all about. It's the opposite of lock-in and seeks to allow users to take their data securely from site to site, using it as the foundation for fabulous new services. Google says it is working with Yahoo!, Mozilla and others to develop an industry-wide standard way to combine OAuth and IMAP/SMTP.
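For readers who want to see what this looks like in practice, here is a minimal Python sketch of reading a Gmail inbox over IMAP with an OAuth token instead of a password. It assumes you have already obtained an access token through a normal OAuth consent flow (the address and token below are placeholders); note that current Gmail deployments use the XOAUTH2 SASL mechanism, whereas the 2010 announcement described the original XOAUTH variant.

import imaplib

USER_EMAIL = "someone@example.com"   # placeholder address
ACCESS_TOKEN = "ya29.placeholder"    # placeholder OAuth access token


def xoauth2_string(user, token):
    """Build the SASL XOAUTH2 initial client response."""
    return f"user={user}\x01auth=Bearer {token}\x01\x01"


imap = imaplib.IMAP4_SSL("imap.gmail.com")
# The callable returns the bytes imaplib should send to the server;
# no password is ever involved, only the revocable OAuth token.
imap.authenticate("XOAUTH2",
                  lambda _challenge: xoauth2_string(USER_EMAIL, ACCESS_TOKEN).encode())

imap.select("INBOX", readonly=True)
status, data = imap.search(None, "UNSEEN")   # ids of unread messages
print(status, data)
imap.logout()

The key point is the one the article makes: the third-party code never sees the password, and the token can be revoked at any time.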

http://www.datacenterknowledge.com/archives/2010/03/18/google-boosts-peering-to-save-on-bandwidth/

Is Google's Network Morphing Into a CDN?

Google has dramatically increased its use of peering over the past year, and has also accelerated deployment of local caching servers at large ISPs, making the company's network resemble a content distribution network (CDN) such as Akamai.

The latest information about Google's network structure has emerged from an analysis by Arbor Networks, which has revived debates about Google's bandwidth costs, a topic we've examined several times here at DCK. There's a discussion of YouTube's bandwidth bills today at Slashdot, while Stacey at GigaOm focused on Google's famed infrastructure advantage.

Expanded Use of Caching Servers

Arbor's Craig Labovitz also provides some interesting detail on Google's caching strategy. "Over the last year, Google deployed large numbers of Google Global Cache (GGC) servers within consumer networks around the world," Labovitz writes. "Anecdotal discussions with providers suggests more than half of all large consumer networks in North America and Europe now have a rack or more of GGC servers."

This has, in effect, made Google's network look a lot like CDNs such as Akamai or Limelight Networks, which have caching servers at ISPs around the globe. The Google caching servers allow large ISPs to serve Google content from the edge of their network, reducing backbone congestion and traffic on peering connections.

This has a telescoping benefit on bandwidth savings - Google can use the peering connections to reduce its transit costs, and the local caching to further reduce its peering traffic. For more on Google's peering philosophy and practice, see this 2008 document (PDF).
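A toy model helps show why edge caching has this telescoping effect: repeat requests are answered from a cache inside the ISP, so only misses ever cross the peering or transit link. The Python sketch below is a generic LRU cache, not Google's GGC software; the URLs and cache size are made up for illustration.

from collections import OrderedDict


class EdgeCache:
    """A tiny LRU cache standing in for a rack of GGC-style servers at an ISP."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()
        self.upstream_fetches = 0   # requests that had to cross peering/transit

    def fetch_from_origin(self, url):
        self.upstream_fetches += 1
        return f"<content of {url}>"

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)      # served locally, from the ISP edge
            return self.store[url]
        content = self.fetch_from_origin(url)
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used item
        return content


cache = EdgeCache()
for url in ["/video1", "/video2", "/video1", "/video1", "/video2"]:
    cache.get(url)
print("requests: 5, upstream fetches:", cache.upstream_fetches)   # prints 2

Out of five requests, only two ever leave the ISP; everything else is served from the edge, which is the effect described above at a much larger scale.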

Microsoft Also Adopts CDN Architecture

Google isn't the only Internet titan that has restructured its network to adopt CDN practices. In 2008 Microsoft began building its own CDN, known as the Edge Content Network.

Both companies are preparing for the tidal wave of video-driven data described by Brian Lillie of Equinix in his keynote last week at Data Center World. Lillie said Internet traffic growth is being driven by the development of mobile apps for the iPhone, Blackberry, Android phones and other mobile devices. As these new apps bring a universe of everyday tasks into the palms of users' hands, usage is accelerating along with the data traffic streaming across global networks.

How peering is changing the shape of the Internet

http://www.boingboing.net/2010/03/02/how-peering-is-chang.html?utm_source=twitterfeed&utm_medium=twitter

------

email: Bill.St.Arnaud@gmail.com

twitter: BillStArnaud

blog: http://billstarnaud.blogspot.com/

skype: Pocketpro