[The Mantychore project is a very exciting new initiative funded under the European FP7 program. It builds on earlier work in Europe called Manticore and ultimately on pioneering concepts developed at Canada's Communications Research Centre called UCLP, funded originally by CANARIE. Mantychore plans to deploy pre-production facilities to enable virtual networks supporting a number of virtual organizations in Europe, including the Nordic Health Data Network, the British Advanced High Quality Media Services and the Irish Grid effort. An important part of the partnership is collaborating with Canada's GreenStar Network to ensure the facilities are low or zero carbon. This work will also be critical for deploying 5G networks. Ultimately these virtual network services will be available as part of research and education collaborative tool sets such as SURFnet's COIN -- BSA]
http://jira.i2cat.net:8090/display/MANTECH/Home
Current National Research and Education Networks (NRENs) in Europe provide connectivity services to their main customers: the research and education community. Traditionally these services have been delivered on a manual basis, although some efforts towards automating service setup and operation have been initiated. At the same time, more focus is being put on the ability of the community to control some characteristics of these connectivity services, so that users can change some of the service characteristics without having to renegotiate with the service provider.
The Mantychore FP7 [2] project aims to consolidate this trend by allowing NRENs to provide a complete, flexible IP network service through which research communities can create an IP network under their own control, where they can configure:
i) Layer 1, optical links. Users will be able to get permissions over optical devices like optical switches, and configure some important properties of their cards and ports. Mantychore will integrate the Argia framework [3], which provides complete control of optical resources.
ii) Layer 2, Ethernet and MPLS. Users will be able to get permissions over Ethernet and MPLS (Layer 2.5) switches, and configure different services. Here, Mantychore will integrate the ETHER project and its capabilities for the management of Ethernet and MPLS resources.
iii) Layer 3. The Mantychore FP7 suite includes a set of features for (a hypothetical sketch follows this list):
1. Configuration of virtual networks.
2. Configuration of physical interfaces.
3. Support of routing protocols, both internal (RIP, OSPF) and external (BGP).
4. Support of QoS and firewall services.
5. Creation, modification and deletion of virtual resources: logical interfaces, logical routers.
6. Support of IPv6. It allows the configuration of IPv6 on interfaces, routing protocols, networks,
…
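As an illustration of the kind of control this Layer 3 feature set implies, here is a minimal sketch of how a client library might model a logical router with interfaces and routing protocols. Every class and method name below is an assumption for illustration, not the actual Mantychore interface.

```python
# Hypothetical client-side sketch of an IP network service like the one
# described above: create a logical router, attach an IPv6 interface and
# enable OSPF. All names are illustrative, not the Mantychore FP7 API.

class LogicalRouter:
    def __init__(self, name):
        self.name = name
        self.interfaces = []      # (interface name, CIDR address) pairs
        self.protocols = set()    # enabled routing protocols

    def add_interface(self, ifname, address):
        self.interfaces.append((ifname, address))

    def enable(self, protocol):
        # The suite supports internal (RIP, OSPF) and external (BGP) protocols.
        if protocol not in {"RIP", "OSPF", "BGP"}:
            raise ValueError(f"unsupported protocol: {protocol}")
        self.protocols.add(protocol)

rtr = LogicalRouter("community-rtr-1")
rtr.add_interface("ge-0/0/1.10", "2001:db8:10::1/64")  # IPv6 supported
rtr.enable("OSPF")
print(rtr.name, rtr.interfaces, rtr.protocols)
```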
Mantychore FP7 will carry out pre-operational deployments of the IP network service at two NRENs: HEAnet [4] and NORDUnet [5]. Initially three communities of researchers will benefit from this service: the Nordic Health Data Network, the British Advanced High Quality Media Services and the Irish Grid effort [6]. Part of the project effort will be dedicated to consolidating and enhancing the community of providers (NRENs but also commercial) and users of the IP network service. This includes a first phase to gather the requirements of each Mantychore user community, and a second phase to define the necessary use cases.
To improve the IaaS service, some complementary and very interesting topics will also be researched. Framed as Joint Research Activities (JRAs), an infrastructure resource marketplace and the use of renewable energy sources to power e-Infrastructures will be pursued, enriching both the user community and the roadmap of the Mantychore project.
A marketplace provides a single venue that facilitates the sharing of information about resources and services between providers and customers. It provides an interface through which consumers are able to access the discovered resources from their respective providers. The Mantychore FP7 marketplace represents a virtual resource pool that provides a unified platform in which multiple infrastructure providers can advertise their network resources and characteristics to be discovered by potential consumers of the resource. Thus, the marketplace involves three types of entities: (a) the customers that use the resources; these customers may be end users, service providers or other providers who wish to extend their point of presence; (b) the infrastructure providers, which provide information about the state of their underlying infrastructure to satisfy the demands of customers; and (c) the matchmaking entity that is used to look up and locate relevant resources as requested by the customer. The matchmaking entity mediates between the providers and the customer and uses a matching algorithm that parses requests into queries, evaluates the queries over the resources in the marketplace repository and returns the relevant resources. These algorithms are implemented in a generic manner using quality-of-service parameters applicable to Layers 1, 2 and 3.
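To make the matchmaking flow concrete, here is a minimal sketch of a marketplace with advertise and match operations. The resource fields (layer, bandwidth, latency) and the matching rules are assumptions for illustration, not the actual Mantychore FP7 marketplace schema.

```python
# Hypothetical sketch of a marketplace matchmaking entity: providers
# advertise resources with QoS attributes, a customer request is parsed
# into a query, and the matchmaker returns every resource that satisfies
# all constraints. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Resource:
    provider: str
    layer: int             # 1 (optical), 2 (Ethernet/MPLS) or 3 (IP)
    bandwidth_mbps: int
    latency_ms: float

class Marketplace:
    def __init__(self):
        self.repository = []

    def advertise(self, resource):
        """An infrastructure provider publishes a resource."""
        self.repository.append(resource)

    def match(self, layer, min_bandwidth_mbps, max_latency_ms):
        """Evaluate a customer request over the whole repository."""
        return [r for r in self.repository
                if r.layer == layer
                and r.bandwidth_mbps >= min_bandwidth_mbps
                and r.latency_ms <= max_latency_ms]

market = Marketplace()
market.advertise(Resource("NREN-A", 3, 1000, 12.0))
market.advertise(Resource("NREN-B", 3, 100, 4.0))
print(market.match(layer=3, min_bandwidth_mbps=500, max_latency_ms=20))
```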
Also, as part of a JRA, the Mantychore FP7 project will start collaborating with the GreenStar Network project (GSN) [7], a CANARIE [8] funded initiative. The GSN project will develop a practical carbon footprint exchange standard for Information & Communication Technology (ICT) services, will carry out studies on the feasibility of powering e-Infrastructures with intermittent renewable energy sources, such as solar or wind, and will also develop management and technical policies that leverage virtualization to migrate virtual infrastructure resources from one site to another based on power availability, facilitating the use of renewable energy within the GreenStar Network. The principal objective of this collaboration is to provide an IaaS management tool and to integrate the NREN infrastructures with the GSN network, which is formed by a set of green nodes, each powered by a renewable energy source. The benefits of this collaboration are reflected in the emergence of new and unusual use cases in which energy considerations are taken into account: among other research topics, how to move virtual services without suffering connectivity interruptions, and how physical location can influence that relocation.
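As a rough sketch of the power-aware relocation policy described above, the toy scheduler below picks the green node with the most renewable headroom before migrating a virtual resource. The node model and the 20% reserve threshold are illustrative assumptions, not the GSN's actual management policy.

```python
# Toy power-aware relocation policy: pick the green node with the
# largest renewable surplus that can carry the demand plus a safety
# reserve. Node model and threshold are invented for illustration.

def pick_target(nodes, demand_kw, reserve=0.2):
    """Return the best green node for a given power demand, or None."""
    candidates = [n for n in nodes
                  if n["available_kw"] >= demand_kw * (1 + reserve)]
    return max(candidates, key=lambda n: n["available_kw"], default=None)

nodes = [
    {"name": "solar-node-1", "available_kw": 3.0},
    {"name": "wind-node-2",  "available_kw": 7.5},
]
target = pick_target(nodes, demand_kw=4.0)
print(target["name"] if target else "no green node available")
```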
In addition to the two JRAs, NA3 is working towards incorporating new users and communities to enrich the user base. The Mantychore FP7 project is committed to incorporating as many viewpoints and uses as possible in order to build a more complete and valuable pool of software and expertise. In that regard, and taking into account that the project is developed inside a research framework, coordination channels and infrastructure tools have been set up around an open model that not only allows but welcomes expert participation at all levels. For this reason, the project resources (technical discussions, contributions, official documents) are open and available for any interested individual or community to join. In addition, Mantychore FP7 is very much open to feedback and collaboration with other research fields.
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Thursday, November 25, 2010
Wednesday, November 24, 2010
Research Collaborative Tools for integrating commercial clouds and university cyber-infrastructure
[Around the world there are a number of initiatives developing new collaborative tools and generic portal services for various research communities that allow the seamless integration of commercial cloud services and campus HPC facilities. There is no question that some applications require dedicated high-performance HPC facilities on the campus, but there is a wide range of other research and education applications and services using commercial clouds that could make life much easier for both researchers and IT staff. Two great examples of this type of architectural thinking are the new Globus Online, a cloud-based managed file transfer service, and SURFnet's COIN Collaboration Infrastructure project. Other related initiatives include Zero Hub and Internet2's COmanage project.
The big advantage of providing integrated collaborative services with commercial clouds, as Ian Foster eloquently states, is that "The biggest IT challenge facing science today is not volume but complexity…. It is establishing and operating the processes required to collect, manage, analyze, share, archive, etc., that data that is taking all of our time and killing creativity. And that's where outsourcing can be transformative....For that to happen, we need to make it easy for providers to develop "apps" that encapsulate useful capabilities and for researchers to discover, customize, and apply these "apps" in their work. The effect, I will argue, will be a dramatic acceleration of discovery."
And of course the major attraction to me personally is that these are the types of collaborative services that could run on a zero carbon infrastructure such as Greenstar, and then direct the compute or application jobs to the appropriate cloud or HPC with the lowest carbon footprint.
https://projectcoin.surfnet.nl/
The COIN infrastructure will link collaboration services set up by educational institutions, research organisations, commercial parties and SURFnet and enable them to interact, thus making custom, flexible online collaboration possible.
At the moment, users are still obliged to choose one or, at most, a couple of online applications for their groupwork. Sharing information between these systems is almost impossible. The COIN project is designed to change this by ensuring that institutions connected to SURFnet can offer their users a greater variety of collaboration services. The aim is to develop a set of online tools that users can combine into a collaboration environment suitable for them.
COIN is based around OpenSocial, a powerful set of collaboration APIs originally developed by Google and now open to the research and education community. For more details please see http://www.surfnet.nl/en/Thema/coin/Pages/OpenSocialevent.aspx
http://ianfoster.typepad.com/blog/
Globus Online: A cloud-based managed file transfer service
Moving data .. can sound trivial, but in practice is often tedious and difficult. Datasets may have complex nested structures, containing many files of varying sizes. Source and destination may have different authentication requirements and interfaces. End-to-end performance may require careful optimization. Failures must be recovered from. Perhaps only some files differ between source and destination. And so on.
Many tools exist to manage data movement: RFT, FTS, Phedex, rsync, etc. However, all must be installed and run by the user, which can be challenging for all concerned. Globus Online uses software-as-a-service (SaaS) methods to overcome those problems. It's a cloud-hosted, managed service, meaning that you ask Globus Online to move data; Globus Online does its best to make that happen, and tells you if it fails.
The Globus Online service can be accessed via different interfaces, depending on the user and their application:
- A simple Web UI is designed to serve the needs of ad hoc and less technical users.
- A command line interface exposes more advanced capabilities and enables scripting for use in automated workflows.
- A REST interface facilitates integration for system builders who don't want to re-engineer file transfer solutions for their end users.
All three access methods allow a client to do the following (a hypothetical REST sketch follows the list):
- establish and update a user profile, and specify the method(s) you want to use to authenticate to the service;
- authenticate using various common methods, such as Google OpenID or MyProxy providers;
- characterize endpoints to/from which transfers may be performed;
- request transfers;
- monitor the progress of transfers; and
- cancel active transfers.
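To make this concrete, here is the hypothetical REST sketch referred to above. The base URL, paths, JSON fields and token handling are invented for illustration and do not reflect Globus Online's actual API.

```python
# Hypothetical REST workflow for a managed file transfer service.
# The endpoint, paths and JSON fields are invented for illustration;
# they are NOT Globus Online's real API.

import requests

BASE = "https://transfer.example.org/api"    # assumed endpoint
AUTH = {"Authorization": "Bearer <token>"}   # assumed auth scheme

# Request a transfer between two registered endpoints.
task = requests.post(f"{BASE}/transfers", headers=AUTH, json={
    "source": "site-a#cluster",
    "destination": "site-b#archive",
    "paths": ["/data/run42/"],
}).json()

# Monitor progress; the service reports success or failure for us.
status = requests.get(f"{BASE}/transfers/{task['id']}",
                      headers=AUTH).json()["status"]
print(status)  # e.g. "ACTIVE", "SUCCEEDED" or "FAILED"

# Cancel the transfer if it is no longer needed.
requests.delete(f"{BASE}/transfers/{task['id']}", headers=AUTH)
```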
The two keys to successful SaaS are reliability and scalability. The service must behave appropriately as usage grows to 1,000 then 1,000,000 and maybe more users. To this end, we run Globus Online on Amazon Web Services. User and transfer profile information are maintained in a database that is replicated, for reliability, across multiple geographical regions. Transfers are serviced by nodes in Amazon's Elastic Compute Cloud (EC2) which automatically scale as service demands increase.
We will support InCommon credentials and other OpenID providers in addition to Google; support other transfer protocols, including HTTP and SRM; and continue to refine automated transfer optimization, for example by tuning endpoint configurations based on the number and size of files.
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Tuesday, November 23, 2010
More on Apple software SIM and impact of Internet of Things and Smart Grid
[Another excellent article on the critical importance of software SIMs and the future Internet of Things. I am pleased to see the GSMA is looking at software SIMs, but given that it is controlled by the carriers I have little faith that it will produce a software SIM that allows access to simultaneous 5G network services, both licensed and unlicensed. I think there is a critical role here for regulators, especially in the EU, to make sure that future software SIMs meet the needs of consumers and innovative application companies, rather than entrenching once again the “confuseopoly” of the carriers – BSA]
Apple SIM Soap Opera to Play Out on M2M and Smart Grid
http://gigaom.com/2010/11/23/apple-sim-soap-opera-to-play-out-on-m2m-and-smartgrid/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+OmMalik+(GigaOM:+Tech)&utm_content=Google+Feedfetcher
Last month, we broke the news that Apple was working with SIM-card manufacturer Gemalto to create an embedded SIM that could effectively bypass carrier control. Instead of carrier-specific data on such a SIM, for example, an embedded SIM allows for use with various operator networks and would be activated remotely instead of at the point of purchase for a device. In theory, a consumer could purchase an unactivated smartphone with an embedded SIM and later decide which carrier to use it with.
The GSMA, a worldwide consortium of telecommunications companies, lent credence to our reports last week by announcing the formation of a task-force to research the use of programmable SIM cards. The intent of the organization’s research is to set usage standards as early as this January, with the expectation that embedded SIMs will appear in devices starting in 2012. According to The Telegraph, a UK-based publication, carriers aren’t happy with the prospect of losing their direct customer relationships by way of embedded SIMs; some have reportedly threatened to cease phone subsidies to Apple if the handset maker continues its desire for embedded SIM cards.
The battle between Apple and the carriers may be over for now, although I expect this to be unfinished business between the two sides. In the meantime, embedded SIM technology represents huge benefits for the “Internet of Things”: web-connected machines, gadgets and appliances that use the web in a nearly autonomous manner.
Imagine you want a web-connected refrigerator that sends you a reminder text to buy milk when it realizes you’re running low. Would you want to contract with a carrier during the purchase of such a device, or would you rather have options to choose from? An embedded SIM would allow for the latter, and even better, enables easier network provider switching if you can find a better connectivity deal in the future. The same goes for smart electric meters that shoot your consumption data into the cloud, both for your own monitoring as well as your electric company to see. Do you really want to run outside to swap a SIM card if you change Internet service for your meter?
I suspect the carriers will continue to fight Apple in the embedded SIM war, but over the long term, it’s likely to be a losing battle. Other handset makers will see the same opportunity to own customer relationships that Apple does, and are sure to band together. If the largest telecom industry group sees benefit for embedded SIM cards in the growing number of web-connected devices, carriers may want to stop fighting and instead start figuring out new ways to prevent themselves from becoming dumb pipes.
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Sunday, November 21, 2010
Must Read: How to Bypass Carriers Apple-Style
[Another excellent article by Rudolph van der Berg on Apple using software SIMs. The implications for community and R&E networks are significant. For example, if the Internet2/NLR national community network UCAN could obtain its own IMSI, it could offer students and the public an open, not-for-profit national cell phone service, reselling commercial services from a number of providers. Also, with an independent IMSI, students could be provided with a single wireless access service for emergency contact in addition to the service provided by the commercial carrier of their choice. R&E networks would also be able to support the exploding demand for networks of things integrated with clouds for personal medical applications, environmental sensors, etc. Rather than being locked into a commercial service provider, a national public virtual wireless operator with its own IMSI could transform the market -- BSA]
How to Bypass Carriers Apple-Style
http://gigaom.com/2010/11/20/how-to-bypass-carriers-apple-style/
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Monday, November 15, 2010
How will we know when the Internet is dead? - the need for an Open Internet
[I recently signed a statement with a large and diverse group of advocates for the Open Internet, filed with the FCC under their notice of proposed rulemaking entitled Further Inquiry into Two Under-developed Issues in the Open Internet Proceeding. This is an extremely important undertaking to protect the future of the Open Internet. I will not repeat the arguments made in the statement, but I particularly encourage readers to look at David Reed's eloquent posting on this subject (http://www.reed.com/blog-dpr/?p=47) as well as the excellent summary posted on Ars Technica (http://arstechnica.com/tech-policy/news/2010/11/are-you-on-the-internet-or-something-else.ars).
But I would emphasize that there is historical precedent for the FCC (and the Canadian regulator, the CRTC) taking proactive steps to protect an important telecommunications/information service such as the Open Internet from the predatory practices of incumbent operators. Although it has largely been forgotten by most cable company CEOs, the entire existence of cablecos in North America is largely due to regulatory actions by the FCC and CRTC in the 1970s and early 80s to protect them from being taken over by the telephone companies, and to prohibit the telephone companies from offering competing video services. In the US the FCC imposed such restrictions on the telcos in order to prevent market concentration, and in Canada it was done for reasons of cultural protection. Countries that allowed the telcos to compete with cable companies, such as Australia, largely killed off this important industry sector in those early years. But in North America, as a result of these regulatory prohibitions, what was then a relatively small industry was allowed to grow and thrive to the point where it can now hire as many lobbyists as the telcos (a true measure of any mature industry). And like the telcos, they now argue vociferously against government interference in the private-sector market supposedly created single-handedly by themselves.
Given the importance of an Open Internet to our economy and society, I would urge regulators to think seriously about the economic and social consequences if we do not protect this important facility. Special kudos to Seth Johnson for organizing such an incredible group of Internet leaders to sign onto this filing -- BSA]
How will we know when the Internet is dead?
http://arstechnica.com/tech-policy/news/2010/11/are-you-on-the-internet-or-something-else.ars

Slashdot picks up the Grant Gross/IDG story:
http://tech.slashdot.org/story/10/11/08/235243/Net-Pioneers-Say-Open-Internet-Should-Be-Separate

Rob Powell: Definitions, Dialogue, and the FCC
http://www.telecomramblings.com/2010/11/definitions-dialog-and-the-fcc/

Joly Macfie/ISOC-NY: Internet to FCC: don't mess!
http://www.isoc-ny.org/p2/?p=1403

Grant Gross: 'Net pioneers: Open Internet should be separate
http://www.computerworld.com/s/article/9195221/_Net_pioneers_Open_Internet_should_be_separate
http://www.pcworld.com/businesscenter/article/209919/net_pioneers_open_internet_should_be_separate.html
http://www.networkworld.com/news/2010/110510-net-pioneers-open-internet-should.html
http://www.cio.com/article/633616/_Net_Pioneers_Open_Internet_Should_Be_Separate
http://www.itworld.com/government/126709/net-pioneers-open-internet-should-be-separate

On Sat, Nov 6, 2010 at 8:17 PM, Seth Johnson wrote:

Robin Chase: The Internet is not Triple Play
http://networkmusings.blogspot.com/2010/11/internet-is-not-triple-play.html

Jon Lebkowsky: Advocating for the Open Internet
http://weblogsky.com/2010/11/05/advocating-for-the-open-internet/
(Very good incisive summary and selection in this.)

Kenneth Carter: Defining the Open Internet
http://kennethrcarter.com/CoolStuff/2010/11/defining-the-open-internet/

David Isenberg: Towards an Open Internet
http://isen.com/blog/2010/11/towards-an-open-internet/

Paul Jones: Identifying the Internet (for the FCC)
http://ibiblio.org/pjones/blog/identifying-the-internet/

Gene Gaines posted the Press Release here:
http://www.scribd.com/doc/41150786/Notice-Open-Internet-Advocates-Urge-the-FCC-on-Praise-Increased-Clarity-11-05-2010

Brough Turner/Netblazr: Seeking Federal Recognition for the Open Internet
http://netblazr.com/node/451

David Weinberger: Identifying the Internet
http://www.hyperorg.com/blogger/2010/11/05/identifying-the-internet/

On Advancing the Open Internet by Distinguishing it from Specialized Services:
http://www.scribd.com/doc/41002510/On-Advancing-the-Open-Internet-by-Distinguishing-it-from-Specialized-Services

Exclusive: Big Name Industry Pioneers & Experts Push FCC for Open Internet
http://siliconangle.com/blog/2010/11/05/big-name-industry-pioneers-experts-push-fcc-for-open-internet/

David Reed: A Statement from Various Advocates for an Open Internet: Why I Signed On
http://www.reed.com/blog-dpr/?p=47
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Wednesday, November 3, 2010
What the cloud *really* means for science
[Looking forward to this presentation by Ian Foster. I couldn't agree more. There are a number of projects working on developing a common set of collaborative tools for commercial clouds to be used by researchers, such as the OOI at UCSD, COIN at SURFnet, etc. SURFnet has also taken on the responsibility of negotiating with all commercial cloud providers on behalf of the science and education community in the Netherlands to develop common standards on privacy, federated identity, attributes, etc. For more details please see http://www.terena.org/about/ga/ga34/20101021SURFNETgaClouds.pdf -- BSA]
What the cloud *really* means for science
Ian Foster's Blog
http://ianfoster.typepad.com/blog/2010/11/what-the-cloud-really-means-for-science.html
Nah, I'm not going to tell you here ... that is the title of a talk I will give in Indianapolis on December 1st, at the CloudCom conference. But here's the abstract:
We've all heard about how on-demand computing and storage will transform scientific practice. But by focusing on resources alone, we're missing the real benefit of the large-scale outsourcing and consequent economies of scale that cloud is about. The biggest IT challenge facing science today is not volume but complexity. Sure, terabytes demand new storage and computing solutions. But they're cheap. It is establishing and operating the processes required to collect, manage, analyze, share, archive, etc., that data that is taking all of our time and killing creativity. And that's where outsourcing can be transformative. An entrepreneur can run a small business from a coffee shop, outsourcing essentially every business function to a software-as-a-service provider--accounting, payroll, customer relationship management, the works. Why can't a young researcher run a research lab from a coffee shop? For that to happen, we need to make it easy for providers to develop "apps" that encapsulate useful capabilities and for researchers to discover, customize, and apply these "apps" in their work. The effect, I will argue, will be a dramatic acceleration of discovery.
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Reconnecting universities to communities
[Excellent speech by Internet2's new president. Universities, and by extension R&E networks, have a multi-purpose role in society. Not only do they need to support advanced research and cyber-infrastructure, they should also be providing leadership on other major challenges facing society such as broadband deployment, green IT, etc. –BSA]
http://chronicle.com/blogs/wiredcampus/internet2s-new-leader-outlines-vision-for-superfast-education-networks/27996
Internet2's New Leader Outlines Vision for Superfast Education Networks
November 2, 2010, 3:57 pm
By Jeff Young
Universities need superfast computer networks now more than ever—to connect to global satellite campuses, to participate in international research, and to build better ties with communities near their campuses by providing broadband access—but a slew of financial and cultural obstacles stand in the way of their development.
That was the message of H. David Lambert, the new president and chief executive of the Internet2 college networking group, at its member meeting today in Atlanta. Mr. Lambert was appointed to the job at Internet2 in July, and he comes to the organization after serving as Georgetown University’s vice president for information services.
He touted the group’s big new project to bring broadband to communities, which received $62.5-million in federal stimulus money, calling it an important political tool to convince lawmakers that universities play a useful role worthy of support. As he put it, the project will start “the process of reconnecting universities to communities.”
“If we can do that, I guarantee it will make a difference when we go fight public funding battles,” he said. “This may be the best thing that’s happened since the Morrill Land-Grant Act,” which established public universities.
He identified many challenges, however, including a need for better cooperation among various national and regional university networking projects. “We have got to get our ecosystem healed,” he said, though he admitted, “I don’t know what all the answers are.”
Globalization of higher education was a major theme of his remarks as well. “Universities are recognizing that they have to compete globally,” he said in his keynote address, which was streamed online, noting that American colleges and universities now collectively have more than 160 campuses overseas. “To do business at a distance means you become very dependent on technology infrastructure,” he said.
He ended his talk by reminding his colleagues that colleges and universities played a key role in building the Internet, and argued that people in academe should remain leaders. “We have to think about how we get back in that leading edge—how we drive the innovation that affects the Internet moving forward rather than being driven by it.”
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Monday, November 1, 2010
More on Apple software SIM and implications for R&E/community networks
[There has been a lot of traffic in the blogosphere about the recent rumours of Apple producing a software SIM. I have collected together here a number of useful pointers and commentaries on the subject. As I mentioned before, software SIMs pose a significant opportunity for R&E/community networks to extend their geographical reach to students, researchers and (for community networks funded under BTOP, for example) the general public. With a software SIM, R&E networks can add a variety of security features, such as Eduroam/Shibboleth authentication/authorization via the SIM, and build out extended WiFi or WhiteFi networks using their connected institutions as hubs. In fact, they may want to make it a condition of service that any connected institution allow the operation of a mast and antenna to extend the reach of the network. A great example is the WhiteFi experiment going on in East Houston in partnership with Rice University. Many companies are now building low-cost, solar-powered GSM and WiFi network devices for this market. The key issue is making sure that users purchase their smart phone or SIM-enabled PC from the manufacturer rather than the telco, as mentioned in the article below; that way the telco cannot block access to the SIM. After purchase the customer can then contract for 3G/4G access from their favourite telco. Owning and operating a fiber backbone gives the R&E networks the ability to easily backhaul this traffic and provide much higher bandwidth capabilities than with a traditional 3G/4G network. I also agree with Rudolph van der Berg that R&E and community networks should be allowed to get their own IMSI numbers as the network of things (i.e. machine-to-machine communications) becomes a critical component of research cyber-infrastructure. Given the limited number of IMSI allocations it probably makes sense for national R&E networks to be assigned such numbers –BSA]
Berners-Lee Wants Free, Mobile Data
http://gigaom.com/2010/09/15/berners-lee-wants-free-low-bandwidth-mobile-data/
http://venturebeat.com/2010/09/14/demo-range-networks-cheap-cell-phone-service/
Cell phones have reached nearly every corner of the globe. Sixty percent of the world’s population have phones and one in four have Internet access. But Range Networks (http://www.rangenetworks.com/) doesn’t think that’s good enough. The startup, which is launching today at DEMO Fall 2010, believes that everybody on the planet should have access to Web-connected cell phones. And it believes it can enable cell phones that are so cheap they can be operated profitably with $2- to $3-a-month subscriptions.
Range Networks says it can do this not with bargain-basement technology, but by applying sophisticated chips and clever ideas to the problem of providing basic phone service in areas that are normally out of reach. ...
---------------
No broadband price wars? It’s the duopoly, stupid
http://blog.connectedplanetonline.com/unfiltered/2010/09/15/no-broadband-price-wars-its-the-duopoly-stupid/
Researchers at Northwestern University’s Kellogg School of Management did a recent study looking at why the broadband services market hasn’t seen the type of price declines over the years that have affected other technology sectors. Their answer: It’s a supplier’s market that doesn’t labor under the scrutiny of price regulation.
InformationWeek has more:
Professor Shane Greenstein found that a decision in 2003 to leave regulation up to the broadband companies themselves ‘has caused much of the stagnation in broadband service prices,’ according to an article in the September issue of Kellogg Insight, a publication of Northwestern’s Kellogg School of Management.
File this under “things that should be painfully obvious.” Broadband is still subject to the telco-cable duopoly in most markets, and their idea of price competition is to throw out a limited-time promotion or a special price tag on a bundle that gets you to sign up for two other services.
---------------------------
An Apple Integrated SIM: What It Could Mean
http://gigaom.com/2010/10/29/an-apple-integrated-sim-what-it-could-mean/
Earlier this week, I reported on rumors that Apple and Gemalto were developing a SIM that Apple could integrate into its iPhone motherboard. In the emails, comments and phone calls that have poured in since then, I’ve received confirmation of the rumors (although still no word from Apple or Gemalto) and gotten a lot more context about what this move might mean.
While the idea of Apple cutting out mobile operators by selling the device with a SIM already inside — and the ability to choose your carrier via an App Store download — is the most obvious option being discussed, there are plenty of other options that might also be on the table, from a mobile payment scheme to Apple launching its own bid to become a cell-phone company that uses other carrier networks. Let’s break it down.
The Payment Game
The idea here is that Apple would use the integrated SIM not only as the keys to the carrier kingdom, but also as the keys to the banking kingdom. After all, Gemalto has a big business in secure payments, and Apple has already filed some interesting patents when it comes to hardware that could offer payments on a cell phone. The mobile payments market is potentially huge, and Apple has the experience to get it right, and a significant interest in doing so. With iTunes, it already has the credit card information from 160 million consumers, which has enabled a frictionless app-buying experience from its handsets.
Apple clearly has an interest in expanding its payments efforts beyond digital goods and into the real world, where it could not only capture additional revenue from processing fees, but also change the device game by turning the iPhone into a mobile wallet. Integrating such a feature into the handset as opposed to the clunky dongles in use today would appeal to the Apple design aesthetic. I’m pretty sure Steve Jobs doesn’t have a few dongles dangling from his key chain so he can swipe and go at his local gas station.
Apple Becomes a Carrier (sort of)
For those who are focused on the carrier side of the equation, it seems I didn’t go far enough in my initial analysis. Several folks pointed out that the SIM card move could allow Apple to create a network of operators that provide service, and thus turn itself into a mobile virtual network operator or MVNO. MVNOs are popular in other parts of the world, where companies resell access on mobile broadband networks to certain populations. Several companies attempted that in the U.S. around demographics like sports or teens, but generally failed. Prepaid is one area where it has been successful, which could be an interesting option as a way of getting Apple’s iPads onto a network, for example.
There’s Room for Debate
The biggest debate in the comments of the original story centered around whether people would pay full price for a handset, since under such a model, consumers wouldn’t sign a data contract with a carrier. I think some people would, and some wouldn’t, but I do think there are still ways to offer a subsidy, even if Apple could offer folks access to a network directly. Carriers could still offer subsidies if users sign a contract, and even Apple could offer some kind of discount.
I don’t think those are very likely scenarios, but if Apple succeeds in changing the relationship between device sales and the mobile network, rest assured handset vendors and companies like Dell or Samsung that see huge opportunities in the mobile device space would hop on the bandwagon faster than you could swap out a SIM card. Those companies aren’t known for producing high-margin hardware as Apple is, so their devices may be less of a squeeze on consumers’ wallets, and those companies might also work out some kind of subsidy of their own.
[..]
I also heard about some really interesting options for this type of SIM for the machine-to-machine market, and about services that provide virtual SIMs already, such as Truphone and MaxRoam. So keep the ideas and information coming, and let’s hope that Apple can push its vision forward.
-----------------
Melding Wi-Fi with digital TV 'white space'
Rice University researchers have won a $1.8 million federal grant for one of the nation's first, real-world tests of wireless communications technology that uses a broad spectral range -- including dormant broadcast television channels -- to deliver free, high-speed broadband Internet service. The five-year project calls for Rice and Houston nonprofit Technology For All (TFA) to add "white space" technology to the wide spectrum Wi-Fi network they jointly operate in Houston's working-class East End neighborhood.
The TFA Wireless network, launched in 2004 with a grant from the National Science Foundation (NSF), today uses unlicensed frequencies ranging from 900 megahertz (MHz) to 5 gigahertz. The new grant -- also from the NSF -- will allow researchers to take advantage of new federal rules that allow the use of licensed TV spectrum between 500 MHz and 700 MHz. The network will dynamically adapt its frequency usage to meet the coverage, capacity and energy-efficiency demands of both the network and clients.
The new grant will pay for the development and testing of custom-built networking gear as well as smart phones, laptops and other devices that can receive white-space signals and seamlessly switch frequencies -- in much the way that today's smart phones connect to the Internet via either Wi-Fi or a cellular network. The grant will also allow Rice social scientists to conduct extensive studies in the neighborhood to find out how people interact with and use the new technology.
"Ideally, users shouldn't have to be concerned with which part of the spectrum they're using at a given time," said Rice's Edward Knightly, the principal investigator on the project. "However, the use of white space should eliminate many of the problems related to Wi-Fi 'dead zones,' so the overall user experience should improve."
White space is a telecom industry moniker for unused frequencies that are set aside for television broadcasters. Examples include TV channels that are unused in a particular market, as well as the spaces between channels that have traditionally been set aside to avoid interference.
[snip]
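A minimal sketch of the kind of dynamic band selection the article describes: prefer Wi-Fi where the signal is usable and fall back to a free TV white-space channel in dead zones. The scan inputs and the -75 dBm threshold are invented for illustration; real devices would consult a white-space database and far richer radio state.

```python
# Toy dynamic band selection: use Wi-Fi when its signal is usable,
# otherwise fall back to an unoccupied TV white-space channel.
# The threshold and inputs are illustrative assumptions only.

def pick_band(wifi_rssi_dbm, free_whitespace_channels):
    """Return (band, channel) from a Wi-Fi reading and a list of
    free TV channels reported by a white-space database."""
    if wifi_rssi_dbm is not None and wifi_rssi_dbm > -75:
        return ("wifi", None)
    if free_whitespace_channels:
        return ("whitespace", free_whitespace_channels[0])
    return ("none", None)

print(pick_band(-82, [21, 36]))  # weak Wi-Fi -> ('whitespace', 21)
```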
--------------------------
From Rudolph van der Berg’s Blog:
http://internetthought.blogspot.com/2010/10/how-regulators-and-telcos-are-holding.html
How regulators and telcos are holding up the Internet of Things
[..]
Where do we use M2M?
There are many ways of doing machine-to-machine communication. Much of it is already done in SCADA systems and generally uses wired networks. One example is analysis systems for hundreds of thousands of sensors in chemical plants. All of this is wired communication. However, unlike chemical plants, most systems don't sit nicely in one place; they either move or are too distributed.
When going outside of individual sites like chemical plants or fields of wind turbines and into society at large, there are still a gazillion machines that could benefit from a communication module. Such machines include:
• beer kegs in bars, to check on quality and beer levels.
• trains, to check on seat availability, roundness of the wheels, info displays, etc. The average Dutch train now has 4-5 communications devices.
• sewage pumps
• water pressurizers in high rises
• fire extinguishers of various kinds, sprinklers, but also gas (which require specially trained personnel for access)
• street lighting: a colleague is working on LED streetlights that are more energy efficient and change color and intensity based on the situation, i.e. the presence of people, to warn of oncoming ambulances, or to guide people to and from a concert.
• smart meters: two types are available. Those for residential use are mostly in a pilot fase. Those for high use customers send values every few minutes to allow for peak shaving and real time trading.
• consumer electronics, like the 1.4 million devices TomTom now has that are equipped with real time traffic data, or the Amazon Kindle 3G or the Kindle DX, but also other devices like digital photo frames.
• Transport applications: Like eCall, OnStar, monitoring by lease and rental companies etc. Cooperative Vehicle Information Systems (CVIS) etc.
[…]
The biggest business problems have to do with the whole lifecycle of the device. What makes M2M different from consumer communications, is the lack of the consumer. The consumer can be trusted upon to change handsets every couple of years and to do all the practical work, like switching SIM-cards, choosing operators etc. Unfortunately M2M devices have to function for 30 years in the field without tender loving care. Some examples of problems identified:
1. The costs of roaming: One of the big problems, certainly for consumer electronics, but also for other devices is, that you never know where they will be used. An Italian may buy a GPRS equipped TomTom or Kindle in Amsterdam and use it Croatia. The device has to work everywhere and preferably also with the lowest roaming rate available. Working everywhere isn't a big problem with the coverage GSM offers. Getting an efficient roaming rate is however very hard. I've heard it to be compared to Sudoku multivariate analysis. No matter who you choose, if they are the cheapest in Scandinavia, they are the most expensive one in Southern Europe and if they are cheap in Eastern Europe, it's expensive in Western Europe. At the end of the analysis, all networks cost exactly the same per month The reason for this is that no network is truly global and the other networks have no reason to play nice. They just see a device that belongs to a foreign competitor, so there is no reason to drop prices. For all they know and care it's a consumer, who will be fleeced by it's home network for using data roaming abroad. The solution may be to use different devices for different countries, but then the Italian guy can't buy a TomTom in Amsterdam and use it in Croatia. Furthermore retailers don't like devices that are country specific. They want the flexibility to buy one device and distribute according to need across Europe. Producers preferably want one device for the global market. The only market that is a bit exempt of this is North America, only a few networks and continent wide coverage of some sorts.
2. Getting full coverage in a country. Unfortunately most fixed applications and some mobile applications suffer from the fact that perfect wireless coverage is almost impossible. If the telco changes antenna orientation or someone parks a truck or builds a building in the line of sight, signals can get lost. This happened for instance to a municipality who had equipped some traffic lights with GPRS to allow them to coordinate the flow of traffic, then one day the orientation of the antenna changed and service was lost to two traffic lights, gone too was a perfectly managed traffic flow and back were the traffic jams. Really bad is it that in most cases the competing networks still have perfect coverage. So how do you get a device to use the network that is available, regardless of whose it is.
3. Switching mobile operators: There are a myriad of reasons why a large scale end-user may want to switch part or all of the M2M devices from one network to another. Some of them include; switching supplier of network, merger with another company, selling of part of the M2M devices to some other company etc. Just imagine what happens if Sony would sell its eReader business to Amazon. Amazon may not want to stick with Sony's mobile network provider. Another example that got me involved in this discussion. A customer was faced with a European procurement procedure for mobile communications services and wanted to know how it could prevent future SIM-swaps as these were getting costly for their 10k devices (which most likely would grow substantially in the coming years). The costs are in the either logistical chain. First of all getting the right SIM to the right person, managing who uses what and where. Do you switch during regular maintenance or when the SIM-switch is. Regular maintenance can be once every 5 years or never in case of smart meters. All of this is problematic, difficult and often underestimated at first. So it costs serious money to fix.
4. Lack of innovation: It’s quite possible to use SIM-cards to authenticate over other networks than just the GSM network. One could think of automatic authentication on wifi-networks for instance. Unfortunately telco’s are currently blocking much of the needed innovation, because of a fear it would cannibalize their revenue in data sales.
So yes, these are some pretty big issues
Is there really no technical fix for the three issues?
People have suggested I didn't look to closely at the technical solutions, so I'll review those that have been suggested to me. Do understand that on the SIM-card there is a unique IMSI that is tied to an operator and operator specific encryption. The first six digits of an IMSI-number are used to find the network that the device belongs to and authenticate it:
1. Multi-SIM devices. Why not stick a SIM-card of every operator you want to deal with in the device and you're done. This solution has some appeal and may work for fixed locations. Most countries have only 4-5 physical networks. So if you disregard the MVNO's, then putting 4-5 SIM-cards in a device should do the trick. Of course when working on an international or global scale this fails quickly; there just isn't any space in the device for all the SIM-cards. Furthermore even mobile markets change, in NL alone in the last couple of years 2 networks stopped operating, when bought by competitors and likely 2 new one's will start in the coming years, when the spectrum is auctioned. So Multi-SIM is rather static. Furthermore, SIM's often carry a monthly charge regardless of them being used. This is because telco's often pay per 'activated' device to their suppliers, so this solution increases costs.
2. Multi-IMSI devices: Why bother with physical SIM's if you can put multiple IMSI's and associated crypto-keys on to one SIM. This might be a solution, However, telco's hate the security implications of it. There is also a question whose SIM-card it will be if all those IMSI's are present. At the moment the SIM-card is owned by one network. And it's a terrible waste of IMSI's, you need one IMSI per operator that could possibly be used. Assuming global coverage, that's more than 800 not counting MVNO's. Multi-IMSI is used sometimes, but mostly by operators with for instance a European footprint who load their IMSI's unto the SIM-card to allow for local coverage. Vodafone NL does this by loading a German IMSI unto phones of Dutch customers who want to be able to call, should the Vodafone network go down. The phone then switching to the German IMSI, which does allow for roaming anywhere in the Netherlands.
3. Over the Air provisioning: This has been extensively researched by the security working group of the 3GPP. They have some interesting solutions, which are described in my report. However, the mobile telco's hate it. The GSMA who represents them has said twice that it hates any form of over the air updating of SIM-cards. It sees it as an abomination. So unless they change their mind, it's a definite no no for this solution.
4. IP-adresses will fix this: sorry, but unfortunately being tied to a mobile operator happens at a layer below the IP-adress. So it may well be that a company can span it's corporate IP-adresses all the way to M2M devices. They may also be able to use different IP-adresses, but this doesn't fix the problem. Changing mobile operators requires that different IMSI's are used and you can't change IMSI's over IP.
So there you have it... technology doesn't save the day. Not the on the technological side and as we will see, not on the business side either.
Business problems not fixed by technology
Even if we would be able to use a technical fix, unfortunately it won’t fix all business issues. These two below are the most impartant ones.
• The price of roaming is fixed by the telco whose network you aren’t roaming on.:The biggest problem for a large scale M2M user is that he is completely dependent upon his mobile telco. The M2M user can only do what his telco allows him to do. This is true for the choice in technology, but even more so for the choice of roaming partners. The way roaming works is that telco's charge eachother a wholesale price for roaming. This wholesale price X is secret. The retail price that the large scale M2M user pays is X plus something Y. But because X is secret, Y is unkown too. So the customer only knows he's paying X+Y. It is impossible to verify if X or Y went up or both if the rates change. Also for the networks that the customer is roaming on, it's impossible to distinguish the customer based on IMSI-number. How would they know for sure that a specific IMSI belongs to that specific M2M application. All they see is that it belongs to Vodafone UK or T-Mobile NL. It might as well be a consumer. Now you might be able to bypass that with Over The Air updates, but which telco is going to allow his customer to change IMSI’s so that they can quickly hop over to another network.
• The lack of competition: Another problem, closely related, is the lack of competition for an M2M end-users business when roaming. In most countries there are 4-5 mobile operators. All of whom would love the M2M business of 50,000 foreigners roaming in their country with cars, eReaders etc. However generally all of them are contracted by the home network of the M2M user. So there are no competitive prices for the user. What the M2M user would like to do is choose 1 or 2 of those 5 networks to roam on. the cheapest ones preferably.
So why is the regulator holding up the future of the Internet of Things?
Well, as stated in the study, if large scale end-users could use their own IMSI's, then all these problems would be solved. Devices could have national and international roaming There would be competition to offer roaming. One device could be sold globally. All of this controlled by the large scale M2M user.
However regulators have created a world where it isn't easy to get access to IMSI-numbers. Only public networks can get them and public is a vague term. Changing the rules to allow private networks access to these numbers is however scary because of unfounded fears:
1. IMSI number scarcity: The current design of IMSI numbers allows for one million ranges to be issued. Well over half of that range hasn't been allocated to countries yet.
2. 3 digit MNC’s:In Europe all the regulators hand out 5 digits of an IMSI to the mobile operatos as the identification of their mobile network. The standard allows for 6. Some people worry that stuff may break if we move to 6. However some parts of the rest of the world use 3 digits too. Most notably North-America. The technical people tell me it shouldn’t be a problem.
3. Unfair competition: If private networks could connect, they could compete with public networks in an unfair way, because they don’t have to abide by the same rules. This is completely wrong. A private network implies it’s private and therefore not directly competing with a telco in the market. It just means a company decided to take matters into it’s own hand
4. ITU rules or European law isn’t up to it. In my opinion it wouldn’t break European law, just bend it a little, the same with the ITU.
5. Etc.
6. The scariest thing may be that it creates a world where the regulator is less relevant at first sight. It cannot determine anymore the right to participate in the market place up front. It may find out that private networks also will call upon the regulator for its services or to have disputes settled. All of this is scary on an institutional level. Instead of the usual 10-20 people that alway show up at the regulator’s office to represent the telecom industry and 1 or 2 to represent the users, things might change drastically.
7. Lastly it’s scary, because it’s the internet way of doing things. All the internet cares about is whether there is a network that needs interconnection. RIPE, ARIN, LACNIC, AFRINIC and APNIC have proven with AS-numbers and Provider independent AS-numbers, that they can efficiently run a numbering space that allows everyone access and creates a dynamic and highly competitive market for interconnection that hardly needs any regulation. If we use the same rules to give access to E.164 and E.212, the telephony world would be way more competitive then it now is, with less regulator involvement.
So please, if you know a regulator, ask them to consider this. Thousands of companies and consumers will thank you later on.
----------------------------
Hi Bill,
A few comments on this posting. I agree with you on the advantages of using a crypto device that you already carry (your mobile phone). […] For network access EAP-AKA can be used, also to authenticate to eduroam. So while I applaud NRENs leading the way, I don't think it is an either or situation but rather and and. Authentication via a smartcard that you hold is an attractive proposition for increasing security whether this "PKI" is operated by the NREN themselves or not.
Klaas W.
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro
Berners-Lee Wants Free, Mobile Data
http://gigaom.com/2010/09/15/berners-lee-wants-free-low-bandwidth-mobile-data/
http://venturebeat.com/2010/09/14/demo-range-networks-cheap-cell-phone-service/
Cell phones have reached nearly every corner of the globe. Sixty percent of the world’s population have phones and one in four have Internet access. But Range Networks (http://www.rangenetworks.com/) doesn’t think that’s good enough. The startup, which is launching today at DEMO Fall 2010, believes that everybody on the planet should have access to Web-connected cell phones. And it believes it can enable cell phones that are so cheap they can be operated profitably with $2- to $3-a-month subscriptions.
Range Networks says it can do this not with bargain-basement technology, but by applying sophisticated chips and clever ideas to the problem of providing basic phone service in areas that are normally out of reach. ...
---------------
No broadband price wars? It’s the duopoly, stupid
http://blog.connectedplanetonline.com/unfiltered/2010/09/15/no-broadband-price-wars-its-the-duopoly-stupid/
Researchers at Northwestern University’s Kellogg School of Management did a recent study looking at why the broadband services market hasn’t seen the type of price declines over the years that have affected other technology sectors. Their answer: It’s a supplier’s market that doesn’t labor under the scrutiny of price regulation.
InformationWeek has more:
Professor Shane Greenstein found that a decision in 2003 to leave regulation up to the broadband companies themselves ‘has caused much of the stagnation in broadband service prices,’ according to an article in the September issue of Kellogg Insight, a publication of Northwestern’s Kellogg School of Management.
File this under “things that should be painfully obvious.” Broadband is still subject to the telco-cable duopoly in most markets, and their idea of price competition is to throw out a limited-time promotion or a special price tag on a bundle that gets you to sign up for two other services.
---------------------------
An Apple Integrated SIM: What It Could Mean
http://gigaom.com/2010/10/29/an-apple-integrated-sim-what-it-could-mean/
Earlier this week, I reported on rumors that Apple and Gemalto were developing a SIM that Apple could integrate into its iPhone motherboard. In the emails, comments and phone calls that have poured in since then, I’ve received confirmation of the rumors (although still no word from Apple or Gemalto) and gotten a lot more context about what this move might mean.
While the idea of Apple cutting out mobile operators by selling the device with a SIM already inside — and the ability to choose your carrier via an App Store download — is the most obvious option being discussed, there are plenty of other options that might also be on the table, from a mobile payment scheme to Apple launching its own bid to become a cell-phone company that uses other carrier networks. Let’s break it down.
The Payment Game
The idea here is that Apple would use the integrated SIM not only as the keys to the carrier kingdom, but also as the keys to the banking kingdom. After all, Gemalto has a big business in secure payments, and Apple has already filed some interesting patents when it comes to hardware that could offer payments on a cell phone. The mobile payments market is potentially huge, and Apple has the experience to get it right, and a significant interest in doing so. With iTunes, it already has the credit card information from 160 million consumers, which has enabled a frictionless app-buying experience from its handsets.
Apple clearly has an interest in expanding its payments efforts beyond digital goods and into the real world, where it could not only capture additional revenue from processing fees, but also change the device game by turning the iPhone into a mobile wallet. Integrating such a feature into the handset as opposed to the clunky dongles in use today would appeal to the Apple design aesthetic. I’m pretty sure Steve Jobs doesn’t have a few dongles dangling from his key chain so he can swipe and go at his local gas station.
Apple Becomes a Carrier (sort of)
For those who are focused on the carrier side of the equation, it seems I didn’t go far enough in my initial analysis. Several folks pointed out that the SIM card move could allow Apple to create a network of operators that provide service, and thus turn itself into a mobile virtual network operator or MVNO. MVNOs are popular in other parts of the world, where companies resell access on mobile broadband networks to certain populations. Several companies attempted that in the U.S. around demographics like sports or teens, but generally failed. Prepaid is one area where it has been successful, which could be an interesting option as a way of getting Apple’s iPads onto a network, for example.
There’s Room for Debate
The biggest debate in the comments of the original story centered around whether people would pay full price for a handset, since under such a model, consumers wouldn’t sign a data contract with a carrier. I think some people would, and some wouldn’t, but I do think there are still ways to offer a subsidy, even if Apple could offer folks access to a network directly. Carriers could still offer subsidies if users sign a contract, and even Apple could offer some kind of discount.
I don’t think those are very likely scenarios, but if Apple succeeds in changing the relationship between device sales and the mobile network, rest assured handset vendors and companies like Dell or Samsung that see huge opportunities in the mobile device space would hop on the bandwagon faster than you could swap out a SIM card. Those companies aren’t known for producing high-margin hardware as Apple is, so their devices may be less of a squeeze on consumers’ wallets, and those companies might also work out some kind of subsidy of their own.
[..]
I also heard about some really interesting options for this type of SIM for the machine-to-machine market, and about services that already provide virtual SIMs, such as Truphone and MaxRoam. So keep the ideas and information coming, and let’s hope that Apple can push its vision forward.
-----------------
Melding Wi-Fi with digital TV 'white space'
Rice University researchers have won a $1.8 million federal grant for one of the nation's first real-world tests of wireless communications technology that uses a broad spectral range -- including dormant broadcast television channels -- to deliver free, high-speed broadband Internet service. The five-year project calls for Rice and Houston nonprofit Technology For All (TFA) to add "white space" technology to the wide-spectrum Wi-Fi network they jointly operate in Houston's working-class East End neighborhood.
The TFA Wireless network, launched in 2004 with a grant from the National Science Foundation (NSF), today uses unlicensed frequencies ranging from 900 megahertz (MHz) to 5 gigahertz. The new grant -- also from the NSF -- will allow researchers to take advantage of new federal rules that allow the use of licensed TV spectrum between 500 MHz and 700 MHz. The network will dynamically adapt its frequency usage to meet the coverage, capacity and energy-efficiency demands of both the network and clients.
The new grant will pay for the development and testing of custom-built networking gear as well as smart phones, laptops and other devices that can receive white-space signals and seamlessly switch frequencies -- in much the way that today's smart phones connect to the Internet via either Wi-Fi or a cellular network. The grant will also allow Rice social scientists to conduct extensive studies in the neighborhood to find out how people interact with and use the new technology.
"Ideally, users shouldn't have to be concerned with which part of the spectrum they're using at a given time," said Rice's Edward Knightly, the principal investigator on the project. "However, the use of white space should eliminate many of the problems related to Wi-Fi 'dead zones,' so the overall user experience should improve."
White space is a telecom industry moniker for unused frequencies that are set aside for television broadcasters. Examples include TV channels that are unused in a particular market, as well as the spaces between channels that have traditionally been set aside to avoid interference.
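The article doesn't spell out how the network will choose a band at any given moment, but the trade-off it describes is the classic one: low TV-band frequencies propagate further, while higher bands carry more data. The following toy Python selector, with invented range and capacity figures, is only a sketch of that idea, not the Rice/TFA design:

    # Toy band selector; range and capacity figures are invented.
    BANDS = [
        {"name": "500-700 MHz white space", "range_km": 3.0, "capacity_mbps": 20},
        {"name": "900 MHz ISM",             "range_km": 1.5, "capacity_mbps": 40},
        {"name": "5 GHz Wi-Fi",             "range_km": 0.3, "capacity_mbps": 300},
    ]

    def pick_band(client_distance_km, demand_mbps):
        """Prefer the band that reaches the client and best satisfies its demand."""
        reachable = [b for b in BANDS if b["range_km"] >= client_distance_km]
        if not reachable:
            return None
        # Usable rate is capped by both the band's capacity and the client's demand.
        return max(reachable, key=lambda b: min(b["capacity_mbps"], demand_mbps))

    print(pick_band(2.0, 10)["name"])   # distant client -> white space fills the dead zone
    print(pick_band(0.2, 100)["name"])  # nearby client -> 5 GHz Wi-Fi

Under the FCC's white-space rules, a real deployment would also have to consult a database of locally vacant TV channels before transmitting in that band.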
[snip]
Dewayne-Net RSS Feed:
--------------------------
From Rudolph van der Berg’s Blog:
http://internetthought.blogspot.com/2010/10/how-regulators-and-telcos-are-holding.html
How regulators and telcos are holding up the Internet of Things
[..]
Where do we use M2M?
There are many ways of doing machine-to-machine communication. Much of it is already done in SCADA systems, generally over wired networks; one example is the analysis systems for hundreds of thousands of sensors in chemical plants. However, unlike chemical plants, most systems don't sit nicely in one place: they either move or are too distributed.
Going outside individual sites like chemical plants or fields of wind turbines and into society at large, there are still a gazillion machines that could benefit from a communication module. Such machines include:
• beer kegs in bars, to check on quality and beer levels
• trains, to check on seat availability, roundness of the wheels, info displays etc.; the average Dutch train now has 4-5 communications devices
• sewage pumps
• water pressurizers in high-rises
• fire extinguishers of various kinds, sprinklers, but also gas-based systems (which require specially trained personnel for access)
• streetlighting: a colleague is working on LED streetlights that are more energy-efficient and change color and intensity based on the situation, e.g. the presence of people, to warn of oncoming ambulances, or to guide people to and from a concert
• smart meters: two types are available; those for residential use are mostly in a pilot phase, while those for high-use customers send values every few minutes to allow for peak shaving and real-time trading
• consumer electronics, like the 1.4 million TomTom devices now equipped with real-time traffic data, the Amazon Kindle 3G, the Kindle DX, and other devices like digital photo frames
• transport applications, like eCall, OnStar, monitoring by lease and rental companies, Cooperative Vehicle Information Systems (CVIS) etc.
[…]
The biggest business problems have to do with the whole lifecycle of the device. What makes M2M different from consumer communications is the lack of the consumer. The consumer can be relied upon to change handsets every couple of years and to do all the practical work, like switching SIM cards and choosing operators. M2M devices, by contrast, have to function for 30 years in the field without tender loving care. Some examples of problems identified:
1. The costs of roaming: One of the big problems, certainly for consumer electronics but also for other devices, is that you never know where they will be used. An Italian may buy a GPRS-equipped TomTom or Kindle in Amsterdam and use it in Croatia. The device has to work everywhere, preferably at the lowest roaming rate available. Working everywhere isn't a big problem given the coverage GSM offers; getting an efficient roaming rate, however, is very hard. I've heard it compared to a multivariate Sudoku: no matter whom you choose, if they are the cheapest in Scandinavia they are the most expensive in Southern Europe, and if they are cheap in Eastern Europe they are expensive in Western Europe. At the end of the analysis, all networks cost exactly the same per month. The reason is that no network is truly global and the other networks have no reason to play nice: they just see a device that belongs to a foreign competitor, so there is no reason to drop prices. For all they know and care it's a consumer, who will be fleeced by its home network for using data roaming abroad. The solution may be to use different devices for different countries, but then the Italian can't buy a TomTom in Amsterdam and use it in Croatia. Furthermore, retailers don't like country-specific devices: they want the flexibility to buy one device and distribute it according to need across Europe, and producers would prefer a single device for the global market. The only market somewhat exempt from all this is North America, with only a few networks and continent-wide coverage of sorts.
2. Getting full coverage in a country: Unfortunately, most fixed applications and some mobile applications suffer from the fact that perfect wireless coverage is almost impossible. If the telco changes an antenna's orientation, or someone parks a truck or erects a building in the line of sight, signals can get lost. This happened, for instance, to a municipality that had equipped some traffic lights with GPRS so it could coordinate the flow of traffic: one day the orientation of the antenna changed and service to two of the traffic lights was lost; gone too was a perfectly managed traffic flow, and back were the traffic jams. What makes it really bad is that in most such cases the competing networks still have perfect coverage. So how do you get a device to use whatever network is available, regardless of whose it is?
3. Switching mobile operators: There are a myriad of reasons why a large-scale end-user may want to switch part or all of its M2M devices from one network to another, among them changing network supplier, a merger with another company, or selling part of the M2M devices to some other company. Just imagine what happens if Sony were to sell its eReader business to Amazon: Amazon may not want to stick with Sony's mobile network provider. Another example is the one that got me involved in this discussion. A customer facing a European procurement procedure for mobile communications services wanted to know how it could prevent future SIM swaps, as these were getting costly for its 10k devices (a number likely to grow substantially in the coming years). The costs are in the whole logistical chain: getting the right SIM to the right person, and managing who uses what and where. Do you swap during regular maintenance, or make a dedicated visit for the SIM switch? Regular maintenance can be once every 5 years, or never in the case of smart meters. All of this is problematic, difficult and often underestimated at first, so it costs serious money to fix (a back-of-the-envelope sketch of these costs follows this list).
4. Lack of innovation: It’s quite possible to use SIM cards to authenticate to networks other than just the GSM network; think of automatic authentication on Wi-Fi networks, for instance. Unfortunately, telcos are currently blocking much of the needed innovation for fear it would cannibalize their revenue from data sales.
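To put rough numbers on the switching costs in item 3: the expense is dominated by physically reaching each device. A back-of-the-envelope sketch in Python, with entirely invented figures, shows how quickly a fleet-wide SIM swap adds up:

    # Back-of-the-envelope estimate; every figure here is invented for illustration.
    devices = 10_000                 # fleet size, as in the procurement example above
    visit_cost_eur = 75              # assumed cost of one dedicated technician visit
    share_done_in_maintenance = 0.3  # assumed share swappable during scheduled visits

    extra_visits = devices * (1 - share_done_in_maintenance)
    print(f"dedicated visits needed: {extra_visits:,.0f}")                            # 7,000
    print(f"cost of one fleet-wide swap: EUR {extra_visits * visit_cost_eur:,.0f}")   # EUR 525,000

For devices that never see scheduled maintenance, such as smart meters, the share swappable for free drops to zero and the bill grows accordingly.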
So yes, these are some pretty big issues.
Is there really no technical fix for these issues?
People have suggested I didn't look too closely at the technical solutions, so I'll review those that have been suggested to me. Do understand that the SIM card carries a unique IMSI, tied to one operator, together with operator-specific encryption keys; the first five or six digits of the IMSI identify the network the device belongs to and are used to authenticate it:
1. Multi-SIM devices: Why not stick a SIM card from every operator you want to deal with in the device and be done with it? This solution has some appeal and may work for fixed locations. Most countries have only 4-5 physical networks, so if you disregard the MVNOs, putting 4-5 SIM cards in a device should do the trick. Of course, on an international or global scale this fails quickly: there just isn't any space in the device for all the SIM cards. Furthermore, even mobile markets change: in NL alone, in the last couple of years, 2 networks stopped operating when bought by competitors, and likely 2 new ones will start in the coming years when the spectrum is auctioned. So multi-SIM is rather static. Finally, SIMs often carry a monthly charge whether or not they are used, because telcos often pay their suppliers per 'activated' device, so this solution increases costs.
2. Multi-IMSI devices: Why bother with physical SIMs if you can put multiple IMSIs and the associated crypto keys onto one SIM? This might be a solution; however, telcos hate the security implications. There is also the question of whose SIM card it is if all those IMSIs are present; at the moment the SIM card is owned by one network. And it's a terrible waste of IMSIs: you need one IMSI per operator that could possibly be used, and assuming global coverage that's more than 800, not counting MVNOs. Multi-IMSI is used sometimes, but mostly by operators with, say, a European footprint, who load their own IMSIs onto the SIM card to allow for local coverage. Vodafone NL does this by loading a German IMSI onto the phones of Dutch customers who want to be able to call should the Vodafone network go down: the phone then switches to the German IMSI, which is allowed to roam onto any network in the Netherlands (see the sketch after this list).
3. Over-the-air provisioning: This has been extensively researched by the security working group of 3GPP, which has some interesting solutions, described in my report. However, the mobile telcos hate it. The GSMA, which represents them, has said twice that it rejects any form of over-the-air updating of SIM cards; it sees it as an abomination. So unless they change their minds, this solution is a definite no-no.
4. IP addresses will fix this: Sorry, but being tied to a mobile operator happens at a layer below the IP address. It may well be that a company can span its corporate IP addresses all the way to its M2M devices, and it may be able to use different IP addresses, but this doesn't fix the problem: changing mobile operators requires that different IMSIs be used, and you can't change IMSIs over IP.
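As a concrete illustration of the multi-IMSI fallback in item 2, here is a minimal Python sketch. The profiles and attach behaviour are invented, and in a real handset this selection happens inside the SIM applet and baseband rather than in application code:

    # Hypothetical sketch of multi-IMSI fallback; profiles and attach
    # behaviour are invented for illustration.
    PROFILES = [
        {"imsi": "204043300000001", "network": "Vodafone NL"},  # home (Dutch) profile
        {"imsi": "262020000000001", "network": "Vodafone DE"},  # German backup profile
    ]

    def try_attach(profile, home_network_up):
        """Simulated attach: the home IMSI works only while its network is up;
        the foreign IMSI may roam onto any available Dutch network."""
        if profile["network"] == "Vodafone NL":
            return home_network_up
        return True

    def select_profile(home_network_up):
        """Walk the profile list in order; return the first that attaches."""
        for profile in PROFILES:
            if try_attach(profile, home_network_up):
                return profile
        return None

    # Home network down: the device falls back to the German IMSI and roams locally.
    print(select_profile(home_network_up=False)["network"])  # -> Vodafone DE

The point of the sketch is the ordering: the backup IMSI only ever comes into play when the home profile cannot attach, which is exactly the Vodafone NL arrangement described above.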
So there you have it... technology doesn't save the day. Not on the technical side and, as we will see, not on the business side either.
Business problems not fixed by technology
Even if we were able to apply a technical fix, it unfortunately wouldn’t solve all the business issues. The two below are the most important ones.
• The price of roaming is set by the telco whose network you aren’t roaming on: The biggest problem for a large-scale M2M user is that it is completely dependent upon its mobile telco. The M2M user can only do what its telco allows it to do. This is true for the choice of technology, but even more so for the choice of roaming partners. The way roaming works is that telcos charge each other a wholesale price for roaming. This wholesale price X is secret. The retail price that the large-scale M2M user pays is X plus some markup Y, but because X is secret, Y is unknown too. So the customer only knows he is paying X+Y, and if the rates change it is impossible to verify whether X went up, or Y, or both (the toy example after these bullets makes this concrete). Also, the networks the customer is roaming on cannot distinguish the customer by IMSI number: how would they know for sure that a specific IMSI belongs to that specific M2M application? All they see is that it belongs to Vodafone UK or T-Mobile NL; it might as well be a consumer. Now you might be able to bypass that with over-the-air updates, but which telco is going to allow its customers to change IMSIs so that they can quickly hop over to another network?
• The lack of competition: Another, closely related problem is the lack of competition for an M2M end-user’s business when roaming. In most countries there are 4-5 mobile operators, all of whom would love the M2M business of 50,000 foreigners roaming in their country with cars, eReaders etc. Generally, however, all of them are contracted by the home network of the M2M user, so there are no competitive prices for the user. What the M2M user would like to do is choose 1 or 2 of those 5 networks to roam on, preferably the cheapest ones.
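The X+Y opacity in the first bullet is easy to make concrete. In this toy Python example (all numbers invented), the wholesale price falls while the markup rises, and the customer's bill shows no change at all:

    # Toy illustration of the X+Y opacity; the customer sees only the total.
    month1 = {"wholesale_X": 0.40, "markup_Y": 0.20}  # split hidden from the customer
    month2 = {"wholesale_X": 0.30, "markup_Y": 0.30}  # wholesale cut, markup raised

    for month in (month1, month2):
        total = month["wholesale_X"] + month["markup_Y"]
        print(f"customer sees: {total:.2f} per MB")   # 0.60 both months

Since only the sum is observable, the customer has no way to tell whether a rate change, or the absence of one, comes from X or from Y.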
So why is the regulator holding up the future of the Internet of Things?
Well, as stated in the study, if large-scale end-users could use their own IMSIs, then all these problems would be solved. Devices could have national and international roaming. There would be competition to offer roaming. One device could be sold globally. And all of this would be controlled by the large-scale M2M user.
However, regulators have created a world where it isn't easy to get access to IMSI numbers: only public networks can get them, and 'public' is a vague term. Changing the rules to give private networks access to these numbers is seen as scary, owing to a set of unfounded fears:
1. IMSI number scarcity: The current design of IMSI numbers allows for one million ranges to be issued, and well over half of them haven't been allocated to countries yet.
2. 3-digit MNCs: In Europe, regulators hand out 5 digits of the IMSI to mobile operators as the identification of their mobile network (a 3-digit country code plus a 2-digit network code), while the standard allows for 6. Some people worry that things may break if we move to 6-digit prefixes, but parts of the rest of the world, most notably North America, already use 3-digit network codes, and the technical people tell me it shouldn’t be a problem (the toy parser after this list shows the prefix split is mechanical).
3. Unfair competition: If private networks could connect, they could supposedly compete with public networks in an unfair way, because they don’t have to abide by the same rules. This is completely wrong. A private network is, by definition, private and therefore not directly competing with a telco in the market; it just means a company has decided to take matters into its own hands.
4. ITU rules or European law aren’t up to it: In my opinion it wouldn’t break European law, just bend it a little, and the same goes for the ITU rules.
5. Etc.
6. The scariest thing may be that it creates a world where the regulator is, at first sight, less relevant. It could no longer determine up front who has the right to participate in the marketplace, and it may find that private networks also call upon the regulator for its services or to have disputes settled. All of this is scary on an institutional level: instead of the usual 10-20 people who always show up at the regulator’s office to represent the telecom industry, and the 1 or 2 who represent the users, things might change drastically.
7. Lastly, it’s scary because it’s the Internet way of doing things. All the Internet cares about is whether there is a network that needs interconnection. RIPE, ARIN, LACNIC, AFRINIC and APNIC have proven with AS numbers and provider-independent number space that they can efficiently run a numbering space that gives everyone access and creates a dynamic, highly competitive interconnection market that hardly needs any regulation. If we applied the same rules to access to E.164 and E.212 numbers, the telephony world would be far more competitive than it is now, with less regulator involvement.
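To see why the prefix-length worry in item 2 is manageable, consider a toy IMSI parser in Python. Splitting an IMSI into country code (MCC), network code (MNC) and subscriber number is mechanical, and handling both 2- and 3-digit MNCs is a lookup rather than a redesign. The (MCC, MNC) pairs below are real allocations used purely as demo data; the parser itself is only an illustration:

    # Illustrative E.212 parser: MCC (3 digits) + MNC (2 or 3 digits) + MSIN.
    KNOWN_NETWORKS = {
        ("204", "04"): "Vodafone NL",
        ("204", "16"): "T-Mobile NL",
        ("234", "15"): "Vodafone UK",
        ("310", "260"): "T-Mobile US",  # North America uses 3-digit MNCs
    }

    def parse_imsi(imsi):
        """Try the 2-digit MNC split first, then the 3-digit one."""
        mcc = imsi[:3]
        for mnc_len in (2, 3):
            mnc = imsi[3:3 + mnc_len]
            if (mcc, mnc) in KNOWN_NETWORKS:
                return mcc, mnc, imsi[3 + mnc_len:], KNOWN_NETWORKS[(mcc, mnc)]
        raise ValueError("unknown network prefix")

    print(parse_imsi("204041234567890"))  # ('204', '04', '1234567890', 'Vodafone NL')
    print(parse_imsi("310260123456789"))  # ('310', '260', '123456789', 'T-Mobile US')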
So please, if you know a regulator, ask them to consider this. Thousands of companies and consumers will thank you later on.
----------------------------
Hi Bill,
A few comments on this posting. I agree with you on the advantages of using a crypto device that you already carry (your mobile phone). […] For network access, EAP-AKA can be used, also to authenticate to eduroam. So while I applaud NRENs leading the way, I don't think it is an either-or situation but rather both-and. Authentication via a smartcard that you hold is an attractive proposition for increasing security, whether this "PKI" is operated by the NREN themselves or not.
Klaas W.
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro