Monday, December 20, 2010

My top 10 predictions for Internet and R&E networks for 2011

I have decided to join the parade of prognosticators and seers who at this time of year make wild, uneducated predictions for 2011. So, with no particular rhyme or reason, here are my top 10 predictions for 2011:

Thursday, December 16, 2010

The dirty tricks that monopolists play and why Level 3 needs to get into last mile access

[It is interesting to watch the dirty little games that Comcast is playing in the US to protect its video delivery monopoly and shield itself from “Over the Top” (OTT) competitors like Google and Netflix.

Wednesday, December 15, 2010

Smartphones and HPC-Clouds: The Emerging eScience Mobile Trend

[This is another critical reason why R&E networks need to start providing national wireless 5G networks.

Thursday, December 9, 2010

What comes after IPv6 and DNS??

[Around the world alarm bells are going off that we are running out of IPv4 address space.

Saturday, December 4, 2010

US legislation to mandate open WiFi in all government buildings

[This legislation is something that I have been advocating for some time – that all public sector buildings (especially universities) should have open WiFi, preferably powered by renewable energy. As the authors of the bill note, this will save telecom costs for consumers and government. National R&E networks or organizations like UCAN could obtain their own IMSI numbers and, operating as an MVNO in partnership with 3G/4G providers, effectively offer a very low cost national broadband wireless Internet service. Understandably, many institutions are leery of offering an open WiFi service for fear of DMCA takedown orders and other abuse – but a national R&E network or similar entity acting as the MVNO could manage all the hotspots on behalf of the various institutions, much as is done at many airports today. Eduroam or Shibboleth could be used for authentication on the 3G/4G MVNO networks. Combined with the new software-based SIMs for smartphones and tablets, this would be the first step toward a National Public Internet. More details at http://billstarnaud.blogspot.com/ – BSA]



Sens. Snowe and Warner want WiFi in all federal buildings
http://thehill.com/blogs/hillicon-valley/technology/131969-sens-snowe-and-warner-want-wifi-in-all-federal-buildings

Sens. Olympia Snowe (R-Maine) and Mark Warner (D-Va.) introduced legislation on Friday that would require all public federal buildings to install WiFi base stations in order to free up cell phone networks.
The Federal Wi-Net Act would mandate the installation of small WiFi base stations in all publicly accessible federal buildings in order to increase wireless coverage and free up mobile networks. The bill would require all new buildings under construction to comply and all older buildings to be retrofitted by 2014. It also orders that $15 million from the Federal Buildings Fund be allocated to fund the installations.
“I see a great opportunity to leverage federal buildings in order to improve wireless broadband coverage at a very reasonable cost," Warner said. "By starting with the nearly 9,000 federal buildings owned or operated by the General Services Administration, we will be able to provide appreciable improvement in wireless coverage for consumers while also reducing some of the pressure on existing wireless broadband networks."
The bill is aimed at preventing dropped calls that occur indoors and in rural areas due to poor cell phone coverage, while also hopefully boosting wireless network capacity by more effectively deploying broadband wireless networks. The bill is also an acknowledgement of the crucial role that cell phones and smartphones such as BlackBerrys play in the daily routine of federal workers.
“With over 276 million wireless subscribers across our nation and growing demand for wireless broadband, it is imperative that we take steps to improve wireless communication capacity, and this legislation will make measurable progress towards that goal,” said Snowe. “Given that approximately 60 percent of mobile Internet use and 40 percent of cell phone calls are completed indoors, utilizing technologies such as Wi-Fi and femtocells will dramatically improve coverage.”
The Federal Communications Commission’s National Broadband Plan argues most smartphones sold today have Wi-Fi capabilities, so installing mini-base stations and Wi-Fi hotspots in federal buildings would improve indoor cell phone coverage and increase wireless network capacity.

Rudolph van der Berg on how to become an MVNO
http://www.slideshare.net/Raindeer/your-customer-wants-to-be-mvno-v2

------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Thursday, December 2, 2010

The need for a National Public Internet (NPI) - important role for R&E/community networks

[Once again there has been a lot of press discussion about network neutrality, with FCC Chair Genachowski’s announcement of a proposal to his fellow commissioners, “Preserving a Free and Open Internet”, and EC Commissioner Neelie Kroes’s statements in Europe on the same issue. The recent debacle with Comcast charging Level 3 a peering fee to deliver Netflix, and Comcast’s intent to purchase NBC (see below), reinforce the need to protect and enshrine an Open Internet, given the huge concentration of market power in the cable/telco duopoly and the lack of competition in the Internet telecom marketplace. This is true not only for the US, but also for other countries like Canada, where restrictions on foreign ownership have created a similar vortex of market concentration between media and telecom/cable companies.

The highly respected Internet pioneer David Reed, I think, summed up the issue quite well in his recent blog post (http://www.reed.com/blog-dpr/?p=64): the Internet is a “separate” thing. It is often confused with and equated with broadband, but the two are not the same. The Internet is essentially an agreed upon set of protocols for the sharing and transmission of data over virtually any medium, while broadband is one of many possible infrastructures for delivering the Internet to users.

To my mind, treating the Internet as a separate “thing”, especially distinct from broadband, is an important concept, and it is why regulation of an Open Internet needs to be treated differently than broadband. This is not the first time that regulators and policy makers have recognized a new technology as requiring special treatment. As I have blogged before, cable TV in its early days was also given special regulatory treatment in Canada and the US, in recognition that it was a separate thing, not just another telecom service. (http://billstarnaud.blogspot.com/2010/11/how-will-we-know-when-internet-is-dead.html) The outcome of regulating cable TV as a separate thing was the creation of a strong and vibrant cable TV industry in North America. Countries such as Australia that allowed the telcos to compete with cable companies largely killed off this important industry sector in those early years.

I fear the same thing will happen to the Open Internet today as happened in Australia with cable TV in the 1970s. The cablecos and telcos will continue to push for ways to control and modify the Internet, especially in the wireless domain. They will morph it into many different variants of “Internet-like” architectures and “special” services, and essentially kill the Open Internet as we know it today. This is why I also agree with David Reed that making a special exemption in terms of network neutrality for wireless broadband is a bad idea. The case for treating wireless Internet differently is based on the misguided assumption that wireless is a narrow, single-channel, low-bandwidth service. But in reality there is an incredible wave of innovation occurring in the wireless market, with multi-channel cognitive radio integrating WiFi, WhiteFi, mesh radio, etc., that will provide more than sufficient bandwidth to treat the Internet over wireless the same as the Internet over wires.

However, I am not as hopeful as David Reed in terms of regulatory protection for an Open Internet. The incumbents will emasculate any regulation through the courts or by lobbying their political friends. A case in point is Canada, where the regulator has imposed some of the most stringent open access requirements anywhere on both the cable and telephone industries, and yet for the most part these requirements have been thwarted by incumbents gaming the system.

From the lessons we have learned about cable television, I believe if we want a truly Open Internet we need to deploy an infrastructure that is independent of the telcos and cablecos. Fortunately we have most of the important components of such an infrastructure in place thanks to the deployment of R&E networks nationally and regionally. With the added capabilities of the many community networks funded by BTOP, such as UCAN, it is well within the realm of possibility to deploy both a wired and wireless National Public Internet (NPI) that is committed to the principles of an Open Internet. I am not advocating that we replace the telcos and cablecos and their “Internet-like” services. But much as PBS and NPR provide an alternate voice to the mainstream broadcasters, an NPI could ensure that there remains an independent and open Internet, with all the benefits that entails in terms of innovation, freedom of speech and freedom of assembly.

I also believe that deployment of an NPI is critical for the future of R&E networks, as the mandate of many R&E networks is to provide services to researchers and educators that are not available on the commercial Internet. The Internet would never have been created by the telcos/cablecos in the first place, as its basic principles of openness and intelligence at the edge are fundamentally counter to those of large bureaucratic monopolies. New advances in wireless 5G networking, the green Internet, cloud computing and next generation optical technologies are in many ways an even greater threat to the incumbents. The next wave of Internet innovation, especially in the wireless domain, is likely to have even more profound consequences than what we saw over the last two decades.

To my mind an NPI is about more than deploying a network; it should also be about providing services like transit exchanges, peering routes and free Internet at all public institutions. New technologies such as 100G and 1000G wavelengths will ensure that there is plenty of bandwidth on the backbones, and technologies like distributed federated forwarding tables will allow deployment of low cost (and hopefully zero carbon) routing using hundreds, if not thousands, of ordinary PCs, much in the same way Google revolutionized data center computing.
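As a rough illustration of that forwarding-table idea, here is a minimal sketch, entirely my own and with invented names, of how a route table might be partitioned across many commodity PCs with a simple hash, so that each box holds and serves only a slice of the routes:

```python
# Illustrative sketch (not an actual implementation): partition a forwarding
# table across commodity PCs so each box holds only a slice of the routes.
import hashlib

NODES = ["pc%02d" % i for i in range(16)]  # commodity boxes acting as line cards

def owner(prefix):
    """Map a route prefix to the PC responsible for it."""
    h = int(hashlib.sha1(prefix.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

# Each PC keeps only its share of the global table.
tables = {n: {} for n in NODES}

def install_route(prefix, next_hop):
    tables[owner(prefix)][prefix] = next_hop

def lookup(prefix):
    # A front-end forwards the query to whichever PC owns the prefix.
    return tables[owner(prefix)].get(prefix)

install_route("203.0.113.0/24", "peer-at-torix")
print(owner("203.0.113.0/24"), lookup("203.0.113.0/24"))
```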

Initially an NPI may not be accessible to all users because of the current duopoly in broadband access. But the growing number of community fiber networks brings the ability to deliver next generation 5G wireless using hubs at schools and libraries. By obtaining its own IMSI codes, as advocated by Rudolph van der Berg (http://internetthought.blogspot.com/), and using open source GSM base stations, coverage of most citizens and machines (e.g. sensor networks, grids, etc.) should be within the realm of possibility.

An early example of what an NPI may look like is the R&E network in Alberta, Canada – Cybera (www.cybera.ca). Cybera has installed a Transit Exchange, which allows Cybera to aggregate members’ commercial Internet traffic and pass it directly to an Internet Service Provider (ISP) of their choice. This group buying arrangement will secure Cybera members the kind of low-cost Internet rates usually reserved for large corporations. Also, Cybera has set up initial peering connections with the Toronto Internet Exchange (TorIX) and the Seattle Internet Exchange (SIX), where users can take advantage of these direct connections and avoid the inevitable queuing for bandwidth that takes place during peak use periods on the regular commercial Internet. These services are available not only to the academic community but also to small businesses and communities that are connected through the Alberta province-wide broadband network – SuperNet. Cybera is quite clever in that, rather than trying to establish its own peering connections at TorIX and SIX, it is sharing peering routes with other R&E networks. This is something I have been advocating for some time amongst all international R&E and community networks – such an arrangement could reduce Internet costs for users by as much as 90%. The advent of 100G and soon 1000G waves will obviate any concern about bandwidth congestion.
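To see how the arithmetic behind a figure like 90% could work out, here is a back-of-envelope sketch; all prices and traffic shares are invented for illustration and are not Cybera’s actual rates:

```python
# Back-of-envelope sketch of the savings argument; every number here is a
# made-up illustrative assumption, not an actual rate.
retail_transit = 30.0    # $/Mbps/month a small member might pay alone
bulk_transit = 8.0       # $/Mbps/month with aggregated group buying
peering = 0.5            # $/Mbps/month amortized cost of a shared peering port

peered_fraction = 0.6    # share of traffic reachable via TorIX/SIX peers

blended = peered_fraction * peering + (1 - peered_fraction) * bulk_transit
print(f"blended cost: ${blended:.2f}/Mbps/month")
print(f"saving vs retail: {100 * (1 - blended / retail_transit):.0f}%")
# -> roughly an 88% reduction under these made-up assumptions
```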
In conclusion, while we should continue to press on the regulatory front for an Open Internet, if nothing else to prevent egregious harm to the Internet and society by the incumbents, I think ultimately the only way we will protect and ensure an Open Internet is to deploy the technology ourselves. We have the tools. We have the means.


Bill


For more information:

A personal perspective on the evolving Internet and Research and Education Networks
http://docs.google.com/Doc?docid=0ARgRwniJ-qh6ZGdiZ2pyY3RfMjc3NmdmbWd4OWZr&hl=en


http://www.telecomramblings.com/2010/11/traffic-ratio-is-a-code-word-for-over-the-top-video/?utm_source=feedburner&utm_medium=twitter&utm_campaign=Feed:+TelecomRamblings+(Telecom+Ramblings)&utm_content=Twitter

By choosing to make a stand on traffic ratios in peers, Comcast is fighting directly against over-the-top video, pure and simple. This way they can state to regulators that they will not interfere with over the top traffic, as they did recently during the NBC merger oversight, even while trying to create a world where they *always* get paid on both ends for that same traffic. And while they must compete in a duopoly for the consumer end, they can set whatever price they like for transit because there is no way to bypass the consumer connections they have at any one time. Quite elegant actually. Why build a new toll booth when you can just re-purpose the one you have and close down all other gates.
There has long been a dispute about whether traffic ratios are a real and important criterion in peering, or else simply a conveniently labeled bargaining chip in a game of power. But we’re going to see that debate move beyond the traditional crowd of IP nerds now, I think.




Thursday, November 25, 2010

Europe and Canada partner on Next Generation R&E networks and Green Internet

[The Mantychore project is a very exciting new initiative funded under the European FP7 program. It builds on earlier work in Europe called Manticore, and ultimately on pioneering concepts, called UCLP, at Canada’s Communications Research Centre, funded originally by CANARIE. Mantychore plans to deploy pre-production facilities to enable virtual networks supporting a number of virtual organizations in Europe, including the Nordic Health Data Network, the British Advance High Quality Media Services and the Irish Grid effort. An important part of the partnership is collaborating with Canada’s Greenstar to ensure the facilities are low or zero carbon. This work will also be critical for deploying 5G networks. Ultimately these virtual network services will be available as part of research and education collaborative tool sets such as SURFnet’s COIN -- BSA]
http://jira.i2cat.net:8090/display/MANTECH/Home

Current National Research and Education Networks (NRENs) in Europe provide connectivity services to their main customers: the research and education community. Traditionally these services have been delivered on a manual basis, although some efforts towards automating service setup and operation have been initiated. At the same time, more focus is being put on the ability of the community to control some characteristics of these connectivity services, so that users can change some of the service characteristics without having to renegotiate with the service provider.
The Mantychore FP7 [2] project wants to consolidate this trend and allow the NRENs to provide a complete, flexible IP network service that allows research communities to create an IP network under their control, where they can configure:
i) Layer 1, optical links. Users will be able to get permissions over optical devices like optical switches, and configure some important properties of their cards and ports. Mantychore will integrate the Argia framework [3], which provides complete control of optical resources.
ii) Layer 2, Ethernet and MPLS. Users will be able to get permissions over Ethernet and MPLS (Layer 2.5) switches, and configure different services. In this respect, Mantychore will integrate the ETHER project and its capabilities for the management of Ethernet and MPLS resources.
iii) Layer 3. The Mantychore FP7 suite includes a set of features for:
1. Configuration of virtual networks.
2. Configuration of physical interfaces.
3. Support of routing protocols, both internal (RIP, OSPF) and external (BGP).
4. Support of QoS and firewall services.
5. Creation, modification and deletion of virtual resources: logical interfaces, logical routers.
6. Support of IPv6. It allows the configuration of IPv6 on interfaces, routing protocols, networks, etc.

Mantychore FP7 will carry out pre-operational deployments of the IP network service at two NRENs: HEAnet [4] and NORDUnet [5]. Initially three communities of researchers will benefit from this service: the Nordic Health Data Network, the British Advance High Quality Media Services and the Irish Grid effort [6]. Part of the project effort will be dedicated to consolidating and enhancing the community of providers (NRENs but also commercial ones) and users of the IP network service. It includes a first phase to gather requirements from each Mantychore user, and a second phase to define the necessary use cases.
In order to improve the IaaS service, some alternative but very interesting topics will be researched. Framed as Joint Research Activities (JRAs), an infrastructure resource marketplace and the use of renewable energy sources to power e-Infrastructures will be investigated, enriching both the user community and the roadmap of the Mantychore project.
A marketplace provides a single venue that facilitates the sharing of information about resources and services between providers and customers. It provides an interface through which consumers are able to access the discovered resources from their respective providers. The Mantychore FP7 marketplace represents a virtual resource pool that provides a unified platform in which multiple infrastructure providers can advertise their network resources and characteristics to be discovered by potential consumers. The marketplace thus involves three types of entities: (a) the customers that use the resources (these may be end-users, service providers or other providers who wish to extend their point of presence); (b) the infrastructure providers, which provide information about the state of their underlying infrastructure to satisfy the demands of customers; and (c) the matchmaking entity that is used to look up and locate relevant resources as requested by the customer. The matchmaking entity mediates between the providers and the customer and uses a matching algorithm that parses requests into queries, evaluates the queries over the resources in the marketplace repository and returns the relevant resources. These algorithms are implemented in a generic manner using quality of service parameters applicable to Layers 1, 2 and 3.
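As a toy illustration of that matchmaking step (my own sketch with invented attribute names, not the Mantychore implementation), a matchmaker can be as simple as filtering advertised resources against the constraints in a request:

```python
# Toy matchmaking sketch (illustrative only): providers advertise resources
# with QoS attributes; the matchmaker filters them against a customer request.
providers = [
    {"provider": "nren-a", "layer": 1, "bandwidth_gbps": 10, "latency_ms": 12},
    {"provider": "nren-b", "layer": 3, "bandwidth_gbps": 1,  "latency_ms": 40},
    {"provider": "isp-c",  "layer": 3, "bandwidth_gbps": 10, "latency_ms": 25},
]

def match(request):
    """Return advertised resources meeting every constraint in the request."""
    return [r for r in providers
            if r["layer"] == request["layer"]
            and r["bandwidth_gbps"] >= request["min_bandwidth_gbps"]
            and r["latency_ms"] <= request["max_latency_ms"]]

print(match({"layer": 3, "min_bandwidth_gbps": 5, "max_latency_ms": 30}))
# -> only isp-c satisfies all three constraints
```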

Also, as part of a JRA, the Mantychore FP7 project will start collaborating with the GreenStar Network project (GSN) [7], a CANARIE [8] funded initiative. The GSN project will develop a practical carbon footprint exchange standard for Information & Communication Technology (ICT) services, will carry out studies on the feasibility of powering e-Infrastructures with unstable renewable energy sources such as solar or wind, and will also develop management and technical policies that leverage virtualization to migrate virtual infrastructure resources from one site to another based on power availability, facilitating the use of renewable energy within the GreenStar Network. The principal objective of this collaboration is to provide an IaaS management tool and to integrate the NREN infrastructures with the GSN network, which is formed by a set of green nodes, each powered by a renewable energy source. The benefits of this collaboration are reflected in the emergence of new and unusual use cases where energy considerations are taken into account: among other research topics, how to move virtual services without connectivity interruptions, and how physical location influences that relocation.
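For a flavour of the kind of policy involved, here is a hypothetical sketch, with invented site names and renewable-supply figures, of migrating a virtual resource when its current site runs short of renewable power:

```python
# Hypothetical "follow the wind, follow the sun" sketch: relocate a virtual
# resource whenever its host site's renewable supply drops too low.
# Site names and supply fractions are invented for illustration.
sites = {"ottawa": 0.2, "calgary": 0.9, "montreal": 0.7}  # fraction of load coverable by renewables

def best_site():
    return max(sites, key=sites.get)

def maybe_migrate(vm_site, threshold=0.3):
    """Move the VM if its host site's renewable supply falls below threshold."""
    if sites[vm_site] < threshold:
        target = best_site()
        print(f"migrating VM: {vm_site} -> {target} (live migration assumed)")
        return target
    return vm_site

vm_site = "ottawa"
vm_site = maybe_migrate(vm_site)   # -> migrating VM: ottawa -> calgary
```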
In addition to the two JRAs, NA3 is working towards incorporating new users and communities to enrich the user base. The Mantychore FP7 project is committed to incorporating as many viewpoints and uses as possible in order to build a more complete and valuable pool of software and expertise. In that regard, and taking into account that the project is developed inside a research framework, coordination channels and infrastructure tools have been set up around an open model that not only allows but welcomes expert participation at all levels. For this reason, the project resources (technical discussions, contributions, official documents) are open and available for any interested individual or community. In addition, Mantychore FP7 is very much open to feedback and collaboration with other research fields.

Wednesday, November 24, 2010

Research Collaborative Tools for integrating commercial clouds and university cyber-infrastructure

[Around the world there are a number of initiatives developing new collaborative tools and generic portal services for various research communities that allow the seamless integration of commercial cloud services and campus HPC facilities. There is no question that some applications require dedicated high performance HPC facilities on campus, but there is a wide range of other research and education applications and services using commercial clouds that could make life much easier for both researchers and IT staff. Two great examples of this type of architectural thinking are the new Globus Online, a cloud-based managed file transfer service, and SURFnet’s COIN (Collaboration Infrastructure) project. Other related initiatives include Zero Hub and Internet2’s CoManage project.

The big advantage of providing integrated collaborative services with commercial clouds, as Ian Foster eloquently states, is that “The biggest IT challenge facing science today is not volume but complexity…. It is establishing and operating the processes required to collect, manage, analyze, share, archive, etc., that data that is taking all of our time and killing creativity. And that's where outsourcing can be transformative....For that to happen, we need to make it easy for providers to develop "apps" that encapsulate useful capabilities and for researchers to discover, customize, and apply these "apps" in their work. The effect, I will argue, will be a dramatic acceleration of discovery.”

And of course the major attraction to me personally is that these are the types of collaborative services that could run on a zero carbon infrastructure such as Greenstar, and then direct the compute or application jobs to the appropriate cloud or HPC with the lowest carbon footprint.
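As a sketch of what such carbon-aware dispatch might look like (the endpoint names and carbon intensities are invented for illustration, not from any real deployment):

```python
# Illustrative dispatcher in the spirit described above: send each job to the
# cloud or campus HPC endpoint with the lowest carbon intensity that can fit it.
# Endpoint names and gCO2/kWh figures are made up.
endpoints = {
    "campus-hpc":  {"g_co2_per_kwh": 450, "free_cores": 128},
    "cloud-hydro": {"g_co2_per_kwh": 25,  "free_cores": 512},
    "cloud-wind":  {"g_co2_per_kwh": 15,  "free_cores": 64},
}

def dispatch(job_cores):
    """Pick the greenest endpoint with enough free cores for the job."""
    candidates = {n: e for n, e in endpoints.items() if e["free_cores"] >= job_cores}
    if not candidates:
        raise RuntimeError("no endpoint has enough free cores")
    return min(candidates, key=lambda n: candidates[n]["g_co2_per_kwh"])

print(dispatch(100))   # -> cloud-hydro (cloud-wind is greener but too small)
```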


https://projectcoin.surfnet.nl/

The COIN infrastructure will link collaboration services set up by educational institutions, research organisations, commercial parties and SURFnet and enable them to interact, thus making custom, flexible online collaboration possible.
At the moment, users are still obliged to choose one or, at most, a couple of online applications for their groupwork. Sharing information between these systems is almost impossible. The COIN project is designed to change this by ensuring that institutions connected to SURFnet can offer their users a greater variety of collaboration services. The aim is to develop a set of online tools that users can combine into a collaboration environment suitable for them.
COIN is based around OpenSocial, a powerful collaborative tool originally developed by Google and now made open to the research and education community. For more details please see http://www.surfnet.nl/en/Thema/coin/Pages/OpenSocialevent.aspx


http://ianfoster.typepad.com/blog/

Globus Online: A cloud-based managed file transfer service

Moving data ... can sound trivial, but in practice is often tedious and difficult. Datasets may have complex nested structures, containing many files of varying sizes. Source and destination may have different authentication requirements and interfaces. End-to-end performance may require careful optimization. Failures must be recovered from. Perhaps only some files differ between source and destination. And so on.

Many tools exist to manage data movement: RFT, FTS, Phedex, rsync, etc. However, all must be installed and run by the user, which can be challenging for all concerned. Globus Online uses software-as-a-service (SaaS) methods to overcome those problems. It's a cloud-hosted, managed service, meaning that you ask Globus Online to move data; Globus Online does its best to make that happen, and tells you if it fails.

The Globus Online service can be accessed via different interfaces depending on the user and their application:

- A simple Web UI is designed to serve the needs of ad hoc and less technical users
- A command line interface exposes more advanced capabilities and enables scripting for use in automated workflows
- A REST interface facilitates integration for system builders who don't want to re-engineer file transfer solutions for their end users

All three access methods allow a client to:

- establish and update a user profile, and specify the method(s) you want to use to authenticate to the service;
- authenticate using various common methods, such as Google OpenID or MyProxy providers;
- characterize endpoints to/from which transfers may be performed;
- request transfers;
- monitor the progress of transfers; and
- cancel active transfers.


The two keys to successful SaaS are reliability and scalability. The service must behave appropriately as usage grows to 1,000 then 1,000,000 and maybe more users. To this end, we run Globus Online on Amazon Web Services. User and transfer profile information are maintained in a database that is replicated, for reliability, across multiple geographical regions. Transfers are serviced by nodes in Amazon's Elastic Compute Cloud (EC2) which automatically scale as service demands increase.

We will support InCommon credentials and other OpenID providers in addition to Google; support other transfer protocols, including HTTP and SRM; and continue to refine automated transfer optimization by, for example, optimizing endpoint configurations based on the number and size of files.
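For a flavour of what driving such a hosted transfer service over REST could look like, here is a hypothetical sketch; the endpoint URL, payload fields and token are invented for illustration and are not Globus Online’s actual API:

```python
# Hypothetical sketch of submitting a transfer to a hosted service over REST.
# The URL, payload fields and token below are invented; consult the actual
# Globus Online documentation for its real API.
import json
import urllib.request

transfer = {
    "source": "gridftp://serverA.example.edu/data/run42/",
    "destination": "gridftp://serverB.example.edu/archive/run42/",
    "verify_checksum": True,
}
req = urllib.request.Request(
    "https://transfer.example.org/v1/transfers",   # invented endpoint
    data=json.dumps(transfer).encode(),
    headers={"Authorization": "Bearer <token>",
             "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    task = json.load(resp)
print("task id:", task["id"])   # poll this id to monitor or cancel the transfer
```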



Tuesday, November 23, 2010

More on Apple software SIM and impact of Internet of Things and Smart Grid

[Another excellent article on the critical importance of software SIMs and the future Internet of Things. I am pleased to see the GSMA is looking at software SIMs, but given that it is controlled by the carriers I have little faith that it will produce a software SIM that allows simultaneous access to 5G network services, both licensed and unlicensed. There is a critical role here for regulators, especially in the EU, to make sure that future software SIMs meet the needs of consumers and innovative application companies – rather than once again entrenching the “confuseopoly” of the carriers – BSA]


Apple SIM Soap Opera to Play Out on M2M and Smart Grid
http://gigaom.com/2010/11/23/apple-sim-soap-opera-to-play-out-on-m2m-and-smartgrid/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+OmMalik+(GigaOM:+Tech)&utm_content=Google+Feedfetcher


Last month, we broke the news that Apple was working with SIM-card manufacturer Gemalto to create an embedded SIM that could effectively bypass carrier control. Instead of carrier-specific data on such a SIM, for example, an embedded SIM allows for use with various operator networks and would be activated remotely instead of at the point of purchase for a device. In theory, a consumer could purchase an unactivated smartphone with an embedded SIM and later decide which carrier to use it with.
The GSMA, a worldwide consortium of telecommunications companies, lent credence to our reports last week by announcing the formation of a task-force to research the use of programmable SIM cards. The intent of the organization’s research is to set usage standards as early as this January, with the expectation that embedded SIMs will appear in devices starting in 2012. According to The Telegraph, a UK-based publication, carriers aren’t happy with the prospect of losing their direct customer relationships by way of embedded SIMs; some have reportedly threatened to cease phone subsidies to Apple if the handset maker continues its desire for embedded SIM cards.
The battle between Apple and the carriers may be over for now, although I expect this to be unfinished business between the two sides. In the meantime, embedded SIM technology represents huge benefits for the “Internet of Things”: web-connected machines, gadgets and appliances that use the web in a nearly autonomous manner.
Imagine you want a web-connected refrigerator that sends you a reminder text to buy milk when it realizes you’re running low. Would you want to contract with a carrier during the purchase of such a device, or would you rather have options to choose from? An embedded SIM would allow for the latter, and even better, enables easier network provider switching if you can find a better connectivity deal in the future. The same goes for smart electric meters that shoot your consumption data into the cloud, both for your own monitoring as well as your electric company to see. Do you really want to run outside to swap a SIM card if you change Internet service for your meter?
I suspect the carriers will continue to fight Apple in the embedded SIM war, but over the long term, it’s likely to be a losing battle. Other handset makers will see the same opportunity to own customer relationships that Apple does, and are sure to band together. If the largest telecom industry group sees benefit for embedded SIM cards in the growing number of web-connected devices, carriers may want to stop fighting and instead start figuring out new ways to prevent themselves from becoming dumb pipes.
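To make the embedded SIM idea concrete, here is a purely hypothetical toy sketch of device-side operator selection, with all profile data invented; the point is simply that changing carriers becomes a software action rather than a card swap:

```python
# Toy illustration (purely hypothetical) of remote operator selection on an
# embedded SIM: the device holds downloadable operator profiles and activates
# one over the air, with no physical card swap. All values are made up.
profiles = {
    "operator-a": {"imsi": "310150123456789", "monthly_usd": 30},
    "operator-b": {"imsi": "310410987654321", "monthly_usd": 25},
}

def activate(name):
    """Switch the embedded SIM to another operator profile over the air."""
    print(f"provisioned {name}, IMSI {profiles[name]['imsi']}")
    return name

active = activate("operator-a")
active = activate("operator-b")   # switching carriers is now just software
```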


Sunday, November 21, 2010

Must Read: How to Bypass Carriers Apple-Style

[Another excellent article by Rudolph van der Berg on Apple using software SIMs. The implications for community and R&E networks are significant. For example, if the Internet2/NLR national community network UCAN could obtain its own IMSI, then it could offer students and the public an open, not-for-profit national cell phone service through which it could resell commercial services from a number of providers. Also, with an independent IMSI, students could be provided with a single wireless access service for emergency contact in addition to the service provided by the commercial carrier of their choice. R&E networks would also be able to support the exploding demand for the network of things integrated with clouds for personal medical applications, environmental sensors, etc. Rather than being locked into a commercial service provider, a national public virtual wireless operator with its own IMSI could transform the market -- BSA]

How to Bypass Carriers Apple-Style
http://gigaom.com/2010/11/20/how-to-bypass-carriers-apple-style/



Monday, November 15, 2010

How will we know when the Internet is dead? - the need for an Open Internet

[I recently signed a statement with a large and diverse group of advocates for the Open Internet, filed with the FCC under its notice of proposed rulemaking entitled Further Inquiry into Two Under-developed Issues in the Open Internet Proceeding. This is an extremely important undertaking to protect the future of the Open Internet. I will not repeat the arguments made in the statement, but I particularly encourage readers to look at David Reed’s eloquent posting on this subject (http://www.reed.com/blog-dpr/?p=47) as well as the excellent summary posted on Ars Technica
(http://arstechnica.com/tech-policy/news/2010/11/are-you-on-the-internet-or-something-else.ars)

But I would emphasize there is historical precedent for the FCC (and the Canadian regulator, the CRTC) taking proactive steps to protect an important telecommunications/information service such as the Open Internet from the predatory practices of incumbent operators. Although it has largely been forgotten by most cable company CEOs, the entire existence of cablecos in North America is largely due to regulatory actions by the FCC and CRTC in the 1970s and early 80s to protect them from being taken over by the telephone companies, and to prohibit the telephone companies from offering competing video services. In the US the FCC imposed such restrictions on the telcos in order to prevent market concentration, and in Canada it was done for cultural protection reasons. Countries such as Australia that allowed the telcos to compete with cable companies largely killed off this important industry sector in those early years. But in North America, as a result of these regulatory prohibitions, what was then a relatively small industry was allowed to grow and thrive to the point where it can now hire as many lobbyists as the telcos (a true measure of any mature industry). And like the telcos, the cablecos now argue vociferously against government interference in a private sector market supposedly created single-handedly by themselves.

Given the importance of an Open Internet to our economy and society, I would urge regulators to seriously consider the economic and social consequences if we do not protect this important facility. Special kudos to Seth Johnson for organizing such an incredible group of Internet leaders to sign onto this filing – BSA]

How will we know when the Internet is dead?
http://arstechnica.com/tech-policy/news/2010/11/are-you-on-the-internet-or-something-else.ars

Slashdot picks up the Grant Gross/IDG story:
http://tech.slashdot.org/story/10/11/08/235243/Net-Pioneers-Say-Open-Internet-Should-Be-Separate

Rob Powell: Definitions, Dialogue, and the FCC
http://www.telecomramblings.com/2010/11/definitions-dialog-and-the-fcc/

Joly Macfie/ISOC-NY: Internet to FCC dont mess!
http://www.isoc-ny.org/p2/?p=1403

Grant Gross: 'Net pioneers: Open Internet should be separate
http://www.computerworld.com/s/article/9195221/_Net_pioneers_Open_Internet_should_be_separate
http://www.pcworld.com/businesscenter/article/209919/net_pioneers_open_internet_should_be_separate.html
http://www.networkworld.com/news/2010/110510-net-pioneers-open-internet-should.html
http://www.cio.com/article/633616/_Net_Pioneers_Open_Internet_Should_Be_Separate
http://www.itworld.com/government/126709/net-pioneers-open-internet-should-be-separate

Robin Chase: The Internet is not Triple Play
http://networkmusings.blogspot.com/2010/11/internet-is-not-triple-play.html

Jon Lebkowsky: Advocating for the Open Internet (very good incisive summary and selection in this):
http://weblogsky.com/2010/11/05/advocating-for-the-open-internet/

Kenneth Carter: Defining the Open Internet
http://kennethrcarter.com/CoolStuff/2010/11/defining-the-open-internet/

David Isenberg: Towards an Open Internet
http://isen.com/blog/2010/11/towards-an-open-internet/

Paul Jones: Identifying the Internet (for the FCC)
http://ibiblio.org/pjones/blog/identifying-the-internet/

Gene Gaines posted the Press Release here:
http://www.scribd.com/doc/41150786/Notice-Open-Internet-Advocates-Urge-the-FCC-on-Praise-Increased-Clarity-11-05-2010

Brough Turner/Netblazr: Seeking Federal Recognition for the Open Internet
http://netblazr.com/node/451

David Weinberger: Identifying the Internet
http://www.hyperorg.com/blogger/2010/11/05/identifying-the-internet/

On Advancing the Open Internet by Distinguishing it from Specialized Services:
http://www.scribd.com/doc/41002510/On-Advancing-the-Open-Internet-by-Distinguishing-it-from-Specialized-Services

Exclusive: Big Name Industry Pioneers & Experts Push FCC for Open Internet
http://siliconangle.com/blog/2010/11/05/big-name-industry-pioneers-experts-push-fcc-for-open-internet/

David Reed: A Statement from Various Advocates for an Open Internet – Why I Signed On
http://www.reed.com/blog-dpr/?p=47


Wednesday, November 3, 2010

What the cloud *really* means for science

[Looking forward to this presentation by Ian Foster. I couldn’t agree more. There are a number of projects working on developing a common set of collaborative tools for commercial clouds to be used by researchers, such as the OOI at UCSD, COIN at SURFnet, etc. SURFnet has also taken on the responsibility of negotiating with all commercial cloud providers on behalf of the science and education community in the Netherlands to develop common standards on privacy, federated identity, attributes, etc. For more details please see http://www.terena.org/about/ga/ga34/20101021SURFNETgaClouds.pdf -- BSA]

What the cloud *really* means for science
Ian Foster's Blog

http://ianfoster.typepad.com/blog/2010/11/what-the-cloud-really-means-for-science.html
Nah, I'm not going to tell you here ... that is the title of a talk I will give in Indianapolis on December 1st, at the CloudCom conference. But here's the abstract:
We've all heard about how on-demand computing and storage will transform scientific practice. But by focusing on resources alone, we're missing the real benefit of the large-scale outsourcing and consequent economies of scale that cloud is about. The biggest IT challenge facing science today is not volume but complexity. Sure, terabytes demand new storage and computing solutions. But they're cheap. It is establishing and operating the processes required to collect, manage, analyze, share, archive, etc., that data that is taking all of our time and killing creativity. And that's where outsourcing can be transformative. An entrepreneur can run a small business from a coffee shop, outsourcing essentially every business function to a software-as-a-service provider--accounting, payroll, customer relationship management, the works. Why can't a young researcher run a research lab from a coffee shop? For that to happen, we need to make it easy for providers to develop "apps" that encapsulate useful capabilities and for researchers to discover, customize, and apply these "apps" in their work. The effect, I will argue, will be a dramatic acceleration of discovery.



Reconnecting universities to communities

[Excellent speech by Internet2’s new president. Universities, and by extension R&E networks, have a multi-purpose role in society. Not only do they need to support advanced research and cyber-infrastructure, they should also provide leadership on other major challenges facing society, such as broadband deployment, green IT, etc. – BSA]

http://chronicle.com/blogs/wiredcampus/internet2s-new-leader-outlines-vision-for-superfast-education-networks/27996

Internet2's New Leader Outlines Vision for Superfast Education Networks
November 2, 2010, 3:57 pm
By Jeff Young
Universities need superfast computer networks now more than ever—to connect to global satellite campuses, to participate in international research, and to build better ties with communities near their campuses by providing broadband access—but a slew of financial and cultural obstacles stand in the way of their development.
That was the message of H. David Lambert, the new president and chief executive of the Internet2 college networking group, at its member meeting today in Atlanta. Mr. Lambert was appointed to the job at Internet2 in July, and he comes to the organization after serving as Georgetown University’s vice president for information services.
He touted the group’s big new project to bring broadband to communities, which received $62.5-million in federal stimulus money, calling it an important political tool to convince lawmakers that universities play a useful role worthy of support. As he put it, the project will start “the process of reconnecting universities to communities.”
“If we can do that, I guarantee it will make a difference when we go fight public funding battles,” he said. “This may be the best thing that’s happened since the Morrill Land-Grant Act,” which established public universities.
He identified many challenges, however, including a need for better cooperation among various national and regional university networking projects. “We have got to get our ecosystem healed,” he said, though he admitted, “I don’t know what all the answers are.”
Globalization of higher education was a major theme of his remarks as well. “Universities are recognizing that they have to compete globally,” he said in his keynote address, which was streamed online, noting that American colleges and universities now collectively have more than 160 campuses overseas. “To do business at a distance means you become very dependent on technology infrastructure,” he said.
He ended his talk by reminding his colleagues that colleges and universities played a key role in building the Internet, and argued that people in academe should remain leaders. “We have to think about how we get back in that leading edge—how we drive the innovation that affects the Internet moving forward rather than being driven by it.”



Monday, November 1, 2010

More on Apple software SIM and implications for R&E/community networks

[There has been a lot of traffic in the blogosphere about the recent rumours of Apple producing a software SIM. I have collected together here a number of useful pointers and commentaries on the subject. As I mentioned before, software SIMs present a significant opportunity for R&E/community networks to extend their geographical reach to students, researchers and (for community networks funded under BTOP, for example) the general public. With a software SIM, R&E networks can add a variety of security features, such as Eduroam/Shibboleth authentication/authorization via the SIM, and build out WiFi or WhiteFi extended networks using their connected institutions as hubs. In fact, they may want to make it a condition of service that any connected institution allow the operation of a mast and antenna to extend the reach of the network. A great example is the WhiteFi experiment going on in East Houston in partnership with Rice University. Many companies are now building low cost, solar powered GSM/WiFi network devices for this market. The key issue is making sure that users purchase their smartphone or SIM-enabled PC from the manufacturer rather than the telco, as mentioned in the article below – that way the telco cannot block access to the SIM. After purchase the customer can then contract for 3G/4G access from their favourite telco. Owning and operating a fiber backbone gives R&E networks the ability to easily backhaul this traffic and provide much higher bandwidth capabilities than a traditional 3G/4G network. I also agree with Rudolph van der Berg that R&E and community networks should be allowed to get their own IMSI numbers as the network of things (i.e. machine to machine communications) becomes a critical component of research cyber-infrastructure. Given the limited number of IMSI allocations it probably makes sense for national R&E networks to be assigned such numbers – BSA]
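For readers unfamiliar with how Eduroam makes this kind of roaming possible, here is a deliberately simplified sketch (my own illustration, with invented server names): the realm after the '@' in a user's identity determines which home institution's RADIUS server verifies the credentials, so local accounts work at any federated hotspot.

```python
# Simplified, illustrative sketch of eduroam-style realm routing; server names
# are invented, and the real protocol (802.1X/RADIUS federation) is far richer.
HOME_SERVERS = {
    "uottawa.ca": "radius.uottawa.ca",   # hypothetical home-institution proxies
    "rice.edu": "radius.rice.edu",
}

def route_auth(user_id):
    """Pick the home RADIUS server that should verify this user's credentials."""
    realm = user_id.split("@", 1)[1]
    server = HOME_SERVERS.get(realm)
    if server is None:
        # Unknown realms would be escalated to a national/federation-level proxy.
        raise ValueError(f"unknown realm {realm!r}")
    return server

print(route_auth("alice@rice.edu"))   # -> radius.rice.edu
```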

Berners-Lee Wants Free, Mobile Data
http://gigaom.com/2010/09/15/berners-lee-wants-free-low-bandwidth-mobile-data/


http://venturebeat.com/2010/09/14/demo-range-networks-cheap-cell-phone-service/

Cell phones have reached nearly every corner of the globe. Sixty percent of the world’s population have phones and one in four have Internet access. But Range Networks (http://www.rangenetworks.com/) doesn’t think that’s good enough. The startup, which is launching today at DEMO Fall 2010, believes that everybody on the planet should have access to Web-connected cell phones. And it believes it can enable cell phones that are so cheap they can be operated profitably with $2- to $3-a-month subscriptions.

Range Networks says it can do this not with bargain-basement technology, but by applying sophisticated chips and clever ideas to the problem of providing basic phone service in areas that are normally out of reach. ...

---------------
No broadband price wars? It’s the duopoly, stupid
http://blog.connectedplanetonline.com/unfiltered/2010/09/15/no-broadband-price-wars-its-the-duopoly-stupid/



Researchers at Northwestern University’s Kellogg School of Management did a recent study looking at why the broadband services market hasn’t seen the type of price declines over the years that have affected other technology sectors. Their answer: It’s a supplier’s market that doesn’t labor under the scrutiny of price regulation.

InformationWeek has more:
Professor Shane Greenstein found that a decision in 2003 to leave regulation up to the broadband companies themselves ‘has caused much of the stagnation in broadband service prices,’ according to an article in the September issue of Kellogg Insight, a publication of Northwestern’s Kellogg School of Management.
File this under “things that should be painfully obvious.” Broadband is still subject to the telco-cable duopoly in most markets, and their idea of price competition is to throw out a limited-time promotion or a special price tag on a bundle that gets you to sign up for two other services.
---------------------------

An Apple Integrated SIM: What It Could Mean
http://gigaom.com/2010/10/29/an-apple-integrated-sim-what-it-could-mean/
Earlier this week, I reported on rumors that Apple was working with SIM-card manufacturer Gemalto to develop a SIM that Apple could integrate into its iPhone motherboard. In the emails, comments and phone calls that have poured in since then, I’ve received confirmation of the rumors (although still no word from Apple or Gemalto) and gotten a lot more context about what this move might mean.
While the idea of Apple cutting out mobile operators by selling the device with a SIM already inside — and the ability to choose your carrier via an App Store download — is the most obvious option being discussed, there are plenty of other options that might also be on the table, from a mobile payment scheme to Apple launching its own bid to become a cell-phone company that uses other carrier networks. Let’s break it down.
The Payment Game
The idea here is that Apple would use the integrated SIM not only as the keys to the carrier kingdom, but also as the keys to the banking kingdom. After all, Gemalto has a big business in secure payments, and Apple has already filed some interesting patents when it comes to hardware that could offer payments on a cell phone. The mobile payments market is potentially huge, and Apple has the experience to get it right, and a significant interest in doing so. With iTunes, it already has the credit card information from 160 million consumers, which has enabled a frictionless app-buying experience from its handsets.
Apple clearly has an interest in expanding its payments efforts beyond digital goods and into the real world, where it could not only capture additional revenue from processing fees, but also change the device game by turning the iPhone into a mobile wallet. Integrating such a feature into the handset as opposed to the clunky dongles in use today would appeal to the Apple design aesthetic. I’m pretty sure Steve Jobs doesn’t have a few dongles dangling from his key chain so he can swipe and go at his local gas station.
Apple Becomes a Carrier (sort of)
For those who are focused on the carrier side of the equation, it seems I didn’t go far enough in my initial analysis. Several folks pointed out that the SIM card move could allow Apple to create a network of operators that provide service, and thus turn itself into a mobile virtual network operator or MVNO. MVNOs are popular in other parts of the world, where companies resell access on mobile broadband networks to certain populations. Several companies attempted that in the U.S. around demographics like sports or teens, but generally failed. Prepaid is one area where it has been successful, which could be an interesting option as a way of getting Apple’s iPads onto a network, for example.
There’s Room for Debate
The biggest debate in the comments of the original story centered around whether people would pay full price for a handset, since under such a model, consumers wouldn’t sign a data contract with a carrier. I think some people would, and some wouldn’t, but I do think there are still ways to offer a subsidy, even if Apple could offer folks access to a network directly. Carriers could still offer subsidies if users sign a contract, and even Apple could offer some kind of discount.

I don’t think those are very likely scenarios, but if Apple succeeds in changing the relationship between device sales and the mobile network, rest assured handset vendors and companies like Dell or Samsung that see huge opportunities in the mobile device space would hop on the bandwagon faster than you could swap out a SIM card. Those companies aren’t known for producing high-margin hardware as Apple is, so their devices may be less of a squeeze on consumers’ wallets, and those companies might also work out some kind of subsidy of their own.
[..]
I also heard about some really interesting options for this type of SIM for the machine-to-machine market, and about services that already provide virtual SIMs, such as Truphone and MaxRoam. So keep the ideas and information coming, and let’s hope that Apple can push its vision forward.
-----------------

Melding Wi-Fi with digital TV 'white space'


Rice University researchers have won a $1.8 million federal grant for one of the nation's first, real-world tests of wireless communications technology that uses a broad spectral range -- including dormant broadcast television channels -- to deliver free, high-speed broadband Internet service. The five-year project calls for Rice and Houston nonprofit Technology For All (TFA) to add "white space" technology to the wide spectrum Wi-Fi network they jointly operate in Houston's working-class East End neighborhood.
The TFA Wireless network, launched in 2004 with a grant from the National Science Foundation (NSF), today uses unlicensed frequencies ranging from 900 megahertz (MHz) to 5 gigahertz. The new grant -- also from the NSF -- will allow researchers to take advantage of new federal rules that allow the use of licensed TV spectrum between 500 MHz and 700 MHz. The network will dynamically adapt its frequency usage to meet the coverage, capacity and energy-efficiency demands of both the network and clients.
The new grant will pay for the development and testing of custom-built networking gear as well as smart phones, laptops and other devices that can receive white-space signals and seamlessly switch frequencies -- in much the way that today's smart phones connect to the Internet via either Wi-Fi or a cellular network. The grant will also allow Rice social scientists to conduct extensive studies in the neighborhood to find out how people interact with and use the new technology.
"Ideally, users shouldn't have to be concerned with which part of the spectrum they're using at a given time," said Rice's Edward Knightly, the principal investigator on the project. "However, the use of white space should eliminate many of the problems related to Wi-Fi 'dead zones,' so the overall user experience should improve."
White space is a telecom industry moniker for unused frequencies that are set aside for television broadcasters. Examples include TV channels that are unused in a particular market, as well as the spaces between channels that have traditionally been set aside to avoid interference.
[snip]
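As an illustrative sketch of the frequency agility described above (band names and signal figures invented), the client logic amounts to picking the best usable band, treating licensed TV channels as usable only when vacant in that market:

```python
# Illustrative sketch of dynamic band selection for a white-space client.
# Band names and SNR figures are invented; real devices consult an FCC
# white-space database and measure channels continuously.
bands = [
    {"name": "wifi-2.4GHz", "snr_db": 8,  "licensed_tv": False},
    {"name": "wifi-5GHz",   "snr_db": 5,  "licensed_tv": False},
    {"name": "tv-ch-38",    "snr_db": 22, "licensed_tv": True, "vacant": True},
    {"name": "tv-ch-41",    "snr_db": 25, "licensed_tv": True, "vacant": False},
]

def pick_band():
    # White-space rule: a TV channel may be used only if vacant in this market.
    usable = [b for b in bands if not b["licensed_tv"] or b.get("vacant")]
    return max(usable, key=lambda b: b["snr_db"])["name"]

print(pick_band())   # -> tv-ch-38: strongest usable signal, no Wi-Fi dead zone
```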

--------------------------

From Rudolph van der Berg’s Blog:
http://internetthought.blogspot.com/2010/10/how-regulators-and-telcos-are-holding.html
How regulators and telcos are holding up the Internet of Things
[..]
Where do we use M2M?
There are many ways of doing machine to machine communication. Much of it is already done in SCADA systems and generally uses wired networks. One example is analysis systems for hundreds of thousands of sensors in chemical plants. All of this is wired communication. However, unlike chemical plants, most systems don't sit nicely in one place; they either move or are too distributed.
When going outside of individual sites like chemical plants or fields with wind turbines, and into society in general, there are still a gazillion machines that could benefit from a communication module. Such machines include:
• beer kegs in bars, to check on quality and beer levels
• trains, to check on seat availability, roundness of the wheels, info displays, etc. The average Dutch train now has 4-5 communications devices
• sewage pumps
• water pressurizers in high-rises
• fire extinguishers of various kinds, sprinklers, but also gas (which require specially trained personnel for access)
• street lighting: a colleague is working on LED streetlights that are more energy efficient and change color and intensity based on the situation, i.e. the presence of people, to warn of oncoming ambulances, or to guide people to and from a concert
• smart meters: two types are available. Those for residential use are mostly in a pilot phase. Those for high-use customers send values every few minutes to allow for peak shaving and real time trading
• consumer electronics, like the 1.4 million devices TomTom now has that are equipped with real time traffic data, or the Amazon Kindle 3G or the Kindle DX, but also other devices like digital photo frames
• transport applications: like eCall, OnStar, monitoring by lease and rental companies, Cooperative Vehicle Information Systems (CVIS), etc.


[…]


The biggest business problems have to do with the whole lifecycle of the device. What makes M2M different from consumer communications is the lack of the consumer. The consumer can be trusted to change handsets every couple of years and to do all the practical work, like switching SIM cards, choosing operators, etc. Unfortunately, M2M devices have to function for 30 years in the field without tender loving care. Some examples of problems identified:

1. The costs of roaming: One of the big problems, certainly for consumer electronics but also for other devices, is that you never know where they will be used. An Italian may buy a GPRS-equipped TomTom or Kindle in Amsterdam and use it in Croatia. The device has to work everywhere, and preferably with the lowest roaming rate available. Working everywhere isn't a big problem given the coverage GSM offers. Getting an efficient roaming rate is, however, very hard; I've heard it compared to multivariate Sudoku analysis. No matter whom you choose, if they are the cheapest in Scandinavia they are the most expensive in Southern Europe, and if they are cheap in Eastern Europe, they are expensive in Western Europe. At the end of the analysis, all networks cost exactly the same per month. The reason for this is that no network is truly global, and the other networks have no reason to play nice. They just see a device that belongs to a foreign competitor, so there is no reason to drop prices. For all they know and care it's a consumer, who will be fleeced by its home network for using data roaming abroad. The solution may be to use different devices for different countries, but then the Italian guy can't buy a TomTom in Amsterdam and use it in Croatia. Furthermore, retailers don't like devices that are country specific. They want the flexibility to buy one device and distribute according to need across Europe. Producers preferably want one device for the global market. The only market somewhat exempt from this is North America, with only a few networks and continent-wide coverage of sorts.

2. Getting full coverage in a country: Unfortunately most fixed applications, and some mobile ones, suffer from the fact that perfect wireless coverage is almost impossible. If the telco changes antenna orientation, or someone parks a truck or puts up a building in the line of sight, signals can get lost. This happened, for instance, to a municipality that had equipped some traffic lights with GPRS so it could coordinate the flow of traffic: one day the orientation of an antenna changed and service was lost to two of the traffic lights; gone too was a perfectly managed traffic flow, and back were the traffic jams. What makes it really galling is that in most such cases the competing networks still have perfect coverage. So how do you get a device to use whatever network is available, regardless of whose it is?

3. Switching mobile operators: There are myriad reasons why a large-scale end-user may want to switch part or all of its M2M devices from one network to another: changing network supplier, a merger with another company, selling part of the M2M fleet to some other company, etc. Just imagine what happens if Sony were to sell its eReader business to Amazon; Amazon may not want to stick with Sony's mobile network provider. Another example is the one that got me involved in this discussion: a customer facing a European procurement procedure for mobile communications services wanted to know how it could prevent future SIM swaps, as these were getting costly for its 10k devices (a number likely to grow substantially in the coming years). The costs are in the entire logistical chain: getting the right SIM to the right person, and managing who uses what and where. Do you switch during regular maintenance, or make a dedicated visit just for the SIM swap? Regular maintenance can be once every 5 years, or never in the case of smart meters. All of this is problematic, difficult and often underestimated at first, so it costs serious money to fix.

4. Lack of innovation: It's quite possible to use SIM cards to authenticate on networks other than just the GSM network; one could think of automatic authentication on WiFi networks, for instance. Unfortunately telcos are currently blocking much of the needed innovation, for fear it would cannibalize their revenue from data sales.
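The strange outcome described under problem 1, where every candidate operator ends up costing the same per month, is easy to reproduce with a toy calculation. A minimal sketch; the operators, regions and per-MB rates below are entirely made up for illustration:

```python
# Hypothetical per-MB roaming rates (EUR) for three imaginary operators.
# Each is cheapest in one region and dearest in another, as described above.
rates = {
    "OperatorA": {"Scandinavia": 0.10, "Southern EU": 0.50, "Eastern EU": 0.30, "Western EU": 0.30},
    "OperatorB": {"Scandinavia": 0.50, "Southern EU": 0.10, "Eastern EU": 0.30, "Western EU": 0.30},
    "OperatorC": {"Scandinavia": 0.30, "Southern EU": 0.30, "Eastern EU": 0.10, "Western EU": 0.50},
}

# A fleet of devices spread evenly across the four regions, 100 MB each.
usage_mb = {"Scandinavia": 100, "Southern EU": 100, "Eastern EU": 100, "Western EU": 100}

for operator, regional in rates.items():
    total = sum(rate * usage_mb[region] for region, rate in regional.items())
    print(f"{operator}: EUR {total:.2f} per month")
# All three print EUR 120.00: the Sudoku has no winning move.
```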


So yes, these are some pretty big issues.
Is there really no technical fix for these issues?
People have suggested I didn't look too closely at the technical solutions, so I'll review those that have been suggested to me. Bear in mind that each SIM card carries a unique IMSI tied to one operator, plus operator-specific encryption keys. The first five or six digits of an IMSI number are used to find the network that the device belongs to and to authenticate it (see the sketch after this list):
1. Multi-SIM devices: Why not stick a SIM card from every operator you want to deal with in the device and be done with it? This solution has some appeal and may work for fixed locations. Most countries have only 4-5 physical networks, so if you disregard the MVNOs, putting 4-5 SIM cards in a device should do the trick. When working on an international or global scale, though, this fails quickly: there just isn't space in the device for all the SIM cards. Furthermore, even mobile markets change; in the Netherlands alone, two networks stopped operating in the last couple of years when they were bought by competitors, and two new ones will likely start in the coming years when the spectrum is auctioned. So multi-SIM is rather static. Finally, SIMs often carry a monthly charge regardless of whether they are used, because telcos often pay their suppliers per 'activated' device, so this solution increases costs.
2. Multi-IMSI devices: Why bother with physical SIMs if you can put multiple IMSIs and their associated crypto keys on one SIM? This might be a solution, but telcos hate its security implications. There is also the question of whose SIM card it is if all those IMSIs are present; at the moment each SIM card is owned by one network. And it's a terrible waste of IMSIs: you need one IMSI per operator that could possibly be used, which, assuming global coverage, means more than 800 not counting MVNOs. Multi-IMSI is used occasionally, mostly by operators with, say, a European footprint, who load their own IMSIs onto the SIM card to allow for local coverage. Vodafone NL does this by loading a German IMSI onto the phones of Dutch customers who want to be able to call even if the Vodafone network goes down: the phone then switches to the German IMSI, which is allowed to roam anywhere in the Netherlands.
3. Over-the-air provisioning: This has been extensively researched by the security working group of the 3GPP, which has some interesting solutions, described in my report. However, the mobile telcos hate it. The GSMA, which represents them, has said twice that it rejects any form of over-the-air updating of SIM cards; it sees it as an abomination. So unless they change their mind, this solution is a definite no-no.
4. IP addresses will fix this: Sorry, but being tied to a mobile operator unfortunately happens at a layer below the IP address. It may well be that a company can extend its corporate IP addresses all the way to its M2M devices, and it may also be able to use different IP addresses, but this doesn't fix the problem. Changing mobile operators requires that different IMSIs are used, and you can't change IMSIs over IP.
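To make the numbering concrete, here is a minimal sketch of how the leading digits of an IMSI identify the home network, as referenced above. It assumes the standard layout of a 3-digit MCC followed by a 2- or 3-digit MNC; the subscriber digits in the example are invented.

```python
def parse_imsi(imsi: str, mnc_digits: int = 2) -> dict:
    """Split an IMSI into Mobile Country Code, Mobile Network Code and
    subscriber number. European operators typically use 2-digit MNCs;
    North America uses 3-digit ones."""
    assert imsi.isdigit() and len(imsi) <= 15, "an IMSI is at most 15 digits"
    mcc = imsi[:3]                    # country, e.g. 204 = Netherlands
    mnc = imsi[3:3 + mnc_digits]      # network within that country
    msin = imsi[3 + mnc_digits:]      # subscriber within that network
    return {"mcc": mcc, "mnc": mnc, "msin": msin}

# 204 is the Dutch MCC; the MNC and subscriber digits here are made up.
print(parse_imsi("204161234567890"))
# {'mcc': '204', 'mnc': '16', 'msin': '1234567890'}
```

This prefix is all a visited network ever sees, which is why, as noted earlier, it cannot tell an M2M device from an ordinary consumer handset.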
So there you have it... technology doesn't save the day. Not on the technical side and, as we will see, not on the business side either.

Business problems not fixed by technology
Even if we could apply a technical fix, it unfortunately wouldn't solve all the business issues. The two below are the most important ones.
• The price of roaming is set by a telco whose network you aren't roaming on: The biggest problem for a large-scale M2M user is complete dependence on its mobile telco. The M2M user can only do what its telco allows, which is true for the choice of technology, but even more so for the choice of roaming partners. The way roaming works is that telcos charge each other a wholesale price. This wholesale price X is secret. The retail price the large-scale M2M user pays is X plus some margin Y, but because X is secret, Y is unknown too. The customer only knows he is paying X+Y; if the rates change, it is impossible to verify whether X went up, or Y, or both. Nor can the networks the customer roams on distinguish the customer by IMSI number: how would they know for sure that a specific IMSI belongs to that specific M2M application? All they see is that it belongs to Vodafone UK or T-Mobile NL; it might as well be a consumer. You might be able to bypass that with over-the-air updates, but which telco is going to let its customers change IMSIs so they can quickly hop over to another network?
• The lack of competition: Another, closely related, problem is the lack of competition for an M2M end-user's business when roaming. In most countries there are 4-5 mobile operators, all of whom would love the M2M business of 50,000 foreigners roaming in their country with cars, eReaders, etc. Generally, however, all of them are contracted by the home network of the M2M user, so the user sees no competitive prices. What the M2M user would like to do is choose 1 or 2 of those 5 networks to roam on, preferably the cheapest ones.
So why is the regulator holding up the future of the Internet of Things?
Well, as stated in the study, if large-scale end-users could use their own IMSIs, then all these problems would be solved. Devices could roam nationally and internationally. There would be competition to offer roaming. One device could be sold globally. All of this controlled by the large-scale M2M user.


However, regulators have created a world where it isn't easy to get access to IMSI numbers. Only public networks can get them, and "public" is a vague term. Changing the rules to allow private networks access to these numbers nevertheless looks scary, because of unfounded fears:
1. IMSI number scarcity: The current design of IMSI numbers allows for one million ranges to be issued, and well over half of that range hasn't been allocated to countries yet (the arithmetic sketch after this list puts numbers on this).
2. 3-digit MNCs: In Europe all the regulators hand out 5 digits of the IMSI (a 3-digit MCC plus a 2-digit MNC) to mobile operators as the identification of their network, while the standard allows for 6. Some people worry that things may break if we move to 6 digits. However, parts of the rest of the world, most notably North America, already use 3-digit MNCs, and the technical people tell me it shouldn't be a problem.
3. Unfair competition: If private networks could connect, they could compete with public networks in an unfair way, because they don't have to abide by the same rules. This is completely wrong. A private network implies it's private, and therefore not directly competing with a telco in the market; it just means a company has decided to take matters into its own hands.
4. ITU rules or European law aren't up to it: In my opinion it wouldn't break European law, just bend it a little, and the same goes for the ITU rules.
5. Etc.
6. The scariest thing may be that it creates a world where, at first sight, the regulator is less relevant. It can no longer determine up front who has the right to participate in the marketplace. It may find that private networks will also call upon the regulator for its services, or to have disputes settled. All of this is scary on an institutional level: instead of the usual 10-20 people who always show up at the regulator's office to represent the telecom industry, and the 1 or 2 who represent the users, things might change drastically.
7. Lastly, it's scary because it's the Internet way of doing things. All the Internet cares about is whether there is a network that needs interconnection. RIPE, ARIN, LACNIC, AFRINIC and APNIC have proven, with AS numbers and provider-independent number space, that they can efficiently run a numbering space that gives everyone access and creates a dynamic, highly competitive market for interconnection that hardly needs any regulation. If we used the same rules to give access to E.164 and E.212 numbers, the telephony world would be far more competitive than it is now, with less regulator involvement.
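The scarcity fear in point 1 can be checked with back-of-the-envelope arithmetic: an IMSI has at most 15 digits, so after the 3-digit MCC a 2-digit MNC still leaves 10 digits of subscriber space per network, and a 3-digit MNC leaves 9. A quick sketch:

```python
IMSI_DIGITS = 15   # maximum length of an IMSI
MCC_DIGITS = 3     # Mobile Country Code

for mnc_digits in (2, 3):
    networks_per_country = 10 ** mnc_digits
    subscribers_per_network = 10 ** (IMSI_DIGITS - MCC_DIGITS - mnc_digits)
    print(f"{mnc_digits}-digit MNC: {networks_per_country} network codes per country code, "
          f"{subscribers_per_network:,} subscriber numbers each")

# 2-digit MNC: 100 network codes per country code, 10,000,000,000 subscriber numbers each
# 3-digit MNC: 1000 network codes per country code, 1,000,000,000 subscriber numbers each
```

Even with 3-digit MNCs handed out to private M2M users, each code still carries a billion subscriber numbers; scarcity is not the real obstacle.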
So please, if you know a regulator, ask them to consider this. Thousands of companies and consumers will thank you later on.

----------------------------



Hi Bill,
A few comments on this posting. I agree with you on the advantages of using a crypto device that you already carry (your mobile phone). […] For network access EAP-AKA can be used, also to authenticate to eduroam. So while I applaud NRENs leading the way, I don't think it is an either-or situation but rather an and-and one. Authentication via a smartcard that you hold is an attractive proposition for increasing security, whether this "PKI" is operated by the NREN itself or not.
Klaas W.





------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Thursday, October 28, 2010

Apple's new software SIMs may allow for 5G network deployment by R&E and community networks

[Some more exciting developments in the world of 5G networks. The development of soft SIM cards will liberate computer and telephone manufacturers to build next-generation wireless networks in partnership with R&E and community networks. Unfortunately the telcos and regulators are still trapped in the antiquated view of spectrum as a land grant, while in reality new technologies such as cognitive radios, wireless mesh, RF orbital angular momentum, WhiteFi, etc. will allow next-generation radios to communicate through a variety of techniques across large spectral regions. More importantly, this abundance of new radio services can be powered solely by micro renewable energy. As well, most R&E/community networks have the essential open access backhaul infrastructure. Liberating the handset from the monopoly of a sole provider is a crucial first step, and it is enabled by removing the SIM from the clutches of the telco. As noted in the article below, this new world of wireless Internet will most likely happen first in Europe. In Europe, policy makers understand that "competition" is the essence of a free market, while in North America we are wedded to the false belief that "non-interference" by government is the essence of a free market. European regulators such as those in the Netherlands are already signaling that they will be receptive to new wireless Internet strategies enabled by the Apple SIM card and forward-thinking innovative networks like SURFnet. But as Rudolf van der Berg points out, the regulator also has a critical role in ensuring access and roaming. Quoting Rudolf: "The access bit is dependant upon getting the right IMSI's and cryptographic keys of the operator. The roaming bit is either done by subscribing on all networks with OTA updates or by paying the racketeering fee to telco's to allow you to roam". The new software SIM cards will allow a host of new applications for the education/research community in authentication/authorization, sensor networks, etc. Some excerpts from relevant articles – BSA]

More on 5G networks can be found at http://billstarnaud.blogspot.com/

http://gigaom.com/2010/10/27/is-apple-about-to-cut-out-the-carriers/

Is Apple About To Cut Out the Carriers?

Sources inside European carriers have reported that Apple has been working with SIM-card manufacturer Gemalto to create a special SIM card that would allow consumers in Europe to buy a phone via the web or at the Apple Store and get the phones working using Apple’s App Store.
It's rumored that Apple and Gemalto have created a SIM card (typically a chip that carries subscriber identification information for the carriers) that will be integrated into the iPhone itself. Customers would then be able to choose their carrier at time of purchase at the Apple web site or retail store, or buy the phone and get their handset up and running through a download at the App Store, as opposed to visiting a carrier store or calling the carrier. Either way, it reduces the role of the carrier in the iPhone purchase. Gemalto and Apple have not responded to requests for comment. I'm also waiting to hear back from other sources to get more details.
However, if Apple is doing an end run around the carriers by putting its own SIM inside the iPhone, it could do what Google with its Nexus One could not: create an easy way to sell a handset via the web without carrier involvement. Much as it helped cut operators out of the app store game, Apple could be taking them out of the device retail game. Yes, carriers will still have to allow the phone to operate on their networks, which appears to be why executives from various French carriers have been to Cupertino in recent weeks.
The Gemalto SIM, according to my sources, is embedded in a chip that has an upgradeable flash component and a ROM area. The ROM area contains data provided by Gemalto with everything related to IT and network security, except for the carrier-related information.
The model should work well in Europe, where the carriers tend to use the same networking technology and are far more competitive. It also means that customers can roam more easily with the iPhones, swapping out the carriers as needed. The iPhone has lost its exclusivity in much of Europe and other markets of the world, which makes this model a compelling one for consumers, but a nightmare for carriers. Apple could change the mobile game once again.


From Dewayne Hendricks Blog
[Note: This item comes from friend Scott McNeil. DLH]

A Cell-Phone Network without a License
A trial system offers calling, texting, and data by weaving signals around the chatter of baby monitors and cordless phones.



A trial cell-phone network in Fort Lauderdale, Florida, gets by without something every other wireless carrier needs: its own chunk of the airwaves. Instead, xG Technology, which made the network, uses base stations and handsets of its own design that steer signals through the unrestricted 900-megahertz band used by cordless phones and other short-range devices.

It's a technique called "cognitive" radio, and it has the potential to make efficient use of an increasingly limited resource: the wireless spectrum. By demonstrating the first cellular network that uses the technique, xG hopes to show that it could help wireless carriers facing growing demand but a relatively fixed supply of spectrum.

Its cognitive radios are built into both the base stations of the trial network, dubbed xMax, and handsets made for it. Every radio scans for clear spectrum 33 times a second. If another signal is detected, the handset and base station retune to avoid the other signal, keeping the connection alive. Each of the six base stations in xG's network can serve devices in a 2.5-mile radius, comparable to an average cell-phone tower.

"In Fort Lauderdale, our network covers an urban area with around 110,000 people, and so we're seeing wireless security cameras, baby monitors, and cordless phones all using that band," says Rick Rotondo, a vice president with xG, which is headquartered in Sarasota, Florida. "Because our radios are so agile, though, we can deliver the experience of a licensed cellular network in that unlicensed band."

[snip]




------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Thursday, October 14, 2010

GAO says broadband costs, not availability, is hindering adoption

[This is consistent with an article way back in the August 1993 Scientific American showing that once a technology is broadly perceived as useful, it is cost above all that is the major factor in adoption rates. The article compared the adoption rates of various technologies against average annual income over time. No surprise that the telephone, because it was a monopoly, has traditionally been the most expensive technology in terms of average annual income; as a consequence it took over 75 years for the telephone to reach 50% adoption. Competition drives down prices and accelerates adoption. Canada is a textbook case of how NOT to drive broadband adoption: because of restrictions on foreign ownership, Canada has very little competition, and as a result we pay some of the highest prices for broadband (wireless and wired) in the world and have one of the lowest adoption rates. On the other hand, our broadband providers are generally the most profitable in the OECD. Rather than expand overseas or invest in new innovative services, our broadband providers are more focused on tightening their stranglehold on the Canadian economy through concentration of ownership of media and broadcast companies. Thanks to Dewayne Hendricks for this pointer. – BSA]

GAO says broadband costs, not availability, is hindering adoption (The Hill, by Gautham Nagesh)

The main barrier to increasing the adoption of broadband Internet in the U.S. is cost, not the availability of access, according to a new report from the Government Accountability Office.

The GAO noted that broadband has been deployed to 95 percent of households in the U.S., putting it on par with other developed countries despite America's larger population. While the U.S. ranks in the middle of the pack in broadband adoption, it leads Australia and Canada, the only two developed nations with comparable populations.




Wednesday, October 13, 2010

Microsoft demonstrates 5G wireless seamless mobile 3G/Wifi - solar powered WiFi

[Here are a couple of interesting articles on evolving developments in the 5G wireless space. 5G wireless networks use Open SIM for authentication, OpenFlow for VM routing, and WiFi powered by renewable energy as the primary customer interface. It is an ideal technology for R&E networks and small ISPs to integrate with their fiber backbones. More details can be found at http://billstarnaud.blogspot.com/ -- BSA]

Microsoft Wiffler lets smartphones use free WiFi from moving vehicles
http://www.networkworld.com/community/node/67344#

Researchers from Microsoft and University of Massachusetts test a promising new protocol to offload 3G traffic to WiFi even from a moving vehicle.
Microsoft researchers have been working on a technology that would let mobile phones and other 3G devices automatically switch to public WiFi even while the device is traveling in a vehicle. The technology is dubbed Wiffler, and earlier this year the researchers took it for some test drives in Amherst, Mass., Seattle and San Francisco.
Mind you, the team discovered that WiFi was available only about 11 percent of the time for a mobile device in transit, compared to 87 percent of the time for 3G. So it would stand to reason that, at best, the mobile device would only be able to use WiFi a tiny bit of the time. However, the Wiffler protocol allowed the device to offload nearly half of its data from 3G to WiFi.
How so? Wiffler is smart about when to send the packets. It doesn't replace 3G; it augments it and transmits over WiFi simultaneously, allowing users to set WiFi as the delivery method of choice when it is available, and when an application can tolerate it.
"We try to ensure that application performance requirements are met. So, if some data needs to be transferred right away (e.g., VoIP) we do not wait for WiFi connectivity to appear. But if some data can wait for a few seconds, waiting for WiFi instead of transmitting right away on 3G, that can reduce 3G usage," Ratul Mahajan told me in an e-mail interview. Mahajan is a researcher with the Networking Research Group at Microsoft Research Redmond. Mahajan worked on the project with two teammates, Aruna Balasubramanian and Arun Venkataramani, both of whom are researchers at the University of Massachusetts Amherst.
"The second feature is that we may actually use both connections in parallel instead of using only one. So, if we deem that some data cannot be transferred using WiFi alone within its latency requirement, we will use both 3G and WiFi simultaneously. This parallel use is different from a handoff from one technology to the other, and it better balances the sometimes conflicting goals of reducing 3G usage and meeting application constraints," Mahajan explained.
The results of the test were presented in a paper, Augmenting Mobile 3G Using WiFi (PDF), at the eighth annual International Conference on Mobile Systems, Applications and Services in June 2010.
The test consisted of running Wiffler units on 20 buses in Amherst, MA, as well as in one car in Seattle and one in San Francisco at SFO. The Wiffler unit itself was a proxy device that included a small form-factor computer, similar to a car computer (no keyboard), an 802.11b radio, a 3G data modem, and a GPS unit. The 3G modem used HSDPA-based service via AT&T.

"Today, the WiFi/3G combo management is highly suboptimal. Today, smart phones tend to use WiFi connectivity only when they are stationary and not use WiFi connectivity when they are on the move. At the same time, they experience poor application performance when the WiFi connectivity is poor because they happen to be far from the AP (access point) or because the WiFi network is congested. This experience occurs because the devices insist on using WiFi whenever they are connected, largely independent of the performance of WiFi. Our technology provides an automatic combo management that is aware of application performance," Mahajan says.
Next up, the crew plans to test the Wiffler protocol in other settings, including measuring the 3G savings "in a setting when users have Wiffler running all the time rather than just driving. Another is to understand current smartphone traffic workloads to get a sense of how much traffic individual applications generate; this is important because data for some of the applications can be delayed and for some it cannot be delayed," Mahajan explains.


http://www.voltsxamps.com/?p=532

Solar Powered DIY Portable HotSpot
Ever wondered what it would be like to have your own hotspot no matter where you went? Well, now you can, with this portable solar-powered WiFi repeater.
This little mod is simply a WiFi router connected to 5 AA batteries, charged by the built-in solar panel, all mounted in a little cigar box. I keep it in the back window of my car, and no matter where I am, I am able to surf the net and check email within 150 feet of my car.
Here is how it works:
First there is the solar panel. This panel puts out enough voltage and current to run the wireless router without the batteries; the batteries are simply there to act as a flywheel in case of clouds, shade, etc. The panel recharges the 5 AA batteries, which in turn supply the energy needed to run the router.
The router runs a custom firmware called dd-wrt that automatically scans for open hotspots and then connects to the strongest signal it finds. It then repeats the signal locally so you can surf the net with a more reliable connection. No need to search for open hotspots: it finds 'em and connects to the strongest one for you.
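The "scan and connect to the strongest open hotspot" behaviour boils down to a simple selection over scan results. A minimal sketch with mocked scan data (dd-wrt does this inside the router firmware, not in Python):

```python
# Mocked scan results: (SSID, signal strength in dBm, encrypted?)
scan_results = [
    ("CoffeeShop", -48, False),
    ("Library",    -60, False),
    ("HomeWPA",    -40, True),   # strongest signal, but encrypted: not usable
]

open_aps = [ap for ap in scan_results if not ap[2]]   # keep open networks only
best = max(open_aps, key=lambda ap: ap[1])            # closest to 0 dBm wins
print(f"connecting repeater to '{best[0]}' at {best[1]} dBm")
```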
This comes in handy at work, where the car sits in the parking lot all day long; when I come out for lunch I am able to immediately log in from my Asus netbook and check email, etc.
Notes: This router (Linksys WRT54G v8) will run on up to 12 volts at 1 amp, on 12 volts at 500 mA, or on 6 volts at 500 mA. I know because I personally tested it with wall warts of these voltages and amperages.
The solar panel puts out 7.5 volts at 500 mA in direct sunlight, so no charging circuit was needed: the panel is unable to overcharge the batteries because the battery voltage is so close to what the panel puts out. Now you might be asking how this router can take such different voltages and amperages. Well, the router has a built-in voltage regulator that takes care of any voltage from 6 volts DC all the way up to 24 volts DC.
In further testing it did not seem to affect the unit in any way whether it ran on 12 volts at 1,000 mA or 6 volts at 500 mA.
One might prefer to use 12 volts at 500 mA instead, simply because a charge regulator is a lot easier and cheaper to get for 12 volts than for 6 volts. The one below was less than $20 on eBay, so I could use it with a larger 12-volt battery if I wanted to.
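A quick sanity check of the power numbers quoted above. The AA capacity figure is my assumption, not the author's:

```python
panel_w = 7.5 * 0.5      # panel: 7.5 V at 500 mA in direct sun = 3.75 W
router_w = 6 * 0.5       # router runs happily at 6 V / 500 mA = 3 W
print(f"panel supplies {panel_w} W, router draws about {router_w} W")  # surplus charges the cells

# Five AA cells in series give roughly 6 V; assume ~2000 mAh per cell.
battery_wh = 6 * 2.0     # 6 V * 2 Ah = 12 Wh of "flywheel" storage
print(f"about {battery_wh / router_w:.0f} h of cloudy-sky operation from the batteries")
```

So the panel comfortably covers the router in sunlight, and the batteries buy a few hours of shade, which matches the flywheel role described above.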
[…]
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Thursday, September 23, 2010

FCC makes two momentous decisions for 5G Internet

[Today many of you may have heard that the FCC approved opening up unused airwaves between television channels for wireless broadband networks, and allowing schools to purchase dark fiber under the eRate program. These 2 developments will enable an explosion of innovation in the Green Internet and 5G markets, as we are starting to see happen in the Netherlands.

5G networks are characterized by the following features:

1. WiFi, mesh and white space as the primary air interface for devices, while MVNO 3G/4G wholesale networks are used for backfill (think FON for data, on steroids)
2. Open SIM cards for automatic two-factor authentication and authorization for services and applications (think Shibboleth on steroids)
3. National and regional fiber R&E networks provide seamless integration and essential backhaul services
4. A host of new applications and services enabled by integrating smart phones as sensors with clouds for research, education and business
5. All nodes are powered by renewable energy only, with seamless overlapping service from WiFi, white space, 3G/4G, etc. to ensure reliability, coverage and re-direction in the green cloud (think Greenstar on steroids)
--BSA]
------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro

Wednesday, September 22, 2010

New business and research opportunities in building 5G Internet networks - Dutch lead the world

[The Netherlands has one of the highest cable penetrations outside North America, with large dominant telcos and cablecos. But as opposed to North America, enlightened Dutch regulators and politicians have long recognized that this duopoly and its ersatz competitive market are insufficient to ensure true competition and the development of new innovative companies and services. While North America is seeing an incredible and alarming concentration of ownership between the pipe and media companies, the Netherlands and most of Europe are moving in the opposite direction. The active promotion of structurally separated networks, either municipally or privately owned, such as CityNet in Amsterdam or RegenFiber, will ensure an open and competitive playing field. This will enable new innovations and applications in both the wired and wireless worlds. The forward-thinking Dutch R&E network, SURFnet, and its development of next-generation fiber and wireless networks is a quintessential example of the innovation enabled by this enlightened attitude of Dutch regulators and politicians. While North America retreats into a world of largely unregulated and unfettered monopolies, with the inevitable stifling of innovation, many look to Europe and particularly the Netherlands as the future world leader in Internet and telecom innovation.

A textbook example is the development of 5G networks:

There is a general consensus that the future of wireless Internet networks lies in using smart phones as sensors, integrated with clouds, for a variety of exciting new applications in remote medical monitoring, smart metering, environmental sensing, traffic management, etc. One of the big obstacles to realizing this vision is the control of the GSM networks by old-line incumbent operators. The Dutch are once again leading the world in demonstrating that this telco worldview is not necessarily the only future path for wide-scale wireless Internet deployment. History is repeating itself. Many may remember that in the days before the Internet became widespread, the telcos/cablecos were deploying expensive walled gardens as their vision of the Information Highway. But the Internet changed all that and allowed users to deploy their own applications without the prior approval of the telco/cableco. Eventually, technology changes and the liberating influence of the Internet allowed many organizations, like R&E networks and municipalities, to deploy their own fiber infrastructure at a fraction of the telco's cost. We are now about to witness the same transformation in the wireless Internet space.

Today's mobile wireless network market is characterized by hideously complex protocols, usurious access fees and attempts to build micro-managed applications inside walled gardens, e.g. the GSMA. They never learn. This is the same approach the telcos/cablecos tried two decades ago with the first implementations of networked applications, prior to the widespread deployment of the Internet. But the Dutch are now exploring a game changer for the wireless Internet world, very much like the way the Internet transformed the wireline market: how to allow users, and organizations like R&E networks, to deploy their own nationwide or global next-generation wireless networks integrated with their fiber backbones. SURFnet's prescient decision many years ago to deploy its own nationwide fiber network now places it in an enviable position to be a world innovator in the next stage of Internet evolution.

Here are some pointers to some brilliant innovators who are doing some very exciting stuff in the Netherlands:

Rudolf van der Berg has written a great report for the Dutch Ministry of Economic Affairs on how we can make SIM cards open and accessible to everyone, which would enable a whole new generation of wireless networks and applications. Freeing the SIM cards will allow users, R&E networks and community networks to deploy their own wireless Internet infrastructure that integrates WiFi, white space, FTTH, mesh networks and traditional 3G/4G networks. It will also allow them to extend the 5G network with solar- or wind-powered WiFi or GSM nodes in specific areas, especially where there is no business case for the network operator, e.g. environmental networks. This will enable a whole range of exciting applications integrating sensors with the cloud. It will also allow for "automated" two-factor identity and authentication management. Opening up the SIM card is vital to realizing this vision.

Another Dutch leader, Herman Wagter, who built the Amsterdam CityNet network, is also developing technology that will allow home owners connected with fiber to deploy micro or pico solar- or wind-powered WiFi or GSM cells integrated with these user-controlled national wireless Internet networks. Finally, Jaap van Till is working on technology to deliver WiFi/GSM signals over fiber at schools, homes and universities. – BSA]




Summary of Rudolf van der Berg's report for the Dutch Ministry of Economic Affairs
rudolfvanderberg@gmail.com

This last year I've worked on one of the most interesting subjects I've ever come across: switching costs for machine-to-machine (M2M) users (or "embedded wireless", as the GSMA calls it). Though I independently came to a solution, I can't claim I thought of this solution first. That honor goes to at least 2 of my Logica colleagues in the Nordics and the UK, or maybe to someone at Stratix Consulting, but more likely to someone, somewhere, elsewhere. I do think I can now claim to have written the first full research in a public document on the subject. It's called "Onderzoek flexibel gebruik MNC's" ("Study on flexible use of MNCs"). (And yes, please put the links that prove this wrong in the comments; someone must have done this before, but I couldn't find it.) :-)


History
It started with a simple question from a customer, but the answer fundamentally questions the business model and regulation of mobile telecommunications. Fortunately the Dutch Ministry of Economic Affairs commissioned Logica (my colleague Jan Lindoff and me) to research this question and write a report.

The Question:
How do you migrate 10,000 M2M devices from one mobile operator to another?

Sounds simple, doesn't it? Contract a different operator, receive SIM cards, switch SIM cards, and presto. Except that the 10k SIM cards are spread all over the country in hard-to-reach places. The logistical nightmare pushed costs up to high heaven. So the customer wanted to be able to do it (in the future) without changing SIM cards.

For whom?
It's not just one customer, it's everybody. Beer kegs, eReaders, eCall, smart meters, personal navigation devices, cars, photocopiers, containers, trains and fire alarm systems have all been equipped with embedded GSM/GPRS/UMTS. Those not on this list are contemplating whether to embed some wireless; estimates run into the billions of devices over the next 10 years.

The Answer
With some colleagues I went through the possible solutions and found that each had a problem associated with it.
1. The simple answer seems to be to use the same SIM card and just switch operators without changing SIM cards: let the operators fix something in their systems, something like number portability. Unfortunately this doesn't work, because of the way SIM cards work. Embedded in an unchangeable way on the SIM card are an IMSI number (E.212) and several operator-specific cryptographic parameters. The first 5 or 6 digits of the IMSI number are operator-specific: they uniquely identify the operator, and through it the associated pieces of kit, like the Home Location Register (HLR) that authenticates the SIM card as belonging to a contract-holding, subscription-paying customer. So changing operators would require all mobile operators in the world to recognize that the specific IMSI numbers had changed mobile operator, and to route these specific numbers to a different destination. That is already impossible on a global level, but the worst bit is the cryptography: it would require operators to share their cryptographic keys and parameters with competitors. That's a no-no, and it kills the simple answer.
2. The technical answer is slightly more feasible. What if you could remotely change the details on the SIM card? Over the air an update is sent to the SIM, and automagically the IMSI is changed, the security parameters are overwritten with new ones, and the device moves to a different network. This is possible; several companies have proprietary solutions. However, there is no standardized solution and there isn't any mobile operator that supports one. That's problematic, as it's the mobile network that determines which SIMs can be used, not the M2M user who wants to switch operator. The 3GPP acknowledged the problem and has worked on functional descriptions of solutions, but until now none of these have found the support of mobile operators (appendix A). Note also that this solution would have to work globally, as customers may want to switch all (or some) devices from a European carrier to an Asian one or vice versa.
3. The regulatory answer: What if a large-scale M2M deployment could use its own SIM cards, SIMs that carried its own IMSI and cryptographic parameters? This way it might not have to change the data on the SIM card at all. This is the solution used by true MVNOs today (like Tele2 NL). It would mean finding someone (an operator or a third party) to host an HLR, but the deployment could then change operators by changing the routing and switching of data instead of going through a logistical or technical process. For this to work, the M2M end-user should get access to an IMSI number range, which has to be obtained from a national regulator. The national regulator can assign, within the 3-digit Mobile Country Code of that country (MCC 204 for NL), 2- or 3-digit Mobile Network Codes (MNC) to individual operators, or maybe to individual M2M end-users as well.
The great thing about the last solution is that it fits the way GSM was designed; it doesn't require any fundamental technical changes. It might, however, require changes in political and business thinking. Not only does this last solution allow for 'easy' switching, it also seems to solve some other problems large-scale M2M users have (see the sketch after this list). For instance:
• Coverage issues: This is a big problem. Almost by definition there is perfect coverage everywhere except in the 4 square meters where the device is located. Worse, all the other networks have perfect coverage there except the one chosen for this deployment. This is an unsolvable problem, because wireless propagation is voodoo: only in the field do you know what coverage you have at that moment, and a new building, a change in the orientation of an antenna, etc. can change it in the future. By contracting (national) roaming on multiple networks in a geographic area, an M2M deployment could increase its coverage. (This is the same as when you go abroad and generally can use all available networks instead of just the one you have your contract with: a Vodafone NL customer can roam on Orange UK, but a Vodafone UK customer cannot. Some M2M users actually use international SIMs for this reason.)
• International roaming: Those M2M users whose devices travel (personal navigation, eReaders, cars) could contract different operators in different geographic regions. (At this moment, though, this might be hard to arrange, as roaming agreements seem to be more or less closed to non-GSM network owners.)
• Initial provisioning/lifecycle management: Large-scale M2M deployments often want to use one device on a global scale; think of an eReader with an embedded communications device, or a car. For communications cost reasons this may not be smart, so they are now torn between producing just one device in China and distributing it globally, or producing country/region-specific devices. Many retailers, however, don't like country-specific devices: they want to distribute inventory according to need, not according to how the supplier pre-packaged it. Furthermore, those nasty customers may buy a German version and use it in the Netherlands, which means either crippling the functionality while the user is abroad or accepting huge roaming fees.
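The mechanics of this regulatory answer boil down to changing which HLR a given IMSI prefix routes to, as referenced above. A toy sketch of that routing view; every prefix and name below is illustrative only:

```python
# Toy routing view of E.212 numbers: the leading digits of the IMSI
# (MCC + MNC) decide which HLR authenticates the device. If a large M2M
# user holds its own MNC, "switching operators" becomes an update to this
# table instead of a SIM swap in the field. All entries are invented.
imsi_routing = {
    "20408": "HLR of incumbent operator A",
    "20416": "HLR of incumbent operator B",
    "20499": "BigM2MCo's own HLR (hosted by whichever carrier wins the contract)",
}

def route(imsi: str) -> str:
    """Find the HLR for an IMSI from its MCC+MNC prefix (2-digit MNC assumed)."""
    return imsi_routing.get(imsi[:5], "unknown network: reject")

print(route("204991234567890"))   # reaches BigM2MCo's HLR via any radio network
```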
So the last answer looked very promising, but the big question was: why isn't anyone doing this? It looks so easy. I'm old enough to know that other people are more experienced than I am, so if no one is doing it, there must be a good reason for it... and there is. There is a snag in the regulations regarding E.212 numbers (as IMSIs are also known): the Dutch (and, as far as I know, all European) regulations require the user of such a number to be a "public" provider of electronic communications networks, which many M2M end-users can't claim to be.

Being an ex-regulator, I know regulations can be changed, so I talked to the government, which happened to have been informed of the solution by a side note in a Stratix report. In the end we won, in competition, a commission to evaluate the solution in full. What was really nice about this is that it touches on some core issues in mobile telecommunications:
1. Why is it that end-users don't have access to wholesale markets for (mobile) telecommunications? Almost every other market allows significantly large users onto the wholesale market. In energy, every farmer can become an electricity or gas producer and access the spot market; the same goes for banking services, where lease companies run banks and trade on the international markets. In telecoms this is only really possible for Internet peering and transit: the fixed telephony market is semi-locked, and the mobile telephony market fully locked, for large-scale end-users.
2. How come roaming is expensive? If all that distinguishes one mobile device from another is the IMSI number used, then why is it expensive to use an Orange UK SIM on the Orange France network? It could be the same HLR and network serving the customer. If "foreign" just means that the first three digits of the IMSI are different, it's not much of a distinction. So why is there a distinction?
3. What does it mean to be a public network, and why do we have the distinction in law between public and private networks? If it is such an important difference, then why do private networks like Google, the BBC and Microsoft have no problem connecting to the Internet? I used to love this debate when I was discussing lawful interception and data retention; this is another view on the same problem.
4. How to use Private GSM in the DECT guard band, which is already possible in the UK and NL. Private GSM makes use of spectrum that was reserved as a guard band between DECT and GSM 1800; in the Netherlands its use is license-free. It allows users to use normal GSM handsets to connect to a private low-power network, which could replace DECT, whose propagation characteristics are often horrible. A problem is getting access to SIM cards for this use.
I won't comment on the specific results of the report; it should speak for itself.

All in all, I've had tremendous fun researching these questions. Please read the report, all 55 pages, or the slides if you can't read Dutch. If you have any questions, give me a call on +31613414512 or post them in the comments. And if anyone wants a similar report in English or another language, please commission me ;-)



Herman Wagter's NeTU device to enable WiFi or open micro WiFi/GSM cells

http://www.dadamotive.com/2010/04/gigabit-society-broadband-as-a-utility.html

[..]
Fortunately there are solutions which will take us in the right direction. It so happened that my ideas overlapped with work in progress by Genexis in the Netherlands. The following is the result of a recent inspiring discussion with Gerlas van der Hoven, their CEO. The key conceptual step is to define a "Neutral Termination Unit" (NeTU) in the home. (For the sake of argument we assume a FttH access network.) The physical fiber line terminates in the NeTU.

The NeTU solves the above issues by allowing multiple independent networks, with associated services, to be configured in a simple and reliable way, so that each (virtual) network can be billed separately.

The NeTU is active and needs power, but performs only a limited number of functions. It could be described as a minimal fiber modem redesigned to act as a platform for a number of physical "apps" (P-apps). It will be cheaper than a modem because it is minimal.

A P-app is a physical device (a piece of electronics with software) designed to be user-pluggable into the NeTU. The NeTU gives the P-app physical support, power and a connection to the broadband network.

The trick is that the NeTU recognizes the P-app and automatically provisions a specific VLAN over the access line to the nearest aggregation point (exchange, central office, cabinet). A VLAN creates a virtual connection to the exchange. Each P-app has its own VLAN, its own IP space and so on. The P-app has its own connectors to be used by the consumer or the application.

[..]
In the exchange the VLANs are separated and recombined per type of P-app, and fed into backhaul specific to each type of P-app. This allows for very diverse qualities and prices per type of P-app: for instance a very "thin" but highly reliable and secure backhaul for smart meters directly to the energy companies, a "thick" backhaul for telepresence, or [a network-operated WiFi or GSM node].

In the home a P-app is the gateway for a home network and specific applications. The first P-app will be the "triple play" P-app, providing the traditional voice, TV and Internet access with routing, firewall and WiFi.
The consumer buys a P-app, or receives it from their ISP, plugs it in, and voila. Do you want to change your ISP? Just unplug the P-app and swap in a different one.
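One way to picture the provisioning step is as a lookup from P-app type to VLAN and backhaul profile. A minimal sketch, under the assumption that the NeTU identifies each P-app by a type code; all names and numbers are invented:

```python
# Hypothetical provisioning table inside a NeTU: plugging in a P-app of a
# known type brings up a dedicated VLAN toward the aggregation point.
PAPP_PROFILES = {
    "triple-play":  {"vlan": 100, "backhaul": "thick, best-effort"},
    "smart-meter":  {"vlan": 200, "backhaul": "thin, highly reliable, secured"},
    "telepresence": {"vlan": 300, "backhaul": "thick, low-latency"},
}

def provision(papp_type: str) -> dict:
    """Return the virtual connection a freshly plugged-in P-app gets."""
    profile = PAPP_PROFILES.get(papp_type)
    if profile is None:
        raise ValueError(f"unrecognized P-app type: {papp_type}")
    return profile

print(provision("smart-meter"))   # its own VLAN, its own backhaul quality
```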

[…]


Jaap van Till on Wifi over Fiber

Yes, I have seen the future and ... it looks good.
Network managers I spoke to recently in Belgium said the Netherlands is about five years ahead in the price and availability of high-speed lines, international links through the AMS-IX, and dark fiber links. Germans estimate that our bandwidth market with FttBusiness is about 10 years ahead of theirs. I am quite sure that SURFnet, serving the closed user group of national research and education with its insatiable student and scientist demand for bandwidth, has played a major role in opening up these markets and infrastructures. It is possible that SURFnet, as a breeding ground for new ideas and a bundler of new demand, will do an experiment with a new phase of mobile networking within the campuses of universities, schools and research institutes.

Yes, you will think that this will be 4G (aka LTE), after the present 2G GSM, WiFi and 3G UMTS. 4G will come for sure, but we want to look a bit beyond that.
The problem with the present cell-phone networks is that mobile Internet users are starting to demand huge data volumes from their smartphones and tablets. Phones with Android use even more capacity on 3G networks than those with iPhones, especially for video, including on the new pads from Apple, Samsung and others. This recently forced the operators to change their billing policies. Despite a recently increased pace of investment across the whole country in extra base stations and fiber backhaul capacity, which brings the optic fibers closer to the base stations, the operators foresee that network congestion and delays (latency) will strike now and then. Users will not be amused by that. The operators blame the 5-10% of users who eat up about 80% of the network capacity. Bad users, bad users!! (I guess it is those students using the SURFnet computer networks too.) The operators can slow down these super-users with tariff boundaries when they exceed a monthly volume limit, or put them back in certain waiting lines. But I am afraid this will not help much once the rest of the users start, for instance, putting streaming radio on their smartphones; you do not have to be a member of the superdigerati to do this. And the whole situation seems a bit weltfremd (out of touch with reality) to me. At last people nearly eat up the networks, and now the operators are complaining. Does a brewery ever complain when students drink too much beer?

The direction in which the solution to this situation can be found is, in my opinion, rather obvious. It is inevitable that for higher wireless data speeds and volumes, the radio frequencies used must be higher and the spectrum bands broader. That results in smaller cell sizes, because of the transmission power required at these higher frequencies versus the legal limits on emission and the battery power of the smartphones. That means many more base stations must be installed than for GSM or 3G. And indoor coverage for LTE would be an even bigger problem. This can be catered for by placing (so-called femtocell) base stations inside homes, apartment buildings and offices that... have an FTTX network access connection, because for the high-speed cellphone traffic you need high-capacity terrestrial lines to carry the base station traffic.
The telecom industry is already aware [1] that small-cell base stations are a MUST for the deployment of 4G (LTE) networks: "Femtocells and Enterprise Femtocells Provide Real Solutions to Wireless Carrier Problems" was the headline of [1].
And a good reason for the industry to step up the pace of the roll-out of "glass", don't you think?! So here is the killer app for FttX: smartphones!! So it is an honour and privilege for me to announce to you all the intended marriage of the happy couple "Wireless" and "Fiber"! It will no doubt be as heavenly as the combination of WiFi and your DSL or cable modem.

But there is still something not quite right. Even if you put a femtocell into an apartment building, radiation above 1 GHz (WiFi sits at 2.4 and 5 GHz) does not pass through walls very well, so you would need fiber-connected femtocell base stations in every room of the home/office, lecture hall and laboratory. The solution to that has been thought of by prof. Ton Koonen of the COBRA institute at the Technical University of Eindhoven (TUe, NL): use 'Radio over Fiber' (aka 'Radio over Fibre' if you look it up on Google) to distribute the base stations' radiation in-house to every room.
Let me explain this a bit. RoF has nothing to do with radio broadcasting; it is about putting radio-frequency (RF) waves, like those used for wireless, into an optic fiber. In fact every optic fiber has huge bandwidth for electromagnetic waves, including those for mobile devices as well as light. So instead of wiring fiber to connect femto base stations in every room, the idea is to put a fiber-connected central femtocell in a faculty building and then, beyond that, radiate the wireless signals (to and from it) through a number of optic fibers with rather simple antennas at their ends. And bingo: you can connect smartphones, iPads, etc. (without having to change anything in them) wirelessly in that room, with much less energy(!) from both base station and mobile devices. The technology for RoF has been around, and implemented for instance in tunnels, since 2001, but Koonen designed a number of new applications, for instance in homes, for radiation at 60 GHz. A friend of mine and former colleague at Stratix, ir. Almar Giesberts, graduated with him in 2003 on the use of multimode plastic fibers [2] for RoF, and he could show that even plastic fishing-line wire would do! Plastic fiber is already used to some extent for low-cost cabling of TV and LAN connections in homes [3], so why not for wireless too?

Such a construction, which may be what 5G wireless becomes (fiber to the base station AND fiber beyond it to the rooms), has a number of advantages besides solving the network congestion problem for wireless Internet use. As I said: much lower total energy use (including lower use of battery power) and less powerful radiation needed, plus massive reuse of frequency spectrum bands, since the radiation is boxed into rooms. When frequencies are changed later, the simple antennas do not have to be upgraded: they do not process the RF signals but simply pass them in and out of the fiber. Well, this is basically what we want to test in the new SURFnet wireless trial.

Further down the road one can foresee the use of this RoF construction ('5G'?) in homes, apartment buildings and office buildings for the general public. So it will mean not only FttH but even (plastic) 'fiber beyond the meter' into the rooms, with for instance only one base station for each apartment building. The nice thing for mobile operators is that they can move the intelligent equipment to a more central place in the neighborhood or even further upstream. That is what I have preached for years as the real message of optic fiber: the place where you put equipment is not so important any more. Things will shift to the edge, or even further into the rooms, and other things will shift to more central places. I call this "the FiberShuffle" and it has been going on for years.
One last thing: we will eventually have to wire our offices and home rooms with fiber, glass or plastic. But maybe people will invent nice camouflage for it, like ivy or vine tendrils on the walls. That would make networks green and fruity at last.

Jaap van Till


[1] http://www.instat.com/abstract.asp?id=29&SKU=IN1004712GW "Femtocells will play a large part in 4G"
[2] http://alexandria.tue.nl/extra1/afstversl/E/567004.pdf
[3] http://en.wikipedia.org/wiki/Plastic_optical_fiber
===================================
Groet van/ Yours Sincerely
ir Jaap van Till http://www.vantill.dds.nl/
* Chief Scientist-Tildro Research BV
* Senior Advisor- Stratix Consulting, Hilversum
* Professor emeritus telecommunication & networks- HAN Polytechnic & TUDelft, NL
vantill(at)gmail.com +31 (6) - 55 30 3210
=================================





------
email: Bill.St.Arnaud@gmail.com
twitter: BillStArnaud
blog: http://billstarnaud.blogspot.com/
skype: Pocketpro