
Bill St. Arnaud is an R&E Network and Green IT consultant who works with clients on a variety of subjects, such as next generation research and education networks and the Internet. He also works with clients to develop practical solutions to reduce GHG emissions, such as free broadband and dynamic charging of eVehicles.

Wednesday, January 31, 2007

Detailed fiber to the home report for San Francisco

[Here is an excellent report, prepared for the city of San Francisco, detailing various fiber-to-the-X architectures for the city's internal needs as well as to the home. My only quibble is that, as far as I can tell, it does not do any analysis of micro-conduit, which in my opinion provides the lowest up-front cost, especially with low take-up rates. Thanks to Dirk van der Woude for this pointer on Gordon Cook's list -- BSA]

Web 2.0 - SOA and AJAX - the next killer app?

[Two excellent articles on the role of Web 2.0, SOA and AJAX - some excerpts -- BSA]

There is no Web 2.0 without SOA

For Web 2.0 to really deliver, the hard work is in getting the existing systems ready, not in the flashy GUI.

The point here is that not only do SOA and Web 2.0 work well together, it's actually really hard to see how you can have enterprise-grade Web 2.0 without changing the way you deliver your existing IT. Please note here that again I'm not talking about WS-* vs. REST, as those are just implementation technology decisions. What I'm talking about is creating an existing IT estate that can be easily consumed by "Mashup" or dynamic applications.

The key here is that a lot of future value for organisations is going to be based around that form of external collaboration with suppliers, customers and partners. Enabling that is a goal of SOA (thinking), SOA (technology), Web 2.0 (technology) and Web 2.0 (Kool-aid-drinking PPT jockeys) alike. This won't be done in a green-field environment; it will be done based on the existing applications, ERPs and mainframes that the organisation has today, which were never intended to work in that way.

SOA doesn't need Web 2.0, but it's looking like the best interaction model for future systems.

Web 2.0 does need SOA if it's going to help enterprises deliver external value.

AJAX + SOA: The Next Killer App
SOA lacks a face; that's where AJAX comes in - it puts a face on SOA

Enterprises trying to improve business unit productivity and the reuse of IT assets continue to struggle. IT organizations have achieved some success by attacking these challenges with Service Oriented Architecture (SOA), but in most cases have still only exposed small portions of the overall IT service portfolio.

The fact is that SOA is middleware - and middleware traditionally relies on more middleware to translate data into a consumer-friendly state. It's certainly a major disappointment when you finally get your SOA right only to find that building a composite application requires using a portal (middleware) and/or orchestrating it with a BPEL engine (even more middleware).

AJAX is driving a renewed interest in SOA, especially in the mashup space. But how can two very different technologies combine and connect to provide something far greater than the parts?

The ability to apply logic in the client (browser) and to access server data without disrupting the Web page is what allows the new Web 2.0 paradigm to open up so many enticing possibilities for rich enterprise applications.

Earlier I said that SOA was lacking a face. That's where AJAX comes in - it puts a face on SOA.

I see this as an opportunity to begin to apply AJAX + SOA to drive a whole new class of Web 2.0 business applications.
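A minimal sketch of what "putting a face on SOA" looks like from the browser: an asynchronous call to an existing service endpoint updates the page in place, with the response-to-markup step kept as a pure function. The endpoint path and the response fields here are invented for illustration, not any particular product's API.

```javascript
// Pure presentation step: turn a service response into display text.
// Keeping this separate from the network call makes it easy to test.
function renderOrder(order) {
  return `Order ${order.id}: ${order.status} (${order.items.length} items)`;
}

// AJAX step: fetch from an (assumed) SOA endpoint and update the page
// in place, without a full reload. Only meaningful in a browser.
async function showOrder(orderId) {
  const resp = await fetch(`/services/orders/${orderId}`); // hypothetical endpoint
  const order = await resp.json();
  document.getElementById('order-panel').textContent = renderOrder(order);
}
```

The same pattern composes: a "user-based composite application" is just several such calls against different services, merged client-side rather than in a portal or BPEL engine.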

What I really want is a user-based composite application, not a middleware-based composite application. I truly believe we're entering a new era with amazing opportunity. Web 2.0 social networks, photo-sharing and tagging are great, but the real corporate impact comes in the form of Web 2.0 for the enterprise.

Thursday, January 25, 2007

Cable companies confront bandwidth crunch as networks move TV shows to Internet

[Here is an excellent article from Light Reading on the bandwidth challenges facing cable companies. They are starting to realize that their HFC plant may not be sufficient to handle this tsunami of video data and new applications, and that they too must start to explore fiber to the home (though I hope they will not be sucked into the various PONzi technologies being promoted). What is also interesting is CBS's plan to move its prime-time TV shows to the Internet a day before they are broadcast on TV. Thanks to Dirk van der Woude for this pointer. Some excerpts from the Light Reading article -- BSA]

Cable Confronts Bandwidth Crunch

Shaking off two years of disbelief and dismay, the cable industry has finally started dealing with the prospect of an impending bandwidth shortage.

They're even weighing such previously unthinkable moves as building fiber-to-the-home (FTTH) networks and adopting PON architecture, just like some of the big phone companies.

...the Society of Cable Telecommunications Engineers' annual Emerging Technologies show, found cable officials soberly agreeing that skyrocketing subscriber bandwidth consumption is threatening to overwhelm even their fattest broadband pipes.

...panelists blamed the startling increase in Internet video use over the past couple of years. In particular, they focused on the sudden rise of YouTube Inc., which now serves up 120 million video streams per day and draws more than 34 million unique users each month to its website.

Jeff Binder, senior director at Motorola Inc., warned that the big broadcast networks may soon pose an even greater threat to the cable industry's video business model than YouTube. He cited CBS Corp.'s plans to stream its primetime programs on the Web for no charge a day earlier than their first run on the TV network.

Knorr, whose cable system serves a major college town, said he's already seeing early signs that younger consumers are opting for Internet video downloads over traditional cable video service. In Lawrence, home to the University of Kansas, 5,000 of the cable system's 40,000 subscribers only take high-speed data service.

Wednesday, January 24, 2007

Structural Separation and Golden Age of Network Centric Computing

[Here are a couple more excellent reports, from CIBC World Markets and Paul Budde's blog, about why it is in the interest of telcos' shareholders for the companies to be structurally separated in the last mile. The basic argument is the same - that new technologies such as web services, Web 2.0, P2P, etc. are allowing companies that are not linked to the network to offer the same services that were traditionally offered by the carriers, including control and management of network equipment and facilities. Thanks to Timothy Horan and Paul Budde for these pointers. Some excerpts -- BSA]

CIBC World Markets Report

Paul Budde Blog

The industry food chain is now in place to enable Network Centric (NC) computing to become mainstream, or enter the golden age. NC computing is well described by Sun's old adage that "The network is the computer."

• Among other things, new cost-effective technologies are making legacy telecom networks obsolete and enabling the broad convergence of applications in four separate industries - communications, computing, entertainment and consumer electronics.
• In this network environment, applications and services are being provisioned by independent providers that are not linked to the underlying network.
• NC computing is driving demand for basic communications services, but it is also rapidly changing legacy business models throughout the economy.
• While growing vertical integration has been the industry trend in the last five years, which we believe should help earnings for the next few years, a shift to a more horizontally segmented industry structure is inevitable in the long term, in our view.
• Emerging service providers are best positioned to take advantage of these trends. Incumbents will also likely adjust their business models to capitalize on these solid fundamentals.

With respect to the large telcos, we expect them to report good earnings growth over the next few years, partially aided by recent industry restructuring. While the long-term business fundamentals are uncertain, restructuring has helped pricing, improved service quality and lowered costs. However, to maximize long-term shareholder value, we believe integrated carriers should split themselves along business lines, such as transport-only wholesalers (selling to other service providers) and “customer-facing” service integrators (buying from other wholesalers to build a complete bundle). The alternative, in our view, is sub-optimal performance in one or both businesses.

If integrated carriers do end up separating their underlying “last mile” pipes from their services’ businesses, we believe it could be one of the greatest value maximizing events in the history of the telecom industry. This process will probably begin outside the U.S., because regulators cannot point to robust cable competition, and the only real way to generate long-term competition is through structural separation (horizontal segmentation), in our view.

Other than the last five years of vertical integration, the industry has been marching towards a more horizontally segmented structure for forty years - AT&T spun out its PBX, Local Exchange, Computing, Wireless and Equipment businesses. In this regard, we would note that the three major steps taken by the old AT&T towards horizontal segmentation were some of the greatest creations of shareholder value in its history: 1) allowing independent telcos to interconnect with its LD network in the 1920s; 2) allowing CPE equipment to be connected to its network (the 1968 Carterfone decision); and 3) spinning out the RBOCs in 1984. These steps were all forced on the company by regulators, and interestingly all three were vehemently fought by management. In many ways, this is exactly what the net neutrality principle is all about.

As we have been writing for the last seven years, we foresee an economy-wide shift to NC computing - driven by disruptive technologies. Technologies such as IP/Ethernet, softswitches, optronics, and wireless broadband are driving traffic onto one multi-purpose IP network that enables new applications (e.g., IT to small businesses) to be purchased separately from network access (e.g., voice and video over IP). These technologies have also increased broadband speeds and reduced latency. Additionally, substantial improvements in computing power (Moore's law), network security (authentication, intrusion detection, encryption, etc.), compression and higher layer protocols are setting the stage for the golden age of NC computing.

There are a lot of different names or segments of NC computing including the Internet itself, Grid Computing (which has quietly become very popular), Software as a Service (SaaS), Virtual Servers and Web 2.0. Regardless of the name used, it basically means networking together and sharing computer processing, storage and data, which can be accessed over the Internet using thin devices at the edge of the network. This increases demand for communications connectivity and boosts computer utilization and efficiencies.

Tuesday, January 23, 2007

Structural separation gives the best ROI for telcos - Bear Stearns

[Here is an interesting analyst's report from Bear Stearns that argues structural separation will unlock value and provide the best return for telcos. The argument is similar to that recently made by Bernstein Research for the cable companies. They both argue there is little hope that the telcos and cablecos will be able to compete on services and applications given the incredible innovation, global market and financial resources available to companies like Google, Joost, Microsoft, Amazon, etc.

Both reports argue that telcos and cablecos should unlock the value of their last mile assets through structural separation and focus on a solid regulated return, rather than trying to compete in the highly dynamic application and service space. More importantly, the entire business model for the application/service industry is changing from fee-for-service to other modalities such as advertiser-driven, which work well with large global markets, but are not practical on the small physical footprint of the outside plant.

The biggest challenge for telcos is that the cablecos have a far superior outside plant in terms of bandwidth and lower opex. The telcos have to move to providing raw fiber connectivity to customers with new business models if they hope to compete with cablecos, as for example with Green Broadband. Thanks to Dirk van der Woude for this pointer -- BSA]

Bernstein report for cablecos

Bear Stearns report for telcos

Green Broadband

Monday, January 22, 2007

Mashup Corporations - the end of business as usual

[Mashups - web services, SOA, cyber-infrastructure, Enterprise 2.0, grids, P2P - are all aspects of the radically changing environment of the Internet that will have major impacts on research and business. Thanks to Dirk van der Woude for the pointer to this book and Fabchannel. Some excerpts from his posting on NANOG -- BSA]

This book is a cultural, rather than technical, guide to Service-Oriented Architectures and Web 2.0 technologies.

Mashup Corporations: The End of Business As Usual tells the story of fictional appliance maker Vorpal Inc. and its pursuit of creative sales methods for its popcorn poppers. Marketing manager Hugo Wunderkind has identified a new channel and willing market for a personalized popper. CEO Jane Moneymaker recognizes a winner, but how can she persuade CIO Josh Lovecraft to adapt his processes?

Still need to get a better grip on what the new world of Mashup business models really is leading to? Have a look at this new mashup service of Fabchannel: until now 'just' an award-winning website which gave its members access to videos of rock concerts in Amsterdam's famous Paradiso concert hall. Not any more. Today Fabchannel launched a new, unique service which enables music fans to create their own, custom made concert videos and then share them with others through their blogs, community profiles, websites or any other application.

So suppose you have this weird music taste, which sort of urges you to create an ideal concert featuring the Simple Minds, Motorpsycho, The Fun Loving Criminals, Ojos de Brujo and Bauer & the Metropole Orchestra. Just suppose it's true. The only thing you need to do is click this concert together at Fabchannel's site - choosing from the many hundreds of videos available - customize it with your own tags, image and description, and then have Fabchannel automatically create the few lines of HTML code that you need to embed this tailor-made concert in whatever web application you want.
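The "few lines of HTML" step can be imagined as a simple template: given the user's selected videos and metadata, emit an embeddable snippet. Everything below - the player URL, the parameter names - is invented for illustration; Fabchannel's actual embed code will certainly differ.

```javascript
// Build a hypothetical embed snippet for a user-assembled concert.
// The playerUrl and query parameters are assumptions, not Fabchannel's real API.
function buildEmbed(concert) {
  const playlist = concert.videoIds.join(',');
  const src = 'http://player.example.com/fabplayer' +
              `?videos=${encodeURIComponent(playlist)}` +
              `&title=${encodeURIComponent(concert.title)}`;
  return `<iframe src="${src}" width="480" height="360"></iframe>`;
}
```

The design point is that the service's whole output is a self-contained snippet: anyone can paste it into a blog or profile page, which is exactly what lets the content spread to places the service itself never reaches.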

As Fabchannel put it in their announcement, "this makes live concerts available to fans all over the world. Not centralised in one place, but where the fans gather online". And this is precisely the major concept behind the Mashup Corporation: supply the outside world with simple, embeddable services; support and facilitate the community that starts to use them; and watch growth and innovation take place in many unexpected ways.

Fabchannel expects to attract many more fans than they currently do. Not by having more hits at their website, but rather through the potentially thousands and thousands of blogs, myspace pages, websites, forums and desktop widgets that all could reach their own niche group of music fans, mashing up the Fabplayer service with many other services that the Fabchannel crew – no matter how creative – would have never thought of.

Maximise your growth, attract fewer people to your site. Sounds like a paradox. But not in a Mashup world.

Thursday, January 18, 2007

Marriage of Next generation Internet and Cyber-infrastructure

[Andrew McAfee at Harvard University has an excellent blog on Enterprise 2.0 discussing SOA and Web 2.0. In his latest posting I think he makes an excellent point, reinforced by Van Jacobson's talk at Google, that the next generation of the Internet will be focused on "platforms" rather than "channels". From two opposite starting points we are gradually seeing a convergence in architecture for both cyber-infrastructure and the next generation Internet. To date, a lot of next generation Internet and telecom architecture has been focused on communication channels and all the challenges they impose in terms of security, reliability, routing, overlays, VPNs, etc. But if we start to think of Internet applications and services as platforms, such as SOA architectures and peer-to-peer applications like Joost, Skype, SIP-P2P, etc., then the channels become an integral part of constructing information and service platforms. We are less interested in getting from here to there, as with traditional telecom and Internet point-to-point binary channels, and more interested in building collaborative platforms and services with Web 2.0, P2P and SOA. Some excerpts from Andrew McAfee's blog -- BSA]

A Technology Flip Test: Introducing Channels in a World of Platforms

The writer and cultural observer Stanley Crouch, when asking his audience to consider a given issue, sometimes proposes a 'flip test' in which important elements of the status quo are reversed. It's an effective way to unmask hidden assumptions and double standards. And it can work quite well for questions around technology.

One useful flip test consists of mentally switching the order of appearance of a new technology and an existing one. At a conference years back I was sitting on a panel that was asked to talk about the future of the book. As the discussion was heating up about the inevitability of electronic media, someone on the panel (I wish it had been me) proposed a flip test. He said "Let's say the world has only e-books, and then someone introduces this technology called 'paper.' It's cheap, portable, lasts essentially forever, and requires no batteries. You can't write over it once it's been written on, but you can buy more very cheaply. Wouldn't that technology come to dominate the market?" It's fair to say that comment changed the direction of the panel.

So as talk about the risks and possible downsides of Enterprise 2.0 technologies continues, a flip test might bring some clarity to the discussion. This flip test consists of imagining that communication platforms (like E2.0 tools) are already in place, and then channels show up within corporations.

Most current collaboration technologies, including email, instant messaging, and cell phone texting, are what I call channels. They essentially keep communications private. People beyond the sender and receiver(s) can't view the contents of information sent over channels, and usually don't even know that communication has taken place. Information sent via channels isn't widely visible, consultable, or searchable. And no record exists of who sent what to whom, so channels leave no trace of collaboration patterns.

The new generation of collaboration technologies that are underpinning Web 2.0 and Enterprise 2.0, in contrast, are all platforms. They're repositories of digital content where contributions are globally visible (everyone with access to the platform can see them) and persistent (they stick around, and so can be consulted and searched for). Access to platforms can be restricted (to, for example, only members of an R&D lab or a team working on a particular deal) so that proprietary content isn't universally visible within a company, but the goal of a platform technology is to make content widely and perennially available to its members. A lot of content on this blog and other writing on W2.0 and E2.0 has articulated the desirable properties of digital platforms.

So here's the flip test: imagine that current corporate collaboration and communication technologies were exclusively E2.0 platforms -- blogs, wikis, etc. -- and all of a sudden a crop of new channel technologies -- email, instant messaging, text messaging -- became available. In other words, imagine the inverse of the present situation. What would happen? How, in the flip-test universe, would the new channel technologies be received?

I imagine two main outcomes. First, users would adopt the new channel technologies for private communications, but not for much more than that. They'd quickly see that it's less efficient to use channels, and less helpful to their colleagues. In other words, whether they were thinking selfishly or selflessly they'd keep using platforms. And the endowment effect would be working in favor of the platform technologies they're already using.

Second, many constituencies would hate the new technologies, and strenuously advocate that they be kept out. In a company accustomed to platforms, introducing channels would be perceived as asking for trouble. They'd be seen as tools that would let sensitive information leave the company and jump over Chinese walls, let sexual harassment and other inappropriate behavior flourish below the radar, and let people waste as much time as they wanted to chatting with each other about irrelevant stuff. What's even worse, compliance officers and other managers would feel largely powerless to stop this bad behavior, because channel traffic is so hard to monitor. They couldn't read all employee emails, and sampling would be unlikely to catch all the problems quickly enough to head them off.

For managers accustomed to platforms where all contributions are immediately and universally visible and traceable, channel technologies would seem scary. I could imagine that a common response, upon hearing about them, would be something like "No way. The risks of email and IM are too great. If people need to talk privately, let them pick up the phone. We'll set up a few email accounts so that we can exchange information with the outside world, but we're sticking with our platforms for internal communication."

What does this flip test reveal? To me, it indicates that many companies are paying far too much attention to the possible risks and downsides of E2.0 platforms, given that they've already deployed technologies that have much greater potential for abuse. I'm not advocating that channel technologies should be shut off and entirely replaced by platforms; I'm just trying to highlight the relative risks of the two technology categories. The flip test is a good way to do this.

What do you think? Am I missing something, or downplaying some important downsides about E2.0? Or is the flip test telling us what I think it is?

Wednesday, January 17, 2007

The Grand unification of Grids, SOA, workflow, Amazon EC2 and Oracle

[This is an excellent article showing how BPEL workflow integrates SOA applications across multiple utility computing facilities. I think these types of platform architectures are going to transform academic research and business practices. The EC2 concept of instantiating computing services as a web service is very similar to the CANARIE concept of instantiating IP networks through a web service. In the near future I suspect we will see all sorts of physical resources - computing, networks, instruments, etc. - instantiated as web services and linked together through BPEL compositions. Some excerpts -- BSA]

The union of Amazon Elastic Compute Cloud (EC2) and Oracle SOA Suite is a match made in heaven. Recently, we tested Amazon EC2 by marrying components of Oracle SOA Suite with EC2 to demonstrate how utility computing and SOA are shaking the foundations of current IT provisioning, development, deployment and maintenance models. This article examines how SOA technologies enable utility computing to move beyond the vision stage and become a reality.

Amazon EC2 is a massive farm of Linux servers at your fingertips. It enables you to bring up a single server or many server instances - all installed with preconfigured Linux images - with a single command. You can use pre-packaged Linux images provided by EC2 - public images are currently available that include Apache and MySQL - or build new images and upload them to Amazon's S3 storage service.

The recently released Oracle SOA Suite 10g is a packaged set of standards-based components for enabling web services-based SOA. Oracle SOA Suite covers web services development, orchestration, monitoring, and security. Within the SOA Suite, Oracle BPEL Process Manager orchestrates transactions across disparate applications within and across corporate boundaries. But across all such technologies, what is important in the context of this article is that they be Web-service enabled and support a grid computing model where several low-cost servers can be deployed in a cluster to provide scalability and high availability.

Web services-based SOA has fundamentally changed how applications integrate. Add on top of that Amazon EC2 to host your business operations, and you get a potent combination. The significant, yet unnoticed breakthrough of Amazon EC2 is in its ability to spawn a server instance with a mere web-service call. In addition to a command line interface, EC2 provides a detailed provisioning WSDL that can be used by any web-services application to dynamically control (e.g., run, terminate, authorize) Linux instances within the Amazon Cloud.

This EC2 provisioning enables WSDL-aware products to readily call into EC2 through SOAP-based messaging. Because of this approach, SOA platform products and orchestration languages like BPEL can now be extended beyond their typical application development role to also manage infrastructure provisioning. Now the same components which run business applications can also control dynamic provisioning and maintenance of the very physical infrastructure that they are deployed on. With Amazon EC2, for the first time, SOA components are aware of and in control of their host machines and can clone new instances of themselves based on environmental factors such as user load, available resources and cost.
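A sketch of what a self-scaling SOA component's provisioning logic could look like: a pure decision function, plus the request an orchestration step would send. The parameter names follow the general shape of the EC2 Query API's RunInstances action, but treat the whole thing as illustrative, not a faithful reproduction of the real API or its signing requirements.

```javascript
// Illustrative request parameters for launching instances - the kind of
// call a BPEL step or monitoring component could make via the EC2 web
// service. (Real calls also need credentials and request signing.)
function buildRunInstancesRequest(imageId, count) {
  return {
    Action: 'RunInstances',
    ImageId: imageId,
    MinCount: String(count),
    MaxCount: String(count),
  };
}

// Scaling policy: how many new instances to clone, given current load.
// This is the "environmental factors" decision the article alludes to.
function instancesNeeded(currentServers, requestsPerSec, capacityPerServer) {
  return Math.max(0, Math.ceil(requestsPerSec / capacityPerServer) - currentServers);
}
```

For example, a service cluster of 3 servers rated at 250 requests/sec each, seeing 1000 requests/sec, would decide to launch one more instance and emit the corresponding RunInstances request.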

Amazon Elastic Compute Cloud is an ideal hosting environment for commodity SOA components. Web services-based administration and provisioning of Linux servers on-the-fly heralds a new era of dynamic traffic management. With such flexible SOA components, reliable, resilient, scalable and high-performance SOA deployments can be built on utility computing infrastructure that lives outside corporate boundaries. Some simple enhancements to the Amazon Web services API and open collaboration with the developer community as well as with commercial software vendors could position Amazon EC2 as the utility computing platform of choice. Over time, business models, service level agreements, and regulatory requirements will all find a happy balance to optimize IT assets' efficiency. We anticipate that the Amazon EC2 cloud coupled with grid-enabled software like Oracle SOA Suite will help realize IT gestalt: The whole is greater than the sum of the parts. At the same time, the ability to share computing pools across many users can smooth out the issues of peak loads for much more efficient use of resources. This will enable an effect which could be called "economic gestalt": the whole costs much less than the sum of the parts.

Monday, January 8, 2007

Van Jacobson on the Future of the Internet: Best of Google videos

[Google has a web site on the best of their research videos. Lots of excellent material - but the one I particularly recommend is Van Jacobson's talk on the future of the Net. Anybody who is doing research on next generation Internet and/or native XML/URI routing, repositories and peer-to-peer architectures should be interested in this talk. His basic premise is that Netheads are in danger of being caught in a similar closed mindset about how networks should be designed, much in the same way that Bellhead researchers were trapped by the circuit-switching modality. His argument is that the current Internet is ideally designed for conversations between devices, while what we need now is a global facility for the dissemination of information. Thanks to Richard Ackerman for this pointer -- BSA]

Google Research Picks for Videos of the Year
By Peter Norvig - 12/11/2006 02:58:00 PM

Everyone else is giving you year-end top ten lists of their favorite movies, so we thought we'd give you ours, but we're skipping Cars and The Da Vinci Code and giving you autonomous cars and open source code. Our top twenty (we couldn't stop at ten):

Van Jacobson's talk:

Today's research community congratulates itself for the success of the internet and passionately argues whether circuits or datagrams are the One True Way. Meanwhile the list of unsolved problems grows.

Security, mobility, ubiquitous computing, wireless, autonomous sensors, content distribution, digital divide, third world infrastructure, etc., are all poorly served by what's available from either the research community or the marketplace. I'll use various strained analogies and contrived examples to argue that network research is moribund because the only thing it knows how to do is fill in the details of a conversation between two applications. Today as in the 60s problems go unsolved due to our tunnel vision and not because of their intrinsic difficulty. And now, like then, simply changing our point of view may make many hard things easy.

Saturday, January 6, 2007

Venice project from the founders of Skype may significantly drive demand for bandwidth

[The Venice Project is a new system developed by the entrepreneurs who created the wildly successful Kazaa and Skype. It can be loosely described as peer-to-peer YouTube combined with cable TV. Thanks to Dirk van der Woude's post on Gordon Cook's list for this pointer. Some excerpts -- BSA]

Dirk van der Woude reports: I am one of Venice's beta testers. Works like a charm, admittedly with a 20/1 Mbps ADSL2+ connection and an unlimited-use ISP.

Even at sub-DVD quality the data use is staggering...

Venice Project would break many users' ISP conditions OUT-LAW News, 03/01/2007

Internet television system The Venice Project could break users' monthly internet bandwidth limits in hours, according to the team behind it.

It downloads 320 megabytes (MB) per hour from users' computers, meaning that users could reach their monthly download limits in hours and that it could be unusable for bandwidth-capped users.

The Venice Project is the new system being developed by Janus Friis and Niklas Zennström, the Scandinavian entrepreneurs behind the revolutionary services Kazaa and Skype. It is currently being used by 6,000 beta testers and is due to be launched next year.

The data transfer rate is revealed in the documentation sent to beta testers and the instructions make it very clear what the bandwidth requirements are so that users are not caught out.

Under a banner saying 'Important notice for users with limits on their internet usage', the document says: "The Venice Project is a streaming video application, and so uses a relatively high amount of bandwidth per hour. One hour of viewing is 320MB downloaded and 105 Megabytes uploaded, which means that it will exhaust a 1 Gigabyte cap in 10 hours. Also, the application continues to run in the background after you close the main window."
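The cap arithmetic above is easy to make explicit. Whether uploads count against a cap varies by ISP, so that is left as a flag here; the figures are those quoted (320 MB down, 105 MB up per viewing hour).

```javascript
// Hours of viewing before a monthly cap is exhausted, given per-hour
// download and upload volumes in megabytes. Some ISPs count only
// downloads against the cap; others count both directions.
function hoursUntilCap(capMB, downMBPerHour, upMBPerHour, countUpload = true) {
  const perHour = downMBPerHour + (countUpload ? upMBPerHour : 0);
  return capMB / perHour;
}
```

On downloads alone, a 1 GB (1024 MB) cap lasts 1024/320, i.e. about 3.2 hours of viewing; counting uploads too, roughly 2.4 hours.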

Many ISPs offer broadband connections which are unlimited to use by time, but have limits on the amount of data that can be transferred over the connection each month.

The software is also likely to transfer data even when not being used. The Venice system is going to run on a peer-to-peer (P2P) network, which means that users host and send the programmes to other users in an automated system.

OUT-LAW has seen screenshots from the system and talked to one of the testers of it, who reports very favourably on its use. "This is going to be the one. I've used some of the other software out there and it's fine, but my dad could use this, they've just got it right," he said. "It looks great, you fire it up and in two minutes you're live, you're watching television."

Thursday, January 4, 2007

Municipal Broadband - Beware of Geeks bearing Gifts

In the past few weeks there have been several good reports on municipal broadband: two from Reason magazine and one recently published by the Swedish Government's IT Policy Strategy Group, "Broadband for growth, innovation and competitiveness". I highly recommend the latter report, as it is a well-articulated argument for why governments (municipal and national) should focus on the build-out of "passive" infrastructure to enable broadband for all. Passive infrastructure includes conduit, support structure on poles, and sometimes the enabling of condominium dark fiber networks where government agencies, schools and hospitals act as anchor tenants.

Although there are many good examples of successful municipal broadband projects, particularly in smaller communities around the world, I worry that many of these will in time suffer the fate of iProvo as documented in the Reason report. Already we are starting to see companies offer free broadband services (up to 2 Mbps), such as Sky in the UK. I suspect that, in the not-too-distant future, you will see companies offering free fiber to the home. The companies offering these free services will make their money through innovative new business models, demonstrating once again that the private sector can be far more innovative than the public sector in delivering these types of services. I, for one, would not want to be the owner of a municipal broadband network competing against free broadband from the private sector.

However, I do believe municipalities and other levels of government can play a critical role in enabling the deployment of broadband by the private sector, especially in the creation of a level playing field. Two good examples I often cite are the CSEVM in Montreal, where the city provides open conduit throughout the city to all telecommunication carriers, and "@22" in Barcelona, where the city has provided not only open conduit but also open and secure equipment space in each city block where carriers can locate their equipment.

But even if municipalities choose not to deploy such elaborate facilities, they can still make a difference. There are many small things that municipalities and governments can do immediately to enable faster rollout of broadband and create level playing fields. They include:

(a) Placing open conduit under all freeways, overpasses, railway crossings, canals and bridges. These facilities are usually a costly roadblock for many fiber deployment companies.

(b) Allowing overlashing of fiber on existing aerial fiber plant. Overlashing can dramatically reduce the cost of fiber deployment.

(c) Requiring existing owners of conduit, such as electrical and telephone companies, to make 100% of their conduit accessible to third parties, especially the capacity supposedly reserved for future use, and then coordinating construction of new conduit between all parties once the spare capacity is truly used up.

(d) Coordinating construction of all new conduit, especially building entrances, to minimize "serial ripping" of streets, and making all such conduit open to third parties.

Thanks to Kevin Barron for the pointer to the Reason reports and Patrik Falstrom for the Swedish Government IT report -- BSA]

Swedish Government IT report:

The Reason reports:

The first Reason study is by Jerry Ellig, former deputy director and acting director of the Federal Trade Commission's Office of Policy Planning. He warns city officials to "beware of geeks bearing gifts".

The second Reason study looks at iProvo and claims that the city is digging itself in deeper every year.

From a recent study on broadband by the City of Ottawa:

"Montreal has the reputation across the carrier industry of "getting it right". All carriers that operate there praise the ease of access and the low cost of duct rental.

Montreal created an agency called the "Commission des Services Électriques de la Ville de Montreal" (CSEVM) in 1907 to emphasize urban design, to adapt to changing technological requirements as the city grew, and to contribute to the beautification of the city's streets and public places.

The CSEVM has the mandate to operate the whole underground space. It builds, manages and operates the system of underground conduits.

Since that beginning, the system has grown to 19.2 million metres of linear conduit, covering 623 of the city's 2,123 kilometres of streets. It provides direct access to 38,500 private and public buildings through more than 18,000 access facilities.

The City standardizes construction, consolidates demand from various sources and builds additional space to rent out to others. It shares the capital cost among all parties and engineers additional capacity for resale. The City is also very active in retrofitting filled duct banks by adding internal sub-ducts. The primary advantage to the building partners is the reduced cost of duct construction.

The end result is that the carriers have only the highest praise for Montreal as a territory in which to operate, and the cost of duct rental is $3.65 per metre."
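To put the scale of those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted in the Ottawa study above:

```python
# Figures quoted in the Ottawa study on the Montreal CSEVM system
duct_metres = 19_200_000        # total linear conduit
street_km_covered = 623         # street-kilometres with duct banks
buildings_served = 38_500       # buildings with direct access

# Average number of parallel duct runs under each covered street
parallel_ducts = duct_metres / (street_km_covered * 1_000)
metres_per_building = duct_metres / buildings_served

print(f"~{parallel_ducts:.0f} duct-metres per metre of covered street")
print(f"~{metres_per_building:.0f} metres of conduit per building served")
```

In other words, a typical covered street carries on the order of thirty parallel duct runs, which is what makes room for many competing carriers in the same trench.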

Wednesday, January 3, 2007

VoIP, Web services, Grids and Super scale telephony

Tim O'Reilly, in a recent posting to Dave Farber's IP list, alluded to some interesting new developments where an open source VoIP server can use grid facilities like Amazon's EC2 and S3 to build on-demand, scalable phone networks. This is a very new and unusual application of grid technology and web services, and worth checking out. Some excerpts from Tim O'Reilly's posting and the conference pages -- BSA]

Another very cool session at the conference is a workshop entitled "Calls in the Utility Computing Cloud - Experiments in on demand ultra scalable telephony using Amazon EC2 and S3", covering both FreeSWITCH and Asterisk. This is really exciting stuff. They are using EC2 to create on-demand scalable phone networks - something that was not possible before. You basically had to over-specify and pray your network never reached capacity.
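The shift described here, from fixed over-provisioning to on-demand capacity, can be sketched as a simple scaling rule. The per-node call capacity, the headroom figure, and the mock cloud API below are illustrative assumptions, not details from the conference:

```python
import math

CALLS_PER_SERVER = 200  # assumed capacity of one Asterisk/FreeSWITCH node

def servers_needed(concurrent_calls, headroom=0.2):
    """Nodes required to carry the current call load plus safety headroom."""
    if concurrent_calls <= 0:
        return 1  # keep one node warm for incoming calls
    return math.ceil(concurrent_calls * (1 + headroom) / CALLS_PER_SERVER)

class MockCloud:
    """Stand-in for an EC2-style API that launches/terminates nodes on demand."""
    def __init__(self):
        self.running = 1

    def scale_to(self, target):
        launched = max(0, target - self.running)
        terminated = max(0, self.running - target)
        self.running = target
        return launched, terminated

cloud = MockCloud()
for load in (50, 500, 5000, 300):  # concurrent calls over time
    launched, terminated = cloud.scale_to(servers_needed(load))
    print(f"{load} calls: +{launched}/-{terminated} nodes, now {cloud.running}")
```

The point of the utility-computing model is exactly this loop: capacity follows load minute by minute, instead of being sized once for the worst imaginable peak.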

[From the conference web site..]

Just as OS telephony allows you to hack your own PBX, would you want to do that with the entire network? Are we going to create our own? Will we be liberated or shackled by poor voice quality? How does WiMAX change the rules? Access to disruptive technologies has never been easier and we showcase some of the most powerful projects at the fringes.

Intersection of VoIP Telephony and Web Services

Are all of your phone calls plotted on a Google Maps mash-up? What will happen when every site allows you to call someone or track someone's cellphone? Instant price check from Amazon? Add an event to your web calendar by just calling an artificial voice agent? Or have it remind you by calling you? ETel will be happy to showcase some of the most amazing pocket-money startups alongside some of the best corporately funded innovation projects out there. Interesting things happen as open data and open networks ram down the doors of closed telco networks.

Business Models

What will the new business models look like? Will low cost and accessibility help innovate services that don't depend on billing systems, or will the telcos re-invent themselves? Will the community or legislation make us shy away from taking the risks needed for truly innovative ideas to have a chance? Will there be opportunity in the forthcoming local search wars, or will it be better done by the people themselves? If calls are becoming free, then will the service on the other end be worth paying for? Some of the best business minds, product managers and venture capitalists offer their insight into picking and developing the most sustainable business sectors out there for companies and entrepreneurs.

For more information about the conference, see http://

More universities switch to Google mail and calendar services

Here is another example, following the University of Arizona, of how universities and enterprises are switching to e-mail and other services offered by Google and Amazon (for grid services), etc. In fact, Mark Gaynor predicted this trend several years ago in a seminal paper, "The Real Options Approach to Network-based Service Architecture", where he argued that as innovative applications matured on the Internet they would migrate from edge-based services to centrally managed services. At the time, most of us thought that the telephone company would be the natural supplier of such services - but they have been largely displaced in that market by companies like Google and Amazon. Of course, as more universities and institutions move to Google and Amazon for these types of services, it may have a major impact on research network architectures, i.e. having your own dedicated network pipe to the nearest Google or Amazon server complex. Thanks to Rene Hatem and André Quenneville for this pointer. Some excerpts from the IT Business article -- BSA]

Mark Gaynor's paper:

IT Business article

*Server crash spurs Lakehead to speed up Gmail rollout*

*Staff switch over more than 38,000 e-mail accounts in three days*

The university in Thunder Bay, Ont., knew it needed a new e-mail system and had spent several months looking at alternatives before settling on the online application suite from Mountain View, Calif.-based Google Inc.

Lakehead switched over 38,000 e-mail accounts to Google Gmail in three days, with almost no interruption to service. Google also helped out with some extra customer support during the high-pressure changeover, though Kevin Gogh, enterprise product manager at Google, said there was "nothing that (reached) extraordinary levels."

Besides replacing Lakehead's tottery old e-mail system, the conversion brought some added benefits. Because the university is getting the whole suite for no charge, hosted entirely by Google rather than on university hardware, it expects to save $2 million to $3 million a year on maintenance and about $6 million annually on infrastructure.

And, Jafri said, students, staff and faculty now get 2GB each of storage space, versus 60MB with the old system. In addition, he expects Google to deliver 99 per cent availability. "It's very hard for us to get to that level of availability."

Now fully operational on Gmail, Lakehead still has some work to do on other aspects of the conversion. This month, the university will be converting from an ageing in-house calendaring system to the Web calendaring facility included in Google Apps.

The suite also includes a Web chat capability, and the company will probably add other features, Gogh said. "Our goal with Google Apps for Education is to provide a very rich set of communication tools." The suite has been available free of charge under a beta program since late summer, he said, and universities and colleges that adopt it during that beta period will never have to pay.

As is Google's practice, the company has not said when the beta period will end, but Gogh said the company is working on a "premium version" of Google Apps for Education for which there will be a charge.