Friday, December 8, 2006

Case studies of how dark fiber benefits business

[NetFiber is an interesting company that provides a variety of dark fiber resources for companies and institutions. It offers tools to map the optimum network for a client's applications and locations: it provides a detailed map of the networks closest to a client's location, identifies possible interconnection points and provides a desktop estimate for new construction. Their web site also offers a number of case studies on how enterprises have benefited from customer-controlled and managed dark fiber. Some excerpts from their web site -- BSA]


http://www.nefiber.com/enterprise_case_studies.php

Case Study | Law Firm

The Challenge
Connect an in-house data facility to an off-site datacenter with high-capacity bandwidth.

Background
The law firm, headquartered in Boston, Mass., has six offices nationwide employing over 1,800 people from east coast to west coast.

Business Challenge
The firm needed high-speed data communications capability between its corporate headquarters and off-site datacenter facility, both located in Boston, Mass. The connectivity needed to be reliable, scalable and cost-effective.

Network Solution
NEF sourced a cost-effective dark fiber solution for the firm. With access to numerous dark fiber networks in the Boston and Greater Boston area, NEF was able to secure a lease on existing fiber that allowed the firm to be operational with its dark fiber connection in 30 days. The use of existing fiber virtually eliminated the cost and time required to construct new lateral connections.

The dark fiber option saves money over traditional lit services. With dark fiber's nearly unlimited bandwidth at a fixed cost, the firm's data needs can grow without it having to pay for additional bandwidth from a lit service provider. As capacity needs grow, the termination equipment can be upgraded to meet new demands without additional monthly costs.

Reliability was an important consideration as well. Leasing dark fiber allows the customer to control the electronics used to provide the connection, so there is no need to wait for a service provider to resolve a problem. All maintenance and repairs, with the exception of restoring a physical break in the fiber, can be handled by the end user.

The Final Results

* 10 Gb connectivity at the price of a DS3
* Ability to run multiple systems across a common infrastructure
* Unlimited upgrade capability
* Low cost, high reliability solution

Case Study | Hosting Company

The Challenge
Connect a suburban hosting facility to a metropolitan carrier hotel with high-capacity bandwidth.

Background
A web hosting company manages a datacenter in a suburban community. The provider offers shared hosting solutions, managed dedicated servers, shared ASP services and colocation.

Business Challenge
Web hosting is a very competitive marketplace and many buyers' decisions are based on price. One of the key factors in being a price leader is being able to secure Internet bandwidth at a competitive price. The low-cost bandwidth providers are all located at the major telecommunications carrier hotels found in the metropolitan area. In order for this hosting company to get low-cost bandwidth, it had to purchase a circuit from the Local Exchange Carrier (LEC), which drove its cost of bandwidth above that of its competitors.

Network Solution
The hosting company was buying OC-12 SONET circuits from the LEC for roughly $20,000 per month and purchasing bandwidth at the carrier hotel for $18 per meg. The total cost of bandwidth was $40 per meg. The growth of the company's bandwidth needs required them to purchase an OC-12 every 120 days.

NEF did the research and provided a dark fiber solution for the hosting company. This solution allowed the hosting company to deploy a 10 Gigabit ring connecting the datacenter and the carrier hotel. After a capital investment of just under $100,000 for the construction of fiber laterals and Ethernet equipment, the monthly cost of the dark fiber ring was $12,000. The 10 Gb ring has the capacity of 16 OC-12 circuits for a fraction of the cost of lit services.
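A rough sketch of the arithmetic, in Python, may help. This is not NEF's own calculation: the usable-capacity figures below are my assumptions, so the per-megabit results only approximate the $40 and sub-$20 figures quoted in the case study, but they show why the dark fiber option keeps getting cheaper per megabit as traffic grows while the lit option does not.

```python
# Purely illustrative cost model, not NEF's own calculation. The usable
# capacity assumed for each option is my guess, so the results approximate
# rather than reproduce the $40 and sub-$20 per-meg figures in the study.

def cost_per_meg(monthly_circuit_cost, used_mbps, transit_per_meg):
    """Monthly cost per megabit actually used: the circuit charge spread
    over the traffic carried, plus the per-meg IP transit price at the
    carrier hotel."""
    return monthly_circuit_cost / used_mbps + transit_per_meg

# Lit service: one OC-12 (~600 Mbps usable) from the LEC plus $18/meg transit.
lit = cost_per_meg(20_000, 600, 18)

# Dark fiber ring: $12,000/month flat, here assumed to carry 2,000 Mbps today.
dark = cost_per_meg(12_000, 2_000, 18)

print(f"lit service: ${lit:.0f} per meg")   # roughly $51/meg at these assumptions
print(f"dark fiber : ${dark:.0f} per meg")  # roughly $24/meg, and falling as traffic grows
```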

The Final Results
The hosting provider's total bandwidth costs were now less than $20 per meg, less than half of their original costs, allowing them to be competitive in the hosting market.

"The escalating costs of bandwidth were really hurting our ability to remain cost competitive. The solution that NEF provided fit all of our needs and completely changed our cost structure for the better. When they say better, faster and cheaper, they mean it."

Cyber-infrastructure for Distributed Simulation and Modeling

Here are some excellent papers on the power of SOA, web services and grids in support of research in distributed simulation and modeling, with multiple computational nodes connected over a network - BSA

For all the papers on this topic please see: http://www.sce.carleton.ca/faculty/wainer/UpcomingPubs.htm


ABSTRACT
DEVS is a Modeling and Simulation formalism that has been used to study the dynamics of discrete event systems. Cell-DEVS is a DEVS-based formalism that defines the cell space as a group of DEVS models connected together. This work presents the design and implementation of a distributed simulation engine based on CD++, a modeling and simulation toolkit capable of executing DEVS and Cell-DEVS models. The proposed simulation engine follows the conservative approach for synchronization among the nodes, and takes advantage of web service technologies in order to execute complex models using the resources available in a grid environment. In addition, it allows for integration with other systems using standard web service tools. The performance of the engine depends on the network connectivity among the nodes, which can be commodity Internet connections or dedicated point-to-point links created using User Controlled Light Path (UCLP). UCLP is a web service-based network management tool used by grid applications to allocate bandwidth on demand.

1. INTRODUCTION
Modeling and simulation (M&S) plays an important role in studying complex natural and artificial systems. For some systems, analytical analysis is not feasible due to their inherent complexity; for others, it is too dangerous or impractical to experiment with them. One of the fields of M&S is discrete event simulation, which is concerned with studying systems that exist in a finite set of discrete states over continuous periods of time. Some examples of these systems include customer queues in a bank, computer networks, and manufacturing facilities. Discrete Event System Specification (DEVS) is a modeling and simulation formalism that has been used to study discrete event systems. It depends on modeling the system as hierarchical components, each of which has input and output ports to interact with other components and with the external environment.
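To make the formalism a little more concrete, here is a minimal, informal sketch of a DEVS atomic model in Python. It is not the CD++ API described in the paper, just an illustration of the four characteristic functions (time advance, external transition, output, internal transition) that every atomic model defines.

```python
# A minimal, informal sketch of a DEVS "atomic model": a state plus the four
# characteristic functions of the formalism. This is NOT the CD++ toolkit's
# API, only an illustration of the concepts described above.

class Processor:
    """A trivial server: accepts a job on its input port, holds it for a
    fixed processing time, then emits it on its output port."""

    PROCESSING_TIME = 5.0

    def __init__(self):
        self.state = "idle"          # "idle" or "busy"
        self.current_job = None

    def time_advance(self):
        """ta(s): how long the model stays in the current state if no
        external event arrives (infinity while idle)."""
        return self.PROCESSING_TIME if self.state == "busy" else float("inf")

    def external_transition(self, elapsed, job):
        """delta_ext(s, e, x): react to a job arriving on the input port.
        Jobs arriving while busy are simply dropped to keep the sketch short."""
        if self.state == "idle":
            self.state, self.current_job = "busy", job

    def output(self):
        """lambda(s): produced just before an internal transition fires."""
        return self.current_job

    def internal_transition(self):
        """delta_int(s): fires when the time advance expires."""
        self.state, self.current_job = "idle", None
```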

[...]

Thursday, December 7, 2006

Will Amazon help reduce university research computing costs?

A good article in this month's Nature about Amazon's new Elastic Compute Cloud (EC2) and how it might benefit academic and university computing. Universities and funding agencies are spending an increasing amount of their dollars on expensive computational infrastructure, as various departments from architecture to zoology require computing clusters and high performance computers in order to do their research. It is not only the capital cost of the equipment that is a major problem, but also the cost of electricity, air conditioning and maintenance that is becoming a serious issue. The beauty of the Amazon EC2 service is that it is built around web services and virtualization. This makes it very easy for researchers who have moved their applications to web services to incorporate the compute services offered by Amazon. See my previous post on Eucalyptus. Thanks to Richard Ackerman's and Declan Butler's blogs for this timely information -- BSA


Information on Amazon's web services and EC2 http://www.amazon.com/gp/browse.html?node=3435361
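For readers who want a feel for what "compute as a web service" looks like in practice, here is a hedged sketch using the modern boto3 SDK. boto3 did not exist in 2006 (EC2 was then driven through a SOAP/Query API and command-line tools), and the AMI id below is a placeholder, but the workflow is the same: request a virtual machine with one call, run the job, release the machine when done.

```python
# Modern illustration only: boto3 post-dates the 2006 EC2 service described
# in the article, and the AMI id below is a placeholder, not a real image.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask for a single small virtual machine built from a (placeholder) machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ... run the research job on the instance ...

# Release the capacity when the computation is finished, so billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```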


Excerpts from Richard Ackerman's blog http://scilib.typepad.com/science_library_pad/

Amazon Computing Cloud - for academics?

Declan Butler has an article in Nature about researchers using Amazon's compute services:

The service is still in a test phase, so few scientists have even heard of it yet, let alone tried it. But it is a movement that experts believe could revolutionize how researchers use computers. In future, they will export computing jobs to industry networks rather than trying to run them in-house, says Alberto Pace, head of Internet services at CERN, the European particle-physics laboratory near Geneva. CERN has built the world's largest scientific computing grid, bringing together 10,000 computers in 31 countries to handle the 1.5 gigabytes of data that its new accelerator, the Large Hadron Collider, will churn out every second once it is switched on next year.

"I see no reason why the Amazon service wouldn't take off," Pace says. "For a lab that wants to go fast and cheaply, this is a huge advantage over buying material and hiring IT staff. You spend a few dollars, you have a computer farm and you get results."

[Dutch computer scientist Rudi] Cilibrasi, a researcher at the National Institute for Mathematics and Computer Science in Amsterdam, was using Amazon's service to test an algorithm aimed at predicting how much someone will like a movie based on their current preferences. He says he is a convert: "It's substantially more reliable, cheaper and easier to use [than academic computing networks]. It opens up powerful computing-on-demand to the masses."


From Declan Butler's blog
http://declanbutler.info/blog/?p=93

Excerpt

Virtualization uses a layer of software to allow multiple operating systems to run together. This means that different computers can be recreated on the same machine. So one machine can host say ten ‘virtual’ computers, each with a different operating system.

That’s a big deal. Running multiple virtual computers on a single server uses available resources much more efficiently. But it also means that instead of having to physically install a machine with a particular operating system, a virtual version can be created in seconds. Such virtual computers can be copied just like a file, and will run on any machine irrespective of the hardware it is using.
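As an aside from me rather than from Declan's post: the libvirt Python bindings give a quick script's-eye view of this idea, listing the virtual computers that coexist on a single physical host. The snippet assumes the libvirt-python package and a local QEMU/KVM hypervisor, neither of which is mentioned in the original post.

```python
# Aside, not part of the quoted post: list the virtual machines sharing one
# physical host. Assumes the libvirt-python package and a local QEMU/KVM
# hypervisor are installed and running.

import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():20s} {state}")
conn.close()
```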

Virtualization is going to be one of the next big things in computing, as it brings both large economies of compute resources, and unprecedented flexibility.

Scientists are also testing using virtualization to overcome one of the biggest drawbacks of most current Grids - see here and here for more info on Grids - and computing clusters. They are balkanised, each using different operating systems or versions, which results in poor use of the available computing resources. Virtualizing the Grid allows virtual computers - image files - to be run on top of all available resources irrespective of the underlying operating systems.

Researchers can also develop applications on whatever software and operating system they have on their lab machine. But at present, when they go to run the application at large scale, they often need to completely rewrite it to fit the protocols and systems used by a particular cluster or Grid. Virtualization frees researchers from these constraints.

I asked Ian Foster, co-founder of the Grid computing concept, what he thought of the prospects for Amazon-type services.

“It’s neat stuff. Exactly what it means remains to be seen, but my expectation is that Amazon’s EC2 and S3 will be seen as significant milestones in the commercial realization of Grid computing. I also think that they may turn out to be important technologies for scientific communities, because they start to address the current high costs associated with hosting services.”

In passing, anyone who has tested the Amazon service, do get in touch to tell me about your experience and how you have used it, at d.butler@nature-france.com. [via Declan's blog]

Tuesday, December 5, 2006

Why the Internet only just works

[An excellent paper on the challenges of trying to change the architecture of the Internet. While I agree that the traditional layer 3 Internet is ossifying, the exciting new Internet architectures are occurring above layer 3 (web services and Web 2.0) and below it (optical federated IP networks and wireless mesh). Ultimately HTTP/port 80 may be the new neck of the hourglass between applications and physical infrastructure. I think this reflects a continued trend where Internet innovation will always find a way to route around obstruction or blockage -- BSA]

Why the Internet only just works
M. Handley


The core Internet protocols have not changed significantly in more than a decade, in spite of exponential growth in the number of Internet users and the speed of the fastest links. The requirements placed on the net are also changing, as digital convergence finally occurs. Will the Internet cope gracefully with all this change, or are the cracks already beginning to show? In this paper I examine how the Internet has coped with past challenges resulting in attempts to change the architecture and core protocols of the Internet. Unfortunately, the recent history of failed architectural changes does not bode well. With this history in mind, I explore some of the challenges currently facing the Internet.


Friday, December 1, 2006

Cyber-infrastructure and grids for Architecture Collaborative Design

Cyber-infrastructure and grids are often associated with academic high performance computing, but their real potential and most significant impact will be in many other fields and disciplines unrelated to high performance computing. A quintessential example is the Eucalyptus project, a collaborative cyber-infrastructure project between the Carleton University School of Architecture, the Carleton University Systems Engineering Department, the National Research Council (NRC), the Communications Research Centre (CRC), IBM and a small company called Pleora that makes small hand-held encoders to transmit HDTV and SDI over lightpaths with IP multicast. This multi-disciplinary group has created a powerful set of architectural collaborative design tools using web services and web services workflow (BPEL). From a simple "dashboard" on their respective computers, architectural collaborators across North America can link together rendering machines, spare computational resources, grids (Maya CUBE), collaborative HDTV video sessions, and network elements (using UCLP) to create complex multi-domain network workflows and topologies called APNs (Articulated Private Networks).

APNs are next-generation VPNs that allow users to integrate a seamless mesh of VPNs (layers 0-3), services and applications. They can then manipulate their interconnection and topology to create "network workflows" linking together various facilities and applications across multi-domain networks.
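To give a flavour of what "network resources as web services" means in code, here is a purely illustrative sketch. The WSDL URL and operation name are placeholders of my own invention, not UCLP's actual interface; the point is only that a lightpath reservation becomes one more web service call that a workflow engine or BPEL process can compose with rendering and compute services.

```python
# Illustrative only: the WSDL URL and operation name below are placeholders,
# not UCLP's real interface. The point is the programming model described in
# the post -- network resources exposed as web services that an application
# workflow can call alongside rendering and compute services.

from zeep import Client  # a generic SOAP client library

uclp = Client("https://uclp.example.org/lightpath?wsdl")   # hypothetical WSDL

# Reserve a point-to-point lightpath between two sites for the collaboration
# session, then hand its identifier to the rest of the workflow (rendering,
# HDTV streaming, and so on).
path = uclp.service.reserveLightpath(        # hypothetical operation name
    source="carleton-architecture",
    destination="render-farm-ottawa",
    bandwidth_mbps=1000,
    duration_minutes=120,
)
print("lightpath reserved:", path)
```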

Last week the Eucalyptus team gave a demo at SC06. What was very impressive about their cyber-infrastructure architecture was the ability to simply and quickly link in new web services and workflows over thousands of kilometers. For example, at the demo the team demonstrated how they could link in a fire emergency model from the Systems Engineering department, which allowed the collaborative architectural team to validate fire safety and emergency response scenarios within their design. They also demonstrated how they could quickly manipulate designs, acquire computational resources for rendering, and share the results through high quality HDTV video conferencing and shared workspaces.

The NRC-IIT team did an amazing job in developing the various web services tools and workflows. The CRC team integrated these with UCLP to provide the collaborators with control over the network resources to create their APNs. Overall it was a very impressive collaboration. And it was amazing how easily the architecture students at Carleton could manipulate and adapt to these tools.

For more information please check out the Eucalyptus web site: http://www.cims.carleton.ca/60.html

For the hand-held HDTV video encoders see:
www.pleora.com

For more information on APNs and "network workflows" see: www.uclp.ca

For more information on using web services and workflow to integrate fire & emergency response models please see http://www.sce.carleton.ca/faculty/wainer.html

Thursday, November 23, 2006

End of Walled Gardens - How your customers will co-design your company's future

A new book worth reading is "Outside Innovation: How Your Customers Will Co-Design Your Company's Future" by Patricia B. Seybold (HarperBusiness). There is a good review in this week's Economist.

In her book she argues that companies should focus on customers rather than employees in the innovation process. She covers a number of case studies of new web-based companies that are using open source software, web services and Web 2.0 to give customers more control of business processes. (See my earlier posts on this subject.) She argues that most companies make two common mistakes about their customers: first, they think customers can't innovate and so innovation must be driven internally; and second, they believe they already do a good job of listening to their customers.

A good example of this type of thinking is how many carriers want to build walled gardens in the belief that they can provide the innovative services that their customers need. But this archetypical business model is being undermined by new open source tools like the Linux phone mentioned in David Isenberg's blog below. As with the Internet, the new innovative cell phone features will come from open source solutions and web services, not the traditional walled gardens. Companies that embrace this vision and tear down their walled gardens will win.

These lessons apply not only to commercial organizations, but also to research facilities, governments and other organizations. In almost all of these environments the software, network and organizational processes are fixed and immobile, limiting the ability of users or customers to innovate and create new custom solutions. Grids and cyber-infrastructure are the first steps in this direction of user control and management in the research community. Just as the web revolutionized the distribution of information and made creation and distribution accessible to all, Web 2.0 and web services promise to transform business and research processes in a similar way. Thanks to David Isenberg for this blog -- BSA]


From isen.blog http://isen.com/blog/2006/11/breaching-cellcos-garden-wall.html