Friday, December 8, 2006
Case Study | Law Firm
Connect an in-house data facility to an off-site datacenter with high-capacity bandwidth.
The law firm, headquartered in Boston, Mass., has six offices nationwide employing over 1,800 people from east coast to west coast.
The firm needed high-speed data communications between its corporate headquarters and off-site datacenter facility, both located in Boston, Mass. The connectivity needed to be reliable, scalable and cost-effective.
NEF sourced a cost-effective dark fiber solution for the firm. With access to numerous dark fiber networks in Boston and the Greater Boston area, NEF secured a lease on existing fiber that allowed the firm to be operational with its dark fiber connection in 30 days. The use of existing fiber virtually eliminated the cost and time required to construct new lateral connections.
The dark fiber option saves money over traditional lit services. With dark fiber's nearly unlimited bandwidth at a fixed cost, data needs can grow without paying for additional bandwidth from a lit service provider. As capacity needs grow, the termination equipment can be changed to meet new demands without having to pay additional monthly costs.
Reliability was an important consideration as well. Leasing dark fiber allows the customer to control the electronics used to provide the connection. There is never the need to wait for your service provider to resolve a problem. All maintenance and repairs, with the exception of restoring a physical break in the fiber, can be handled by the end user.
The Final Results
* 10 Gb connectivity at the price of a DS3
* Ability to run multiple systems across a common infrastructure
* Unlimited upgrade capability
* Low cost, high reliability solution
Case Study | Hosting Company
Connect a suburban hosting facility to a metropolitan carrier hotel with high-capacity bandwidth.
A web hosting company manages a datacenter in a suburban community. The provider offers shared hosting solutions, managed dedicated servers, shared ASP services and colocation.
Web hosting is a very competitive marketplace, and many buyers' decisions are based on price. One of the key factors in being a price leader is the ability to secure Internet bandwidth at a competitive price. The low-cost bandwidth providers are all located at the major telecommunications carrier hotels in the metropolitan area. To reach that low-cost bandwidth, the hosting company had to purchase a circuit from the Local Exchange Carrier (LEC), which drove its cost of bandwidth above that of its competitors.
The hosting company was buying OC-12 SONET circuits from the LEC for roughly $20,000 per month and purchasing bandwidth at the carrier hotel for $18 per meg, for a total bandwidth cost of $40 per meg. The growth of the company's bandwidth needs required it to purchase a new OC-12 every 120 days.
NEF did the research and provided a dark fiber solution that allowed the hosting company to deploy a 10 Gigabit ring connecting the datacenter and the carrier hotel. After a capital investment of just under $100,000 for the construction of fiber laterals and Ethernet equipment, the monthly cost of the dark fiber ring was $12,000. The 10 Gb ring has the capacity of 16 OC-12 circuits for a fraction of the cost of lit services.
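The capacity claim above can be sanity-checked with quick back-of-the-envelope arithmetic. The OC-12 line rate of roughly 622 Mbps is a standard SONET figure and not stated in the case study itself; the rest of the numbers come from the text:

```python
# Sanity check of the case study figures (OC-12 rate is an assumption:
# ~622.08 Mbps, the standard SONET OC-12 line rate).

OC12_MBPS = 622.08
RING_MBPS = 10_000            # the 10 Gb Ethernet ring

# "The 10 Gb ring has the capacity of 16 OC-12 circuits":
oc12_equivalents = int(RING_MBPS // OC12_MBPS)
print(oc12_equivalents)       # 16

# The whole dark fiber ring ($12,000/month) costs less than the single
# OC-12 SONET circuit it replaced ($20,000/month):
monthly_saving = 20_000 - 12_000
print(monthly_saving)         # 8000
```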
The Final Results
The hosting provider's total bandwidth costs were now less than $20 per meg, less than half of their original costs, allowing them to be competitive in the hosting market.
"The escalating costs of bandwidth were really hurting our ability to remain cost competitive. The solution that NEF provided fit all of our needs and completely changed our cost structure for the better. When they say better, faster and cheaper, they mean it."
For all the papers on this topic please see: http://www.sce.carleton.ca/faculty/wainer/UpcomingPubs.htm
DEVS is a modeling and simulation formalism that has been used to study the dynamics of discrete event systems. Cell-DEVS is a DEVS-based formalism that defines the cell space as a group of DEVS models connected together. This work presents the design and implementation of a distributed simulation engine based on CD++, a modeling and simulation toolkit capable of executing DEVS and Cell-DEVS models. The proposed simulation engine follows the conservative approach for synchronization among the nodes, and takes advantage of web service technologies in order to execute complex models using the resources available in a grid environment. In addition, it allows for integration with other systems using standard web service tools. The performance of the engine depends on the network connectivity among the nodes, which can be commodity Internet connections or dedicated point-to-point links created using User Controlled Light Path (UCLP). UCLP is a web service-based network management tool used by grid applications to allocate bandwidth on demand.
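The "conservative approach" the abstract refers to can be illustrated with a toy safe-time rule in the style of Chandy-Misra-Bryant synchronization. This is a generic sketch, not code from CD++; the function name and numbers are hypothetical:

```python
# Toy illustration of conservative synchronization: a node may only
# process events up to the earliest time any neighbour might still
# send it a message (neighbour's clock plus its guaranteed lookahead).

def safe_time(neighbour_clocks, lookaheads):
    """Earliest timestamp this node could still receive a message for."""
    return min(c + l for c, l in zip(neighbour_clocks, lookaheads))

# A node with two neighbours at simulation times 10 and 12, each
# guaranteeing a lookahead of 3, may safely process all local events
# with timestamps below min(10 + 3, 12 + 3) = 13.
print(safe_time([10, 12], [3, 3]))  # 13
```

The trade-off, compared with optimistic schemes, is that no rollback machinery is needed, but progress is gated by the smallest lookahead on any link, which is why network connectivity among the nodes matters so much for performance.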
Modeling and simulation (M&S) plays an important role in studying complex natural and artificial systems. For some systems, analytical analysis is not feasible due to their inherent complexity; for others, it is too dangerous or impractical to experiment with the real system. One field of M&S is discrete event simulation, which studies systems that occupy a finite set of discrete states over continuous periods of time. Examples of such systems include customer queues in a bank, computer networks, and manufacturing facilities. Discrete Event System Specification (DEVS) is a modeling and simulation formalism that has been used to study discrete event systems. It models the system as hierarchical components, each of which has input and output ports to interact with other components and with the external environment.
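The DEVS structure described above (input/output ports, a time-advance function, and internal/external transitions) can be sketched as a minimal atomic model. This is a didactic toy with invented names, not CD++ code:

```python
# Minimal sketch of a DEVS atomic model: a "processor" that accepts a
# job, stays busy for a fixed service time, then emits the finished job.

class Processor:
    SERVICE_TIME = 5.0

    def __init__(self):
        self.phase = "idle"   # the discrete state
        self.job = None

    def time_advance(self):   # ta(s): how long to remain in this state
        return self.SERVICE_TIME if self.phase == "busy" else float("inf")

    def external(self, job):  # delta_ext: react to input on a port
        if self.phase == "idle":
            self.phase, self.job = "busy", job

    def output(self):         # lambda(s): emitted just before delta_int
        return self.job

    def internal(self):       # delta_int: internal state transition
        self.phase, self.job = "idle", None

p = Processor()
p.external("job-1")
print(p.time_advance())  # 5.0
print(p.output())        # job-1
p.internal()
```

A coupled DEVS model (or a Cell-DEVS cell space) would then wire many such components together by connecting output ports to input ports, which is the hierarchy the formalism describes.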
Thursday, December 7, 2006
Information on Amazon's web services and EC2 http://www.amazon.com/gp/browse.html?node=3435361
Excerpts from Rickard Ackerman's blog http://scilib.typepad.com/science_library_pad/
Amazon Computing Cloud - for academics?
Declan Butler has an article in Nature about researchers using Amazon's compute services
The service is still in a test phase, so few scientists have even heard of it yet, let alone tried it. But it is a movement that experts believe could revolutionize how researchers use computers. In future, they will export computing jobs to industry networks rather than trying to run them in-house, says Alberto Pace, head of Internet services at CERN, the European particle-physics laboratory near Geneva. CERN has built the world's largest scientific computing grid, bringing together 10,000 computers in 31 countries to handle the 1.5 gigabytes of data that its new accelerator, the Large Hadron Collider, will churn out every second once it is switched on next year.
"I see no reason why the Amazon service wouldn't take off," Pace says. "For a lab that wants to go fast and cheaply, this is a huge advantage over buying material and hiring IT staff. You spend a few dollars, you have a computer farm and you get results."
[Dutch computer scientist Rudi] Cilibrasi, a researcher at the National Institute for Mathematics and Computer Science in Amsterdam, was using Amazon's service to test an algorithm aimed at predicting how much someone will like a movie based on their current preferences. He says he is a convert: "It's substantially more reliable, cheaper and easier to use [than academic computing networks]. It opens up powerful computing-on-demand to the masses."
From Declan Butler's blog
Virtualization uses a layer of software to allow multiple operating systems to run together. This means that different computers can be recreated on the same machine. So one machine can host say ten ‘virtual’ computers, each with a different operating system.
That’s a big deal. Running multiple virtual computers on a single server uses available resources much more efficiently. But it also means that instead of having to physically install a machine with a particular operating system, a virtual version can be created in seconds. Such virtual computers can be copied just like a file, and will run on any machine irrespective of the hardware it is using.
Virtualization is going to be one of the next big things in computing, as it brings both large economies of compute resources, and unprecedented flexibility.
Scientists are also testing using virtualization to overcome one of the biggest drawbacks of most current Grids and computing clusters (see here and here for more info on Grids). They are balkanised, each using different operating systems or versions, which results in poor use of the available computing resources. Virtualizing the Grid allows virtual computers (image files) to be run on top of all available resources irrespective of the underlying operating systems.
Researchers can also develop applications on whatever software and operating system they have on their lab machine. But at present when they go to run the application at a large-scale, they often need to completely rewrite it to fit the protocols and systems used by a particular cluster or Grid. Virtualization frees researchers from these constraints.
I asked Ian Foster, cofounder of the Grid computing concept, what he thought of the prospects for Amazon-type services.
“It’s neat stuff. Exactly what it means remains to be seen, but my expectation is that Amazon’s EC2 and S3 will be seen as significant milestones in the commercial realization of Grid computing. I also think that they may turn out to be important technologies for scientific communities, because they start to address the current high costs associated with hosting services.”
In passing, anyone who has tested the Amazon service, do get in touch to give me your experience, and how you have used it, on firstname.lastname@example.org via Declan's blog
Tuesday, December 5, 2006
Why the Internet only just works
The core Internet protocols have not changed significantly in more than a decade, in spite of exponential growth in the number of Internet users and the speed of the fastest links. The requirements placed on the net are also changing, as digital convergence finally occurs. Will the Internet cope gracefully with all this change, or are the cracks already beginning to show? In this paper I examine how the Internet has coped with past challenges resulting in attempts to change the architecture and core protocols of the Internet. Unfortunately, the recent history of failed architectural changes does not bode well. With this history in mind, I explore some of the challenges currently facing the Internet.
Friday, December 1, 2006
APNs are next-generation VPNs that allow users to integrate a seamless mesh of VPNs (layers 0-3), services and applications. They can then manipulate their interconnection and topology to create "network workflows" linking together various facilities and applications across multi-domain networks.
Last week the Eucalyptus team gave a demo at SC06. What was very impressive about their cyber-infrastructure architecture was the ability to simply and quickly link in new web services and workflows over thousands of kilometers. For example, at the demo the team showed how they could link in a fire emergency model from the Systems Engineering department, which allowed the collaborative architectural team to validate fire safety and emergency response scenarios within their design. They also demonstrated how they could quickly manipulate designs, acquire computational resources for rendering, and share the results through high-quality HDTV video conferencing and shared workspaces.
The NRC-IIT team did an amazing job in developing the various web services tools and workflows. The CRC team integrated these with UCLP to provide the collaborators with control over the network resources to create their APNs. Overall a very impressive collaboration. And the architect students at Carleton were amazing at how easily they could manipulate and adapt to these tools.
For more information please check out the Eucalyptus web site: http://www.cims.carleton.ca/60.html
For the hand-held HDTV video encoders see:
For more information on APNs and "network workflows" see: www.uclp.ca
For more information on using web services and workflow to integrate fire & emergency response models please see http://www.sce.carleton.ca/faculty/wainer.html
Thursday, November 23, 2006
A new book worth reading is "Outside Innovation: How Your Customers Will Co-Design Your Company's Future" by Patricia B. Seybold (HarperBusiness). There is a good review in this week's Economist. In her book she argues that companies should focus on customers rather than employees in the innovation process. She covers case studies of several new web-based companies that are using open source software, web services and Web 2.0 to give customers more control of business processes. (See my earlier posts on this subject.)

She argues that most companies make two common mistakes about their customers: first, they think customers can't innovate, so innovation must be driven internally; and second, they believe they already do a good job of listening to their customers. A good example of this type of thinking is how many carriers want to build walled gardens in the belief that they can provide the innovative services their customers need. But this archetypal business model is being undermined by new open source tools like the Linux phone mentioned in David Isenberg's blog below. As with the Internet, the new innovative cell phone features will come from open source solutions and web services, not the traditional walled gardens. Companies that embrace this vision and tear down their walled gardens will win.

These lessons apply not only to commercial organizations but also to research facilities, governments and other organizations. In almost all of these environments the software, network and organizational processes are fixed and immobile, limiting the ability of users or customers to innovate and create new custom solutions. Grids and cyber-infrastructure are the first steps in this direction of user control and management in the research community. Just as the web revolutionized the distribution of information and made creation and distribution accessible to all, Web 2.0 and web services promise to transform business and research processes in a similar way. Thanks to David Isenberg for this blog -- BSA