When it comes to operational complexity, few avenues of human endeavor rival the modern data center. While the prototypical image of monolithic rows of machines humming along at first conveys the epitome of orderly precision, a closer inspection reveals a separate reality. Indeed, one need look no further than the warren of tangled cabling behind and beneath the rows to grasp the depth of a data center's complexity. Its ultimate cost has long been apparent to insurance technologists, who struggle daily to force this heterogeneous mix of hardware and software to run in harmony.
One of the reasons for the baroque nature of the modern data center is that its primary physical building blocks (servers, networking equipment and storage arrays) are purchased separately and integrated on site. While this arrangement makes sense for many reasons, most notably cost and customization, it is harder to justify from an architectural perspective. It is somewhat akin to purchasing an engine from Ford, a chassis from General Motors and a transmission from Chrysler in order to build a custom car in your driveway.
Moreover, while highly interconnected physically, servers, networks and storage systems are most commonly managed separately. Thus, even as insurers have made great progress breaking down operational silos, technology silos still persist in their data centers. Indeed, considering the staggering advances made in recent years in areas such as processing power, memory density, storage capacity, energy efficiency and virtualization, one may very well ask if these aged architectures have prevented the modern data center from fully exploiting those advances.
Frank Petersmark, CIO advocate at Farmington Hills, Mich.-based architectural consultancy X by 2, says that the penchant for extending existing architectures and bolting on more layers of technology over the years makes tactical sense but ultimately produces strategic drawbacks. "A lot of companies have held on to architectures they've had forever because they're tried and true," he says. "However, maybe we are getting to the point where it's not possible to bolt on much more. So people have to rethink how to craft an architecture that supports legacy, but sets them up for the future."
THE FABRIC
One prominent voice now calling for a holistic reassessment of data center architecture and management is that of George Weiss, research VP at Gartner. Weiss has written extensively about the promise of fabric computing, a radically simplified architecture that would offer seamless integration between loosely coupled storage, networking and servers by disaggregating resources into pools that are then reconciled with exact business needs.
"Fabric-based architecture is a concept resulting from the need to optimize resources and efficiencies in the data center," Weiss says. "By reallocating and rebalancing resources according to workload requirements or activity levels, IT becomes something less bounded and becomes a pool managed by an intelligent resource manager."
By integrating processing, storage and communication gear into a single device, fabric computing addresses the problem of siloed architectures and IT sprawl, and increases IT efficiency, proponents say. A fabric approach affords IT managers a logical view of resources once considered to be fixed, from memory to storage to CPU cycles, which can then be dynamically allocated according to the needs of disparate users across the enterprise. Thus, Weiss sees the architecture solving some of the most endemic problems in data centers today, including a lack of high-level governance, idle workloads and wastage. "If you can deploy just enough capacity and performance in real time, you are eliminating over-costing your systems with poor usage," he says. "So, it's not about IT shaping the organization to itself anymore, but about IT finally functioning as an infrastructure service."
Since any combination of components can be configured in a fabric, they can be tailored to suit particular business needs. For example, an actuary looking to run processor-intense calculations on a desktop application could request a large number of multi-core processors and memory modules from the resource pool. Although the cores and memory could come from a variety of servers, the fabric architecture enables them to appear to the operating system running the application as coming from a single server. Once the calculations are complete, an administrator could then consign the capabilities to other departments in need.
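The actuary scenario above can be sketched in a few lines of code. This is a purely illustrative model of fabric-style pooling, not any vendor's actual API; the class and server names are hypothetical, and a real fabric manager would of course operate at the hypervisor and interconnect level rather than in application code.

```python
# Illustrative sketch of fabric-style resource pooling.
# All names and capacities here are hypothetical, for explanation only.

class ResourcePool:
    """Aggregates CPU cores and memory contributed by many physical servers."""

    def __init__(self):
        self.free_cores = 0
        self.free_mem_gb = 0

    def contribute(self, server_name, cores, mem_gb):
        # Disaggregate a physical server's capacity into the shared pool.
        self.free_cores += cores
        self.free_mem_gb += mem_gb

    def allocate(self, cores, mem_gb):
        # Carve a logical "server" out of the pool for one workload; to the
        # workload's OS it appears as a single machine even if the capacity
        # spans several physical boxes.
        if cores > self.free_cores or mem_gb > self.free_mem_gb:
            raise RuntimeError("insufficient pooled capacity")
        self.free_cores -= cores
        self.free_mem_gb -= mem_gb
        return {"cores": cores, "mem_gb": mem_gb}

    def release(self, grant):
        # When the calculation finishes, return capacity to the pool so an
        # administrator can consign it to other departments.
        self.free_cores += grant["cores"]
        self.free_mem_gb += grant["mem_gb"]


pool = ResourcePool()
pool.contribute("blade-01", cores=16, mem_gb=128)
pool.contribute("blade-02", cores=16, mem_gb=128)

# The actuary's request spans both physical servers transparently.
grant = pool.allocate(cores=24, mem_gb=192)
# ... run the processor-intense actuarial calculation ...
pool.release(grant)
print(pool.free_cores, pool.free_mem_gb)  # full capacity is back in the pool
```

The point of the sketch is the lifecycle: contribute, allocate across physical boundaries, release. The intelligent resource manager Weiss describes automates exactly this loop, continuously, for every workload in the data center.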
If the centralized administration, scalability and efficiency gains promised by a fabric computing architecture remind you of other trends in IT, including virtualization, cloud computing and service-oriented architectures, you'd be correct. As much as virtualization has changed the way servers, networks and storage are managed, it has done so in an ad hoc manner. By contrast, fabric computing offers unified virtualization management, and looks to change the way data centers are managed as a whole. Given this, fabrics are seen as providing the flexible, expandable infrastructure necessary for wider adoption of service-oriented architectures and cloud computing. Weiss sees fabric computing as a way to unite these separate trends for a common good. "There are so many enabling factors now coming to fruition, but you need purposeful integration that makes it into something that is bigger than its individual parts."
THE HARDWARE
While many data centers have long relied on system integrators-either internal or external-to get the most out of their hardware stack, it is only recently that major hardware vendors such as IBM, Dell, HP and Cisco have begun to push their "datacenter-in-a-box" interpretations of fabric computing.
In the case of HP, its take is called "converged infrastructure," which co-locates servers, storage and networking equipment in a single rack under the control of unified management software.
Duncan Campbell, HP's VP of marketing for converged infrastructure, says that by integrating the design and converging the hardware upfront, converged infrastructure will help IT provide more strategic value by freeing staff from sundry operational management tasks and enabling them to be more agile. "Converged infrastructure (CI) looks to break down technology silos and create pools of assets and a shared services infrastructure," he says. "This puts data centers in a better position to address the needs of the business by bringing up new applications and services much more rapidly. Initially, people are looking at CI for cost savings, but the real benefit is agility."
Another benefit is on the energy side, Campbell says. The holistic nature of converged infrastructure means that it includes tools that manage both the power consumption of the information technology components, as well as of the physical facilities that house them.
Campbell says HP's 2010 multi-billion dollar acquisitions of network switching and routing maker 3Com, and storage virtualization provider 3PAR, are an affirmation of the company's commitment to a converged infrastructure strategy. What's more, he notes this commitment is far from theoretical, because converged infrastructure was the driving force behind the company's own recent data center conversion, as it consolidated 85 HP data centers down to six.
In a sense, Cisco's embrace of fabric computing, which it calls the Unified Computing System (UCS), mirrors that of HP. Long a leader in networking equipment, the company added storage and processing functions to its existing line to offer a unified solution. Cisco took a clean-sheet approach to developing UCS, starting with what it takes to run an application and working backward from there, says Omar Sultan, senior solution manager, data center solutions, for Cisco. "Moving into the server space was not taken lightly; there already were some firmly established players," he says. "We moved in because we thought we had the opportunity to do something better."
Among the benefits of a unified approach, Sultan says, are lower upfront capital costs, lower total cost of ownership, a more compact form factor and a greatly reduced amount of cabling. The benefits of this simplified cabling are not just aesthetic, he notes. "A traditional server has as many as eight to nine connections coming out of it," he says. "There's a lot of infrastructure tied up just with I/O."
This focus on simplicity also extends to the software side of the equation, he says. "Many data center managers are getting crushed by complexity," he says. "The most important thing about UCS is that it is designed to be managed holistically. You are not managing three to four things in parallel."
X by 2's Petersmark agrees that the unification of performance management tools is as important as the converging of the hardware itself. Because a data center manager wants to standardize infrastructure as much as possible and manage it holistically, he says, unified management could go a long way toward alleviating the problems that arise when pinpointing sources of trouble or latency. If everybody is viewing the same metrics, much of the blame game between, say, server and network administrators could be eliminated. "When you have a variety of hardware, you are often stuck with proprietary management tools," he says. "If these vendors can build insightful performance management tools into these products, then I think they've really got something."
Despite all the potential advantages, Petersmark counsels carriers to be careful not to hop on the fabric computing bandwagon too quickly, as all the functionalities and capabilities might not be there yet. "CIOs are rightfully wary of any heavily hyped technology," he says. "But, given some time, vendors with these types of resources mostly get it right."
Gartner's Weiss also notes that committing to a fabric computing concept may mean assigning an ever bigger slice of your IT budget to fewer vendors. "The caveat from the users' perspective is that as these vendors gain a bigger share of your data center, how do you prevent yourself from becoming locked in?" he asks, adding that he expects it to take the "better part of a decade" for fabric computing to achieve a high level of maturity.
Sultan acknowledges that the road to fabric computing in data centers will be a long one. "It's not something that will happen in one big bang but an evolution that will happen over the next decade," he says. "Everyone I've talked to buys into unified computing conceptually, but the reality is that we are all constrained by costs."
Accordingly, he expects the technology to gradually work its way into data centers as insurers go through hardware refreshes, but adds that implementing unified solutions may be as much a political challenge as a technical one. Petersmark agrees that getting workers who have long focused on individual parts of the technology infrastructure to think holistically will be an issue.
"Even if you had the ability to rip and replace, I'd still recommend baby steps," Sultan says. "The technology piece isn't the biggest challenge; it's moving your staff and their competencies in lock-step with the technology. It's better to move in small steps and give your staff more time to internalize and get comfortable with it before you move all the pieces."
Weiss concurs that a targeted approach may work best. "The important point is not to boil the ocean and make your data center one big fabric," he says. "You want to look for use cases and proofs of concept where you can lay out fabrics."
Nonetheless, there seems to be a consensus emerging that fabric computing represents an architectural blueprint for the insurance data center of the future.
"The past few years have all been about CIOs trying to simplify their lives," Petersmark says. "No CIO wants a total black box for infrastructure. However, the more simplified it gets, the more you can focus on delivering business value."
An Eye on Energy Costs
While some may dismiss Green IT as a fad, for insurance data center managers concerned about energy usage and its impact on IT budgets, it is no passing fancy.
One interesting development is the possibility of servers running massively parallel arrays of ARM processors, the type of processor commonly found in cell phones. Startups such as Austin, Texas-based Calxeda Inc. say ARM-based servers could cut energy consumption by a factor of 10.
George Weiss, research VP at Gartner, says if challenges surrounding instruction-set compatibility and performance can be surmounted, such hardware may find a niche. "We could be building yet another tier that, if well organized, can distribute applications to the optimum processing element," he says.
Elsewhere, a new initiative by Facebook, the Open Compute Project, seeks to tackle energy usage. The project shares the design and best practices the company employed as it built its new data center in Prineville, Ore., which consumes 38% less energy than the company’s existing data centers.
To achieve this efficiency, the company redesigned servers, motherboards and power supplies with efficiency in mind. Frank Frankovsky, director, hardware design and supply chain for Facebook, says efforts to maximize mechanical performance and thermal and electrical efficiency are vital. "The thermal side is the other key area of efficiency," he explains in a video on the project's website. "We use the data center as a cooling device for the servers. They are larger-diameter fans, so they can move a lot more air with a lot less power."