For the first few decades of the computer era, the well-worn migratory path of a technology was from the business to the home user.
However, in the last few years this has begun to change, as several technologies well established in the consumer market are only now beginning to make inroads in the enterprise. For example, while many corporations struggle to profitably leverage Web 2.0 technologies, your average teenager seems to have no such difficulty.
Another prime example is flash-based solid-state drives (SSDs), which have replaced mechanical hard drives in many consumer devices. SSDs have no moving parts, offer faster access speeds and consume less energy than traditional, spinning hard drives.
While ubiquitous in MP3 players and, increasingly, in laptop computers, the technology is exceedingly rare in data centers. Even proponents of SSDs concede that there is ample reason for this. Only in recent years have vendors begun to cobble together flash memory modules into SSDs of appreciable size. "Data centers don't move lightning fast," says Dean Klein, VP for memory system development at Boise, Idaho-based NAND (flash) memory manufacturer Micron Technology Inc. "Nobody wants to take a chance on an unproven technology."
Moreover, cost is a consideration. The price per gigabyte of flash memory is high, especially when compared to the traditional, spinning hard drives that currently dominate data centers.
"They are small, and they are expensive," says Chris Pollard, director of storage engineering at New York-based Metropolitan Life Insurance Co. (Metlife). "We don't have any flash drives on the floor but we are strongly considering them."
THE VALUE PROPOSITION
Indeed, any insurance company considering moving toward SSDs will have much to consider. Much of this process will entail wading through the hype that surrounds SSDs, and actually figuring out how best to use them. "SSDs have a compelling value proposition, but the data center and storage architectures have been focused on hard drive technologies," warns Joe Unsworth, lead analyst at Stamford, Conn.-based Gartner Inc.
Yet, on purely technical terms, much of the hype seems justified. SSDs are fast, offering extremely low read latency and performing more than 100,000 input/output operations per second (IOPS). These low seek times have big performance implications, as slow I/O has become a bottleneck while processors grow ever more powerful in the multi-core era. While traditional hard drives may keep CPU cores only about 25% busy, some SSDs can push them to 100% utilization, Unsworth says.
"Traditionally, hard drives will give a response in 15 to 20 milliseconds," says Greg Goelz, VP of marketing for Milpitas, Calif.-based Pliant Technology, which makes SSDs geared for the enterprise. "In CPU clock cycles that's an eternity. SSDs give it back in microseconds."
In addition to speed, the other primary value proposition for SSDs is that they consume less energy than spinning drives. As data centers look to trim power and cooling costs, hard drives spinning away at 15,000 RPM are tempting places to start.
What's more, SSDs, at least in theory, will enable data centers to pare back on the over-provisioning of drives. Redundancy is currently the norm in data centers as hard drives are put into Redundant Array of Independent (or Inexpensive) Disks (RAID) setups for reliability purposes, with one drive mirroring another in case of mechanical failure. "SSDs don't require the level of redundancy you would need for rotating media," Klein says.
Additionally, for performance reasons, 15,000-RPM hard drives are often "short-stroked." This technique entails utilizing just the outer edges of a hard drive's spinning platter to achieve maximum performance, at the cost of using only 10% to 15% of the drive's storage capacity. "It's a very inefficient way to get high performance," Unsworth says.
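The inefficiency Unsworth describes is straightforward to quantify. A rough sketch, using a hypothetical 146 GB drive and the 10% to 15% figure above:

```python
import math

# Capacity cost of short-stroking, with illustrative numbers.
def drives_needed(usable_gb, drive_gb, usable_fraction):
    """Drives required when only a fraction of each drive is used."""
    return math.ceil(usable_gb / (drive_gb * usable_fraction))

# Hypothetical example: 1 TB of hot data on 146 GB 15,000-RPM drives.
whole_drive = drives_needed(1000, 146, 1.00)    # full capacity usable
short_stroked = drives_needed(1000, 146, 0.12)  # ~12% usable, per Unsworth

print(whole_drive, short_stroked)  # prints "7 58"
```

Roughly eight times the spindle count, with the attendant power and cooling load, just to keep seek distances short.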
By eliminating the need for short-stroking, and by cutting back on some of the redundancies, data centers could reclaim lost capacity, and save energy on more than a one-to-one basis. "When you re-architect, you start to see real energy savings," Goelz says.
Klein tempers his enthusiasm on this point. "At this point, you'll still need RAID arrays, but going forward there will be architectural improvements that hit the data center to mitigate that," he says. "But, that's still a ways out."
THE DURABILITY ISSUE
For all their benefits, there are still some drawbacks to using SSDs that bear consideration. In addition to the traditional concerns about cost and capacity, the most prominent knock on solid-state media has been durability.
"SSDs are evolving somewhat to have the type of endurance users are expecting," Klein says. "In earlier SSDs, the lifetime wasn't all that great-measured in months as opposed to years."
While they contain no moving parts, there are still ways of destroying solid-state media. Although read durability is good, the constant writing and erasing of data to particular spots on a drive can wear it out. "When you are doing a lot of sustained programming, you are seeing a lot of degradation in the write performance," Unsworth says.
To remedy this, the main suppliers of NAND, including Micron and Korea's Samsung, have introduced enterprise-class NAND with much greater endurance than standard components, Klein notes. Enterprise-class NAND should be good for 1,000,000 erase cycles, as opposed to the 5,000 to 10,000 erase cycles common to consumer flash devices, he adds. "There are some tradeoffs to get some additional endurance," Klein says. "We actually slow down the erase cycle."
Yet, for SSDs, the flash memory is only half the story. The controller that determines how data is written to the drive is of equal importance. "This is where the controller software and flash management come into play," Unsworth says.
To avoid creating hot spots on the media, vendors have developed advanced wear-leveling algorithms to spread wear evenly across the drive. "A lot has been done in this area, and companies that are supplying enterprise-class SSDs have pretty much solved this issue," Klein says.
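The idea behind wear-leveling can be sketched in a few lines. Real controllers track per-block erase counts in firmware and are far more sophisticated, so this is only a toy illustration:

```python
import heapq

# Toy wear-leveling: always write to the least-erased block, so wear
# spreads evenly instead of concentrating on "hot" logical addresses.
class WearLeveler:
    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id) pairs
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        """Pick the least-worn block, bump its count, return it."""
        count, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, block))
        return block

wl = WearLeveler(4)
writes = [wl.write() for _ in range(8)]
counts = {b: writes.count(b) for b in range(4)}
print(counts)  # prints "{0: 2, 1: 2, 2: 2, 3: 2}" - no hot spots
```

Every block absorbs the same number of erases, which is the whole point: no single cell group wears out ahead of the rest of the drive.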
There are other nettlesome software issues that may slow SSD adoption. All major operating systems are optimized for spinning media, and defragmentation utilities will only slow down SSDs. Vendors of popular operating systems, such as Windows Server and SUSE Linux, will need to modify their code to accommodate the faster I/O that solid-state drives afford.
"There's still a lot that can be done that is not done today," Klein says, adding that in a perfect world, operating systems would query the SSD about optimum block size to maximize performance. "Operating systems deal in smaller blocks of data than an SSD. There is still a long way to go with optimization."
Since insurers mostly deal with storage and server vendors, and not component makers directly, the rate of SSD adoption may depend more on the embrace of the technology by storage vendors such as EMC and Sun, and server vendors such as Dell, HP and IBM.
This is the case at MetLife, where the company is contemplating mixing SSDs into the Symmetrix SANs it gets from EMC, Pollard says. Yet of the 960 spinning drives connected via Fibre Channel in the Symmetrix arrays, only five seem likely to be replaced by SSDs.
"All of our arrays are loaded with 15,000-RPM drives, so there are certain situations where we think flash could be a benefit," Pollard says, noting that EMC has only recently begun to support flash in their arrays. "But we are not looking to dive all the way in because it's so expensive."
If the role of SSDs seems to be limited in storage, where else can insurers leverage the technology? Many think SSDs will have the biggest impact on the application side. Pliant's Goelz mentions data mining, business intelligence and actuarial functions as likely places for insurers to investigate the technology. "Models that took several weeks to run can be done faster," he says. "You can accelerate the speed at which you run complex models, and increase the amount of data you can monitor and analyze."
Pollard sees an opportunity to use flash in heavily used servers, such as mail servers that have a lot of cache read misses. Instead of putting the actual data on a flash drive, it could hold a frequently accessed database index in order to speed up search and retrieval time.
"If we pick our spots, we can improve applications' response time," he says. "But, we're going to be very selective. We also have to look at the criticality of those applications to the business. Just because an app has a high level of [cache read] misses doesn't mean we're going to give it a flash drive."
THE WAITING GAME
Indeed, for the near term, SSDs seem likely to be consigned to niche status until prices come down. "Even if you can get that return on investment in three years' time, you still need to come up with the money," Unsworth says. "It's going to be a very difficult case to make to your CFO."
However, as with most computer technologies, prices will recede as volumes rise and mass production brings down cost. Unsworth says he expects to see a very aggressive pricing curve. "It's still a very small market," he says. "To put this in perspective, last year, of all the enterprise-grade servers and SSDs sold, the unit volume was 100,000. You'd have thought there had been millions."
Inexorably, as improved process technology leads to denser drives, capacity is beginning to increase. "For the last few years it's been a terabyte world, but flash was dealing in megabytes," Goelz says. "Now, you're seeing products in the 100 to 300 gigabyte range. That's a substantial amount of data."
These large sizes may well signal a tipping point for widespread adoption in data centers.
"As the drive densities get higher, that's when we will dip our toe in water," Pollard says. "Currently, they're like gold, so you want to be very selective about where you use them. If they were the same price as spinning media, we'd swap everything out and everything would have great performance."
Micron's Klein also sees SSD adoption taking a while. "Will you see data centers adopting them overnight? Absolutely not. We're still in a trial period, but at some point it'll be extremely compelling."
Likewise, Unsworth expects high growth, but doesn't see SSDs gaining serious traction in data centers until 2010 and beyond. "We're recommending patience," he says. "It is expensive right now, but there are clear value propositions where it will pay off. You can still wait another 12 months or so, but start looking at this now."
OPPORTUNITIES FOR EFFICIENCY
With energy costs threatening to consume as much as half of an insurer's IT budget, energy efficiency in data centers is a pressing concern. A general rule of thumb is that every watt saved within a system saves an additional watt used for cooling that system. Accordingly, carriers have become more aware of the efficiency of their hardware and the components within it.
Indeed, one of the primary selling points of solid-state drives is that they consume energy at a much lower rate than their spinning counterparts. Yet, because they will likely comprise only a small percentage of the overall number of drives in the data center, they will do little to cut overall energy consumption, says Chris Pollard, director of storage engineering at MetLife.
Fortunately, a quick glance around the data center and into server racks will reveal other areas where carriers can implement inexpensive, low-tech fixes to save both watts and money.
One decidedly unglamorous component, the power supply, which converts alternating current from an external A.C. source to the direct current used by a computer, is starting to draw notice for its inefficiency. While power supplies that convert at up to 90% efficiency are widely available on the market, many commodity servers populating data centers ship with cheaper units whose efficiency ratings fall in the 60% to 70% range.
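Combined with the cooling rule of thumb mentioned earlier, the gap between a 65%- and a 90%-efficient supply adds up quickly. A rough sketch, assuming a hypothetical 300-watt server load:

```python
# Rough waste estimate for a server power supply; numbers are illustrative.
def wall_power(dc_load_watts, efficiency):
    """AC watts drawn at the wall to deliver a given DC load."""
    return dc_load_watts / efficiency

LOAD = 300.0  # assumed DC load of one commodity server, in watts

cheap = wall_power(LOAD, 0.65)  # ~461.5 W at the wall
good = wall_power(LOAD, 0.90)   # ~333.3 W at the wall
saved = cheap - good            # ~128.2 W saved per server

# Rule of thumb: each watt saved in the system saves a watt of cooling.
total_saved = 2 * saved
print(f"{saved:.0f} W at the supply, ~{total_saved:.0f} W with cooling")
```

Multiplied across thousands of servers, a seemingly mundane component swap becomes a significant line item.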
These inefficiencies have not escaped the notice of Mountain View, Calif.-based Google Inc., which likely deploys more servers than any other company. In a whitepaper authored by Urs Hoelzle, Google's VP of operations, and Bill Weihl, the company's green energy czar, a case was made for new standards for power supplies. Hoelzle and Weihl argue much of the inefficiency in modern power supplies is a relic of their origins in the infancy of the computer era. "For historical reasons dating back to the original IBM PC in 1981, standard PC power supplies provide multiple output voltages, most of which are no longer used directly in today's PCs," they write. "Back in 1981, the chips actually did need all these voltages. But those times are long gone."
(c) 2009 Insurance Networking News and SourceMedia, Inc. All Rights Reserved.