Don't Write Off the Mainframe as a “Legacy” System
Celent has just issued a report observing that a majority of insurance companies still tether their information requirements to “legacy” systems, but are gradually moving toward “modern” implementations.
As the report correctly observes, “carriers realize that this abundance of legacy code is inflexible and costly to maintain, and that it has large implications for their business.” The report, which queried 30 insurance system professionals, charts a good deal of progress. For example, while the split between legacy and modern systems was 73% to 27% five years ago, the current balance is 52% legacy and 48% modern. Celent estimates that the split will be 61% to 39% in favor of modern systems five years from now.
The question is: what is a “modern” system versus a “legacy” system? And why wouldn't a modern system put in place today be legacy five years from now?
Presumably, a “modern” system is one that runs open, portable software on commodity hardware. An application based on Windows or Linux operating systems running on Intel or AMD processors comes to mind.
Conversely, many think of “legacy” systems as proprietary, closed-system software that runs on one type of hardware. An application written in a language such as COBOL and running on a mainframe is the first thing that comes to mind.
Hmmm ... is this the right definition? The iPad and iMac are actually closed systems that run on one single type of proprietary hardware. Windows essentially only supports one type of hardware architecture. But I haven't heard anyone refer to them as “legacy” systems.
Then there's the mainframe. IBM's System z runs Linux and Unix and even Windows, via virtual partitions or through specialty processors attached to the system. But no one seems to call the mainframe an “open” system.
Perhaps the definition of “legacy” pertains to the age of applications. So are all those Linux and Windows servers deployed back in 2005 legacy? Or is “legacy” any application or system for which there are no longer enough skilled professionals to provide support and maintenance?
My colleague, and INN Editor-in-Chief, Pat Speer explored these questions in an article a few months back, and perhaps Matthew Josefowicz at Novarica explains it best: “The issue is with legacy applications that are poorly documented, incompletely understood by the people responsible for maintaining them, and unable to provide new functionality.”
The key to untangling this semantic pretzel is that the needs of the business come first; executives should not rush to new systems simply because they are the latest-and-greatest shiny new architecture. For example, I just saw an interesting case study in which Blue Cross Blue Shield Minnesota migrated Windows and Unix applications off of 140 different servers onto Linux on a single mainframe, saving power and streamlining maintenance and support costs.
With new architectural approaches, such as service-oriented architecture, virtualization and cloud, it often no longer matters what systems are running in the background.
As I've said before, in many cases, legacy shouldn't be such a bad word.
Joe McKendrick is an author, consultant, blogger and frequent INN contributor specializing in information technology.
Readers are encouraged to respond to Joe using the “Add Your Comments” box below. He can also be reached at email@example.com.
This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.
The opinions of bloggers on www.insurancenetworking.com do not necessarily reflect those of Insurance Networking News.