Why Are There Still So Many Legacy Systems?

Alex Benik, a principal at Battery Ventures, recently took a deep dive into the realm of so-called “legacy” systems that still power many of the world's organizations, including a large number of insurance companies. While legacy is a relative term—today's hot new product is tomorrow's legacy—Benik uses it to mean any server that is not an Intel- or AMD-based commodity box: large systems and servers such as mainframes, high-end Unix machines, or classic DEC VAX midrange-class machines.

Why do companies still buy and maintain these expensive behemoths? Because they want the prestige of having a Cadillac computer on premises? Actually, the answer is much more mundane than that, Benik argues: simple inertia. Adopting new technologies is risky business, and it's more politically expedient to keep running on the technology already in place. “Inertia in the enterprise is high, otherwise these technologies would be long gone. Further, in an environment with IT budgets that are either flat or, at best, moderately increasing, it’s a zero-sum game. Budgeting for your new widget is taking food off someone else’s table. There are no overnight successes in the enterprise.”

This is no surprise when considering large organizations, Benik says. “Someone built an application that works fine and is sitting in the corner. The guy who wrote the application left the company seven years ago, the documentation sucks, and people are afraid to touch it. So the care and feeding of these applications drives this revenue.”

Indeed, larger enterprises tend to be slow-moving ships. And, believe it or not, “most enterprises don’t see technology as providing them with competitive advantage,” he adds. “This is a debatable point in the long run, but in the short term they have a business problem they are trying to solve and want to do so for the least amount of money possible. They are short on IT staff and more than willing to let the big vendors tell them what they need. Your OpenStack-Bigdata-NoSQL-Cloud-Openflow-distributed-real-time-analytics-quantum-flux capacitor may be cool and buzzword-compliant, but may fall on deaf ears in the mass markets.”

Think about it: while it certainly is commendable to be knowledgeable about the latest developments in technology, how many IT leaders are willing to bet their careers on new and shiny products with unproven track records? At least mainframes and Unix servers have proven their mettle over the decades.

Joe McKendrick is an author, consultant, blogger and frequent INN contributor specializing in information technology.

Readers are encouraged to respond to Joe at joe@mckendrickresearch.com.

This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.

