IT Disaster Scenarios: Time to Revive the Spirit of Y2K?

In a recent post, fellow INN contributor Ara Trembly talked about the specter of “cyberwarfare,” in which foreign powers, terrorists or criminals bring down our IT infrastructure. Ara recommends, in the true spirit of the insurance industry, planning and being prepared for such worst-case scenarios.

The prospect of a mega-glitch bringing down companies—and let's face it, everything relies on IT—was the subject of a recent Webcast I co-presented with Jeff Papows, CEO of WebLayers, and formerly president of Lotus. Jeff is also preparing a book on the subject.

Again, in the spirit of the insurance industry, we need to apply the same risk management principles to information technology as we do to policyholders. What is likely to impact our businesses?  What are the worst-case scenarios?

Jeff pointed to egregious examples of IT run amok, such as recent cases of patients receiving massive overdoses of X-rays as a result of glitches in radiology equipment software, or bank customers being shut out of their accounts for weeks because of a failed integration project. A deliberate attack on our systems, he adds, could result in a “digital Pearl Harbor.”

Where should we start with IT risk management? Look no further than 10 years ago or so, when we faced our last digital Pearl Harbor with the Year 2000 crisis. As I observed in the Webcast, perhaps what we need now is a Y2K-style focus on the issues. Anyone working in enterprises back in the 1990s may remember how, in many cases for the first time, IT sat down with the business to map out exactly what kinds of applications it was relying on, and what the potential impacts would be on the business if any one of those applications went down.

Risk management exercises—reviewing likely scenarios of failure and preparing for them—should be a part of every IT project. That includes everything from Web site glitches that freeze out customers to massive outages of your data centers and service providers. As Jeff put it, “As of January of 2010, there were 6 billion networked things talking to other things—whether that’s local area networks, wide area networks, hot spots, Bluetooth or whatever. It’s an immense amount of everything from handheld computing to other Internet-savvy devices interconnecting in an unprecedented volume. There are literally a billion transistors in place for every carbon-based biped Homo sapiens life form on the planet. The strain on our infrastructure is more extreme than it’s ever been.”

Jeff encourages companies to foster environments of innovation and automation. One way to do this is through centers of excellence that address better securing the interrelated processes that rely on software. “There is no silver bullet,” he says. “What is essential is a corporate culture and management structure that encourages managers and practitioners to develop and follow best practices.”

Joe McKendrick is an author, consultant, blogger and frequent INN contributor specializing in information technology.

Readers are encouraged to respond to Joe using the “Add Your Comments” box below. He can also be reached at joe@mckendrickresearch.com.

This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.

The opinions of bloggers on www.insurancenetworking.com do not necessarily reflect those of Insurance Networking News.
