Systems Running Slowly? Take Out the Trash

Every once in a while, one runs across an idea that makes perfect sense yet hasn’t been put into practice. I’d like to think that applies to a recent posting on ScienceDaily, which asserts: “A digital dumping ground lies inside most computers, a wasteland where old, rarely used and unneeded files pile up. Such data can deplete precious storage space, bog down the system’s efficiency and sap its energy.”

The remarks arise from a recent paper by two Johns Hopkins University computer scientists who, the posting says, see a need for a new era of computer cleansing. Instead of allowing old and/or useless files to accumulate, the authors recommend a “green” solution summarized in five terms: “reduce, reuse, recycle, recover and dispose.”

What kinds of computer data might qualify as “waste”? In the post, they settled on four categories: unintentional waste data, created as a side effect or by-product of a process, with no purpose; used data, which has served its purpose and is no longer useful; degraded data, which has deteriorated to a point where it is no longer useful; and unwanted data, which was never useful in the first place. The researchers say they found no shortage of files and computer code that fit into these categories.
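For readers who think in code, the paper’s taxonomy maps naturally onto a simple classification. Here is a minimal sketch in Python; the category descriptions paraphrase the posting, while the concrete examples in the comments are my own assumptions, not the researchers’.

from enum import Enum

class WasteData(Enum):
    """The four categories of waste data described in the paper."""
    UNINTENTIONAL = "by-product of a process, with no purpose"  # e.g., temp and cache files (my example)
    USED = "served its purpose, no longer useful"               # e.g., leftover install archives (my example)
    DEGRADED = "deteriorated beyond usefulness"                 # e.g., stale indexes (my example)
    UNWANTED = "never useful in the first place"                # e.g., bundled files nobody asked for (my example)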

With so much of this data on hand, things can certainly get crowded for the data we really do need and use in insurance enterprises. This pile of unwanted information is also growing by leaps and bounds in every kind of system, especially when we consider the legacy applications that linger at many insurance companies. And with new data often being created faster than we can find ways to store it, the question of storage is indeed critical.

While the authors suggest a number of very sensible ways to ameliorate this problem (encouraging developers to create programs that leave fewer unwanted files behind; breaking software code into smaller, reusable segments; mining useless data for potentially usable pieces; or finding a place to store such data until it can be examined), for insurers this may be a slippery slope. It’s easy to say that we should dispose of legacy applications, but the truth is that we’re really not sure what would happen if we did.

This is because the number of people who understand the workings (and the value) of such applications is dwindling as older programmers retire or move on. In many cases we simply don’t know whether this legacy data is useful now or will be in the future, so we hold on to it just in case. We’d rather have a closet filled to the bursting point than throw away something we later realize was valuable.

Thus, the last of the authors’ options—finding a place to store the data until we can truly determine its value—seems the most sensible. In fact, as long as we can determine that these files are not terribly sensitive, we might consider the cloud as a potential resting place for the things we want to stuff in the closet “just in case.” This presents an interesting opportunity for insurers to dip their toes into the cloud while minimizing risk to their enterprises and their customer base.
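To make the closet idea concrete, here is a minimal sketch of what such a workflow might look like: flag files that have gone untouched for a long stretch, then park them in a cheap archival cloud tier until someone can judge their worth. Everything here is my own illustration; the directory, the one-year threshold, the bucket name and the choice of AWS S3 (via the boto3 library) are assumptions, since neither the paper nor the posting prescribes particular tools.

import time
from pathlib import Path

import boto3  # assumes AWS S3 as the cloud "closet"

STALE_DAYS = 365                        # assumed: untouched for a year
ROOT = Path("/data/legacy")             # hypothetical directory to sweep
BUCKET = "example-insurer-cold-closet"  # hypothetical bucket name

s3 = boto3.client("s3")

def stale_files(root: Path, stale_days: int = STALE_DAYS):
    """Yield files not modified within the last stale_days days."""
    cutoff = time.time() - stale_days * 86400
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

def park_in_cloud(path: Path, bucket: str = BUCKET) -> None:
    """Upload one file to an archival storage class, keyed by its path."""
    s3.upload_file(
        str(path),
        bucket,
        f"just-in-case/{path.relative_to(ROOT)}",
        ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},  # coldest S3 tier
    )

if __name__ == "__main__":
    for path in stale_files(ROOT):
        # A real insurer would screen for sensitive content here
        # before anything left the premises.
        park_in_cloud(path)
        print(f"Parked {path}")

Note that retrieval from a deep-archive tier is slow and carries its own fees, which is acceptable for a closet we expect to open rarely, if ever.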

It’s certainly food for thought.

Ara C. Trembly (www.aratremblytechnology.com) is the founder of Ara Trembly, The Tech Consultant, and a longtime observer of technology in insurance and financial services.

Readers are encouraged to respond to Ara using the “Add Your Comments” box below. He can also be reached at ara@aratremblytechnology.com.

This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.

The opinions of bloggers on www.insurancenetworking.com do not necessarily reflect those of Insurance Networking News.
