I remember about a decade ago, Richard Winter, a researcher who studies very large databases, shared some of his latest data with me; his company was tracking some of the huge databases run by the world's largest telecom carriers and government agencies.

These mega-databases, requiring large arrays of servers and disks probably housed in an air-conditioned facility somewhere, were more than a terabyte in size.

Of course, these days some people carry around a terabyte in their laptops, and the mega-data sites keep growing. IBM, for example, just announced that it has developed a 120-petabyte storage array – the equivalent of 120,000 of those terabyte-class databases from a decade ago packed into a single system.

The question, then, is whether “Big Data,” as it's now being called, is really something new, or just part of a continuing progression. Did we have the same “Big Data” issues in 2000, when we cracked the one-terabyte mark, or even in 1990, when the monster data sites scaled to 20 gigabytes?

Today's Big Data does bring some unique challenges and opportunities to the table. For one, much of the surge is unstructured content typically not seen in relational databases: Web clickstream data, graphics, video and images that may be difficult to store, index and archive. Of course, many carriers have workflow technology that includes imaging of documents. But for many companies with well-established relational database environments, this surge may be difficult to handle.

Another challenge is finding enough people with the skills to manage and analyze these Big Data volumes. A recent McKinsey & Company report estimates a shortfall of 140,000 to 190,000 people with deep analytical skills, along with 1.5 million managers and analysts capable of putting Big Data to work. On the bright side, you could say Big Data will be an employment generator.

More than ever before, Big Data also means opportunities for companies, as the McKinsey report points out. For example, large stores of data can augment or support human decision-making with automated algorithms: “Sophisticated analytics can substantially improve decision-making, minimize risks and unearth valuable insights that would otherwise remain hidden. Such analytics have applications for organizations [such as] retailers that can use algorithms to optimize decision processes such as the automatic fine-tuning of inventories and pricing in response to real-time in-store and online sales.”
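The automated fine-tuning McKinsey describes can be as simple as a feedback rule that watches sales against inventory. Here is a minimal, hypothetical Python sketch – the function name, thresholds and step size are my own illustration, not anything from the report:

```python
# Illustrative toy pricing rule (an assumption, not McKinsey's method):
# nudge a product's price up when it sells through faster than a target
# rate, and down when inventory is moving too slowly.
def adjust_price(price, units_sold, units_stocked,
                 target_rate=0.7, step=0.05):
    """Return a new price based on the observed sell-through rate."""
    rate = units_sold / units_stocked
    if rate > target_rate:
        return round(price * (1 + step), 2)  # demand is hot: small markup
    if rate < target_rate:
        return round(price * (1 - step), 2)  # slow seller: small discount
    return price

# A fast-selling item gets marked up; a slow seller gets discounted.
print(adjust_price(100.0, 90, 100))  # 105.0
print(adjust_price(100.0, 30, 100))  # 95.0
```

A real system would feed rules like this with streaming in-store and online sales data, which is exactly the unstructured, high-volume input that makes Big Data hard to handle in the first place.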

McKinsey also points to the ability to use data to design new product offerings, citing, for example, the insurance industry's adoption of geo-tracking technology for auto insurance policies, and of geographic/geospatial information systems (GIS) for other insurance requirements, such as observing and collecting data on property damage.

There will always be Big Data, no matter how we define or measure it. But lately, we've reached a point where it poses some interesting challenges, as well as interesting opportunities.

Joe McKendrick is an author, consultant, blogger and frequent INN contributor specializing in information technology.

Joe can be reached at joe@mckendrickresearch.com.

This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.

The opinions of bloggers on www.insurancenetworking.com do not necessarily reflect those of Insurance Networking News.
