In my last post, I suggested that while we are in the era of big data, our core understanding of information management is still shaped by the need to create business value – in contemporary terms, to mine information from big data.
The data/information/knowledge/wisdom (DIKW) pyramid still defines the business reality that underpins our big data world. Nowhere is this truer than in customer data and customer engagement systems.
Customer engagement systems have changed dramatically in the past decade. Not so many years ago, a typical customer engagement system had only a handful of data sources and data types to manage. Data warehousing methodologies defined by relational database management systems (RDBMSs), star schema and batch ETL processes were sufficient. The overnight batch was good enough, and outsourced customer data integration was the norm.
Contrast that bygone era with our current data environment that is defined by richer and more varied data sources. Data generated in varieties, velocities and volumes are far outstripping anything we saw only a few years ago, and customer journeys have spread to numerous touchpoints on complex timelines.
Add to this the increasing necessity that successful customer engagement solutions be built from low-latency data – a mix of real-time and near real-time data, depending on source system availability, content transport times, and the requirements of various business decision loops – drawn from both online and legacy systems.
This new environment has given rise to innovative technologies for data storage and processing. New technologies capable of managing the speed, complexity and variability of the contemporary data environment and the technical requirements of real-time customer engagement are generally available.
Onsite relational databases, once the undisputed masters of customer data, have yielded ground to technologies such as cloud-based warehousing, Hadoop, cache-based data storage and the various NoSQL databases. These technologies give us an ecosystem to handle unstructured and semi-structured data alongside traditional structured forms. They can ingest batch and streaming data and be scaled to manage practically any volume of data.
In our new world, the overnight batch has been overtaken by streaming data, message buses and queues, and real-time decisioning. The monolithic enterprise data warehouse has been replaced with the agile data lake. Massive MPP databases have been traded for Hadoop and just-in-time data processing. In-house customer data platforms – whether behind the firewall or on enterprise-managed cloud platforms – are becoming the norm.
This new ecosystem provides a technical foundation for a data-driven enterprise. It is, however, only part of the customer engagement puzzle. Solving the computational and storage side of customer data is not the same thing as effectively monetizing these data.
Monetize Data to Make It Valuable
The enterprise data warehouse (EDW) is not an effective use of enterprise resources. Not that it cannot be useful, but in general:
- The cost of curation – identifying, collecting, cleansing and validating data in a traditional warehouse – is too high.
- It requires too much staff time.
- It requires expensive hardware and software.
- Most of all, an EDW typically remains unleveraged and fails to deliver on its value proposition. In addition, many studies show that EDW projects are notoriously late, over budget, and well short of their initial scope.
Data must earn their keep – they must be monetized to justify the expense, complexity and risk inherent in persisting large data stores. But, data do not monetize themselves. An unutilized data store is pure cost.
Although secondary value propositions exist, the primary value proposition for keeping data is informing business decisions – with customer engagement decisions, which drive both top-line sales and bottom-line profitability, consistently delivering the best return on investment. A customer engagement decision involves choosing the channel through which the brand interacts with a consumer based on behavioral and preference data.
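To make the idea concrete, a channel-selection decision of this kind can be sketched as a simple rule over preference and behavioral signals. This is a minimal, hypothetical illustration – the field names, thresholds and channels are assumptions, not part of any real engagement system:

```python
# Hypothetical sketch of a customer engagement channel decision.
# All fields, thresholds and channel names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerProfile:
    preferred_channel: Optional[str]  # stated preference, if any
    email_open_rate: float            # behavior: fraction of emails opened
    app_sessions_last_30d: int        # behavior: recent mobile app usage

def choose_channel(profile: CustomerProfile) -> str:
    """Pick an engagement channel, letting a stated preference win over behavior."""
    if profile.preferred_channel:
        return profile.preferred_channel
    if profile.app_sessions_last_30d >= 5:
        return "push_notification"    # active app user
    if profile.email_open_rate >= 0.2:
        return "email"                # responsive to email
    return "direct_mail"              # fallback for low-engagement customers
```

In practice such decisions are driven by models rather than hand-written thresholds, but the shape is the same: reliable, integrated customer data in, a channel choice out.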
Effectively monetizing data first requires accessibility, and accessibility requires that data be both fit-for-purpose and stored in a broadly usable format. Obviously, database planning, design and hosting all fall into the accessibility question. So, however, do customer data platforms (CDPs). At a minimum, a CDP is a data intake/processing/output solution that must be able to accommodate first-, second- and third-party data, process web-log information for content and trackable entities, integrate DMP data (both as data source and target), and more – a detailed discussion of CDPs is a topic for another day.
Contemporary CDPs are designed to cleanse, curate and integrate multiple data types and sources into a centrally accessible data portal. They must be able to ingest data from the full range of contemporary and legacy data sources, support traditional and web-based transport methods, and support batch, event and streaming data with equal grace. CDPs must also support traditional customer data integration (i.e., matching and deduplication) as well as present-day problems, such as device/anonymous user linkage and the anonymous-to-known customer journey.
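The matching and deduplication step mentioned above can be illustrated with a deliberately simplified sketch: records that share a normalized email address are grouped under one customer. Real CDPs use far richer matching (fuzzy name comparison, postal addresses, device IDs, probabilistic linkage); the function names here are my own:

```python
# Simplified, hypothetical sketch of rule-based customer deduplication.
# Production matching is far more sophisticated; this shows only the shape.
from collections import defaultdict

def normalize_email(email: str) -> str:
    """Lowercase and strip whitespace so trivially different emails match."""
    return email.strip().lower()

def deduplicate(records: list) -> dict:
    """Group raw records by normalized email; each group is one customer."""
    groups = defaultdict(list)
    for record in records:
        groups[normalize_email(record["email"])].append(record)
    return dict(groups)

raw = [
    {"email": "Jane@Example.com", "source": "web"},
    {"email": "jane@example.com ", "source": "store"},
    {"email": "bob@example.com", "source": "web"},
]
customers = deduplicate(raw)  # two customers: jane and bob
```

Device/anonymous-user linkage works the same way in spirit, but the join keys are cookies, device IDs and login events rather than emails.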
Customer data platforms underpin data monetization by making disparate, heterogeneous and unreliable data broadly usable. This, in turn, allows reliable and standardized data to inform consistent, data-driven customer engagement. This is where data become information.