Veracity – The Challenge Bankers Face with Big Data

Published 22nd Jan 2015

In a recent survey from Aite Group, three quarters of North American bankers said they were dissatisfied with analytics technology, and 76 percent were particularly dissatisfied with Big Data.

That’s disappointing but not entirely shocking. Big companies are spending a lot of money on Big Data, but it’s a complicated puzzle of technologies and data. It is still relatively early in the game for many Big Data projects, so enterprises should not get too frustrated as they work through what works, how to find answers, and how to create value. In an earlier post, we noted that according to Gartner, “…85% of Fortune 500 organizations will be unable to exploit big data for competitive advantage.” Bankers, it would seem, are more vocal in echoing this sentiment while channeling at least some of their angst directly at Big Data.

We sense that some of this frustration stems from concerns about the quality of the data being sourced. According to at least one study, poor data quality costs the U.S. economy $3.1 trillion per year. It has also been reported that one in three business leaders does not trust the information they use to make decisions, while 27 percent of survey respondents were unsure how much of their data was accurate. Big Data may be new, but that doesn’t mean it is immune to technology’s Achilles’ heel: garbage in, garbage out.

It’s a challenge. It isn’t unusual for databases to be populated with data pulled from numerous sources, such as Facebook and Twitter, where the data lives in disparate sets. That makes it harder to homogenize the data, add context to it, and extract valuable insights that deliver competitive advantage, as sketched below.
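As a purely illustrative sketch (the source formats and field names below are invented, not any particular API), homogenizing often amounts to mapping each source’s records into one common schema before analysis:

```python
# Hypothetical sketch: normalize records from two disparate sources into a
# common schema so downstream analytics can treat them uniformly.
# The raw formats and field names here are invented for illustration.
from datetime import datetime, timezone

def from_source_a(raw: dict) -> dict:
    """Map a record shaped like source A's export into the common schema."""
    return {
        "user": raw["screen_name"],
        "text": raw["status"],
        "posted_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        "source": "A",
    }

def from_source_b(raw: dict) -> dict:
    """Map a record shaped like source B's export into the common schema."""
    return {
        "user": raw["author"]["name"],
        "text": raw["message"],
        "posted_at": datetime.fromisoformat(raw["created"]),
        "source": "B",
    }

if __name__ == "__main__":
    a = {"screen_name": "jdoe", "status": "rates up", "ts": 1421884800}
    b = {"author": {"name": "asmith"}, "message": "new branch",
         "created": "2015-01-22T09:30:00+00:00"}
    combined = [from_source_a(a), from_source_b(b)]
    for rec in combined:
        print(rec["source"], rec["user"], rec["posted_at"].isoformat())
```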

Although it is only one of many challenges, we believe data quality is all too often an overlooked concern, especially early in the Big Data analytics and insights game. Experts often refer to the four V’s of data: Volume, Velocity, Variety, and Veracity. The fourth V may be the most difficult to nail down. While data scientists employ their tools to sort the good from the bad, distilling the most relevant and accurate data can get complicated. Further, while many companies are focusing on real-time data capture, many don’t have a good benchmark to measure new data against.
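To make the benchmarking point concrete, here is a minimal, hypothetical sketch of a veracity check: incoming records are screened against a trusted reference before they feed analytics. The field names, thresholds, and benchmark values are assumptions for illustration only, not a description of any particular platform.

```python
# A minimal "veracity gate": screen new records against a trusted benchmark
# before they reach the analytics layer. All values here are placeholders.
from statistics import mean

# Benchmark built from a trusted historical dataset (illustrative numbers).
BENCHMARK = {"household_income": {"mean": 68_000.0, "stdev": 32_000.0}}

def is_plausible(record: dict, field: str, z_limit: float = 4.0) -> bool:
    """Reject records that are missing the field or sit far outside the benchmark."""
    value = record.get(field)
    if value is None or value < 0:
        return False  # basic validity: present and non-negative
    stats = BENCHMARK[field]
    z = abs(value - stats["mean"]) / stats["stdev"]
    return z <= z_limit  # extreme outliers go to manual review instead of ingestion

def screen_batch(records: list, field: str = "household_income"):
    """Split an incoming batch into accepted records and records needing review."""
    accepted = [r for r in records if is_plausible(r, field)]
    review = [r for r in records if not is_plausible(r, field)]
    return accepted, review

if __name__ == "__main__":
    batch = [
        {"household_income": 54_000},
        {"household_income": 9_900_000},  # implausible outlier
        {"household_income": None},       # missing value
    ]
    ok, flagged = screen_batch(batch)
    print(f"accepted={len(ok)}, flagged for review={len(flagged)}")
    print(f"batch mean (accepted only): {mean(r['household_income'] for r in ok):,.0f}")
```

The point of the sketch is simply that a benchmark gives new data something to be measured against; without one, bad records flow straight into the analysis.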

We’ve set out to fix the problem in the financial services world. Historically, vast silos of publicly accessible data were available, but they never covered the full economy and were skewed by error-prone sampling. No one could bring all of these silos under one umbrella to accurately generate insights and drive decision making in lending, insurance, risk assessment, and other areas. Instead, companies and governments would rely on surveys, third-party data and, more recently, “real-time data.” Real-time data can be useful, but it can also fall into the trap of misinformation and out-of-context data.

For the first time, though, Powerlytics has given financial services companies the ability to utilize all of this data. Marketers, analysts and others whose success depends on accurate financial profiles of industries, counties and even zip codes are rapidly replacing their imprecise survey- and sample-based datasets with Powerlytics’ Market Intelligence Platform.

This may seem like a small step, and today it addresses only the veracity of annual financial results for businesses and consumers across the U.S., but it is a huge leap in the battle to improve the value of Big Data. It establishes a foundation that alleviates at least one of the common challenges of Big Data, Veracity, and provides a base on which to build.