In a world where data drives decisions, you’d think businesses would be equipped with the tools they need to make the most of it. But as revealed by our recent Data Challenge survey, commercial teams are facing significant roadblocks, from fragmented systems to incomplete and outdated data, that hinder their ability to make smart, data-driven decisions.
Recent research shows that poor data quality costs businesses an average of $12.9 million annually. Unity Software, for example, reported a $110 million loss in revenue and a $4.2 billion drop in market cap in 2022, attributing the damage to ingesting faulty data from a major customer. Just a few months ago, a data incident severely impacted air traffic in the U.K. and Ireland, resulting in over 2,000 flight cancellations and leaving hundreds of thousands of travelers stranded, with airline losses estimated at $126.5 million.
Yet businesses continue to grapple with data management challenges. These challenges are pervasive across functions, but especially acute in roles dealing with clients or commercial operations, such as RevOps, marketing operations, or sales enablement, given the variety of tools these teams juggle daily and the multiple metrics they must reconcile to drive the business forward.
Here’s what we discovered from our survey of more than a hundred respondents, highlighting just how prevalent these issues are in day-to-day operations.
The clean data assumption is a myth
A major insight from the survey is that, while most analytics tools assume clean data by default, this is rarely the case: 79.2% of respondents reported that their data is somewhat incomplete or missing key elements, and 37.7% experience noticeable gaps. This highlights a critical disconnect: analytics platforms are built to generate insights from clean data, yet they are often fed incomplete or inconsistent data, leading to skewed results, poor decisions, and lost revenue.
Without proper cleaning and contextualization of data, businesses end up making decisions based on partial truths.
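To make that gap concrete, here is a minimal sketch in Python with pandas (the column names, values, and threshold are hypothetical, not taken from the survey) of the kind of completeness check most analytics tools skip before generating insights:

```python
import pandas as pd

# Hypothetical CRM export; columns and values are illustrative only.
deals = pd.DataFrame({
    "account":    ["Acme", "Globex", "Initech", "Umbrella"],
    "owner":      ["dana", None, "lee", "sam"],
    "amount_usd": [12000, 8500, None, 4300],
    "close_date": ["2024-03-01", None, "2024-04-12", "2024-05-20"],
})

# Share of missing values per column: the "noticeable gaps" most
# dashboards silently ignore before aggregating.
completeness = deals.isna().mean().sort_values(ascending=False)
print(completeness)

# A simple guardrail: refuse to report on fields that are too sparse.
MAX_MISSING = 0.2  # illustrative threshold, not a recommendation
too_sparse = completeness[completeness > MAX_MISSING]
if not too_sparse.empty:
    print(f"Fields too incomplete to trust: {list(too_sparse.index)}")
```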
Data fragmentation is a persistent problem
The survey revealed that more than half of respondents frequently – or even constantly – encounter inconsistencies in their data across different systems or departments. Most companies rely on multiple tools—CRM, marketing platforms, spreadsheets—without having a unified source of truth.
This fragmentation forces teams to spend hours manually consolidating data: 35.8% of respondents said they regularly need to align their data by hand, and 7.5% even admitted to being in total chaos. Manual consolidation is not only time-consuming but also prone to error, and fragmented data makes it difficult for teams to get a unified view of their business, leading to disjointed strategies and missed opportunities.
This issue is not unique to the respondents in our survey. According to Forrester, data silos cause employees to lose 12 hours a week chasing data, often resulting in fragmented, inconsistent data that inhibits fast decision-making. The reliance on manual processes not only limits efficiency but also carries a potential 30% increase in operational costs, according to Harvard Business Review.
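To illustrate why that manual alignment is so error-prone, here is a hedged sketch, assuming two hypothetical exports (a CRM dump and a marketing spreadsheet, with invented account names) that disagree on formatting, of the reconciliation teams typically do by hand:

```python
import pandas as pd

# Two hypothetical sources describing the same accounts; names and
# columns are illustrative, not taken from the survey.
crm = pd.DataFrame({
    "account": ["Acme Corp", "Globex", "Initech"],
    "arr_usd": [120000, 85000, 43000],
})
marketing = pd.DataFrame({
    "Account Name": ["acme corp", "globex ", "Umbrella"],
    "MQLs": [14, 9, 3],
})

# Normalize join keys -- the step most often done by hand in spreadsheets.
crm["key"] = crm["account"].str.strip().str.lower()
marketing["key"] = marketing["Account Name"].str.strip().str.lower()

merged = crm.merge(marketing, on="key", how="outer", indicator=True)

# Rows present in only one system are exactly the inconsistencies
# respondents report chasing across tools.
mismatches = merged[merged["_merge"] != "both"]
print(mismatches[["key", "_merge"]])
```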
Data confidence is low
With data this incomplete and fragmented, it’s no surprise that confidence in it is low: 26.4% of respondents admitted they have minimal confidence in the accuracy of their data, often feeling the need to double-check it. If teams don’t trust their data, they can’t make confident, proactive decisions. This lack of trust is a significant roadblock for organizations that rely on data-driven decision-making, especially in fast-paced commercial operations.
This lack of data confidence mirrors broader industry challenges. According to Gartner, poor data quality costs organizations an average of $12.9 million annually, with 27% of business leaders reporting that inaccurate data caused them to make flawed decisions. Moreover, businesses that rely on outdated or inaccurate data tend to miss out on growth opportunities—further compounding the problem.
The untapped potential of predictive analytics
While there’s significant potential in AI and predictive analytics, the survey shows many teams are still hesitant. Only 17% of respondents felt confident in their ability to predict trends using their current tools, leaving the remaining 83% relying on guesswork to figure out what the future holds.
This gap suggests that many businesses are not fully leveraging the capabilities of AI, either because they don’t have the technical expertise or because their data is too fragmented and incomplete to support reliable predictions.
AI is undoubtedly one of the most transformative forces in business today. According to McKinsey, businesses that effectively leverage AI-driven insights and predictive analytics see a 5-10% increase in sales and a 15-20% improvement in operational efficiency. However, for teams to truly benefit from these tools, they need access to clean, structured, and contextualized data.
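As a toy illustration of what "predicting trends" can mean once inputs are trustworthy, here is a minimal sketch that fits a straight-line trend to a clean monthly revenue series with NumPy; the figures are invented and the model is deliberately simple:

```python
import numpy as np

# Hypothetical clean monthly revenue series (numbers are invented).
months = np.arange(12)                      # 0 = Jan, ..., 11 = Dec
revenue = np.array([100, 104, 103, 110, 115, 114,
                    121, 125, 128, 133, 131, 140], dtype=float)

# Fit a straight-line trend -- the simplest possible "predictive model".
slope, intercept = np.polyfit(months, revenue, deg=1)

# Project the next quarter. Any gaps or bad values in `revenue` would
# silently distort the slope, which is why clean inputs come first.
next_quarter = slope * np.arange(12, 15) + intercept
print(f"Trend: +{slope:.1f}/month; next 3 months ~ {next_quarter.round(1)}")
```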
Data access is not at your fingertips
The survey also revealed that a striking 85% of respondents struggle to access the data they need on their own: about half can get there only with some effort, and the other half need external assistance even to understand their own key metrics. This reliance on IT teams creates bottlenecks and delays in business decision-making.