
Why Teams Are Using AI Every Day — But Don’t Trust a Single Number It Produces
Artificial intelligence is everywhere in modern business. If you’re in marketing, sales, or revenue operations, chances are you’ve used AI tools in your daily workflow, even if you didn’t mean to. These days even Google has AI built in, so it’s being used whether you actively chose it or not. Teams use AI for forecast summaries, pipeline analysis, campaign performance explanations (yes, guilty), or for questions like “Why did revenue drop?” typed into ChatGPT or whatever AI tool happens to be open.
Yet despite all this usage, something strange is happening: teams use AI all the time, but they don’t trust its numbers. People routinely double-check AI outputs against spreadsheets, ignore outputs they don’t like, or rerun queries with slightly adjusted prompts just to feel confident. That’s not because AI is unhelpful; it’s because teams have learned, through experience, that AI outputs often sound confident but aren’t always grounded in the reality of their data.
This pattern echoes what researchers call the AI trust paradox: people will use AI-enabled technologies even when they don’t fully trust them, often because they believe the benefits outweigh the risks, or because they fear being left behind if they don’t adopt them (Kreps et al., 2023).
Why is this happening now?
Largely because there has been a massive explosion of tools. We’ve got ChatGPT, Copilot, and Gemini embedded into work tools, plus a growing number of sector-specific AI products. At the same time, GTM teams are under pressure to move faster, report more often (don’t get me started on weekly reports), and explain numbers instantly. You’re expected to be ready at all times.
AI fills the speed gap beautifully, but it leaves a huge confidence gap behind. That gap becomes very real when decisions affect forecasts, budgets, executive decisions, and investor contributions. Imagine making a forecast based on what AI predicted, only for it to miss and cost the business real money. This is not far from the truth: an EY survey of large companies reported by Reuters (2025) found that nearly every organization adopting AI has experienced some financial loss tied to flawed outputs or integration challenges, despite optimism about its long-term benefits.
Another global survey reported that a majority of workers use AI tools regularly at work, but many still feel they can’t trust the outputs without verification, and some even hide their AI usage from their bosses because they’re unsure how it will be perceived (Thompson, 2025).
So if you’ve had these doubts about AI, you are not alone. In short, teams are using AI because they need to, but they aren’t fully confident it’s giving them numbers they can act on without checking.
The quiet distrust nobody talks about
This is the part no one really says out loud. AI distrust is quiet and subtle, but it’s very real. It shows up in how teams behave: before sharing a number from AI, someone quietly checks the spreadsheet; a manager reruns the same prompt three different ways; people default to manual calculations because they feel safer.
That pattern isn’t random. It happens because most AI tools don’t explain how they arrived at a number. You don’t know what data was included, what was excluded, or whether the tool even understood your metric definitions. The confusion is deepened when teams don’t share a common definition of a key metric like pipeline, conversion rate, or MRR, because AI doesn’t automatically fill in that gap.
This isn’t just frustration; it’s rational behavior. When you can’t see the lineage of a number, you can’t trust it for a decision that affects revenue or strategy. So AI becomes a draft assistant: great to start with, but not reliable enough to finish with.
The real problem isn’t AI, it’s the metrics layer
Most teams talk about distrust as if AI is the root cause. But the real issue isn’t AI itself; it’s the metrics layer beneath it.
AI doesn’t produce numbers in a vacuum. It reasons on top of your data and definitions. If those are messy, inconsistent, or siloed, the AI’s answers will reflect that mess. The outputs might still sound confident, because modern language models are trained to produce fluent responses, but they aren’t always verifiable or traceable back to a trusted source.
This is a key point: AI doesn’t make numbers up out of nowhere; it regurgitates what it sees, and if your data layer is flawed, the AI’s outputs will be too. Companies that focus on building robust data quality and governance frameworks often report higher confidence in AI outputs because the underlying numbers themselves are consistent and explainable (Giovine et al., 2024).
When teams have conflicting metric definitions, fragmented tools, and manual cleanup steps, AI is essentially guessing — and teams know it, even if subconsciously. That’s why outputs get challenged, ignored, or verified manually.
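To make that concrete, here’s a toy illustration (the deals, amounts, and stage names are invented, and the logic is only a sketch): the same three deals produce two different “pipeline value” figures depending on which definition is applied, and an AI tool has no way of knowing which one your team means.

```python
# Toy example: the same deals, two plausible "pipeline value" definitions.
deals = [
    {"name": "Acme", "amount": 40_000, "stage": "Discovery"},
    {"name": "Globex", "amount": 25_000, "stage": "Proposal"},
    {"name": "Initech", "amount": 60_000, "stage": "Negotiation"},
]

# Definition A (say, Sales): every open deal counts.
pipeline_all_open = sum(d["amount"] for d in deals)

# Definition B (say, RevOps): only deals at Proposal stage or later count.
qualified_stages = {"Proposal", "Negotiation"}
pipeline_qualified = sum(d["amount"] for d in deals if d["stage"] in qualified_stages)

print(pipeline_all_open)   # 125000
print(pipeline_qualified)  # 85000
```

Neither number is wrong; they just answer different questions. That’s exactly the gap an AI tool can’t close on its own.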
What “AI-Ready” metrics actually require
If you want teams to trust AI numbers, the foundations below need to be solid first:
1. One agreed definition per KPI.
Everyone needs to know what a metric actually means. If “pipeline value” means something different in Sales vs. RevOps dashboards, AI doesn’t magically reconcile the definitions. Teams would have to do that manually.
2. Traceability.
There needs to be a clear path from a number back to its source: which system, which filters, which logic produced it. This isn’t about suspicion; it’s about accountability. If a number changes, a teammate should be able to see why (Dhawan, 2024).
3. Canonical sources of truth.
Teams should agree on where each key metric is owned. Is revenue sourced from the billing system? Is churn from product analytics? Once there’s agreement, AI doesn’t have to guess which dataset “wins.”
When these foundations exist, AI becomes not just fast, but trustworthy. You actually understand why a number looks the way it does, and AI can build on that rather than misinterpret it.
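What this looks like in practice depends on your stack, but as a rough sketch (the field names and the pipeline_value example below are hypothetical, not a prescription), an agreed metric definition is just a small, explicit record: what the metric means, which system owns it, and which filters and logic produce the number.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """One agreed, traceable definition for a KPI."""
    name: str           # the name everyone uses for this KPI
    description: str    # the plain-language definition the team agreed on
    source_system: str  # canonical source of truth (which dataset "wins")
    owner: str          # team accountable for changes to this definition
    formula: str        # how the number is computed from the source
    filters: list[str] = field(default_factory=list)  # logic applied before computing

pipeline_value = MetricDefinition(
    name="pipeline_value",
    description="Sum of open deal amounts at Proposal stage or later.",
    source_system="CRM",
    owner="RevOps",
    formula="SUM(amount)",
    filters=["stage in ('Proposal', 'Negotiation')", "is_closed = false"],
)
```

When definitions live somewhere explicit like this, a changed number can be traced back to a changed filter or source rather than argued about, and an AI tool can be pointed at the agreed definition instead of guessing.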
Where Tinkery fits
Tinkery uses AI, but with a clear focus on grounding it in metrics teams actually trust, rather than letting it operate on top of fragmented or unclear data.
We help teams by:
- Unifying metrics across tools
- Creating a shared metrics canon
- Making numbers explainable before AI interprets them
That means your AI outputs are not guesses; they’re backed by defined data logic you can explain. The result is fewer double-check moments, fewer mistakes, and decisions backed by real data instead of assumptions. Companies with stronger governance, explainability, and responsible AI practices consistently report better outcomes from AI investments and higher confidence in outputs.
The shift teams are starting to make
AI adoption is no longer the challenge. Trust is. The next phase isn’t about more prompts or smarter questions; it’s about using AI supported by metrics that make the numbers real. Teams that get this right will move faster without constantly worrying about whether the numbers are off. And that’s when AI finally starts doing what people expected it to do in the first place.
Sources
Dhawan, A. (2024). Turning black boxes into glass boxes: Building trust in AI-generated research insights. Insight Platforms. https://www.insightplatforms.com/turning-black-boxes-into-glass-boxes-building-trust-in-ai-generated-research-insights
Giovine, C., Roberts, R., Pometti, M., & Bankhwal, M. (2024, November). Building AI trust: The key role of explainability. McKinsey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/building-ai-trust-the-key-role-of-explainability
Kreps, S., George, J., Lushenko, P., & Rao, A. (2023). Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States. PLoS ONE, 18(7), e0288109. https://doi.org/10.1371/journal.pone.0288109
Reuters. (2025, October). Most companies suffer some risk-related financial loss deploying AI, EY survey shows. Reuters. https://www.reuters.com/business/most-companies-suffer-some-risk-related-financial-loss-deploying-ai-ey-survey-2025-10-08
Thompson, P. (2025, May 29). Researchers asked almost 50,000 people how they use AI. Over half of workers said they hide it from their bosses. Business Insider. https://www.businessinsider.com/kpmg-trust-in-ai-study-2025-how-employees-use-ai-2025-4
It’s time to stop fighting your data
Whether you’re scaling a startup or running lean at a growth stage, you need reporting you can trust and data you don’t have to babysit.

