If one question captured the mood at SapioCon 2026, it was this: if AI is everywhere, why isn’t the impact evenly distributed?

Becky Upton, president and CEO of the Pistoia Alliance, and Mike Hampton, Sapio’s chief commercial officer, came at the answer from different directions but landed in the same place. Upton brought the industry-level view: AI investment is accelerating, but enterprise-wide impact remains uneven. Hampton brought the lab-level reality: science has outgrown the tools scientists rely on every day.

The implication is uncomfortable but useful. Most organizations are not stuck because AI tools do not exist. They are stuck because workflows, data practices and day-to-day operating habits were not built to carry AI-enabled work end to end, with enough context and traceability to support decisions teams will stand behind later.

The bottleneck is coordination, not capability

Upton’s core argument was that many barriers to scaling AI are not really technology problems. They are coordination problems: friction in the seams between teams, tools, data types and the evidence that supports decisions.

That is why the investment picture can be misleading.

Pistoia Alliance survey data puts numbers on the momentum: 70 percent of life sciences organizations describe AI as a top investment area, and 64 percent expect it to become their number one priority. Seventy-seven percent anticipate using AI or machine learning in their laboratories within two years.

Where the workflow breaks, value gets lost

Hampton’s keynote made the coordination problem tangible. Modern R&D is increasingly interconnected and multimodal, but many informatics approaches still behave like isolated solutions. Teams record the work, but the workflow separates context from data before it can drive the next step.

Sapio’s recent survey of scientists put numbers on where that pain is most acute.

Sixty-five percent of respondents said they repeat experiments because they cannot find the right information, cannot find enough of it, or cannot trust the context well enough to reproduce results downstream. That is lost time, lost instrument capacity and lost momentum. Upton reinforced the same theme from an enterprise perspective, citing an estimate that 55 percent of an organization’s data is “dark,” trapped in unstructured silos, legacy repositories and people’s heads.

Hampton also highlighted what he called the interpretation handoff problem. Scientists generate the data, but the analysis happens elsewhere, often in a separate tool, often by a separate team, and often days later. Decisions get made, but the reasoning that supported them is hard to trace back. That is not just an efficiency problem. In regulated environments, it is a governance problem.

None of these are edge cases. They are ordinary workflow failures, and they help explain why enterprise AI impact stays uneven even when local teams can demonstrate impressive outcomes.

Shadow AI is the workaround

Once you see those handoff zones, shadow AI stops being surprising. Hampton was pragmatic about it: scientists are not reaching for external tools because they are careless. They are doing it because they need speed and interpretation support closer to the experimental record. More than three-quarters of scientists in Sapio’s survey reported using outside tools today just to get their work done.

It is tempting to treat that as a governance problem. The more useful read is that it is a workflow reality check. Scientists want dialogue with their data, not just storage. They want interpretation inside the workflow, not bolted on afterward. The question for leaders is not “how do we stop people from using AI tools?” It is “why do people feel they have to leave the governed workflow to do the thinking?”

What coordination-first looks like

Upton’s diagnosis and Hampton’s framing converge on a simple conclusion: AI scales when coordination becomes part of the design, not an afterthought.

That means reducing the handoff zones where context disappears and aligning teams on shared ways of capturing meaning. Decisions need to stay traceable to the evidence that supported them.

According to Hampton, the path forward is not simply better models but scientific workflows enabled by AI agents: workflows that keep context intact, reduce the handoffs where meaning gets lost, and coordinate the people, tools and data that science depends on.

Key points

  • AI investment in life sciences is accelerating, but impact remains uneven: 70% of organizations cite AI as a top investment area, yet enterprise-wide returns are inconsistent across teams and workflows.
  • The core bottleneck is coordination, not capability. Most organizations are not held back by a lack of AI tools but by workflows, data practices and operating habits that were never built to carry AI-enabled work end to end.
  • 65% of scientists in a Sapio Sciences survey reported repeating experiments because they could not find the right information, could not find enough of it, or could not trust the context well enough to reproduce results downstream, representing lost time, instrument capacity and momentum.
  • An estimated 55% of organizational data is “dark,” trapped in unstructured silos, legacy repositories and undocumented institutional knowledge, limiting the context AI needs to support decisions teams will stand behind.
  • Shadow AI is a workflow signal, not a compliance failure: more than three-quarters of scientists report using unvetted tools outside governed systems, not out of carelessness but because they need interpretation support closer to the experimental record.