Gartner estimates that over 40% of agentic AI projects will be cancelled by the end of 2027, due to escalating costs, unclear business value, or inadequate risk controls. Attendees at SapioCon 2026 felt the weight of that statistic. The conversation repeatedly returned to what organizations need in place before they can truly rely on AI.
As the industry moves from simple chatbots to autonomous agents that reason, plan, and act, the central question shifts. It is no longer about whether a system can return the right answer. It is whether the conditions exist to trust the answer it returns.
What emerged across three excellent keynote sessions was a shared view that the algorithms are ready, but the conditions for trusting their outputs are not. Data infrastructure, governance, and compute each represent a distinct prerequisite, and none can be skipped.
The Data Foundation
Andreas Steinbacher, who leads digital transformation in therapeutics discovery at Novo Nordisk, argued that the challenge in preclinical research runs deeper than access to tools or technologies. Steinbacher identifies the root cause as what he calls the “my data” mindset: the assumption that data generated by one person for one purpose is self-explanatory to anyone else who encounters it.
Take an IC50 value, a standard measure in drug discovery that quantifies the concentration of an inhibitor at which a biological response is reduced by half. To the scientist who generated it, the number is perfectly legible; they know which formulas they applied, how they subtracted the baseline, and what experimental conditions they used. But that context lives in their head or in a notebook, never associated with the IC50 value in the record. An AI agent encounters only the number: a single piece of data that makes sense to a human with context but is practically useless to a machine without it.
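The gap between a bare number and a machine-readable record can be made concrete. The sketch below is purely illustrative (the field names and the comparability rule are assumptions, not a proposed standard): it contrasts an IC50 stored as a lone value with the same measurement carrying its experimental context, which is what lets a machine reason about whether two results are even comparable.

```python
from dataclasses import dataclass

# A bare result as it often lands in a record: legible to its
# author, opaque to a machine.
bare_result = {"IC50": 12.4}

# The same measurement with its context made explicit.
# Field names are illustrative, not a proposed schema.
@dataclass(frozen=True)
class IC50Measurement:
    value_nM: float            # the IC50 itself, with units stated
    compound_id: str           # which inhibitor was tested
    assay_type: str            # e.g. enzymatic vs cell-based
    curve_fit_model: str       # the formula applied to derive the value
    baseline_correction: str   # how the baseline was subtracted
    replicates: int            # experimental conditions shaping interpretation

def comparable(a: IC50Measurement, b: IC50Measurement) -> bool:
    """Two IC50 values only mean the same thing if they were derived
    the same way; with explicit context, a machine can check that."""
    return (a.assay_type == b.assay_type
            and a.curve_fit_model == b.curve_fit_model
            and a.baseline_correction == b.baseline_correction)

m1 = IC50Measurement(12.4, "CMPD-0042", "cell-based viability",
                     "four-parameter logistic", "vehicle-control subtracted", 3)
m2 = IC50Measurement(8.7, "CMPD-0043", "cell-based viability",
                     "four-parameter logistic", "vehicle-control subtracted", 3)
m3 = IC50Measurement(8.7, "CMPD-0043", "enzymatic",
                     "four-parameter logistic", "none", 2)
```

With `bare_result`, none of these checks are possible; the structured record is what turns a scientist's private context into something an agent can act on.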
Getting the sequencing right, Steinbacher argues, is non-negotiable. Moving to automation before data is properly digitized and standardized produces what he calls “automation islands,” sophisticated systems built on top of data that was never structured for machine reasoning.
And this is precisely why, in Steinbacher’s opinion, the ELN needs to evolve. If the underlying data it captures remains unstructured narrative text, it will continue to produce exactly the kind of automation islands Steinbacher describes. The paper-on-glass record-keeping tool of today needs to become an active AI co-scientist, one that does not just capture what a researcher did but helps them decide what to do next.
The Verification Imperative
Building on a structured data foundation, the next challenge is ensuring that what AI systems do with it can be trusted. Christine Tsien Silvers, Health Executive Advisor at AWS and a former emergency physician, brings a clinical lens to this argument.
Dr. Silvers argues that mistakes in complex systems rarely stem from intent to harm. They happen because capable professionals are working inside systems that lack adequate safeguards. And this same principle applies directly to AI in life sciences. An AI system that generates unvalidated outputs, however fluent, introduces the same category of risk as a pharmacist who clicks through an alert without reading it.
In life sciences, an AI system that produces outputs that sound plausible but are factually wrong is not operating within an acceptable margin of error. That is a fundamental failure.
Dr. Silvers points to a verification layer as the necessary response: tools for bias detection in training data, drift monitoring in deployed models, and automated reasoning that uses mathematical verification to distinguish factual outputs from confabulation. This is not a feature to be added later; it is the infrastructure that makes AI outputs actionable in a regulated context.
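Drift monitoring, one of the tools named above, can be sketched minimally as a statistical comparison between the input distribution a model was trained on and what it sees in production. The example below is a toy illustration under stated assumptions: the metric (live-mean shift measured in reference standard deviations) and the threshold are placeholders for whatever statistical test a real deployment would use.

```python
import statistics

def drift_score(reference: list[float], live: list[float]) -> float:
    """Toy drift metric: how far the live mean has moved from the
    reference mean, in units of the reference standard deviation.
    Illustrative only; production systems use proper two-sample tests."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    return abs(statistics.mean(live) - ref_mean) / ref_sd

def drifted(reference: list[float], live: list[float],
            threshold: float = 2.0) -> bool:
    # When drift is detected, the right response is to flag the model's
    # outputs for human review, not to keep acting on them silently.
    return drift_score(reference, live) > threshold

# Feature values observed at training time vs. two live windows.
reference = [0.9, 1.1, 1.0, 1.05, 0.95, 1.0]
stable_live = [1.0, 0.98, 1.02]
shifted_live = [3.1, 3.0, 2.9]
```

The point of the sketch is architectural: the check runs continuously alongside the deployed model, so trust in an output is conditioned on evidence that the model is still operating in the regime it was validated for.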
Into the Physical World
Rory Kelleher, Global Head of Business Development for Life Sciences at NVIDIA, describes what he sees as the next frontier: physical AI. Kelleher defines this as the point where autonomous agents move beyond digital reasoning to interact with and manipulate the physical world, for example through roboticized factories, automated manufacturing lines, and instrumented lab environments. This brings with it a third prerequisite, and it concerns training data.
Defect detection models on high-throughput pharmaceutical manufacturing lines routinely underperform because real-world defect data is too sparse to train them adequately. The solution Kelleher describes is synthetic data, generated through physics-accurate digital twin simulations, and used to close the training gap and drive model performance to a point where it can be reliably deployed.
Where real-world data is scarce or expensive to generate, simulation becomes the primary mechanism for building models robust enough to act on.
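The workflow described above can be sketched abstractly: when real defect examples are too sparse, a simulator generates labeled synthetic examples to bring the defect class up to a usable size. Everything in this sketch is hypothetical (the placeholder "simulator," the record format, the target count); it illustrates the augmentation pattern, not NVIDIA's implementation.

```python
import random

def simulate_defect_sample(seed: int) -> dict:
    """Stand-in for a physics-accurate digital twin render. In practice
    this would be a simulated sensor frame; here it is a labeled
    placeholder record (hypothetical)."""
    rng = random.Random(seed)
    return {"pixels": [rng.random() for _ in range(16)],
            "label": "defect", "source": "synthetic"}

def augment_training_set(real_samples: list[dict],
                         target_defect_count: int) -> list[dict]:
    """Top up sparse real defect data with synthetic samples until the
    defect class reaches the target count."""
    real_defects = [s for s in real_samples if s["label"] == "defect"]
    needed = max(0, target_defect_count - len(real_defects))
    synthetic = [simulate_defect_sample(seed=i) for i in range(needed)]
    return real_samples + synthetic

# A typically imbalanced real-world set: 2 defects in 100 samples.
real = ([{"pixels": [0.0] * 16, "label": "defect", "source": "real"}] * 2
        + [{"pixels": [0.5] * 16, "label": "ok", "source": "real"}] * 98)
augmented = augment_training_set(real, target_defect_count=50)
```

The design choice worth noting is that simulation shifts the bottleneck from data collection to simulation fidelity: the augmented set is only as trustworthy as the digital twin that generated it, which is why Kelleher stresses that the simulations must be physics-accurate.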
Three Prerequisites, One Argument
These are not sequential challenges. An organization cannot deploy trustworthy physical AI without a verification architecture, and that architecture cannot function without well-structured, governed data at the foundation. The question every R&D organization needs to answer is whether the conditions for trusting AI in their workflows and manufacturing processes have been built.
Key Points
- Gartner estimates over 40% of agentic AI projects will be cancelled by the end of 2027; in life sciences, the cause is typically weak foundations, not weak algorithms
- The “my data” mindset is a primary barrier to AI-ready data: context that exists only in a scientist’s head is invisible to a machine
- Moving to automation before data is standardized creates fragile, costly systems; getting the sequencing right is non-negotiable
- The ELN needs to evolve from a passive record into an active AI co-scientist; that shift requires data structured for machines, not just humans
- In life sciences, AI producing outputs that sound plausible but are factually wrong is not an acceptable margin of error; mathematical verification tools are required infrastructure, not an optional addition
- Structured data, verification architecture, and scalable compute are simultaneous prerequisites; none is sufficient without the others