Four lessons from the lab floor on practical AI success
The opportunity to use AI to accelerate scientific discovery and reduce manual burden is real. So is the frustration of organizations that have invested heavily without seeing the returns they expected.
At a recent joint Sapio Sciences and Zifo event, R&D leaders, data scientists, and informatics practitioners gathered to work through the question many AI programs are quietly wrestling with: Why isn’t this scaling?
The consensus was clear. AI readiness for scientific organizations is not a technology problem. It is a foundation problem.
Here is what we heard.
Good AI needs bad data
The biggest breakthroughs in science rarely come from repeating what worked. They come from finally understanding why something did not. The instinct when building AI systems is to clean the data first. Organizations filter out the failures and present the model with only successful results. That instinct produces a blind spot: a model that has only ever seen successes cannot learn what separates them from failures.
Building an AI-ready foundation means capturing and structuring failed experiments with the same rigor as successful ones. To get good AI, you have to give it your worst data.
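As a minimal sketch of what "structuring failures with the same rigor" could look like in practice: a record schema that treats outcome and failure mode as first-class, queryable fields rather than free-text notes. All names and fields here are hypothetical illustrations, not a Sapio or Zifo data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExperimentRecord:
    """Hypothetical record: failures carry the same structure as successes."""
    experiment_id: str
    protocol: str
    parameters: dict
    outcome: str                         # "success" or "failure" — both captured
    failure_mode: Optional[str] = None   # structured reason, not a buried comment
    observations: dict = field(default_factory=dict)

records = [
    ExperimentRecord("EXP-001", "crystallization", {"temp_c": 4}, "success"),
    ExperimentRecord("EXP-002", "crystallization", {"temp_c": 25}, "failure",
                     failure_mode="no_nucleation"),
]

# Because failure_mode is structured, negative results are queryable —
# a model (or a scientist) can ask *why* EXP-002 failed, not just that it did.
failures = [r for r in records if r.outcome == "failure"]
```

The design point is simply that a filtered dataset of successes discards exactly the signal the previous paragraph describes; keeping failures in the same schema keeps that signal available.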
Read more: The AI Readiness Gap Is Not a Technology Problem
The scientist stays in the loop
AI in the lab should be agentic but not autonomous. It should retrieve data, suggest parameters, and build the structural backbone of the work, but the human expert must review and decide the next step. This is not a constraint on what AI can do. It is the fundamental condition under which the technology actually gets adopted and trusted in a regulated environment.
The scientist is not just the end-user of an AI system; they are the decision-maker within it.
Deploying AI on a broken process only makes the mess faster
Consider a scientist who must complete 64 discrete manual steps before a bioreactor run can even begin. Automating those steps removes the friction, but it does not fix the underlying problem; it only executes a flawed sequence faster. The more important question is whether all those steps are actually necessary, or whether the current sequence is simply a relic of the paper era.
The organizations seeing genuine transformation are those that redesign their processes around what AI makes possible, rather than merely digitizing what already exists.
Read more: When to Stop Optimizing and Start Reimagining
The data problem comes before everything else
Governance, ontologies, and a unified system of record are not optional extras in an AI strategy. They are the foundation without which nothing else works. Scientific data environments have grown increasingly complex over the years. Instrument outputs, legacy PDFs, and spreadsheets held together with macros all create a landscape of data that was never designed to be shared.
The organizations making genuine progress are those that address this reality directly rather than trying to deploy tools on top of the chaos.
Read more: The Data You Ignore Is the Data You Need
Learn more about how Sapio Sciences is helping scientific organizations build AI-ready foundations here.