Why the hardest question in scientific AI is not which tool to deploy

  • Automating a broken process makes it faster, not better; optimization and reimagination are not the same thing
  • Agentic AI does not follow linear sequences; workflows designed around legacy constraints will not capture its potential
  • The blue-sky question (what would this process look like with unlimited time and unlimited budget?) is the starting point for genuine process reinvention
  • Generic large language models cannot reason about proprietary scientific data; domain-specific scientific language models are what make reimagined processes intelligent

Before a scientist at one biopharma organization can begin a bioreactor run, they complete 64 discrete manual steps. Creating time points. Setting parameters. Copying values into spreadsheets. Configuring the run setup. Every step done by hand, every time, because that is how it has always been done and because, until recently, there was no viable alternative.

Automating those 64 steps is useful. It removes transcription errors and saves time. But Adam Paton, Head of Strategic Accounts at Zifo, argues it is also exactly the wrong place to focus. “Is it revolutionary?” he asks. “Absolutely not.” Eliminating the friction in a broken process is not the same as asking whether the process should exist in its current form at all.

That distinction is where most AI investment in science currently stops short.

The Google Glass warning

We have been here before. In 2014, Google Glass promised that scientists would run discovery labs remotely. A decade later, those headsets are nowhere to be found. According to Gartner, generative AI has now entered the same Trough of Disillusionment. Organizations that skipped the foundations are hitting a wall of apathy: they tried to automate a mess rather than fix it.

The optimization trap

Traditional process improvement follows a familiar logic. Map the current state. Define the future state. That future state is usually just a cleaner, more automated version of today. The sequence of steps stays the same; the friction between them decreases. As Paton put it at a recent joint Zifo and Sapio Sciences event in Hamburg, this delivers a step benefit rather than a transformation.

The shift to agentic AI changes that logic entirely. An agent does not need to follow a linear sequence. Where a scientist moves from A to B to C, an agent can move directly from A to E. Those 64 steps are not a process to be automated; they are a problem to be eliminated. The question is not how to make them faster, but why they require human involvement at all.

Starting with the blue sky

Paton’s framework for getting there begins with a deliberately unrealistic question: if you had unlimited time and unlimited budget, what would this process look like? Not an improved version of the current workflow, but a genuinely reimagined one, built around what the science requires rather than what legacy systems have historically permitted.

Working backward from that ideal state, organizations can identify which constraints are real and which are habits. That mapping must include failures as well as successes: a process reimagined only around what worked carries the same success bias as an AI trained only on perfect data.

Generic AI will not get you there

Reimagining the process also surfaces a specific requirement for the AI that will support it. Generic large language models handle general tasks well: summarization, drafting, and information retrieval. In highly specialized domains such as lentiviral vector research or polymer science, they get you only part of the way, because they cannot reason about proprietary process data they have never seen.

What Paton describes as a scientific language model, trained on an organization’s own data and connected to its ontologies, is what makes a reimagined process genuinely intelligent rather than merely automated. That is the difference between a system that executes a workflow and one that can interrogate your process history and suggest what to do differently next time.

Paton pointed to Sapio’s Elain as an example: an AI co-scientist embedded within the platform, connected to the organization’s ontologies, and capable of operating across the full scientific workflow without scientists needing to switch context.

The scientist stays in the lead

None of this works without the scientist at the center. The goal of reimagination is not to replace scientific judgment; it is to remove everything that is not scientific judgment, so the expert’s attention goes where it actually matters.

Technology comes last. The path forward starts with the problem, not the platform. That is a harder path than deploying a new tool. It is also the only one that delivers a return worth the investment.

Adam Paton presented at Practical AI for Science Leaders, a joint Zifo and Sapio Sciences event held in Hamburg in April 2026, alongside Dr. Marko Gentzsch of Richter Biologics, Dr. Prashant Vaidyanathan of OXB, and Yuri de Lugt and Kelly Maddison of Sapio Sciences.