Human-adaptive AI.
Trustworthy by design.

Hopperlace builds evaluation and orchestration infrastructure for AI systems that know when to act and when to defer — making the human-AI boundary measurable, auditable, and trustworthy.

Humans are still essential in the places that matter most in the AI age. High stakes demand human accountability. Current model capabilities impose real limits. Both mean someone still has to be in the loop, and that loop has to actually work. Better safety. Better, faster AI too.

The architecture

Three layers. One architecture.

Layer 1

Knowledge & Evaluation Layer

Evidence Synthesis

Early pilots

AI systems need to know what they don’t know. This layer builds the evaluation infrastructure for appropriate deferral — measuring not just accuracy but when the AI should stop and hand off. Current application: human-in-the-loop systematic review screening.

Learn more

Layer 2

Routing & Orchestration Layer

LetsBegin

In Development

Getting the human in the loop isn’t enough — it matters when, how, and in what form. This layer manages the handoff: sequencing decisions, surfacing one thing at a time, and routing based on confidence and complexity. Designed around human attention and cognitive capacity, so the human who receives the task can actually do it well. Current application: human-adaptive task orchestration for agentic workflows.
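As an illustrative sketch only (the threshold names and task fields below are hypothetical, not Hopperlace's API), confidence- and complexity-based routing can be as simple as splitting tasks into an automatic lane and a human queue that surfaces one item at a time:

```python
# Hypothetical sketch of confidence/complexity routing for a human-in-the-loop
# orchestrator. Tasks the model is confident about and that are simple enough
# proceed automatically; everything else is queued for a human, one at a time.
from dataclasses import dataclass
from collections import deque

@dataclass
class Task:
    name: str
    confidence: float   # model's self-reported confidence, 0..1
    complexity: float   # estimated task complexity, 0..1

def route(tasks, min_confidence=0.8, max_complexity=0.6):
    auto, human_queue = [], deque()
    for t in tasks:
        if t.confidence >= min_confidence and t.complexity <= max_complexity:
            auto.append(t)          # safe to act without handoff
        else:
            human_queue.append(t)   # defer; surfaced to the human one at a time
    return auto, human_queue
```

A real orchestration layer would also sequence the queue around the reviewer's attention and capacity, but the core deferral decision reduces to a policy like this.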

Layer 3

Trust & Governance Layer

Value Compass

In Development

Trust in AI systems has to be earned, not assumed. This layer makes AI behavior legible, measuring alignment between how a system acts and the values of the people and organizations relying on it. So the right tool gets used in the right situation, with the right expectations. Current application: trust evaluation framework for AI tool selection.

Running a systematic review team?

We’re onboarding early pilots.

Get in touch

Research

White Paper · March 2026

Deference-Aware Evaluation for Human-in-the-Loop AI Systems

Hopperlace Research

We introduce a framework for evaluating AI systems not just on accuracy, but on their capacity to recognize the limits of their own competence and appropriately defer to human judgment. The paper presents deference-aware metrics and demonstrates their application in high-stakes screening tasks.
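To illustrate the idea (this is a toy sketch, not the paper's actual metrics), a deference-aware evaluation scores a system jointly on the accuracy of the predictions it commits to and the rate at which it defers to a human:

```python
# Hypothetical sketch of a deference-aware metric. Each prediction is either
# a label or None, where None means "defer to human judgment". The system is
# scored on selective accuracy (over committed predictions) and deferral rate.
def deference_aware_eval(predictions, labels):
    committed = [(p, y) for p, y in zip(predictions, labels) if p is not None]
    deferral_rate = 1 - len(committed) / len(labels)
    if committed:
        selective_accuracy = sum(p == y for p, y in committed) / len(committed)
    else:
        selective_accuracy = None  # the system deferred on every case
    return {"selective_accuracy": selective_accuracy,
            "deferral_rate": deferral_rate}

# Example: the system answers 3 of 4 cases and defers on one.
deference_aware_eval(["a", None, "b", "c"], ["a", "x", "b", "x"])
```

Under this kind of metric, a system that defers on exactly the cases it would have gotten wrong outscores one that answers everything, which is the behavior the framework is designed to reward.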

Read on OSF

Team

Who we are

YS

Yuyu Shen

Founder

Ex-data scientist and seasoned AI-native product leader turned founder, most recently in HR, fintech, and enterprise tech. Building Hopperlace because there's a gap I couldn't ignore: we're building AI evaluation and orchestration systems as if humans aren't in the picture. They are, and will continue to be, in many places. And that changes everything about how those systems should be designed.

Contact

Get in touch

Interested in our research or exploring collaboration?

hello@hopperlace.ai