Agentic Workflow

Vision and mandate

The four commitments that define how CAIRE builds — the north star for every architectural decision.

Source

The vision crystallised on 2026-04-29 in the job description for CTO – AI Systems & Agent Workforce (location: Stockholm, hybrid; founder-level, hands-on). That job description is the design contract for everything in this section: it defines the human role only as scaffolding for building the agent equivalent of that role, which is the intended long-term load-bearer.

The full text follows. Subsequent pages reference it by section number.


The job description (verbatim)

Read this first

This is not a traditional CTO role. If you want to:

Do not apply.

This role exists to replace those things with software.

About Caire

Caire is building an AI-first healthcare platform with a single, uncompromising principle:

Caire must scale output 1000× without scaling humans.

We do not build teams. We build execution systems.

The role

A new category of CTO. Designs, builds, and continuously optimises an AI workforce that executes across:

Humans define what and why. AI agents execute how — end to end. The job is to make the company behave like software.

Mission

The mandate is brutally simple:

The long-term bottleneck of Caire should be compute, not headcount.

What this CTO builds (hands-on)

  1. AI workforce architecture. Agent roles, responsibilities, collaboration patterns. Evaluator agents, review loops, self-correction. PRD → code → test → deploy pipelines run primarily by agents. Repo conventions and workflows agents must follow.
  2. Agent-first execution. Multi-agent orchestration (build, review, test, ship). AI systems that evaluate and improve other AI systems. Deterministic execution where possible — evaluators where not. Systems that remove work instead of automating it.
  3. Tool & model agnosticism. Continuously evaluate and rotate between OpenAI, Anthropic, Google, OSS frameworks, emerging tools. No vendor loyalty. Only performance, cost, reliability, and leverage matter.
  4. Metrics-driven autonomy. Execution tied directly to business signals (revenue, cost, funnel metrics). Budgets and cash balance treated as system inputs. AI decides how much to scale within constraints.

What this CTO explicitly does NOT do

Why this role exists

Hiring people is slow, expensive, and non-linear. We are building a company where execution is software.


The four commitments (used as design constraints in this section)

| # | Commitment | Where it shows up in this section |
|---|------------|-----------------------------------|
| (a) | Humans define what / why. Agents execute how, end to end. | PRD-to-PR pipeline, Darwin as orchestrator. |
| (b) | Tool / model agnosticism: rotate providers based on cost × performance × reliability. | Model and vendor agnosticism. |
| (c) | If it works, scale it automatically. If it doesn't, kill it automatically. | Scale or kill. |
| (d) | Execution tied to business signals (revenue, cost, funnel, cash). Budgets are system inputs. | Throughput and business signals. |
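Commitment (b) can be read as a concrete selection rule: score each provider on the three inputs the contract names and pick the current winner every evaluation cycle. The sketch below is a hypothetical illustration, not Caire's implementation — the provider names, fields, and the scoring formula are all assumptions; the contract fixes only the inputs (cost, performance, reliability).

```python
from dataclasses import dataclass

@dataclass
class ProviderStats:
    name: str
    cost_per_mtok: float   # USD per million tokens (lower is better)
    eval_pass_rate: float  # fraction of the eval suite passed, 0..1
    uptime: float          # observed reliability, 0..1

def score(p: ProviderStats) -> float:
    """Leverage per dollar: performance x reliability / cost.

    Hypothetical weighting; any monotone combination of the
    three inputs would satisfy commitment (b).
    """
    return (p.eval_pass_rate * p.uptime) / p.cost_per_mtok

def pick_provider(candidates: list[ProviderStats]) -> ProviderStats:
    # No vendor loyalty: the best current score wins each cycle.
    return max(candidates, key=score)

providers = [
    ProviderStats("vendor-a", cost_per_mtok=15.0, eval_pass_rate=0.92, uptime=0.999),
    ProviderStats("vendor-b", cost_per_mtok=3.0, eval_pass_rate=0.85, uptime=0.995),
]
best = pick_provider(providers)  # the cheaper provider wins despite a lower pass rate
```

The point of keeping the rule this small is that rotation becomes a scheduled job, not a procurement meeting.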

Every other page in this section is constrained by these. If a proposal violates (b) — say, hard-codes a single provider — it doesn't ship until the constraint is restored. If a proposal violates (c) — adds a knob a human must turn weekly — it's incomplete until that knob is removed or automated.
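Commitments (c) and (d) together imply a gate with no human knob: business signals and cash go in, a scale-or-kill decision comes out. A minimal sketch, with hypothetical signal names and a hypothetical runway threshold (the real thresholds are system inputs, not constants):

```python
def scale_or_kill(weekly_revenue_delta: float,
                  weekly_cost: float,
                  cash_balance: float,
                  min_runway_weeks: int = 26) -> str:
    """Scale a workstream that pays for itself; kill one that does not.

    Budgets and cash balance are inputs (commitment d), and the decision
    runs without a weekly human turn of the knob (commitment c).
    All thresholds here are illustrative placeholders.
    """
    runway_ok = cash_balance >= min_runway_weeks * weekly_cost
    if weekly_revenue_delta > weekly_cost and runway_ok:
        return "scale"
    return "kill"

# A workstream adding more weekly revenue than it costs, with runway intact:
decision = scale_or_kill(weekly_revenue_delta=12_000.0,
                         weekly_cost=4_000.0,
                         cash_balance=500_000.0)
```

Anything the gate cannot decide from signals alone is, by the contract above, an incomplete proposal rather than a human to-do.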

Why the wiki carries this

The wiki is the agent substrate. If it doesn't carry the design contract, every new agent has to be told the contract from scratch — which costs tokens, drifts in interpretation, and silently regresses. Pinning the contract here means any agent reading the wiki picks it up automatically.

This page is never edited to soften the language. If a commitment changes, the change happens here, deliberately, with a wiki log entry — not by sanding the edges of the original prose.

Cross-references