Release governance and evaluation workflows

Understand how SentinelAI connects evaluation suites, release approvals, baselines, rollback references, and dependency invalidation.

Overview

Release governance and evaluation workflows are tightly connected in SentinelAI. Evaluation suites define the regression evidence a runtime AI system must satisfy, while release records preserve the governed rollout decision and the dependency set behind it.

This page is an orientation-level overview in the public SentinelAI documentation. It explains how the governance concepts fit together rather than serving as a full configuration reference.

Evaluation suites and baselines

Suites organize prompt-linked test cases, approved baselines, pass-rate targets, and regression thresholds so readiness is based on a durable standard instead of one-off testing.

Release records

Release governance records capture version, environment, approval state, rollback references, and linked prompt or source dependencies for the AI system being promoted.

Gating and invalidations

Release blocking depends on suite results and dependency snapshots staying current. SentinelAI surfaces these relationships so that a release can be flagged for re-review when a supporting asset, such as a linked prompt or evaluation baseline, changes after approval.
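The invalidation check behind that re-review can be sketched as a comparison between the dependency snapshots recorded at approval and the assets' current snapshots. The function name and snapshot-map shape are assumptions for illustration.

```python
# Hedged sketch of dependency invalidation: if any snapshot recorded at
# approval time no longer matches the asset's current snapshot, the
# release needs re-review.
def stale_dependencies(approved: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return assets whose snapshot changed (or disappeared) since approval."""
    return [asset for asset, snap in approved.items() if current.get(asset) != snap]

approved = {"prompt/summarize": "a1f9", "suite/safety": "77c0"}
current = {"prompt/summarize": "b202", "suite/safety": "77c0"}
print(stale_dependencies(approved, current))  # ['prompt/summarize']
```

An empty result means the approval still rests on the evidence that was originally reviewed; a non-empty result identifies exactly which supporting assets drifted.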