Safety Critical Labs / The Framework

Ten requirement areas. Every AI failure mode covered.

The SCL AI Requirements Framework establishes verifiable, pass/fail requirements for each AI-specific failure mode not addressed by existing safety-critical software standards. Algorithm-agnostic. Domain-applicable. Openly published under a citable DOI.

Framework v2.1.1

AI-1 through AI-10

AI-1
Operational and data foundations
A declared Operational Design Domain (ODD) bounding the certification claim, training/validation/test separation with documented provenance, and CUI and ITAR classification inheritance through AI output channels.
AI-2
Addressing AI bias
Baseline performance demonstrably free of bias across user classes and operational contexts, bias screening of training data, continuous bias monitoring in operation, and defined alerting and response when bias thresholds are exceeded.
AI-3
ML test coverage
A defined test matrix serving as the ML equivalent of code coverage. Covers nominal performance, edge cases, failure mode injection, and distributional boundary testing.
AI-4
Continuous validation
Post-deployment data drift monitoring, performance threshold maintenance, model maintenance criteria, and periodic model validation. Addresses a gap not covered by existing safety-critical software standards.
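As one illustration of what post-deployment drift monitoring can look like, the sketch below computes a Population Stability Index (PSI) between a training-time baseline sample and live inputs. PSI is a common drift statistic, but the metric choice, bin count, and the 0.2 alert threshold here are illustrative assumptions, not values specified by the SCL framework.

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two numeric samples.

    A PSI above ~0.2 is a common rule-of-thumb signal of
    significant drift (illustrative, not an SCL-mandated
    threshold). Bins are fixed from the baseline's range.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        n = len(sample)
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / n, 1e-6) for c in counts]

    b = bin_fractions(baseline)
    l = bin_fractions(live)
    return sum((lv - bv) * math.log(lv / bv) for bv, lv in zip(b, l))

baseline = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in baseline]
assert psi(baseline, baseline) < 1e-6   # no drift against itself
assert psi(baseline, shifted) > 0.2     # shifted data trips the alert
```

In practice this check would run per feature on a schedule, with alerts feeding the model maintenance criteria the requirement describes.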
AI-5
Hallucination prevention
Hallucination criteria definition, detection, response, and logging, with independent output validation for safety-critical decisions. Graceful degradation to human override below threshold.
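The "graceful degradation to human override below threshold" behavior can be sketched as a confidence gate that routes low-confidence outputs to an operator and logs every decision. The `OutputGate` class, its 0.90 floor, and the log format are hypothetical stand-ins for illustration, not SCL-specified artifacts.

```python
from dataclasses import dataclass, field

@dataclass
class OutputGate:
    """Routes low-confidence model outputs to a human operator.

    A minimal sketch of graceful degradation to human override:
    outputs below the confidence floor are never auto-released,
    and every routing decision is logged for audit.
    """
    confidence_floor: float = 0.90  # illustrative threshold
    log: list = field(default_factory=list)

    def route(self, output, confidence):
        decision = ("auto" if confidence >= self.confidence_floor
                    else "human_override")
        self.log.append({"output": output,
                         "confidence": confidence,
                         "decision": decision})
        return decision

gate = OutputGate()
assert gate.route("clear-to-proceed", 0.97) == "auto"
assert gate.route("clear-to-proceed", 0.55) == "human_override"
```

A real implementation would pair this gate with the independent output validation the requirement calls for, rather than relying on the model's self-reported confidence alone.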
AI-6
Out-of-distribution detection
Training distribution characterization, runtime OOD detection, defined OOD response, and event logging. This is the condition under which AI behavior is least predictable.
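A minimal picture of the detect-respond-log loop: the sketch below characterizes the training distribution by per-feature mean and standard deviation, flags runtime inputs whose z-score exceeds an envelope, and records each event. The `OODMonitor` class and the z = 4.0 threshold are illustrative assumptions; production systems typically use richer characterizations (e.g. density or distance-based methods) than this per-feature envelope.

```python
import statistics

class OODMonitor:
    """Flags inputs far outside the training distribution.

    A per-feature z-score envelope: a deliberately simple
    stand-in for the fuller distribution characterization
    this requirement area describes.
    """
    def __init__(self, training_rows, z_threshold=4.0):
        cols = list(zip(*training_rows))
        self.means = [statistics.fmean(c) for c in cols]
        # Constant features get std 1.0 to avoid divide-by-zero.
        self.stds = [statistics.stdev(c) or 1.0 for c in cols]
        self.z_threshold = z_threshold
        self.events = []  # the requirement also mandates logging

    def check(self, row):
        zs = [abs(x - m) / s
              for x, m, s in zip(row, self.means, self.stds)]
        is_ood = max(zs) > self.z_threshold
        if is_ood:
            self.events.append({"input": row, "max_z": max(zs)})
        return is_ood

training = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (1.0, 2.0)]
monitor = OODMonitor(training)
assert monitor.check((1.0, 2.0)) is False   # in-distribution
assert monitor.check((50.0, 2.0)) is True   # far outside, logged
```

The defined OOD response (fallback, abstention, or human handoff) would hang off the `True` branch of `check`.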
AI-7
Adversarial robustness
Data poisoning protection, adversarial input protection, model inversion and extraction protection, model integrity, adversarial event logging, and AI supply chain security.
AI-8
Explainability
Operator-accessible decision reasoning with traceable decision basis, confidence indication, reasoning inspection capability, and public disclosure support. Required for any AI system where a traditional safety case would demand logical inspection.
AI-9
Human-AI teaming
Human decision authority, operational situational awareness, trust calibration, graceful degradation, interaction logging, operator qualification, workload management, and training program requirements.
AI-10
Privacy and data protection
Personal data identification and minimization, lawful basis and consent management, data subject rights, privacy-enhancing techniques, and cross-border transfer controls. Applies where the system processes personal data; otherwise documented Not Applicable with rationale.
Classification levels

Three tiers. Scaled to consequence.

The framework applies all ten requirement areas at different depths depending on the classification of your AI system. Classification is determined during Phase 1 of the assessment.

Tier 1
Safety-Critical AI
AI outputs directly affect human safety or system survivability. All AI-1 through AI-10 apply. No tailoring without Project Safety Review Board approval.
AI-1 AI-2 AI-3 AI-4 AI-5 AI-6 AI-7 AI-8 AI-9 AI-10
Tier 2
Mission-Critical AI
AI outputs affect operational success but not human safety. AI-1 through AI-6, AI-8, AI-9, and AI-10 apply; AI-7 is tailored with Chief Engineer approval and documented rationale.
AI-1 AI-2 AI-3 AI-4 AI-5 AI-6 AI-7 (tailored) AI-8 AI-9 AI-10
Tier 3
Operational Support AI
AI supports operations but does not drive critical decisions. AI-1 through AI-6 and AI-10 apply at minimum. Tailoring permitted with Software Assurance authority approval and documented rationale.
AI-1 AI-2 AI-3 AI-4 AI-5 AI-6 AI-10
What assessment produces

A formal determination.
Not a score.

Assessment against this framework produces one of three outcomes. Each determination is documented against a specific version of the framework, at a defined classification level, with every finding on record. There is no subjective rating, no maturity index, no percentage.

The standard is publicly available so you can read every requirement before engaging with SCL. That transparency is deliberate. A certification is only defensible to regulators and procurement officers if anyone can verify what it was measured against.

Certified: All applicable requirements met. Certificate issued with a validity period and a scheduled surveillance audit.
Conditional: Minor findings documented. Certificate issued with specific, time-bound conditions attached and a required closure review.
Not Certified: One or more requirements not met. Findings documented. Remediation required before reassessment.

A Determination Document is issued regardless of outcome.

A Conditional Certification is not a lesser certificate. It is a certificate with specific, documented, time-bound conditions. It requires a closure review within the agreed timeframe.

Published standard

Read every requirement
before you engage.

The framework is openly published under a citable DOI. Every requirement, verification method, and evidence standard is available for review before any assessment begins.

If you believe a requirement is technically incorrect, insufficiently grounded, or missing coverage for a known AI failure mode, SCL welcomes that challenge. The standard improves through scrutiny.

Document Requirements and Verification Standards for Artificial Intelligence in Safety-Critical Applications — Version 2.1.1
DOI 10.5281/zenodo.19024420
License CC BY-SA 4.0 — free to read, cite, and reproduce with attribution
Download the framework