Certification that
follows AI
wherever it goes.

The same framework certifies an AI system in a cockpit, an operating room, a vehicle, and a production line. Different regulators, different standards, the same underlying failure modes. One published certification record.

01 / Origin
Aerospace & Defense
Where the framework was built

The SCL framework was developed from inside a government human spaceflight program to address exactly this problem: certifying AI for crewed vehicles, where the consequence of an incorrect output cannot be undone.

No AI-specific certification path currently exists for human-rated spacecraft. The governing software standard covers deterministic systems. The AI requirements developed inside the spaceflight program are a direct response to that gap. The SCL framework is the continuation of that work in the commercial and defense sectors.

NPR 7150.2D: V&V occurs at delivery, but AI systems on crew vehicles may be in operation for months to years. Drift detection and revalidation requirements don't exist in the current standard (a monitoring sketch follows this list).
NASA-STD-8739.8: Safety cases require logical traceability from requirements to implementation. Black-box AI outputs cannot satisfy this without an explicit explainability layer.
All spaceflight standards: No recognition of hallucination as a failure mode. Confident, plausible, incorrect outputs are not in the taxonomy of any current spaceflight software standard.
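To make the drift gap concrete, here is a minimal monitoring sketch in Python using the Population Stability Index, a common drift statistic. The 0.2 revalidation threshold is an industry convention used for illustration, not a value taken from NPR 7150.2D or the SCL requirements.

```python
import numpy as np

def psi(reference, live, bins=10):
    """Population Stability Index between a training-time reference
    sample and live inference inputs. Higher = more drift."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch values outside the reference range
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    live_frac = np.histogram(live, edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5000)   # feature distribution at delivery V&V
live = rng.normal(0.4, 1.2, 5000)        # same feature months into the mission
score = psi(reference, live)
# 0.2 is a common industry drift threshold, used here purely as illustration
print(f"PSI = {score:.3f} -> {'revalidate' if score > 0.2 else 'ok'}")
```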
02 / Market
Aviation
FAA roadmap forming

The FAA has explicitly acknowledged in its AI Safety Roadmap that it lacks standards for AI in aviation. Certification frameworks established before regulation arrives often become the basis for regulation. Organizations certifying now are building the record that regulators will reference.

The DO-178C ecosystem already has a deeply established certification culture. Aerospace organizations know what certification means, why it matters, and how to prepare for it. That culture is ready for an AI-specific layer. The SCL framework is designed to sit alongside DO-178C and address the AI failure modes that existing avionics software standards do not cover.

DO-178C: Structural coverage metrics (MC/DC, decision coverage) cannot be applied to neural networks, and no equivalent AI test-coverage requirement exists in avionics software standards (a coverage sketch follows this list).
DO-178C: Deterministic behavior is assumed throughout the standard. Probabilistic outputs, confidence bounds, and distributional behavior are outside its vocabulary.
FAA AI Roadmap: The roadmap acknowledges the gaps, but the certification criteria to fill them are not yet defined. Organizations certifying now are establishing the precedent.
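As one illustration of what an AI analogue to structural coverage could look like, here is a sketch of neuron activation coverage, a metric from the software-testing research literature (DeepXplore-style). It is not an MC/DC equivalent and not a requirement of DO-178C or SCL; the 0.5 activation threshold is an illustrative assumption.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons activated above `threshold` on at least one
    test input. A research-community analogue to structural coverage
    for neural networks, shown here only for illustration."""
    # activations: (n_inputs, n_neurons), post-nonlinearity, scaled to [0, 1]
    covered = (activations > threshold).any(axis=0)
    return covered.mean()

# Toy example: 100 test inputs over a layer of 64 neurons
rng = np.random.default_rng(0)
acts = rng.random((100, 64))
print(f"neuron coverage: {neuron_coverage(acts):.2%}")
```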
03 / Market
Medical Devices
FDA SaMD evolving

FDA's Software as a Medical Device framework is evolving rapidly to address AI and ML in clinical decision support, diagnostic imaging, and patient monitoring. AI-specific validation requirements remain underspecified. Organizations that establish certification records now will hold a lasting regulatory position as FDA's requirements solidify.

The failure modes are identical to spaceflight AI: distributional drift in deployed models, hallucination in clinical language models, out-of-distribution inputs when patient populations differ from training data, and the absence of explainability when a clinician needs to understand an AI recommendation. The SCL framework addresses all of them.
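A minimal sketch of the out-of-distribution piece, using the maximum-softmax-probability baseline (Hendrycks & Gimpel, 2017). The routing threshold is a placeholder; a clinical deployment would calibrate it on held-out data rather than use a fixed value.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_for_clinician(logits, tau=0.75):
    """Maximum-softmax-probability OOD baseline: inputs whose top-class
    probability falls below `tau` are routed to a clinician instead of
    being auto-reported. `tau` here is an illustrative assumption."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < tau

# Two toy cases: a confident in-distribution scan vs. an ambiguous one
logits = np.array([[6.0, 1.0, 0.5],    # clearly one class
                   [1.1, 1.0, 0.9]])   # nearly uniform -> likely OOD
print(flag_for_clinician(logits))      # [False  True]
```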

FDA SaMD: Predetermined change control plans exist, but the criteria for the model drift that triggers revalidation are not quantitatively defined. SCL's AI-4 fills this gap.
IEC 62304: The medical device software lifecycle standard does not address ML-specific V&V requirements, OOD detection, or confidence calibration for clinical outputs (a calibration sketch follows this list).
ISO 14971: The risk management framework does not recognize AI hallucination, adversarial vulnerability, or distributional shift as named risk categories requiring specific mitigation.
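To show what a testable calibration requirement could look like, here is a sketch of Expected Calibration Error, the standard gap-between-stated-confidence-and-observed-accuracy metric. The binning scheme and toy data are illustrative, not drawn from IEC 62304 or the SCL text.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: per confidence bin, the gap between
    mean stated confidence and observed accuracy, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

rng = np.random.default_rng(2)
conf = rng.uniform(0.5, 1.0, 1000)          # model-reported confidence
correct = (rng.random(1000) < conf * 0.8)   # systematically overconfident model
print(f"ECE = {expected_calibration_error(conf, correct.astype(float)):.3f}")
```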
04 / Market
Automotive
ISO 26262 / SOTIF gaps open

ISO 26262 defines functional safety requirements for automotive electrical and electronic systems. ISO 21448 (SOTIF) addresses the safety of the intended functionality. Neither was designed for the probabilistic, distributional behavior of neural-network-based systems operating in the open-world conditions of public roads.

The automotive market has the largest installed base of safety-critical AI systems in the world. The gap between current certification practice and what AI systems actually require is substantial. SOTIF's "unknown unsafe scenarios" are, in AI terms, the out-of-distribution problem. SCL's AI-6 requirement formalizes what SOTIF acknowledges but does not specify.

ISO 26262: ASIL decomposition and hardware fault metrics cannot be applied to neural-network components whose failure modes are distributional, not categorical.
ISO 21448 / SOTIF: Unknown unsafe scenarios are acknowledged but not addressed with quantitative detection requirements. Out-of-distribution detection is the missing specification.
SOTIF: Operational design domain definitions exist in concept but lack quantitative boundary enforcement. This is what SCL addresses through out-of-distribution detection with defined escalation paths (a sketch follows this list).
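A minimal sketch of the kind of escalation policy SOTIF gestures at but does not specify: an OOD score mapped to a defined response. The tier names and thresholds are illustrative assumptions, not values from ISO 21448 or SCL's AI-6.

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "continue nominal operation"
    DEGRADE = "fall back to conservative behavior"
    ESCALATE = "hand off to driver / execute safe stop"

def escalate(ood_score: float, warn: float = 0.6, critical: float = 0.85) -> Action:
    """Map an OOD score (higher = less familiar input) to a defined
    response. Thresholds and tiers are illustrative, not normative."""
    if ood_score >= critical:
        return Action.ESCALATE
    if ood_score >= warn:
        return Action.DEGRADE
    return Action.CONTINUE

for score in (0.2, 0.7, 0.9):
    print(f"OOD score {score:.2f} -> {escalate(score).value}")
```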
05 / Market
Industrial & Manufacturing
EU AI Act, NIST AI RMF emerging

IEC 61508 and its derivatives define functional safety for electrical/electronic/programmable electronic systems. ISO 10218 and ISO 13849 govern robotic and machinery safety. None of these standards account for models whose behavior emerges from training data or that may degrade silently after deployment.

Industrial AI deployments often operate in tightly bounded physical environments with well-defined operational parameters: an ideal fit for a declared Operational Design Domain (ODD). The SCL framework lets industrial operators translate the operational envelope they already enforce into a certification claim that downstream regulators, insurers, and customers can reference.
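As a sketch of how a declared ODD becomes machine-checkable, here is a hypothetical boundary check for a fixed robot cell. The fields and limits are invented for illustration; the SCL framework does not define this schema.

```python
from dataclasses import dataclass

@dataclass
class DeclaredODD:
    """A declared Operational Design Domain reduced to checkable bounds.
    All fields and limits are hypothetical examples for one robot cell."""
    temp_c: tuple = (10.0, 40.0)        # ambient temperature range
    lux: tuple = (200.0, 2000.0)        # lighting the vision model was validated for
    conveyor_mps: tuple = (0.0, 1.5)    # belt speed envelope

    def contains(self, temp_c: float, lux: float, conveyor_mps: float) -> bool:
        """True only if every measured condition is inside the declared envelope."""
        return all(lo <= v <= hi for (lo, hi), v in [
            (self.temp_c, temp_c),
            (self.lux, lux),
            (self.conveyor_mps, conveyor_mps),
        ])

odd = DeclaredODD()
print(odd.contains(temp_c=22.0, lux=800.0, conveyor_mps=1.2))  # True: inside ODD
print(odd.contains(temp_c=22.0, lux=80.0, conveyor_mps=1.2))   # False: lighting out of envelope
```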

IEC 61508: SIL targets assume deterministic failure modes with quantifiable failure rates. Distributional, data-driven failure modes fall outside the standard's vocabulary.
ISO 10218 / 13849: Robot safety functions are specified for bounded kinematic behavior. Vision-based and learned control loops require the OOD detection and explainability that current standards don't address.
EU AI Act: High-risk AI categories explicitly include industrial safety-relevant systems. Organizations need verifiable evidence against a published standard, not internal attestations.

The failure modes are
the same everywhere.

A language model in a cockpit that hallucinates a clearance. A diagnostic imaging model in an operating room that misclassifies a scan. A perception system in an autonomous vehicle that fails to detect an out-of-distribution object. The failure mode is the same. The consequence differs only in specifics.

The SCL framework was developed in spaceflight, where the requirements pressure was greatest and the regulatory gap was most clearly defined. But hallucination doesn't care what industry you're in. Distributional drift happens in every deployed model. The framework is explicitly algorithm agnostic and applicable to neural networks, decision trees, reinforcement learning systems, and hybrid architectures.