Certification that
follows AI
wherever it goes.

The SCL framework is explicitly domain-agnostic: developed in spaceflight, it applies wherever the consequence of AI failure is measured in lives, missions, or critical infrastructure. We go where AI goes.

Aerospace & Defense
Where the framework was built
NASA · DoD · NPR 7150.2D · NASA-STD-8739.8 · MIL-STD

If you are building or deploying AI on human-rated spacecraft, autonomous defense systems, launch vehicles, or crew-support platforms, this is the market the SCL framework was written for.

The SCL framework was developed from inside a government human spaceflight program to address exactly this problem — for crewed vehicles where the consequence of an incorrect AI output cannot be undone.

No AI-specific certification path currently exists for human-rated spacecraft. The governing software standard covers deterministic systems. The AI requirements developed inside the spaceflight program are a direct response to that gap. The SCL framework is the continuation of that work in the commercial and defense sectors.

NPR 7150.2D: V&V occurs at delivery, but AI systems on crew vehicles may be in operation for months to years. Drift detection and revalidation requirements don't exist in the current standard (see the sketch below).
NASA-STD-8739.8: Safety cases require logical traceability from requirements to implementation. Black-box AI outputs cannot satisfy this without an explicit explainability layer.
All spaceflight standards: No recognition of hallucination as a failure mode. Confident, plausible, incorrect outputs are not in the taxonomy of any current spaceflight software standard.
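To make the drift gap concrete, here is a minimal sketch of one widely used statistical check, the population stability index (PSI), comparing in-flight input distributions against the baseline captured at delivery. The 0.2 threshold and the simulated distributions are illustrative assumptions, not SCL or NPR requirements.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live input distribution against the delivery-time
    baseline. PSI > 0.2 is a common rule of thumb for significant
    shift; a real program would set this threshold per system."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor empty bins so the log term stays finite.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # inputs seen at V&V
live = rng.normal(0.4, 1.2, 10_000)       # drifted inputs months later

psi = population_stability_index(baseline, live)
if psi > 0.2:                             # illustrative threshold
    print(f"PSI = {psi:.3f}: drift detected, flag model for revalidation")
```

The point of the sketch is the trigger, not the statistic: a drift metric only becomes certifiable evidence once it is paired with a defined threshold and a defined revalidation action.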
Aviation
FAA Roadmap forming now
FAA · EASA · RTCA · DO-178C · DO-254

If you are an avionics supplier, aircraft OEM, or air traffic management provider deploying AI in certified airspace, the regulatory window to establish a certification record is open now.

The FAA has explicitly acknowledged in its AI Safety Roadmap that it lacks standards for AI in aviation. Certification frameworks established before regulation arrives often become the basis for regulation. Organizations certifying now are building the record that regulators will reference.

The DO-178C ecosystem already has a deeply established certification culture. Aerospace organizations know what certification means, why it matters, and how to prepare for it. That culture is ready for an AI-specific layer. The SCL framework is designed to sit alongside DO-178C and address the AI failure modes that existing avionics software standards do not cover.

DO-178C: Structural coverage metrics (MC/DC, decision coverage) cannot be applied to neural networks. No AI-equivalent test coverage requirement exists in avionics software standards (one candidate analog is sketched below).
DO-178C: Deterministic behavior is assumed throughout the standard. Probabilistic outputs, confidence bounds, and distributional behavior are outside its vocabulary.
FAA AI Roadmap: The roadmap acknowledges the gaps, but the certification criteria to fill them are not yet defined. Organizations certifying now are establishing the precedent.
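What an AI-side analog to structural coverage might look like is still an open question; neuron coverage, the fraction of a network's units a test suite actually exercises, is one candidate from the research literature. The toy network, its random weights, and the 200-sample suite below are illustrative assumptions, not an SCL requirement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network standing in for an avionics perception model.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

def forward(x, fired):
    """Run the network and record which hidden units activated."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    fired |= h > 0.0                   # mark neurons exercised by this input
    return h @ W2 + b2

# Neuron coverage: the fraction of hidden units activated across the
# test suite, an AI-side analog to structural coverage over branches.
fired = np.zeros(16, dtype=bool)
test_suite = rng.normal(size=(200, 8))
for x in test_suite:
    forward(x, fired)

print(f"neuron coverage: {fired.mean():.1%} of hidden units exercised")
```

Neuron coverage is not a drop-in replacement for MC/DC; the sketch only shows that coverage-style evidence can be defined for a network at all.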
Medical Devices
FDA SaMD guidance evolving
FDA · SaMD · IEC 62304 · ISO 14971

If you are a medical device manufacturer, clinical AI developer, or health system deploying AI in diagnostic, therapeutic, or patient monitoring applications, the SaMD regulatory environment is moving faster than most organizations are prepared for.

FDA's Software as a Medical Device framework is evolving rapidly to address AI and ML in clinical decision support, diagnostic imaging, and patient monitoring. AI-specific validation requirements remain underspecified. Organizations that establish certification records now will hold a durable regulatory position as FDA's requirements solidify.

The failure modes are identical to spaceflight AI: distributional drift in deployed models, hallucination in clinical language models, out-of-distribution inputs when patient populations differ from training data, and the absence of explainability when a clinician needs to understand an AI recommendation. The SCL framework addresses all of them.

FDA SaMD: Predetermined change control plans exist, but the criteria for AI model drift that should trigger revalidation are not quantitatively defined. SCL's AI-4 fills this gap.
IEC 62304: The medical device software lifecycle standard does not address ML-specific V&V requirements, OOD detection, or confidence calibration for clinical outputs (see the calibration sketch below).
ISO 14971: The risk management framework does not recognize AI hallucination, adversarial vulnerability, or distributional shift as named risk categories requiring specific mitigation.
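As one example of what a quantitative revalidation trigger could look like, here is a minimal sketch of expected calibration error (ECE), which measures how far a model's reported confidence diverges from its observed accuracy. The 0.05 threshold and the simulated overconfident model are illustrative assumptions; they are not FDA criteria or the SCL AI-4 specification.

```python
import numpy as np

def expected_calibration_error(conf, correct, bins=10):
    """Bin predictions by reported confidence and compare mean
    confidence to observed accuracy in each bin. A rising ECE between
    delivery and a later audit is one quantitative revalidation trigger."""
    ece = 0.0
    edges = np.linspace(0.0, 1.0, bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

rng = np.random.default_rng(2)
conf = rng.uniform(0.5, 1.0, 5_000)            # model-reported confidence
# Simulate an overconfident model: true accuracy lags reported confidence.
correct = (rng.random(5_000) < conf - 0.15).astype(float)

ece = expected_calibration_error(conf, correct)
if ece > 0.05:                                 # illustrative threshold
    print(f"ECE = {ece:.3f}: calibration degraded, trigger revalidation")
```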
Automotive
ISO 26262 / SOTIF gaps
ISO 26262 · ISO 21448 / SOTIF · NHTSA · UNECE WP.29

If you are an automotive OEM, Tier 1 supplier, or ADAS developer working with neural network-based systems in safety-relevant vehicle functions, the gap between what ISO 26262 requires and what your AI system actually does is not theoretical.

ISO 26262 defines functional safety requirements for automotive electrical and electronic systems. ISO 21448 (SOTIF) addresses safety of the intended functionality. Neither was designed for the probabilistic, distribution-dependent behavior of neural network-based systems operating in the open-world conditions of public roads.

The automotive market has the largest installed base of safety-critical AI systems in the world. The gap between current certification practice and what AI systems actually require is substantial. SOTIF's "unknown unsafe scenarios" are, in AI terms, the out-of-distribution problem. SCL's AI-6 requirement formalizes what SOTIF acknowledges but does not specify.

ISO 26262: ASIL decomposition and hardware fault metrics cannot be applied to neural network components whose failure modes are distributional, not categorical.
ISO 21448 / SOTIF: Unknown unsafe scenarios are acknowledged but not addressed with quantitative detection requirements. Out-of-distribution detection is the missing specification.
SOTIF: Operational design domain definitions exist in concept but lack quantitative boundary enforcement. This is what SCL addresses through out-of-distribution detection with defined escalation paths; one such check is sketched below.
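Here is a minimal sketch of one common OOD check, the Mahalanobis distance of a perception feature vector from the validation-set distribution, with the distance threshold wired to an explicit escalation path. The 99.9th-percentile threshold, the feature dimensions, and the fallback action are illustrative assumptions, not the SCL AI-6 text.

```python
import numpy as np

rng = np.random.default_rng(3)

# Feature vectors from the validation set define the in-ODD distribution.
val_feats = rng.normal(0.0, 1.0, size=(5_000, 4))
mu = val_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(val_feats, rowvar=False))

def mahalanobis(x):
    """Distance of a perception feature vector from the validation set."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold set from the validation set itself (99.9th percentile),
# so the alarm rate on in-distribution data is known in advance.
threshold = np.quantile([mahalanobis(f) for f in val_feats], 0.999)

def handle(frame_features):
    """Defined escalation path: OOD frames hand off, never pass silently."""
    if mahalanobis(frame_features) > threshold:
        return "ESCALATE: hand off to fallback / minimal-risk maneuver"
    return "NOMINAL: perception output accepted"

print(handle(rng.normal(0.0, 1.0, 4)))           # in-distribution frame
print(handle(np.array([6.0, -5.0, 7.0, 4.0])))   # far outside the ODD
```

The detector itself is interchangeable; what the requirement adds is the declared threshold and the named escalation action, which is exactly what SOTIF leaves unspecified.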
Industrial & Manufacturing
Functional safety meets ML
IEC 61508 · ISO 10218 · ISO 13849 · NIST AI RMF · EU AI Act

If you are deploying AI into process control, predictive maintenance, industrial robotics, or machine vision on a production floor, the functional safety standards you already work to were not written for learned behavior.

IEC 61508 and its derivatives define functional safety for electrical/electronic/programmable electronic systems. ISO 10218 and ISO 13849 govern robotic and machinery safety. None of these standards account for models whose behavior emerges from training data or that may degrade silently after deployment.

Industrial AI deployments often operate in tightly bounded physical environments with well-defined operational parameters — an ideal fit for a declared Operational Design Domain (ODD). The SCL framework lets industrial operators translate the operational envelope they already enforce into a certification claim that downstream regulators, insurers, and customers can reference.
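A minimal sketch of what a declared ODD can look like in practice: explicit, machine-checkable parameter bounds enforced at runtime. The parameter names, ranges, and fallback action here are hypothetical; a real declaration would come from the operator's existing safety envelope.

```python
# Hypothetical declared ODD for a machine-vision cell: explicit parameter
# bounds that can be enforced at runtime and cited in a certification claim.
DECLARED_ODD = {
    "lux":          (300.0, 1200.0),   # ambient lighting
    "conveyor_mps": (0.1, 0.8),        # belt speed, m/s
    "part_temp_c":  (10.0, 45.0),      # part surface temperature
}

def within_odd(sensors: dict) -> tuple[bool, list[str]]:
    """Check the current cell state against the declared ODD and list
    any violated parameters for the escalation log."""
    violations = [
        name for name, (lo, hi) in DECLARED_ODD.items()
        if not lo <= sensors[name] <= hi
    ]
    return (not violations, violations)

ok, violated = within_odd(
    {"lux": 150.0, "conveyor_mps": 0.5, "part_temp_c": 22.0})
if not ok:
    print(f"outside declared ODD ({', '.join(violated)}): "
          "suspend vision-based decisions, fall back to manual inspection")
```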

IEC 61508: SIL targets assume deterministic failure modes with quantifiable failure rates. Distributional, data-driven failure modes fall outside the standard's vocabulary.
ISO 10218 / 13849: Robot safety functions are specified for bounded kinematic behavior. Vision-based and learned control loops require OOD detection and explainability that current standards don't address.
EU AI Act: High-risk AI categories explicitly include industrial safety-relevant systems. Organizations need verifiable evidence against a published standard, not internal attestations.
Ready to certify

Begin an assessment for your program.

The SCL framework applies wherever AI is deployed under consequence. Tell us about your program and we will map the applicable requirements and next steps.

Start an assessment
Algorithm-agnostic. Domain-agnostic.

The failure modes are
the same everywhere.

A language model in a cockpit that hallucinates a clearance. A diagnostic imaging model in an operating room that misclassifies a scan. A perception system in an autonomous vehicle that fails to detect an out-of-distribution object. The failure mode is the same. The consequence differs only in specifics.

The SCL framework was developed in spaceflight, where the requirements pressure was greatest and the regulatory gap was most clearly defined. But hallucination doesn't care what industry you're in. Distributional drift happens in every deployed model. The framework is explicitly algorithm-agnostic and applicable to neural networks, decision trees, reinforcement learning systems, and hybrid architectures.

Spaceflight: Hallucination in a crew decision support system. Drift in an anomaly detection model between launch and arrival. OOD input during an unplanned orbital maneuver.
Aviation: Hallucination in an ATC communication model. Distributional shift when operating in weather conditions outside training data. Adversarial spoofing of sensor inputs.
Medical: Hallucination in a clinical language model. Drift in a diagnostic model as the patient population shifts. OOD inputs when rare conditions fall outside the training distribution.
Automotive: Perception failure outside the operational design domain. Adversarial inputs that exploit model vulnerabilities. Drift in a safety system as the vehicle population ages.
Industrial: Vision model misclassification outside the declared ODD on a production line. Drift in a predictive maintenance model as equipment ages. OOD input to a collaborative robot when an unexpected object enters the workspace.