The SCL framework is explicitly domain-agnostic: developed in spaceflight, but applicable wherever the consequence of AI failure is measured in lives, missions, or critical infrastructure. We go where AI goes.
If you are building or deploying AI on human-rated spacecraft, autonomous defense systems, launch vehicles, or crew-support platforms, this is the market the SCL framework was written for.
The SCL framework was developed inside a government human spaceflight program to address exactly this problem: crewed vehicles where the consequence of an incorrect AI output cannot be undone.
No AI-specific certification path currently exists for human-rated spacecraft. The governing software standard covers deterministic systems. The AI requirements developed inside the spaceflight program are a direct response to that gap. The SCL framework is the continuation of that work in the commercial and defense sectors.
If you are an avionics supplier, aircraft OEM, or air traffic management provider deploying AI in certified airspace, the regulatory window to establish a certification record is open now.
The FAA has explicitly acknowledged in its AI Safety Roadmap that it lacks standards for AI in aviation. Certification frameworks established before regulation arrives often become the basis for regulation. Organizations certifying now are building the record that regulators will reference.
The DO-178C ecosystem already has a deeply established certification culture. Aerospace organizations know what certification means, why it matters, and how to prepare for it. That culture is ready for an AI-specific layer. The SCL framework is designed to sit alongside DO-178C and address the AI failure modes that existing avionics software standards do not cover.
If you are a medical device manufacturer, clinical AI developer, or health system deploying AI in diagnostic, therapeutic, or patient monitoring applications, the SaMD regulatory environment is moving faster than most organizations are prepared for.
FDA's Software as a Medical Device framework is evolving rapidly to address AI and ML in clinical decision support, diagnostic imaging, and patient monitoring, yet AI-specific validation requirements remain underspecified. Organizations that establish certification records now will hold a durable regulatory position as FDA's requirements solidify.
The failure modes are identical to spaceflight AI: distributional drift in deployed models, hallucination in clinical language models, out-of-distribution inputs when patient populations differ from training data, and the absence of explainability when a clinician needs to understand an AI recommendation. The SCL framework addresses all of them.
If you are an automotive OEM, Tier 1 supplier, or ADAS developer working with neural network-based systems in safety-relevant vehicle functions, the gap between what ISO 26262 requires and what your AI system actually does is not theoretical.
ISO 26262 defines functional safety requirements for automotive electrical and electronic systems. ISO 21448 (SOTIF) addresses safety of the intended functionality. Neither was designed for the probabilistic, distribution-dependent behavior of neural network-based systems operating in the open-world conditions of public roads.
The automotive market has the largest installed base of safety-critical AI systems in the world. The gap between current certification practice and what AI systems actually require is substantial. SOTIF's "unknown unsafe scenarios" are, in AI terms, the out-of-distribution problem. SCL's AI-6 requirement formalizes what SOTIF acknowledges but does not specify.
If you are deploying AI into process control, predictive maintenance, industrial robotics, or machine vision on a production floor, the functional safety standards you already work to were not written for learned behavior.
IEC 61508 and its derivatives define functional safety for electrical/electronic/programmable electronic systems. ISO 10218 and ISO 13849 govern robotic and machinery safety. None of these standards accounts for models whose behavior emerges from training data or that may degrade silently after deployment.
Industrial AI deployments often operate in tightly bounded physical environments with well-defined operational parameters: an ideal fit for a declared Operational Design Domain (ODD). The SCL framework lets industrial operators translate the operational envelope they already enforce into a certification claim that downstream regulators, insurers, and customers can reference.
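To make the idea concrete, a declared ODD can be expressed as an explicit, machine-checkable envelope. The sketch below is illustrative only: the variable names, bounds, and fallback policy are hypothetical assumptions, not part of the SCL framework's actual requirement language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OddBound:
    """One monitored variable of the declared Operational Design Domain."""
    name: str
    low: float
    high: float

    def contains(self, value: float) -> bool:
        return self.low <= value <= self.high

# An industrial cell's already-enforced operating envelope, restated as ODD
# bounds (values are hypothetical).
ODD = [
    OddBound("ambient_temp_c", 5.0, 40.0),
    OddBound("conveyor_speed_mps", 0.0, 1.5),
    OddBound("illumination_lux", 300.0, 1200.0),
]

def inside_odd(reading: dict) -> bool:
    """True only if every monitored variable sits inside its declared bound."""
    return all(bound.contains(reading[bound.name]) for bound in ODD)

# Inputs outside the declared ODD are routed to a non-learned fallback
# rather than to the model, which is the behavior a certification claim
# about the envelope can then reference.
reading = {"ambient_temp_c": 22.0, "conveyor_speed_mps": 0.8, "illumination_lux": 950.0}
print(inside_odd(reading))  # True
```

The point of the sketch is that an envelope an operator already enforces physically can be restated as a small, auditable artifact.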
The SCL framework applies wherever AI is deployed under consequence. Tell us about your program and we will map the applicable requirements and next steps.
Start an assessment

A language model in a cockpit that hallucinates a clearance. A diagnostic imaging model in an operating room that misclassifies a scan. A perception system in an autonomous vehicle that fails to detect an out-of-distribution object. The failure mode is the same. The consequence differs only in specifics.
The SCL framework was developed in spaceflight, where the requirements pressure was greatest and the regulatory gap was most clearly defined. But hallucination doesn't care what industry you're in. Distributional drift happens in every deployed model. The framework is explicitly algorithm-agnostic and applicable to neural networks, decision trees, reinforcement learning systems, and hybrid architectures.