Standards
The frameworks behind every finding.
We map every result to the standards your security, compliance, and legal teams already use as reference points.
OWASP LLM Top 10
The definitive list of the ten most critical security risks specific to Large Language Model applications. Published by OWASP and maintained by a global community of AI security researchers.
- LLM01: Prompt Injection
- LLM02: Insecure Output Handling
- LLM03: Training Data Poisoning
- LLM04: Model Denial of Service
- LLM05: Supply Chain Vulnerabilities
- LLM06: Sensitive Information Disclosure
- LLM07: Insecure Plugin Design
- LLM08: Excessive Agency
- LLM09: Overreliance
- LLM10: Model Theft
Every BetweenPrompt scan generates probes mapped to all ten categories. Findings are tagged with their LLM category code and include OWASP-aligned remediation guidance.
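As an illustration of how a tagged finding might be structured, consider the sketch below. The field names and values are hypothetical, not BetweenPrompt's actual report schema; they only show an OWASP LLM category code and remediation guidance travelling with a finding.

```python
# Hypothetical finding record; field names are illustrative, not the real schema.
finding = {
    "id": "F-0042",
    "title": "System prompt override via user-supplied instructions",
    "owasp_llm": "LLM01",  # Prompt Injection
    "severity": "high",
    "remediation": (
        "Segregate untrusted input from system instructions; "
        "apply privilege separation per OWASP LLM01 guidance."
    ),
}

def owasp_category(record: dict) -> str:
    """Return the OWASP LLM Top 10 code attached to a finding."""
    return record["owasp_llm"]
```

A downstream report generator could group findings by this code to produce per-category summaries.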
NIST AI RMF
The NIST AI Risk Management Framework provides a structured, voluntary approach to managing the risks of AI systems throughout their lifecycle. Although voluntary, it is increasingly referenced in enterprise procurement requirements and US federal contracting.
- GOVERN — Policies, processes, and accountability
- MAP — Categorize risks and context
- MEASURE — Analyze and assess risks
- MANAGE — Prioritize and treat risks
BetweenPrompt's automated testing directly supports the MEASURE function. Consulting engagements cover GOVERN, MAP, and MANAGE — helping teams implement RMF-aligned processes.
MITRE ATLAS
Adversarial Threat Landscape for Artificial-Intelligence Systems. MITRE ATLAS is a knowledge base of adversarial ML tactics, techniques, and procedures observed in real-world attacks and academic research.
- Reconnaissance against ML systems
- ML supply chain compromise
- Model evasion and inference attacks
- Backdoor ML model manipulation
- Exfiltration via AI system outputs
BetweenPrompt's execution engine includes test cases derived from ATLAS techniques. Every applicable finding includes the ATLAS technique ID alongside OWASP mapping.
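A finding's OWASP tag and ATLAS technique ID can sit side by side via a simple crosswalk. The sketch below is a simplified illustration, not an official mapping; AML.T0051 (LLM Prompt Injection) and AML.T0020 (Poison Training Data) are real ATLAS technique IDs at time of writing, but the current ATLAS matrix should be treated as authoritative.

```python
# Illustrative OWASP-LLM-to-ATLAS crosswalk; a simplified sketch, not an official mapping.
OWASP_TO_ATLAS = {
    "LLM01": ["AML.T0051"],  # Prompt Injection -> LLM Prompt Injection
    "LLM03": ["AML.T0020"],  # Training Data Poisoning -> Poison Training Data
}

def atlas_ids(owasp_code: str) -> list[str]:
    """Look up ATLAS technique IDs mapped to an OWASP LLM category, if any."""
    return OWASP_TO_ATLAS.get(owasp_code, [])
```

Categories with no direct ATLAS counterpart simply return an empty list, so a report can omit the ATLAS field when no technique applies.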
ISO/IEC 42001
The first international standard for AI Management Systems. Provides requirements for establishing, implementing, maintaining, and improving an AI management system — analogous to ISO/IEC 27001 for information security.
- AI risk assessment and treatment
- Responsible AI objectives
- Operational controls for AI systems
- Performance evaluation and improvement
BetweenPrompt's reporting supports documentation requirements for Clause 8 (Operation) and Clause 9 (Performance Evaluation). Our consulting team can advise on full 42001 alignment.
OWASP ASVS
The Application Security Verification Standard defines security requirements for designing, developing, and testing web applications. Relevant for any AI system with a web-facing interface or API layer.
- Authentication and session management
- Input validation and encoding
- API and web service security
- Configuration and infrastructure
BetweenPrompt covers the ASVS controls most applicable to AI-adjacent APIs — particularly input handling (V5) and API security (V13). Full ASVS coverage is available through security review engagements.
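To make the V5-style input handling concrete, here is a minimal validation sketch for a prompt arriving at an AI-adjacent API. The length limit and character checks are assumptions chosen for illustration, not values mandated by ASVS.

```python
MAX_PROMPT_CHARS = 4000  # assumed limit, for illustration only

def validate_prompt(raw: str) -> str:
    """Basic ASVS V5-style input checks before a prompt reaches the model."""
    if not isinstance(raw, str):
        raise TypeError("prompt must be a string")
    text = raw.strip()
    if not text:
        raise ValueError("prompt is empty")
    if len(text) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum length")
    # Reject control characters that can smuggle instructions or corrupt logs;
    # newlines and tabs are allowed as ordinary formatting.
    if any(ord(ch) < 32 and ch not in "\n\t" for ch in text):
        raise ValueError("prompt contains control characters")
    return text
```

Validation at the API boundary complements, but does not replace, LLM-specific defenses such as prompt-injection testing.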
EU AI Act
The European Union's comprehensive AI regulation classifying AI systems by risk level and imposing obligations accordingly. High-risk systems face mandatory conformity assessments. Entered into force in 2024, with obligations phasing in through 2026 and beyond.
- Risk classification of AI systems
- Transparency requirements for LLM outputs
- Technical documentation obligations
- Post-market monitoring requirements
BetweenPrompt's reporting can serve as evidence for technical documentation and conformity assessment. Our consulting team advises on risk classification and compliance roadmaps.
Need help mapping to a specific standard?
Our consulting team advises on compliance roadmaps for NIST AI RMF, ISO/IEC 42001, and EU AI Act obligations.