Classification engine, document generation, and chat
AktAI's classification engine uses RAG (Retrieval-Augmented Generation) to analyze your AI systems. Here's how it works:
1. Your system details are embedded using OpenAI embeddings.
2. The most relevant EU AI Act articles and recitals are retrieved from our vector database.
3. Claude AI analyzes the retrieved context and classifies the system.
4. Results include a risk level, a confidence score (0-100%), reasoning, and cited articles.
The system retries the classification up to two times to ensure an accurate result.
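The four steps above can be sketched as a small pipeline. Everything here is illustrative: the function names, the toy bag-of-words "embeddings", the in-memory article store, and the canned result all stand in for the real OpenAI embedding and Claude calls, which are not shown.

```python
# Minimal sketch of the embed -> retrieve -> classify flow, with retries.
# All names and data are illustrative stand-ins, not AktAI's actual API.
from dataclasses import dataclass
import math

@dataclass
class Classification:
    risk_level: str
    confidence: float      # 0-100
    reasoning: str
    cited_articles: list

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Step 1: embed the system description (toy bag-of-words instead of OpenAI).
def embed(text, vocab=("biometric", "chatbot", "credit", "medical")):
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

# Step 2: a tiny in-memory stand-in for the vector database of articles.
STORE = {
    "Article 5":  embed("biometric categorisation prohibited"),
    "Article 6":  embed("high-risk credit medical systems"),
    "Article 50": embed("chatbot transparency obligations"),
}

def retrieve(query_vec, k=2):
    ranked = sorted(STORE, key=lambda a: cosine(STORE[a], query_vec), reverse=True)
    return ranked[:k]

# Steps 3-4: a stand-in for the LLM call, retried up to two times on failure.
def classify(description, max_retries=2):
    articles = retrieve(embed(description))
    for attempt in range(max_retries + 1):
        try:
            # A real system would send the description plus the retrieved
            # article text to Claude here; we return a canned result.
            return Classification("high", 85.0,
                                  "Credit scoring affects access to essential services.",
                                  articles)
        except Exception:
            if attempt == max_retries:
                raise

result = classify("credit scoring model for loan applicants")
```

Note that the retrieved articles travel with the result, which is what lets the final answer cite specific provisions.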
The AI Compliance Chat (accessible from the sidebar) lets you ask questions about the EU AI Act in plain language. It uses the same RAG pipeline as classification but in conversational mode.
Examples:

- "Does my chatbot need special documentation?"
- "What are my obligations as a deployer?"
- "Explain Article 6 in simple terms"
All responses cite specific articles from the regulation.
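The citation behaviour can be sketched as a thin wrapper around a chat turn. The function name, field layout, and the particular articles shown are assumptions for illustration, not AktAI's real API; a real turn would first run the same retrieval step shown for classification, then call the LLM.

```python
# Illustrative sketch: every chat answer carries the articles it relied on.
def chat_turn(question, retrieved_articles, llm_answer):
    """Append source citations so each response names specific provisions."""
    citations = ", ".join(retrieved_articles)
    return f"{llm_answer} (Sources: {citations})"

reply = chat_turn(
    "What are my obligations as a deployer?",
    ["Article 26", "Article 29"],                 # hypothetical retrieval output
    "Deployers must use high-risk systems per the provider's instructions.",
)
```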
AktAI automatically recommends and generates compliance documents based on your registered AI systems. Supported document types:
- Fundamental Rights Impact Assessment (FRIA)
- Risk Assessment Report
- Transparency Notice
- Conformity Declaration
- AI Literacy Training Plan
Documents are generated using AI but should be reviewed by your compliance team before approval.
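One plausible shape for the recommendation step is a mapping from a system's risk classification to the document types listed above. The mapping rules below are hypothetical, chosen only to show the idea; the document names themselves come from the list above.

```python
# Hypothetical mapping from risk level to recommended documents.
# The rules are illustrative, not AktAI's actual recommendation logic.
RECOMMENDATIONS = {
    "high": [
        "Fundamental Rights Impact Assessment (FRIA)",
        "Risk Assessment Report",
        "Conformity Declaration",
    ],
    "limited": ["Transparency Notice"],
    "minimal": ["AI Literacy Training Plan"],
}

def recommend_documents(risk_level):
    """Return the draft documents to generate for a classified system."""
    return RECOMMENDATIONS.get(risk_level, [])

docs = recommend_documents("high")
```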
Your compliance score (0-100%) is calculated from four dimensions:
1. System Classification (25%): Have all systems been classified?
2. Documentation (25%): Are required documents generated and approved?
3. Training (25%): What percentage of employees has completed AI literacy training?
4. Gap Remediation (25%): Have identified compliance gaps been addressed?
Scores are tracked over time so you can monitor progress.
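The weighted calculation can be sketched directly. The four equal 25% weights come from the list above; the per-dimension inputs (completion ratios) are an assumption about how each dimension is measured.

```python
# Sketch of the 0-100% compliance score with four equally weighted dimensions.
WEIGHTS = {
    "classification": 0.25,  # share of systems classified
    "documentation": 0.25,   # share of required docs generated and approved
    "training": 0.25,        # share of employees with completed AI literacy training
    "remediation": 0.25,     # share of identified gaps addressed
}

def compliance_score(dimensions):
    """Each dimension is a completion ratio in [0, 1]; result is 0-100."""
    return round(100 * sum(WEIGHTS[k] * dimensions[k] for k in WEIGHTS), 1)

score = compliance_score({
    "classification": 1.0,   # all systems classified
    "documentation": 0.5,    # half the required documents approved
    "training": 0.8,         # 80% of staff trained
    "remediation": 0.3,      # 30% of gaps closed
})
# score == 65.0  (25 + 12.5 + 20 + 7.5)
```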
The Evidence Vault automatically collects and timestamps compliance evidence:
- Document approvals
- Risk classifications
- Training completions
- Gap remediations
- Policy acknowledgments
This creates an audit trail you can present to regulators. Evidence can be exported as a compliance package (ZIP bundle).
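The export step can be sketched as bundling timestamped evidence records into a ZIP archive. The record fields and file layout here are assumptions for illustration; only the ZIP-bundle format itself comes from the text.

```python
# Sketch of exporting timestamped evidence as a ZIP compliance package.
import io
import json
import zipfile
from datetime import datetime, timezone

def export_evidence(items):
    """Bundle evidence records into an in-memory ZIP, one JSON file each."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for i, item in enumerate(items):
            # Stamp each record with an export timestamp for the audit trail.
            record = dict(item, exported_at=datetime.now(timezone.utc).isoformat())
            zf.writestr(f"evidence/{i:04d}_{item['type']}.json",
                        json.dumps(record, indent=2))
    return buf.getvalue()

package = export_evidence([
    {"type": "document_approval", "document": "Transparency Notice"},
    {"type": "training_completion", "employee": "E-1042"},
])
```

Keeping the archive in memory (`io.BytesIO`) makes it easy to stream the package as a download rather than writing to disk.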