The right level of governance intelligence for where you are now
Every tier of ALMA is built around a specific governance challenge. Choose the one that matches where your organisation is in its AI governance journey — not the one with the longest feature list.
Essentials
Establish your baseline. Know where you stand.
You need to demonstrate that your organisation takes AI literacy seriously — but you don't yet have the evidence to show it. You're preparing for an audit, responding to a client questionnaire, or simply trying to understand where your people actually stand on AI before you invest in training.
Ideal for
Organisations conducting their first structured AI governance readiness review, or teams preparing evidence for EU AI Act Article 4 obligations.
Professional
Move from awareness to action. Build a governance culture.
You've identified that AI literacy is a governance risk — but a one-time score isn't enough. You need to track progress over time, understand which teams are most exposed, and have the tools to turn assessment findings into concrete interventions. You're also under pressure to demonstrate ongoing compliance, not just a point-in-time snapshot.
Ideal for
Governance leads, Chief AI Officers, and HR or L&D teams responsible for building and sustaining AI literacy across the organisation. Also suited to consultancies running ALMA assessments for multiple clients.
Enterprise
Govern AI at scale. Integrate, automate, and demonstrate.
You operate at a scale where AI governance can't be managed manually. You need ALMA to integrate with your existing GRC stack, support SSO and enterprise identity management, generate board-level reporting, and provide the audit trail that regulators and certification bodies require. You may also be deploying AI across multiple jurisdictions and need to manage governance evidence across a complex organisational structure.
Ideal for
Large organisations, regulated industries, and enterprises with complex AI deployment portfolios that require audit-ready governance evidence, enterprise identity management, and integration with existing compliance infrastructure.
EU AI Act Article 50 — transparency obligation
Article 50 of the EU AI Act requires organisations deploying certain AI systems to disclose their use to affected individuals. ALMA includes a built-in AI Transparency Statement to support this obligation. It applies to all organisations using in-scope AI systems, irrespective of which ALMA tier they use.
All tiers share the same psychometric foundation
Every ALMA assessment — regardless of tier — uses the same validated 50-item instrument, the same five-dimension taxonomy, and the same scoring methodology. What differs is the depth of analysis, the breadth of deployment, and the integration and advisory support available.
Essentials
- Validated 50-item assessment
- Five-dimension score report
- Executive summary PDF
- EU AI Act Article 4 evidence
- Up to 50 respondents
Professional
- Everything in Essentials
- Longitudinal trend tracking
- Risk pattern detection
- Automatically generated action plans
- ISO 42001 gap analysis support
- Regulatory intelligence feed
- Multi-team segmentation
- Learning pathways & community
- Pulse surveys
- Up to 250 respondents
Enterprise
- Everything in Professional
- SSO & SCIM provisioning
- GRC webhook integration
- Board-level reporting
- Custom question sets
- Advanced analytics
- Multi-language support
- Unlimited respondents
- Priority advisory access
Common questions
Not sure which tier is right for you?
Book a 30-minute discovery conversation with The Responsible AI Center. We'll review your governance context and recommend the most appropriate starting point — without any obligation.
