The EU AI Act significantly raises the bar for how AI systems must be governed throughout their lifecycle. Firms are being asked not just to declare that they have policies and governance structures in place, but to furnish tangible, enduring evidence: in effect, to deliver a “kit” of artefacts that demonstrate compliance in action.
Organisations must compile documentation of the risk assessments performed on an AI system, identifying hazards, assessing their severity, and linking each to mitigation measures. They must then connect those assessments to control frameworks and governance procedures, showing that ownership was assigned and that the findings were acted upon. Traceability of data provenance becomes a core requirement: firms must show how training and test data were sourced, annotated, version‑controlled, and validated.
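The Act prescribes outcomes rather than a schema, but one way a firm might meet them is to capture each risk finding and each dataset as a structured, serialisable record. The Python sketch below is illustrative only: the RiskRecord and DataSource classes, their field names, and the control identifier CTRL-014 are assumptions for the example, not terms defined by the Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Illustrative schema: field names are assumptions, showing how a risk
# finding can be tied to a control and to an accountable owner.
@dataclass
class RiskRecord:
    hazard: str                 # identified hazard, e.g. biased outputs
    severity: str               # e.g. "low" / "medium" / "high"
    mitigation_id: str          # key into the firm's control framework
    owner: str                  # person accountable for the mitigation
    assessed_on: date = field(default_factory=date.today)

# Provenance entry for one dataset used in training or testing.
@dataclass
class DataSource:
    name: str
    origin: str                 # where the data was sourced
    annotation_method: str      # how the labels were produced
    version: str                # version-control tag for this snapshot
    validated: bool             # passed the firm's validation checks

if __name__ == "__main__":
    record = RiskRecord(
        hazard="model under-performs on under-represented groups",
        severity="high",
        mitigation_id="CTRL-014",
        owner="head-of-model-risk",
    )
    source = DataSource(
        name="loan-applications-2023",
        origin="internal CRM export",
        annotation_method="dual human review",
        version="v2.1.0",
        validated=True,
    )
    # Serialise to JSON so the artefacts are durable and retrievable.
    print(json.dumps({"risk": asdict(record), "data": asdict(source)},
                     default=str, indent=2))
```

Serialising such records, rather than keeping them in slide decks, is what turns a one-off assessment into the durable, retrievable artefact auditors can actually inspect.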
Model development must be accompanied by versioning and change logs that demonstrate how outputs evolve and how controls persist across iterations. Human oversight must be clearly established, especially for high‑risk systems: auditors will want records of review procedures, human decision points, exception handling and escalation logs. Operational monitoring then comes into play: once deployed, the AI must be subject to logging, performance tracking, anomaly detection and periodic reassessment, and evidence of this “living governance” is critical. Finally, the audit trails and artefacts themselves must be managed so that they are readily retrievable, secured under role‑based access, and mapped to the relevant articles of the Act.
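To make the monitoring and audit‑trail requirements concrete, here is a minimal sketch of structured prediction logging. Everything in it is hypothetical: the log_prediction function, its field names, and the 0.7 confidence threshold for escalation to human review are assumptions for the example. The point it illustrates is that every inference leaves a timestamped, version‑tagged record that links back to the change logs and can later be retrieved for audit.

```python
import json
import logging
from datetime import datetime, timezone

# One JSON object per line keeps the audit trail machine-readable.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_prediction(model_version: str, input_hash: str,
                   output: float, confidence: float) -> None:
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the event to change logs
        "input_hash": input_hash,        # traceable without storing PII
        "output": output,
        "confidence": confidence,
        # Hypothetical rule: low-confidence outputs are flagged for
        # human review, producing the escalation evidence auditors expect.
        "escalated_to_human": confidence < 0.7,
    }
    audit_log.info(json.dumps(event))

log_prediction("v2.1.0", "sha256:9f1c...", output=0.82, confidence=0.64)
```

Because each record carries a model version, the same trail also evidences the versioning requirement: an auditor can trace any individual output back to the model iteration, and the controls, that produced it.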
Acuity RM Group Plc (LON:ACRM) operates through its wholly owned subsidiary, Acuity, an established provider of risk management services.