Agentic AI Risk Control Framework

In brief

The Agentic AI Risk Control Framework is a not-for-profit standard for managing the enterprise-wide risks associated with integrating non-human workforces into business processes.

We are not interested in just jumping on the AI governance bandwagon, though: we built the Framework to support an industry-wide need, improving the performance and efficiency of client experience through Fetch, the CX Data Collection Agent.

The Framework is important because agent-led automation is accelerating, but without the right controls, firms risk avoidable harm, cost, and reputational damage.

Asset managers, therefore, should evaluate their current agentic AI exposure, adopt a phased control approach and, to stay ahead, register their interest in gaining the earliest access to the framework in October.

Where’s the agentic AI risk control framework?

“No single framework fully addressed the practical risk control needs of agentic AI.”

Agentic AI is exciting because it automates complex, multi-step processes by pursuing goals rather than following rigid instructions, making it more adaptive and durable than traditional software.

Agentic AI matters to asset management firms, therefore, because they perform many multi-step operational processes within a market environment that demands a constant search for efficiency savings.

But for business managers in a regulated landscape where durability is essential, agentic AI raises questions:

  • What are the new risks I will be liable for?
  • What controls should I implement before I let an agent go live in my operations?
  • What effect will these controls have on the value I gain from the agent?

Answering these questions requires an independent agentic AI risk control framework, but when we looked for such a tool, we found only a patchwork of answers:

Authoritative Publications

Strengths, and the factors limiting their potential to be an agentic AI risk control framework

1. ISO 42001. Strengths: Organization-wide AI governance system with terminology, structure, and traceability. Limitations: Limited in operational detail; requires supplementation for hands-on control design.
2. NIST's AI Risk Management Framework. Strengths: Lifecycle-based trust framework with emphasis on risk mapping and stakeholder roles. Limitations: Lighter on detailed control templates or ownership roles.
3. Anthropic's Responsible Scaling Policy. Strengths: Highlights agent containment, oversight, and operational limits. Limitations: High-level policy document with fewer practical implementation steps.
4. The UK's AI Assurance Toolkit. Strengths: Promotes logging, access control, and explainability for assurance. Limitations: Conceptual and policy-centric; lighter on operational mechanisms.
5. Microsoft's Responsible AI Standard. Strengths: Includes practical RAI tools and checklists for enterprise use. Limitations: Focuses on principles and tooling; not tailored to agent-based AI.
6. Meta's CICERO Agent Risk Controls. Strengths: Real-world security controls for a specific AI agent use case. Limitations: Designed for a single product and not easily generalisable to most business workflows.
7. IAPS AI Agent Governance: A Field Guide. Strengths: Strategic governance approach with stakeholder-specific guidance. Limitations: Lighter on controls for day-to-day risk management in business operations.
8. Google's "Approach to AI Agent Security". Strengths: Security model tied to agent architecture layers (tools, memory, orchestration). Limitations: Lighter on specific controls and the link to business risk management.
9. Unit 42: "AI Agents Are Here. So Are the Threats". Strengths: Detailed threat scenarios with specific mitigations across agent lifecycles. Limitations: Primarily security-focused; lighter on broader operational or business risk controls.
10. IBM's "Governing Agentic AI in Financial Services". Strengths: Regulatory framing for financial services with an emphasis on compliance strategy and phased governance recommendations. Limitations: Lighter on control granularity, actionable steps, and links to business value.
11. MIT's "Mapping AI Risk Mitigations". Strengths: A comprehensive taxonomy of AI risks and mitigations across the AI lifecycle. Limitations: Covers all AI at a high level, rather than offering a deep focus on agentic AI controls.
12. Google's AI Agent Security Framework. Strengths: High-level vision for secure agent design grounded in principles like observability, permissions, and defense-in-depth. Limitations: Practitioners still need to operationalize the principles into practical, auditable controls for real-world deployment and compliance.
13. Gartner's AI TRiSM. Strengths: Enterprise-level structure for linking AI trust, risk, and security into governance, monitoring, and compliance workflows. Limitations: Scope is broader than agentic AI, and more depth is needed on the technical, operational, and behavioural controls required to manage agentic risk.
14. Microsoft's whitepaper on Governing AI Agents. Strengths: Comprehensive and realistic description of how to manage agents in the Microsoft environment. Limitations: Lacks the software-agnostic approach and broader risk framing needed from an industry-wide control framework.
15. Anthropic's Framework for Safe and Trustworthy Agents. Strengths: Practical user-facing safety mechanisms within the Anthropic environment. Limitations: Lacks the software-agnostic approach and the breadth of governance and commercial risk mapping needed from an industry-wide control framework.
16. ServiceNow's AI Maturity Index. Strengths: Links AI maturity to governance, workflow orchestration, talent readiness, and ROI measurement. Limitations: Focuses on strategic maturity rather than detailed operational risk controls.
17. European Telecommunications Standards Institute (ETSI) AI Threat Ontology. Strengths: Structured and standardised language and analysis of AI risks across actors, vectors, and assets. Limitations: Lighter on the translation into the operational controls needed for practical actionability.
18. Secure AI Lifecycle (SAIL) Framework. Strengths: Comprehensive, lifecycle-based, AI-level control framework. Limitations: Lighter on the operational, governance, and human-factor controls needed for agentic AI.
19. Prescriptive Guidance on Agentic AI Frameworks, Protocols, and Tools on AWS. Strengths: Practical, production-ready design patterns, agent orchestration blueprints, and infrastructure tooling for the AWS platform. Limitations: Lacks the software-agnostic approach needed from an industry-wide control framework.
20. Adversarial AI Threat Modelling Framework by Kai Aizen. Strengths: An attacker-driven threat model covering tactics, red-team evaluations, and KPIs. Limitations: Lighter on policy, accountability, supplier oversight, continuous monitoring, and audit.
21. OWASP's State of Agentic AI Security and Governance. Strengths: Community-backed baseline, including secure discovery and AI-specific scoring, with an emphasis on runtime control. Limitations: Stops short of enterprise-grade control mappings, protocol-hardening specifics, and auditable evidence requirements; lighter on organizational integration and human factors.
22. Conformity Assessment Procedure (capAI) from Oxford and Bologna universities. Strengths: General governance and conformity assessment process for all high-risk AI systems under the EU AI Act. Limitations: Lacks the catalogue of granular, technical, and risk-specific controls needed to ensure safe management of AI agents.

We have benefited greatly from each of these works, but the need for a generally accepted, software-agnostic industry standard remains, because its absence is allowing confusion about agentic AI to thrive:

  • Some firms may rush ahead too fast and hurt themselves. In fact, early evidence from MIT indicates this is already happening. 
  • Others may hesitate and fall behind unnecessarily.

Critics raise concerns, but of the top 10 voiced on LinkedIn in the first six months of 2025, seven would apply to any innovation, not just agents, and all ten are controllable.

Risk control, therefore, is a better response than either rushing in or hesitating.

The top 10 concerns about agentic AI raised in LinkedIn posts in January-June 2025:

1. “AI agents don’t behave like traditional software … they can act in unexpected ways and produce incorrect results.” Risk: Unpredictability & Errors. Specific to agentic AI: ✓. Controllable: ✓.
2. “Systems involving multiple agents can be difficult to troubleshoot. When something breaks, it’s not always clear why.” Risk: Lack of Visibility & Difficulty Troubleshooting. Specific to agentic AI: ✓. Controllable: ✓.
3. “Cost: running advanced AI agents isn’t cheap … expenses can add up … especially if ROI isn’t defined.” Risk: Cost & Resource Overheads. Specific to agentic AI: ✓. Controllable: ✓.
4. “Decommissioning… how do you ‘fire’ an AI agent … revoke credentials… ensure final state is securely archived?” Risk: Credential & Identity Management. Specific to agentic AI: ✘. Controllable: ✓.
5. “If not properly secured, an agent could be exploited to extract confidential business strategies … risk of a breach.” Risk: Security & Data Leakage. Specific to agentic AI: ✘. Controllable: ✓.
6. “Without telemetry, agents operate without the context they need … integration of telemetry is the gap.” Risk: Dependency on Integration & Data Quality. Specific to agentic AI: ✘. Controllable: ✓.
7. “If an agent relies on third-party tools or APIs and those services change or go down, the agent may stop functioning.” Risk: Vendor / API Instability. Specific to agentic AI: ✘. Controllable: ✓.
8. “Monitoring frameworks… intervention protocols… cross-functional oversight… continuous learning.” Risk: Governance & Monitoring Limitations. Specific to agentic AI: ✘. Controllable: ✓.
9. “Hardest part … is getting people to change how they work.” Risk: Operational Resilience & Change Management. Specific to agentic AI: ✘. Controllable: ✓.
10. “Risk of diminishing human cognitive abilities … AI used as tool, not replacement.” Risk: Over-Reliance & Skill Degradation. Specific to agentic AI: ✘. Controllable: ✓.

Agentic risk control will separate firms

“Treat the concerns like any other risk: break them down, demystify them, and start managing them.”

As a benchmarking firm, Accomplish’s bible is ISO 27001 on information security, which runs through everything we do, including regular independent audits that our clients request during due diligence. So, when we embarked on our own agentic AI transformation, we looked for some structure to help us identify the risks and controls we would need.

On finding the patchwork of results above, we began building a proprietary agentic AI risk control framework to ensure we build agents safely. After several iterations, it now lets us choose from ~150 controls assigned to ~30 risks across 5 control groups: 

A.  Agent Behaviour and Control.
B.  Scaling and Multi-Agent Risks.
C.  System Security and Threats.
D.  Governance and Integration.
E.  Human Factors.
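
To make this concrete, the sketch below shows one way the group-risk-control hierarchy could be expressed as a machine-readable register. It is a minimal illustration in Python; the class names, identifiers, and example control are our assumptions for this article, not the framework's published schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class ControlGroup(Enum):
    """The framework's five control groups (A-E)."""
    AGENT_BEHAVIOUR = "A"   # Agent Behaviour and Control
    SCALING = "B"           # Scaling and Multi-Agent Risks
    SECURITY = "C"          # System Security and Threats
    GOVERNANCE = "D"        # Governance and Integration
    HUMAN_FACTORS = "E"     # Human Factors

@dataclass
class Control:
    control_id: str
    description: str

@dataclass
class Risk:
    risk_id: str
    name: str
    group: ControlGroup
    controls: list[Control] = field(default_factory=list)

# One illustrative entry: risk 01 from the table further below,
# with a hypothetical control attached to it.
register = [
    Risk(
        risk_id="A-01",
        name="Unpredictability and Errors",
        group=ControlGroup.AGENT_BEHAVIOUR,
        controls=[Control("A-01-C1", "Validate agent outputs against an expected schema before use")],
    ),
]
```

A structure like this is what makes "~150 controls assigned to ~30 risks" auditable: every control is addressable, and coverage per group can be checked programmatically.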

Without this structure, we'd have lost time and money through avoidable errors, slower efficiency savings, and audit stress. The same applies to asset management firms: their risk management practices often do not yet account for agent-specific behaviours, and, as large organizations, they stand to benefit especially from this structure. Accordingly, we believe the winning response to the concerns above is to treat them like any other risk: break them down, demystify them, and start managing them.

Accomplish’s Agentic AI Risk Control Framework

The primary purpose of this article is to demonstrate how Accomplish controls agentic AI, because this is crucial to our role in training and testing Fetch – your CX Data Collection Agent.

However, we are also pleased to contribute the framework freely and openly to the broader body of knowledge on agentic AI risk management. To achieve this, we have transferred its IP to a not-for-profit entity called Agentic Risk IP. 

Holistic, flexible, applicable to any platform, and protocol-agnostic, the Agentic AI Risk Control Framework’s five key features are helping us transform our business processes in a structured and practical way:

  1. Commercial value orientation – it maps every risk to a tangible business harm and the time and money saved through mitigation. This discourages abstract theory and encourages controls that are written in operational language and that align with real workflows.
  2. Lifecycle mapping – it attributes each control across the stages of an agent’s life: design, training, testing, implementation, and maintenance. Building this bridge between engineering and ongoing governance is important because different risks materialise at different stages, reducing the potential for painful ‘tail’ risks. These layered safeguards create defense-in-depth by employing the principles of ‘secure-by-design’ thinking.
  3. Cross-functional and enterprise-wide accountability  – governing autonomous AI agents requires new lines of accountability and validation, so the framework assigns each control to an owner. It often calls on more than one owner to take responsibility for managing a risk, emphasising the importance of cross-functional governance. This makes responsibility traceable, keeping humans in the loop, enabling collaboration, and avoiding finger-pointing. It also helps firms assign clear responsibilities across the Three Lines of Defence:
       a. Business owners define acceptable use and validate outcomes.
       b. Technology leads implement controls and ensure technical integrity.
       c. Risk and compliance functions provide oversight and assurance.
  4. Prioritisation – it encourages prioritisation by assessing each risk’s likelihood and impact, while also stipulating a ‘minimum viable control set’. This reminds users that while the full set of controls is a risk-based decision, ‘no control’ is not an acceptable choice.
  5. Continuous improvement – it flags each control’s loggability, monitorability, and reviewability, which strengthens accountability and facilitates transparency and post-incident learning.
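
Features 2 to 5 lend themselves to simple metadata on each control. The sketch below shows one hedged interpretation: a lifecycle stage, named owners, a likelihood-times-impact priority score, a minimum-viable flag, and the loggability attributes. Field names and the 1-5 scales are assumptions for illustration, not the framework's definitive schema.

```python
from dataclasses import dataclass
from enum import Enum

class Lifecycle(Enum):
    """Stages of an agent's life to which a control can be attributed."""
    DESIGN = "design"
    TRAINING = "training"
    TESTING = "testing"
    IMPLEMENTATION = "implementation"
    MAINTENANCE = "maintenance"

@dataclass
class ControlRecord:
    control_id: str
    description: str
    stage: Lifecycle
    owners: list[str]      # often more than one, for cross-functional governance
    minimum_viable: bool   # part of the 'minimum viable control set'?
    loggable: bool         # attributes supporting continuous improvement
    monitorable: bool
    reviewable: bool

def risk_priority(likelihood: int, impact: int) -> int:
    """Rank risks by likelihood x impact (assumed 1-5 scales)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

def select_controls(controls: list[ControlRecord], include_optional: bool) -> list[ControlRecord]:
    """'No control' is not an acceptable choice: minimum-viable controls always apply."""
    return [c for c in controls if c.minimum_viable or include_optional]
```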

In keeping with open-source standards, we will also commence a public consultation on it in October and place the resulting version under the future direction of an independent Governing Council. Register your interest below in gaining the earliest access and (if you wish) contributing to it through the consultation.

The value of managing agentic AI risks

This agent governance framework gives firms practical and value-adding AI operational controls aligned to real-world agent behaviour.

Agentic AI risks and the value of controlling them

Agentic AI Risk
✘ Tangible business harm
✓ Value of the operational controls
01. Unpredictability and Errors
✘ Agent takes unsafe or unintended actions.
✓ Prevent reputational damage and legal exposure.
✓ Save time debugging, avoid reactive incident handling.
02. Lack of Visibility and Difficulty Troubleshooting
✘ Agent may fail silently – signalling no errors while producing incorrect or incomplete outcomes.
✓ Enable root cause analysis and forensic clarity.
✓ Reduce downtime during investigations.
03. Prompt instability and regression
✘ Agent performance worsens over time due to uncontrolled prompt changes.
✓ Maintain consistent quality and avoid silent regressions.
✓ Prevent time lost due to output instability.
04. Inconsistent Reasoning Chains
✘ Agent may lose logical coherence in the middle of multi-step reasoning, contradicting its earlier logic.
✓ Structuring prompt chains, logging intermediate steps, and validating outputs for logical consistency builds confidence.
✓ Minimise the time needed to enact fallback options.
05. Bias and Fairness
✘ Legal or reputational damage due to biased, unfair, or harmful outputs.
✓ Preserve social and legal licence to operate; avoid fines.
✓ Prevent lengthy remediation and disclosure processes.
06. Retrieval drift in RAG systems
✘ Strategic drift and missed goals as the retrieval system becomes misaligned.
✓ Avoid decisions made on outdated data sources.
✓ Avoid wasting time investigating or retraining agents due to unsatisfactory inputs.
07. Inter-agent orchestration and communication
✘ Multiple agents get confused or overlap in responsibilities.
✓ Avoid cross-talk or function duplication.
✓ Save time resolving misrouted tasks.
08. Overlapping or Conflicting Agent Actions
✘ Lack of boundaries and lines of authority between one agent and another risks chaos.
✓ Clear boundaries and decision-making authorities enable the organisation to function smoothly, minimising its regulatory and reputational risks.
✓ Save time from remediating both the output of conflicting decisions and the agentic environment that caused it.
09. Agent Identity Confusion or Impersonation
✘ It may become unclear which agent performed which action, or whether the agent was authentic.
✓ Ensure each agent maintains a verifiable and unique identity.
✓ Ensure mutual authentication and token verification.
10. Orchestrator Subversion
✘ Orchestrator failure leads to process breakdowns, compliance issues, or financial loss.
✓ Prevent costly mis-execution and reputational damage by enforcing valid agent workflows.
✓ Save time on error investigation, manual overrides, and recovery from cascading agent failures.
11. Uncontrolled Agent Replication
✘ Untraceable outputs make it impossible to justify decisions or prove compliance. Loss of stakeholder trust. Operational and remediation costs.
✓ Enable auditability, defensibility, and transparency in regulated environments.
✓ Minimise back-and-forth between compliance / legal and operational teams.
12. Unauthorised Data Access
✘ Excessive or unmonitored data access creates exposed attack surfaces that raise privacy and security violation risks.
✓ Protect the confidentiality of training and operational data by treating agents as insiders and managing their identities, credentials, and privileges through a Non-Human Identity (NHI) lifecycle.
13. Unauthorised Data Modification
✘ Degraded agent performance and security by tampering, memory poisoning, or modifying training or operational datasets.
Protect the integrity of your data by requiring:
✓ The same information-handling capability from an agent as you do from an employee.
✓ Additional agent-specific controls to protect data integrity and detect and respond to unauthorised modification.
14. Protocol Related Risks
✘ Third-party MCP / A2A servers could contain basic authentication / command-injection bugs, while mixing MCP / A2A / ACP versions could weaken or remove security features.
✓ Onboard servers as you would a new vendor and, if you use multiple protocols, manage version control strictly.
15. Malicious User Prompt
✘ A malicious instruction may trick an agent into unintended behaviour or to talk to a rogue agent.
✓ Restrict prompt sources, pathways, and size.
✓ Confirm integrity of incoming data before execution.
✓ Remove unauthorized instructions.
✓ Monitor for attacks.
16. Agent Fails Under Attack
✘ An agent may be vulnerable to prompt-based or memory-based attacks that cause it to perform illicit tasks.
✓ Preventive and detective controls for known attack vectors and an open risk posture enable comparison, benchmarking, and improvement.
17. Incident Management
✘ Slow detection and response to agent-induced breaches or misuse create uncontrolled damage and repair costs.
✓ Reduce cost of uncontrolled exposure or delayed containment by detecting issues early and containing damage.
✓ Reduce your 'mean time to respond' and the extent of rework needed.
18. Agent Lifecycle Management
✘ Agents may enter production without adequate change control, version tracking, or deprecation procedures.
✓ Manage and govern the non-human identity (NHI) lifecycle, from its initial conception, through to version management, to its eventual retirement.
19. Cost and Resource Overheads
✘ Agent consumes excessive resources or budget.
✓ Prevent uncontrolled cost escalation through pre-launch stress tests, usage limits, and resource consumption monitoring.
20. Dependency on Integration and Data Quality
✘ Poor data custody, unclear data provenance or stale data compromises performance and decision reliability.
✓ Validating data pipelines, enforcing data schema, and monitoring data health will benefit the entire organisation (agentic and non-agentic).
✓ Save time by designing agents to identify and flag data integrity issues.
21. Collateral Damage
✘ An agent could make irreversible changes or deletions to data, infrastructure, or services.
✓ Applying access controls and fallback procedures will protect your organisation's infrastructure from inadvertent harm.
✓ Save time by avoiding the need to reinstate, recreate, or repair collateral damage.
22. Vendor / API Instability
✘ Agents tied to external APIs or vendor tools may break if dependencies change, causing downtime or errors in internal processes.
✓ Strengthen your process by treating external relationships like any other supplier risk, version-locking APIs, simulating downtime in pre-launch testing, and building resilient fallback workflows.
✓ Avoid time spent on firefighting when external services change unexpectedly or break.
23. Accountability, Explainability, and Monitoring
✘ Insufficient oversight, legal accountability, transparency, or explainability could risk policy, audit, or regulatory breach.
✓ Human ownership, defined responsibilities, and layered governance controls.
✓ Real-time monitoring, explainability logs, kill switches, and independent reviews of agent decisions.
24. Board-Level Oversight and Direction
✘ Strategic or enterprise-level opportunities and threats from agentic AI go unidentified at board level.
✓ Strategic impact assessment, up to date AI risk appetite statement, regular discussion of new and ongoing agentic risks and scenarios.
✓ Agentic Risk Board Evidence Pack.
✓ Clear decision-making and direction.
25. Regulatory Risk
✘ Breaches of rules like the EU’s AI Act could lead to fines, market access barriers, legal action, and reputational damage.
✓ Role-specific governance and clear accountability for compliance tasks across the organization.
✓ Technical documentation, conformity assessments, and audit logs to demonstrate the transparency, traceability, and monitoring required for CE marking.
26. External Disclosures
✘ Actual AI risk posture may be inconsistent with public filings, regulatory disclosures, or investor communications – leading to legal exposure, reputational harm, or regulatory penalties.
✓ Integrate agentic AI into risk management external reporting and disclosures.
27. Change Management
✘ Organisation lacks the readiness to operationalise agent use, with overlaps or conflicts between manual and agentic tasks causing duplicated and wasted effort.
✓ Smooth change management avoids task conflicts that confuse staff and prevent launches.
✓ Save time by streamlining agentic and human processes.
28. Skill Degradation
✘ Overuse of familiar prompts, agents, or LLM outputs could narrow a user’s perspective, investigative range, and judgement.
✓ Institute policies that encourage users to step in periodically and to vary their prompts.
29. Staff Over-Reliance
✘ Dependence on agents could create complacency around oversight, foregoing critical review and stifling challenge.
✓ Champion ‘AI challenge’ roles.
✓ Document why HITL reviews accepted outputs.
✓ Record non-consensus risk assessments.
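
To show how a single row of this table can become something testable, here is a minimal sketch of a pre-execution gate for risk 15 (Malicious User Prompt): an allow-list of prompt sources, a size limit, and a naive injection check. The source names, threshold, and patterns are placeholders for illustration; real deployments would need richer detection and monitoring.

```python
import re

ALLOWED_SOURCES = {"internal_ui", "approved_api"}  # restrict prompt sources and pathways
MAX_PROMPT_CHARS = 4_000                           # restrict prompt size
SUSPECT_PATTERNS = [                               # naive injection markers, illustration only
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def accept_prompt(prompt: str, source: str) -> bool:
    """Return True only if the prompt passes every gate before the agent acts on it."""
    if source not in ALLOWED_SOURCES:
        return False  # unapproved pathway
    if len(prompt) > MAX_PROMPT_CHARS:
        return False  # oversized input
    if any(p.search(prompt) for p in SUSPECT_PATTERNS):
        return False  # likely injected instruction; remove or quarantine
    return True
```

Each rejection is also an event worth logging, which feeds the detective control in the same row ("Monitor for attacks").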

A framework that evolves with agentic AI

By creating a deeper controls layer, the agentic AI control framework is designed to complement (not replace) established high-level standards such as ISO 42001 and the NIST AI Risk Management Framework, as well as evolving ones such as those from MIT and agentic-level ones that focus more narrowly on orchestration. Its structure maps cleanly onto existing risk registers and control libraries, helping firms integrate agent-specific risks into enterprise-wide governance.

And by structuring agent-related risks into explicit categories and controls, the framework helps firms satisfy internal control obligations, know when they have conducted sufficient testing, manage third-party and model risks, and show auditors and regulators that agentic automation is subject to robust oversight.

In particular, it lets regulated firms demonstrate effective operational risk management, as its controls reflect the expectations outlined in regimes such as MiFID II, UCITS, the EU AI Act, and the SEC’s evolving stance on AI governance.

As agentic AI evolves, you will need a control framework that keeps pace with the technology. That is the purpose of the versioning, the independent Governing Council, and the protocol-specific annexes for areas like the Model Context Protocol (MCP), which we will integrate into the master control set once the protocol stabilises.

Lastly, controls are only part of a broader fabric of governance, which should also include business case decisions, risk management, testing, and ongoing oversight. To help managers navigate these steps for agentic AI, the framework also includes agent-specific templates for each of these areas.

How to get started

Our approach was to start with the simplest use cases we could find and focus not just on building agents, but on building agentic capability.

To help us make early choices about where to deploy agents, we developed a business case template and a high-level risk identifier. We then documented our build and test procedures and strengthened them with every new agent.

These are all parts of the framework, which is currently on v2.84. Once it reaches v3.0 (expected in October) we will make it publicly available so early adopters of agentic AI in asset management can benefit from it immediately.

To facilitate this, we’re currently building out www.agenticrisks.com, through which we will conduct the consultation.

Frequently asked questions

1. What is agentic AI risk control?

Agentic AI risk control refers to the structured governance of AI systems that act autonomously toward goals. Unlike traditional automation, agents operate with higher independence – which introduces unique risks. This framework equips firms with purpose-built controls to identify, constrain, and monitor those behaviours.

2. How is this framework different from other AI assurance frameworks?

Most AI assurance frameworks either stay at a high level (i.e. all AI), or, if they specialize in agentic AI, they focus on one aspect, for example, model performance, security, or ethics. The Agentic AI Risk Control Framework is enterprise-wide and tailored to agent-led automation, with operational, organizational, and executive controls that address not just agentic performance but its full integration into an organization.

3. What are agentic automation controls?

Agentic automation controls govern how agents use tools, recall memory, and make decisions across steps. These controls matter because agents can improvise in ways that make traditional if-then logic insufficient. Without explicit oversight, firms risk process errors, regulatory exposure, or brand damage.
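
As a sketch of what governing tool use can look like in practice, the snippet below gates every tool call through a deny-by-default, per-agent allow-list and writes each decision to an audit log. The agent and tool names are hypothetical, and this is one possible pattern rather than the framework's prescribed mechanism.

```python
import datetime
import json

# Per-agent tool allow-list; anything not listed is denied by default.
TOOL_PERMISSIONS = {
    "cx-data-agent": {"read_crm", "run_report"},
}

def authorise_tool_call(agent_id: str, tool: str, audit_log: list[str]) -> bool:
    """Deny-by-default tool gate that records every decision for later review."""
    allowed = tool in TOOL_PERMISSIONS.get(agent_id, set())
    audit_log.append(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "decision": "allow" if allowed else "deny",
    }))
    return allowed

# Usage: a call to an unlisted tool is refused, and the refusal is logged.
log: list[str] = []
assert authorise_tool_call("cx-data-agent", "read_crm", log) is True
assert authorise_tool_call("cx-data-agent", "delete_records", log) is False
```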

4. Has the framework been compared with other published frameworks?

Yes – the framework includes a published AI risk framework comparison, benchmarking it against tools from Google, IBM, Unit 42, and the IAPS Field Guide. Unlike most, this framework focuses on the business realities of implementing agentic AI in financial services.

5. What risks do generative agents present, and how are they controlled?

Generative agents – such as those used in client interactions or research – often present risks like hallucination, misuse of external tools, or unpredictable output. The framework maps these behaviours to specific risk categories and provides practical risk controls for generative agents across the agent lifecycle: from design to maintenance.


Adam Grainger

Behavioral analytics | Client experience | Asset management

Agentic AI Risk Control Framework

Thank you for your interest in the Agentic AI Risk Control Framework. Please fill in this form and, in October, you will be the first to receive a copy of the framework.
