How Leading Institutions Build AI Agent Workflows in AML/KYC Compliance
Across banks and credit unions, top-down mandates to implement AI are commonplace. For compliance teams, that means navigating regulatory expectations around explainability and ensuring agentic AI systems operate as intended.
In our recent fireside chat on AI risk management for AML/KYC compliance, Ceri Lawley (CCO, International Finance Corporation), Christina Rea-Baxter (CEO, RayCor Consulting), Matthew Hunt (COO, Ground Truth Intelligence), and Peter Piatetsky (CEO, Castellum.AI) engaged in a candid discussion about the real-world risks, limitations and opportunities of AI agents in financial crime compliance.
Getting Started with AI Agents: Treat It Like Onboarding
"AI is not creating a new type of work, at least in compliance. It is doing specific jobs that already have job descriptions, that already have training programs." – Peter Piatetsky
This means providing proper system access, ensuring ongoing training and updating prompting when regulations change. Well-trained AI adjudication agents also give smaller financial institutions an edge, helping them level the playing field with larger competitors.
AI Rollout: Start Conservative, Scale Thoughtfully
Christina Rea-Baxter from RayCor recommended starting with tasks like duplicate alert elimination and spot-checking under defined thresholds, with "regular sampling and regular QC and everything being documented." For high-stakes decisions, human judgment remains essential.
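The conservative starting tasks described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Alert` fields, sampling rate, and function names are assumptions, not any vendor's API): duplicate alerts are screened out deterministically, and a documented random sample of the remainder is pulled for human QC.

```python
import random
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    entity: str
    watchlist_entry: str
    score: float

def deduplicate(alerts):
    """Split alerts into first-seen and duplicate (entity, watchlist_entry) pairs."""
    seen, unique, duplicates = set(), [], []
    for a in alerts:
        key = (a.entity, a.watchlist_entry)
        (duplicates if key in seen else unique).append(a)
        seen.add(key)
    return unique, duplicates

def qc_sample(closed_alerts, rate=0.1, seed=7):
    """Select a reproducible random sample of agent-closed alerts for human QC.

    A fixed seed keeps the sample auditable, supporting the 'everything
    being documented' requirement.
    """
    rng = random.Random(seed)
    k = max(1, int(len(closed_alerts) * rate))
    return rng.sample(closed_alerts, k)
```

The fixed seed and explicit sampling rate are design choices aimed at auditability: a reviewer can regenerate exactly the same QC sample later.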
Matthew Hunt from Ground Truth Intelligence pointed out where human expertise is irreplaceable: "The asymmetry around where there's the most risk tends to be in the least digitized jurisdictions, where information is not necessarily all available online and accessible to agents and models, but is offline, is held in the heads of people who are well-placed and understand those local markets." Critical information lives in the knowledge of local experts who understand specific markets, cultural contexts and jurisdictional nuances that AI agents cannot access from online sources alone.
How Analysts and AI Agents Collaborate in AML/KYC Workflows
Successful AI implementation creates a culture where human expert investigators work alongside agentic AI that automates rote tasks, enabling compliance teams to focus on complex, high-judgment work. In practice, responsibilities can be distributed as follows:
AI handles information gathering, summarization, initial analysis, and screening out redundant alerts.
Humans retain responsibility for final determinations and complex judgments requiring cultural or contextual expertise.
AI agents become better at preliminary adjudications as they are continually audited and updated by human analysts. Analysts reclaim time for higher-value investigative work.
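The division of responsibilities above amounts to a routing decision on each alert. The sketch below is illustrative only (the thresholds, parameter names, and the `jurisdiction_digitized` flag are assumptions for the example, not a prescribed policy): low-risk alerts in well-digitized jurisdictions can be auto-adjudicated by the agent, while everything else goes to a human analyst.

```python
def route(alert_score: float,
          jurisdiction_digitized: bool,
          auto_close_threshold: float = 0.2) -> str:
    """Decide whether an alert can be adjudicated by the AI agent
    or must be escalated to a human analyst."""
    if not jurisdiction_digitized:
        # Key information lives offline with local experts,
        # so the agent lacks the context to decide.
        return "human"
    if alert_score < auto_close_threshold:
        return "ai_auto_close"
    return "human"
```

Keeping the threshold an explicit parameter lets the second line tighten or relax it as audit results come in, rather than burying the policy inside the agent.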
Governance Frameworks and Ongoing Monitoring
The focus today is on AI agents that can take actions autonomously, requiring new governance approaches. The panel outlined how AI governance fits within the three lines of defense model:
First line: Business teams on the front line who are customer-facing and handle day-to-day operations.
Second line: Compliance oversight function providing specialized expertise and a second pair of eyes. This is where human-AI collaboration workflows are designed and monitored.
Third line: Internal audit conducting periodic reviews to ensure agentic AI systems work as intended.
Integrating AI agents into this model means the second line designs risk-based triggers for when agents escalate to humans, while the third line audits whether those handoffs are effective.
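A second-line escalation policy of this kind can be expressed as explicit, reviewable configuration rather than logic hidden inside the agent. The sketch below is a hypothetical example (trigger names, fields, and values are all assumptions for illustration): the second line defines the triggers, and the third line can audit whether cases matching them actually reached a human.

```python
# Hypothetical second-line escalation triggers (names and values illustrative).
ESCALATION_TRIGGERS = {
    "score_above": 0.8,        # high-risk match score
    "pep_involved": True,      # politically exposed person on the alert
    "low_digitization": True,  # jurisdiction where key records are offline
}

def must_escalate(case: dict) -> bool:
    """Return True if any second-line trigger requires human review."""
    if case.get("score", 0.0) >= ESCALATION_TRIGGERS["score_above"]:
        return True
    if case.get("pep") and ESCALATION_TRIGGERS["pep_involved"]:
        return True
    if case.get("low_digitization") and ESCALATION_TRIGGERS["low_digitization"]:
        return True
    return False
```

Because the triggers live in one declarative structure, internal audit can diff them over time and test agent logs against them directly.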
The panel also identified regulatory risk as a primary concern, with organizations facing a fragmented regulatory environment: from the EU's prescriptive AI Act, to the US's decentralized model and non-binding NIST frameworks, to inconsistent state-level regulations within the US. Ceri Lawley from the IFC noted that the industry eventually worked through similar fragmentation in data privacy regulation, and that common standards are likely to emerge over time.
Until there is consensus on standards, institutions operating across multiple jurisdictions must build flexible compliance frameworks that accommodate different regulatory approaches while maintaining consistent internal governance and quality assurance practices.
Best Practice: Align Technology With Expertise
Responsible AI implementation requires vendors who can explain their technology completely, institutions that ask hard questions and governance frameworks that ensure human oversight at critical decision points.
As the industry navigates this transformation, the organizations leading the way are building AI systems with transparency at their core—making explainability and robust governance non-negotiable from the start.