AI Agents in AML/KYC Compliance: Insights from Industry Experts
As agentic AI moves from theoretical concepts to practical application, it presents compliance teams with both new opportunities and new risks. In our recent webinar, Castellum.AI CEO Peter Piatetsky spoke with industry experts Anna Chenoweth (Head of Sanctions at Coinbase), Matthew Hunt (Co-Founder and COO at Ground Truth Intelligence), and Lucas Chapin (Head of Data at Hummingbird) about best practices for evaluating and effectively deploying agentic AI solutions in AML/KYC workflows.
The panel focused on real-world use cases, implementation challenges, regulatory expectations, and strategies to ensure both performance and accountability. Here are the key takeaways for compliance teams exploring AI agents.
Key Takeaways
Agentic AI in compliance: where it works and why
AI agents show the most value when used to augment, not replace, human analysts. Instead of trying to make end-to-end decisions, these agents are designed to automate specific high-friction parts of the compliance workflow that slow down operations. Some of the highest-impact agentic use cases include:
Drafting SAR narratives using structured case data
Enriching alerts with relevant customer, watchlist, or transactional context
Assisting analysts during EDD/CDD by reducing time spent on manual research
Managing alert queues: triaging, summarizing, and recommending action paths
In each case, AI agents reduce low-value work and support faster, more consistent decisions. When properly integrated, they can help teams scale capacity, reduce alert fatigue, and increase reporting quality without sacrificing control.
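To make the first use case concrete, here is a minimal Python sketch of drafting a SAR narrative from structured case data. Everything here is illustrative: the `CaseData` fields and the `call_llm` helper are hypothetical stand-ins for your own case schema and approved model endpoint, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class CaseData:
    """Structured inputs an analyst would otherwise re-type into a narrative."""
    subject_name: str
    account_id: str
    activity_summary: str
    total_amount: float
    date_range: str
    red_flags: list[str]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your institution's approved LLM endpoint."""
    raise NotImplementedError("wire this to your approved model provider")

def draft_sar_narrative(case: CaseData) -> str:
    """Produce a draft for human review -- never for direct filing."""
    prompt = (
        "Draft a SAR narrative using ONLY the facts below. "
        "Do not speculate or add details that are not in the data.\n"
        f"Subject: {case.subject_name} (account {case.account_id})\n"
        f"Activity: {case.activity_summary}\n"
        f"Total: ${case.total_amount:,.2f} across {case.date_range}\n"
        f"Red flags: {'; '.join(case.red_flags)}"
    )
    return call_llm(prompt)  # an analyst reviews and edits before anything is filed
```

Constraining the prompt to the supplied facts keeps the agent in the augment-not-replace role the panel described.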
Onboard the agent like you’d train a new hire
One of the core messages from the webinar was that agent performance is entirely dependent on how it’s trained and onboarded.
To ensure accuracy and reliability, AI agents must be integrated into workflows with the same level of detail used to train new compliance staff. That means:
Training it on your internal SOPs, decisioning protocols, and documentation standards
Ensuring it has access only to verified, high-quality data sources
Embedding the agent directly into workflows so it can reference additional context for alert decisioning
Regularly reviewing output to correct model drift and reinforce expectations
One panelist highlighted that general-purpose models fail quickly in regulated environments. Without tailored context and clear guardrails, agents can generate inconsistent or even misleading outputs. To avoid this, institutions must design AI systems that are workflow-specific, context-aware, and constantly quality-assured.
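One way to put that onboarding discipline into practice is to assemble the agent's context exclusively from your SOPs and verified data sources, and fail loudly when anything unvetted slips in. A minimal sketch, assuming hypothetical source names and a hypothetical `build_agent_context` helper:

```python
# Sources the compliance team has verified, mirroring how a new hire is
# handed SOPs and approved systems rather than the open internet.
APPROVED_SOURCES = {"core_banking", "case_manager", "watchlist_screening"}

def build_agent_context(sop_text: str, evidence: dict[str, str]) -> str:
    """Assemble the agent's working context from SOPs plus approved evidence."""
    unapproved = set(evidence) - APPROVED_SOURCES
    if unapproved:
        raise ValueError(f"unverified data sources in context: {sorted(unapproved)}")
    sections = [f"## SOP\n{sop_text}"]
    sections += [f"## {source}\n{content}" for source, content in evidence.items()]
    return "\n\n".join(sections)
```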
AI workflows require oversight, modularity and transparency
The most effective deployments rely on a modular, transparent architecture. Instead of relying on a single, monolithic model, forward-thinking compliance teams are deploying multiple specialized agents, each trained to handle a specific, well-defined task: alert reviews, SAR filings, and so on.
Moreover, agents must be designed with intentional interruption points, where a human can review, supplement, or override AI output. Every action should be explainable, documented, and reversible. This level of oversight is critical, not just for internal governance, but to meet regulatory expectations around accountability and transparency.
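A minimal sketch of that modular, human-in-the-loop architecture, with each specialized agent as one step and intentional interruption points between them (the step names and wiring here are illustrative, not a reference design):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One specialized agent task, optionally gated by human review."""
    name: str
    run: Callable[[dict], dict]
    needs_human_review: bool

def run_case(steps: list[Step], case: dict, review_queue: list) -> dict:
    """Run modular agents over a case, pausing at intentional interruption points."""
    for step in steps:
        case = step.run(case)
        case.setdefault("audit_log", []).append(step.name)  # every action documented
        if step.needs_human_review:
            # Interruption point: a human can review, supplement, or override here.
            review_queue.append((step.name, dict(case)))  # snapshot keeps it reversible
    return case

# Illustrative wiring: alert enrichment runs unattended, SAR drafting is gated.
steps = [
    Step("enrich_alert", lambda c: {**c, "context": "watchlist match details"}, False),
    Step("draft_sar", lambda c: {**c, "narrative": "draft text"}, True),
]
```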
Traditional Automation vs. Agentic AI for FinCrime Compliance
Rising regulatory focus on AI
AI doesn't reduce your regulatory obligations, but it does change the rules of engagement. Compliance leaders still need to answer the same questions auditors already ask about how compliance decisions are made and documented, now with an added layer of scrutiny around AI agent outputs.
Expect regulators to ask:
How was the AI model developed, trained, and validated?
How does the system follow and document adherence to your SOPs?
How do the AI’s outputs compare in quality to those of human analysts?
How do you conduct ongoing quality assurance and model validation post-deployment?
What processes are in place to ensure the quality and accuracy of the data powering your AI?
When it comes to regulatory alignment, explainability is non-negotiable. If you are unable to clearly explain how an AI agent reached its conclusion or confirm that it followed established protocols, that agent shouldn’t be in your compliance decision workflow.
Additionally, you must demonstrate responsible change management, including staff training, internal documentation, risk assessments, and proactive communication with regulators. The bar is higher, not lower, once AI enters your workflow.
Strong guardrails are a must for responsible AI use
Agentic AI is powerful, but it must be constrained by design. Not all workflows are suitable for automation. In high-risk or high-subjectivity scenarios, human oversight is critical.
Panelists emphasized that to reduce risk and maintain quality, compliance teams should:
Start by deploying AI on well-understood, repeatable tasks with clearly defined data inputs and procedures, such as L1 and L2 alert reviews
Avoid acting on agent decisions without human review; keep agents in a recommending role rather than a deciding one
Monitor agents for signs of bias, hallucination, or logic drift
Use version control and testing to manage changes over time
Integrating AI agents into your workflow raises the expectations for quality. Every output must be consistent, auditable, and benchmarked, not just against the AI’s past performance, but also against previous human-generated results.
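One simple way to operationalize that benchmark is a release gate: a new agent version ships only if its QA scores hold up against the human-generated baseline. A minimal sketch, with an illustrative tolerance value:

```python
import statistics

def passes_benchmark(agent_scores: list[float], human_baseline: list[float],
                     tolerance: float = 0.02) -> bool:
    """Gate a new agent version: its mean QA score must not fall more than
    `tolerance` below the baseline from human-generated work.
    The 0.02 default is illustrative; set yours from your own QA rubric."""
    return statistics.mean(agent_scores) >= statistics.mean(human_baseline) - tolerance
```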
Invest in internal readiness: AI training for staff
Rolling out an AI agent is more than just a technical upgrade. It’s a fundamental shift in an institution’s compliance, operations, and culture. Without thorough internal preparation, institutions risk creating blind spots for both compliance teams and auditors. Before deployment, it’s critical to ensure:
Internal SOPs have been updated to reflect agentic workflows
All users interacting with the agent are trained on its capabilities and limitations
Quality assurance processes are in place to review, score, and log agent output
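On the last point, here is a minimal sketch of what reviewing, scoring, and logging agent output can look like in practice: each human-reviewed output becomes an append-only QA record. The schema, rubric, and file path are all illustrative assumptions.

```python
import datetime
import json

def log_agent_output(case_id: str, agent_version: str, output: str,
                     reviewer: str, score: float,
                     path: str = "agent_qa.jsonl") -> None:
    """Append a human-reviewed, scored agent output to an audit-ready QA log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "agent_version": agent_version,  # supports version control over changes
        "output": output,
        "reviewer": reviewer,
        "score": score,  # e.g. a 0-1 rubric score assigned by the human reviewer
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```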
Wrapping up
With responsible design and implementation, AI agents can become a force multiplier for compliance teams. The path forward involves careful testing of AI agents in limited settings, validating vendors for accuracy and regulatory adherence, and investing in ongoing staff development for an agentic future.