Financial regulators don't need to wait for new legislation. The SEC, FINRA, and federal banking supervisors have spent decades building enforcement theories around a simple premise: if you deploy a system that causes harm, you were obligated to govern it adequately — and if you didn't, existing law already covers the gap.
Three recent enforcement actions demonstrate exactly how this plays out in practice. None of them involved AI agents. The theories behind all three soon will.
Two Sigma: The Supervision Failure
In 2025, the SEC charged Two Sigma with breaching fiduciary duties after the firm failed to address known vulnerabilities in its algorithmic investment models. A single employee made unauthorized changes to fourteen live models over two years. The resulting losses to clients totaled $165 million. The firm paid $90 million to settle.
The enforcement theory wasn't novel. It was supervision: the firm had identified a material risk, had the ability to remediate it, and didn't. That a quantitative model was at the center of the failure was incidental to the charge. The obligation is to supervise known risks. The medium is irrelevant.
Apply that logic to an AI agent with write access to customer accounts, or one routing trade decisions without a human checkpoint, and the theory transfers without modification.
Brex Treasury: Fitness for Purpose
FINRA fined Brex Treasury $900,000 in 2024 for deploying an automated identity-verification algorithm that was not reasonably designed to verify customer identities. The result: over $15 million in suspicious transaction attempts from accounts that shouldn't have been approved.
The core finding wasn't that automation is impermissible. It was that the system wasn't fit for the purpose it was deployed to serve — and that the firm didn't verify adequacy before deployment.
For agentic AI, the question regulators will ask is identical: was the system reasonably designed for what you asked it to do? If an agent is making recommendations, routing decisions, or executing actions autonomously, the fitness-for-purpose obligation applies. The burden of demonstrating reasonable design falls on the firm.
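What demonstrating reasonable design can look like in practice is an evaluation gate: before an agent receives production authority, it runs against a labeled set of scenarios, and the results are retained as evidence either way. The sketch below is a minimal illustration of that idea; the case structure, the threshold, and the run_agent callable are hypothetical placeholders, not anything drawn from the Brex action.

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class EvaluationCase:
    """A labeled scenario the agent must handle correctly before deployment."""
    case_id: str
    inputs: dict
    expected_outcome: str  # e.g. "approve", "reject", "escalate"

def fitness_gate(
    run_agent: Callable[[dict], str],  # hypothetical: invokes the agent under test
    cases: list,
    required_accuracy: float = 0.99,
) -> dict:
    """Run the agent against labeled cases and keep the result as evidence.

    Deployment proceeds only if accuracy meets the threshold; the record is
    written either way, so the design decision can be shown later.
    """
    if not cases:
        raise ValueError("An empty evaluation set cannot demonstrate fitness.")

    results = []
    for case in cases:
        actual = run_agent(case.inputs)
        results.append({
            "case_id": case.case_id,
            "expected": case.expected_outcome,
            "actual": actual,
            "passed": actual == case.expected_outcome,
        })

    accuracy = sum(r["passed"] for r in results) / len(results)
    record = {
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "cases_run": len(results),
        "accuracy": accuracy,
        "required_accuracy": required_accuracy,
        "approved_for_deployment": accuracy >= required_accuracy,
        "results": results,
    }
    # Retain the evidence regardless of outcome.
    with open("fitness_gate_record.json", "w") as f:
        json.dump(record, f, indent=2)

    if not record["approved_for_deployment"]:
        raise RuntimeError(
            f"Agent failed fitness gate: {accuracy:.1%} below required {required_accuracy:.1%}"
        )
    return record
```

The point is less the accuracy number than the artifact: a dated record showing the system was tested against its stated purpose before it went live.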
Earnest Operations: The Governance Infrastructure Requirement
Massachusetts settled with Earnest Operations in 2025 for $2.5 million over AI underwriting models that produced disparate impacts on protected classes. The remediation requirement is what matters here. The settlement required written policies, bias testing protocols, and model inventories — in other words, governance infrastructure proportionate to the capability of the system.
This is the template regulators will reach for when agentic AI produces adverse outcomes. The question won't just be what happened, but whether the governance infrastructure in place was adequate to have caught it.
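In concrete terms, even a minimal model inventory goes a long way toward answering that question. Below is a sketch of what one entry might look like; the field names and the review logic are illustrative assumptions, not terms taken from the settlement.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryEntry:
    """One row in a model/agent inventory: who owns it, what it is for,
    and when it was last tested for bias."""
    model_id: str
    owner: str
    purpose: str
    protected_class_features_excluded: bool
    last_bias_test: Optional[date] = None
    last_bias_test_result: Optional[str] = None  # e.g. "pass", "fail", "remediation open"
    notes: list = field(default_factory=list)

    def bias_test_is_current(self, max_age_days: int = 90) -> bool:
        """Flag entries whose bias testing is stale or missing."""
        if self.last_bias_test is None:
            return False
        return (date.today() - self.last_bias_test).days <= max_age_days

# Hypothetical usage: a periodic review surfaces anything overdue.
inventory = [
    ModelInventoryEntry(
        model_id="underwriting-v3",
        owner="credit-risk",
        purpose="refinancing eligibility decisions",
        protected_class_features_excluded=True,
        last_bias_test=date(2025, 6, 1),
        last_bias_test_result="pass",
    ),
]
overdue = [m.model_id for m in inventory if not m.bias_test_is_current()]
```

A periodic job that flags stale or missing bias tests is the kind of control the remediation terms point toward: written down, owned, and checkable.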
The Pattern
All three cases share the same structural feature: the absence of governance infrastructure proportionate to the capability of the system. Algorithmic models acting on live portfolios without adequate supervision controls. An identity-verification system deployed without meaningful design validation. Underwriting models operating without bias monitoring or documented governance.
For AI agents, which act autonomously across multiple systems, make sequential decisions without human approval, and can access and modify data in real time, the exposure multiplies. Audit trails, scope boundaries, human oversight checkpoints, and escalation logic aren't optional best practices. Under existing regulatory frameworks, they're the difference between a defensible governance posture and the next case study.
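To make that concrete, here is one way those controls might wrap a single agent action: a scope boundary on what the agent may do alone, a human checkpoint for higher-risk actions, escalation for anything outside scope, and an audit entry for every decision. The permitted-action lists, function names, and escalation path are illustrative assumptions, a sketch rather than a compliance specification.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Hypothetical scope boundary: what the agent may do on its own,
# and what must stop at a human checkpoint.
AUTONOMOUS_ACTIONS = {"read_account", "draft_response"}
HUMAN_APPROVAL_ACTIONS = {"modify_account", "execute_trade"}

def request_human_approval(action: str, params: dict) -> bool:
    """Placeholder escalation path; in practice this would open a review queue item."""
    audit_log.info("Escalated for human review: %s %s", action, json.dumps(params))
    return False  # never execute until a human has actually approved

def execute_agent_action(action: str, params: dict, perform) -> dict:
    """Run one agent action through scope, oversight, and audit controls."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "params": params,
    }

    if action in AUTONOMOUS_ACTIONS:
        result = perform(action, params)
        entry.update(status="executed", result=str(result))
    elif action in HUMAN_APPROVAL_ACTIONS:
        if request_human_approval(action, params):
            result = perform(action, params)
            entry.update(status="executed_after_approval", result=str(result))
        else:
            entry.update(status="pending_human_approval")
    else:
        # Out-of-scope requests are refused and escalated, never silently executed.
        request_human_approval(action, params)
        entry.update(status="refused_out_of_scope")

    # Audit trail: every decision is recorded, including refusals.
    audit_log.info(json.dumps(entry))
    return entry
```

The design choice that matters is the default: an action the wrapper doesn't recognize is refused and escalated, not executed.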
The Timeline
FINRA published standalone guidance on agentic AI risks in December 2025. Examinations specifically questioning agentic AI governance practices are expected to begin in 2026. Early movers who establish robust standards will do more than reduce their own risk — they will define what "reasonably designed" supervision looks like for the firms that follow.
The enforcement theories are already proven. The only open question is which firm's governance gap becomes the example.