One Word in the SEC's Priorities Should Perk Your Ears Up

The SEC's AI examination language shifted from 'disclose' to 'demonstrate' in 2026. That single word moves the obligation from documentation to proof — and most governance programs aren't built for it.

The SEC Division of Examinations has included AI in its examination priorities for three consecutive years. Most compliance teams read the 2026 edition, noted the familiar themes, and moved on.

That was a mistake. Something changed this year: not a new rule, not a new enforcement theory, but a single word. That one-word change is the most consequential regulatory signal for AI governance in financial services since the off-channel communications enforcement wave began.

The word that disappeared: disclose.

The word that replaced it: demonstrate.


What the 2025 Priorities Actually Asked

In 2025, the SEC's AI examination focus was organized around accuracy of representation. Were firms disclosing their use of AI to clients and regulators? Were those disclosures accurate? Was the description of an AI tool's capabilities consistent with what the tool actually did?

This was, in enforcement terms, a disclosure problem. The SEC had already shown its willingness to act on AI misrepresentation: Delphia (USA) Inc. paid $225,000 after falsely claiming it used AI to analyze client data in investment decisions; Global Predictions paid $175,000 for similar misrepresentations. Both cases settled in 2024 under existing securities law; no new AI-specific statute was required.

The message heading into 2025 was: say what you do. Don't overclaim. Update your ADV if you're using AI. That was a compliance exercise most firms could manage — a documentation problem dressed up as a governance problem.


What the 2026 Priorities Are Actually Asking

The 2026 priorities retain the accuracy-of-representation requirement. But they've added something materially different: whether controls and supervision actually function as described.

This is no longer a disclosure problem. It's an operational problem.

A firm that updated its Form ADV in 2025 to note that it uses AI-assisted tools for investment recommendations satisfied last year's examination standard. In 2026, that disclosure creates an obligation. Examiners will now test whether the controls referenced in that disclosure exist in practice — whether the supervision framework covers the specific AI tools deployed, whether policies are enforced and not merely documented, and whether the firm can reconstruct how an AI-driven decision was reached.

The 2026 priorities frame this as the Division assessing whether firms can demonstrate:

1. fair and accurate representations;
2. that AI-generated recommendations align with investor profiles;
3. adequate controls and supervision; and
4. explainability of automated decision-making to non-technical audiences, including examiners.

The fourth item is where most governance programs currently fail.


The Question Firms Cannot Answer

Framing this as a practical examination scenario makes the gap concrete.

An examiner reviews a client account. An AI tool flagged the account for a specific recommendation, or generated a client communication, or produced a suitability analysis that drove a transaction. The examiner turns to the compliance officer and asks: "Walk me through how the AI reached that conclusion."

For firms using third-party AI tools — which describes the majority of mid-market registered investment advisers and broker-dealers — the honest answer is often: "We can't. The vendor doesn't expose that level of detail."

That answer is not an examiner-ready response. It is an examination finding.

This is what the industry has begun calling the Black Box AI problem. Firms can see what an AI system produced. They cannot reconstruct why it produced that output — what data it weighted, what logic it applied, what intermediate steps it took before generating a result. The output is visible. The decision path is not.

The SEC's 2026 emphasis on explainability makes that architectural reality a regulatory liability.
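
To make "reconstruct the decision path" concrete, here is a minimal sketch of the kind of record a firm would need for each AI-driven decision. It is purely illustrative: the DecisionRecord class, its field names, and the tool identifiers are hypothetical, not any vendor's actual API or a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Hypothetical illustration: the minimum fields needed to reconstruct
# an AI-driven decision for an examiner. Treat this as a target for
# vendor due diligence, not an existing vendor interface.
@dataclass
class DecisionRecord:
    tool_name: str              # which AI system produced the output
    model_version: str          # exact model/version in use at decision time
    timestamp: str              # when the decision was made (UTC)
    inputs: dict                # data the tool was given
    intermediate_steps: list    # rationale, scores, or retrieved context, if exposed
    output: str                 # what the tool actually produced
    human_reviewer: str | None  # who reviewed it, or None if fully automated

    def to_audit_log(self) -> str:
        """Serialize to an append-only log entry that can be replayed later."""
        return json.dumps(self.__dict__, default=str)

record = DecisionRecord(
    tool_name="suitability-screener",        # hypothetical tool name
    model_version="vendor-model-2026.1",     # hypothetical version string
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs={"account_id": "A-1001", "risk_profile": "moderate"},
    intermediate_steps=["flagged sector concentration above 20%"],
    output="Recommend rebalancing into diversified index funds",
    human_reviewer="compliance-officer-42",
)
print(record.to_audit_log())
```

The Black Box problem lives in one field: for many third-party tools, intermediate_steps is exactly what the firm cannot populate, because the vendor never exposes it.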


Why This Is Harder Than It Looks

The instinctive response to an explainability gap is to ask the vendor to fix it — to surface more logging, provide better audit trails, build a dashboard that shows decision logic. Some vendors can. Many cannot, by design: the opacity is a feature of the underlying model architecture, not a product oversight.

The governance gap this creates isn't solved by updating a policy. It requires a different kind of assessment: mapping every AI tool deployed against its ability to produce examination-ready decision documentation, identifying which tools create reconstructable audit trails and which do not, and building supervisory architecture that accounts for the difference.

For firms that have layered multiple AI tools across investment, compliance, and operational functions — each with different logging capabilities, different vendor relationships, and different levels of human oversight — this is not a small exercise. And it cannot be completed the week before an examination letter arrives.
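
One way to picture that mapping exercise, as a hypothetical sketch rather than a prescribed assessment methodology, is a simple inventory that scores each deployed tool on the dimensions above: audit-trail capability, vendor relationship, and level of human oversight. Every tool name, vendor, and rating below is invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical inventory sketch; tools, vendors, and rating scales are
# illustrative, not a standard framework.
@dataclass
class AIToolAssessment:
    tool: str
    function: str          # investment, compliance, or operational use
    vendor: str
    audit_trail: str       # "reconstructable", "partial", or "output-only"
    human_oversight: str   # "pre-decision review", "sampling", or "none"

inventory = [
    AIToolAssessment("suitability-screener", "investment", "Vendor A",
                     audit_trail="output-only", human_oversight="sampling"),
    AIToolAssessment("comms-drafter", "operational", "Vendor B",
                     audit_trail="partial", human_oversight="pre-decision review"),
]

# Tools that cannot produce a reconstructable decision path are the ones
# that need compensating supervisory controls before an exam letter arrives.
gaps = [t.tool for t in inventory if t.audit_trail != "reconstructable"]
print("Explainability gaps:", gaps)
```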


The Broader Pattern

The 2026 priorities are also notable for how they distribute the AI examination theme. In prior years, AI governance appeared in a discrete section. In 2026, the Division has embedded AI oversight across virtually every examination category: emerging financial technology, cybersecurity, automated investment tools, Regulation S-P compliance, and anti-money laundering. A firm cannot address the AI question in one section of an examination preparation checklist and consider it resolved.

This reflects where the SEC's thinking has landed: AI governance is not a technology audit. It is a firmwide supervisory question. The same fiduciary standards, recordkeeping obligations, and duty-to-supervise frameworks that have governed human activity at registered firms for decades now govern automated activity. The technology changes. The regulatory framework does not.


What Demonstration Actually Requires

SEC Chairman Paul Atkins framed the 2026 priorities as a foundation for "constructive dialogue" with examiners — not a "gotcha" exercise. That framing is itself instructive. The Division is signaling that it expects firms to be able to have a substantive conversation about their AI governance programs — not merely produce a policy document in response to an information request.

The distinction between those two postures is the difference between a compliance program built for disclosure and one built for examination. The first produces artifacts the board can see. The second produces documentation that survives the question "walk me through how this worked."

Both are necessary. Most mid-market firms have the first. The second is where the 2026 examination standard now sits.


Sigmet.ai helps financial services firms build governance infrastructure for agentic AI — from examination readiness assessments to Written Supervisory Procedures (WSPs) that address autonomous AI systems specifically. Vendor-agnostic. Built for the conversation that actually matters.