Public companies entering the annual reporting cycle are facing a fast-moving disclosure challenge: how to describe AI-related cyber risk with enough precision to inform investors without turning risk factors into speculative or exaggerated narratives. Boards, in-house counsel, and security leaders are increasingly treating AI not as a marketing topic, but as an operational variable that meaningfully alters threat models, attack surfaces, and incident response expectations. The result is a shift in how cybersecurity disclosures are drafted, reviewed, and substantiated.
AI is now embedded across corporate operations. Companies rely on it for internal automation, customer interactions, fraud detection, software development, analytics, and security tooling. That ubiquity matters because disclosure obligations change depending on how AI is used. Internal deployment introduces risks tied to data leakage, prompt handling, logging, and automated decision-making. Customer-facing AI systems introduce additional exposure around misuse, access controls, third-party dependencies, and safety failures that can cascade into security incidents or regulatory scrutiny. The disclosure question is no longer whether AI exists, but whether its use materially changes the company’s risk profile in a way investors would reasonably want to understand.
Under Chair Paul Atkins, the SEC has emphasized materiality while signaling discomfort with disclosure overload. This does not mean reduced enforcement. Rather, it means companies are expected to focus disclosures on real, supportable risks instead of generic statements about emerging technology. Cybersecurity remains an enforcement priority, particularly where disclosures are misleading, inconsistent, or disconnected from operational reality. Companies must be able to substantiate what they say about AI-driven cyber risk with governance structures, internal controls, and documented processes.
AI-related cyber risk typically appears in two disclosure contexts. First, in risk factors and management's discussion and analysis, where companies explain how AI may affect operations, reputation, compliance posture, or competitive standing. Strong disclosures in this area are specific and fact-based, describing actual AI use cases, reliance on third-party models, or the sensitivity of data processed. Second, in cybersecurity governance and incident disclosures, where companies describe how cyber risks are identified, managed, and overseen, including board involvement and management expertise. Because AI increasingly influences both the threat landscape and defensive tooling, it is now inseparable from these governance narratives.
Materiality is the anchor for all of these disclosures. Boilerplate statements about AI being “rapidly evolving” or “introducing new cyber threats” are insufficient unless tied to concrete impacts. The SEC has made clear that materiality determinations for cyber incidents include qualitative factors such as reputational harm, operational disruption, customer trust, and regulatory exposure. This is especially relevant for AI-related incidents, where financial impact may be unclear at first, but operational or reputational consequences can be immediate and severe. Companies that lack disciplined internal processes for evaluating AI-related risk will struggle to produce consistent, credible disclosure language.
In practice, several disclosure patterns are emerging. Some companies emphasize how AI amplifies existing threats, such as more sophisticated phishing or faster vulnerability discovery by attackers. Others focus on data exposure risks, including the introduction of confidential information into AI workflows through prompts, logs, or integrations. Many highlight dependency risks tied to third-party AI providers, outages, or supply-chain vulnerabilities. More mature disclosures also describe governance controls, including board oversight, management accountability, and internal policies governing AI use. The most credible filings distinguish clearly between AI used as a defensive capability and AI that introduces new operational risk.
A growing concern for legal and compliance teams is overstatement. Regulators remain sensitive to exaggerated claims about AI capabilities, sometimes referred to as AI-washing. Disclosure language that reads like marketing copy creates enforcement risk if actual practices do not align with investor-facing statements. As a result, AI disclosures are increasingly treated as compliance representations. Absolute language requires evidence in the form of policies, controls, testing, and documentation.
The current disclosure posture did not emerge overnight. In July 2023, the SEC adopted comprehensive cybersecurity disclosure rules requiring companies to report material incidents on Form 8-K within four business days of determining materiality and to discuss cyber risk management, strategy, and governance annually. In May 2024, SEC leadership clarified how materiality determinations should be made and reinforced the distinction between required and voluntary incident disclosures. In early 2026, Chair Atkins publicly called for trimming immaterial corporate reporting, reinforcing a materiality-first approach just as companies were preparing filings that increasingly referenced AI. By the January 2026 reporting season, AI-enabled cyber risk had become a routine topic in Form 10-K drafting, shaped by these regulatory developments.
From a drafting perspective, several principles are emerging as best practice. Companies are expected to be specific about AI use cases rather than relying on generic descriptions. AI risks should be mapped to familiar cyber categories such as credential compromise, data exfiltration, system disruption, or third-party failure. Absolute claims should be avoided unless controls are demonstrably mature. Disclosures must align with incident response reality, particularly the ability to make timely materiality determinations. Consistency across filings, earnings calls, and public statements is essential, as contradictions themselves can create enforcement or litigation risk.
Ultimately, the difficulty of AI cyber disclosures reflects gaps in operational readiness. Many organizations have adopted AI faster than they have formalized controls around it. This becomes apparent when disclosure teams cannot quickly answer where AI is used, what data flows exist, which vendors are involved, or how monitoring is performed. Companies addressing this gap are formalizing AI governance through inventories, vendor diligence standards, data handling rules, incident playbooks, and board-level reporting that treats AI as part of enterprise risk management rather than a siloed innovation effort.
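For teams building that inventory, the underlying artifact can be simple. The sketch below is a minimal, hypothetical illustration in Python of the kind of AI-asset record a disclosure team might query when answering where AI is used, what data it touches, and which vendors are involved. The field names, classifications, and example entries are assumptions for illustration, not a prescribed or regulatory schema.

```python
# Minimal sketch of an AI-asset inventory record. Field names and example
# entries are illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIAsset:
    name: str                   # internal system or tool name
    use_case: str               # e.g., "fraud detection", "customer chat"
    customer_facing: bool       # internal-only vs. externally exposed
    data_classification: str    # e.g., "public", "internal", "confidential"
    vendor: Optional[str]       # third-party model or platform dependency
    monitoring: str             # how usage and incidents are observed
    owner: str                  # accountable executive or function

# Hypothetical entries a disclosure team might maintain.
inventory = [
    AIAsset("support-chatbot", "customer chat", True, "confidential",
            "third-party LLM provider", "prompt/response logging", "CISO"),
    AIAsset("code-assistant", "software development", False, "internal",
            "third-party LLM provider", "usage telemetry", "VP Engineering"),
    AIAsset("fraud-scorer", "fraud detection", False, "confidential",
            None, "model performance dashboards", "CRO"),
]

# Example query: systems whose profile may bear on disclosure, i.e.
# customer-facing, handling confidential data, with a vendor dependency.
flagged = [a for a in inventory
           if a.customer_facing
           and a.data_classification == "confidential"
           and a.vendor is not None]

for asset in flagged:
    print(f"{asset.name}: {asset.use_case} (vendor: {asset.vendor})")
```

Even a lightweight record along these lines lets counsel answer the questions above without a bespoke discovery exercise each reporting cycle, and gives the disclosure language a documented factual basis.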
The defining challenge under Atkins' SEC is not whether AI deserves mention in filings, but whether companies can describe AI-related cyber risk in a way that is accurate, specific, and material. Organizations that treat disclosure as the output of sound governance, rather than a standalone drafting exercise, will be best positioned. The companies that succeed in this reporting cycle will be those that can clearly explain their AI footprint, articulate how it changes cyber exposure, and demonstrate that oversight and controls match the risks they describe.