The National Institute of Standards and Technology (NIST) continues to play a pivotal role in shaping cybersecurity, privacy, and artificial intelligence (AI) risk management practices. In April 2025, NIST hosted its first Cyber AI Profile Workshop, an event designed to gather community input on developing a Cyber AI Profile. This profile applies the NIST Cybersecurity Framework to three key focus areas: securing AI system components, conducting AI-enabled cyber defense, and thwarting AI-enabled cyberattacks. The workshop, which included presentations, panel discussions, and breakout sessions, aimed to provide organizations with practical guidance for adopting AI while prioritizing cybersecurity risks.
What We Learned from the First NIST Cyber AI Profile Workshop
The workshop brought together in-person and virtual participants to discuss the intersection of AI and cybersecurity. Key reflections centered on viewing “AI risk as organizational risk,” emphasizing the need to integrate AI considerations into broader enterprise governance structures. Attendees highlighted AI’s dual-use nature: it is a powerful tool for defenders, enabling anomaly detection and incident response, but it also lets adversaries amplify attacks through methods such as phishing, data poisoning, and model inversion. Proactive strategies, including automated red teaming and zero-trust principles, were deemed essential to mitigate these risks.
Discussions stressed multidisciplinary collaboration across legal, technical, procurement, and governance teams to bridge knowledge gaps between AI and cybersecurity experts. Effectiveness in AI-enabled cyber defense was a major topic, with calls for measurable benchmarks to evaluate performance and address issues like false positives. Supply chain security emerged as critical, involving scrutiny of AI components’ origins (e.g., microservices, libraries) and data provenance to counter vulnerabilities. Transparency was another cornerstone, advocating for clear documentation of model behavior and decision processes to build trust. Human-in-the-loop approaches and training were recommended to better interpret AI outputs.
A consensus formed around integrating the Cyber AI Profile with existing NIST frameworks, such as the Cybersecurity Framework, AI Risk Management Framework, Privacy Framework, and Risk Management Framework. Participants suggested mappings to international standards like the EU’s General Data Protection Regulation (GDPR) and ISO/IEC 27090, as well as leveraging tools like MITRE ATLAS for threat modeling. Sector-specific profiles for industries like healthcare, finance, and energy were proposed to tailor guidance.
Outcomes included agreement on the profile’s scalability for organizations of varying sizes, reducing redundancy with evolving regulations, and providing implementation guidance. Data governance was flagged as needing its own focus area to handle sensitive data and privacy in AI systems. Tailored measures discussed included cryptographic signing for models, AI Software Bill of Materials (AI SBOM) for transparency, securing feedback loops, adaptive identity management for non-human entities, and differentiated incident response for AI-targeted attacks.
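To make one of these measures concrete, cryptographic signing of models typically means computing an authentication tag over the serialized model artifact at publish time and verifying it before loading. The sketch below is an illustration only, not guidance from the workshop or NIST: it uses a symmetric HMAC-SHA256 tag from the Python standard library, whereas production pipelines would more likely use asymmetric signatures (e.g., via a signing service) with keys held in a KMS. The key and artifact bytes here are hypothetical.

```python
import hashlib
import hmac

def sign_model(model_bytes: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag over the serialized model artifact."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, key: bytes, expected_tag: str) -> bool:
    """Constant-time check that the artifact still matches its recorded tag."""
    return hmac.compare_digest(sign_model(model_bytes, key), expected_tag)

# Hypothetical usage: sign when the model is published, verify before loading.
key = b"example-shared-secret"      # illustration only; store real keys in a KMS
artifact = b"fake-model-weights"
tag = sign_model(artifact, key)
assert verify_model(artifact, key, tag)
assert not verify_model(artifact + b"-tampered", key, tag)
```

The same verify-before-load step pairs naturally with an AI SBOM: the SBOM records which components and weights an artifact should contain, and the signature check confirms the bytes have not drifted from what was recorded.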
NIST plans to use this feedback to refine the Cyber AI Profile, emphasizing practical, actionable resources that help organizations navigate AI’s cybersecurity challenges. Topics spanned enterprise risk management, collaboration, dual-use AI, defense effectiveness, supply chain security, transparency, human involvement, framework integration, data governance, and AI-specific cybersecurity tactics. Participant feedback was overwhelmingly positive, with consensus on starting from the Cybersecurity Framework and building a versatile tool.
Security Impact Analysis in Cyber AI Contexts
In the context of the Cyber AI Workshop, understanding the security implications of AI integrations is crucial. For a deeper dive into conducting effective security impact analysis, see our guide on Security Impact Analysis.
What Is the NIST Privacy Framework?
The NIST Privacy Framework is a voluntary tool designed to help organizations manage privacy risks through enterprise risk management, enabling them to innovate while safeguarding individuals’ privacy. Its purpose is to improve privacy outcomes by identifying and prioritizing risks, much like how cybersecurity frameworks address threats.
Developed collaboratively with stakeholders, the framework has evolved iteratively. The latest version, Privacy Framework 1.1 Initial Public Draft (IPD), builds on version 1.0 with updated categories and subcategories, as detailed in mapping documents. Core components include functions, categories, and subcategories that guide organizations in assessing and mitigating privacy risks. It ties directly to enterprise risk management by framing privacy as an organizational risk, encouraging integration into broader governance processes.
Resources for implementation are robust, including the full 1.1 IPD document, a Quick Start Guide for small and medium businesses, a Learning Center with videos, a Resource Repository for shared tools, events for engagement, and a mailing list for updates. As a voluntary framework, it offers flexibility, allowing organizations to adapt it to their needs without mandates. Benefits include enhanced privacy protection, reduced compliance burdens through risk-based approaches, and support for ethical innovation in data-driven technologies.
For a comprehensive understanding of the NIST Privacy Framework 1.1, explore this detailed resource we’ve developed for our readers: Understanding the NIST Privacy Framework 1.1.
Differential Privacy in NIST Contexts
Related to privacy enhancements in AI and data handling, NIST provides guidance on evaluating differential privacy guarantees. Learn more in this guide: Understanding NIST SP 800-226: A Guide to Evaluating Differential Privacy Guarantees.
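At its core, the differential privacy guarantee that SP 800-226 helps evaluate is parameterized by epsilon: a query result is released with calibrated noise, and smaller epsilon means more noise and a stronger guarantee. The sketch below is a minimal illustration of the classic Laplace mechanism for a counting query (sensitivity 1), not an implementation drawn from SP 800-226 itself; the function name and parameters are ours.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise with scale sensitivity/epsilon to a query result.

    Satisfies epsilon-differential privacy for a query whose output changes
    by at most `sensitivity` when one individual's record is added or removed.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5                      # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace(0, scale) distribution.
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Hypothetical use: privatize a count of 42 records at epsilon = 1.0.
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=1.0)
```

Evaluating a real deployment, as SP 800-226 discusses, involves far more than the noise formula: tracking the privacy budget across repeated queries, verifying the claimed sensitivity, and guarding against floating-point side channels in the sampling step.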
Overview of the NIST AI Risk Management Framework
The NIST AI Risk Management Framework (AI RMF) is a voluntary guideline released on January 26, 2023, to help manage risks from AI to individuals, organizations, and society. Its purpose is to embed trustworthiness into AI design, development, use, and evaluation, aligning with other risk management efforts.
Developed via a transparent, consensus-driven process involving public comments, workshops, and drafts, the framework promotes collaborative input. Key characteristics include a focus on voluntary adoption and enhancing AI trustworthiness. The structure revolves around four core functions: Govern (establishing policies and oversight), Map (identifying risks and contexts), Measure (assessing and monitoring risks), and Manage (mitigating and responding to risks).
A companion Playbook provides practical implementation steps. To address trustworthy AI, it incorporates considerations like reliability, safety, and ethics throughout the AI lifecycle. Related extensions include the Generative AI Profile (NIST-AI-600-1), released July 26, 2024, which targets unique risks from generative AI and suggests aligned actions. The Trustworthy and Responsible AI Resource Center supports global alignment and best practices.
For organizations, its voluntary nature allows tailored application, fostering better risk management and innovation in AI deployment.
Tying It All Together
The Cyber AI Workshop underscores the need for holistic approaches, where the Privacy and AI Frameworks complement cybersecurity efforts. By integrating these tools, organizations can address AI’s multifaceted risks—from privacy breaches to cyber threats—ensuring responsible adoption in an evolving digital landscape. NIST’s ongoing work, including future refinements to the Cyber AI Profile, promises to further bridge these domains.