New York Strengthens Third-Party Risk Rules with AI Oversight and Vendor Resilience


The New York Department of Financial Services (NYDFS) has issued updated guidance for financial institutions supervised under its regulatory regime, clarifying expectations around third-party risk management, cloud vendor resilience and, notably, the oversight of artificial intelligence (AI) systems used by vendors or institutions themselves. The revisions reflect recent service-provider failures, the rise of model-based automation and growing regulatory scrutiny of AI in financial services.


What’s New in Third-Party Risk Guidance

In its Industry Letter dated October 21, 2025, the NYDFS clarified that covered entities must treat third-party service-provider (TPSP) arrangements as integral parts of their cybersecurity programs. The guidance underscores that services such as cloud computing, file transfers, artificial intelligence (AI) systems and fintech platforms fall squarely within TPSP risk. It calls on senior governing bodies and senior officers to engage actively in oversight of TPSP risk, and highlights four lifecycle phases: selection, contracting, ongoing monitoring and termination. Key contractual provisions include supplier access controls (including MFA), data-location disclosure, encryption in transit and at rest, subcontractor lists, explicit AI-use language (e.g., whether the vendor may use the institution’s data for training models) and formal off-boarding/transition requirements with certified data destruction or migration. The guidance emphasizes that outsourcing does not relieve the covered entity of its obligations under Part 500; board-level challenge of vendor risk is now an expectation.
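Teams operationalizing these contractual expectations often reduce them to a machine-checkable checklist. The sketch below is purely illustrative: the clause names, record structure and vendor data are our own hypothetical simplification, not terms from the NYDFS letter.

```python
# Illustrative sketch: screen a TPSP contract record against the clause
# categories highlighted in the guidance. All field names are hypothetical.

REQUIRED_CLAUSES = {
    "mfa_access_controls",           # supplier access controls, including MFA
    "data_location_disclosure",      # where data is stored and processed
    "encryption_in_transit",
    "encryption_at_rest",
    "subcontractor_list",            # disclosure of downstream providers
    "ai_training_use_terms",         # may the vendor train models on our data?
    "offboarding_data_disposition",  # certified destruction or migration
}

def missing_clauses(contract: dict) -> set[str]:
    """Return clause categories the contract does not affirmatively cover."""
    return {c for c in REQUIRED_CLAUSES if not contract.get(c)}

# Hypothetical vendor contract record awaiting remediation.
vendor_contract = {
    "mfa_access_controls": True,
    "encryption_in_transit": True,
    "encryption_at_rest": True,
    "subcontractor_list": True,
    # no AI-training terms, data-location, or off-boarding clause yet
}

print(sorted(missing_clauses(vendor_contract)))
```

A checklist like this does not replace legal review, but it gives compliance teams a repeatable way to flag contracts that predate the guidance and lack AI-use or off-boarding language.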

The revised guidance reinforces long-standing vendor-management obligations under 23 NYCRR Part 500 and other NYDFS frameworks—and layers in new expectations. Key areas include:

  • Cloud and service-provider resilience: Institutions must map service-provider dependencies, assess concentration risk, and maintain exit strategies, continuity plans and contractual data-portability terms.
  • Vendor audit and oversight: Contracts must grant audit rights, require subcontractor disclosures, and include monitoring of vendor performance, compliance, and incident-notification protocols.
  • AI and model governance: When a vendor uses or supplies AI (for underwriting, fraud detection, decision automation or operations), the supervised entity retains accountability. Institutions must ensure model transparency, bias testing, drift monitoring, vendor governance of model updates and documentation of vendor-AI pipelines.
  • Incident learning and vendor failure lessons: NYDFS references major cloud-service outages and cascading vendor disruptions as cautionary precedents. The guidance mandates scenario testing, concentration risk reviews and periodic vendor-failure simulation exercises.
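The concentration-risk reviews described above can start from nothing more than a map of critical services to the providers they depend on. A minimal sketch, with made-up provider and service names and an illustrative 50% alert threshold:

```python
from collections import Counter

# Map each critical business service to the provider it depends on
# (hypothetical names, for illustration only).
service_providers = {
    "payments_api": "CloudCo",
    "fraud_scoring": "CloudCo",
    "statement_delivery": "MailVendor",
    "document_storage": "CloudCo",
    "kyc_checks": "IDVendor",
}

def concentration(deps: dict) -> dict:
    """Share of critical services concentrated on each provider."""
    counts = Counter(deps.values())
    total = len(deps)
    return {provider: n / total for provider, n in counts.items()}

shares = concentration(service_providers)

# Flag any provider carrying more than half of the critical services;
# the 0.5 threshold is an assumption, not a regulatory figure.
flagged = [p for p, s in shares.items() if s > 0.5]
print(flagged)
```

Here the hypothetical "CloudCo" carries three of five critical services, which is exactly the kind of dependency a vendor-failure simulation exercise should stress.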

Why This Matters for Financial Institutions

The updated guidance signals several critical changes:

A heightened regulatory risk environment

NYDFS uses guidance bulletins and supervisory letters to flag enforcement priorities. With AI and vendor resilience now explicit, institutions cannot treat third-party oversight lightly. Vendor failures or uncontrolled AI systems may trigger regulatory actions—even in the absence of a traditional data breach.

Vendor/AI failures equal enterprise risk

A disruption at a major cloud provider or an unmonitored vendor AI model can cascade into business continuity failure, consumer harm, reputational damage and regulatory scrutiny. The guidance emphasizes vendor-chain visibility, automated controls and resilience planning as essential to risk governance.

Visibility and model governance gaps

Many institutions struggle to trace downstream vendor chains, monitor sub-processors, audit vendor-supplied models or detect vendor-embedded AI decisioning. The updated guidance presses institutions to map not just direct vendors, but indirect chains and AI workflows, increasing governance demands.
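Mapping indirect chains is, at bottom, a graph-traversal problem: start from direct vendors and walk every disclosed subcontractor relationship. The sketch below uses a breadth-first walk over hypothetical vendor names to show the idea; real programs would feed it from due-diligence disclosures.

```python
from collections import deque

# Disclosed subcontractor relationships (hypothetical example data).
subprocessors = {
    "CoreBankingVendor": ["CloudCo", "AnalyticsCo"],
    "AnalyticsCo": ["ModelHostCo"],
    "CloudCo": [],
    "ModelHostCo": ["CloudCo"],  # indirect dependency back on CloudCo
}

def downstream_chain(direct_vendors: list) -> set:
    """Breadth-first walk collecting all direct and indirect providers."""
    seen, queue = set(), deque(direct_vendors)
    while queue:
        vendor = queue.popleft()
        if vendor in seen:
            continue
        seen.add(vendor)
        queue.extend(subprocessors.get(vendor, []))
    return seen

print(sorted(downstream_chain(["CoreBankingVendor"])))
```

Note how the single direct relationship with "CoreBankingVendor" expands to four entities, including a second, indirect path back to the same cloud provider, which is precisely the hidden concentration the guidance asks institutions to surface.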

Enforcement Spotlight: AG Letitia James and Privacy/AI Oversight

Parallel to NYDFS’s supervisory updates, the New York State Attorney General’s Office (OAG) under AG Letitia James has taken concrete steps to enforce privacy, tracking and AI-related risks. Notably:

  • In July 2024 the OAG launched business and consumer guides to online tracking and website privacy controls, flagging unwanted cookies and tracking technologies deployed before consent. The initiative highlighted the expectation that businesses must disclose tracking and honor opt-outs.
  • The OAG has proposed regulations under the Child Data Protection Act (CDPA) and the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which place algorithmic feeds, children’s data, age verification and algorithmic tracking under AG oversight, putting AI governance at the center of privacy enforcement.
  • In March 2025 the AG secured a settlement of approximately US$650,000 with a student-social-networking app developer over its failure to verify user identities, which allowed unauthorized access to features targeted at minors, underscoring the focus on data-usage governance.

The combined effect of NYDFS’s guidance and the AG’s enforcement posture means that financial institutions and any business with AI or vendor-chain dependencies must embed both privacy and operational resilience into vendor-risk frameworks.

The October 2025 NYDFS guidance also gives regulators a clearer baseline for vendor-AI risk oversight, signaling that failures in vendor-AI governance or data portability may be treated as supervisory matters, not just operational errors. This shapes the regulatory calculus: vendor or AI mis-governance could be the focus of examination or enforcement rather than simply a business continuity issue.

Related Acts & Laws for New York Businesses To Be Aware Of

| Act / Regulation | Scope | Key Vendor/AI Provisions |
| --- | --- | --- |
| 23 NYCRR Part 500 (NYDFS Cyber Reg) | NY-based financial institutions | Third-party service provider oversight, incident reporting, resilience planning |
| SAFE for Kids Act (NY) | Social media & addictive feeds for minors | Parental consent, age verification, algorithmic feed governance |
| Child Data Protection Act (CDPA, NY) | Data collection from minors | Limits on tracking, requirements for consent, algorithmic-decision transparency |
| Proposed AI-Safety Legislation (NY) | Frontier AI models & systemic risk | Governance, transparency, “frontier” model definition, liability for harms |
| Federal FTC AI/Privacy Guidance | U.S.-wide consumer protection | Truth-in-algorithms claims, data misuse, bias audits, vendor risk disclosures |

Next Steps for Vendor & AI Risk Programs

Financial institutions and firms with vendor-AI footprints should treat the updated NYDFS guidance as core operational guidance, not a sidebar. They should:

  1. Enhance vendor inventories to include AI-model vendors, cloud service providers, data-platform vendors and any subcontractor chains.
  2. Update vendor due-diligence questionnaires and contracts to include questions on AI model governance, vendor reuse of data, audit rights, exit/portability and concentration risks.
  3. Include AI-specific vendor questions consistent with NYDFS guidance: Does the TPSP’s contract address whether your data may feed into model training? Is encryption in transit/at rest explicitly required? Does the vendor disclose subcontractors and data-location? Does the exit clause require certification of data destruction or migration?
  4. Integrate AI-model governance into enterprise risk frameworks including bias testing, model-drift review, vendor model-update logs and decision-audit trails.
  5. Conduct vendor resilience and scenario-testing exercises—simulate vendor outage, cascading fail-over, AI-model error event and data-portability interruption.
  6. Ensure incident-response plans cover vendor-chain or AI-model failures—including notification chains, consumer-harm assessment, regulatory reporting triggers and remediation controls.
  7. Ensure contracts reflect the life-cycle expectations in NYDFS’s October 21, 2025 guidance: access revocation, data-migration or destruction certification, vendor audit rights, subcontractor oversight and vendor model-reuse safeguards.
  8. Align cyber insurance and privacy-liability coverage to reflect vendor-AI exposures—review policy definitions for “third-party service provider risk,” “model failure,” “data misuse,” and ensure contractual and operational alignment with underwriting expectations.
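Step 4’s model-drift review can be approximated even without access to a vendor’s internals, by comparing the distribution of the model’s outputs over time. The sketch below uses a population stability index (PSI) over bucketed scores; the bucket counts are made up, and the 0.25 alert threshold is a common industry rule of thumb rather than anything NYDFS prescribes.

```python
import math

def psi(baseline: list, current: list) -> float:
    """Population stability index between two bucketed count vectors.
    Higher values indicate a larger distribution shift; > 0.25 is a
    commonly used alert level (an assumption, not a regulatory figure)."""
    total_b, total_c = sum(baseline), sum(current)
    score = 0.0
    for b, c in zip(baseline, current):
        # Small floor avoids division by zero on empty buckets.
        pb = max(b / total_b, 1e-6)
        pc = max(c / total_c, 1e-6)
        score += (pc - pb) * math.log(pc / pb)
    return score

# Counts of a vendor model's scores per bucket: last quarter vs. this
# week (hypothetical numbers for illustration).
baseline = [400, 300, 200, 100]
current = [150, 250, 300, 300]

drift = psi(baseline, current)
print(round(drift, 3), drift > 0.25)
```

Logging a drift score like this per vendor model, alongside the vendor’s own model-update logs, gives examiners the kind of decision-audit trail the guidance anticipates.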

Regulatory Convergence and Strategic Risk Governance

Vendor and AI risk management are no longer peripheral activities—they sit at the intersection of operational resilience, privacy protection and regulatory compliance. The NYDFS update and AG actions under Letitia James demonstrate a shift toward integrated supervision of vendor-AI ecosystems: not only managing a provider contract but monitoring automated decision systems, tracking post-contract vendor behavior, and ensuring exit mechanisms are effective.

For regulated entities this means going beyond traditional vendor checklists to build a layered program combining infrastructure resilience, AI governance, vendor transparency and auditable controls. Compliance, risk management and operations must now speak a shared language around third-party risk and model oversight.

In other words: treat your vendors and AI systems not just as tools but as critical components of your regulated-entity risk footprint. Book a demo with Captain Compliance to determine what your risks are and how we can help you with New York’s newest AI compliance regulations.


Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.