
Critical ServiceNow AI Flaw CVE-2025-12420 Enables Unauthenticated User Impersonation

ServiceNow has disclosed CVE-2025-12420, a critical vulnerability in its AI platform that lets unauthenticated actors forge any user identity and execute arbitrary actions. An emergency patch was released in October 2025, but the flaw highlights deep security challenges for AI-enabled SaaS.

Overview

On Monday, ServiceNow announced emergency remediation for a newly disclosed vulnerability in its AI platform, designated CVE-2025-12420. The flaw allows an attacker with no credentials to impersonate any user, admin or otherwise, within a tenant and perform unrestricted actions ranging from data exfiltration to privileged configuration changes. The issue affects the core AI components known as Now Assist AI Agents and the Virtual Agent API, which are embedded in the majority of ServiceNow’s enterprise offerings (AI Builder, Predictive Intelligence, etc.). The vulnerability received a CVSS v3.1 base score of 9.3, classifying it as critical.

Technical Details

According to research published by SaaS-security firm AppOmni (and covered by CyberScoop), the root cause lies in how the Now Assist platform validates API calls that originate from AI agents. The platform trusts a JWT-like token that is generated internally by the AI runtime. Unfortunately, the token verification logic fails to enforce that the request originates from an authenticated session. An unauthenticated actor can craft a request that includes a forged token payload containing whatever user_id they wish to assume.

Once the forged token passes the lax validation, the AI service treats the request as if it were made by the impersonated user. Because AI agents operate with elevated privileges (often required to pull data across multiple tables and orchestrate workflow actions), the attacker can:

  • Read, modify, or delete records in any ServiceNow table the impersonated user can access.
  • Invoke workflow scripts that execute server-side JavaScript, effectively achieving remote code execution (RCE) within the tenant’s instance.
  • Escalate privileges further by chaining calls to other AI agents via the agent discovery feature, a form of second-order prompt injection.

The vulnerability is exploitable over the public internet because the AI endpoints are exposed via the standard ServiceNow API gateway. No prior authentication, API key, or network restriction is required, making it a classic unauthenticated remote code execution vector.
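ServiceNow’s actual token format and validation code are not public, so the following stdlib-only Python sketch is purely illustrative of the class of flaw described above: a hypothetical runtime that decodes a JWT-style token’s payload without checking its signature will happily accept a forged user_id, while a strict verifier that recomputes the HMAC rejects it. All names and the signing key are assumptions for the demonstration.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-key"  # hypothetical runtime signing key


def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")


def sign(payload: dict) -> str:
    """Issue a minimal HS256-style token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()


def lax_decode(token: str) -> dict:
    """FLAWED: reads the payload without ever checking the signature."""
    body = token.split(".")[1]
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))


def strict_decode(token: str) -> dict:
    """FIXED: recomputes and compares the HMAC before trusting the payload."""
    header, body, sig = token.split(".")
    expected = b64url(
        hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))


# An attacker with no credentials forges a token claiming to be an admin,
# attaching a bogus signature they cannot actually compute.
forged = ".".join([
    b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode()).decode(),
    b64url(json.dumps({"user_id": "admin"}).encode()).decode(),
    "bogus-signature",
])

print(lax_decode(forged)["user_id"])  # lax path: impersonation succeeds
try:
    strict_decode(forged)
except ValueError:
    print("strict path: forged token rejected")
```

The key property the patch restores is that a token’s payload is only trusted after its signature is verified against a server-held secret, tying the claimed identity back to an authenticated issuance.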

Impact Analysis

All ServiceNow customers that have enabled any AI-driven feature, whether on the public SaaS instance or a self-hosted on-premises deployment, are potentially vulnerable. The impact can be broken down as follows:

  • Data Confidentiality: Attackers can read sensitive HR, finance, or IT records, violating GDPR, HIPAA, or other regulatory mandates.
  • Data Integrity: Malicious modifications to configuration tables could disrupt business processes, alter ticket routing, or inject malicious scripts.
  • Availability: RCE capabilities enable denial-of-service attacks or the deployment of ransomware within the ServiceNow environment.
  • Privilege Escalation: By impersonating an admin, attackers can create new privileged accounts, effectively taking full control of the tenant.

Given that ServiceNow is a backbone for ITSM, HR, security operations, and many other enterprise workflows, the ripple effect of a successful exploit can be catastrophic for any organization relying on the platform.

Timeline of Events

  • October 2024 - Early Discovery: AppOmni began internal testing of ServiceNow’s AI agents and noted anomalous token handling.
  • October 2025 - Vendor Notification: AppOmni reported CVE-2025-12420 to ServiceNow under coordinated vulnerability disclosure.
  • October 30, 2025 - Emergency Patch Deployment: ServiceNow pushed fixes to most hosted instances and released patch binaries to partners and self-hosted customers.
  • November 1, 2025 - Advisory Published: ServiceNow issued a security advisory urging all customers to upgrade to Now Assist AI Agents 5.1.18 / 5.2.19 and Virtual Agent API 3.15.2 / 4.0.4 or later.
  • January 18, 2026 - Blog Publication: This analysis consolidates the known facts and provides actionable guidance.

Mitigation & Recommendations

ServiceNow’s emergency patch addresses the token verification flaw, but organizations should adopt a defense-in-depth approach:

  1. Apply the Patch Immediately: Upgrade to the versions specified in the advisory (Now Assist AI Agents 5.1.18 or 5.2.19+, Virtual Agent API 3.15.2 or 4.0.4+). Verify the patch status via the ServiceNow Instance Health dashboard.
  2. Restrict API Exposure: Use IP allow-lists, mutual TLS, or VPN tunnels to limit access to the AI endpoints. Disable public API access for AI agents unless absolutely required.
  3. Review Agent Discovery Settings: Turn off automatic grouping of agents into discoverable teams. Explicitly define which agents can communicate and enforce the principle of least privilege.
  4. Enable Auditing & Alerting: Activate detailed logging for token generation, AI agent actions, and privileged record changes. Correlate these logs with SIEM solutions to detect anomalous impersonation attempts.
  5. Conduct Prompt Injection Hardening: Follow AppOmni’s guidance on sanitizing data fields that feed AI agents. Consider implementing a “whitelist-only” approach for fields that trigger AI processing.
  6. Pen-Test AI Workflows: Engage red-team exercises that specifically target AI-driven APIs. Validate that no bypass exists for token validation or privilege checks.
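Recommendation 4 above amounts to correlating two log streams: AI-agent actions should always be traceable back to an authenticated session for the same user. A minimal sketch of that correlation, using hypothetical, simplified log records (the field names are illustrative, not ServiceNow’s actual schema):

```python
from datetime import datetime, timedelta

# Hypothetical log records; real deployments would pull these from a SIEM.
session_logins = [
    {"user_id": "alice", "time": datetime(2025, 11, 1, 9, 0)},
]
agent_actions = [
    {"user_id": "alice", "action": "read_incident", "time": datetime(2025, 11, 1, 9, 5)},
    {"user_id": "admin", "action": "create_user", "time": datetime(2025, 11, 1, 9, 6)},
]


def unbacked_actions(logins, actions, window=timedelta(hours=8)):
    """Flag AI-agent actions with no authenticated session for the same
    user within the look-back window -- a possible forged-token indicator."""
    flagged = []
    for act in actions:
        backed = any(
            login["user_id"] == act["user_id"]
            and timedelta(0) <= act["time"] - login["time"] <= window
            for login in logins
        )
        if not backed:
            flagged.append(act)
    return flagged


for act in unbacked_actions(session_logins, agent_actions):
    print("ALERT: unbacked agent action", act["user_id"], act["action"])
```

Here the admin-level "create_user" action is flagged because no admin login preceded it, which is exactly the signature a forged-token impersonation would leave behind. A production rule would also account for service accounts and legitimate scheduled agent runs.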

For customers on self-hosted instances, verify that the patch has been applied to all on-premises nodes and that any custom plugins extending AI functionality have been reviewed for similar token-handling logic.

Real-World Impact

Enterprises that have integrated ServiceNow’s AI capabilities into their ticketing, change management, and employee onboarding processes now face a heightened risk profile. A successful exploit could allow a low-level contractor to masquerade as a senior IT manager, approve risky changes, or export confidential employee data. In regulated industries, the breach could trigger costly compliance penalties and erode customer trust.

Even organizations that have not yet enabled AI features are indirectly affected because the vulnerability exists in the shared code base. The safest path is to treat every tenant as vulnerable until the patch is confirmed.

Expert Opinion

From a senior cybersecurity analyst’s perspective, CVE-2025-12420 is a watershed moment for AI-augmented SaaS platforms. It demonstrates that traditional authentication boundaries can be bypassed when AI runtimes are given implicit trust. The fact that the flaw was exploitable without any credentials underscores a critical design oversight: the platform assumed that AI agents are inherently trustworthy, ignoring the possibility of a malicious actor injecting crafted payloads into the data flow.

Two broader lessons emerge:

  1. AI-Specific Threat Modeling Must Be Mandatory: Vendors need to treat AI agents as separate attack surfaces with their own authentication and authorization models. Token issuance, validation, and revocation must be as rigorous as any user-facing API.
  2. Configuration Hygiene Is as Important as Code Fixes: The default “discoverable agents” setting created a second-order prompt injection pathway that survived even after the primary token bug was patched. Organizations must audit default configurations and disable unnecessary inter-agent communication.

In the long term, we will likely see a shift toward zero-trust AI orchestration frameworks where each AI component is individually authenticated, and data provenance is cryptographically verified before being processed. Until those standards mature, the onus remains on both vendors and customers to enforce strict access controls and continuous monitoring.
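The zero-trust orchestration model described above can be sketched as each AI component signing its outputs with its own key, so downstream agents verify provenance before processing. This is an illustrative stdlib-only Python sketch, not any vendor’s implementation; the agent names and key store are assumptions (in practice keys would come from a KMS):

```python
import hashlib
import hmac
import json

# Hypothetical per-agent signing keys; a real system would use a KMS
# and asymmetric keys rather than shared secrets in code.
AGENT_KEYS = {
    "triage-agent": b"key-a",
    "workflow-agent": b"key-b",
}


def sign_envelope(agent: str, payload: dict) -> dict:
    """Wrap a payload with the producing agent's identity and an HMAC."""
    body = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(AGENT_KEYS[agent], body, hashlib.sha256).hexdigest()
    return {"agent": agent, "payload": payload, "mac": mac}


def verify_envelope(env: dict) -> bool:
    """Accept a message only if its MAC matches the claimed agent's key."""
    key = AGENT_KEYS.get(env["agent"])
    if key is None:
        return False  # unknown agent: reject outright
    body = json.dumps(env["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(env["mac"], expected)


env = sign_envelope("triage-agent", {"ticket": "INC0001", "priority": "high"})
print(verify_envelope(env))  # genuine envelope passes
env["payload"]["priority"] = "low"  # tampering in transit
print(verify_envelope(env))  # tampered envelope fails
```

Under this scheme, a second-order prompt injection that tries to relay a crafted payload through agent discovery fails verification unless the attacker also holds a legitimate agent key, which is the individually-authenticated-component property the passage anticipates.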