Recent disclosures have highlighted a critical security flaw in the ServiceNow AI Platform, identified as CVE-2025-12420, with a CVSS score of 9.3. This vulnerability, dubbed "BodySnatcher," allows unauthenticated users to impersonate others and perform arbitrary actions on their behalf. The flaw was patched on October 30, 2025, with updates available for both hosted and self-hosted instances. Despite no evidence of exploitation in the wild, the severity of this vulnerability necessitates immediate attention from users to apply the security updates.
The vulnerability is particularly concerning due to its ability to bypass multi-factor authentication (MFA) and single sign-on (SSO) protections. By exploiting a hardcoded platform-wide secret and account-linking logic, attackers can impersonate any ServiceNow user using only an email address. This could lead to unauthorized actions such as creating backdoor accounts with elevated privileges and subverting security controls.
This disclosure follows previous reports of vulnerabilities in ServiceNow's Now Assist generative AI platform, which could be exploited for second-order prompt injection attacks. These attacks enable unauthorized actions, including data exfiltration and privilege escalation, posing significant risks to organizations relying on these AI-driven tools.
CVE-2025-12420 represents a severe threat due to its potential to allow unauthorized access and control over ServiceNow AI systems. The vulnerability enables attackers to impersonate users and execute privileged actions, potentially leading to data breaches and operational disruptions. The flaw's ability to bypass MFA and SSO protections further exacerbates its impact, making it a critical concern for organizations using ServiceNow's AI capabilities.
The vulnerability's exploitation involves chaining a hardcoded secret with account-linking logic that trusts email addresses, allowing attackers to bypass access controls. This method of attack is particularly dangerous as it can be executed remotely, enabling attackers to manipulate AI-driven workflows and compromise sensitive data.
Clients using ServiceNow's AI Platform may face significant risks if the CVE-2025-12420 vulnerability is exploited. Potential impacts include operational disruptions due to unauthorized actions performed by impersonated users, data breaches resulting from unauthorized access to sensitive information, and financial losses associated with these incidents. Additionally, organizations may suffer reputational damage if customer or partner data is compromised.
From a compliance perspective, exploitation of this vulnerability could lead to regulatory challenges, audits, or penalties due to unauthorized access and data breaches. Organizations must ensure they are aligned with relevant laws and regulations to mitigate these risks.
To mitigate the risks associated with CVE-2025-12420, clients should take the following actions:
By implementing these measures, organizations can reduce the risk of exploitation and protect their systems from unauthorized access. Continuous monitoring and proactive security practices are essential in maintaining a secure environment.
1898 & Co. is actively addressing the current threat landscape by offering specialized services to help clients secure their AI-driven platforms. Our team provides tailored security assessments and vulnerability management solutions designed to identify and mitigate risks associated with AI technologies.
We have updated our security protocols to incorporate the latest threat intelligence related to AI vulnerabilities, ensuring our clients receive the most current guidance. Our collaborative efforts with industry experts and government agencies enhance our ability to provide comprehensive support in addressing emerging threats.
Our ongoing research into AI security vulnerabilities allows us to offer cutting-edge solutions that protect against sophisticated attack vectors. We have successfully assisted clients in implementing robust security measures that safeguard their AI systems from exploitation.