New capabilities correlate AI-driven security incidents, govern agentic browsers and introduce an open-source tool for evaluating LLM manipulation risks
Zenity, a leading end-to-end security and governance platform for AI agents, today announced a significant expansion of its AI security platform. The new release introduces an intelligence layer for correlating AI-driven security incidents, expands coverage to agentic browsers across the enterprise and debuts a new open source tool developed by Zenity Labs to evaluate emerging large language model (LLM) manipulation techniques.
As organizations adopt AI agents, AI assistants and agentic browsers at scale, security teams face increasing difficulty understanding how incidents unfold across identities, workflows and environments. Traditional alerting provides signals, but not the narrative behind them. Zenity’s latest advancements provide a unified approach for detecting, analyzing and governing AI behaviors in real-world enterprise environments.
“With this release we are giving security teams something they have never had before, real visibility into intent,” said Ben Kliger, co-founder and CEO, Zenity. “Our new Correlation Agent does not just detect signals, it interprets them. It understands what an agent is trying to do, by connecting every signal, data point and insight that the Zenity platform collects and generates throughout the agent lifecycle into a single coherent story.”
“This is a game changer for AI security, especially as organizations embark on their goals towards 1 billion agents,” Kliger added. “By transforming scattered signals into high confidence security narratives, we are eliminating guesswork, accelerating investigations and giving teams the clarity they need to operate safely at massive scale.”
New Incident Intelligence for AI Security Investigations
Zenity’s new Issues capability correlates posture findings, runtime anomalies, identity relationships and graph-based insights into high-confidence security incidents. The system unifies these signals into coherent narratives that explain what happened, why it happened and what was impacted. This gives security teams immediate visibility so they can begin investigations without reconstructing events manually.
The Correlation Agent gives teams something they have never had before, visibility into intent. A critical element that traditional detections can’t capture. It interprets behavior, surfaces manipulation attempts and explains what the agent was actually doing - reducing guesswork and shortening the time it takes to understand what is happening.
Extended Coverage for Agentic Browsers
Zenity is expanding its AI security coverage to agentic browsers, focusing first on ChatGPT Atlas, Perplexity Comet, and Dia. These tools create a new source of shadow AI. They autonomously read content across authenticated sessions and take actions on the user's behalf, leaving security teams blind to the difference between human and agent activity. This creates a high-risk surface where a single malicious instruction in an email, webpage or document can lead to data loss, credential misuse or other damaging outcomes at enterprise scale.
Through Zenity’s device agent, organizations can discover agentic browsers, monitor autonomous activity, apply data loss prevention and detect intention driven anomalies in real time. This coverage ensures that agentic browsing behaviors are governed with the same unified policies applied across other copilots and enterprise AI agents. By exposing previously invisible agent behavior and surfacing high-risk actions, Zenity closes the trust gap traditional tools cannot address.
Open Source Tool to Help Teams Minimize Agentic Risk
Zenity Labs is also releasing a new open source tool based on ongoing research into data structure injection and structured self-modeling attacks against LLMs. Safe Harbor adds a dedicated safe action the agent can call when it identifies something harmful, (such as a workflow, tool call or unstructured input). The agent can immediately pivot away from the unsafe workflow instead of following a malicious instruction to completion. This enables developers to reduce the risk of runtime exploitations during the build process.
With these launches, Zenity is strengthening its position as the security platform built for how AI actually behaves across agents, browsers, workflows and autonomous decision chains. The new capabilities help organizations reduce noise, improve investigative depth and gain visibility into the growing ecosystem of agentic and AI-driven applications.
About Zenity
Zenity is the first security and governance platform purpose-built for AI agents - spanning SaaS, home grown platforms (Cloud), and end-user devices (Endpoint). Trusted by Fortune 500 enterprises, Zenity helps security teams confidently adopt AI by delivering defense in depth with full-lifecycle coverage: from agent discovery and posture management to real-time detection, inline prevention, and response. With an agent-centric approach that prioritizes how agents behave, what they access, and which tools they invoke, Zenity eliminates blind spots and enforces consistent policy and controls across environments so organizations can innovate with AI, without compromising security. Learn more at www.zenity.io.
View source version on businesswire.com: https://www.businesswire.com/news/home/20251204892848/en/
“With this release we are giving security teams something they have never had before, real visibility into intent,” said Ben Kliger, co-founder and CEO, Zenity
Contacts
For Media Inquiries:
Elyse Familant
Results PR
Elysef@resultspr.net