At the RSA Security Conference in San Francisco, HiddenLayer CEO and co-founder Chris “Tito” Sestito outlined a rapidly shifting cybersecurity landscape driven by the rise of agentic AI systems.
As enterprises move from static AI models to autonomous agents capable of executing tasks, interacting with data, and engaging with customers, the attack surface expands dramatically. Sestito emphasized that while these systems unlock powerful new efficiencies, they also introduce new vulnerabilities, including prompt injection, excessive access, and single points of failure that can be exploited by both internal missteps and external threat actors.
Hidden Layer’s latest platform enhancements, including agentic runtime visibility and expanded telemetry, are designed to address this emerging risk environment by giving organizations deep observability into how AI agents behave, interact, and potentially expose sensitive systems.
The broader challenge, Sestito argues, is that enterprises are deploying AI faster than they can secure it, creating a uniquely risky moment where attackers are rapidly adapting alongside defenders. In this new paradigm, organizations must rethink traditional security strategies to account for autonomous systems operating at machine speed, often with broad, cross-functional access.
Core Takeaways
Agentic AI Expands the Attack Surface: As AI systems evolve into autonomous agents capable of executing tasks and accessing sensitive systems, they create new entry points for attackers, increasing both internal and external risk exposure.
Single Points of Failure Become Critical Risks: Agentic systems often consolidate access across multiple systems, meaning that compromising a single agent can grant attackers broad reach across an organization.
Runtime Visibility Is Essential: Hidden Layer’s platform provides deep observability into AI and agentic system behavior, enabling security teams to trace events, analyze attacks, and understand how actions propagate across systems.
A Dangerous Gap Between Deployment and Security: Enterprises are rapidly adopting AI technologies without fully understanding or mitigating their risks, creating a window of vulnerability as threat actors quickly adapt to exploit these systems.
Key Quotes
The Expanding Reach of Agentic Systems
“When you talk about agentic systems, you’re really talking about automation layered on top of generative AI. These systems can now perform tasks, interact with data, and even engage with customers in ways that were previously limited to human employees. That creates enormous efficiency, but it also dramatically increases their reach within an organization.”
“When a single system can access data, communicate externally, and execute workflows, it becomes a powerful point of leverage. For threat actors, that means a single compromise can unlock access to multiple systems, making these environments far more attractive targets than traditional applications.”
External Threats Exploiting AI Interfaces
“The biggest shift is that attackers no longer need to break into your organization in the traditional sense. Instead, they can interact directly with systems that are designed to be publicly accessible, such as chatbots or customer-facing agents, and attempt to bypass guardrails or manipulate prompts.”
“This opens the door to entirely new attack vectors, including prompt injection and unintended data access. These systems are built to be helpful and responsive, which makes them fundamentally different from traditional software—and in many ways, easier to manipulate if not properly secured.”
Why Observability Is the New Security Foundation
“What organizations need now is visibility into how these systems actually behave at runtime. It’s not enough to know that an agent exists—you need to understand what it’s doing, what tools it’s calling, and how its actions propagate across your environment.”
“Our platform enables security teams to trace specific events, such as where a prompt injection originated, what actions followed, and how that activity moved between agents and humans. That level of insight is critical for both detecting and responding to threats in real time.”
A Uniquely Risky Moment for AI Adoption
“We are absolutely in a uniquely dangerous moment. Organizations are under pressure to deploy AI quickly, but the technology itself is inherently insecure in many ways, especially when guardrails can be bypassed relatively easily.”
“If companies are not implementing additional security controls beyond what comes out of the box, they are exposing themselves, their customers, and their employees to significant risk. The pace of adoption is outstripping the pace of security, and that gap is where attackers thrive.”