In this follow-up to my 2024 conversation with Rohit Ghai, CEO of RSA, we discussed why identity threats are rising, citing factors such as geopolitical instability. He highlighted the value of passwordless technology, which frees users from constantly entering obscure strings of letters and numbers to log in.
Particularly fascinating was his account of the new ways hackers are using AI for cyberattacks. He also explained how AI can help close the cybersecurity talent gap by automating alert handling, patching, and identity governance—including through just-in-time access provisioning.
Key Points: Cybersecurity, Identity Threats, and AI
Cybersecurity doesn’t exist in a vacuum; its effectiveness (or lack thereof) is shaped by larger cultural trends and related technologies like artificial intelligence and authentication strategies.
Select Quotes: “AI Has Entered a New Phase”
The Rise in Identity Attacks Is Not a Failure—It’s a Reflection of the Times
In explaining why identity threats are escalating, Ghai points not to a breakdown in cybersecurity capabilities, but to broader forces that adversaries are exploiting.
“We are living in a recessionary climate, as you know, and it’s a time of much geopolitical volatility as well. In times like this, it is typical for cyber threats—particularly identity-related cyber threats—to be on the rise.
“The reason is pretty obvious: the threat actor, the hacker, is a rational actor. They’re looking to take advantage of the situation. They know budgets are under pressure and that security teams are distracted, perhaps understaffed or going through personnel changes. Nation-state threat actors are especially active during times like this.
“So it’s not surprising to me that threat activity is increasing. But that rise is more a reflection of the global backdrop than a verdict on the cybersecurity industry’s progress—which, I believe, is very real.”
Strong Authentication Isn’t Enough—Integration Is Key
Ghai highlights a dangerous blind spot: even the most advanced authentication solutions can be bypassed if they don’t integrate with other parts of the identity stack.
“In cybersecurity, we’ve long focused on securing authentication—strong multifactor, biometrics, everything. But the Marks & Spencer breach showed us that attackers don’t always need to break through authentication; they bypass it. They simply called the help desk and convinced a technician they were a legitimate user. These users had very sophisticated MFA, but it didn’t matter. The attacker exploited a gap in the identity lifecycle.
“If the authentication tool and the identity governance solution had shared context, this breach could have been prevented. That’s the danger of siloed tools—they don’t talk to each other, and the attackers are absolutely taking advantage of that. This isn’t theoretical. In the MGM case and the M&S case, we’re talking about hundreds of millions in damage.”
The AI Threat Has Entered a New Phase: Intelligent, Personalized Attacks
AI is now enabling far more than the automation of brute-force attacks. According to Ghai, we’ve entered an era where adversarial AI is capable of intelligent reconnaissance and deeply personalized deception.
“AI-based attacks have evolved in three distinct phases. First, there was brute-force automation—simple password spraying, using scripts and dictionaries. Then we entered phase two: impersonation, using deepfake voices and AI-generated content to socially engineer access.
“But now we’re entering phase three—what I call agentic adversarial AI. These are intelligent agents that don’t just impersonate or automate—they investigate. They mine your social media, your digital footprint, your personal history to build highly targeted, context-aware attacks.
“Instead of guessing your password, they know your pet’s name, your kids’ schools, and they craft an attack that feels legitimate. That’s the leap—from brute force to true intelligence—and it’s unfolding in real time.”
All quotes edited lightly for length and clarity.