From highly personalized phishing attacks to automated intrusion attempts, AI is giving threat actors new tools to outmaneuver traditional defenses. At the same time, organizations are embracing AI to protect themselves and scale their operations. It’s a high-speed arms race, and the playing field keeps evolving.
For IT leaders, AI’s implications are profound. The systems and agents designed to streamline work and improve experiences also introduce new risks. Staying secure in this AI-powered world requires fresh thinking, tighter cross-functional collaboration, and smarter oversight.
Here are three major shifts AI is driving in the cybersecurity landscape, and the strategies forward-thinking teams are using to adapt.
1. AI is strengthening both offense and defense
Threat actors are using AI to automate reconnaissance, customize phishing messages, and adapt their methods in real time. These attacks often don’t look like the ones that businesses traditionally train employees for. Instead, they’re more subtle, more convincing, and harder to detect using static rules or signature-based systems.
In response, cybersecurity teams are embedding AI into their own defenses. Machine learning models now analyze vast streams of behavior and activity to surface anomalies in real time, flagging everything from suspicious logins to changes in user behavior that might signal a compromise.
To stay ahead, teams need real-time anomaly detection and behavioral monitoring capabilities. It’s no longer enough to know if an action occurred. Now, you need to understand if it was typical, safe, and aligned with user intent. This means expanding visibility across tools, endpoints, and user journeys, all while building systems that can adapt and learn over time.
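The idea of baselining "typical" behavior can be made concrete with a toy example. This is a minimal sketch, not a production detector: real systems use richer ML models over many signals, but the core logic of flagging deviations from a user's historical baseline looks like this (all names and the z-score threshold are illustrative assumptions):

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag new_value if it deviates more than `threshold` standard
    deviations from the user's historical baseline (a simple z-score;
    production systems use richer models over many signals)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hypothetical example: a user's usual login hours (24-hour clock)
typical_login_hours = [9, 9, 10, 8, 9, 10, 9, 8]
print(is_anomalous(typical_login_hours, 3))   # 3 a.m. login -> True
print(is_anomalous(typical_login_hours, 9))   # usual hour   -> False
```

The same pattern generalizes: the "history" can be API call rates, data-access volumes, or session geography, and the model can learn and adapt as behavior shifts.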
2. AI agents are becoming new attack surfaces
AI agents (think chatbots, onboarding guides, and workflow assistants) are increasingly woven into digital experiences. They improve productivity and scale support, but they also come with unique risks. These agents interact with sensitive data, influence user behavior, and operate across boundaries. If misconfigured or unsupervised, they can expose vulnerabilities, leak information, or confuse customers.
In many organizations, AI agents are deployed quickly, often outside the direct control of security teams. Without analytics and oversight into agent behavior, and the means to take action on what they reveal, agents can act in unpredictable or even risky ways, creating a shadow layer of automation that’s hard to govern.
Effective organizations are putting governance frameworks in place for AI agents. That includes:
- Clear ownership: Designating teams responsible for the creation, training, and monitoring of each agent.
- Conversation and action logging: Capturing detailed histories of what agents say to users and the actions they take.
- Feedback loops: Enabling human review of edge cases and unintended behaviors.
- Fail-safes: Building escalation paths when agents hit their confidence limits or encounter sensitive topics.
Visibility and accountability are essential. The more power AI agents have, the more carefully they need to be managed.
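Two of these practices, action logging and confidence-based fail-safes, can be combined in a single control point. The sketch below is a hedged illustration, not a reference implementation: the threshold, topic list, and function names are all hypothetical, and a real deployment would write to an append-only audit store rather than an in-memory list:

```python
import time

AUDIT_LOG = []          # in production: an append-only, tamper-evident store
CONFIDENCE_FLOOR = 0.7  # hypothetical escalation threshold
SENSITIVE_TOPICS = {"billing dispute", "account deletion"}

def handle(agent_id, user_msg, reply, confidence, topic):
    """Log every agent turn, then apply the fail-safe: escalate to a
    human when the agent is unsure or the topic is sensitive."""
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "user": user_msg,
        "reply": reply,
        "confidence": confidence,
        "topic": topic,
    })
    if confidence < CONFIDENCE_FLOOR or topic in SENSITIVE_TOPICS:
        return "escalate_to_human"
    return "respond"

# A confident answer on a sensitive topic still escalates:
print(handle("onboard-bot", "How do I delete my account?",
             "I can help with that.", 0.9, "account deletion"))
```

Because every turn is logged before the routing decision, the audit trail also captures the edge cases that human reviewers later feed back into agent training.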
3. Security is now a cross-functional responsibility
As AI becomes embedded across tools and teams, cybersecurity can no longer live solely with IT. Sales might deploy a lead-qualifying chatbot. Product might experiment with in-app personalization agents. Marketing might launch AI-generated content workflows. Each initiative carries different levels of risk, and without alignment, security gaps emerge.
What once required a secure login now requires secure intent. It’s not just who is accessing a system, but why, how, and what they’re doing once inside. This is especially critical when AI agents act on behalf of humans (or make decisions that affect them).
To best manage cybersecurity in this environment, forward-looking businesses are building cross-functional security motions. These teams bring together IT, product, support, and revenue operations to:
- Share context on where and how AI is being used
- Define acceptable use and escalation policies
- Monitor performance and flag anomalies collaboratively
- Continuously educate teams on emerging risks and best practices
This alignment ensures that innovation doesn’t outpace safety and that everyone understands their role in keeping systems secure.
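One lightweight way to "share context on where and how AI is being used" is a shared deployment registry that security can query for gaps. This is a sketch under assumed field names (team, owner, review status); the point is the pattern, not the schema:

```python
# Hypothetical registry of AI deployments across teams, giving
# security one place to see every agent and who owns it.
AI_REGISTRY = [
    {"team": "sales",     "use": "lead-qualifying chatbot", "owner": "it-sec", "reviewed": True},
    {"team": "product",   "use": "personalization agent",   "owner": "it-sec", "reviewed": True},
    {"team": "marketing", "use": "content workflow",        "owner": None,     "reviewed": False},
]

def unreviewed(registry):
    """Surface deployments missing an owner or a security review,
    i.e. the 'shadow AI' a cross-functional team should chase first."""
    return [d for d in registry if d["owner"] is None or not d["reviewed"]]

for gap in unreviewed(AI_REGISTRY):
    print(gap["team"], "->", gap["use"])
```

Even this simple inventory turns "where is AI running?" from a guessing game into a standing agenda item for the cross-functional team.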
When it comes to AI, visibility is security
The AI era has introduced incredible possibilities and new kinds of complexity. As agents and algorithms gain more autonomy and decision-making power, maintaining visibility into what’s happening becomes mission-critical.
Security isn’t just about building walls anymore. It’s about understanding behavior, spotting change, and responding quickly when things go off course. That requires tools, processes, and cultures that are proactive, not reactive.
Want to learn more about how Pendo helps IT teams reduce AI-related cybersecurity risks? Get a custom demo here.