Zero Trust: a proven solution for the new AI security challenge
Date:
Tue, 07 Oct 2025 09:00:52 +0000
Description:
As AI agents and LLMs reshape work, Zero Trust offers the proven path to harness innovation without compromising security.
FULL STORY ======================================================================
As organizations race to unlock the productivity potential of large language models (LLMs) and agentic AI, many are also waking up to a familiar
security problem: what happens when powerful new tools have too much freedom, too few safeguards, and far-reaching access to sensitive data?
From drafting code to automating customer service and synthesizing business insights, LLMs and autonomous AI agents are redefining how work gets done.
But the same capabilities that make these tools indispensable (the ability to ingest, analyze, and generate human-like content) can quickly backfire if not governed with precision.
When an AI system is connected to enterprise data, APIs, and applications without proper controls, the risk of accidental leaks, rogue actions, or malicious misuse skyrockets. It's tempting to assume that enabling these new
AI capabilities will require abandoning existing security principles.
In reality, the opposite is true: the tried-and-true Zero Trust architecture that has shaped resilient cybersecurity in recent years is now needed more than ever to secure LLMs, AI agents, AI workflows, and the sensitive data they interact with. Only with Zero Trust's identity-based authorization and enforcement approach can complex AI interactions be made secure.

The AI Risk: Same Problem, Increased Complexity, Higher Stakes
LLMs excel at rapidly processing vast volumes of data. But every interaction between a user and an AI agent, an agent and a model, or a model and a database creates a new potential risk. Consider an employee who uses an LLM
to summarize confidential contracts. Without robust controls, those
summaries, or the contracts behind them, could be left exposed.
Or imagine an autonomous agent granted broad permissions to speed up tasks. If it isn't governed by strict, real-time access controls, that same agent could inadvertently pull more data than intended, or be exploited by an attacker to exfiltrate sensitive information. In short, LLMs don't change the fundamental security challenge. They simply multiply the pathways and scale of exposure.
This multiplication effect is particularly concerning because AI systems operate at machine speed and scale. A single unmanaged access that might expose a handful of records in traditional systems could, when exploited by
an AI agent, result in the exposure of thousands or even millions of
sensitive data points in seconds.
Moreover, AI agents are capable of chaining actions together, calling APIs, or orchestrating workflows across multiple systems, activities that blur traditional security perimeters and complicate the task of monitoring and containment.
In this environment, organizations can no longer rely on static defenses. Instead, security must be dynamic and based on the identity of each user, agent, LLM, and digital resource to enable adaptive, contextual, and least-privilege access at every turn.

The Amplified Need for Zero Trust in an AI World
Zero Trust rests on a simple but powerful idea: never trust, always verify. Every user, device, application, or AI agent must continuously prove who they are and what they're allowed to do, every time they attempt an action.
This model maps naturally to modern AI environments. Instead of merely filtering prompts, retrieved data, or outputs (filtering that can be bypassed with clever prompts), Zero Trust enforces security deeper in the stack.
It governs which agents and models can access which data, under what conditions, and for how long. Think of it as putting identity and context at the center of every interaction, whether it's a human requesting data or an AI process operating autonomously in the background.
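As a rough sketch of that idea, the gateway in front of every data source can check the caller's identity and entitlements before anything is returned, treating AI agents exactly like human users. All names, resources, and policy entries below are hypothetical, and a real deployment would use an IAM system rather than an in-memory dictionary:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    name: str          # e.g. a human user or an AI agent identity
    roles: frozenset   # roles granted to this identity

# Illustrative policy: resource -> roles allowed to read it
POLICY = {
    "contracts/confidential": {"legal", "contracts-reader"},
    "crm/customers": {"sales"},
}

def authorize(principal: Principal, resource: str) -> bool:
    """Never trust, always verify: deny unless an explicit grant exists."""
    allowed = POLICY.get(resource, set())  # unknown resource -> deny by default
    return bool(principal.roles & allowed)

agent = Principal("contract-summarizer-agent", frozenset({"contracts-reader"}))
print(authorize(agent, "contracts/confidential"))  # True: explicit grant
print(authorize(agent, "crm/customers"))           # False: default-deny
```

The key design choice is default-deny: the agent's access comes only from grants tied to its identity, not from anything a prompt says.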
Consider prompt injection attacks, where malicious inputs trick an LLM into revealing sensitive data or performing unauthorized tasks. Even the most advanced filtering systems have proven vulnerable to these jailbreak techniques.
But with Zero Trust in place, the damage from such an attack is contained because the AI process itself lacks standing privileges. The system verifies access requests made by AI components independently of prompt interpretation or filtering, so a compromised prompt cannot escalate into a data exposure.

How to Apply Zero Trust to LLM Workflows
Securing LLMs and generative AI doesn't mean reinventing the wheel. It means extending proven Zero Trust principles to new use cases:
- Tie AI agents to verified identities: Treat AI processes like human users. Each agent or model needs its own identity, roles, and entitlements.
- Use fine-grained, context-aware controls: Limit an AI agents access based on real-time factors like time, device, or sensitivity of the data requested.
- Enforce controls at the protocol level: Don't rely solely on prompt, output, or retrieval-level filtering. Apply Zero Trust deeper, at the system and network layers, to block unauthorized access no matter how sophisticated the prompt.
- Maintain Zero Trust along chains of AI interactions: Even for complex chains of interactions - such as a user invoking an agent that calls another agent, which uses an LLM to access a database - identity and entitlements must be traced and enforced at each step of the sequence.
- Continuously monitor and audit: Maintain visibility into every action an agent or model takes. Tamperproof logs and smart session recording ensure compliance and accountability.
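Two of the principles above, context-aware controls and tamper-evident auditing, can be sketched together: each access decision consults real-time context (here, data sensitivity, time of day, and device posture, all illustrative), and every decision is appended to a hash-chained log so later tampering is detectable. This is a minimal sketch under assumed policy rules, not a production implementation:

```python
import hashlib
import json
import time

AUDIT_LOG = []  # each entry carries the hash of the previous entry

def log_action(principal: str, resource: str, decision: str) -> None:
    """Append a hash-chained entry; altering any past entry breaks the chain."""
    prev_hash = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else "genesis"
    entry = {"ts": time.time(), "principal": principal,
             "resource": resource, "decision": decision, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    AUDIT_LOG.append(entry)

def context_allows(sensitivity: str, hour: int, device_managed: bool) -> bool:
    # Example rule: highly sensitive data only from managed devices,
    # and only during business hours.
    if sensitivity == "high":
        return device_managed and 9 <= hour < 18
    return True

def request_access(principal: str, resource: str, sensitivity: str,
                   hour: int, device_managed: bool) -> bool:
    allowed = context_allows(sensitivity, hour, device_managed)
    log_action(principal, resource, "allow" if allowed else "deny")
    return allowed

print(request_access("report-agent", "contracts/q3", "high", 14, True))  # True
print(request_access("report-agent", "contracts/q3", "high", 23, True))  # False
```

Because every decision, allowed or denied, is logged with a hash of its predecessor, an auditor can verify the whole sequence and spot any rewritten entry.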
To apply Zero Trust to AI, organizations will need proper identity management solutions for AI models and agents, much as they do today for employees. These identities underpin extending IAM (Identity and Access Management) to AI assets and digital resources for consistent policy enforcement.
By applying Zero Trust to its AI systems, an organization can move from hoping AI projects won't leak data or go rogue to knowing they cannot. This assurance is more than a technical advantage; it's a business enabler. Organizations that can confidently deploy AI while safeguarding their data will innovate faster, attract more customers, and maintain regulatory compliance in an environment where laws around AI usage are rapidly evolving.
Regulators worldwide are signaling that AI governance will require demonstrable safeguards against misuse, and Zero Trust provides the clearest path toward compliance without stifling innovation. AI promises
transformative gains, but only for those who can harness it safely. Zero
Trust is the proven security model that ensures the benefits of AI can be realized without opening the door to unacceptable risks.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here:
https://www.techradar.com/news/submit-your-story-to-techradar-pro
======================================================================
Link to news story:
https://www.techradar.com/pro/zero-trust-a-proven-solution-for-the-new-ai-security-challenge
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)