ChannelLife UK - Industry insider news for technology resellers
Glowing ai brain shield blocking red warning lines cybersec art

CrowdStrike unveils Falcon AIDR to secure AI prompts

Wed, 17th Dec 2025

CrowdStrike has launched Falcon AI Detection and Response, a new product that targets attacks on the prompt and agent interaction layer of enterprise artificial intelligence systems.

The product extends the Falcon security platform into a part of AI use that many cyber teams do not yet monitor. It addresses prompt injection, jailbreaks and other attempts to influence AI agents and extract sensitive data.

Falcon AI Detection and Response, known as AIDR, is now generally available. CrowdStrike positions it as a single system that covers AI data, models, agents, identities, infrastructure and user interactions across an organisation.

Michael Sentonas, President of CrowdStrike, said prompt-based attacks were now a core concern for customers using generative AI tools.

"Prompt injection is a frontier security problem. Adversaries are injecting hidden instructions into GenAI tools to weaponize the very systems transforming how work gets done," said Michael Sentonas, president of CrowdStrike. "Falcon AIDR secures every prompt, response, and agent action in real time, extending the power of the Falcon platform to the interaction layer and delivering complete protection across our customers' AI infrastructure."

The product focuses on the interaction layer where generative AI systems receive instructions and act on them. Cyber criminals increasingly target this stage of AI use. They use crafted prompts and hidden instructions to change outputs, control agents or reach data that should remain restricted.

CrowdStrike describes this interaction layer as a new attack surface. It describes prompts as the new malware. The company links this risk to the rapid spread of generative AI tools in day-to-day work and in software development.

New AI risks

Enterprises are deploying chat-style AI interfaces and autonomous agents that can trigger actions inside business systems. These agents often connect to documents, internal applications and external APIs. This structure creates new paths for attackers who can influence the prompts or the context that the models receive.

Falcon AIDR sits over this usage layer. It monitors prompts, responses and agent actions in real time. It then applies policy and blocks interactions that match known attack patterns or breach corporate rules.

The product includes logging of how employees use AI tools and how AI agents behave at runtime. These logs support compliance checks and incident investigations. They also give security teams a record of which prompts and responses led to specific automated actions.

CrowdStrike says AIDR uses research into adversarial prompts and more than 180 known prompt injection techniques. This threat intelligence underpins detection of prompt injection, jailbreak attempts and unsafe outputs. The system blocks such prompts before they can influence model behaviour or downstream systems.

Real-time controls

Falcon AIDR also offers real-time controls on how AI can operate inside an organisation. The product can block unsafe interactions, contain malicious or unexpected agent behaviour and enforce policy across users and teams.

The data protection features scan prompts and responses for sensitive material. This includes credentials and regulated data. The system blocks that content before it reaches AI models, agents or external AI providers.

Developers can integrate AIDR into AI applications and agent frameworks. This design embeds safeguards at build time. It aims to reduce the risk that AI features in new software expose data or trigger unsafe actions when deployed at scale.

Part of Falcon platform

Falcon AIDR operates as part of the broader Falcon platform. CrowdStrike already offers tools for endpoint and cloud workload protection, identity security and data security. The addition of AIDR gives customers a single vendor across traditional endpoints and AI-specific layers.

The company positions this as a unified model for AI security. The platform covers underlying environments where AI runs and the higher interaction layers where prompts and agents sit. It supports AI in both development environments and general workforce use.

The launch reflects a wider shift in enterprise security budgets. Organisations are starting to treat AI systems as separate assets that require direct monitoring and control. Many boards now ask for evidence that generative AI programmes comply with data protection law and internal risk frameworks.

CrowdStrike plans further engagement with customers on AI security strategy. It will host a virtual AI summit on secure AI adoption and development, with regional sessions for the Americas, Asia-Pacific and Europe, the Middle East and Africa.