
Anthropic AI's Mythos triggers warnings over cyber risk

Thu, 23rd Apr 2026

Anthropic AI's Mythos model has prompted warnings from cyber security specialists, heightening concerns about how generative AI could increase the scale and sophistication of cyberattacks.

The response follows reports that unauthorised users accessed Mythos by simply changing the model name. Security experts say the incident shows how quickly advanced AI systems can move beyond controlled environments into wider circulation.

Security leaders are urging boards and executives in the UK and elsewhere to treat AI-driven cyber risk as a strategic issue. They argue that recent developments expose both the fragility of AI infrastructure and the potential for these systems to industrialise existing cybercrime techniques.

Sujatha S Iyer, Head of AI Security at ManageEngine, Zoho's IT division, said the emergence of tools such as Mythos should force organisations to rethink their assumptions about threat actors and the speed of attacks.

"As AI lowers the barrier of entry for cybercriminals, the baseline for defence must too rise. Anthropic AI's Mythos model is a wake-up call - reminding us that cyber resilience isn't just an IT issue. This is a priority that requires board-level attention," said Sujatha S Iyer, Head of AI Security, ManageEngine, Zoho.

AI systems built for code analysis, content generation or research can also help attackers. Security professionals say these models can support malicious users with reconnaissance, phishing, vulnerability discovery and exploit development, even when guardrails are in place.

Iyer said AI is changing the mechanics and speed of common attack types, putting new pressure on organisations that still rely on traditional defences.

"We're entering a phase where attackers can automate reconnaissance, personalise phishing at scale, and identify vulnerabilities faster than many organisations can respond. This fundamentally shifts the balance in favour of threat actors," said Iyer.

Many businesses still depend on perimeter-based security architectures that assume a clear boundary between trusted internal systems and the outside world. But as cloud services, remote work and software-as-a-service platforms have expanded, that boundary has become less distinct.

Companies now face adversaries that can adapt their methods in near real time, Iyer said.

"What's critical now is that businesses move away from reactive security models. Traditional perimeter-based approaches are no longer sufficient when threats are becoming more adaptive and intelligent. Instead, organisations need to prioritise continuous monitoring, identity-first security, and rapid incident response capabilities that can keep pace with AI-driven threats," said Iyer.

Security teams are also focusing on basic operational processes, including patching, configuration management and staff training. Experts say AI-enabled attackers can rapidly scan public-facing systems for known flaws that remain unpatched.

Weaknesses in day-to-day practice often undermine investments in advanced tools, Iyer said.

"There's also a growing need to strengthen cyber hygiene at every level of the organisation. Even the most advanced tools can be undermined by poor patch management or lack of employee awareness," said Iyer.

Concerns about Mythos intensified after reports that external users had accessed the model without authorisation. The method described involved changing a model identifier rather than breaching infrastructure through more complex means.

Shane Fry, Chief Technology Officer at RunSafe Security, said the incident illustrates how exposed AI systems can become even when providers intend to limit access.

"Unauthorized users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed. The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place," said Shane Fry, Chief Technology Officer, RunSafe Security.

Security practitioners say the Mythos episode raises questions about access control, monitoring and logging for advanced models. It also highlights how powerful AI systems, once exposed, can become part of the wider cyber ecosystem regardless of a vendor's policies.
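As a minimal sketch of the sort of control practitioners are pointing to, the snippet below shows a gateway-side allow-list and audit log for model identifiers, so that changing a model name in a request cannot by itself select an unreleased model. The endpoint behaviour, model names and request shape are assumptions made for illustration, not a description of Anthropic's actual API.

```python
# Illustrative sketch only: a hypothetical gateway check that denies requests
# for model identifiers not explicitly released, and logs every attempt.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-gateway")

# Explicit allow-list of released models; anything absent is denied by default,
# so guessing or swapping a model name cannot reach an unreleased model.
RELEASED_MODELS = {"example-model-1", "example-model-2"}

def authorise_model_request(user_id: str, model_name: str) -> bool:
    """Return True only if the requested model is on the allow-list."""
    allowed = model_name in RELEASED_MODELS
    # Audit every request, allowed or not, so unusual access patterns
    # (for example repeated probing of unknown model names) stay visible.
    log.info("user=%s model=%s allowed=%s", user_id, model_name, allowed)
    return allowed

if __name__ == "__main__":
    print(authorise_model_request("alice", "example-model-1"))    # True
    print(authorise_model_request("mallory", "unreleased-model"))  # False
```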

For UK organisations, the comments from Iyer and Fry reflect a broader shift in cyber security thinking. Boards are being asked to treat AI as both a tool for defence and a risk multiplier for adversaries.

Vendors and security teams are now assessing how AI models can be integrated into monitoring and response workflows without creating new attack surfaces. At the same time, they are examining how adversaries might use the same class of models to probe public infrastructure, corporate networks and the software supply chain.

Regulators in the UK and Europe have signalled tighter oversight for providers of advanced AI systems. The Mythos case is likely to feed into ongoing debates about model access, transparency and safety requirements.

The incident has also renewed attention on software hardening. Fry said teams maintaining critical systems will need to assume that automated vulnerability discovery will become faster and more accurate, whether through legitimate tools or models such as Mythos.

Security leaders now expect AI-enabled offensive tools to move into the mainstream of cybercrime. They say the balance between defenders and attackers will depend on how quickly organisations improve monitoring, identity controls and secure development practices.