ChannelLife UK - Industry insider news for technology resellers

AI's dual role in future cybersecurity: threat & ally


As the world moves towards 2025, experts from various sectors have put forward projections concerning the role of artificial intelligence (AI) in cybersecurity and beyond. These insights, shared by professionals across industries, herald significant changes and highlight potential challenges that may arise with the continued evolution of AI technologies.

Mark Bowling, Chief Information Security and Risk Officer at ExtraHop, warns of a "new wave of traditional fraud" enabled by generative AI. Bowling draws attention to the increasing proficiency of cybercriminals, who will likely exploit generative AI to enhance impersonation tactics. Such developments are expected to pose extensive threats by enabling impersonations of authority figures that extract sensitive information such as Personally Identifiable Information (PII) or credentials. According to Bowling, combating this fraud wave will necessitate strengthening identity protection through measures such as Multi-Factor Authentication (MFA) and Identity and Access Management (IAM) tools that detect abnormal credential usage.
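The kind of abnormal-credential-usage detection Bowling describes can be sketched in a few lines. The sketch below is purely illustrative and does not correspond to any ExtraHop product: the `UserBaseline` model and `requires_step_up` function are hypothetical, and a real IAM platform would use far richer signals. The idea is simply that a sign-in deviating from an account's usual profile triggers an MFA step-up challenge.

```python
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    """Typical sign-in profile for one account (hypothetical model)."""
    usual_countries: set = field(default_factory=set)
    usual_devices: set = field(default_factory=set)
    usual_hours: range = range(7, 20)  # local working hours

def requires_step_up(baseline: UserBaseline, country: str,
                     device_id: str, hour: int) -> bool:
    """Return True when a sign-in deviates from the baseline and
    should be challenged with an additional MFA factor."""
    anomalies = 0
    if country not in baseline.usual_countries:
        anomalies += 1
    if device_id not in baseline.usual_devices:
        anomalies += 1
    if hour not in baseline.usual_hours:
        anomalies += 1
    # Any single deviation triggers step-up; a production system
    # would weight and tune these signals rather than count them.
    return anomalies >= 1
```

Under this toy policy, a user who normally signs in from the UK on a known laptop during working hours would be challenged when a login arrives from an unfamiliar country at 3 a.m.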

Ping Identity's CEO and Founder, Andre Durand, echoes these sentiments regarding trust, suggesting that as AI technologies advance, we must pivot our security mindset to "trust nothing, verify everything." The increasing capabilities of AI in impersonating individuals will drive us to rely less on implicit trust and more on thorough verification processes.

On a similar note, Sadiq Iqbal of Check Point Software Technologies anticipates AI becoming a major enabler of cybercrime. Iqbal predicts that threat actors will leverage AI to craft targeted phishing tactics and adaptive malware, making cybercrime more accessible for less experienced groups, effectively democratising cybercrime.

Beyond these threats, Morey Haber of BeyondTrust forecasts a deflating hype around AI, warning of the "Artificial Inflation" of AI capabilities, which will become apparent as certain overblown promises fall short. Haber anticipates a pivot towards practical AI applications that bolster security without bombarding organisations with marketing hyperbole.

Corey Nachreiner from WatchGuard Technologies discusses the emergence of multimodal AI systems capable of integrating various forms of content for streamlined cyberattacks. These systems could enable less skilled attackers to launch sophisticated attacks, posing further detection and prevention challenges for security teams.

Lastly, Steve Povolny of Exabeam highlights the risks attached to overly trusting AI outputs, which could potentially lead to vulnerabilities within organisations. Povolny suggests instigating a "Zero Trust for AI," advocating for rigorous verification, validation, and fact-checking of AI outputs prior to making critical security decisions. This move underscores the importance of maintaining human oversight as an integral part of AI deployment within security frameworks.
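Povolny's "Zero Trust for AI" stance can be illustrated with a minimal gate that refuses to act on a model's suggestion until it passes explicit checks. Everything here is a hypothetical sketch, not Exabeam's approach: the `vet_ai_suggestion` function, the action allow-list, and the `human_approved` flag are all assumptions used to show the pattern of validating AI output before a critical security decision.

```python
# Hypothetical "Zero Trust for AI" gate: no AI-suggested action is
# executed until it is well-formed, on an explicit allow-list, and,
# for high-impact actions, signed off by a human analyst.

ALLOWED_ACTIONS = {"quarantine_host", "reset_password", "open_ticket"}
HIGH_IMPACT = {"quarantine_host", "reset_password"}

def vet_ai_suggestion(suggestion: dict, human_approved: bool = False) -> bool:
    """Return True only if the AI output passes every verification step."""
    action = suggestion.get("action")
    target = suggestion.get("target")
    if not isinstance(action, str) or not isinstance(target, str):
        return False  # malformed output: reject outright
    if action not in ALLOWED_ACTIONS:
        return False  # unrecognised action: never execute
    if action in HIGH_IMPACT and not human_approved:
        return False  # keep a human in the loop for disruptive steps
    return True
```

The design choice mirrors the article's point: the default is rejection, and human oversight is a hard requirement for anything disruptive, rather than an optional review after the fact.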

The collected views from these industry leaders provide a nuanced picture of the future landscape of AI and cybersecurity. With generative AI set to play both a transformative and a disruptive role, organisations will need to implement stronger verification measures and practical AI applications to balance the benefits and risks of these advancing technologies. As companies brace for these predicted changes, the emphasis remains on fostering resilience, adapting verification processes, and maintaining a balanced view of AI's capabilities and limitations.
