AI's impact on cybersecurity: challenges & strategies
The cybersecurity landscape is set to undergo significant change in 2025, with experts predicting that artificial intelligence (AI) will bring both opportunities and challenges. Industry leaders have shared insights into AI's evolving role, the emergence of new regulatory frameworks, and the shift towards continuous compliance.
Sadiq Iqbal of Check Point Software Technologies emphasises the regulatory challenges posed by AI technologies, especially large language models (LLMs). These tools, although transformative, harbour risks such as data integrity issues and algorithmic bias. To address these, Iqbal predicts that 2025 will see new regulatory frameworks, akin to existing standards from NIST or ISO, that guide AI deployment and give organisations tools for compliance and risk mitigation.
Bernd Greifeneder from Dynatrace highlights a shift in compliance towards real-time, dynamic systems, supported by standards such as Australia's CPS 230 and the Hong Kong Monetary Authority's Operational Resilience framework. This approach integrates observability and security, giving organisations the insights needed for compliance and proactive threat detection. AI systems will automate the monitoring, analysis, and alerting involved in regulatory adherence, potentially transforming compliance from static audits into a continuous process.
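To make the idea concrete, here is a minimal sketch (not Dynatrace's implementation) of what continuous compliance monitoring can look like: each rule is a predicate evaluated against telemetry events as they arrive, rather than during a periodic audit. The rule names and event fields below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when the event is compliant

# Hypothetical rules; a real deployment would derive these from the
# applicable regulatory standard.
RULES = [
    Rule("encryption-at-rest", lambda e: e.get("storage_encrypted", False)),
    Rule("log-retention-90d", lambda e: e.get("log_retention_days", 0) >= 90),
    Rule("mfa-enabled", lambda e: e.get("mfa_enabled", False)),
]

def monitor(events: Iterable[dict]) -> None:
    """Evaluate every incoming event against all rules and alert on violations."""
    for event in events:
        for rule in RULES:
            if not rule.check(event):
                # In practice this would page an on-call team or open a ticket.
                print(f"ALERT {event['resource']}: violates {rule.name}")

monitor([
    {"resource": "db-prod-1", "storage_encrypted": True,
     "log_retention_days": 30, "mfa_enabled": True},
])
```

The point of the design is the inversion Greifeneder describes: compliance state is recomputed on every change to the environment, so violations surface in minutes rather than at the next audit.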
Norman Rice of Extreme Networks discusses the evolving role of AI in enterprises, which is moving towards practical, ROI-driven applications rather than broad disruption. While initial excitement suggested AI could automate numerous processes, the reality is more about incremental efficiency gains, such as more accurate real-time detection of IT issues and network technology certification, applying AI to critical, well-defined use cases.
Steve Wilson of Exabeam warns that advanced AI tools will be in attackers' hands by 2025. With enhanced reasoning abilities, generative AI could enable sophisticated phishing scams using deepfake voices and video avatars. To counter these threats, organisations must adopt AI-driven security measures that evolve alongside attack strategies.
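As a rough illustration of a defence that evolves with attack strategies, the sketch below (not Exabeam's product, and assuming scikit-learn is available) incrementally retrains a text classifier as newly labelled phishing samples arrive, rather than relying on a model frozen at deployment time. The sample messages are invented.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)
model = SGDClassifier(loss="log_loss")

def update(texts, labels):
    """Fold a fresh batch of labelled messages into the model (0=benign, 1=phishing)."""
    model.partial_fit(vectorizer.transform(texts), labels, classes=[0, 1])

# Initial training batch, then an incremental update as a new campaign appears.
update(["quarterly report attached", "verify your account now or lose access"],
       [0, 1])
update(["urgent: CEO needs gift cards today"], [1])

prediction = model.predict(vectorizer.transform(["please verify your account"]))
print("phishing" if prediction[0] == 1 else "benign")
```

The incremental `partial_fit` loop is the key property here: each confirmed attack becomes training signal, so the defence shifts as attacker wording does.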
Sarah Cleveland from ExtraHop underscores AI's transformative impact on cybersecurity, enhancing the ability to prioritise and address threats in real time. As attackers become more sophisticated, organisations that integrate AI into their defence strategies can detect anomalies a human analyst would miss and allocate resources more effectively. AI thus acts as a force multiplier for security teams, maintaining a proactive posture in an increasingly complex digital environment.
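A toy example of the triage Cleveland describes: score each host's telemetry against a fleet baseline and surface the largest deviations first, so analysts spend their time on the most anomalous behaviour. Production systems use far richer models than a z-score; the request rates below are invented.

```python
import statistics

# Invented telemetry: requests per minute per host.
requests_per_minute = {
    "host-a": 110, "host-b": 95, "host-c": 102,
    "host-d": 480,  # sudden spike worth a human look
    "host-e": 99,
}

mean = statistics.mean(requests_per_minute.values())
stdev = statistics.stdev(requests_per_minute.values())

# Rank hosts by how many standard deviations they sit from the baseline,
# so the worklist starts with the most anomalous behaviour.
scored = sorted(
    ((host, abs(rate - mean) / stdev) for host, rate in requests_per_minute.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for host, z in scored:
    print(f"{host}: z-score {z:.2f}")
```

This is the "multiplier" effect in miniature: the model does not replace the analyst, it orders the queue so human attention lands where it matters.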
George Moawad from Genetec notes a mix of enthusiasm and concern regarding AI, as highlighted in Genetec's State of Physical Security Report. While 42% of security decision-makers show interest in AI solutions, privacy, ethics, and data bias are pivotal concerns. Companies are focusing on responsible AI adoption, ensuring transparency, governance, and compliance with ethical standards.
Jason Hardy of Hitachi Vantara foresees organisations adopting a more measured AI strategy, prioritising ROI in infrastructure investments. Companies will start by identifying specific problems to solve, enabling a pragmatic approach that determines data and infrastructure needs for successful AI implementation.
Darrell Geusz of Ping Identity predicts the convergence of payments and identity by 2025, facilitated by verifiable credentials held on smartphones. This convergence will allow AI assistants to securely execute tasks on behalf of users, such as making payments, highlighting the importance of secure credential management.
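The trust anchor for such transactions is the issuer's signature over the credential. The sketch below (assuming the Python cryptography package) shows only that signing and verification step; real deployments follow the W3C Verifiable Credentials data model, and the claim fields here are invented.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer signs a credential containing claims about the user.
issuer_key = Ed25519PrivateKey.generate()
credential = json.dumps({"subject": "did:example:alice",
                         "claim": "payment_authorised",
                         "limit_aud": 500}).encode()
signature = issuer_key.sign(credential)

# An AI assistant (or a merchant) verifies the issuer's signature before
# acting on the credential, e.g. before executing a payment.
try:
    issuer_key.public_key().verify(signature, credential)
    print("credential accepted")
except InvalidSignature:
    print("credential rejected")
```

Because verification needs only the issuer's public key, the credential can be checked offline on the device, which is what makes smartphone-held credentials practical for payments.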
Corey Nachreiner of WatchGuard Technologies acknowledges that generative AI (GenAI) has yet to deliver transformative business change. Although the initial hype has faded, ongoing improvements in GenAI capabilities, particularly deepfake technologies, present new threats. Businesses need to prepare for sophisticated attacks that combine GenAI with other tactics to exploit organisational trust.