ChannelLife UK - Industry insider news for technology resellers

Quantum & AI threats to reshape cybersecurity by 2026

Wed, 17th Dec 2025

Kyndryl security executive Kris Lovejoy has warned that organisations face rising cyber risks from quantum computing and autonomous artificial intelligence as they head toward 2026.

Lovejoy is Global Security and Resilience Leader at Kyndryl. She has set out a series of predictions about how emerging technologies will change cyber defence and digital risk management over the next two years.

She said the most urgent concern is the impact of fault-tolerant quantum computing on today's encryption standards. She also highlighted the emergence of fully autonomous cyberattacks and the shift of security teams into governing roles for agentic AI systems.

Lovejoy said many organisations are not keeping pace with these changes. She cited internal research which found that only a small minority of leaders see quantum as the most impactful technology over the next three years.

"Despite the critical security risks that the post-quantum era will introduce, only 4% of leaders believe quantum will be the most impactful technological advancement in the next three years, leaving most organizations vulnerable (especially as 'harvest now, decrypt later' attacks persist)," said Lovejoy.

Quantum risk rises

Lovejoy said advances in fault-tolerant quantum computing could undermine current cryptographic systems. She said encryption that many organisations treat as robust may not remain reliable.

She warned that some attackers already collect encrypted data in anticipation of future quantum decryption methods. Security researchers describe this practice as "harvest now, decrypt later". It allows adversaries to store sensitive data today and attempt decryption when quantum tools become available.
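One common way to reason about this exposure is Mosca's inequality: if the time the data must stay secret, plus the time a post-quantum migration will take, exceeds the estimated time until a cryptographically relevant quantum computer exists, the data is effectively already compromised by harvest-now-decrypt-later collection. A minimal sketch, with purely illustrative numbers (not forecasts from the article):

```python
def quantum_exposed(secrecy_years: float,
                    migration_years: float,
                    years_to_quantum: float) -> bool:
    """Mosca's inequality: data is at risk if x + y > z, where
    x = how long the data must remain confidential,
    y = how long a post-quantum migration will take,
    z = estimated years until a cryptographically relevant
        quantum computer exists."""
    return secrecy_years + migration_years > years_to_quantum

# Illustrative assumptions: records must stay secret for 15 years,
# migration takes 5 years, and a capable quantum computer is ~12 years away.
print(quantum_exposed(15, 5, 12))  # True: harvested data outlives its encryption
print(quantum_exposed(2, 3, 12))   # False: short-lived data expires first
```

The point of the inequality is that migration has to start long before a quantum computer exists, which is why Lovejoy frames it as a long-term change programme rather than a one-off upgrade.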

Lovejoy said enterprises should view quantum risk as a long-term change programme across networks, applications, identities and supply chains. She said organisations should not treat it as a single technology upgrade.

She said preparation for the post-quantum era needs board-level attention. It also needs coordinated planning between security, IT, operations and business leaders.

Agentic AI oversight

Lovejoy expects agentic AI systems to move deeper into core business processes over the next few years. These systems can act with a degree of autonomy inside workflows.

She predicted that security teams will move beyond traditional defensive roles. She said they will become governance leaders for agentic AI.

In her view, security functions will work on behavioural standards for AI agents. They will also embed policy-as-code into AI workflows and focus on observability and enforceability across the agent lifecycle.
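Policy-as-code here means expressing behavioural rules for agents as machine-checkable code rather than written guidance, so every proposed agent action can be evaluated before it runs. A minimal hypothetical sketch (the agent names, tool names, and action format are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    # Hypothetical record an AI agent emits before executing a step.
    agent_id: str
    tool: str
    target: str

# Declarative policy: which tools each agent class may use,
# and targets that are always off limits.
ALLOWED_TOOLS = {"billing-agent": {"read_invoice", "send_reminder"}}
DENIED_TARGETS = {"prod-database"}

def evaluate(action: AgentAction) -> bool:
    """Return True only if the action passes every policy rule."""
    if action.target in DENIED_TARGETS:
        return False
    allowed = ALLOWED_TOOLS.get(action.agent_id, set())
    return action.tool in allowed

print(evaluate(AgentAction("billing-agent", "read_invoice", "crm")))   # True
print(evaluate(AgentAction("billing-agent", "delete_record", "crm")))  # False
```

Because the rules are code, they can be version-controlled, tested, and enforced uniformly across the agent lifecycle, which is the observability and enforceability Lovejoy describes.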

This shift would place security teams at the centre of AI governance structures. It would also increase their involvement in technology design and deployment decisions.

Autonomous attacks

Lovejoy expects a new type of threat from fully autonomous AI-driven attacks. She forecast that by 2027 attackers will successfully execute end-to-end cyber operations without direct human command.

In this scenario, AI systems would manage every stage of an attack. This would include initial penetration, lateral movement and data exfiltration.

She said such incidents would challenge existing incident response methods. Human-in-the-loop response processes would no longer be fast enough.

Lovejoy said this change would push defenders toward automated and machine-speed defence models. It would also put pressure on organisations to invest in continuous monitoring and rapid containment tools.

Digital trust pressure

Lovejoy expects AI-powered threats such as deepfakes and advanced social engineering to become mainstream in 2026. She said these tools will test digital trust in everyday interactions.

She said organisations that do not adopt automated detection and adaptive response will face increasing difficulty. They may struggle to keep pace with new attack vectors that rely on synthetic media and adversarial AI.

This trend will affect internal security training and external customer interactions. It will also raise questions about identity verification and content authenticity in both public and private sectors.

Pragmatic AI adoption

On enterprise use of agentic AI, Lovejoy expects a measured approach rather than wholesale transformation. She said many businesses will focus on specific process improvements over the next 6 to 12 months.

Organisations are likely to continue testing agentic AI in contained environments. They will work on proving concrete value and building governance frameworks before large-scale deployment.

Lovejoy said progress will rely on selective, high-impact use cases. She does not expect most enterprises to move quickly into fully autonomous operations.

Reskilling demand

Lovejoy said advances in autonomous systems will reshape workforce requirements in security and resilience roles. She expects new training programmes and governance positions to emerge.

She said this shift will affect traditional entry-level and mid-career roles. Employers will need to rethink education models for security staff.

Lovejoy said AI governance, risk management and ethical decision-making will become core skills. She expects these areas to form part of future workforce strategies.

She said organisations that start structured reskilling efforts now will be better placed as automation expands across security operations.

Lovejoy said enterprises that act early on quantum risk, autonomous threat models and AI governance will set the pace in the next phase of cybersecurity change.