Mixed reactions from the AI community to the King's Speech
The recent King's Speech has sparked significant discourse among experts regarding future UK legislation on Artificial Intelligence (AI). Notably, there was no introduction of a new AI Bill, which many had anticipated. The speech's focus on bringing "appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models" leaves room for interpretation and has drawn mixed reactions from the AI community.
Mark Jones, a Partner at Payne Hicks Beach, expressed disappointment over the absence of new comprehensive AI legislation. He noted, "Many expected today's King's Speech to introduce a new AI Bill, bringing the UK in line with Europe (the Artificial Intelligence Act). Instead, we have vague promises of appropriate legislation." Highlighting the current government's focus, Jones added, "It is clear that the government's priority is on cyber security and preventing hacking incidents. However, the silence on online harms and deepfakes is a missed opportunity. There is no mention of criminalising the creation of sexually explicit deepfakes, which failed to pass through parliament before the general election."
Complementing this perspective, Dr Marc Warner, CEO of Faculty AI, acknowledged the necessity of regulation but cautioned against possible "regulatory overreach" by Labour. Warner argued that for decades AI has proven its safety and efficacy in specialised applications such as predicting travel times, spotting bank fraud, and reading patient scans. He stated, "Embracing these narrow applications of AI should be the priority. Cracking down here would stifle growth, hamper innovation, and deny the public better, faster, and cheaper public services." Warner suggested leveraging narrow AI while implementing sensible rules for more advanced systems as a balanced approach moving forward.
Adding to the conversation, Peter van der Putten, Head of the AI Lab at Pegasystems and Assistant Professor of AI at Leiden University, remarked on the shift in the UK's approach to AI regulation. He observed, "The King's Speech suggests the UK is now following the path already set by the EU in moving from codes of good practice to actual legislation guiding the development and use of large language models (LLMs)." Van der Putten welcomed this progression, emphasising the necessity of clear and robust guidelines. However, he stressed the importance of aligning new UK laws with the existing EU AI Act, which encompasses a broader range of AI systems.
Van der Putten further explained, "While the UK government appears focused on frontier generative AI models, risk emanates not just from the largest, most advanced AI systems but more frequently from the misuse of smaller, limited AI models. The real threat is not from AI superintelligence but from basic artificial unintelligence." This point underscores the need for comprehensive, well-informed legislation that addresses the full spectrum of AI technologies, rather than focusing narrowly on advanced models alone.
Stakeholders from various sectors await further details on the proposed legislation to better understand its implications. As the experts indicate, the delicate balance between promoting innovation and ensuring public safety remains a pivotal challenge. The direction of AI regulation in the UK will play a significant role in shaping the development and application of AI technologies within the country, and may well influence international standards.
As the dialogue around AI legislation continues to unfold, the consensus among industry experts appears to advocate for a measured approach. This would involve fostering innovation in narrow AI applications while setting up robust frameworks to manage the broader, more powerful AI systems. How this balance will be achieved in concrete policy remains to be seen, potentially marking a transformative period for AI governance in the UK.