ChannelLife UK - Industry insider news for technology resellers

EU finalises AI Code of Practice ahead of new regulatory era

Fri, 11th Jul 2025

The European Commission has received the final version of the General-Purpose AI Code of Practice following extensive involvement from more than 1,000 stakeholders across the technology sector, academia, civil society, and small and medium-sized enterprises. This Code aims to guide industry compliance with the forthcoming AI Act, which introduces specific requirements for general-purpose artificial intelligence from August 2025.

With the adoption of the EU AI Act, the regulatory landscape for artificial intelligence in Europe is undergoing a significant transformation. The General-Purpose AI Code of Practice is regarded as a critical milestone in preparing providers and developers for the robust standards mandated by the legislation. The text reportedly incorporates practical mechanisms to address emerging risks and foster more responsible AI deployment in both the private and public sectors.

Key contributors to the Code include model providers, security experts, rightsholders, and civil society groups, among others. One active participant, HackerOne, is known globally for its work in offensive security. Ilona Cohen, Chief Legal and Policy Officer at HackerOne, commented on the final draft: "HackerOne believes that securing AI systems and ensuring that they perform as intended is essential for establishing trust in their use and enabling their responsible deployment. We are pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies. We also support the commitment to AI model evaluation using a range of methodologies to address systemic risk, including security concerns and unintended outcomes."

These provisions underscore the priority placed on ongoing security assessments and transparency in AI operations, as well as robust channels for third-party disclosures and whistleblower protections. Industry insiders view these steps as essential for building confidence in artificial intelligence technologies as they become more deeply integrated across sectors such as healthcare, finance, and public administration.

Despite generally positive reception, the process by which the Code was drafted has elicited concerns regarding inclusivity and transparency. Randolph Barr, Chief Information Security Officer at Cequence Security, said: "The oversight from the EU Commission is generally regulatory and policy-focused – not commercial – which is encouraging. That said, it does raise an important question: why was input limited to just a handful of large companies, without broader opportunity for community feedback? From what I can tell, smaller companies were largely excluded from the drafting process, and the process lacked formal transparency or open consultation. That limits the ability to scrutinise the intent, content, and balance of the Code."

Barr further emphasised the innovative potential of smaller firms, noting, "Smaller, innovative companies often pioneer new safety techniques, ethical designs, and inclusive governance models. Excluding them risks shaping standards around incumbent risk appetites, prioritising defensibility over agility or fairness. It also potentially slows down innovation and sidelines more open, decentralised or community-driven approaches to AI. Moving forward, I'd really like to see public comment periods, inclusion of startups and academic researchers, and greater transparency around who is shaping these guidelines and what expertise they bring."

As the AI Act's implementation date approaches, debate is likely to continue around the balance between regulation, commercial interest, and community inclusion. Industry observers point out that for the Code's guidance to have maximum effectiveness and legitimacy, an open and transparent revision process will be critical. Many are calling for the introduction of public consultation mechanisms and formal avenues for feedback from a wider spectrum of the AI ecosystem.

For now, the European Commission and the drafters of the General-Purpose AI Code of Practice will watch closely as the industry prepares to put these voluntary standards into practice ahead of the binding requirements of the AI Act. The degree to which these measures foster trust, address safety risks, and reflect diverse expertise will likely shape both public sentiment and future regulatory developments in Europe and beyond.
