ChannelLife UK - Industry insider news for technology resellers

Software Improvement Group sets out AI governance guide

Mon, 27th Apr 2026

Software Improvement Group has published its AI Maturity Guide 2026, a handbook outlining 20 steps for senior leaders to manage AI use across their organisations.

The guide is aimed at board members, chief technology officers, chief information security officers, and governance, risk and compliance leaders. It follows the company's earlier AI Readiness Guide, shifting the focus from experimentation to formal oversight.

Many businesses have already adopted AI in at least one function, but few have put in place a consistent framework to govern it, according to Software Improvement Group. Leadership teams still struggle to identify where AI is being used, which systems are critical to operations, and whether the associated risks are under control.

The publication comes as companies face tighter scrutiny over AI governance and security. In material released alongside the guide, Software Improvement Group pointed to new regulations and a steady flow of cyber incidents involving AI-related systems and development practices.

Among the findings the company highlights are signs of weak control over AI use in large organisations. It says 20% of firms use AI coding tools against policy, creating what it describes as shadow AI risk.

It also says 72% of enterprise AI systems fall below industry standards. The material further claims that productivity gains from AI can fade in larger codebases, with up to 60% of those gains lost once software reaches about 100,000 lines of code, as AI tools struggle with more complex architecture.

Role-based steps

The guide breaks its recommendations down by seniority and function. For boards, the emphasis is on improving understanding of AI, setting direction, requiring transparency, and weighing trade-offs between speed, risk and value.

For governance, risk and compliance teams, the focus is on turning evolving regulation and standards into a practical system for internal AI oversight. Security leaders are urged to extend existing resilience and security practices to cover AI-assisted development and AI systems in production.

Technology and engineering leaders are given a separate set of actions for building and operating AI-enabled software in a measurable way. That section also covers governance of AI-assisted and agentic development.

The company defines a mature organisation as one that can identify its AI footprint, govern AI as part of its wider software portfolio, control risk, and measure value. That framing reflects a broader market shift as companies move away from isolated pilots and towards integrating AI into mainstream systems and processes.

Boardroom pressure

The guide also reflects a tension many executives face as they try to show returns from AI investments while avoiding operational, legal and security failures. Rather than treating AI as a separate workstream, the company argues that businesses need to manage software and AI together.

That approach is linked to its broader research on what it describes as a gap between executive ambition and operational reality. The handbook is intended to offer a more practical route for leadership teams that already accept the strategic importance of AI but are still developing internal controls.

Amsterdam-based Software Improvement Group focuses on software governance and portfolio analysis. Its platform analyses more than 400 billion lines of code across more than 30,000 systems and over 300 technologies, according to the company.

The company also notes its involvement in standards work, including co-developing ISO/IEC 5338, which it describes as a global standard for AI lifecycle management. That background is relevant as businesses seek clearer benchmarks for AI oversight and implementation.

Rob van der Veer, chief AI officer at Software Improvement Group, said the guide is aimed at executives responsible for making AI work in practice.

"This guide is written for those who lead in making AI work in practice. You might be the one pushing for faster innovation under pressure from competitors and shareholders. You might be the one accountable when things go wrong. In either case, AI maturity will not come from a single project, pilot, or purchase. It will come from a steady, deliberate shift in how you govern your software and AI as one portfolio," he said.