
The new AI economy: welcome to the era of knowledge-as-a-service

The internet and its commercial opportunities have evolved rapidly over the past few decades. 

In the early 2000s, knowledge producers faced several challenges with search-based navigation. While search engines (and, later, tools like Google's Knowledge Graph) provided quick answers, the benefits primarily favoured the search platforms, which profited from large audiences through advertising. Companies adapted by leveraging meta-information and links, giving rise to an industry focused on search engine optimisation (SEO). Tools like sitemaps strengthened this interdependent relationship between search providers and content producers.

Subsequently, cloud computing emerged as a more efficient and cost-effective technology, leading to the concept of infrastructure-as-a-service. Businesses that embraced cloud solutions reduced costs and created new software-as-a-service (SaaS) business models, resulting in entirely new business categories.

Then, about a decade ago, virtual assistants like Siri and early chatbots introduced conversational technology. While the rise of chat technology felt innovative, what sat under the hood had stayed essentially the same. Like the early iterations of Google, these tools were interfaces for accessing knowledge platforms, ultimately relying on conventional search to link to reputable sources.

Data disruptions 

Today's AI agents present synthesised knowledge as though they own or created it, without attributing the original authors. This prevents traffic from flowing back to the sources and sometimes obscures them entirely. Unsurprisingly, this shift has led to internet fragmentation, widening the gap between knowledge sources and user interaction.

Simultaneously, new challenges in the knowledge ecosystem have emerged. Answers alone do not equate to knowledge. LLMs frequently lack the depth needed for complex queries and contextual understanding, making some answers unreliable or irrelevant. Additionally, AI tools rely on historical data, leading to an LLM 'brain drain' in which new insights are missed if humans cease to create and share original thoughts. This effect is compounded by growing user scepticism regarding AI's reliability, jeopardising the credibility of community-driven knowledge systems.

Fundamentally, the entire AI ecosystem is at risk if trust is not established. We saw this lack of confidence reflected in our 2024 Developer Survey results, which found that only 43% of developers trust the accuracy of AI tools, while 31% remain sceptical. Respondents are also concerned about AI's potential to circulate misinformation (79%), missing or incorrect attribution for data sources (65%), and bias that does not represent a diversity of viewpoints (50%).

The rise of socially responsible AI  

Pressures from within the technology community and beyond are driving LLM developers to take attribution more seriously. This has created urgency around procuring training data of higher quality than what is publicly available, and it has led some providers to resort to sometimes unethical measures to bridge this gap.

As LLM providers focus more on enterprise customers, data governance becomes increasingly critical. Corporate customers are far less accepting of lapses in accuracy and expect accountability both for the information provided by models and for the security of their data.

Maintaining feedback loops with human knowledge creators is essential for ongoing knowledge generation and for strengthening trust in AI tools. LLM developers and organisations that treat the creation, curation, and validation of human knowledge as just as valuable as user engagement will catalyse an era of new internet business models. We are entering an economy where Knowledge-as-a-Service will power the future.

Humans + AI = Knowledge-as-a-Service

Knowledge-as-a-Service business models will rely on a community of creators generating relevant, domain-specific, high-quality content, and on the ethical use of data to benefit and reinvest in those communities.

Take, for example, Stack Overflow. Developers and LLM providers can access our trusted and validated technical content through our knowledge store. This store enables users to access existing knowledge on demand while facilitating the creation and validation of new knowledge. This Knowledge-as-a-Service model allows communities to continue sharing knowledge while guiding LLM providers and AI developers towards fair and responsible use of community content.

When enterprises combine this public knowledge store with their own corpus of data, they create an expanded store that delivers Knowledge-as-a-Service. This fosters a feedback loop that helps developers and technologists innovate and add value more efficiently. This business model promotes sustainable financial growth in a market where traditional monetisation methods, like advertising and software-as-a-service, face increasing economic pressure.
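
This combined-store idea can be sketched in a few lines of code. The example below is a purely hypothetical illustration, not any vendor's actual API: it retrieves answers from a licensed public knowledge base and a private enterprise corpus in one pass, and keeps attribution attached to every result so that credit (and traffic) can flow back to the original authors. All names, URLs, and data in it are invented.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # where the knowledge came from, e.g. "public-kb" or "internal-wiki"
    url: str     # attribution link back to the original page/author
    text: str

# Hypothetical stores: licensed public community knowledge plus a private enterprise corpus.
PUBLIC_KB = [
    Document("public-kb", "https://example.org/q/1",
             "Use connection pooling to reduce database latency under load."),
]
PRIVATE_CORPUS = [
    Document("internal-wiki", "https://intranet.example/runbooks/db",
             "Our payments service caps its database connection pool at 20 connections."),
]

def retrieve(query: str, stores: list[list[Document]], top_k: int = 3) -> list[Document]:
    """Naive keyword-overlap retrieval across every store, keeping source metadata intact."""
    terms = set(query.lower().split())
    scored = []
    for store in stores:
        for doc in store:
            overlap = len(terms & set(doc.text.lower().split()))
            if overlap:
                scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def answer_with_attribution(query: str) -> str:
    """Compose an answer in which every statement cites the store and URL it came from."""
    hits = retrieve(query, [PUBLIC_KB, PRIVATE_CORPUS])
    if not hits:
        return "No relevant knowledge found."
    return "\n".join(f"- {doc.text} (source: {doc.source}, {doc.url})" for doc in hits)

if __name__ == "__main__":
    print(answer_with_attribution("how should we size the database connection pool"))
```

A production system would use semantic search and properly licensed data feeds rather than keyword overlap, but the property that matters is the same: attribution travels with every answer, which is what keeps the feedback loop between AI tools and human knowledge creators intact.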

The success of the Knowledge-as-a-Service model will depend on multiple factors, including scalable content deployment, supporting third-party use cases, delivering ROI for enterprises, building enterprise networks, and sourcing relevant data. Longer-term sustainability will be determined by creating new data sources, protecting existing ones, and enabling fair access to knowledge and tools. Fostering mutually beneficial partnerships will be essential for sustainable data use.

An AI future built on trust 

As AI once again reshapes the internet, the businesses that reap the benefits will be those promoting a sustainable, open web: one that prioritises both community and commercial interests and supports ethical growth and transparency in the evolving knowledge landscape. Recent developments that expose the "thought process" behind LLM responses may illuminate further avenues for attribution and source disclosure. As these advances become commonplace and legal standards evolve, we will undoubtedly see greater industry and regulatory scrutiny.

Enterprises and developers using trusted Knowledge-as-a-Service platforms stand to gain numerous benefits. Access to well-curated, contextually appropriate data increases output accuracy. Licensed content protects against misrepresentation and misinformation, mitigating legal risk. Reliable content fosters user confidence and strengthens trust in the data.

We have a collective social responsibility in how we leverage AI, and the commercial and ethical advantages of doing so well are clear. New standards must be established in which vetted, trusted, and accurate data is the foundation for building and delivering technology solutions. Only through a vision like ours can we preserve a more open internet.
