Cloudera unveils AI service with NVIDIA for 36x faster LLMs
Cloudera has launched Cloudera AI Inference, a new AI inference service powered by NVIDIA NIM microservices, to enhance the development and deployment of AI across various domains.
Cloudera AI Inference aims to improve Large Language Model (LLM) inference speeds by up to 36 times, utilising NVIDIA accelerated computing and microservices. The service is intended to enhance performance, data security, and scalability for enterprises.
Cloudera AI Inference streamlines the deployment and management of large-scale AI models, helping enterprises move generative AI (GenAI) from pilot projects to full production. This capability matters for enterprises whose AI adoption is held back by compliance and governance concerns.
Industry analyst Sanjeev Mohan commented on these challenges, noting, "Enterprises are eager to invest in GenAI, but it requires not only scalable data but also secure, compliant, and well-governed data. Productionizing AI at scale privately introduces complexity that DIY approaches struggle to address. Cloudera AI Inference bridges this gap by integrating advanced data management with NVIDIA's AI expertise, unlocking data's full potential while safeguarding it."
The service keeps sensitive data under enterprise control throughout development and deployment, preventing it from leaking to non-private, vendor-hosted AI model services. This protection is increasingly important as organisations focus on secure and private data management.
Cloudera's Chief Product Officer, Dipto Chakravarty, expressed enthusiasm about the collaboration with NVIDIA, stating, "We are excited to collaborate with NVIDIA to bring Cloudera AI Inference to market, providing a single AI/ML platform that supports nearly all models and use cases so enterprises can both create powerful AI apps with our software and then run those performant AI apps in Cloudera as well."
The integration with NVIDIA technology allows developers to build and deploy enterprise-grade LLMs with significantly faster performance. This seamless experience eliminates the need for command-line interfaces and separate monitoring systems, offering a unified platform for managing both LLM deployments and traditional models.
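Cloudera has not published client code for the service, but NVIDIA NIM microservices expose an OpenAI-compatible REST API, so a deployed model endpoint could plausibly be queried along these lines. This is a minimal sketch only: the base URL, API token, and model identifier below are hypothetical placeholders, not actual Cloudera AI Inference values.

```python
# Minimal sketch: querying a NIM-backed LLM endpoint through its
# OpenAI-compatible API. The base_url, api_key, and model name are
# hypothetical placeholders, not documented Cloudera AI Inference values.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.internal/v1",  # hypothetical endpoint
    api_key="YOUR_SERVICE_ACCOUNT_TOKEN",              # hypothetical token
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM-style model identifier
    messages=[
        {"role": "user", "content": "Summarise this quarter's sales notes."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because NIM follows the OpenAI API convention, existing tooling built against that interface should, in principle, work against a privately hosted endpoint with only a change of base URL and credentials.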
Kari Briski, Vice President of AI Software, Models and Services at NVIDIA, commented on the integration, saying, "Enterprises today need to seamlessly integrate generative AI with their existing data infrastructure to drive business outcomes. By incorporating NVIDIA NIM microservices into Cloudera's AI Inference platform, we're empowering developers to easily create trustworthy generative AI applications while fostering a self-sustaining AI data flywheel."
Key features of Cloudera AI Inference include NVIDIA NIM microservices for optimising open-source LLMs, hybrid cloud deployment for enhanced security and regulatory compliance, and auto-scaling with real-time performance tracking. The service also offers enterprise-grade security through service accounts and access controls, along with features for risk-managed deployment.
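Cloudera has not detailed its auto-scaling policy, but threshold-based scaling on request load is a common pattern for inference services. The toy sketch below only illustrates the general idea; the metric, thresholds, and replica bounds are invented for illustration and are not Cloudera's actual logic.

```python
# Toy illustration of threshold-based autoscaling for an inference service.
# The target rate and replica bounds are invented values; Cloudera's
# actual scaling policy is not public.
def desired_replicas(current: int, requests_per_replica: float,
                     target_rps: float = 20.0,
                     min_replicas: int = 1, max_replicas: int = 8) -> int:
    """Scale so each replica serves roughly target_rps requests/second."""
    total_rps = requests_per_replica * current
    needed = max(1, round(total_rps / target_rps))
    return max(min_replicas, min(max_replicas, needed))

# Example: 3 replicas each seeing 35 req/s -> scale out to cover 105 req/s.
print(desired_replicas(current=3, requests_per_replica=35.0))  # -> 5
```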
The launch of Cloudera AI Inference coincides with the ongoing digital transformation efforts across industries, marking a critical juncture for enterprises to integrate AI efficiently and securely into their operations.