
Snowflake and NVIDIA Power Customized AI Applications for Customers and Partners

Bringing together the industry’s leading AI-powered applications, models, and hardware so customers can deliver enterprise AI across their businesses with ease, efficiency, and trust

Snowflake (NYSE: SNOW), the AI Data Cloud company, today announced at Snowflake Summit 2024 a new collaboration with NVIDIA that customers and partners can harness to build customized AI data applications in Snowflake, powered by NVIDIA AI.

With this latest collaboration, Snowflake has adopted NVIDIA AI Enterprise software to integrate NeMo Retriever microservices into Snowflake Cortex AI, Snowflake’s fully managed large language model (LLM) and vector search service. This will enable organizations to seamlessly connect custom models to diverse business data and deliver highly accurate responses. In addition, Snowflake Arctic, the most open, enterprise-grade LLM, is now fully supported with NVIDIA TensorRT-LLM software, providing users with highly optimized performance. Arctic is also now available as an NVIDIA NIM inference microservice, allowing more developers to access Arctic’s efficient intelligence.
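For illustration, below is a minimal sketch of what invoking a model through Cortex AI can look like from Python. It assumes the snowflake-connector-python package, a role with access to the SNOWFLAKE.CORTEX.COMPLETE SQL function, and placeholder connection values; the prompt and credentials are illustrative only and are not part of this announcement.

```python
# Minimal sketch: calling a Cortex AI LLM function from Python.
# Assumes snowflake-connector-python is installed and the role can use
# SNOWFLAKE.CORTEX.COMPLETE; account/user/password/warehouse are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",  # placeholder
    user="<user>",                   # placeholder
    password="<password>",           # placeholder
    warehouse="<warehouse>",         # placeholder
)

prompt = "Summarize last quarter's billing support tickets in three bullets."

cur = conn.cursor()
# COMPLETE runs the chosen model (here, Snowflake Arctic) server-side,
# next to the governed data, and returns the generated text.
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', %s)",
    (prompt,),
)
print(cur.fetchone()[0])
cur.close()
conn.close()
```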

As enterprises look for ways to further unlock the power of AI across their teams, there’s an increasing need to apply data to drive customization. Through Snowflake’s collaboration with NVIDIA, organizations can rapidly create bespoke, use-case specific AI solutions, enabling businesses across industries to realize the potential of enterprise AI.

“Pairing NVIDIA’s full stack accelerated computing and software with Snowflake’s state-of-the-art AI capabilities in Cortex AI is game-changing,” said Sridhar Ramaswamy, CEO, Snowflake. “Together, we are unlocking a new era of AI where customers from every industry and every skill level can build custom AI applications on their enterprise data with ease, efficiency, and trust.”

“Data is the essential raw material of the AI industrial revolution,” said Jensen Huang, founder and CEO, NVIDIA. “Together, NVIDIA and Snowflake will help enterprises refine their proprietary business data and transform it into valuable generative AI.”

Snowflake Cortex AI + NVIDIA AI Enterprise Software

Snowflake and NVIDIA are collaborating to integrate key technologies of the NVIDIA AI Enterprise software platform – such as NeMo Retriever – into Cortex AI, so business users can efficiently build and leverage bespoke AI-powered applications that maximize their AI investments.

NVIDIA AI Enterprise software capabilities to be offered in Cortex AI include:

  • NVIDIA NeMo Retriever: Provides information retrieval with high accuracy and powerful performance for enterprises building retrieval-augmented generation-based AI applications within Cortex AI.
  • NVIDIA Triton Inference Server: Provides the ability to deploy, run, and scale AI inference for any application on any platform, as illustrated in the client-side sketch following this list.
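
To make the Triton Inference Server item concrete, here is a hedged client-side sketch using the tritonclient package over HTTP. The server URL, model name ("my_model"), and tensor names (INPUT0/OUTPUT0) are placeholders that depend on how a given model repository is configured, not details from this announcement.

```python
# Hedged sketch: sending an inference request to a model hosted by
# NVIDIA Triton Inference Server over HTTP (pip install tritonclient[http]).
import numpy as np
import tritonclient.http as httpclient

# Placeholder endpoint for a running Triton server.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build one request tensor matching the model's declared input signature
# (names, shapes, and dtypes come from the model repository configuration).
batch = np.random.rand(1, 4).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

# Triton routes the request to the named model and returns the declared outputs.
response = client.infer(model_name="my_model", inputs=[infer_input])
print(response.as_numpy("OUTPUT0"))
```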

In addition, NVIDIA NIM inference microservices – a set of pre-built AI containers and part of NVIDIA AI Enterprise – can be deployed as a Snowflake Native App powered by Snowpark Container Services, enabling organizations to easily deploy a series of foundation models directly within Snowflake.

Quantiphi, an AI-first digital engineering company and an ‘Elite’ tier partner with both Snowflake and NVIDIA, is one of the many AI providers building Snowflake Native Apps using Snowpark Container Services. These apps run within a customer’s Snowflake account to help ensure data remains protected while delivering faster time to value. Quantiphi’s Native Apps, baioniq™ (a generative AI platform for boosting knowledge worker productivity) and Dociphi (an AI-led intelligent document processing platform for the banking, financial services, and insurance industries), target specific business personas to accelerate their industry use cases and day-to-day operations. Both Dociphi and baioniq were developed using the NVIDIA NeMo framework and will be available on Snowflake Marketplace, so users can deploy them without leaving their Snowflake environment.

Expanded Support for Snowflake Arctic

The state-of-the-art Snowflake Arctic LLM, launched in April 2024 and trained on NVIDIA H100 Tensor Core GPUs, is available as an NVIDIA NIM so users can get started with Arctic in seconds. The Arctic NIM hosted by NVIDIA is live on the NVIDIA API catalog for developer access using free credits, and will be offered as a downloadable NIM, giving users even more choice to deploy the most open enterprise LLM available on their preferred infrastructure.
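
As a hedged illustration of that developer access, the sketch below calls the NVIDIA-hosted Arctic NIM through an OpenAI-compatible client. It assumes the openai Python package and an API key from the NVIDIA API catalog; the base URL and model identifier ("snowflake/arctic") are assumptions that should be checked against the catalog's current listing.

```python
# Hedged sketch: querying the NVIDIA-hosted Snowflake Arctic NIM via its
# OpenAI-compatible API (pip install openai).
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed API catalog endpoint
    api_key="<NVIDIA_API_KEY>",                      # placeholder credential
)

completion = client.chat.completions.create(
    model="snowflake/arctic",  # assumed catalog model id for Snowflake Arctic
    messages=[{"role": "user", "content": "Write a one-line SQL tip for analysts."}],
    temperature=0.2,
    max_tokens=128,
)
print(completion.choices[0].message.content)
```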

Earlier this year, Snowflake and NVIDIA announced an expansion of their initial collaboration to deliver a single, unified AI infrastructure and compute platform in the AI Data Cloud. Today’s announcements represent key advancements in Snowflake’s shared mission with NVIDIA to help customers succeed on their AI journeys.

Learn More:

  • Tune into the Snowflake Summit 2024 Keynote livestream to hear about the latest in AI, apps, and data collaboration, and check out Snowflake Dev Day on June 6, 2024 to see the latest innovations in action.
  • Dig into how the world-renowned Snowflake AI Research team trained Snowflake Arctic in this blog.
  • See how organizations are bringing generative AI and LLMs to their enterprise data in this video.
  • Stay on top of the latest news and announcements from Snowflake on LinkedIn and Twitter / X.
