<Image alt={'Using Nvidia NIM in LobeHub'} cover src={'/blog/assetsd8927338578c426b833e5cb57e0b57ec.webp'} />
NVIDIA NIM is part of NVIDIA AI Enterprise, designed to accelerate the deployment of generative AI applications through microservices. It offers a set of easy-to-use inference microservices that can run on any cloud, data center, or workstation, with support for NVIDIA GPU acceleration.
This guide will walk you through how to integrate and use AI models provided by Nvidia NIM in LobeHub:
<Steps>
  ### Step 1: Obtain Your Nvidia NIM API Key

  Visit the Nvidia NIM Models page and select the model you want to use, such as Deepseek-R1.

  <Image alt={'Select a model'} inStep src={'/blog/assets74000cc1bc59ee4a15e8f0304afbf866.webp'} />
  On the model page, click Build with this NIM, then click the Generate API Key button.

  <Image alt={'Get API Key'} inStep src={'/blog/assetsaf57d31364a41634b10c243ed9b1f8f8.webp'} />
<Callout type={'warning'}> Make sure to store your API key securely, as it will only be shown once. If you lose it, you’ll need to generate a new one. </Callout>
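Before entering the key in LobeHub, you may want to sanity-check it against Nvidia NIM's OpenAI-compatible API. The sketch below only *constructs* the request so you can inspect it; the endpoint URL and the `deepseek-ai/deepseek-r1` model id are assumptions based on NIM's hosted API, so check the model's page for the exact values that apply to you.

```python
# Minimal sketch: build (but do not send) a chat-completion request for
# Nvidia NIM's OpenAI-compatible API. Assumes the hosted endpoint
# https://integrate.api.nvidia.com/v1 and the "deepseek-ai/deepseek-r1"
# model id; verify both on the NIM model page.
import json
import urllib.request

NIM_API_KEY = "nvapi-..."  # placeholder: paste the key you generated


def build_chat_request(prompt: str) -> urllib.request.Request:
    """Construct a POST request carrying one user message."""
    payload = {
        "model": "deepseek-ai/deepseek-r1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {NIM_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("Hello")
print(req.full_url)
# With a valid key, send it via urllib.request.urlopen(req) and inspect
# the JSON response; an invalid key returns an HTTP 401 error.
```

This is the same endpoint and key that LobeHub uses under the hood, so a successful response here means the key is ready to paste into the provider settings.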
  ### Step 2: Configure Nvidia NIM in LobeHub

  Open the App Settings section in LobeHub and navigate to AI Service Providers. Select the Nvidia NIM option in the list of providers, then enter the API key you just generated.

  <Image alt={'Enter Nvidia NIM API Key'} inStep src={'/blog/assetscad58c557fda04b9379000cbbaa4c493.webp'} />
  ### Step 3: Select a Nvidia NIM Model

  In your conversation, choose a Nvidia NIM model from the model list to start chatting.

  <Image alt={'Select Nvidia NIM Model'} inStep src={'/blog/assets45d90e73abffd7ae7d85808f81827bb9.webp'} />
<Callout type={'warning'}> You may incur charges from the API service provider during usage. Please refer to Nvidia NIM’s pricing policy for details. </Callout> </Steps>
That’s it! You’re now ready to use Nvidia NIM-powered models in LobeHub for your conversations.