<Image alt={'Using Groq in LobeHub'} cover src={'/blog/assets17870709/1d840e27-fa74-4e71-b777-330bf41d6dff.webp'} />
Groq’s LPU Inference Engine has demonstrated outstanding performance in independent large language model (LLM) benchmarks, setting a new bar for speed and efficiency in AI inference. With GroqCloud integrated into LobeHub, you can easily harness Groq’s technology to accelerate LLM responses.
<Callout type={'info'}> In internal benchmarks, Groq’s LPU Inference Engine consistently achieved speeds of 300 tokens per second. According to ArtificialAnalysis.ai, Groq outperforms other providers in both throughput (241 tokens per second) and total time to receive 100 output tokens (0.8 seconds). </Callout>
This guide will walk you through how to use Groq in LobeHub:
<Steps>

### Step 1: Get Your GroqCloud API Key

First, visit the GroqCloud Console to obtain your API Key.
<Image alt={'Get GroqCloud API Key'} height={274} inStep src={'/blog/assets34400653/6942287e-fbb1-4a10-a1ce-caaa6663da1e.webp'} />
In the console, navigate to the API Keys section and create a new API Key.
<Image alt={'Save GroqCloud API Key'} height={274} inStep src={'/blog/assets34400653/eb57ca57-4f45-4409-91ce-9fa9c7c626d6.webp'} />
<Callout type={'warning'}> Make sure to save the key shown in the popup — it will only be displayed once. If you lose it, you’ll need to generate a new one. </Callout>
### Step 2: Configure Groq in LobeHub

In LobeHub, go to Settings -> AI Providers, find the configuration section for Groq, and paste the API Key you just obtained.
<Image alt={'Groq Provider Settings'} height={274} inStep src={'/blog/assets34400653/88948a3a-6681-4a8d-9734-a464e09e4957.webp'} /> </Steps>
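Before switching models in LobeHub, you may want to confirm your API Key works. The sketch below (a minimal example, not official Groq or LobeHub code) builds a request against Groq's OpenAI-compatible chat-completions endpoint; the model name `llama-3.1-8b-instant` is an assumption, so check the GroqCloud console for the models available on your account.

```python
import json
import os
import urllib.request

# Groq exposes an OpenAI-compatible API; this is its chat-completions endpoint.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request for Groq's OpenAI-compatible API."""
    payload = {
        # Assumed model name -- verify in the GroqCloud console.
        "model": "llama-3.1-8b-instant",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Only send a live request if a key is provided via the environment.
    key = os.environ.get("GROQ_API_KEY")
    if key:
        with urllib.request.urlopen(build_request(key, "Say hello in one word.")) as resp:
            body = json.loads(resp.read())
            print(body["choices"][0]["message"]["content"])
    else:
        print("Set GROQ_API_KEY to run the live check.")
```

A `200` response with a short completion confirms the key is valid; a `401` means the key was mistyped or revoked, in which case generate a new one in the GroqCloud console.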
Finally, in the assistant’s model selection menu, choose a model supported by Groq and start experiencing Groq’s speed in LobeHub.
<Video alt={'Select Groq Model'} src="/blog/assets28616219/b6b8226b-183f-4249-8255-663a5e9f5af4.mp4" />