# Using Infini-AI in LobeHub

Infini-AI is a large model service platform optimized for diverse chip architectures, offering a high-performance and unified AGI infrastructure solution.

This guide will walk you through the steps to quickly integrate Infini-AI's capabilities into LobeHub.

<Callout type="info">
  Infini-AI enforces a whitelist mechanism for image URLs. Currently, image links from services such as Alibaba Cloud OSS and AWS S3 are supported.
  If you encounter a 400 error when using image-based conversations, try [uploading images using base64 encoding](/docs/self-hosting/environment-variables/s3#llm-vision-image-use-base-64).
</Callout>

<Steps>

### Step 1: Obtain an Infini-AI API Key

- Log in to the Large Model Service Platform
- In the left-hand navigation menu, select "API KEY Management"
- On the newly opened page, click the "Create API KEY" button, enter a name, and click "Create"

### Step 2: Configure the Model Service in LobeHub

- Open LobeHub and go to the "Settings" page
- In the "Language Model" section, select "Infini-AI"
- Paste the API Key you obtained earlier
</Steps>
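The base64 workaround mentioned in the callout can be sketched as follows. This is a minimal illustration, assuming Infini-AI exposes an OpenAI-compatible chat API; the endpoint and model name below are placeholders, so check the platform console for the actual values available to your account.

```python
import base64
import os

# Placeholder values for illustration only -- verify the real base URL and
# model names in the Infini-AI console before use.
BASE_URL = "https://example-infini-ai-endpoint/v1"  # assumed OpenAI-compatible base URL
MODEL = "example-vision-model"                      # hypothetical model name


def build_headers(api_key: str) -> dict:
    """Standard bearer-token headers used by OpenAI-compatible services."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }


def build_image_message(image_bytes: bytes, prompt: str, mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat message embedding the image as a base64
    data URL, which sidesteps the image-URL whitelist described above."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }


if __name__ == "__main__":
    # Read the key created in Step 1 from the environment rather than
    # hard-coding it.
    api_key = os.environ.get("INFINI_API_KEY", "sk-placeholder")
    headers = build_headers(api_key)
    message = build_image_message(b"\x89PNG fake bytes", "Describe this image.")
    print(message["content"][1]["image_url"]["url"][:22])
```

LobeHub performs this encoding for you when the linked environment variable is enabled; the sketch only shows what the resulting request payload looks like.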