Using InternLM in LobeHub


<Image cover src={'/blog/assetsf069368b9162f58247318dde850c0807.webp'} />

InternLM is a large-scale pre-trained language model jointly developed by Shanghai AI Laboratory and Intern Studio. Designed for natural language processing tasks, InternLM excels at understanding and generating human language, offering powerful semantic comprehension and text generation capabilities.

This guide will walk you through how to use InternLM within LobeHub.

<Steps>

### Step 1: Obtain Your InternLM API Key

- Register and log in to the InternLM API Portal
- Create a new API token
- Save the token from the pop-up window

<Image alt={'Save API Token'} inStep src={'/blog/assets61324ea13398c8920f798b97ac19d58f.webp'} />

<Callout type={'warning'}> Make sure to save the API token shown in the pop-up window. It will only be displayed once. If you lose it, you’ll need to generate a new one. </Callout>
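Since the token cannot be viewed again later, it helps to store it somewhere your tools can read it, such as an environment variable, rather than pasting it into scripts. A minimal sketch, assuming a hypothetical variable name `INTERNLM_API_KEY`:

```python
import os

# Hypothetical variable name; use whatever matches your shell profile.
token = os.environ.get("INTERNLM_API_KEY")

if token is None:
    # Fall back to a placeholder so the sketch runs; in practice, fail loudly instead.
    token = "sk-placeholder"

# Tokens are opaque strings; a quick sanity check catches an empty paste.
assert token.strip(), "API token must not be empty"
print(f"Loaded token of length {len(token)}")
```

The same value is what you will paste into LobeHub's API Key field in the next step.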

### Step 2: Configure InternLM in LobeHub

- Go to the Settings page in LobeHub
- Under AI Providers, locate the configuration section for InternLM

<Image alt={'Enter API Key'} inStep src={'/blog/assets71b5cfd165bc907f437bf807048a3e67.webp'} />

- Paste your API Key into the input field
- Choose an InternLM model for your AI assistant to start chatting

<Image alt={'Select InternLM model and start chatting'} inStep src={'/blog/assets5205b6dd0f80b8ba02c297fcdfc1aecb.webp'} />

<Callout type={'warning'}>
  Please note that usage may incur charges depending on the API provider’s pricing policy. Refer to InternLM’s official documentation for details.
</Callout>

</Steps>

You’re all set! You can now start using InternLM-powered models in LobeHub for conversations and interactions.
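Under the hood, providers like InternLM are typically reached through an OpenAI-compatible chat completions API. The sketch below builds such a request without sending it; the endpoint URL and model id are assumptions for illustration, so check InternLM's official documentation for the actual values:

```python
import json
import os
import urllib.request

# Assumed endpoint and model id -- verify both against InternLM's docs.
API_URL = "https://chat.intern-ai.org.cn/api/v1/chat/completions"
MODEL = "internlm2.5-latest"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (not sent here)."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello, InternLM!", os.environ.get("INTERNLM_API_KEY", "sk-demo"))
payload = json.loads(req.data)
print(payload["model"], len(payload["messages"]))
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) returns a JSON response whose reply text lives under `choices[0].message.content` in the OpenAI-compatible format.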