Using GitHub Models in LobeHub

<Image cover src={'/blog/assetsf117203c39294f45930785d85773c83e.webp'} />

GitHub Models is a new feature recently launched by GitHub, designed to provide developers with a free platform to access and experiment with various AI models. GitHub Models offers an interactive sandbox environment where users can test different model parameters and prompts to observe the model's responses. The platform supports a range of advanced language models, including OpenAI's GPT-4o, Meta's Llama 3.1, and Mistral's Large 2, covering a wide spectrum of use cases from large language models to task-specific models.

This guide will walk you through how to use GitHub Models within LobeHub.

GitHub Models Rate Limits

Currently, usage of the Playground and free API is subject to limits on requests per minute, daily requests, tokens per request, and concurrent requests. If you hit a rate limit, you’ll need to wait for it to reset before making additional requests. Rate limits vary depending on the model type (low, high, or embedding models). For details on model types, refer to the GitHub Marketplace.

<Image alt={'GitHub Models Rate Limits'} inStep src={'/blog/assets50607dece1bbffe80fdcbe76324ff9b6.webp'} />

<Callout type="note"> These limits are subject to change. For the most up-to-date information, please refer to the [official GitHub documentation](https://docs.github.com/en/github-models/prototyping-with-ai-models#rate-limits). </Callout>
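Because these limits reset over time, API clients usually retry rate-limited requests with a short wait. The sketch below shows one way to do this in Python using only the standard library; honoring a `Retry-After` header and falling back to exponential backoff (1s, 2s, 4s, ...) are standard HTTP 429 conventions, not behavior documented specifically for GitHub Models.

```python
import time
import urllib.error
import urllib.request


def backoff_delay(attempt, retry_after=None):
    """Seconds to wait before retry `attempt` (0-based): honor the
    server's Retry-After header when present, else back off 1, 2, 4, ..."""
    return int(retry_after) if retry_after else 2 ** attempt


def urlopen_with_retry(req, max_retries=4):
    """Open a urllib Request, retrying when the server answers HTTP 429."""
    for attempt in range(max_retries):
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # not a rate limit; surface the error
            time.sleep(backoff_delay(attempt, err.headers.get("Retry-After")))
    return urllib.request.urlopen(req)  # final attempt, errors propagate
```

LobeHub handles this for you; the sketch is only relevant if you call the free API directly.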

GitHub Models Configuration Guide

<Steps> ### Step 1: Obtain a GitHub Access Token

  • Log in to GitHub, open Settings > Developer settings > Personal access tokens, and generate a new access token.

<Image alt={'Create Access Token'} inStep src={'/blog/assetsc376d2e9e97f9ea9d788589f0a9e23d6.webp'} />

  • Copy and securely save the generated token from the result page.

<Image alt={'Save Access Token'} inStep src={'/blog/assets9880145be3e52b8f9dcd8343cd34a6ca.webp'} />

<Callout type={"warning"}>
  - During the GitHub Models testing phase, you must apply to join the waitlist to gain access.
  - Be sure to store your access token securely, as it will only be shown once. If you lose it, you’ll need to generate a new one.
</Callout>
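If you want to sanity-check the token before pasting it into LobeHub, you can send a chat completion request yourself. The sketch below builds an OpenAI-style request authorized with the token; the endpoint URL and model id reflect the GitHub Models preview and are assumptions here, so confirm the current values on the model's GitHub Marketplace page.

```python
import json
import urllib.request

# Assumed preview endpoint -- verify against the GitHub Models docs.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_request(token, prompt, model="gpt-4o-mini"):
    """Build an OpenAI-compatible chat completion request that uses a
    GitHub personal access token as the bearer credential."""
    body = json.dumps({
        "model": model,  # illustrative model id
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Send the request with `urllib.request.urlopen(build_request(token, "Say hello."))`; a successful chat completion confirms the token has access to GitHub Models.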

### Step 2: Configure GitHub Models in LobeHub

  • Open the Settings panel in LobeHub.
  • Under AI Providers, locate the GitHub configuration section.

<Image alt={'Enter Access Token'} inStep src={'/blog/assetse0d53ba2bfb6ba5bf33f2b8a547f4e41.webp'} />

  • Paste the access token you obtained earlier.
  • Choose a GitHub model for your AI assistant to start chatting.

<Image alt={'Select GitHub Model and Start Chatting'} inStep src={'/blog/assetsb6959f725c38f86053e4b07c9188d825.webp'} /> </Steps>

And that’s it! You’re now ready to start using GitHub-provided models in LobeHub for conversations and interactions.