import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; import YouTubeShortEmbed from '@site/src/components/YouTubeShortEmbed'; import CLIExtensionInstructions from '@site/src/components/CLIExtensionInstructions'; import GooseDesktopInstaller from '@site/src/components/GooseDesktopInstaller';
<YouTubeShortEmbed videoUrl="https://www.youtube.com/embed/6aV8pinnUS8" />

This tutorial covers how to add the Git MCP Server as a goose extension to give LLMs live access to GitHub repos, enabling smart documentation search, code exploration, and accurate project insights.
:::tip Quick Install
<Tabs groupId="interface">
  <TabItem value="ui" label="goose Desktop" default>
  Launch the installer
  </TabItem>
  <TabItem value="cli" label="goose CLI">
  **Command**

  ```sh
  npx -y mcp-remote https://gitmcp.io/docs
  ```

  </TabItem>
</Tabs>
:::
In this example, goose uses GitMCP to pull real-time documentation from the openai/whisper GitHub repository, explore how the speech-to-text model works, and surface accurate setup instructions and command-line usage, all directly from the source.
```sh
goose session
```
Can you help me understand how the OpenAI Whisper speech-to-text model works?
Use the GitHub repository `openai/whisper` to find accurate information.
Please:
- Summarize what the model does and its key features
- Show an example of how to run it from the command line
- Explain what files or models are needed to get started
- Include any useful code snippets or setup instructions you find in the repo
Give me the response in a Markdown guide format.
:::note Desktop
I'll help you understand OpenAI Whisper by exploring their GitHub repository. Let me fetch the documentation and relevant information.
─── owner | repo ──────────────────────────
Now let me search for some specific setup and usage information:
─── owner | query ──────── repo ────────────
Let me search for more technical details and examples:
─── owner | query ──────── repo ────────────
Perfect! Now I have all the information I need from the OpenAI Whisper repository. Let me create a comprehensive Markdown guide for you.
## OpenAI Whisper Speech-to-Text Model Guide
### Overview

Whisper is a general-purpose speech recognition model developed by OpenAI. It's a powerful, multitasking neural network that can perform:

- Multilingual speech recognition
- Speech translation
- Language identification

**Key Features:**

- Trained on 680,000 hours of multilingual and multitask supervised data
- Robust to accents, background noise, and technical language
- Multiple model sizes (`tiny` through `large`, plus `turbo`) trading speed for accuracy
- Simple command-line interface and Python API
### Architecture

Whisper uses a Transformer sequence-to-sequence model that processes audio in 30-second chunks. It converts audio to log-Mel spectrograms and uses special tokens to specify tasks (transcribe, translate, language ID, etc.).
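The 30-second windowing described above can be sketched in plain Python. This is a minimal illustration of how a longer clip partitions into fixed windows, assuming Whisper's 16 kHz input rate; the `chunk_bounds` helper is this guide's own, not part of Whisper's internals.

```python
# Sketch: partitioning an audio stream into 30-second windows.
# Assumes 16 kHz sampling (Whisper's input rate); the helper name is
# illustrative, not Whisper's actual implementation.
SAMPLE_RATE = 16_000
CHUNK_SECONDS = 30
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS  # 480,000 samples per window

def chunk_bounds(n_samples: int):
    """Yield (start, end) sample indices for successive 30 s windows."""
    for start in range(0, n_samples, CHUNK_SAMPLES):
        yield start, min(start + CHUNK_SAMPLES, n_samples)

# A 70-second clip falls into three windows: 30 s + 30 s + 10 s.
print(list(chunk_bounds(70 * SAMPLE_RATE)))
# [(0, 480000), (480000, 960000), (960000, 1120000)]
```

In the real model, each window is converted to a log-Mel spectrogram before being fed to the Transformer; shorter final windows are padded to the full 30 seconds.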
### Command-Line Usage

#### Basic Transcription

```sh
pip install -U openai-whisper
whisper --help
whisper your_audio_file.mp3
```

On your first run, Whisper will automatically download the model weights:
- `tiny`: ~39 MB download
- `base`: ~142 MB download
- `small`: ~461 MB download
- `medium`: ~1.42 GB download
- `large`: ~2.87 GB download
- `turbo`: ~1.5 GB download

Models are cached locally, so subsequent runs are much faster.
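Beyond the basic invocation, the Whisper CLI accepts flags such as `--model`, `--language`, `--task`, and `--output_format`. A small sketch of assembling such a command (the `build_cmd` helper is illustrative; the flags themselves are real Whisper options):

```python
# Hypothetical helper: assemble a whisper CLI invocation as an argv list
# (e.g. for subprocess.run) without executing it.
def build_cmd(audio, model="turbo", language=None, task=None, output_format=None):
    cmd = ["whisper", audio, "--model", model]
    if language:
        cmd += ["--language", language]       # e.g. "en"
    if task:
        cmd += ["--task", task]               # "transcribe" or "translate"
    if output_format:
        cmd += ["--output_format", output_format]  # txt, srt, vtt, json, ...
    return cmd

print(" ".join(build_cmd("talk.mp3", model="medium", language="en", output_format="txt")))
# whisper talk.mp3 --model medium --language en --output_format txt
```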
#### Choosing a Model

- English-only audio: the `.en` models (`base.en`, `small.en`, etc.) often perform better
- Fastest transcription: `tiny` or `base` models
- Best accuracy: `medium` or `large` models
- Translation to English: `medium` or `large` (NOT `turbo`)

Whisper is released under the MIT License, making it free to use for both personal and commercial projects.
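The selection guidance above can be captured as a simple lookup. The mapping and the `recommend` helper are this guide's own shorthand, not part of Whisper:

```python
# Illustrative lookup mirroring the model-selection guidance above.
RECOMMENDED = {
    "english-only": ".en variants (e.g. base.en, small.en)",
    "fast": "tiny or base",
    "accurate": "medium or large",
    "translation": "medium or large (not turbo)",
}

def recommend(use_case: str) -> str:
    # Fall back to turbo, a reasonable speed/accuracy balance for transcription.
    return RECOMMENDED.get(use_case, "turbo (balanced default)")

print(recommend("translation"))
# medium or large (not turbo)
```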
:::