Ollama Streaming Example with LangChain Go

šŸ‘‹ Hello, Go enthusiasts and AI adventurers! Welcome to this exciting example that showcases how to use LangChain Go with Ollama for streaming AI-generated content. Let's dive in and see what this cool code does! šŸš€

What's This All About?

This example demonstrates how to:

  1. Set up an Ollama-based language model (LLM) šŸ¤–
  2. Create a conversation with system and user messages šŸ’¬
  3. Generate content using the LLM with real-time streaming 🌊

The Magic Explained

Here's what's happening in this nifty little program:

  1. We start by creating an Ollama LLM instance using the "mistral" model. Mistral is known for its efficiency and quality, so it's a good choice! šŸ‘

  2. We set up a conversation with two messages:

    • A system message that tells the AI to act as a "company branding design wizard" šŸ§™ā€ā™‚ļø
    • A user message asking for a company name suggestion for a Go-backed LLM tools producer šŸ¢
  3. The real magic happens when we call GenerateContent. We pass a streaming function that prints the AI's response in real time (see the sketch just after this list). It's like watching the AI think! 🤯
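
Here's a minimal sketch of the whole program, following the three steps above. It assumes langchaingo's llms and ollama packages (github.com/tmc/langchaingo); treat it as an illustration rather than the exact shipped file, which may differ in details like the prompt wording:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	// Step 1: create an Ollama-backed LLM using the "mistral" model.
	llm, err := ollama.New(ollama.WithModel("mistral"))
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: set up the conversation with a system and a user message.
	content := []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeSystem, "You are a company branding design wizard."),
		llms.TextParts(llms.ChatMessageTypeHuman, "What would be a good company name for a company that makes Go-backed LLM tools?"),
	}

	// Step 3: generate content, streaming each chunk to stdout as it arrives.
	ctx := context.Background()
	if _, err := llm.GenerateContent(ctx, content,
		llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
			fmt.Print(string(chunk))
			return nil // return nil to keep the stream going
		}),
	); err != nil {
		log.Fatal(err)
	}
	fmt.Println()
}
```

Note the WithStreamingFunc option: the callback receives raw byte chunks as the model produces them, and printing each chunk immediately is what creates the live, character-by-character effect described below.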

Running the Example

To run this example, make sure you have Ollama installed and running on your machine, and that the model has been pulled (for example, with ollama pull mistral). Then, simply execute the Go file:

```bash
go run ollama_stream_example.go
```

You'll see the AI's response appear on your screen character by character. It's mesmerizing! ✨

Why This is Cool

  • Real-time streaming: See the AI's thoughts as they form!
  • Local LLM: Ollama runs on your machine, giving you more control and privacy.
  • Go power: Harness the speed and simplicity of Go for AI applications.

So there you have it! A simple yet powerful example of streaming AI responses using LangChain Go and Ollama. Happy coding, and may your Go programs be ever intelligent! šŸŽ‰šŸ‘©ā€šŸ’»šŸ‘Øā€šŸ’»