# Getting Started
{: .no_toc }

{{ page.description }}
{: .fs-6 .fw-300 }

## Table of contents
{: .no_toc .text-delta }

1. TOC
{:toc}
After reading this guide, you will know:

*   How to install RubyLLM
*   How to configure API keys for the providers you use
*   How to chat with AI models
*   How to generate images
*   How to create embeddings
## Installation

Add RubyLLM to your Gemfile:
```bash
bundle add ruby_llm
```
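`bundle add` appends the gem to your Gemfile and installs it in one step. If you prefer to edit the Gemfile yourself, the equivalent entry looks like this (run `bundle install` afterwards):

```ruby
# Gemfile
gem 'ruby_llm'
```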
For Rails applications, you can use the generator to set up database-backed conversations:
```bash
bin/rails generate ruby_llm:install
```
This creates `Chat` and `Message` models with ActiveRecord persistence. Your conversations are automatically saved to the database.
After running the install generator, you can optionally add a ready-to-use chat interface:
```bash
bin/rails generate ruby_llm:chat_ui
```
This generates the views, controllers, and routes for a basic chat interface.
Then visit `http://localhost:3000/chats` to start chatting! See the [Rails Integration Guide]({% link _advanced/rails.md %}) for full details.
## Configuration

RubyLLM needs API keys for the AI providers you want to use. Configure them once, typically when your application starts.
```ruby
# config/initializers/ruby_llm.rb (in Rails) or at the start of your script
require 'ruby_llm'

RubyLLM.configure do |config|
  # Add keys ONLY for the providers you intend to use.
  # Using environment variables is highly recommended.
  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
  # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
end
```
You only need to configure keys for the providers you actually plan to use. See the [Configuration Guide]({% link _getting_started/configuration.md %}) for all options, including setting defaults and connecting to custom endpoints.
{: .note }
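The initializer reads keys with `ENV.fetch('…', nil)` rather than a bare `ENV.fetch`. The difference matters at boot: without a default, a missing variable raises `KeyError` and crashes startup, while the `nil` default lets the app start with that provider simply unconfigured. A quick plain-Ruby illustration (`DEMO_KEY` is a made-up variable name):

```ruby
ENV.delete('DEMO_KEY') # ensure the variable is unset

# With a default, a missing key returns the default instead of raising
value = ENV.fetch('DEMO_KEY', nil)
puts value.inspect
# => nil

# Without a default, the same lookup raises KeyError
begin
  ENV.fetch('DEMO_KEY')
rescue KeyError => e
  puts "Raised: #{e.class}"
end
# => "Raised: KeyError"
```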
## Chatting with AI Models

Interact with language models using `RubyLLM.chat`.
```ruby
# Create a chat instance (uses the configured default model)
chat = RubyLLM.chat

# Ask a question
response = chat.ask "What is Ruby on Rails?"

# The response is a RubyLLM::Message object
puts response.content
# => "Ruby on Rails, often shortened to Rails, is a server-side web application..."
```
RubyLLM handles the conversation history automatically. See the [Chatting with AI Models Guide]({% link _core_features/chat.md %}) for more details.
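Under the hood, chat-style APIs keep context by resending the accumulated messages with every request. The sketch below illustrates that idea with a plain array of hashes; it is a conceptual model, not RubyLLM's internal representation:

```ruby
# Each turn appends to the running transcript; the provider receives
# the full transcript on every request, so follow-ups have context.
history = []
history << { role: :user,      content: "What is Ruby on Rails?" }
history << { role: :assistant, content: "Rails is a server-side web framework..." }
history << { role: :user,      content: "Who created it?" } # resolvable only with history

puts history.length
# => 3
```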
## Generating Images

Generate images using models like DALL-E 3 via `RubyLLM.paint`.
```ruby
# Generate an image (uses the default image model)
image = RubyLLM.paint("A photorealistic red panda coding Ruby")

# Access the image URL (or Base64 data depending on provider)
if image.url
  puts image.url
  # => "https://oaidalleapiprodscus.blob.core.windows.net/..."
else
  puts "Image data received (Base64)."
end

# Save the image locally
image.save("red_panda.png")
```
Learn more in the [Image Generation Guide]({% link _core_features/image-generation.md %}).
## Creating Embeddings

Create numerical vector representations of text using `RubyLLM.embed`.
```ruby
# Create an embedding (uses the default embedding model)
embedding = RubyLLM.embed("Ruby is optimized for programmer happiness.")

# Access the vector (an array of floats)
vector = embedding.vectors
puts "Vector dimension: #{vector.length}" # e.g., 1536

# Access metadata
puts "Model used: #{embedding.model}"
```
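Embedding vectors are typically compared with cosine similarity: 1.0 means the vectors point in the same direction, 0.0 means they are unrelated. A minimal plain-Ruby sketch, using short made-up vectors rather than real 1536-dimension model output:

```ruby
# Cosine similarity: dot product divided by the product of magnitudes
def cosine_similarity(a, b)
  dot    = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  dot / (norm_a * norm_b)
end

v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 1.0, 0.0]
puts cosine_similarity(v1, v2).round(3)
# => 0.5
```

In practice you would compute this between `embedding.vectors` values for two texts to rank them by semantic closeness.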
Explore further in the [Embeddings Guide]({% link _core_features/embeddings.md %}).
## What's Next?

You've covered the basics! Now you're ready to explore RubyLLM's features in more detail:

*   [Chatting with AI Models]({% link _core_features/chat.md %})
*   [Image Generation]({% link _core_features/image-generation.md %})
*   [Embeddings]({% link _core_features/embeddings.md %})
*   [Rails Integration]({% link _advanced/rails.md %})
*   [Configuration]({% link _getting_started/configuration.md %})