Using Nemo-Guardrails with LiteLLM Server


Call Bedrock, TogetherAI, Hugging Face, etc. through the LiteLLM server.

Using with Bedrock

Spin up the LiteLLM server with your AWS credentials:

```shell
docker run -e PORT=8000 -e AWS_ACCESS_KEY_ID=<your-aws-access-key> -e AWS_SECRET_ACCESS_KEY=<your-aws-secret-key> -p 8000:8000 ghcr.io/berriai/litellm:latest
```
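Once the container is running, any OpenAI-compatible client can talk to it. As a minimal sketch (the model name and prompt are only examples), this is the shape of the JSON body such a client sends to the server's `/chat/completions` endpoint:

```python
import json

# Minimal sketch: the OpenAI-style request body a client would send to the
# LiteLLM server's /chat/completions endpoint. The model name is an example.
payload = {
    "model": "anthropic.claude-v2",
    "messages": [{"role": "user", "content": "Hello! What can you do for me?"}],
}
body = json.dumps(payload)
```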

Install the client-side dependencies:

```shell
pip install nemoguardrails langchain
```
```python
from langchain.chat_models import ChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

# Point ChatOpenAI at the local LiteLLM server.
# The API key is not checked here, so any placeholder works.
llm = ChatOpenAI(
    model_name="anthropic.claude-v2",
    openai_api_base="http://0.0.0.0:8000",
    openai_api_key="my-fake-key",
)

config = RailsConfig.from_path("./config.yml")
app = LLMRails(config, llm=llm)

new_message = app.generate(messages=[{
    "role": "user",
    "content": "Hello! What can you do for me?"
}])
```
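`app.generate(...)` returns the bot's reply as a role/content dict. A sketch of inspecting it (the content text below is illustrative, not real model output):

```python
# Illustrative shape of the dict returned by app.generate(messages=...):
new_message = {"role": "assistant", "content": "Hello! I can help answer your questions."}
print(new_message["content"])
```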

Using with TogetherAI

  1. You can either set the key in the server environment:

```shell
docker run -e PORT=8000 -e TOGETHERAI_API_KEY=<your-together-ai-api-key> -p 8000:8000 ghcr.io/berriai/litellm:latest
```

  2. Or pass it in as the API key (`...openai_api_key="<your-together-ai-api-key>"`)
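Either way, a common pattern is to read the key from the environment and fall back to an explicit value. A small sketch (the fallback string is a placeholder, not a real key):

```python
import os

# Prefer the TOGETHERAI_API_KEY environment variable (option 1); otherwise
# fall back to a placeholder that would be passed as openai_api_key (option 2).
api_key = os.environ.get("TOGETHERAI_API_KEY", "<your-together-ai-api-key>")
```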

```python
from langchain.chat_models import ChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

# Point ChatOpenAI at the local LiteLLM server and pass your TogetherAI key.
llm = ChatOpenAI(
    model_name="together_ai/togethercomputer/CodeLlama-13b-Instruct",
    openai_api_base="http://0.0.0.0:8000",
    openai_api_key="my-together-ai-api-key",
)

config = RailsConfig.from_path("./config.yml")
app = LLMRails(config, llm=llm)

new_message = app.generate(messages=[{
    "role": "user",
    "content": "Hello! What can you do for me?"
}])
```

CONFIG.YML

Save this example config.yml in your current directory:

```yaml
instructions:
  - type: general
    content: |
      Below is a conversation between a bot and a user about the recent job reports.
      The bot is factual and concise. If the bot does not know the answer to a
      question, it truthfully says it does not know.

sample_conversation: |
  user "Hello there!"
    express greeting
  bot express greeting
    "Hello! How can I assist you today?"
  user "What can you do for me?"
    ask about capabilities
  bot respond about capabilities
    "I am an AI assistant that helps answer mathematical questions. My core mathematical skills are powered by wolfram alpha."
  user "What's 2+2?"
    ask math question
  bot responds to math question
    "2+2 is equal to 4."

models:
  - type: main
    engine: openai
    model: claude-instant-1
```
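If you prefer to create the file from code (for example inside a notebook), here is a small sketch that writes a minimal config.yml so that `RailsConfig.from_path("./config.yml")` can load it; only the models entry from the example above is included:

```python
import textwrap

# Write a minimal config.yml containing just the models section from the
# example above, so RailsConfig.from_path("./config.yml") can find it.
config_yml = textwrap.dedent("""\
    models:
      - type: main
        engine: openai
        model: claude-instant-1
""")
with open("config.yml", "w") as f:
    f.write(config_yml)
```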