llama-index-integrations/llms/llama-index-llms-neutrino/README.md
To install the required packages, run:
```bash
%pip install llama-index-llms-neutrino
!pip install llama-index
```
You can create an API key at [platform.neutrinoapp.com](https://platform.neutrinoapp.com/). Once you have the API key, set it as an environment variable:
```python
import os

os.environ["NEUTRINO_API_KEY"] = "<your-neutrino-api-key>"
```
A router is a collection of LLMs that you can route queries to. You can create a router in the Neutrino dashboard or use the default router, which includes all supported models. You can treat a router as a single LLM.
Create an instance of the Neutrino model:
```python
from llama_index.llms.neutrino import Neutrino

llm = Neutrino(
    # api_key="<your-neutrino-api-key>",
    # router="<your-router-id>"  # Use 'default' for the default router
)
```
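Since each router behaves like a single LLM, you can keep several `Neutrino` instances pointing at different routers side by side. A minimal sketch, where `"my-custom-router"` is a hypothetical router ID created in your Neutrino dashboard:

```python
from llama_index.llms.neutrino import Neutrino

# The default router includes all supported models.
default_llm = Neutrino(router="default")

# "my-custom-router" is a hypothetical router ID from the Neutrino dashboard.
custom_llm = Neutrino(router="my-custom-router")
```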
To generate a text completion for a prompt, use the `complete` method:
```python
response = llm.complete("In short, a Neutrino is")
print(f"Optimal model: {response.raw['model']}")
print(response)
```
To send a chat message and receive a response, create a `ChatMessage` and use the `chat` method:
```python
from llama_index.core.llms import ChatMessage

message = ChatMessage(
    role="user",
    content="Explain the difference between statically typed and dynamically typed languages.",
)
resp = llm.chat([message])

print(f"Optimal model: {resp.raw['model']}")
print(resp)
```
To stream responses for a chat message, use the `stream_chat` method:
```python
message = ChatMessage(
    role="user", content="What is the approximate population of Mexico?"
)

resp = llm.stream_chat([message])
for i, r in enumerate(resp):
    if i == 0:
        print(f"Optimal model: {r.raw['model']}")
    print(r.delta, end="")
```