docs/examples/llm/pipeshift.ipynb
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-llms-pipeshift
%pip install llama-index
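Before running the examples below, make your Pipeshift API key available to the client. A minimal sketch (the `PIPESHIFT_API_KEY` variable name matches what the examples in this notebook expect; replace the placeholder with your real key):

```python
import os

# Set the key for this session if it isn't already in the environment;
# the Pipeshift client reads PIPESHIFT_API_KEY automatically.
os.environ.setdefault("PIPESHIFT_API_KEY", "your_api_key")

# Fail early with a clear message if the key is missing.
assert os.environ.get("PIPESHIFT_API_KEY"), "PIPESHIFT_API_KEY is not set"
```

Alternatively, you can pass the key directly via the `api_key` argument when constructing `Pipeshift`, as shown in the first example.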
Head over to the models section of the Pipeshift dashboard to see the list of available models.
Complete with a prompt
from llama_index.llms.pipeshift import Pipeshift
# import os
# os.environ["PIPESHIFT_API_KEY"] = "your_api_key"
llm = Pipeshift(
model="meta-llama/Meta-Llama-3.1-8B-Instruct",
# api_key="YOUR_API_KEY" # alternative way to pass api_key if not specified in environment variable
)
res = llm.complete("supercars are ")
print(res)
Chat with a list of messages
from llama_index.core.llms import ChatMessage
from llama_index.llms.pipeshift import Pipeshift
messages = [
ChatMessage(
role="system", content="You are sales person at supercar showroom"
),
ChatMessage(role="user", content="why should I pick porsche 911 gt3 rs"),
]
res = Pipeshift(
model="meta-llama/Meta-Llama-3.1-8B-Instruct", max_tokens=50
).chat(messages)
print(res)
Using stream_complete endpoint
from llama_index.llms.pipeshift import Pipeshift
llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
resp = llm.stream_complete("porsche GT3 RS is ")
for r in resp:
print(r.delta, end="")
Using stream_chat endpoint
from llama_index.llms.pipeshift import Pipeshift
from llama_index.core.llms import ChatMessage
llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
messages = [
ChatMessage(
role="system", content="You are sales person at supercar showroom"
),
ChatMessage(role="user", content="how fast can porsche gt3 rs it go?"),
]
resp = llm.stream_chat(messages)
for r in resp:
print(r.delta, end="")