
Pipeshift

docs/examples/llm/pipeshift.ipynb


If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-llms-pipeshift
```

```python
%pip install llama-index
```

Basic Usage

Head over to the models section of the Pipeshift dashboard to see the list of available models.

Call complete with a prompt

```python
from llama_index.llms.pipeshift import Pipeshift

# import os
# os.environ["PIPESHIFT_API_KEY"] = "your_api_key"

llm = Pipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    # api_key="YOUR_API_KEY"  # alternatively, pass the key here if it is not set as an environment variable
)
res = llm.complete("supercars are ")
```

```python
print(res)
```

Call chat with a list of messages

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.pipeshift import Pipeshift

messages = [
    ChatMessage(
        role="system", content="You are a salesperson at a supercar showroom"
    ),
    ChatMessage(role="user", content="Why should I pick the Porsche 911 GT3 RS?"),
]
res = Pipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct", max_tokens=50
).chat(messages)
```

```python
print(res)
```

Streaming

Using stream_complete endpoint

```python
from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
resp = llm.stream_complete("porsche GT3 RS is ")
```

```python
for r in resp:
    print(r.delta, end="")
```

Using stream_chat endpoint

```python
from llama_index.llms.pipeshift import Pipeshift
from llama_index.core.llms import ChatMessage

llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
messages = [
    ChatMessage(
        role="system", content="You are a salesperson at a supercar showroom"
    ),
    ChatMessage(role="user", content="How fast can the Porsche GT3 RS go?"),
]
resp = llm.stream_chat(messages)
```

```python
for r in resp:
    print(r.delta, end="")
```