# LlamaIndex LLMs Integration: Perplexity
The Perplexity integration for LlamaIndex allows you to tap into real-time generative search powered by the Perplexity API. This integration supports synchronous and asynchronous chat completions—as well as streaming responses.
To install the required packages, run:

```bash
%pip install llama-index-llms-perplexity
!pip install llama-index
```
Please refer to the official Perplexity API documentation to get started, and follow the steps there to generate your API key.
Import the necessary libraries and set your Perplexity API key:

```python
from llama_index.llms.perplexity import Perplexity

pplx_api_key = "your-perplexity-api-key"  # Replace with your actual API key
```
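Rather than hard-coding the key in source, a common alternative is to read it from an environment variable. A minimal sketch (the `PPLX_API_KEY` variable name here is an illustrative choice, not something this integration requires):

```python
import os

# Read the key from the environment instead of hard-coding it.
# "PPLX_API_KEY" is an illustrative name; use whatever your deployment sets.
pplx_api_key = os.environ.get("PPLX_API_KEY", "your-perplexity-api-key")
print(pplx_api_key)
```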
Create an instance of the Perplexity LLM with your API key and desired model settings:

```python
llm = Perplexity(api_key=pplx_api_key, model="sonar-pro", temperature=0.2)
```
You can send a chat message using the `chat` method:

```python
from llama_index.core.llms import ChatMessage

messages_dict = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": "What is the weather like in San Francisco today?",
    },
]

messages = [ChatMessage(**msg) for msg in messages_dict]

# Obtain a response from the model
response = llm.chat(messages)
print(response)
```
For asynchronous conversation processing, use the `achat` method to send messages and await the response:

```python
response = await llm.achat(messages)
print(response)
```
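The `await` call above works as written in a notebook or other async context; in a plain Python script you need to drive the coroutine with an event loop. A minimal sketch of that pattern, using a stub coroutine in place of `llm.achat` so it runs without an API key:

```python
import asyncio


async def achat_stub(messages):
    # Stand-in for llm.achat(messages); a real call would hit the Perplexity API.
    return f"echo: {messages[-1]}"


async def main():
    return await achat_stub(["What is the weather like in San Francisco today?"])


# In a script (outside a notebook), run the coroutine like this:
response = asyncio.run(main())
print(response)
```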
For cases where you want to receive a response token by token in real time, use the `stream_chat` method:

```python
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
Similarly, for asynchronous streaming, the `astream_chat` method provides a way to process response deltas asynchronously:

```python
resp = await llm.astream_chat(messages)
async for delta in resp:
    print(delta.delta, end="")
```
Perplexity models can easily be wrapped into a LlamaIndex tool so that they can be called as part of your data processing or conversational workflows. The tool below uses real-time generative search powered by Perplexity, configured with the default model (`"sonar-pro"`) and the `enable_search_classifier` parameter enabled.
Below is an example of how to define and register the tool:

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.perplexity import Perplexity
from llama_index.core.llms import ChatMessage


def query_perplexity(query: str) -> str:
    """
    Queries the Perplexity API via the LlamaIndex integration.

    This function instantiates a Perplexity LLM with updated default settings
    (using model "sonar-pro" and enabling the search classifier so that the API
    can intelligently decide if a search is needed), wraps the query into a
    ChatMessage, and returns the generated response content.
    """
    pplx_api_key = "your-perplexity-api-key"  # Replace with your actual API key

    llm = Perplexity(
        api_key=pplx_api_key,
        model="sonar-pro",
        temperature=0.7,
        enable_search_classifier=True,  # Determines whether a search is necessary for this query
    )

    messages = [ChatMessage(role="user", content=query)]
    response = llm.chat(messages)
    return response.message.content


# Create the tool from the query_perplexity function
query_perplexity_tool = FunctionTool.from_defaults(fn=query_perplexity)
```
For a complete walkthrough, see the LLM implementation example: https://docs.llamaindex.ai/en/stable/examples/llm/perplexity/