docs/examples/llm/dashscope.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/llm/dashscope.ipynb" target="_parent">Open In Colab</a>
In this notebook, we show how to use the DashScope LLM models in LlamaIndex. Check out the DashScope site or the documentation for more details.
If you're opening this notebook on Colab, you will need to install LlamaIndex 🦙 and the DashScope Python SDK.
!pip install llama-index-llms-dashscope
You will need to log in to DashScope and create an API key. Once you have one, you can either pass it explicitly to the API, or use the DASHSCOPE_API_KEY environment variable.
%env DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY
import os
os.environ["DASHSCOPE_API_KEY"] = "YOUR_DASHSCOPE_API_KEY"
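Alternatively, the key can be passed directly when constructing the client instead of relying on the environment variable. A minimal sketch, assuming the `api_key` constructor parameter exposed by `llama-index-llms-dashscope`:

```python
from llama_index.llms.dashscope import DashScope, DashScopeGenerationModels

# Pass the key explicitly rather than reading DASHSCOPE_API_KEY
dashscope_llm = DashScope(
    model_name=DashScopeGenerationModels.QWEN_MAX,
    api_key="YOUR_DASHSCOPE_API_KEY",
)
```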
DashScope Object
from llama_index.llms.dashscope import DashScope, DashScopeGenerationModels
dashscope_llm = DashScope(model_name=DashScopeGenerationModels.QWEN_MAX)
Complete with a prompt
resp = dashscope_llm.complete("How to make cake?")
print(resp)
responses = dashscope_llm.stream_complete("How to make cake?")
for response in responses:
print(response.delta, end="")
Chat with a list of messages
from llama_index.core.base.llms.types import MessageRole, ChatMessage
messages = [
ChatMessage(
role=MessageRole.SYSTEM, content="You are a helpful assistant."
),
ChatMessage(role=MessageRole.USER, content="How to make cake?"),
]
resp = dashscope_llm.chat(messages)
print(resp)
Stream chat
responses = dashscope_llm.stream_chat(messages)
for response in responses:
print(response.delta, end="")
messages = [
ChatMessage(
role=MessageRole.SYSTEM, content="You are a helpful assistant."
),
ChatMessage(role=MessageRole.USER, content="How to make cake?"),
]
# first round
resp = dashscope_llm.chat(messages)
print(resp)
# add the assistant's response to the message history
messages.append(
ChatMessage(role=MessageRole.ASSISTANT, content=resp.message.content)
)
messages.append(
ChatMessage(role=MessageRole.USER, content="How to make it without sugar")
)
# second round
resp = dashscope_llm.chat(messages)
print(resp)
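The two-round pattern above (append the assistant's reply, then the next user turn) can be wrapped in a small helper. The sketch below is illustrative: `ask` and `EchoLLM` are hypothetical names, plain dicts stand in for `ChatMessage` objects, and `EchoLLM` is an offline stand-in with the same `chat(messages)` shape as `dashscope_llm`, so the pattern runs without network access.

```python
from types import SimpleNamespace

class EchoLLM:
    """Hypothetical offline stand-in for dashscope_llm: echoes the last user turn."""

    def chat(self, messages):
        reply = "You said: " + messages[-1]["content"]
        # Mimic the response shape used above: resp.message.content
        return SimpleNamespace(message=SimpleNamespace(content=reply))

def ask(llm, history, user_text):
    """Append the user turn, call the model, record its reply, and return it."""
    history.append({"role": "user", "content": user_text})
    reply = llm.chat(history).message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
print(ask(EchoLLM(), history, "How to make cake?"))             # first round
print(ask(EchoLLM(), history, "How to make it without sugar"))  # second round
```

Because the history list is mutated in place, each call sees the full conversation so far, which is exactly what the manual `messages.append(...)` calls above accomplish.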