docs/examples/llm/alephalpha.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/llm/alephalpha.ipynb" target="_parent"></a>
Aleph Alpha is a powerful language model that generates human-like text in multiple languages and styles, and can be fine-tuned for specific domains.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-llms-alephalpha
!pip install llama-index
import os
os.environ["AA_TOKEN"] = "your_token_here"
Call `complete` with a prompt
from llama_index.llms.alephalpha import AlephAlpha
# To customize your token, pass it explicitly;
# otherwise AA_TOKEN is read from your environment:
# llm = AlephAlpha(token="<aa_token>")
llm = AlephAlpha(model="luminous-base-control")
resp = llm.complete("Paul Graham is ")
print(resp)
To access detailed response information such as log probabilities, initialize your AlephAlpha instance with the log_probs parameter. The logprobs attribute of the CompletionResponse will then contain this data. Other details, such as the model version and raw completion text, are available via additional_kwargs.
from llama_index.llms.alephalpha import AlephAlpha
# log_probs=0 requests log probabilities for the generated tokens only
llm = AlephAlpha(model="luminous-base-control", log_probs=0)
resp = llm.complete("Paul Graham is ")
if resp.logprobs is not None:
    print("\nLog Probabilities:")
    for lp_list in resp.logprobs:
        for lp in lp_list:
            print(f"Token: {lp.token}, LogProb: {lp.logprob}")
if "model_version" in resp.additional_kwargs:
    print("\nModel Version:")
    print(resp.additional_kwargs["model_version"])
if "raw_completion" in resp.additional_kwargs:
    print("\nRaw Completion:")
    print(resp.additional_kwargs["raw_completion"])
Async completion is also supported via `acomplete`:
from llama_index.llms.alephalpha import AlephAlpha
llm = AlephAlpha(model="luminous-base-control")
resp = await llm.acomplete("Paul Graham is ")
print(resp)