
Aleph Alpha

docs/examples/llm/alephalpha.ipynb


Aleph Alpha is a language model that generates human-like text in multiple languages and styles, and it can be fine-tuned for specific domains.

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-llms-alephalpha
```

```python
!pip install llama-index
```

Set your Aleph Alpha token

```python
import os

os.environ["AA_TOKEN"] = "your_token_here"
```

Call complete with a prompt

```python
from llama_index.llms.alephalpha import AlephAlpha

# To pass the token explicitly:
# llm = AlephAlpha(token="<aa_token>")
# Otherwise it is read from the AA_TOKEN environment variable.
llm = AlephAlpha(model="luminous-base-control")

resp = llm.complete("Paul Graham is ")
```

```python
print(resp)
```

Additional Response Details

To access detailed response information such as log probabilities, initialize your AlephAlpha instance with the log_probs parameter. The logprobs attribute of the CompletionResponse will then contain this data. Other details, such as the model version and the raw completion text, are available through the response's additional_kwargs.

```python
from llama_index.llms.alephalpha import AlephAlpha

llm = AlephAlpha(model="luminous-base-control", log_probs=0)

resp = llm.complete("Paul Graham is ")

if resp.logprobs is not None:
    print("\nLog Probabilities:")
    for lp_list in resp.logprobs:
        for lp in lp_list:
            print(f"Token: {lp.token}, LogProb: {lp.logprob}")

if "model_version" in resp.additional_kwargs:
    print("\nModel Version:")
    print(resp.additional_kwargs["model_version"])

if "raw_completion" in resp.additional_kwargs:
    print("\nRaw Completion:")
    print(resp.additional_kwargs["raw_completion"])
```

Async

```python
from llama_index.llms.alephalpha import AlephAlpha

llm = AlephAlpha(model="luminous-base-control")
resp = await llm.acomplete("Paul Graham is ")
```

```python
print(resp)
```