# AWS Sagemaker
To use Open Interpreter with a model from AWS Sagemaker, set the `model` flag:

```shell
interpreter --model sagemaker/<model-name>
```

```python
# Sagemaker requires boto3 to be installed on your machine:
!pip install boto3

from interpreter import interpreter

interpreter.llm.model = "sagemaker/<model-name>"
interpreter.chat()
```
We support the following completion models from AWS Sagemaker:

```shell
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f
interpreter --model sagemaker/<your-huggingface-deployment-name>
```
```python
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f"
interpreter.llm.model = "sagemaker/<your-huggingface-deployment-name>"
```
Set the following environment variables to use these models.
| Environment Variable | Description | Where to Find |
|---|---|---|
| `AWS_ACCESS_KEY_ID` | The API access key for your AWS account. | AWS Account Overview -> Security Credentials |
| `AWS_SECRET_ACCESS_KEY` | The API secret access key for your AWS account. | AWS Account Overview -> Security Credentials |
| `AWS_REGION_NAME` | The AWS region you want to use. | AWS Account Overview -> Navigation bar -> Region |
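If you prefer to configure these from Python rather than your shell profile, one option is to set them with `os.environ` before importing Open Interpreter. This is a minimal sketch; the credential values below are placeholders, not real keys:

```python
import os

# Placeholder values -- replace with your own AWS credentials and region.
os.environ["AWS_ACCESS_KEY_ID"] = "your-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-secret-access-key"
os.environ["AWS_REGION_NAME"] = "us-west-2"

# Confirm the variables are visible to the current process.
for name in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION_NAME"):
    print(name, "is set:", name in os.environ)
```

Environment variables set this way only apply to the current process, so set them before starting a chat.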