# GPT

*This model was released on 2018-06-11 and added to Hugging Face Transformers on 2023-06-20.*

GPT (Generative Pre-trained Transformer), introduced in an OpenAI blog post, focuses on effectively learning text representations and transferring them to downstream tasks. The model pretrains a Transformer decoder to predict the next word on unlabeled text and is then fine-tuned on labeled data.

GPT can generate high-quality text, making it well-suited for a variety of natural language understanding tasks such as textual entailment, question answering, semantic similarity, and document classification.

You can find all the original GPT checkpoints under the [OpenAI community](https://huggingface.co/openai-community) organization.
> [!TIP]
> Click on the GPT models in the right sidebar for more examples of how to apply GPT to different language tasks.
The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.
```py
from transformers import pipeline

# load a text-generation pipeline with the GPT checkpoint on GPU 0
generator = pipeline(task="text-generation", model="openai-community/openai-gpt", device=0)
output = generator("The future of AI is", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
model = AutoModelForCausalLM.from_pretrained("openai-community/openai-gpt", device_map="auto")

# tokenize the prompt and move it to the model's device before generating
inputs = tokenizer("The future of AI is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
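For the command line, a minimal sketch using the `transformers run` CLI (assuming the subcommand and flags below are available in your installed version):

```bash
echo -e "The future of AI is" | transformers run --task text-generation --model openai-community/openai-gpt --device 0
```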
## OpenAIGPTConfig

[[autodoc]] OpenAIGPTConfig

## OpenAIGPTModel

[[autodoc]] OpenAIGPTModel
    - forward

## OpenAIGPTLMHeadModel

[[autodoc]] OpenAIGPTLMHeadModel
    - forward

## OpenAIGPTDoubleHeadsModel

[[autodoc]] OpenAIGPTDoubleHeadsModel
    - forward
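[`OpenAIGPTDoubleHeadsModel`] pairs the language-modeling head with a multiple-choice classification head. The sketch below scores two candidate sentences; the `[CLS]` token is added only for this example (it is not in the pretrained vocabulary, so the embeddings are resized):

```py
import torch
from transformers import AutoTokenizer, OpenAIGPTDoubleHeadsModel

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-community/openai-gpt")

# add a [CLS] token for the classification head to read, then resize the embeddings
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = torch.tensor([tokenizer.encode(c) for c in choices]).unsqueeze(0)  # batch size 1, 2 choices
mc_token_ids = torch.tensor([input_ids.size(-1) - 1] * 2).unsqueeze(0)  # index of [CLS] in each choice

outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.logits      # next-token logits for each choice
mc_logits = outputs.mc_logits   # one classification score per choice
```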
## OpenAIGPTForSequenceClassification

[[autodoc]] OpenAIGPTForSequenceClassification
    - forward
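[`OpenAIGPTForSequenceClassification`] classifies a sequence using the logits of its last token, as other causal models do. A minimal sketch, noting that the classification head is newly initialized (untrained) when loaded from the pretrained checkpoint:

```py
import torch
from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-community/openai-gpt", num_labels=2)

inputs = tokenizer("The future of AI is bright", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels); head weights are untrained here
print(logits.argmax(dim=-1))
```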
## OpenAIGPTTokenizer

[[autodoc]] OpenAIGPTTokenizer

## OpenAIGPTTokenizerFast

[[autodoc]] OpenAIGPTTokenizerFast