# CTRL

This model was released on 2019-09-11 and added to Hugging Face Transformers on 2020-11-16.
## Overview

The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pretrained with a language modeling objective on a very large corpus (~140 GB of text data), with the first token of each sequence reserved as a control code (such as Links, Books, Wikipedia etc.).
The abstract from the paper is the following:
*Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution.*
This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found [here](https://github.com/salesforce/ctrl).
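Since generation is conditioned on a control code, a typical prompt starts with one of the codes from the paper (e.g. Books, Links, Wikipedia). Below is a minimal generation sketch; it assumes the `Salesforce/ctrl` checkpoint from the Hugging Face Hub (the full 1.63B-parameter model, so the download is several GB), and the sampling settings are illustrative choices, not the paper's decoding setup.

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

# "Salesforce/ctrl" is the public checkpoint on the Hub; it is a large download.
tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The first token acts as a control code: "Books" steers generation toward book-like text.
inputs = tokenizer("Books In the beginning", return_tensors="pt")

# Sampling hyperparameters here are illustrative only.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```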
## Usage tips

- The model can take `past_key_values` as input, which are the previously computed key/value attention pairs. Passing `past_key_values` prevents the model from re-computing values that were already computed in the context of text generation. See the `forward` method for more information on the usage of this argument, and the caching sketch below.
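As a concrete illustration, the sketch below runs one full forward pass over a prompt and then feeds only the newly chosen token together with the returned cache. The checkpoint name and the greedy next-token choice are assumptions for the example, not requirements.

```python
import torch
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

inputs = tokenizer("Wikipedia The history of", return_tensors="pt")
with torch.no_grad():
    # First pass over the whole prompt; use_cache=True returns past_key_values.
    out = model(**inputs, use_cache=True)
    next_token = out.logits[:, -1:].argmax(dim=-1)

    # Subsequent pass: feed only the new token plus the cache, so the
    # prompt's key/value pairs are not recomputed.
    out = model(input_ids=next_token, past_key_values=out.past_key_values, use_cache=True)
```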
## CTRLConfig

[[autodoc]] CTRLConfig
## CTRLTokenizer

[[autodoc]] CTRLTokenizer
    - save_vocabulary

## CTRLModel

[[autodoc]] CTRLModel
    - forward

## CTRLLMHeadModel

[[autodoc]] CTRLLMHeadModel
    - forward

## CTRLForSequenceClassification

[[autodoc]] CTRLForSequenceClassification
    - forward