# Parameters Tab
Contains parameters that control the text generation.
LLMs work by generating one token at a time. Given your prompt, the model calculates the probabilities for every possible next token; the sampling parameters in this tab then modify those probabilities before one token is picked.
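As a rough illustration of that two-step process, here is a minimal sketch of temperature-based sampling. The function and variable names are hypothetical, not the web UI's internal API:

```python
import math
import random

def sample_next_token(logits, temperature=0.7):
    """Pick one token from raw scores via temperature sampling.

    `logits` maps candidate tokens to unnormalized scores; all names
    here are illustrative, not the web UI's internals.
    """
    # Lower temperature sharpens the distribution, higher flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax turns the scaled scores into probabilities.
    peak = max(scaled.values())
    exps = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token according to those probabilities.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding
```

With a very low temperature the most likely token is chosen almost every time, which is why low temperatures make output more deterministic.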
The preset menu can be used to save and load combinations of parameters for reuse.
The default presets were obtained through a blind contest called "Preset Arena" in which hundreds of people voted. The full results can be found here.
A key takeaway is that the best presets are:
The other presets are:
For more information about the parameters, the transformers documentation is a good reference.
- **max_new_tokens**: the maximum number of tokens to generate. It enters the truncation calculation through the formula `(prompt_length) = min(truncation_length - max_new_tokens, prompt_length)`, so your prompt will get truncated if you set it too high.
- **min_p**: tokens with probability smaller than `(min_p) * (probability of the most likely token)` are discarded. This is the same as top_a but without squaring the probability.
- **top_a**: tokens with probability smaller than `(top_a) * (probability of the most likely token)^2` are discarded.
- **Negative prompt**: only used when `guidance_scale != 1`. It is most useful for instruct models and custom system messages. You place your full prompt in this field with the system message replaced with the default one for the model (like "You are Llama, a helpful assistant...") to make the model pay more attention to your custom system message.
- **smoothing_factor**: when `0 < smoothing_factor < 1`, the logits distribution becomes flatter; when `smoothing_factor > 1`, it becomes more peaked.
- **temperature_last**: when enabled, temperature/dynamic_temperature/quadratic_sampling will be removed from wherever they are in the sampler stack and moved to the end.

To the right (or below if you are on mobile), the following parameters are present:
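The min_p and top_a cutoffs, and the truncation formula, can be sketched as small pure functions. These helper names are hypothetical; the real implementations live in the sampling backends:

```python
def filter_min_p(probs, min_p):
    """Discard tokens below (min_p) * (probability of the most likely token)."""
    threshold = min_p * max(probs.values())
    return {tok: p for tok, p in probs.items() if p >= threshold}

def filter_top_a(probs, top_a):
    """Discard tokens below (top_a) * (probability of the most likely token)^2."""
    threshold = top_a * max(probs.values()) ** 2
    return {tok: p for tok, p in probs.items() if p >= threshold}

def effective_prompt_length(prompt_length, truncation_length, max_new_tokens):
    """The truncation formula: long prompts are cut to leave room for new tokens."""
    return min(truncation_length - max_new_tokens, prompt_length)

probs = {"the": 0.5, "a": 0.2, "cat": 0.02}
# With min_p=0.1, the cutoff is 0.1 * 0.5 = 0.05, so "cat" is discarded.
# With top_a=1.0, the cutoff is 1.0 * 0.5**2 = 0.25, so only "the" survives.
```

Note how top_a's squared term makes its cutoff scale more aggressively as the top token becomes more dominant.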
- **Custom token bans**: the token IDs to ban can be found by looking at the `tokenizer.json` for the model directly.
- **Skip special tokens**: when unchecked, special tokens appear in the output as text, with BOS rendered as `<s>`, EOS as `</s>`, etc.
- **Activate text streaming**: when unchecked, the full response is output at once. Unchecking it is recommended on high-latency connections, such as when launching with `--share`.
- **Sampler priority**: allows customizing the order in which samplers are applied; for example, an order like `top_p -> temperature -> top_k` can be defined.
- **Load grammar from file**: loads a GBNF grammar from a file under `user_data/grammars`. The output is written to the "Grammar" box below. You can also save and delete custom grammars using this menu.

Some generation-related parameters appear in the Chat tab sidebar rather than in the Parameters tab.
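The arrow-separated sampler priority can be thought of as a pipeline that applies each sampler in turn. A toy sketch, assuming a hypothetical dispatch (the web UI's real dispatch differs):

```python
def top_k_filter(probs, k):
    """Keep only the k most likely tokens."""
    return dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])

def apply_sampler_priority(probs, priority, k=2):
    """Apply samplers in the order given by a string like
    'top_p -> temperature -> top_k'."""
    for name in (part.strip() for part in priority.split("->")):
        if name == "top_k":
            probs = top_k_filter(probs, k)
        # top_p, temperature, etc. would be dispatched here the same way
    return probs
```

Reordering the string changes which distribution each later sampler sees, which is why sampler order can noticeably change output.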
This sub-tab within the Parameters tab defines the instruction template used in the Chat tab when "instruct" or "chat-instruct" are selected under "Mode".
The Character tab is a separate top-level tab that contains the following sub-tabs:
Parameters that define the character used in the Chat tab when "chat" or "chat-instruct" are selected under "Mode".
Note: the following replacements take place in the context and greeting fields when the chat prompt is generated:
- `{{char}}` and `<BOT>` get replaced with the value of "Character's name".
- `{{user}}` and `<USER>` get replaced with the value of "Your name".

So you can use those special placeholders in your character definitions. They are commonly found in TavernAI character cards.
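A minimal sketch of that substitution (the function name is hypothetical, not the web UI's API):

```python
def render_placeholders(text, char_name, user_name):
    """Expand the placeholders accepted in the context and greeting fields."""
    for placeholder in ("{{char}}", "<BOT>"):
        text = text.replace(placeholder, char_name)
    for placeholder in ("{{user}}", "<USER>"):
        text = text.replace(placeholder, user_name)
    return text
```

For example, `render_placeholders("{{char}} waves at <USER>.", "Chiharu", "Anon")` produces `"Chiharu waves at Anon."`.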
Allows you to create and manage user profiles.
In this tab, you can download the current chat history in JSON format and upload a previously saved chat history.
When a history is uploaded, a new chat is created to hold it. That is, you don't lose your current chat in the Chat tab.
Allows you to upload characters in the YAML format used by the web UI, optionally including a profile picture.
Allows you to upload a TavernAI character card. It will be converted to the internal YAML format of the web UI after upload.