pgml-cms/docs/open-source/pgml/api/pgml.transform_stream.md
`pgml.transform_stream()` mirrors `pgml.transform()` with two caveats:

- It returns a `SETOF JSONB` instead of `JSONB`.
- It only works with the `text-generation` task.

The `pgml.transform_stream()` function is overloaded and can be used to chat with messages or to complete text.
Use the chat form for conversational AI applications, or when you need to provide instructions and maintain context.
```postgresql
pgml.transform_stream(
    task   JSONB,
    inputs ARRAY[]::JSONB,
    args   JSONB
)
```
| Argument | Description |
|---|---|
| `task` | The task object with required keys of `task` and `model`. |
| `inputs` | The input chat messages. |
| `args` | The additional arguments for the model. |
A simple example using `meta-llama/Meta-Llama-3.1-8B-Instruct`:
```postgresql
SELECT pgml.transform_stream(
    task => '{
        "task": "conversational",
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"
    }'::JSONB,
    inputs => ARRAY[
        '{"role": "system", "content": "You are a friendly and helpful chatbot"}'::JSONB,
        '{"role": "user", "content": "Tell me about yourself."}'::JSONB
    ]
) AS answer;
```
Result:

```
["I"]
["'m"]
[" so"]
[" glad"]
[" you"]
[" asked"]
["!"]
[" I"]
["'m"]
[" a"]
...
```
Results have been truncated for sanity.
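
Because `pgml.transform_stream()` returns a `SETOF JSONB`, you can consume rows as they are generated instead of waiting for the whole response. One way to do that from SQL is with a cursor; a minimal sketch (the cursor name and the `FETCH` counts are arbitrary):

```postgresql
BEGIN;

-- Declare a cursor over the streaming result
DECLARE answer_cursor CURSOR FOR
SELECT pgml.transform_stream(
    task => '{
        "task": "conversational",
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"
    }'::JSONB,
    inputs => ARRAY[
        '{"role": "system", "content": "You are a friendly and helpful chatbot"}'::JSONB,
        '{"role": "user", "content": "Tell me about yourself."}'::JSONB
    ]
);

-- Pull tokens a few at a time as they become available
FETCH 5 FROM answer_cursor;
FETCH 5 FROM answer_cursor;

CLOSE answer_cursor;
COMMIT;
```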
We follow OpenAI's standard for model parameters:

- `frequency_penalty` - Penalizes the frequency of tokens
- `logit_bias` - Modify the likelihood of specified tokens
- `logprobs` - Return logprobs of the most likely token(s)
- `top_logprobs` - The number of most likely tokens to return at each token position
- `max_tokens` - The maximum number of tokens to generate
- `n` - The number of completions to build out
- `presence_penalty` - Control new token penalization
- `response_format` - The format of the response
- `seed` - The seed for randomness
- `stop` - An array of sequences to stop on
- `temperature` - The temperature for sampling
- `top_p` - An alternative sampling method

For more information on these parameters see [OpenAI's docs](https://platform.openai.com/docs/api-reference/chat).
An example with some common parameters:
```postgresql
SELECT pgml.transform_stream(
    task => '{
        "task": "conversational",
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"
    }'::JSONB,
    inputs => ARRAY[
        '{"role": "system", "content": "You are a friendly and helpful chatbot"}'::JSONB,
        '{"role": "user", "content": "Tell me about yourself."}'::JSONB
    ],
    args => '{
        "max_tokens": 10,
        "temperature": 0.75,
        "seed": 10
    }'::JSONB
) AS answer;
```
Result:

```
["I"]
["'m"]
[" so"]
[" glad"]
[" you"]
[" asked"]
["!"]
[" I"]
["'m"]
[" a"]
```
The second overload completes text. Use it for simpler text-generation tasks, like completing sentences or generating content based on a prompt.
```postgresql
pgml.transform_stream(
    task  JSONB,
    input TEXT,
    args  JSONB
)
```
| Argument | Description |
|---|---|
| `task` | The task object with required keys of `task` and `model`. |
| `input` | The text to complete. |
| `args` | The additional arguments for the model. |
A simple example using `meta-llama/Meta-Llama-3.1-8B-Instruct`:
```postgresql
SELECT pgml.transform_stream(
    task => '{
        "task": "text-generation",
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"
    }'::JSONB,
    input => 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone'
) AS answer;
```
Result:

```
[","]
[" Nine"]
[" for"]
[" Mort"]
["al"]
[" Men"]
[" doomed"]
[" to"]
[" die"]
[","]
[" One"]
[" for"]
[" the"]
[" Dark"]
[" Lord"]
[" on"]
...
```
We follow OpenAI's standard for model parameters:

- `best_of` - Generates "best_of" completions
- `echo` - Echo back the prompt
- `frequency_penalty` - Penalizes the frequency of tokens
- `logit_bias` - Modify the likelihood of specified tokens
- `logprobs` - Return logprobs of the most likely token(s)
- `max_tokens` - The maximum number of tokens to generate
- `n` - The number of completions to build out
- `presence_penalty` - Control new token penalization
- `seed` - The seed for randomness
- `stop` - An array of sequences to stop on
- `temperature` - The temperature for sampling
- `top_p` - An alternative sampling method

For more information on these parameters see [OpenAI's docs](https://platform.openai.com/docs/api-reference/completions).
An example with some common parameters:
```postgresql
SELECT pgml.transform_stream(
    task => '{
        "task": "text-generation",
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct"
    }'::JSONB,
    input => 'Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone',
    args => '{
        "max_tokens": 10,
        "temperature": 0.75,
        "seed": 10
    }'::JSONB
) AS answer;
```
Result:

```
[","]
[" Nine"]
[" for"]
[" Mort"]
["al"]
[" Men"]
[" doomed"]
[" to"]
[" die"]
[","]
```