
Supports Streaming (OpenAI)

fern/snippets/supports-streaming-openai.mdx


<ParamField path="supports_streaming" type="boolean">

Whether the internal LLM client should use the streaming API. Default: `<auto>`, which is inferred from the model name according to the table below.

| Model         | Supports Streaming |
| ------------- | ------------------ |
| `o1-preview`  | false              |
| `o1-mini`     | false              |
| `o1-*`        | false              |
| `gpt-5`       | true               |
| `gpt-5-mini`  | true               |
| `*`           | true               |
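For example, a client using an `o1-*` model can omit the option entirely; with the `<auto>` default, streaming is disabled for it automatically. A minimal sketch (the client name here is illustrative):

```baml
client<llm> MyO1Client {
  provider openai
  options {
    model o1-mini
    api_key env.OPENAI_API_KEY
    // supports_streaming is omitted: <auto> resolves to false for o1-mini
  }
}
```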

To override the default, set `supports_streaming` explicitly in your client definition:

```baml
client<llm> MyClientWithoutStreaming {
  provider openai
  options {
    model gpt-5
    api_key env.OPENAI_API_KEY
    supports_streaming false
  }
}

function MyFunction() -> string {
  client MyClientWithoutStreaming
  prompt #"Write a short story"#
}
```
From your generated Python client, both call styles keep working:

```python
# This call is streamed from your Python code's perspective,
# but under the hood it hits the non-streaming HTTP API and
# returns a streamable response containing a single event.
b.stream.MyFunction()

# This works exactly the same as before.
b.MyFunction()
```
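For completeness, here is how such a stream is typically consumed, assuming BAML's generated sync Python client (a sketch; with `supports_streaming false`, the loop simply yields once):

```python
from baml_client import b  # generated BAML client

stream = b.stream.MyFunction()

# With supports_streaming false, this loop receives a single
# event containing the whole response instead of many partials.
for partial in stream:
    print(partial)

# Retrieve the fully assembled result after the stream ends.
final = stream.get_final_response()
print(final)
```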
</ParamField>