fern/snippets/supports-streaming.mdx
<ParamField path="supports_streaming" type="boolean">

Whether the internal LLM client should use the streaming API. **Default: `true`**
Then, in your BAML code, you can use something like:
```baml
client<llm> MyClientWithoutStreaming {
  provider anthropic
  options {
    model claude-3-5-haiku-20241022
    api_key env.ANTHROPIC_API_KEY
    max_tokens 1000
    supports_streaming false
  }
}

function MyFunction() -> string {
  client MyClientWithoutStreaming
  prompt #"Write a short story"#
}
```
From your generated Python code, both call styles keep working:

```python
# This will be streamed from your Python code's perspective,
# but under the hood it calls the non-streaming HTTP API
# and then returns a streamable response with a single event.
b.stream.MyFunction()

# This works exactly the same as before.
b.MyFunction()
```
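For a fuller picture, here is a minimal sketch of consuming that stream, assuming the generated `baml_client` package exposes the sync client as `b` and that the stream object supports iteration and `get_final_response()`:

```python
from baml_client import b  # generated sync client (assumed import path)

# Because MyClientWithoutStreaming sets supports_streaming false,
# BAML makes a single non-streaming HTTP call and wraps the
# complete response as a stream with one event.
stream = b.stream.MyFunction()

# The loop still runs as usual; it just yields once, with the full story.
for partial in stream:
    print(partial)

# The final parsed result is retrieved the same way as for a real stream.
final = stream.get_final_response()
print(final)
```

</ParamField>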