# Streaming
> [!TLDR]
> - HTML streaming sends content to browsers incrementally
> - `<await>` and `<try>` tags enable async rendering
> - Built-in error handling and loading states are included
Marko provides a powerful, yet simple, declarative approach to HTML streaming via `<await>` and `<try>` to improve both the perceived and real performance of your pages.
Streaming is the process of transmitting data incrementally as it’s generated. On the web, HTML streaming means sending HTML to the browser chunk-by-chunk, as soon as it's ready, rather than waiting until the entire document is completed.
In contrast, buffering means generating the full HTML page first, and only then sending it to the browser.
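A minimal sketch of the difference, in plain JavaScript with no Marko involved (`streamPage`, `bufferPage`, and the shape of `out` are hypothetical, chosen for illustration): the streaming version hands its first chunk to the output immediately, while the buffered version produces nothing until all data has resolved.

```js
// Streaming: flush the static shell right away, append slow content later.
function streamPage(out, slowHtmlPromise) {
  out.write("<!doctype html><body><h1>Hello</h1>"); // sent immediately
  return slowHtmlPromise.then((html) => {
    out.write(html + "</body>"); // sent once the slow data resolves
    out.end();
  });
}

// Buffering: assemble the entire document first, then send it in one piece.
function bufferPage(slowHtmlPromise) {
  return slowHtmlPromise.then(
    (html) => "<!doctype html><body><h1>Hello</h1>" + html + "</body>"
  );
}
```

With streaming, the browser can start parsing the shell (and fetching any assets it references) while the server is still waiting on the slow data.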
## Origins of Progressive Rendering

## Marko's Streaming History

## In-order streaming

## Out-of-order streaming
Marko provides intuitive built-in tags to handle asynchronous HTML generation and streaming:
### `<await>`

Waits for a promise in order to render a section of the template.
Syntax example:
```marko
<await|user|=getUser()>
  ${user.name}
</await>
```
While `<await>` waits for `getUser()` to resolve, the remaining HTML will be flushed to the browser.

### `<try>`

Manages asynchronous boundaries, handles loading states, and gracefully catches errors within streaming (and non-streaming) HTML.
Basic syntax:
```marko
<try>
  <await|user|=getUser()>
    ${user.name}
  </await>

  <@placeholder>
    Loading...
  </@placeholder>

  <@catch|err|>
    Error: ${err.message}
  </@catch>
</try>
```
The `@placeholder` content is shown while waiting, and the `@catch` content is rendered if the promise rejects. Out-of-order streaming involves temporary placeholders being replaced with real content once it is ready. If not handled properly, this can cause content the user is reading or interacting with to shift, leading to a poor user experience.
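One common mitigation, sketched below (the `user-card` class and its styling are illustrative, not part of Marko), is to give the placeholder the same footprint as the content that will replace it, so the swap does not move surrounding content:

```marko
<try>
  <await|user|=getUser()>
    <div class="user-card">${user.name}</div>
  </await>

  <@placeholder>
    <!-- Reserve the same space as the final .user-card -->
    <div class="user-card">Loading...</div>
  </@placeholder>
</try>
```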
Even though streaming has been supported on the web for decades and more tools are utilizing it, you may still find that some default configurations of third parties may assume a buffered response. Here are some known culprits that may buffer your server’s output HTTP streams:
- **Missing `keep-alive`**: the overhead from closing and reopening connections for each request may delay responses.
- **MIME sniffing**: sending the `X-Content-Type-Options: nosniff` header eliminates browser buffering at the very beginning of HTTP responses.

**NGINX**

Most of NGINX's relevant parameters are inside its built-in `ngx_http_proxy_module`:

```nginx
proxy_http_version 1.1; # 1.0 by default
proxy_buffering off; # on by default
```
**Apache**

Apache’s default configuration works fine with streaming, but your host may have it configured differently. The relevant Apache configuration lives in its `mod_proxy` and `mod_proxy_*` modules and their associated environment variables.
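For illustration only (the proxy target is hypothetical, and support for `flushpackets` varies by `mod_proxy` backend — verify against your host's actual configuration), these are the kinds of settings worth auditing:

```apache
# flushpackets=on asks the proxy to flush output after each chunk it
# receives (honored by some backends, e.g. mod_proxy_fcgi).
ProxyPass "/" "http://localhost:3000/" flushpackets=on

# mod_deflate can buffer small chunks while compressing; disabling it
# for streamed routes is one way to rule it out.
SetEnv no-gzip 1
```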
Content Delivery Networks (CDNs) consider efficient streaming one of their best features, but it may be off by default or if certain features are enabled.
**Fastly (Varnish)**

For Fastly or another provider that uses VCL configuration, check whether backend responses have `beresp.do_stream = true` set.
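As a sketch in Fastly's VCL dialect (verify the subroutine name against your service's generated VCL), forcing streaming for backend responses looks like:

```vcl
sub vcl_fetch {
  # Deliver the backend response to the client as it arrives,
  # instead of buffering the whole object first.
  set beresp.do_stream = true;
  return(deliver);
}
```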
**Akamai**

Some Akamai features designed to mitigate slow backends can ironically slow down fast chunked responses. Try toggling off Adaptive Acceleration, Ion, mPulse, Prefetch, and/or similar performance features. Also check for the following in the configuration:
```xml
<network:http.buffer-response-v2>off</network:http.buffer-response-v2>
```
For extreme cases where Node streams very small HTML chunks through its built-in `zlib` compression module, you may need to tweak the compressor stream's settings. Here's an example with `createGzip` and its `Z_PARTIAL_FLUSH` flag:
```js
import http from "http";
import zlib from "zlib";
import MarkoTemplate from "./something.marko";

http
  .createServer(function (request, response) {
    response.writeHead(200, {
      "content-type": "text/html;charset=utf-8",
      "content-encoding": "gzip",
    });
    const templateStream = MarkoTemplate.stream({});
    // Z_PARTIAL_FLUSH makes gzip emit compressed data at each chunk
    // boundary instead of holding it in an internal buffer.
    const gzipStream = zlib.createGzip({
      flush: zlib.constants.Z_PARTIAL_FLUSH,
    });
    templateStream.pipe(gzipStream).pipe(response);
  })
  .listen(80);
```