Remotion Lambda is our recommended solution if you are looking for a way to split up renders into chunks that run on different machines.
It takes care of a lot of engineering work that you would otherwise have to do yourself.
We think Lambda is the best balance between speed, cost, scalability and ease of use.
Many users set the memory of their Lambda functions too high, unnecessarily causing their renders to be way too expensive.
See how to optimize a Lambda render.
Before proceeding with building your own distributed rendering solution, consider how much money you are going to save and weigh it against the cost of implementation, given the complexity.
Also consider the savings you already get from Lambda functions shutting down immediately once a render finishes.
Should you come to the conclusion that you still need your own distributed rendering solution, here is how to do it.
Remotion Lambda itself follows the same blueprint.
We call the machine that orchestrates the render the "main routine".
In the main routine:

- Call selectComposition() to obtain the metadata of the composition, such as fps and durationInFrames.
- Consider the frameRange and everyNthFrame options if necessary to determine how many frames you are actually rendering.
- Split the frames into chunks of equal size (the last chunk may be smaller). This is especially important if you use the aac audio codec.
- For each chunk, calculate the value that you will later pass as the frameRange argument when rendering chunks. Remember that frame ranges start at 0, and end at durationInFrames - 1. Passing values too small or too big will cause an error to be thrown.

Then, on the worker machines, render each chunk using renderMedia():

- Pass the frameRange of this chunk that you have calculated before to your render call.
- Pass as composition the value you retrieved in the main routine.
- You may customize the options of the renderMedia() call, but you must pass the same options for every chunk.
- Pass the same inputProps that you passed to selectComposition().
- Set numberOfGifLoops always to null for the chunk.
- Set the enforceAudioTrack option always to true.
- Set outputLocation to a temporary file.
- Set compositionStart to the first frame of the overall range of frames that are being rendered.
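The chunk calculation in the main routine can be sketched as follows. makeChunks() is a hypothetical helper, not part of the Remotion API; it splits an inclusive frame range into equally sized chunks, the last of which may be smaller:

```typescript
// Hypothetical helper for the main routine - not part of the Remotion API.
// Splits an inclusive frame range into chunks of `framesPerChunk` frames each;
// the last chunk may be smaller.
type FrameRange = [number, number];

const makeChunks = (
  frameRange: FrameRange,
  framesPerChunk: number,
): FrameRange[] => {
  const [start, end] = frameRange;
  const chunks: FrameRange[] = [];
  for (let first = start; first <= end; first += framesPerChunk) {
    chunks.push([first, Math.min(first + framesPerChunk - 1, end)]);
  }
  return chunks;
};

// Rendering frames 100-199 in chunks of 25 frames:
const chunks = makeChunks([100, 199], 25);
// chunks is [[100, 124], [125, 149], [150, 174], [175, 199]]
```

Each entry of the returned array is what you would pass as the frameRange of one chunk's render.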
If you render the composition from the beginning, every renderMedia() invocation should have 0 as compositionStart. If you for example pass frameRange: [100, 199], and you split this up into 4 chunks: [100, 124], [125, 149], [150, 174], [175, 199], every chunk should have 100 as the value for compositionStart.

- If you render with the codec h264 (default and recommended), or you want to render a GIF, set the codec to h264-ts instead of h264.
- If your audio codec would be aac (which would be default and recommended), you have two options: Either set audioCodec to pcm-16 instead of the audio codec and set the forSeamlessAacConcatenation option to false, or set audioCodec to the same value you would pass to a regular renderMedia() call, which might be null, and set the forSeamlessAacConcatenation option to true.
- Use the separateAudioTo option to render the audio to a separate file.
- concurrency and offthreadVideoThreads may be adjusted if desired, but leaving the defaults is also fine.

Following these instructions and passing the right values is crucial for aligning the audio correctly when concatenating it in the end.
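A minimal sketch of how the per-chunk options could be assembled, assuming the pcm-16 audio strategy. The variable names and file paths are illustrative, not prescribed by Remotion:

```typescript
// Illustrative sketch: assembling the options for one chunk's renderMedia() call.
// `overallRange` is the full range being rendered, `chunk` is the slice
// assigned to this worker. File paths are made up.
const overallRange: [number, number] = [100, 199];
const chunk: [number, number] = [150, 174];

const chunkOptions = {
  frameRange: chunk, // this chunk's own slice of frames
  codec: 'h264-ts' as const, // chunks use h264-ts; plain h264 only when combining
  audioCodec: 'pcm-16' as const, // lossless audio chunks...
  forSeamlessAacConcatenation: false, // ...so the seamless AAC trick is not needed
  enforceAudioTrack: true, // always true for chunks
  numberOfGifLoops: null, // always null for chunks, even when targeting a GIF
  compositionStart: overallRange[0], // first frame of the overall range - identical for every chunk
  outputLocation: '/tmp/chunk-150-174.ts', // hypothetical temporary file for the video
  separateAudioTo: '/tmp/chunk-150-174.wav', // hypothetical file receiving the audio
};
```

Together with composition, serveUrl and the same inputProps used for selectComposition(), these options would go into the renderMedia() call from @remotion/renderer.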
Pass the video and audio chunks, as well as any artifacts you have received, to the machine running the main routine.
Collect all the audio and video chunks. Once you have all of them, you can call combineChunks() with the following parameters:
- videoFiles: An array of absolute paths to the video chunks, must be in order.
- audioFiles: An array of absolute paths to the audio chunks, must be in order.
- outputLocation: An absolute file path where the combined chunks are stored.
- onProgress: See the documentation for combineChunks().
- codec: The final codec of the media. If you want to render h264, you should now pass h264, not h264-ts.
- framesPerChunk: The amount of frames that were rendered per chunk. The everyNthFrame option will be applied only afterwards, so don't calculate it in.
- fps: Must be the value of fps that you got from selectComposition().
- preferLossless: Pass it here as well if you passed it to renderMedia().
- compositionDurationInFrames: The total duration of the composition, taken from selectComposition(). Pass the full duration even if you are only rendering a portion of it.
- frameRange: If you only rendered a portion of a composition, then you must pass that exact frame range here as well. Remember that frame ranges start at 0, and end at durationInFrames - 1. Passing values too small or too big will cause an error to be thrown.

If you have changed the defaults of a renderMedia() call, you may also have to pass additional parameters:
- audioCodec: If you passed a custom audioCodec to the render routines, then you should also pass it here. Otherwise, it should be null. If the default audio codec for your video codec would have been aac and you passed pcm-16 to the render routine, you can now pass null again.
- audioBitrate: The bitrate that you would target. Since you might have rendered the audio chunks as pcm-16, the bitrate is only applied at this stage.
- numberOfGifLoops: If you are rendering a GIF, you can now set the number of loops (remember that for each chunk it must be null).
- everyNthFrame: If you passed it to each renderMedia() call, pass it here again.

Furthermore, you can use the logLevel, metadata, cancelSignal and binariesDirectory options as you would use them with renderMedia().
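As a sketch, the parameters for the final combineChunks() call might look as follows, assuming frames 100 to 199 were rendered in four chunks of 25 frames with pcm-16 audio. The paths, fps and composition duration are made-up values that in reality come from your render and from selectComposition():

```typescript
// Illustrative parameters for combineChunks() from @remotion/renderer.
// Paths are hypothetical; fps and compositionDurationInFrames are assumed
// values that in reality come from selectComposition().
const chunkRanges = [[100, 124], [125, 149], [150, 174], [175, 199]] as const;

const combineOptions = {
  // Chunk files must be listed in render order:
  videoFiles: chunkRanges.map(([f, l]) => `/tmp/chunk-${f}-${l}.ts`),
  audioFiles: chunkRanges.map(([f, l]) => `/tmp/chunk-${f}-${l}.wav`),
  outputLocation: '/tmp/final.mp4',
  codec: 'h264' as const, // the final codec - h264 again, not h264-ts
  framesPerChunk: 25, // frames rendered per chunk, before everyNthFrame is applied
  fps: 30, // assumed; must match the value from selectComposition()
  compositionDurationInFrames: 300, // assumed full duration from selectComposition()
  frameRange: [100, 199] as [number, number], // the portion that was actually rendered
  audioCodec: null, // chunks were pcm-16, so the default applies again
  // onProgress: see the combineChunks() documentation
};

// await combineChunks(combineOptions);
```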