<div>
<iframe
width="760"
height="427"
scrolling="no"
src="https://egghead.io/lessons/react-profile-react-rendering-and-optimize-with-memo-to-leverage-structural-sharing/embed"
></iframe>
</div>
<a
className="egghead-link"
href="https://egghead.io/lessons/react-profile-react-rendering-and-optimize-with-memo-to-leverage-structural-sharing"
>
Hosted on egghead.io
</a>
<div>
<iframe
width="760"
height="427"
scrolling="no"
src="https://egghead.io/lessons/javascript-produces-immutable-data-and-avoid-unnecessary-creation-of-new-data-trees-with-immer/embed"
></iframe>
</div>
<a
className="egghead-link"
href="https://egghead.io/lessons/javascript-produces-immutable-data-and-avoid-unnecessary-creation-of-new-data-trees-with-immer"
>
Hosted on egghead.io
</a>
Here is a simple benchmark on the performance of Immer. This test takes 50,000 todo items and updates 5,000 of them. Freeze indicates that the state tree has been frozen after producing it. This is a development best practice, as it prevents developers from accidentally modifying the state tree.
Something that isn't reflected in the numbers above is that, in practice, Immer is sometimes significantly faster than a hand-written reducer. The reason is that Immer detects "no-op" state changes and returns the original state if nothing actually changed, which can, for example, avoid many unnecessary re-renders. Cases are known where simply applying Immer solved critical performance issues.
These tests were executed on Node 10.16.3. Use `yarn test:perf` to reproduce them locally.
Most important observation: Immer's overhead is in practice negligible (run `yarn test:perf` for more tests).

For applications with significant array iteration within producers, enable the Array Methods Plugin:
```js
import {enableArrayMethods} from "immer"

enableArrayMethods()
```
This plugin optimizes array operations like filter, find, some, every, and slice by avoiding proxy creation for every element during iteration. Without the plugin, iterating a 1000-element array creates 1000+ proxies. With the plugin, callbacks receive base values, and proxies are only created for elements you actually mutate.
By default, Immer uses loose iteration which only processes enumerable string properties. This is faster than strict iteration which includes symbols and non-enumerable properties. For most use cases, the default is optimal:
```js
import {setUseStrictIteration} from "immer"

// Default: false (loose iteration for better performance)
setUseStrictIteration(false)
```
Only enable strict iteration if you specifically need to track symbol or non-enumerable properties.
When adding a large data set to the state tree in an Immer producer (for example, data received from a JSON endpoint), it is worth calling `freeze(json)` on the root of the data being added first, to shallowly freeze it. This allows Immer to add the new data to the tree faster, as it avoids the need to recursively scan and freeze the new data.
Realize that Immer is opt-in everywhere, so it is perfectly fine to hand-write super performance-critical reducers and use Immer for all the normal ones. Even from within a producer you can opt out of Immer for certain parts of your logic, by using the utilities `original` or `current` and performing some of your operations on plain JavaScript objects.
Immer will recursively convert anything you read in a draft into a draft as well. If you have expensive, side-effect-free operations on a draft that involve a lot of reading, for example finding an index using `findIndex` in a very large array, you can speed this up by first doing the search, and only calling `produce` once you know the index. This prevents Immer from turning everything that was searched into a draft. Alternatively, perform the search on the original value of a draft by using `original(someDraft)`, which boils down to the same thing.
Always try to pull `produce` 'up'. For example, `for (let x of y) produce(base, d => d.push(x))` is exponentially slower than `produce(base, d => { for (let x of y) d.push(x) })`.