# PEANuT
PEANuT is a parameter-efficient fine-tuning technique that introduces weight-aware neural tweakers to generate adapter updates from the frozen pretrained weights themselves. Instead of learning a purely linear low-rank update as in LoRA, PEANuT conditions the adapter transformation on the base weight, which makes the update rule more expressive while keeping the number of trainable parameters small.
PEANuT uses an input projection A, an output projection B, and optional intermediate residual encoder/decoder
pairs with non-linear activations. This makes it possible to model more complex update patterns than weight-agnostic
linear adapters while still remaining within the PEFT setting.
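The update rule described above can be sketched in a few lines. This is a minimal illustration, not PEANuT's actual parameterization: the shapes, the zero-initialization of `B`, and the choice of `tanh` as the non-linearity are assumptions made for the example, and `tweaker_update` is a hypothetical helper, not part of the PEFT API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4

# Frozen pretrained weight that the neural tweaker conditions on.
W = rng.standard_normal((d_out, d_in))

# Hypothetical tweaker parameters (names and init are illustrative):
# A projects the frozen weight into a rank-r space, B projects back.
A = rng.standard_normal((r, d_out)) * 0.01
B = np.zeros((d_out, r))  # zero-init so the initial update is zero

def tweaker_update(W, A, B):
    """Weight-aware update: a non-linear function of the frozen weight W,
    unlike LoRA's weight-agnostic linear update B @ A."""
    h = np.tanh(A @ W)  # (r, d_in): non-linear encoding of W
    return B @ h        # (d_out, d_in): generated update delta_W

delta_W = tweaker_update(W, A, B)
assert delta_W.shape == W.shape
# With zero-initialized B, the adapted weight starts equal to W.
assert np.allclose(W + delta_W, W)
```

Because the update is a non-linear function of `W` rather than a free low-rank matrix, the same parameter budget can express update patterns that depend on the structure of the pretrained weight itself.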
PEANuT currently has the following tradeoffs:

Pros:

- Only ~0.2M trainable parameters.

Cons:

- ΔW is explicitly constructed before being applied.

If these tradeoffs do not fit your use case, consider other PEFT methods such as LoRA.
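The second tradeoff can be made concrete with a small sketch. Methods like LoRA can apply their low-rank factors to the input without ever materializing the full ΔW matrix, whereas a method that generates ΔW from the frozen weight must build the full `(d_out, d_in)` matrix first. The shapes and variable names below are illustrative, not taken from the PEFT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4

W = rng.standard_normal((d_out, d_in))   # frozen base weight
x = rng.standard_normal(d_in)            # one input vector
A = rng.standard_normal((r, d_in))       # low-rank factors (LoRA-style)
B = rng.standard_normal((d_out, r))

# LoRA never materializes delta_W: it applies the factors to the input,
# costing only O(r * (d_in + d_out)) extra work per forward pass.
y_lazy = W @ x + B @ (A @ x)

# A method that explicitly constructs delta_W first pays O(d_out * d_in)
# memory for the full update matrix before applying it.
delta_W = B @ A                          # full (d_out, d_in) matrix
y_materialized = (W + delta_W) @ x

# Both paths compute the same output; the difference is memory/compute.
assert np.allclose(y_lazy, y_materialized)
```

For large layers this materialization cost is the main overhead to weigh against the added expressiveness of the weight-aware update.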
The abstract from the paper is:
Fine-tuning large pre-trained foundation models often yields excellent downstream performance but is prohibitively expensive when updating all parameters. Parameter-efficient fine-tuning (PEFT) methods such as LoRA alleviate this by introducing lightweight update modules, yet they commonly rely on weight-agnostic linear approximations, limiting their expressiveness. In this work, we propose PEANuT, a novel PEFT framework that introduces weight-aware neural tweakers, compact neural modules that generate task-adaptive updates conditioned on frozen pre-trained weights. PEANuT provides a flexible yet efficient way to capture complex update patterns without full model tuning. We theoretically show that PEANuT achieves equivalent or greater expressivity than existing linear PEFT methods with comparable or fewer parameters. Extensive experiments across four benchmarks with over twenty datasets demonstrate that PEANuT consistently outperforms strong baselines in both NLP and vision tasks, while maintaining low computational overhead.
## PeanutConfig

[[autodoc]] tuners.peanut.config.PeanutConfig

## PeanutModel

[[autodoc]] tuners.peanut.model.PeanutModel