# Community Nodes
These are nodes that have been developed by the community, for the community. If you're not sure what a node is, you can learn more about nodes here.
If you'd like to submit a node for the community, please refer to the node creation overview.
To use a node, add the node to the nodes folder found in your InvokeAI install location.
The suggested method is to use git clone to clone the repository the node is found in. This allows for easy updates of the node in the future.
If you'd prefer, you can also just download the whole node folder from the linked repository and add it to the nodes folder.
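The install steps above can be sketched as shell commands. The install location varies between setups, so `INVOKEAI_ROOT` below is an assumption to adjust; the clone command is shown commented with one repository from this list as an example:

```shell
# Sketch only; set INVOKEAI_ROOT to your actual InvokeAI install location.
INVOKEAI_ROOT="${INVOKEAI_ROOT:-$HOME/invokeai}"
mkdir -p "$INVOKEAI_ROOT/nodes"
cd "$INVOKEAI_ROOT/nodes"
# Clone the repository containing the node, e.g.:
#   git clone https://github.com/JPPhoto/film-grain-node
# Later, update the node by pulling inside its folder:
#   cd film-grain-node && git pull
ls -d "$INVOKEAI_ROOT/nodes"
```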
To use a community workflow, download the .json node graph file and load it into InvokeAI via the Load Workflow button in the Workflow Editor.
Description: A set of nodes to perform anamorphic modifications to images, like lens blur, streaks, spherical distortion, and vignetting.
Node Link: https://github.com/JPPhoto/anamorphic-tools
Description: A set of nodes for linked adapters (ControlNet, IP-Adapter & T2I-Adapter). This allows multiple adapters to be chained together without using a Collect node, which means they can be used inside an Iterate node without collection issues on every iteration.
- ControlNet-Linked - Collects ControlNet info to pass to other nodes.
- IP-Adapter-Linked - Collects IP-Adapter info to pass to other nodes.
- T2I-Adapter-Linked - Collects T2I-Adapter info to pass to other nodes.

Note: These are inherited from the core nodes, so any update to the core nodes should be reflected in these.
Node Link: https://github.com/skunkworxdark/adapters-linked-nodes
Description: Generate autostereogram images from a depth map. This is not a very practical node, but more of a '90s nostalgic indulgence, as I used to love these images as a kid.
Node Link: https://github.com/skunkworxdark/autostereogram_nodes
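The underlying technique is the classic random-dot autostereogram: repeat a noise strip across each row, shifting pixels horizontally in proportion to depth. A minimal numpy sketch of the idea (not the node's actual code; the parameter names are assumptions):

```python
import numpy as np

def autostereogram(depth, strip=64, max_shift=16, seed=0):
    """Random-dot autostereogram from a uint8 depth map (0 = far, 255 = near)."""
    h, w = depth.shape
    rng = np.random.default_rng(seed)
    out = rng.integers(0, 256, size=(h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(strip, w):
            # Nearer pixels get a larger horizontal shift, creating parallax.
            shift = int(depth[y, x]) * max_shift // 255
            out[y, x] = out[y, x - strip + shift]
    return out

depth = np.zeros((32, 256), dtype=np.uint8)
depth[8:24, 96:160] = 255  # a raised rectangle in the middle
img = autostereogram(depth)
```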
Description: This node takes in a collection of images of the same size and averages them as output. It converts everything to RGB mode first.
Node Link: https://github.com/JPPhoto/average-images-node
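The averaging step itself is simple enough to sketch in numpy (the node works on InvokeAI image objects; this shows only the core arithmetic, under my own assumptions about the approach):

```python
import numpy as np

def average_images(arrays):
    """Average a collection of same-sized RGB images given as uint8 arrays."""
    if not arrays:
        raise ValueError("need at least one image")
    # Accumulate in float to avoid uint8 overflow, then round back to uint8.
    acc = np.zeros_like(arrays[0], dtype=np.float64)
    for a in arrays:
        if a.shape != arrays[0].shape:
            raise ValueError("all images must be the same size")
        acc += a
    return np.clip(np.round(acc / len(arrays)), 0, 255).astype(np.uint8)

a = np.full((2, 2, 3), 100, dtype=np.uint8)
b = np.full((2, 2, 3), 200, dtype=np.uint8)
avg = average_images([a, b])  # every pixel becomes 150
```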
Description: Remove image backgrounds using BiRefNet (Bilateral Reference Network), a high-quality segmentation model. Supports multiple model variants including standard, high-resolution, matting, portrait, and specialized models for different use cases.
Node Link: https://github.com/veeliks/invoke_birefnet
Description: Removes residual artifacts after an image is separated from its background.
Node Link: https://github.com/VeyDlin/clean-artifact-after-cut-node
Description: Generates a mask for images based on a closely matching color, useful for color-based selections.
Node Link: https://github.com/VeyDlin/close-color-mask-node
Description: Employs a U2NET neural network trained for the segmentation of clothing items in images.
Node Link: https://github.com/VeyDlin/clothing-mask-node
Description: Enhances local image contrast using adaptive histogram equalization with contrast limiting.
Node Link: https://github.com/VeyDlin/clahe-node
Description: Adjust an image's curve based on a user-defined string.
Node Link: https://github.com/JPPhoto/curves-node
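A curve adjustment of this kind is typically a per-pixel lookup table built from control points. The node's actual string format isn't documented here, so this sketch assumes hypothetical "x:y" pairs:

```python
import numpy as np

def apply_curve(img, curve_str):
    """Apply a tone curve given as 'x1:y1,x2:y2,...' control points in 0-255."""
    pts = sorted(
        (float(x), float(y))
        for x, y in (p.split(":") for p in curve_str.split(","))
    )
    xs, ys = zip(*pts)
    # Build a 256-entry lookup table by linear interpolation between points.
    lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
    return lut[img]

img = np.array([[0, 128, 255]], dtype=np.uint8)
out = apply_curve(img, "0:0,128:64,255:255")  # darkens the midtones
```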
Description: Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
Node Link: https://github.com/dwringer/depth-from-obj-node
Description: A single node that can enhance the detail in an image. Increase or decrease detail using a guided filter (as opposed to the typical Gaussian blur used by most sharpening filters). Based on the Enhance Detail ComfyUI node: https://github.com/spacepxl/ComfyUI-Image-Filters
Node Link: https://github.com/skunkworxdark/enhance-detail-node
Description: This node adds a film grain effect to the input image based on the weights, seeds, and blur radii parameters. It works with RGB input images only.
Node Link: https://github.com/JPPhoto/film-grain-node
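As a rough sketch of the technique (not the node's actual parameters or code), film grain can be approximated by adding seeded noise, optionally blurred, weighted into the image:

```python
import numpy as np

def add_film_grain(img, weight=0.15, seed=0, blur_radius=1):
    """Add monochrome grain to an RGB uint8 image. blur_radius=0 disables blur."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    grain = rng.standard_normal((h, w))
    if blur_radius > 0:
        # Cheap separable box blur to soften the grain.
        k = 2 * blur_radius + 1
        kernel = np.ones(k) / k
        grain = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, grain)
        grain = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, grain)
    # Same noise on all channels, scaled by weight, then clipped back to uint8.
    noisy = img.astype(np.float64) + weight * 255 * grain[..., None]
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((16, 16, 3), 128, dtype=np.uint8)
grainy = add_film_grain(img)
```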
Description: This node will flip an openpose image horizontally, recoloring it to make sure that it isn't facing the wrong direction. Note that it does not work with openpose hands.
Node Link: https://github.com/JPPhoto/flip-pose-node
Description: This node returns an ideal size to use for the first stage of a Flux image generation pipeline. Generating at the right size helps limit duplication and odd subject placement.
Node Link: https://github.com/JPPhoto/flux-ideal-size
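The node's actual sizing rule isn't documented here, but the general idea can be sketched: pick dimensions that match the target aspect ratio with an area near a model-friendly base resolution, snapped to a pixel multiple (the 1024 base and multiple-of-16 snap below are both assumptions):

```python
import math

def flux_ideal_size(target_w, target_h, base=1024, multiple=16):
    """First-stage size: keep aspect ratio, area ~= base*base, dims % multiple == 0."""
    aspect = target_w / target_h
    w = math.sqrt(base * base * aspect)
    h = w / aspect
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(w), snap(h)

size = flux_ideal_size(1920, 1080)  # a 16:9 first-stage resolution
```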
Description: This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no nonterminal terms remain in the string.
This includes 3 Nodes:
Node Link: https://github.com/dwringer/generative-grammar-prompt-nodes
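The recursive expansion described above can be sketched like this, with a hypothetical rule format (the node loads its actual rules from custom files):

```python
import random
import re

def expand(template, rules, rng):
    """Recursively replace {nonterminal} tokens until none remain."""
    pattern = re.compile(r"\{(\w+)\}")
    while True:
        m = pattern.search(template)
        if m is None:
            return template
        # Substitute a random production for the first nonterminal found.
        choice = rng.choice(rules[m.group(1)])
        template = template[:m.start()] + choice + template[m.end():]

rules = {
    "prompt": ["a {adj} {noun} in a {place}"],
    "adj": ["misty", "gilded"],
    "noun": ["castle", "harbor"],
    "place": ["forest", "valley"],
}
rng = random.Random(42)
prompt = expand("{prompt}", rules, rng)
```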
Description: A node for InvokeAI that utilizes the GPT-2 language model to generate random prompts based on a provided seed and context.
Node Link: https://github.com/mickr777/GPT2RandomPromptMaker
Example generated prompt: "An enchanted weapon will be usable by any character regardless of their alignment."
Description: Two nodes: one turns a grid image into an image collection, and the other turns an image collection into a GIF.
Node Link: https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/GridToGif.py
Example Node Graph: https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/Grid%20to%20Gif%20Example%20Workflow.json
Description: Halftone converts the source image to grayscale and then performs halftoning. CMYK Halftone converts the image to CMYK and applies a per-channel halftoning to make the source image look like a magazine or newspaper. For both nodes, you can specify angles and halftone dot spacing.
Node Link: https://github.com/JPPhoto/halftone-node
Example images in the repository show an input image, the Halftone output, and the CMYK Halftone output.
Description: Hand Refiner takes in your image and automatically generates a fixed depth map for the hands, along with a mask of the hands region, that will conveniently allow you to use them with ControlNet to fix the wonky hands generated by Stable Diffusion.
Node Link: https://github.com/blessedcoolant/invoke_meshgraphormer
Description: This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate images centered on different parts of the seamless tiling.
This includes 15 Nodes:
Node Link: https://github.com/dwringer/composition-nodes
Description: Identifies and extracts the dominant color from an image using k-means clustering.
Node Link: https://github.com/VeyDlin/image-dominant-color-node
Description: Export images in multiple formats (AVIF, JPEG, PNG, TIFF, WebP) with format-specific compression and quality options.
Node Link: https://github.com/veeliks/invoke_image_export
Nodes:
Description: A group of nodes to convert an input image into an ASCII/Unicode art image.
Node Link: https://github.com/mickr777/imagetoasciiimage
Description: This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
Node Link: https://github.com/JPPhoto/image-picker-node
Description: Provides various image resizing options such as fill, stretch, fit, center, and crop.
Node Link: https://github.com/VeyDlin/image-resize-plus-node
Description: This node uses a small (~2.4 MB) model to upscale the latents used in a Stable Diffusion 1.5 or Stable Diffusion XL image generation, rather than the typical interpolation method, avoiding the traditional downsides of the latent upscale technique.
Node Link: https://github.com/gogurtenjoyer/latent-upscale
Description: Video frame image provider and indexer/video creation nodes, for hooking up to iterators, ranges, ControlNets, and the like for InvokeAI node experimentation. Think animation plus ControlNet outputs.
Node Link: https://github.com/helix4u/load_video_frame
Description: Create compelling 3D stereo images from 2D originals.
Node Link: https://gitlab.com/srcrr/shift3d/-/raw/main/make3d.py
Example Node Graph: https://gitlab.com/srcrr/shift3d/-/raw/main/example-workflow.json?ref_type=heads&inline=false
Description: Offers logical operations (OR, SUB, AND) for combining and manipulating image masks.
Node Link: https://github.com/VeyDlin/mask-operations-node
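These operations correspond to standard boolean set operations on binary masks. A numpy sketch of what OR, AND, and SUB mean here (my interpretation of the names, not the node's code):

```python
import numpy as np

def mask_or(a, b):   # union: pixels selected in either mask
    return a | b

def mask_and(a, b):  # intersection: pixels selected in both masks
    return a & b

def mask_sub(a, b):  # subtraction: pixels in a but not in b
    return a & ~b

a = np.array([[True, True, False]])
b = np.array([[True, False, True]])
```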
Description: An InvokeAI node to match a histogram from one image to another. This is a bit like the Color Correct node in the main InvokeAI, but it works in the YCbCr colorspace and can handle images of different sizes. It also does not require a mask input.
A good use case for this node is to normalize the colors of an image that has been through the tiled scaling workflow of my XYGrid Nodes.
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
Node Link: https://github.com/skunkworxdark/match_histogram
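Histogram matching itself can be sketched per channel with numpy quantile mapping (the node additionally works in YCbCr and handles size differences; this shows only the core idea):

```python
import numpy as np

def match_channel(source, reference):
    """Map source values so their distribution matches the reference channel."""
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    # Normalized cumulative distributions of both channels.
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, find the reference value at the same quantile.
    mapped = np.interp(s_cdf, r_cdf, r_values)
    return mapped[np.searchsorted(s_values, source.ravel())].reshape(source.shape)

src = np.array([[0, 64], [128, 255]], dtype=np.uint8)
ref = np.array([[50, 100], [150, 200]], dtype=np.uint8)
out = match_channel(src, ref)
```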
Description: A set of nodes for Metadata. Collect Metadata from within an iterate node & extract metadata from an image.
- Metadata Item Linked - Allows collecting of metadata while within an iterate node, with no need for a collect node or conversion to a metadata node
- Metadata From Image - Provides metadata from an image
- Metadata To String - Extracts a String value of a label from metadata
- Metadata To Integer - Extracts an Integer value of a label from metadata
- Metadata To Float - Extracts a Float value of a label from metadata
- Metadata To Scheduler - Extracts a Scheduler value of a label from metadata
- Metadata To Bool - Extracts Bool types from metadata
- Metadata To Model - Extracts model types from metadata
- Metadata To SDXL Model - Extracts SDXL model types from metadata
- Metadata To LoRAs - Extracts LoRAs from metadata
- Metadata To SDXL LoRAs - Extracts SDXL LoRAs from metadata
- Metadata To ControlNets - Extracts ControlNets from metadata
- Metadata To IP-Adapters - Extracts IP-Adapters from metadata
- Metadata To T2I-Adapters - Extracts T2I-Adapters from metadata
- Denoise Latents + Metadata - An inherited version of the existing Denoise Latents node, but with a metadata input and output

Node Link: https://github.com/skunkworxdark/metadata-linked-nodes
Description: Creates a negative version of an image, effective for visual effects and mask inversion.
Node Link: https://github.com/VeyDlin/negative-image-node
Description: Nightmare Prompt Generator - Uses a local text generation model to create unique, imaginative (but usually nightmarish) prompts for InvokeAI. By default, it allows you to choose from some gpt-neo models I finetuned on over 2500 of my own InvokeAI prompts in Compel format, but you're able to add your own as well. Offers support for replacing any troublesome words with a random choice from a list you can also define.
Node Link: https://github.com/gogurtenjoyer/nightmare-promptgen
Description: Uses Ollama API to expand text prompts for text-to-image generation using local LLMs. Works great for expanding basic prompts into detailed natural language prompts for Flux. Also provides a toggle to unload the LLM model immediately after expanding, to free up VRAM for Invoke to continue the image generation workflow.
Node Link: https://github.com/Jonseed/Ollama-Node
Example Node Graph: https://github.com/Jonseed/Ollama-Node/blob/main/Ollama-Node-Flux-example.json
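Ollama's HTTP API (served at http://localhost:11434 by default) accepts a JSON body on /api/generate. A sketch of building such a request for prompt expansion; the wrapper text and default model name here are assumptions, not the node's actual values:

```python
import json

def build_expand_request(prompt, model="llama3", keep_alive=0):
    """Build an Ollama /api/generate request body for prompt expansion.

    keep_alive=0 asks Ollama to unload the model right after responding,
    freeing VRAM for the rest of the image generation workflow.
    """
    return {
        "model": model,
        "prompt": f"Expand this into a detailed image prompt: {prompt}",
        "stream": False,
        "keep_alive": keep_alive,
    }

body = json.dumps(build_expand_request("a lighthouse at dusk"))
# To send it: POST body to http://localhost:11434/api/generate
```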
Description: An extensive suite of auto prompt generation and prompt helper nodes. Get creative with the best prompt generator in the world.
The main node generates interesting prompts based on a set of parameters. There are also some additional nodes such as Auto Negative Prompt, One Button Artify, Create Prompt Variant and other cool prompt toys to play around with.
Node Link: https://github.com/AIrjen/OneButtonPrompt_X_InvokeAI
Nodes:
Description: Asks a local LLM running in Oobabooga's Text-Generation-Webui to write a prompt based on the user input.
Link: https://github.com/sammyf/oobabooga-node
Example:
"describe a new mystical creature in its natural environment"
can return
"The mystical creature I am describing to you is called the "Glimmerwing". It is a majestic, iridescent being that inhabits the depths of the most enchanted forests and glimmering lakes. Its body is covered in shimmering scales that reflect every color of the rainbow, and it has delicate, translucent wings that sparkle like diamonds in the sunlight. The Glimmerwing's home is a crystal-clear lake, surrounded by towering trees with leaves that shimmer like jewels. In this serene environment, the Glimmerwing spends its days swimming gracefully through the water, chasing schools of glittering fish and playing with the gentle ripples of the lake's surface. As the sun sets, the Glimmerwing perches on a branch of one of the trees, spreading its wings to catch the last rays of light. The creature's scales glow softly, casting a rainbow of colors across the forest floor. The Glimmerwing sings a haunting melody, its voice echoing through the stillness of the night air. Its song is said to have the power to heal the sick and bring peace to troubled souls. Those who are lucky enough to hear the Glimmerwing's song are forever changed by its beauty and grace."
Requirement: a Text-Generation-Webui instance (might work remotely too, but I never tried it) and, obviously, InvokeAI 3.x.
Note: This node works best with SDXL models, especially as the style can be described independently of the LLM's output.
Description: A set of InvokeAI nodes that add general prompt (string) manipulation tools. Designed to accompany the Prompts From File node and other prompt generation nodes.
- Prompt To File - Saves a prompt or collection of prompts to a file, one per line. There is an append/overwrite option.
- PTFields Collect - Converts image generation fields into a JSON-format string that can be passed to Prompt To File.
- PTFields Expand - Takes a JSON string and converts it to individual generation parameters. This can be fed from the Prompt To File node.
- Prompt Strength - Formats a prompt with strength, like the weighted format of compel.
- Prompt Strength Combine - Combines weighted prompts for .and()/.blend().
- CSV To Index String - Gets a string from a CSV by index. Includes a random index option.

The following nodes are now included in v3.2 of Invoke and are no longer in this set of tools:

- Prompt Join -> String Join
- Prompt Join Three -> String Join Three
- Prompt Replace -> String Replace
- Prompt Split Neg -> String Split Neg

See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
Node Link: https://github.com/skunkworxdark/Prompt-tools-nodes
Description: This is a pack of nodes to interoperate with other services, be they public websites or bespoke local servers. The pack consists of these nodes:
Node Link: https://github.com/fieldOfView/InvokeAI-remote_image
Description: Implements one-click background removal with BriaAI's new version 1.4 model, which seems to produce better results than any previous background removal tool.
Node Link: https://github.com/blessedcoolant/invoke_bria_rmbg
Description: An integration of the rembg package to remove backgrounds from images using multiple U2NET models.
Node Link: https://github.com/VeyDlin/remove-background-node
Description: Retroize is a collection of nodes for InvokeAI to "Retroize" images. Any image can be given a fresh coat of retro paint with these nodes, either from your gallery or from within the graph itself. It includes nodes to pixelize, quantize, palettize, and ditherize images; as well as to retrieve palettes from existing images.
Node Link: https://github.com/Ar7ific1al/invokeai-retroizeinode/
Description: A set of custom nodes for InvokeAI to create cross-view or parallel-view stereograms. Stereograms are 2D images that, when viewed properly, reveal a 3D scene. Check out r/crossview for tutorials.
Node Link: https://github.com/simonfuhrmann/invokeai-stereo
An example workflow and output images are available in the repository.
Description: Detects skin in images based on predefined color thresholds.
Node Link: https://github.com/VeyDlin/simple-skin-detection-node
Description: This is a set of nodes for calculating the necessary size increments for upscaling workflows. Use the Final Size & Orientation node to enter your full-size dimensions and orientation (portrait/landscape/random), then plug that and your initial generation dimensions into the Ideal Size Stepper to get 1, 2, or 3 intermediate pairs of dimensions for upscaling. Note that this does not output the initial or full-size dimensions: the 1, 2, or 3 outputs of this node are only the intermediate sizes.
A third node is included, Random Switch (Integers), which is just a generic version of Final Size with no orientation selection.
Node Link: https://github.com/dwringer/size-stepper-nodes
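One common way to compute such intermediate steps is geometric interpolation between the initial and final dimensions, so each upscale ratio stays roughly constant. A sketch of that idea (the node's actual stepping rule may differ; the multiple-of-8 snap is an assumption):

```python
def intermediate_sizes(initial, final, steps, multiple=8):
    """Return `steps` intermediate (w, h) pairs between initial and final sizes."""
    def snap(v):  # round to the nearest multiple of `multiple`
        return max(multiple, int(round(v / multiple)) * multiple)

    out = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        # Geometric interpolation: w(t) = w0 * (w1/w0)**t, likewise for h.
        w = initial[0] * (final[0] / initial[0]) ** t
        h = initial[1] * (final[1] / initial[1]) ** t
        out.append((snap(w), snap(h)))
    return out

sizes = intermediate_sizes((512, 512), (2048, 2048), steps=2)
```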
Description: A text-font-to-text-image node for InvokeAI. It downloads a font to use (or, if the font is already in the cache, uses it from there). The text is always resized to the image size, but you can control that with padding. An optional second line is supported.
Node Link: https://github.com/mickr777/textfontimage
Output examples, including results after using the depth ControlNet, are shown in the repository.
Description: This node generates masks for highlights, midtones, and shadows given an input image. You can optionally specify a blur for the lookup table used in making those masks from the source image.
Node Link: https://github.com/JPPhoto/thresholding-node
Examples in the repository show an input image and the resulting highlights/midtones/shadows masks, both with and without LUT blur enabled.
Description: Applies an unsharp mask filter to an image, preserving its alpha channel in the process.
Node Link: https://github.com/JPPhoto/unsharp-mask-node
Description: These nodes add the following to InvokeAI:

- Images To Grids - Combines multiple images into a grid of images
- XYImage To Grid - Takes X & Y params and creates a labeled image grid
- XYImage Tiles - Super-resolution (embiggen) style tiled resizing
- Image To XYImages - Takes an image and cuts it up into a number of columns and rows
- XYImage collections

See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/README.md
Node Link: https://github.com/skunkworxdark/XYGrid_nodes
Description: This node allows you to do super cool things with InvokeAI.
Node Link: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/app/invocations/prompt.py
Example Workflow: https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json
The nodes linked here have been developed and contributed by members of the InvokeAI community. While we strive to ensure the quality and safety of these contributions, we do not guarantee the reliability or security of the nodes. If you have issues or concerns with any of these nodes, please raise them on GitHub or post in the InvokeAI Discord.