ComfyDeploy: How does ComfyUI-RMBG work in ComfyUI?
What is ComfyUI-RMBG?
A ComfyUI node for removing image backgrounds using RMBG-2.0
How to install it in ComfyDeploy?
Head over to the machine page:
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for `ComfyUI-RMBG` and select it
- Close the build step dialog, then click the "Save" button to rebuild the machine
ComfyUI-RMBG
A ComfyUI custom node designed for advanced image background removal and object segmentation, utilizing multiple models including RMBG-2.0, INSPYRENET, BEN, SAM, and GroundingDINO.
If this custom node helps you or you like my work, please give a ⭐ to this repo! It's the greatest encouragement for my efforts!
News & Updates
- 2025/01/05: Updated ComfyUI-RMBG to v1.5.0 with a new Fashion and Accessories Segment custom node (update.md)
  - Added a new custom node for fashion segmentation
- 2025/01/02: Updated ComfyUI-RMBG to v1.4.0 with a new Clothes Segment node (update.md)
  - Added intelligent clothes segmentation with 18 different categories
  - Supports multiple item selection and combined segmentation
  - Same parameter controls as the other RMBG nodes
- 2024/12/29: Updated ComfyUI-RMBG to v1.3.2 with improved background handling (update.md)
  - Enhanced background handling to output RGBA when "Alpha" is selected
  - Ensured RGB output for all other background color selections
- 2024/12/25: Updated ComfyUI-RMBG to v1.3.1 with bug fixes (update.md)
  - Fixed an issue with mask processing when the model returns a list of masks
  - Improved handling of image formats to prevent processing errors
- 2024/12/23: Updated ComfyUI-RMBG to v1.3.0 with a new Segment node (update.md)
  - Added text-prompted object segmentation
  - Supports both tag-style ("cat, dog") and natural-language ("a person wearing red jacket") prompts
  - Multiple models: SAM (vit_h/l/b) and GroundingDINO (SwinT/B); as always, model files are downloaded automatically the first time a specific model is used
  - This update requires installing requirements.txt
- 2024/12/12: Updated ComfyUI-RMBG to v1.2.2 (update.md)
- 2024/12/02: Updated ComfyUI-RMBG to v1.2.1 (update.md)
- 2024/11/29: Updated ComfyUI-RMBG to v1.2.0 (update.md)
- 2024/11/21: Updated ComfyUI-RMBG to v1.1.0 (update.md)
Features
- Background Removal (RMBG Node)
  - Multiple models: RMBG-2.0, INSPYRENET, BEN
  - Various background options
  - Batch processing support
- Object Segmentation (Segment Node)
  - Text-prompted object detection
  - Supports both tag-style and natural-language inputs
  - High-precision segmentation with SAM
  - Flexible parameter controls
Installation
Method 1. Install via ComfyUI-Manager: search for `ComfyUI-RMBG` and install it.
Then install requirements.txt in the ComfyUI-RMBG folder:

```
./ComfyUI/python_embeded/python -m pip install -r requirements.txt
```

Method 2. Clone this repository into your ComfyUI custom_nodes folder:

```
cd ComfyUI/custom_nodes
git clone https://github.com/1038lab/ComfyUI-RMBG
```

Then install requirements.txt in the ComfyUI-RMBG folder:

```
./ComfyUI/python_embeded/python -m pip install -r requirements.txt
```
Method 3. Manually download the models:
- The models are downloaded automatically to `ComfyUI/models/RMBG/` the first time the custom node is used.
- To install a model manually, download its files from the corresponding link and place them in the matching folder:
  - RMBG-2.0: `/ComfyUI/models/RMBG/RMBG-2.0`
  - INSPYRENET: `/ComfyUI/models/RMBG/INSPYRENET`
  - BEN: `/ComfyUI/models/RMBG/BEN`
  - SAM: `/ComfyUI/models/SAM`
  - GroundingDINO: `/ComfyUI/models/grounding-dino`
  - Clothes Segment: `/ComfyUI/models/RMBG/segformer_clothes`
  - Fashion Segment: `/ComfyUI/models/RMBG/segformer_fashion`
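As a sketch of how the automatic download could be reproduced manually, the folder layout above can be resolved in a few lines of Python, and a model fetched with huggingface-hub (installed via requirements.txt). The `briaai/RMBG-2.0` repo ID comes from the Credits section below; the helper names here are illustrative, not the node's actual download code.

```python
# Sketch: resolve the model folders listed above and (optionally) fetch
# RMBG-2.0 from the Hugging Face Hub. Helper names are illustrative.
from pathlib import Path

# Folder per model, exactly as listed in the manual-download steps above.
MODEL_DIRS = {
    "RMBG-2.0": "models/RMBG/RMBG-2.0",
    "INSPYRENET": "models/RMBG/INSPYRENET",
    "BEN": "models/RMBG/BEN",
    "SAM": "models/SAM",
    "GroundingDINO": "models/grounding-dino",
}

def target_dir(comfyui_root: str, model: str) -> Path:
    """Resolve the folder a model's files should be placed in."""
    return Path(comfyui_root) / MODEL_DIRS[model]

def download_rmbg2(comfyui_root: str) -> Path:
    """Fetch briaai/RMBG-2.0 (repo ID from the Credits section) into place."""
    from huggingface_hub import snapshot_download  # from requirements.txt
    dest = target_dir(comfyui_root, "RMBG-2.0")
    dest.mkdir(parents=True, exist_ok=True)
    snapshot_download(repo_id="briaai/RMBG-2.0", local_dir=str(dest))
    return dest

print(target_dir("ComfyUI", "SAM"))
```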
Usage
RMBG Node
| Optional Settings | :memo: Description | :bulb: Tips |
|---|---|---|
| Sensitivity | Adjusts the strength of mask detection; higher values result in stricter detection. | Default is 0.5. Adjust based on image complexity; more complex images may require higher sensitivity. |
| Processing Resolution | Controls the processing resolution of the input image, affecting detail and memory usage. | Choose a value between 256 and 2048 (default 1024). Higher resolutions provide better detail but increase memory consumption. |
| Mask Blur | Controls the amount of blur applied to the mask edges, reducing jaggedness. | Default is 0. Try 1-5 for smoother edge effects. |
| Mask Offset | Expands or shrinks the mask boundary; positive values expand it, negative values shrink it. | Default is 0. Adjust per image, typically fine-tuning between -10 and 10. |
| Background | Choose the output background color. | Alpha (transparent background), Black, White, Green, Blue, Red |
| Invert Output | Flip the mask and image output. | Inverts both the image and mask outputs |
| Performance Optimization | Proper settings can enhance performance when processing multiple images. | If memory allows, consider increasing `process_res` and `mask_blur` for better results, but be mindful of memory usage. |
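As a rough illustration of what Mask Blur and Mask Offset do to a mask (a sketch of the idea with Pillow, not the node's actual implementation):

```python
# Illustration of the Mask Blur / Mask Offset settings using Pillow.
# This sketches the idea only; the node's internals may differ.
from PIL import Image, ImageFilter

def refine_mask(mask: Image.Image, mask_blur: int = 0, mask_offset: int = 0) -> Image.Image:
    """Post-process a grayscale ("L") mask.

    mask_blur > 0 softens jagged edges with a Gaussian blur.
    mask_offset > 0 expands the foreground boundary (MaxFilter);
    mask_offset < 0 shrinks it (MinFilter). Filter sizes must be odd.
    """
    out = mask
    if mask_offset > 0:
        out = out.filter(ImageFilter.MaxFilter(2 * mask_offset + 1))
    elif mask_offset < 0:
        out = out.filter(ImageFilter.MinFilter(2 * -mask_offset + 1))
    if mask_blur > 0:
        out = out.filter(ImageFilter.GaussianBlur(mask_blur))
    return out

# A 9x9 black mask with a single white pixel in the middle:
demo = Image.new("L", (9, 9), 0)
demo.putpixel((4, 4), 255)
grown = refine_mask(demo, mask_offset=1)  # the white pixel grows to a 3x3 block
```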
Basic Usage
- Load the `RMBG (Remove Background)` node from the `🧪AILab/🧽RMBG` category
- Connect an image to the input
- Select a model from the dropdown menu
- Adjust the parameters as needed (optional)
- Get two outputs:
  - IMAGE: Processed image with a transparent, black, white, green, blue, or red background
  - MASK: Binary mask of the foreground
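The two outputs relate in a simple way: the IMAGE is the input composited against the chosen background using the MASK as alpha. A minimal Pillow sketch of that relationship (illustrative, not the node's code):

```python
# Sketch: composite a foreground over the chosen background color using the
# mask, mirroring the IMAGE/MASK outputs above. Illustrative only.
from PIL import Image

BACKGROUNDS = {"Alpha": None, "Black": (0, 0, 0), "White": (255, 255, 255),
               "Green": (0, 255, 0), "Blue": (0, 0, 255), "Red": (255, 0, 0)}

def apply_background(image: Image.Image, mask: Image.Image,
                     background: str = "Alpha") -> Image.Image:
    """Return RGBA when background is "Alpha", otherwise RGB (as in v1.3.2)."""
    color = BACKGROUNDS[background]
    if color is None:                    # transparent background
        out = image.convert("RGBA")
        out.putalpha(mask.convert("L"))
        return out
    bg = Image.new("RGB", image.size, color)
    bg.paste(image.convert("RGB"), mask=mask.convert("L"))
    return bg

img = Image.new("RGB", (4, 4), (10, 20, 30))
msk = Image.new("L", (4, 4), 255)        # everything is foreground
print(apply_background(img, msk, "Alpha").mode)   # RGBA
print(apply_background(img, msk, "Green").mode)   # RGB
```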
Parameters
- `sensitivity`: Controls the background removal sensitivity (0.0-1.0)
- `process_res`: Processing resolution (512-2048, step 128)
- `mask_blur`: Blur amount for the mask (0-64)
- `mask_offset`: Adjust mask edges (-20 to 20)
- `background`: Choose output background color
- `invert_output`: Flip mask and image output
- `optimize`: Toggle model optimization
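One plausible way to picture `sensitivity` is as a threshold on the model's foreground-probability map; the mapping below is a hypothetical illustration, not the node's actual formula.

```python
# Hypothetical illustration of `sensitivity` as a probability threshold.
# The settings table says higher values give stricter detection; the
# exact mapping inside the node may differ.
import numpy as np

def binarize(prob_map: np.ndarray, sensitivity: float = 0.5) -> np.ndarray:
    """Keep only pixels whose foreground probability meets the threshold."""
    return (prob_map >= sensitivity).astype(np.uint8)

probs = np.array([[0.2, 0.6],
                  [0.8, 0.95]])
print(binarize(probs, 0.5))  # drops only the 0.2 pixel
print(binarize(probs, 0.9))  # stricter: only the 0.95 pixel survives
```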
Segment Node
- Load the `Segment (RMBG)` node from the `🧪AILab/🧽RMBG` category
- Connect an image to the input
- Enter a text prompt (tag-style or natural language)
- Select SAM and GroundingDINO models
- Adjust parameters as needed:
- Threshold: 0.25-0.35 for broad detection, 0.45-0.55 for precision
- Mask blur and offset for edge refinement
- Background color options
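The threshold guidance above can be pictured as filtering GroundingDINO-style detections by confidence, with tag-style prompts split into separate labels. A hypothetical sketch (function names and data structures are illustrative, not the node's internals):

```python
# Hypothetical sketch of prompt parsing and threshold filtering before
# SAM segments the detected regions. Structures are illustrative only.

def parse_prompt(prompt: str) -> list[str]:
    """Split a tag-style prompt into labels; natural language stays whole."""
    if "," in prompt:
        return [tag.strip() for tag in prompt.split(",") if tag.strip()]
    return [prompt.strip()]

def filter_detections(detections: list[dict], threshold: float) -> list[dict]:
    """Keep boxes whose confidence meets the threshold
    (0.25-0.35 for broad detection, 0.45-0.55 for precision)."""
    return [d for d in detections if d["score"] >= threshold]

dets = [{"label": "cat", "score": 0.30}, {"label": "dog", "score": 0.52}]
print(parse_prompt("cat, dog"))            # ['cat', 'dog']
print(len(filter_detections(dets, 0.45)))  # only the dog box remains
```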
RMBG-2.0
RMBG-2.0 is developed by BRIA AI and uses the BiRefNet architecture, which includes:
- High accuracy in complex environments
- Precise edge detection and preservation
- Excellent handling of fine details
- Support for multiple objects in a single image
Example images in the repository show an output comparison, output with a background, and batch output for video.

The model is trained on a diverse dataset of over 15,000 high-quality images, ensuring:
- Balanced representation across different image types
- High accuracy in various scenarios
- Robust performance with complex backgrounds
INSPYRENET
INSPYRENET is specialized in human portrait segmentation, offering:
- Fast processing speed
- Good edge detection capability
- Ideal for portrait photos and human subjects
BEN
BEN is robust on various image types, offering:
- Good balance between speed and accuracy
- Effective on both simple and complex scenes
- Suitable for batch processing
SAM
SAM is a powerful model for object detection and segmentation, offering:
- High accuracy in complex environments
- Precise edge detection and preservation
- Excellent handling of fine details
- Support for multiple objects in a single image
GroundingDINO
GroundingDINO is a model for text-prompted object detection and segmentation, offering:
- High accuracy in complex environments
- Precise edge detection and preservation
- Excellent handling of fine details
- Support for multiple objects in a single image
Requirements
- ComfyUI
- Python 3.10+
- Required packages (automatically installed):
- torch>=2.0.0
- torchvision>=0.15.0
- Pillow>=9.0.0
- numpy>=1.22.0
- huggingface-hub>=0.19.0
- tqdm>=4.65.0
- transformers>=4.35.0
- transparent-background>=1.2.4
Credits
- RMBG-2.0: https://huggingface.co/briaai/RMBG-2.0
- INSPYRENET: https://github.com/plemeri/InSPyReNet
- BEN: https://huggingface.co/PramaLLC/BEN
- SAM: https://huggingface.co/facebook/sam-vit-base
- GroundingDINO: https://github.com/IDEA-Research/GroundingDINO
- Clothes Segment: https://huggingface.co/mattmdjaga/segformer_b2_clothes
- Created by: 1038 Lab
License
GPL-3.0 License