Quickly Change Styles, Clothes, and More with Precision in ComfyUI

πŸ€– AI Video Summary: The video demonstrates a workflow that combines an IP adapter with a segmentation model to apply a Hawaiian shirt style to a selected part of an image. It highlights fine-tuned adjustments and batch processing, walks step by step through setting up and running the required custom nodes and models, and notes the workflow’s VRAM requirements and the limits of style-transfer consistency.

πŸ‘‰ ComfyUI Workflow Download Link: Changing Styles with IPAdapter and Grounding Dino

When we combine an IP adapter with a segmentation model, we can make fine-tuned adjustments to specific areas of an image. These adjustments can look significantly better than those achieved with traditional inpainting coupled with ControlNet.

The workflow featured in the video is broken into three components:

  • Basic workflow (loading checkpoint, prompts, KSampler, etc.)
  • IP Adapter nodes
  • Segmentation nodes
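The summary above mentions batch processing. As a minimal sketch of how you might drive that programmatically, assuming ComfyUI is running on its default local port and you exported the workflow with β€œSave (API Format)”, you can queue the same graph repeatedly through ComfyUI’s /prompt HTTP endpoint. The node ID and file names below are placeholders for your own graph:

```python
# Minimal sketch: batch-queueing a workflow through ComfyUI's /prompt endpoint.
# Assumptions: ComfyUI runs locally on the default port 8188, the workflow was
# exported via "Save (API Format)", and node ID "10" is your Load Image node.
import json
import urllib.request

with open("style_transfer_workflow_api.json") as f:
    workflow = json.load(f)

for image_name in ["photo_01.png", "photo_02.png", "photo_03.png"]:
    workflow["10"]["inputs"]["image"] = image_name  # hypothetical node ID
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # response includes the queued prompt ID
```

Each POST returns a prompt ID you can use to track that job in ComfyUI’s queue.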

IP Adapter Nodes

We’re using four different nodes in this group: Load Image, IPAdapter Unified Loader, Load CLIP Vision, and IPAdapter Advanced.

The overall goal here is to capture the style of the reference image and pass that information along to the model, so the output accurately reflects what the reference image is about and applies its style accordingly.

While the example in the video uses a Hawaiian shirt, the input image could be any image you want to extract a style from: other shirt or fabric styles, paintings, monograms, and so on.
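To make the IP adapter’s role concrete outside of ComfyUI, here is a minimal sketch of the same β€œstyle reference plus mask” idea using the diffusers library’s IP-Adapter support. The checkpoint ID, adapter weights, and file paths are assumptions for illustration; substitute any SD 1.5 inpainting checkpoint and your own images:

```python
# Hedged sketch: IP-Adapter style transfer restricted to a masked region,
# using diffusers. Model IDs and file paths are illustrative assumptions.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed SD 1.5 inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Attach IP-Adapter weights so the reference image can steer the style.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)

image = load_image("model_photo.png")     # image to edit (assumed path)
mask = load_image("shirt_mask.png")       # white = region to restyle (assumed path)
style = load_image("hawaiian_shirt.jpg")  # style reference (assumed path)

result = pipe(
    prompt="a shirt",
    image=image,
    mask_image=mask,
    ip_adapter_image=style,  # the reference image drives the style
).images[0]
result.save("restyled.png")
```

The set_ip_adapter_scale value plays a role similar to the weight on the IPAdapter Advanced node: higher values follow the style reference more closely, at the cost of prompt adherence.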

🧐 Power-up: You can learn more about IPAdapters in this in-depth guide provided by HuggingFace.

Segment Anything Nodes

Borrowing from the popular SD WebUI Segment Anything extension, the Segment Anything custom nodes for ComfyUI provided by storyicon let you enter a text prompt and, if a matching object is found, segment it from the image. Both extensions are based on Grounding Dino.

The video demonstrates how to connect a Load Image node along with the SAMModelLoader, GroundingDinoModelLoader, and GroundingDinoSAMSegment nodes.

One important variable to touch on that wasn’t covered in the video is the threshold value in the GroundingDinoSAMSegment node. Essentially, a lower value may select more of the image, whereas a higher value is more selective; the catch is that too high a value may result in nothing being selected at all.

Generally, the default value of 0.30 works well for most cases. Use the Convert Mask to Image node if you want to review the segmented area.
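To see what that threshold is doing under the hood, here is a minimal sketch of the same text-prompted detect-then-segment step using the transformers library rather than the ComfyUI nodes. The model IDs, file names, and prompt are assumptions, and the threshold keyword has been renamed between transformers versions, so treat this as illustrative:

```python
# Hedged sketch: text-prompted segmentation with Grounding DINO + SAM via
# transformers. Model IDs and file paths are illustrative assumptions.
import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
image = Image.open("model_photo.png").convert("RGB")  # assumed input image

# 1) Grounding DINO: find boxes matching a text prompt.
dino_id = "IDEA-Research/grounding-dino-tiny"
dino_proc = AutoProcessor.from_pretrained(dino_id)
dino = AutoModelForZeroShotObjectDetection.from_pretrained(dino_id).to(device)

inputs = dino_proc(images=image, text="a shirt.", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = dino(**inputs)

# box_threshold plays the same role as the node's threshold value: lower
# selects more candidates, higher is stricter. (Recent transformers versions
# may name this keyword "threshold" instead of "box_threshold".)
detections = dino_proc.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.30,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)[0]

if len(detections["boxes"]) == 0:
    raise SystemExit("Nothing detected: the threshold may be set too high.")

# 2) SAM: turn the detected boxes into pixel masks.
sam = SamModel.from_pretrained("facebook/sam-vit-base").to(device)
sam_proc = SamProcessor.from_pretrained("facebook/sam-vit-base")

sam_inputs = sam_proc(
    image, input_boxes=[detections["boxes"].tolist()], return_tensors="pt"
).to(device)
with torch.no_grad():
    sam_out = sam(**sam_inputs)

masks = sam_proc.image_processor.post_process_masks(
    sam_out.pred_masks.cpu(),
    sam_inputs["original_sizes"].cpu(),
    sam_inputs["reshaped_input_sizes"].cpu(),
)
```

The resulting mask is the programmatic equivalent of what Convert Mask to Image lets you preview inside the workflow.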

Important Notes

  • Inpainting Checkpoint: Using an inpainting model, preferably SDXL, will improve results.
  • VRAM Requirements: The segmentation models used in this workflow can be VRAM intensive, so ensure sufficient VRAM is available.
  • Style Transfer Limitations: Transfers are not perfect representations, nor will they place details in specific areas (e.g., a flower on the sleeve of the Hawaiian shirt reference may not appear on the sleeve of the model).
  • Limited by Segment Bounds: Only the segmented area will receive changes. If this method is used for changing outfits, attributes such as length and other details outside the segment will not be transferred.
  • Text Prompt: Simply describe the item that is receiving the style.

2 responses to “Quickly Change Styles, Clothes, and More with Precision in ComfyUI”

  1. Shawn

    Have any questions on this workflow? Drop it below πŸ‘‡

  2. ACrazy

    Thanks for the video and tutorial. I love the concept of this.

    Could you help with my issue, please? The output is completely ignoring the clothes.

    SETUP:
    – I drew a mask around the eyes of my model and bypassed the DINO nodes (since the mask didn’t get all of the required area).
    – For the clothes, I selected sunglasses (specifically these : https://encrypted-tbn0.gstatic.com/shopping?q=tbn:ANd9GcQjL_O72XNdc8Ir2jR1qIRpeS_C2rugy7998U5fzRl7peSXxpL1SgoMvr7h2ORbDp1ZKof3EHdyzH8oRWSoyk9jCxXBzkjr7ovhdcmGvuvd8KE-Qb9349t7).
    – For the checkpoint model, I selected EpicRealism V5 Inpainting.
    – All other settings are the same as yours.
