ComfyUI
- See also Stable Diffusion usage notes
ComfyUI simple installation ∞
1. Download the latest ComfyUI at
https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z
2. Download and install 7zip
https://www.7-zip.org/
3. Download and install Python 3.10.10
https://www.python.org/downloads/release/python-31010/
Using the installer, have it add Python to the PATH
4. Unpack ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z to a directory of your choice.
5. Within that directory, read README_VERY_IMPORTANT.txt for any important updates.
6. Within that directory, run
update\update_comfyui_and_python_dependencies.bat
To make future updates, run
update\update_comfyui.bat
7. That update may suggest you upgrade “pip” and will give you the command to do so. Enclose the path to python.exe in double quotes. For example:
"C:\YOUR_INSTALLATION\python_embeded\python.exe" -m pip install --upgrade pip
You might also/later have to:
python.exe -m pip install --upgrade pip
For some additional tools, I also recommend:
pip install simpleeval
8. Put a .ckpt or .safetensors [1] checkpoint (model) in ComfyUI\models\checkpoints
- e.g. v1-5-pruned-emaonly.ckpt from huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt
9. To launch ComfyUI, run
- run_nvidia_gpu.bat if you have an Nvidia GPU, e.g. a GeForce 1080
- run_cpu.bat to use your CPU. This will be _very_ slow.
Your web browser should open, displaying the ComfyUI interface. The command prompt will also give you the URL to visit, and you can control-click it to launch your web browser.
ComfyUI-specific terms ∞
- node – A little window/widget
- group – A box/backdrop behind nodes
- TODO – the lines (links) between nodes
- For general Stable Diffusion terms, see Stable Diffusion usage notes
Newbie advice ∞
- drag, or middle-click-drag, an area of empty space to move the desktop.
- scroll wheel to zoom
- click to select nodes. shift-click or control-drag to select additional nodes.
- drag titles to move nodes. shift-drag titles to move multiple nodes you’ve selected.
- drag titles to move groups and any nodes therein.
- Use groups liberally
- The “reroute” node supports titles; right-click on one of the dots.
- ComfyUI-Bmad-DirtyUndoRedo is really useful for undoing with control-z and redoing with control-shift-z
- To simultaneously zoom out and pan, scrollwheel-back when hovering the mouse nearer to one of the sides. This is helpful for when you are in the middle of dragging something and run out of space to safely drop it.
- Do not drop a group near other nodes; it will pick the other nodes up.
- drag-and-drop a workflow .json or a ComfyUI-produced image into this space to recreate that workflow. It will overwrite your currently-viewed workflow.
- Don’t forget to save your workflow!
- If you are doing something, and a dialog appears behind a node, drag the background around to reveal it.
- If nothing happens when you click “Queue Prompt”, then change the seed. Caching is really good, and sometimes it’s too good. :)
Configuration ∞
On the right-hand Queue menu, click its top-right gear icon.
- [_] Prompt for filename when saving workflow
- [x] Save Menu Position
- [x] Invert Menu Scrolling
- Link Render Mode [Linear]
- click “close”
Move the menu up a bit, especially if you have the “Manager” plugin, so its button is always in view.
Hotkeys / Shortcuts ∞
https://blenderneko.github.io/ComfyUI-docs/Interface/Shortcuts/
control-enter | Queue up current graph for generation
control-shift-enter | Queue up current graph as first for generation
control-s | Save workflow
control-o | Open workflow (load)
control-a | Select all nodes
control-m | Mute/unmute selected nodes (bypass)
delete | Delete selected nodes
backspace | Delete selected nodes
control-delete | Delete the current graph (delete everything)
control-backspace | Delete the current graph (delete everything)
space | Move the canvas around when held and moving the cursor
control-leftclick | Add clicked node to selection
shift-leftclick | Add clicked node to selection
leftclick-drag | Move node
shift-leftclick-drag | Move multiple selected nodes at the same time
control-c | Copy selected nodes into clipboard
control-v | Paste clipboard nodes, severing connections
control-shift-v | Paste clipboard nodes, maintaining incoming connections
control-d | Load default graph
q | Toggle queue visibility
h | Toggle history visibility
r | Refresh graph (similar to the web browser F5)
left-doubleclick | When over the empty canvas (not on a node), open a search box to add a node
rightclick | When over the empty canvas (not on a node), open the node menu
Best practices ∞
- If your workflow requires a plugin, write a note.
- If you do something atypical, write a note.
- Tinker at low quality/resolution, and have it spit out images even if you think you’ll delete most of them. When you discover something you like, you can load its workflow, turn up the quality settings, and reproduce it.
- Remember to change the seed to fixed if you want to tinker with a slew of them.
- Don’t use batches, because in order to reproduce an image you will have to create the whole batch to get to that image.
- You can right-click a node and “Bypass”. This can be helpful to, for example, choose to not upscale your future queued items.
TODO – Using SDXL ∞
- Download sd_xl_base_1.0.safetensors
Usage and tips ∞
In the “Load LoRA” node, what’s the difference between strength_model and strength_clip? ∞
I have no idea. From what I’ve read:
- A LoRA patches two parts of the pipeline: the diffusion model (UNet) and the CLIP text encoder. strength_model and strength_clip weight those two patches separately, and tweaking them separately can give better results.
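For concreteness, here is a hedged sketch of how those two inputs appear in an API-format prompt fragment, assuming the usual reading that strength_model scales the LoRA’s patch to the diffusion model (UNet) and strength_clip scales its patch to the CLIP text encoder. The node ids and the LoRA filename are made up for illustration.

```python
# Hypothetical fragment of a ComfyUI API-format prompt (not authoritative).
lora_node = {
    "10": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["4", 0],   # MODEL output of a CheckpointLoaderSimple node "4"
            "clip": ["4", 1],    # CLIP output of the same checkpoint loader
            "lora_name": "example_lora.safetensors",  # hypothetical filename
            "strength_model": 1.0,  # weight of the patch applied to the diffusion model (UNet)
            "strength_clip": 1.0,   # weight of the patch applied to the CLIP text encoder
        },
    }
}
```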
Extracting a single image out of a batch workflow ∞
Problem: You used the batch feature to create a bunch of images. You like one of the images in the middle of them, but there is no seed number for an individual item like that. How do you re-create that image? How do you change its settings and adjust it?
Solution: Use the “Latent From Batch” node.
Empty Latent Image -> Latent From Batch -> sampler node (e.g. KSampler)
- batch_index is which number in the batch you want
- length is how many images after that batch number you want to retrieve.
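A rough sketch of that wiring in ComfyUI’s API/prompt format; the node ids are arbitrary, and the model, prompt, and sampler nodes are omitted. (As far as I can tell, batch_index counts from 0.)

```python
# Illustrative only: Empty Latent Image -> Latent From Batch, as API-format nodes.
latent_nodes = {
    "5": {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 512, "height": 512, "batch_size": 8},
    },
    "6": {
        "class_type": "LatentFromBatch",
        "inputs": {
            "samples": ["5", 0],  # LATENT output of the Empty Latent Image node
            "batch_index": 3,     # which image of the batch to keep (counts from 0, I believe)
            "length": 1,          # how many images from that index to keep
        },
    },
    # A KSampler node would then take ["6", 0] as its latent_image input.
}
```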
I know of no way to make things more complicated than this, for example to extract images 1,3,5-8 out of a batch.
In the future, I recommend you click [x] Extra options and use the “Batch count” feature, with a randomized seed. This will give you a bunch of one-off images each with their own seed and then you won’t have this batch problem in the future.
Workflows ∞
The boxes, lines, and all the settings which you work with.
By default, your workflow is saved into every image you create. You can drag-and-drop them back into your ComfyUI tab and overwrite what’s there. This will let you reproduce an image. This is extremely useful for adjusting the settings of an image you particularly like, such as upscaling an image you love.
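As a quick sanity check that an image really carries its workflow: the graph is normally stored in the PNG’s text metadata (ComfyUI writes “prompt” and “workflow” text chunks when saving, unless metadata is disabled). A minimal sketch with Pillow, using a hypothetical output filename:

```python
# Inspect the metadata embedded in a ComfyUI-produced PNG.
from PIL import Image

img = Image.open("ComfyUI_00001_.png")   # hypothetical ComfyUI output file
for key, value in img.text.items():      # PNG text chunks
    print(key, "->", value[:120], "...") # "workflow"/"prompt" hold the graph as JSON
```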
Refiner workflow ∞
A refiner workflow is a two or three-stage process which generates simple previews then brings them into a polishing pass.
I’ve heard this helps eliminate a lot of the ugly mutations people make negative prompt poetry to avoid. It’s as though the second stage uses the first as an img2img.
This is something that, as of this writing, was not possible with AUTOMATIC1111, so these notes are here for ComfyUI.
- The checkpoint file sd_xl_refiner_1.0.safetensors was downloaded from huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0
- Place the file as ComfyUI\models\checkpoints\SDXL\sd_xl_refiner_1.0.safetensors
–
- You can have a second model used as a final refining model.
  - refiner -> model
- You can have a second model used as both a very rough-quality precursor model and a final refining model.
  - refiner -> model -> refiner
–
- Your first (refiner) steps value should be extremely low, so as to make just a sketch / hint to build from.
- Your second (base) steps value should be fairly low. Don’t do all the work before the refiner.
  - These steps are added on to the refiner initialization steps.
- Remember that your third (refiner) steps value is added on to the refiner initialization and base steps, so be careful to not overdo it.
–
- Fiddle with steps and start_at_step
- Leave end_at_step alone
- Fix the noise_seed, then randomize it later.
- return_with_leftover_noise: enable is best practice.
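To make the step hand-off concrete, here is a minimal, non-authoritative sketch of a single hand-off between two KSamplerAdvanced nodes, written as an API-format prompt fragment. All node ids and the referenced loader/conditioning nodes are hypothetical, and the step counts are only examples; the refiner -> model -> refiner chain described above repeats the same pattern with one more stage in front.

```python
# Hypothetical API-format fragment: the base pass leaves noise for the refiner to finish.
TOTAL_STEPS = 25
HANDOFF = 20  # base pass covers steps 0..20, refiner pass finishes 20..25

samplers = {
    "3": {  # base pass
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
            "latent_image": ["5", 0],                # Empty Latent Image node
            "add_noise": "enable", "noise_seed": 0,
            "steps": TOTAL_STEPS, "cfg": 8.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": 0, "end_at_step": HANDOFF,
            "return_with_leftover_noise": "enable",  # hand the remaining noise onward
        },
    },
    "11": {  # refiner pass
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["12", 0], "positive": ["15", 0], "negative": ["16", 0],
            "latent_image": ["3", 0],                # LATENT output of the base pass above
            "add_noise": "disable",                  # the leftover noise is already in the latent
            "noise_seed": 0,
            "steps": TOTAL_STEPS, "cfg": 8.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": HANDOFF, "end_at_step": 10000,  # run to the end
            "return_with_leftover_noise": "disable",
        },
    },
}
```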
–
In ComfyUI there is the setting sampler_name. While this has a significant impact on output, it might be more valuable to set it to something which produces a decent image at a low number of steps. This is because you can queue up a bunch of images, each with a random seed, and for any preview item you don’t like, you can drop that queue entry. It saves a lot of time when you don’t have to wait for the full workflow to complete! (A scripted version of this queue-a-bunch approach is sketched after the list below.)
- With the checkpoint Stable Diffusion 1.5 pruned emaonly:
  - steps: 4 for the first preprocessor
  - prompt: “a box of rocks”
  - seed: 0
  - cfg: 8
These sampler_name items are good quality:
lms, dpm_adaptive, dpmpp_2s_ancestral, dpmpp_sde, dpmpp_sde_gpu, dpmpp_2m, dpmpp_2m_sde, dpmpp_2m_sde_gpu, ddpm, ddim, uni_pc, uni_pc_bh2
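A hedged sketch of the queue-a-bunch approach: submit one low-step job per sampler_name, each with a random seed, to a locally running ComfyUI over its HTTP API (default http://127.0.0.1:8188). It assumes you have exported your workflow in API format (the “Save (API Format)” option, which may require enabling the dev-mode options in the settings) and that node “3” is its KSampler; both are assumptions about your particular workflow.

```python
# Queue one low-step preview per sampler_name, each with a random seed.
import json
import random
import urllib.request

SAMPLERS = ["lms", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde",
            "dpmpp_2m", "dpmpp_2m_sde", "ddpm", "ddim", "uni_pc", "uni_pc_bh2"]

with open("workflow_api.json") as f:   # hypothetical API-format export of your workflow
    workflow = json.load(f)

for sampler in SAMPLERS:
    workflow["3"]["inputs"]["sampler_name"] = sampler          # "3" assumed to be the KSampler
    workflow["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    workflow["3"]["inputs"]["steps"] = 4                       # low-step preview, as above
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=data)
    urllib.request.urlopen(req)                                # POST the job to the queue
```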
–
Inbox ∞
- https://comfyanonymous.github.io/ComfyUI_tutorial_vn/
  - A tutorial as a dating sim!
- https://www.reddit.com/r/StableDiffusion/comments/17boamf/9_coherent_facial_expressions_comfyui_edition/
- https://www.reddit.com/r/comfyui/comments/155pt6s/how_to_use_gfpgan/
- https://www.reddit.com/r/StableDiffusion/comments/15hl8hc/simple_comfyui_img2img_upscale_workflow/
- https://github.com/ltdrdata/ComfyUI-Impact-Pack
- https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
- https://github.com/comfyanonymous/ComfyUI_experiments
Footnotes
1. .safetensors is preferred. [↩]

