Stable Diffusion (on Wikipedia)
https://github.com/CompVis/stable-diffusion
https://ommer-lab.com/research/latent-diffusion-models/
https://stability.ai/
-
- These guys use Stable Diffusion: https://creator.nightcafe.studio/
- NSFW and waifu was unstablediffusion.net
- https://www.unstability.ai/ - web.
-
- 2023-10-19 - Switched to ComfyUI
- 2023-04-12 - I used Easy Diffusion
Stuff ∞
- Additional Networks for generating images
- Generative Models by Stability AI
- LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
- Interactive guide to Stable Diffusion steps parameter - a tutorial on inference / sampling steps.
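As a reference point, a minimal sketch (not from that guide) of where the steps parameter shows up when driving Stable Diffusion from Python with the diffusers library; the model id, prompt, and step counts are my own assumptions:
```python
# Minimal sketch of the "steps" (sampling steps) parameter via diffusers.
# Model id, prompt, and step counts are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a watercolor painting of a lighthouse at dawn"

# Fewer steps render faster but rougher; more steps cost time with diminishing returns.
for steps in (10, 20, 50):
    image = pipe(prompt, num_inference_steps=steps).images[0]
    image.save(f"lighthouse_{steps:02d}_steps.png")
```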
Recommended budget video cards to use for Stable Diffusion ∞
This knowledge is decent as of 2023-04-12, but development and improvement are ongoing!
-
- As of 2023-04-12, efforts to run on AMD (RDNA) cards exist, but they perform worse. It is unknown whether that type of card will ever outperform Nvidia in terms of ROI.
- You want the third digit to be high
- e.g. a 2060 is better than a 3050
- 8G or more of VRAM
- The higher the better; this is more important than the third digit.
Examples:
- 2050 8G
- 2060 8G is widely available and cost effective.
- 3050 12G
- 2060 12G is a great choice but a significant cost increase.
- 3060 12G
-
- "Ti" cards are nice, but they are too expensive to consider.
- Used / refurbished cards save a lot of money, but be aware that those cards may have been used for crypto mining and, at worst, their fans might be worn out. Also, warranty matters.
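Not purchasing advice, but a quick way to check how much VRAM a card you already have reports; a minimal PyTorch sketch, assuming CUDA and device index 0:
```python
# Quick PyTorch check against the ~8G VRAM guideline above.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb < 8:
        print("Below the 8 GB guideline; expect to lean on low-VRAM settings.")
else:
    print("No CUDA-capable GPU detected.")
```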
Random knowledge I don't understand ∞
Stuff here: https://pastebin.com/9RmqDhSm
Inpainting/Outpainting:
Upscaling:
Wildcards: https://rentry.org/NAIwildcards
Create Embeddings, Hypernetworks, LoRAs:
- https://rentry.org/simplified-embed-training
- https://rentry.org/HNSpeedrun
- https://rentry.org/lora_train
Animation:
ControlNet:
Where to get models ∞
Various I've noted:
Where to get SDXL models ∞
- The primary Stable Diffusion XL model is sd_xl_base_1.0.safetensors, which can be downloaded from https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0. It was trained at 1024, so set the width and height of your latent image or img2img input to 1024 or higher; quality may suffer if you make them much smaller.
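If you drive that checkpoint from Python with the diffusers library instead of a UI, the 1024 advice maps to the width and height arguments; a minimal sketch, with the prompt and fp16 settings as my own assumptions:
```python
# Hedged sketch: loading sd_xl_base_1.0 via diffusers and rendering at the
# native 1024x1024 training resolution. Prompt and dtype are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = pipe(
    "photograph of a mountain cabin in autumn, golden hour",
    width=1024,   # stay at or above the 1024 training resolution
    height=1024,
).images[0]
image.save("cabin_1024.png")
```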
Upscale models ∞
-
-
TODO - LoRAs ∞
- To learn more about LoRAs, see Stable Diffusion usage notes § LORAs
TODO - get some locations
TODO - How to make a full-body image of a person ∞
- https://www.google.com/search?q=stable+diffusion+prompt+full+body+portrait
- https://www.reddit.com/r/StableDiffusion/comments/x65klf/how_to_get_a_full_body_image/
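The usual suggestions in those threads boil down to "full body" phrasing plus a taller canvas; a hedged diffusers sketch where the model id, prompt wording, negative prompt, and 512x768 size are illustrative assumptions, not a tested recipe:
```python
# Illustrative only: "full body" phrasing plus a portrait-shaped canvas so the
# whole figure fits in frame. Model id, prompt, and dimensions are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "full body portrait of a hiker standing on a mountain trail, "
    "head to toe, wide shot, detailed clothing",
    negative_prompt="cropped, close-up, out of frame, headshot",
    width=512,
    height=768,  # taller aspect ratio encourages head-to-toe framing
).images[0]
image.save("full_body.png")
```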
TO TRY - Training your own models for Stable Diffusion AI art ∞
See Training your own models for Stable Diffusion AI art
TODO - how to make stable diffusion output its own training data ∞
TODO - drop my notes here
TODO - Tutorials ∞
- If you want ComfyUI-specific stuff, check out ComfyUI § tutorials
- LoRAs - https://huggingface.co/blog/lora (see the loading sketch after this list)
- Prompts: https://web.archive.org/web/20230613000826/rentry.org/hdgpromptassist
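To go with the LoRA link above, a minimal sketch of applying a downloaded LoRA with diffusers; the base model, LoRA directory, file name, and scale are placeholders, not recommendations:
```python
# Hedged sketch: applying a downloaded LoRA to a diffusers pipeline.
# Base model, LoRA path/file name, and scale are placeholder assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Point these at a LoRA you have downloaded (typically a .safetensors file).
pipe.load_lora_weights("path/to/lora_dir", weight_name="your_lora.safetensors")

image = pipe(
    "a portrait in the style the LoRA was trained on",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength, 0.0-1.0
).images[0]
image.save("lora_test.png")
```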
Alternatives / other implementations ∞
See AI Art
- Easy Diffusion
- https://invoke-ai.github.io/InvokeAI/
- NVIDIA: https://rentry.org/voldy
- CPU: https://rentry.org/cputard | https://rentry.org/webui-cpu
- AMD (and more): https://rentry.org/sdg-link
- Alternative UI (Node-based): https://github.com/comfyanonymous/ComfyUI
Last updated 2025-02-04 at 01:30:38