ComfyUI-TinyBreaker is a collection of custom nodes designed to generate images with the TinyBreaker model. Although still under active development, the nodes are functional and let you explore the model's potential.
TinyBreaker model
While still in the prototype stage, the TinyBreaker model stands out for its unique features. To learn more about its strengths and discover upcoming improvements, check out "What is TinyBreaker?"
You need to have these two models copied into your ComfyUI installation:
- tinybreaker_prototype0.safetensors (3.0 GB):
  place the file in the 'ComfyUI/models/checkpoints' folder.
- t5xxl_fp8_e4m3fn.safetensors (4.9 GB):
  place the file in the 'ComfyUI/models/clip' folder (or 'ComfyUI/models/text_encoders'). This model is a versatile text encoder also used by FLUX and SD3.5.
Ensure you have the latest version of ComfyUI.
The easiest way to install the nodes is through ComfyUI Manager:
- Open ComfyUI and click on the "Manager" button to launch the "ComfyUI Manager Menu".
- Within the ComfyUI Manager, locate and click on the "Custom Nodes Manager" button.
- In the search bar, type "tinybreaker".
- Select the "ComfyUI-TinyBreaker" node from the search results and click the "Install" button.
- Restart ComfyUI to ensure the changes take effect.
To manually install the nodes:
- Open your preferred terminal application.
- Navigate to your ComfyUI directory:
cd <your_comfyui_directory>
- Move into the custom_nodes folder and clone the repository:
cd custom_nodes
git clone https://github.com/martin-rizzo/ComfyUI-TinyBreaker
For those using the standalone ComfyUI release on Windows:
- Go to where you unpacked ComfyUI_windows_portable. You'll find your run_nvidia_gpu.bat file there, confirming you're in the correct location.
- Press CTRL+SHIFT+Right click in an empty space and select "Open PowerShell window here".
- Clone the repository into your custom nodes folder using:
git clone https://github.com/martin-rizzo/ComfyUI-TinyBreaker .\ComfyUI\custom_nodes\ComfyUI-TinyBreaker
This image contains a simple workflow for testing the TinyBreaker model. To load this workflow, simply drag and drop the image into ComfyUI.
For further information and additional workflow examples, please consult the workflows folder.
The 'Select Style' node allows you to select an image style. This node injects text into the prompt and modifies sampler parameters to influence the image generation. Please note that these styles are still in development, as I am experimenting with different parameter combinations to refine them over time. Therefore, they might not always function perfectly or reflect exactly what is described here.
| Style Name | Description |
|---|---|
| PHOTO | Realistic images that closely resemble photographs. |
| DARKFAN80 | Dark fantasy images with an 80s cinematic style. |
| LITTLETOY | Cute, minimalist images in the style of small toys. |
| PIXEL_ART | Pixel art images with retro, blocky details. |
| COLOR_INK | Beautiful drawings in a vibrant, colorful ink style. |
| REALISTIC_WAIFU_X | Realistic images where a woman is the main subject. |
| REALISTIC_WAIFU_Z | Realistic images where a woman is the main subject (variant). |
The 'Unified Prompt' node allows you to input both your prompt and parameters within a single text area, streamlining your workflow. This eliminates the need for separate input fields.
When using the Unified Prompt node:
- Begin by typing your desired prompt text as usual.
- Then write any necessary parameters, each preceded by a double hyphen (`--`).
- Use the special keys CTRL+UP and CTRL+DOWN to modify the value of each parameter.
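For instance, a complete unified prompt combining free text and parameters might look like this (an illustrative example; the prompt text and values are arbitrary):

```
a cozy cabin in a snowy forest at dusk --no people, cars --seed 42 --aspect 16:9 --style PHOTO
```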
| Parameter | Description | Example |
|---|---|---|
| `--no <text>` | Specifies elements that should not appear in the image (negative prompt). | `--no trees, clouds` |
| `--refine <text>` | Provides a textual description of which elements should be refined. | `--refine cats ears` |
| `--variant <number>` | Selects a variant of the refinement without changing the composition. | `--variant 2` |
| `--cfg-adjust <decimal>` | Adjusts the value of the Classifier-Free Guidance (CFG). | `--cfg-adjust -0.2` |
| `--detail <level>` | Sets the intensity level of the detail refinement. | `--detail normal` |

| Parameter | Description | Example |
|---|---|---|
| `--seed <number>` | Defines the number used to initialize the random generator. | `--seed 42` |
| `--aspect <ratio>` | Specifies the aspect ratio of the image. | `--aspect 16:9` |
| `--landscape` / `--portrait` | Specifies the orientation of the image (horizontal or vertical). | `--portrait` |
| `--small` / `--medium` / `--large` | Controls the size of the generated image. | `--medium` |
| `--batch-size <number>` | Specifies the number of images to generate in a batch. | `--batch-size 4` |
| `--style <style>` | Defines the artistic style of the image. | `--style PIXEL_ART` |
For more details on these parameters, see docs/prompt_parameters.md.
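To illustrate how a unified prompt can be separated into free text and parameters, here is a minimal Python sketch. This is not the node's actual parser; the function name and the splitting rule are assumptions for illustration only.

```python
import re

def split_unified_prompt(text: str):
    """Split a unified prompt into (prompt_text, params_dict).

    A sketch only -- not the node's actual implementation.
    Flag-style parameters (e.g. --portrait) map to an empty string.
    """
    # Parameters begin at whitespace followed by '--' and a letter.
    parts = re.split(r"\s--(?=[a-z])", " " + text.strip())
    prompt = parts[0].strip()
    params = {}
    for chunk in parts[1:]:
        name, _, value = chunk.partition(" ")
        params[name] = value.strip()
    return prompt, params
```

For example, `split_unified_prompt("a red fox in snow --no people --seed 42")` yields the bare prompt plus a dictionary of the trailing parameters.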
The 'Unified Prompt' node offers special control keys for simplifying parameter input and modification:
- CTRL+RIGHT (autocomplete): Start a parameter name by typing `--` followed by its beginning (e.g., `--d`). Pressing CTRL+RIGHT automatically completes the full parameter name (e.g., `--detail`).
- CTRL+UP/DOWN (over a parameter value): Increments or decrements the value associated with a parameter. For instance, if your cursor is positioned over `--seed 20` and you press CTRL+UP, the text changes to `--seed 21`.
The 'Save Image' node embeds workflow information into the generated image. It also embeds the prompt and parameters in a format compatible with CivitAI/A1111, which enables:
- CivitAI to read the prompt used to generate the image when it is uploaded.
- A wide range of applications to access the prompt and parameters used for image generation.
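As an illustration of how other tools can recover such embedded metadata, here is a small stdlib-only sketch that reads the tEXt chunks of a PNG file, where A1111-compatible tools conventionally store the prompt under the 'parameters' keyword. The function name is an assumption, and the exact keyword and layout may vary between tools.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def read_png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string.

    Minimal sketch: skips CRC validation and the compressed
    zTXt/iTXt chunk types.
    """
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    chunks = {}
    pos = 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte length, 4-byte type, payload, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload: keyword, NUL separator, Latin-1 text.
            keyword, _, text = payload.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length
    return chunks
```

Calling `read_png_text_chunks(open("image.png", "rb").read()).get("parameters")` would then return the embedded prompt string, if present.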
I would like to express my sincere gratitude to the developers of PixArt-Σ for their outstanding model. Their contributions have been instrumental in shaping this project and pushing the boundaries of high-quality image generation with minimal resources.
Additional thanks to Ollin Boer Bohan for the Tiny AutoEncoder models. These models have proven invaluable for their efficient latent image encoding, decoding and transcoding capabilities.
Copyright (c) 2024-2025 Martin Rizzo
This project is licensed under the MIT license.
See the "LICENSE" file for details.