Add custom nodes, Civitai loras (LFS), and vast.ai setup script
Some checks failed
Python Linting / Run Ruff (push) Has been cancelled
Python Linting / Run Pylint (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.10, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.11, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.12, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-unix-nightly (12.1, , linux, 3.11, [self-hosted Linux], nightly) (push) Has been cancelled
Execution Tests / test (macos-latest) (push) Has been cancelled
Execution Tests / test (ubuntu-latest) (push) Has been cancelled
Execution Tests / test (windows-latest) (push) Has been cancelled
Test server launches without errors / test (push) Has been cancelled
Unit Tests / test (macos-latest) (push) Has been cancelled
Unit Tests / test (ubuntu-latest) (push) Has been cancelled
Unit Tests / test (windows-2022) (push) Has been cancelled
Includes 30 custom nodes committed directly, 7 Civitai-exclusive loras stored via Git LFS, and a setup script that installs all dependencies and downloads HuggingFace-hosted models on vast.ai.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 custom_nodes/controlaltai-nodes/.gitignore (vendored) Normal file
@@ -0,0 +1 @@
/__pycache__/
21 custom_nodes/controlaltai-nodes/LICENSE Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 ControlAltAI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
288 custom_nodes/controlaltai-nodes/README.md Normal file
@@ -0,0 +1,288 @@
### Requirements Update: 8 Dec 2024: Flux Attention Control Node requires XFormers. Check your version of PyTorch and install a compatible version of XFormers. Please follow the instructions here: <a href="https://github.com/gseth/ControlAltAI-Nodes/blob/master/xformers_instructions.txt">xformers_instructions</a>

# ComfyUI ControlAltAI Nodes

This repository contains custom nodes designed for the ComfyUI framework, focusing on quality-of-life improvements. These nodes aim to make tasks easier and more efficient. Two Flux nodes are available to enhance functionality and streamline workflows within ComfyUI.

## Nodes

### List of Nodes:
- Flux
  - Flux Resolution Calculator (Updated May 2025)
  - Flux Sampler
  - Flux Union ControlNet Apply
- Logic
  - Boolean Basic
  - Boolean Reverse
  - Integer Settings
  - Choose Upscale Model
- Image
  - Get Image Size & Ratio
  - Noise Plus Blend
- Flux Region
  - Region Mask Generator
  - Region Mask Processor
  - Region Mask Validator
  - Region Mask Conditioning
  - Flux Attention Control
  - Region Overlay Visualizer
  - Flux Attention Cleanup

### Flux Resolution Calculator

The Flux Resolution Calculator determines the optimal image resolution for outputs generated with the Flux model, which is oriented towards megapixel counts rather than the standard SDXL resolutions. Users select their desired megapixel count, ranging from 0.1 to 2.0 megapixels, and an aspect ratio; the calculator then provides the exact image dimensions for optimal performance with the Flux model. This ensures that the generated images meet specific quality and size requirements tailored to the user's needs. Additionally, while the official limit is 2.0 megapixels, testing has successfully generated images at higher resolutions, indicating the model's flexibility in accommodating various dimensions without compromising quality.

- **Supported Megapixels:** 0.1 MP - 2.5 MP (change stepping to 0.1 for fine-tuned selection)
- **Note:** Generations above 1 MP may appear slightly blurry, but resolutions of 3k+ have been successfully tested on the Flux1.Dev model.
- **Custom Ratio:** Custom ratios are now supported. Enable or disable the Custom Ratio option and input any ratio (example: 4:9).
- **Preview:** The preview node is just a visual representation of the ratio.
- **Divisible By:** You can now choose divisibility by 8/16/32/64. By default, it is 64. For fine-tuned results, choose divisibility by 8. Divisibility by 32/64 is recommended for Flux Dev 1.
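The calculation the node performs can be sketched in a few lines (an illustrative reimplementation, not the node's actual code; the function name and rounding strategy are assumptions):

```python
import math

def flux_resolution(megapixels: float, ratio_w: int, ratio_h: int,
                    divisible_by: int = 64) -> tuple[int, int]:
    """Find width/height whose product is close to the requested megapixel
    count, matching ratio_w:ratio_h, with both sides a multiple of
    divisible_by (a sketch of the Flux Resolution Calculator's behavior)."""
    target_pixels = megapixels * 1_000_000
    # width = k * ratio_w and height = k * ratio_h, so k^2 * rw * rh = pixels
    k = math.sqrt(target_pixels / (ratio_w * ratio_h))

    def snap(v: float) -> int:
        # Round to the nearest allowed multiple, never below one step.
        return max(divisible_by, round(v / divisible_by) * divisible_by)

    return snap(k * ratio_w), snap(k * ratio_h)

print(flux_resolution(1.0, 16, 9))  # (1344, 768)
print(flux_resolution(1.0, 1, 1))   # (1024, 1024)
```

Lowering `divisible_by` to 8 trades exactness of the megapixel target against the 32/64 alignment the model prefers, which is why 64 is the default above.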
### Flux Sampler

The Flux Sampler node combines the functionality of the CustomSamplerAdvance node and its input nodes into a single, streamlined node.

- **CFG Setting:** The CFG is fixed at 1.
- **Conditioning Input:** Only positive conditioning is supported.
- **Compatibility:** Only the samplers and schedulers compatible with the Flux model are included.
- **Latent Compatibility:** Use SD3 Empty Latent Image only. The normal empty latent image node is not compatible.





### Flux Union ControlNet Apply

The Flux Union ControlNet Apply node is an all-in-one node compatible with the InstantX Union Pro ControlNet. It has been tested extensively with the union controlnet type and works as intended. You can combine two ControlNet Union units and get good results; combining more than two is not recommended. The ControlNet has been tested only on the Flux 1.Dev model.



**Recommended Settings:**<br>
strength: 0.15-0.65.<br>
end percentage: 0.200 - 0.900.

**Recommended PreProcessors:**<br>
Canny: Canny Edge (ControlNet Aux).<br>
Tile: Tile (ControlNet Aux).<br>
Depth: Depth Anything V2 Relative (ControlNet Aux).<br>
Blur: Direct Input (Blurry Image) or Tile (ControlNet Aux).<br>
Pose: DWPose Estimator (ControlNet Aux).<br>
Gray: Image Desaturate (Comfy Essentials Custom Node).<br>
Low Quality: Direct Input.

Results: (Canny and Depth examples not included; they are straightforward.)<br><br>
**Pixel Low Resolution to High Resolution**<br><br>




**Photo Restoration**<br><br>




**Game Asset Low Resolution Upscale**<br><br>





**Blur to UnBlur**<br><br>


**Re-Color**<br><br>





**YouTube tutorial Union ControlNet Usage: <a href="https://www.youtube.com/watch?v=4_1A5pQkJkg">Video Tutorial</a>**

**Shakker Labs & InstantX Flux ControlNet Union Pro Model Download:** <a href="https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro">Hugging Face Link</a>

### Get Image Size & Ratio
This node gets the image resolution as width, height, and ratio. It can be connected to the Flux Resolution Calculator. To do so, follow these steps:
- Right-click on the Flux Resolution Calculator --> Convert widget to input --> Convert custom_aspect_ratio to input.
- Connect the Ratio output to the custom_aspect_ratio input.
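The ratio output described above can be reproduced by reducing width:height by their greatest common divisor, roughly like this (a sketch; the node's exact formatting may differ):

```python
from math import gcd

def image_ratio(width: int, height: int) -> str:
    """Reduce width:height to its simplest whole-number ratio,
    e.g. 1920x1080 -> '16:9'."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(image_ratio(1920, 1080))  # 16:9
print(image_ratio(1344, 768))   # 7:4
```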


### Integer Setting
This node outputs a raw integer value of 1 or 2: Enable = 2, Disable = 1.

Use case: this can be placed before a two-way switch, allowing workflow logic to flow in one direction or the other. At present it only controls two logical flows; in the future, we will upgrade the node to support three or more logical switch flows.
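The routing pattern is simple to state in code. A hypothetical helper (not the node's implementation) that mimics Integer Settings feeding a two-way switch:

```python
def two_way_switch(selector: int, input_1, input_2):
    """Route input_1 when selector is 1, input_2 when selector is 2,
    matching Integer Settings' Disable = 1 / Enable = 2 convention
    (illustrative sketch, not the node's code)."""
    return input_1 if selector == 1 else input_2

assert two_way_switch(1, "upscale_path", "detail_path") == "upscale_path"
assert two_way_switch(2, "upscale_path", "detail_path") == "detail_path"
```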


### Choose Upscale Model
A simple node that can be connected with boolean logic. A true response will use upscale model 1, and a false response will use upscale model 2.



### Noise Plus Blend
This node generates Gaussian noise based on the dimensions of the input image and blends the noise into the entire image or only the masked region.

**Issue:** Generated faces/landscapes are realistic, but during upscaling, the AI model smooths the skin or texture, making it look plastic or adding smooth fine lines.

**Solution:** For upscaling, auto-segment or manually mask the face or specified regions and add noise. Then pass the blended image output to the K-Sampler and denoise at 0.20 - 0.50.



You can see the noise has been applied only to the face, as per the mask. This maintains the smooth bokeh and preserves the facial details during upscaling.


Denoise the image using the Flux or SDXL sampler. Recommended sampler denoise: 0.10 - 0.50


**Settings:**<br>
noise_scale: 0.30 - 0.50.<br>
blend_opacity: 10-25.

If you find too many artifacts on the skin or other textures, reduce both values. Increase the values if the upscaled output has plastic, velvet-like smooth lines.

**Best Setting for AI-generated Faces:**<br>
noise_scale: 0.40-0.50.<br>
blend_opacity: 15-25.

**Best Setting for AI-generated texture (landscapes):**<br>
noise_scale: 0.30.<br>
blend_opacity: 12-15.
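The masked noise blend described above can be sketched with NumPy (an independent reimplementation under assumed parameter semantics, not the node's code; `noise_scale` is treated as the noise standard deviation and `blend_opacity` as a percentage):

```python
import numpy as np

def noise_plus_blend(image: np.ndarray, mask: np.ndarray,
                     noise_scale: float = 0.4, blend_opacity: float = 20.0,
                     seed: int = 0) -> np.ndarray:
    """image: float32 HxWx3 in [0, 1]; mask: float32 HxW in [0, 1].
    Adds Gaussian noise only where the mask is white, weighted by
    blend_opacity percent."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_scale, size=image.shape).astype(np.float32)
    # Per-pixel blend weight: zero outside the mask, opacity inside it.
    alpha = (blend_opacity / 100.0) * mask[..., None]
    blended = image * (1.0 - alpha) + (image + noise) * alpha
    return np.clip(blended, 0.0, 1.0)

img = np.full((64, 64, 3), 0.5, dtype=np.float32)
msk = np.zeros((64, 64), dtype=np.float32)
msk[16:48, 16:48] = 1.0  # only the center region receives noise
out = noise_plus_blend(img, msk)
```

Feathering the mask before blending (as the Region Mask Processor does elsewhere in this pack) would soften the boundary between noised and untouched pixels.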
Results:
**Example 1**<br>
Without Noise Blend:


With Noise Blend:


**Example 2**<br>
Without Noise Blend:


With Noise Blend:


**Example 3**<br>
Without Noise Blend:


With Noise Blend:


**Example 4**<br>
Without Noise Blend:


With Noise Blend:


### Flux Region (Spatial Control)

The node pipeline is as follows: Region Mask Generator --> Region Mask Processor --> Region Mask Validator --> Region Mask Conditioning --> Flux Attention Control --> Region Overlay Visualizer (optional) --> Flux Attention Cleanup.</br>
*Note: Watching the video tutorial is a must; the learning curve for Flux Region Spatial Control is a bit steep.*

**Region Mask Generator:** This node generates the regions in mask and bbox format. This information is then passed on to the Mask Processor.</br>

</br>

**Region Mask Processor:** This node processes the generated mask, applying Gaussian blur and feathering, and sends the processed mask down the pipeline.</br>

</br>

**Region Mask Validator:** This node checks the validity of the regions. The "is valid" message will be true if there are no overlaps. The validation message shows detailed information on the overlapping regions and the overlap percentage. Although the methodology requires zero overlap, the issue is resolved in the Flux Attention Control via feathering. Overlap is only a problem if it is excessive, beyond 40-50%.</br>

</br>

**Region Mask Conditioning:** Up to three separate conditionings can be connected. The node processes based on the number of regions defined rather than the actual conditioning connections. The strength values are independent for each region: Strength 1 for Region 1, Strength 2 for Region 2, and Strength 3 for Region 3. The strength value ranges from 0 to 10 with an increment/decrement step of 0.1. At a value of 1, the region strength matches the base conditioning strength, which is always set to 1 as a global value. Strength values are not only relative to the base conditioning value but also relative to each other; they are also affected by the region's percentage area of the canvas and the feathering value in the attention control. Note: only use the dual CLIP and Flux conditioning in ComfyUI. The base + region flux guidance should be set to 1.</br>

</br>

**Flux Attention Control:** This node takes the region conditioning + base conditioning + the feathering strengths, along with all the previous information in the pipeline, and overrides the Flux attention. When disabled, it only passes the base conditioning through to the sampler.</br>

</br>

**Region Overlay Visualizer:** This node overlays the regions on the final output for visual purposes only.</br>

**Flux Attention Cleanup:** Since the attention is overridden in the model, a tensor mismatch error will occur when you switch workflows. We also do not want the attention to be cleaned up within the existing workflow. This node automatically preserves attention during re-runs of the existing workflow, but on a workflow switch it performs a fresh cleanup and restores Flux's original attention. This is achieved without a model unload or manual cache cleanup, as those will not work.</br>

</br>

**Xformers & Token/Attention Limits:** The pipeline uses an advanced attention mechanism that combines text tokens from your prompts with spatial information from defined regions. As you increase prompt length or add multiple, complex regions, you create larger attention matrices. While xFormers helps optimize memory usage, there is still a practical limit on how many tokens and spatial positions the model can handle without causing dimension or shape alignment errors.

Example Error: 'Invalid shape for attention bias: torch.Size([1, 24, 5264, 5264]) (expected (1, 24, 5118, 5118))'

This limit isn't about a fixed "5,000 x 5,000" size or a strict VRAM cap. Instead, it is determined by the model's architecture and how tokens are combined with spatial positions. Extremely long prompts or too many intricate regions can produce attention shapes that the model's code cannot process, resulting in shape mismatch errors rather than running out of memory. If you encounter these errors, try shortening your prompt or reducing the complexity of your regional conditioning. There isn't a simple formula linking VRAM size directly to token count; it is about balancing your prompt length and region definitions to keep the attention mechanism within workable limits. Testing with the Flux model and T5-XXL in FP16 on a 4090 shows that keeping prompts relatively short (each clip under 80 tokens) and regions manageable helps avoid such issues.
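The sequence length behind such shape errors can be estimated: one spatial token per 16x16 pixel patch, plus the base prompt and one subprompt per region. A rough model of that sizing (an assumption based on the description above, not the node's exact math):

```python
def attention_seq_len(width: int, height: int,
                      tokens_per_prompt: int, num_regions: int) -> int:
    """Estimate the attention sequence length for the region pipeline:
    (width/16) * (height/16) spatial tokens, plus the base prompt and
    one subprompt per region at tokens_per_prompt each."""
    spatial = (width // 16) * (height // 16)
    text = tokens_per_prompt * (1 + num_regions)
    return text + spatial

# A 1024x1024 generation with a 256-token base prompt and 3 regions:
print(attention_seq_len(1024, 1024, 256, 3))  # 4096 spatial + 1024 text = 5120
```

The attention bias is a square matrix over this length, so it grows quadratically: trimming prompts or regions shrinks it much faster than reducing resolution alone.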

**GGUF & CivitAI fine-tune models:** The Flux Region Pipeline was tested with GGUF models without issues. Third-party CivitAI Copax Timeless XPlus 3 Flux models also worked without problems.

**LoRA Support:** LoRA is supported and will apply to all attention. At this stage, using a different LoRA for different regions is not possible. Research work is still ongoing.

**ControlNet Support:** Currently not tested. Research work is still ongoing.

Results:
**Example 1**<br>
3 Region Split Blend using Advance LLM: Base Conditioning (ignored) + 3 Regions



**Example 2**<br>
Style manipulation: Base Conditioning + 1 Region



**Example 3**<br>
Simple Splitting Contrast: Base Conditioning (ignored) + 2 Regions



**Example 4**<br>
Simple Splitting Blend: Base Conditioning + 1 Region



**Example 5**<br>
3 Region Split Blend: Base Conditioning (ignored) + 3 Regions



**Example 6**<br>
3 Region Split Blend using Advance LLM: Base Conditioning (ignored) + 3 Regions



**Example 7**<br>
Color Manipulation: Base Conditioning (ignored) + 2 Regions



**YouTube tutorial Flux Region Usage: <a href="https://youtu.be/kNwz6kJRDc0">Flux Region Spatial Control Tutorial</a>**

### YouTube ComfyUI Tutorials

We are a team of two and create extensive tutorials for ComfyUI. Check out our YouTube channel:</br>
<a href="https://youtube.com/@controlaltai">ControlAltAI YouTube Tutorials</a>

### Black Forest Labs AI

Black Forest Labs, a pioneering AI research organization, has developed the Flux model series, which includes the Flux1.[dev] and Flux1.[schnell] models. These models are designed to push the boundaries of image generation through advanced deep-learning techniques.

For more details on these models, their capabilities, and licensing information, visit the <a href="https://blackforestlabs.ai/">Black Forest Labs website</a>.

### Flux Regional Spatial Control Acknowledgment

Inspired by: <a href="https://github.com/attashe/ComfyUI-FluxRegionAttention">Flux Region Attention by Attashe</a>

### License

This project is licensed under the MIT License.
79 custom_nodes/controlaltai-nodes/__init__.py Normal file
@@ -0,0 +1,79 @@
print("\n\033[32mInitializing ControlAltAI Nodes\033[0m")  # Fixed green reset

from .flux_resolution_cal_node import FluxResolutionNode
from .flux_sampler_node import FluxSampler
from .flux_union_controlnet_node import FluxUnionControlNetApply
from .boolean_basic_node import BooleanBasic
from .boolean_reverse_node import BooleanReverse
from .get_image_size_ratio_node import GetImageSizeRatio
from .noise_plus_blend_node import NoisePlusBlend
from .integer_settings_node import IntegerSettings
from .integer_settings_advanced_node import IntegerSettingsAdvanced
from .choose_upscale_model_node import ChooseUpscaleModel
from .region_mask_generator_node import RegionMaskGenerator
from .region_mask_validator_node import RegionMaskValidator
from .region_mask_processor_node import RegionMaskProcessor
from .region_mask_conditioning_node import RegionMaskConditioning
from .flux_attention_control_node import FluxAttentionControl
from .region_overlay_visualizer_node import RegionOverlayVisualizer
from .flux_attention_cleanup_node import FluxAttentionCleanup
from .hidream_resolution_node import HiDreamResolutionNode
from .perturbation_texture_node import PerturbationTexture
from .text_bridge_node import TextBridge
from .two_way_switch_node import TwoWaySwitch
from .three_way_switch_node import ThreeWaySwitch

NODE_CLASS_MAPPINGS = {
    "FluxResolutionNode": FluxResolutionNode,
    "FluxSampler": FluxSampler,
    "FluxUnionControlNetApply": FluxUnionControlNetApply,
    "BooleanBasic": BooleanBasic,
    "BooleanReverse": BooleanReverse,
    "GetImageSizeRatio": GetImageSizeRatio,
    "NoisePlusBlend": NoisePlusBlend,
    "IntegerSettings": IntegerSettings,
    "IntegerSettingsAdvanced": IntegerSettingsAdvanced,
    "ChooseUpscaleModel": ChooseUpscaleModel,
    "RegionMaskGenerator": RegionMaskGenerator,
    "RegionMaskValidator": RegionMaskValidator,
    "RegionMaskProcessor": RegionMaskProcessor,
    "RegionMaskConditioning": RegionMaskConditioning,
    "FluxAttentionControl": FluxAttentionControl,
    "RegionOverlayVisualizer": RegionOverlayVisualizer,
    "FluxAttentionCleanup": FluxAttentionCleanup,
    "HiDreamResolutionNode": HiDreamResolutionNode,
    "PerturbationTexture": PerturbationTexture,
    "TextBridge": TextBridge,
    "TwoWaySwitch": TwoWaySwitch,
    "ThreeWaySwitch": ThreeWaySwitch,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "FluxResolutionNode": "Flux Resolution Calc",
    "FluxSampler": "Flux Sampler",
    "FluxUnionControlNetApply": "Flux Union ControlNet Apply",
    "BooleanBasic": "Boolean Basic",
    "BooleanReverse": "Boolean Reverse",
    "GetImageSizeRatio": "Get Image Size Ratio",
    "NoisePlusBlend": "Noise Plus Blend",
    "IntegerSettings": "Integer Settings",
    "IntegerSettingsAdvanced": "Integer Settings Advanced",
    "ChooseUpscaleModel": "Choose Upscale Model",
    "RegionMaskGenerator": "Region Mask Generator",
    "RegionMaskValidator": "Region Mask Validator",
    "RegionMaskProcessor": "Region Mask Processor",
    "RegionMaskConditioning": "Region Mask Conditioning",
    "FluxAttentionControl": "Flux Attention Control",
    "RegionOverlayVisualizer": "Region Overlay Visualizer",
    "FluxAttentionCleanup": "Flux Attention Cleanup",
    "HiDreamResolutionNode": "HiDream Resolution",
    "PerturbationTexture": "Perturbation Texture",
    "TextBridge": "Text Bridge",
    "TwoWaySwitch": "Switch (Two Way)",
    "ThreeWaySwitch": "Switch (Three Way)",
}

# Tell ComfyUI where to find JavaScript files
WEB_DIRECTORY = "./web"

__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"]
23 custom_nodes/controlaltai-nodes/boolean_basic_node.py Normal file
@@ -0,0 +1,23 @@
class BooleanBasic:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "boolean": ("BOOLEAN", {"default": False}),
            },
        }

    RETURN_TYPES = ("BOOLEAN",)
    FUNCTION = "process_boolean"
    CATEGORY = "ControlAltAI Nodes/Logic"

    def process_boolean(self, boolean):
        return (boolean,)

NODE_CLASS_MAPPINGS = {
    "BooleanBasic": BooleanBasic,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "BooleanBasic": "Boolean Basic",
}
23 custom_nodes/controlaltai-nodes/boolean_reverse_node.py Normal file
@@ -0,0 +1,23 @@
class BooleanReverse:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "boolean": ("BOOLEAN", {"default": True}),
            },
        }

    RETURN_TYPES = ("BOOLEAN",)
    FUNCTION = "reverse_boolean"
    CATEGORY = "ControlAltAI Nodes/Logic"

    def reverse_boolean(self, boolean):
        return (not boolean,)

NODE_CLASS_MAPPINGS = {
    "BooleanReverse": BooleanReverse,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "BooleanReverse": "Boolean Reverse",
}
30 custom_nodes/controlaltai-nodes/choose_upscale_model_node.py Normal file
@@ -0,0 +1,30 @@
class ChooseUpscaleModel:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "upscale_model_1": ("UPSCALE_MODEL",),
                "upscale_model_2": ("UPSCALE_MODEL",),
                "use_model_1": ("BOOLEAN", {"default": True}),
            }
        }

    RETURN_TYPES = ("UPSCALE_MODEL",)
    RETURN_NAMES = ("upscale_model",)
    FUNCTION = "choose_upscale_model"

    CATEGORY = "ControlAltAI Nodes/Logic"

    def choose_upscale_model(self, upscale_model_1, upscale_model_2, use_model_1):
        if use_model_1:
            return (upscale_model_1,)
        else:
            return (upscale_model_2,)

NODE_CLASS_MAPPINGS = {
    "ChooseUpscaleModel": ChooseUpscaleModel,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "ChooseUpscaleModel": "Choose Upscale Model",
}
@@ -0,0 +1,75 @@
import torch
from comfy.ldm.modules import attention as comfy_attention
from comfy.ldm.flux import math as flux_math
from comfy.ldm.flux import layers as flux_layers


class AnyType(str):
    """A special class that is always equal in not equal comparisons"""
    def __ne__(self, __value: object) -> bool:
        return False


any_type = AnyType("*")


class FluxAttentionCleanup:
    def __init__(self):
        self.original_attention = comfy_attention.optimized_attention
        self.original_flux_attention = flux_math.attention
        self.original_flux_layers_attention = flux_layers.attention
        self.current_attn_mask = None
        print("FluxAttentionCleanup initialized")

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "any_input": (any_type, {}),
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("message",)
    FUNCTION = "cleanup_attention"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def cleanup_attention(self, any_input):
        """Skip cleanup during normal operation, but clean on workflow switch"""
        message = "Attention preserved for current workflow. Will clean on workflow switch."
        print("\n" + message)
        return (message,)

    def __del__(self):
        """Clean up attention when switching workflows"""
        try:
            print("\nStarting attention cleanup for workflow switch...")

            # Reset attention functions to original state
            flux_math.attention = self.original_flux_attention
            flux_layers.attention = self.original_flux_layers_attention

            # Clear attention mask
            if hasattr(flux_math.attention, 'keywords'):
                if 'attn_mask' in flux_math.attention.keywords:
                    flux_math.attention.keywords['attn_mask'] = None

            # Clear stored mask
            if self.current_attn_mask is not None:
                del self.current_attn_mask
                self.current_attn_mask = None

            # Force CUDA cleanup
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
                torch.cuda.synchronize()

            print("Workflow switch: Region Attention Cleanup Successful")
        except Exception:
            pass


# Node class mappings
NODE_CLASS_MAPPINGS = {
    "FluxAttentionCleanup": FluxAttentionCleanup
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "FluxAttentionCleanup": "Flux Attention Cleanup"
}
325 custom_nodes/controlaltai-nodes/flux_attention_control_node.py Normal file
@@ -0,0 +1,325 @@
import torch
from torch import Tensor
import torch.nn.functional as F
from typing import List, Dict, Optional, Tuple
from einops import rearrange
import comfy.model_management as model_management
from comfy.ldm.modules import attention as comfy_attention
from comfy.ldm.flux import math as flux_math
from comfy.ldm.flux import layers as flux_layers
import numpy as np
from PIL import Image, ImageFilter, ImageDraw
from functools import partial

# Protected xformers import
try:
    from xformers.ops import memory_efficient_attention as xattention
    has_xformers = True
except ImportError:
    has_xformers = False
    xattention = None


class FluxAttentionControl:
    def __init__(self):
        self.original_attention = comfy_attention.optimized_attention
        self.original_flux_attention = flux_math.attention
        self.original_flux_layers_attention = flux_layers.attention
        if not has_xformers:
            print("\n" + "="*70)
            print("\033[94mControlAltAI-Nodes: This node requires xformers to function.\033[0m")
            print("\033[33mPlease check \"xformers_instructions.txt\" in ComfyUI\\custom_nodes\\ControlAltAI-Nodes for how to install XFormers\033[0m")
            print("="*70 + "\n")
        print("FluxAttentionControl initialized")

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),
                "condition": ("CONDITIONING",),
                "latent_dimensions": ("LATENT",),
                "region1": ("REGION",),
                "number_of_regions": ("INT", {
                    "default": 1,
                    "min": 1,
                    "max": 3,
                    "step": 1,
                    "display": "Number of Regions"
                }),
                "enabled": ("BOOLEAN", {
                    "default": True,
                    "display": "Enable Regional Control"
                }),
                "feather_radius1": ("FLOAT", {
                    "default": 0.0,
                    "min": 0.0,
                    "max": 100.0,
                    "step": 1.0,
                    "display": "Feather Radius for Region 1"
                }),
            },
            "optional": {
                "region2": ("REGION",),
                "feather_radius2": ("FLOAT", {
                    "default": 0.0,
                    "min": 0.0,
                    "max": 100.0,
                    "step": 1.0,
                    "display": "Feather Radius for Region 2"
                }),
                "region3": ("REGION",),
                "feather_radius3": ("FLOAT", {
                    "default": 0.0,
                    "min": 0.0,
                    "max": 100.0,
                    "step": 1.0,
                    "display": "Feather Radius for Region 3"
                }),
            }
        }

    RETURN_TYPES = ("MODEL", "CONDITIONING",)
    RETURN_NAMES = ("model", "conditioning",)
    FUNCTION = "apply_attention_control"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def generate_region_mask(self, region: Dict, width: int, height: int, feather_radius: float) -> Image.Image:
        if region.get('bbox') is not None:
            x1, y1, x2, y2 = region['bbox']
            x1_px = int(x1 * width)
            y1_px = int(y1 * height)
            x2_px = int(x2 * width)
            y2_px = int(y2 * height)
            mask = Image.new('L', (width, height), 0)
            mask_draw = ImageDraw.Draw(mask)
            mask_draw.rectangle([x1_px, y1_px, x2_px, y2_px], fill=255)
            if feather_radius > 0:
                mask = mask.filter(ImageFilter.GaussianBlur(radius=feather_radius))
            print(f'Generating masks with {width}x{height} and [{x1}, {y1}, {x2}, {y2}], feather_radius={feather_radius}')
            return mask
        elif region.get('mask') is not None:
            mask = region['mask'][0].cpu().numpy()
            mask = (mask * 255).astype(np.uint8)
            mask = Image.fromarray(mask)
            mask = mask.resize((width, height))
            if feather_radius > 0:
                mask = mask.filter(ImageFilter.GaussianBlur(radius=feather_radius))
            return mask
        else:
            raise Exception('Unknown region type')

    def generate_test_mask(self, masks: List[Image.Image], height: int, width: int):
        hH, hW = int(height) // 16, int(width) // 16
        print(f'{width} {height} -> {hW} {hH}')

        lin_masks = []
        for mask in masks:
            mask = mask.convert('L')
            mask = torch.tensor(np.array(mask)).unsqueeze(0).unsqueeze(0) / 255.0  # Normalize to 0-1
            mask = F.interpolate(mask, (hH, hW), mode='bilinear', align_corners=False).flatten()
            lin_masks.append(mask)
        return lin_masks, hH, hW

    def prepare_attention_mask(self, lin_masks: List[torch.Tensor], region_strengths: List[float], Nx: int, emb_size: int, emb_len: int):
        """Prepare attention mask for three regions with per-region strengths."""
        total_len = emb_len + Nx
        n_regs = len(lin_masks)

        # Initialize attention mask and scales
        cross_mask = torch.zeros(total_len, total_len)
        q_scale = torch.ones(total_len)
        k_scale = torch.ones(total_len)

        # Indices for embeddings
        main_prompt_start = 0
        main_prompt_end = emb_size

        # Subprompt indices
        subprompt_starts = [emb_size * (i + 1) for i in range(n_regs)]
        subprompt_ends = [emb_size * (i + 2) for i in range(n_regs)]
||||
# Initialize position masks
|
||||
position_masks = torch.stack(lin_masks) # Shape: [n_regs, Nx]
|
||||
|
||||
# Normalize masks so that overlapping areas sum to 1
|
||||
position_masks_sum = position_masks.sum(dim=0)
|
||||
position_masks_normalized = position_masks / (position_masks_sum + 1e-8)
|
||||
|
||||
# Build attention masks and scales
|
||||
for i in range(n_regs):
|
||||
sp_start = subprompt_starts[i]
|
||||
sp_end = subprompt_ends[i]
|
||||
mask_i = position_masks_normalized[i]
|
||||
|
||||
# Scale embeddings based on mask and per-region strength
|
||||
strength = region_strengths[i]
|
||||
q_scale[sp_start:sp_end] = mask_i.mean() * strength
|
||||
k_scale[sp_start:sp_end] = mask_i.mean() * strength
|
||||
|
||||
# Create mask including tokens and positions
|
||||
m_with_tokens = torch.cat([torch.ones(emb_len), mask_i])
|
||||
mb = m_with_tokens > 0.0 # Include positions where mask > 0
|
||||
|
||||
# Block attention between positions not in mask and subprompt
|
||||
cross_mask[~mb, sp_start:sp_end] = 1
|
||||
cross_mask[sp_start:sp_end, ~mb] = 1
|
||||
|
||||
# Block attention between positions in region and main prompt
|
||||
positions_idx = (mask_i > 0.0).nonzero(as_tuple=True)[0] + emb_len
|
||||
cross_mask[positions_idx[:, None], main_prompt_start:main_prompt_end] = 1
|
||||
cross_mask[main_prompt_start:main_prompt_end, positions_idx[None, :]] = 1
|
||||
|
||||
# Block attention between subprompts
|
||||
for j in range(n_regs):
|
||||
if i != j:
|
||||
other_sp_start = subprompt_starts[j]
|
||||
other_sp_end = subprompt_ends[j]
|
||||
cross_mask[sp_start:sp_end, other_sp_start:other_sp_end] = 1
|
||||
cross_mask[other_sp_start:other_sp_end, sp_start:sp_end] = 1
|
||||
|
||||
# Ensure self-attention is allowed
|
||||
cross_mask.fill_diagonal_(0)
|
||||
|
||||
# Prepare scales for GPU
|
||||
q_scale = q_scale.reshape(1, 1, -1, 1).cuda()
|
||||
k_scale = k_scale.reshape(1, 1, -1, 1).cuda()
|
||||
|
||||
return cross_mask, q_scale, k_scale
|
||||
|
||||
def xformers_attention(self, q: Tensor, k: Tensor, v: Tensor, pe: Tensor,
|
||||
attn_mask: Optional[Tensor] = None,
|
||||
mask: Optional[Tensor] = None) -> Tensor: # Added mask parameter
|
||||
q, k = flux_math.apply_rope(q, k, pe)
|
||||
q = rearrange(q, "B H L D -> B L H D")
|
||||
k = rearrange(k, "B H L D -> B L H D")
|
||||
v = rearrange(v, "B H L D -> B L H D")
|
||||
|
||||
# Use attn_mask if provided, otherwise use the mask parameter
|
||||
attention_bias = attn_mask if attn_mask is not None else mask
|
||||
|
||||
if attention_bias is not None:
|
||||
x = xattention(q, k, v, attn_bias=attention_bias)
|
||||
else:
|
||||
x = xattention(q, k, v)
|
||||
|
||||
x = rearrange(x, "B L H D -> B L (H D)")
|
||||
return x
|
||||
|
||||
def apply_attention_control(self,
|
||||
model: object,
|
||||
condition: List,
|
||||
latent_dimensions: Dict,
|
||||
region1: Dict,
|
||||
number_of_regions: int,
|
||||
enabled: bool,
|
||||
feather_radius1: float = 0.0,
|
||||
region2: Optional[Dict] = None,
|
||||
feather_radius2: Optional[float] = 0.0,
|
||||
region3: Optional[Dict] = None,
|
||||
feather_radius3: Optional[float] = 0.0):
|
||||
|
||||
# Extract dimensions and embeddings first (moved before enabled check)
|
||||
latent = latent_dimensions["samples"]
|
||||
bs_l, n_ch, lH, lW = latent.shape
|
||||
text_emb = condition[0][0].clone()
|
||||
clip_emb = condition[0][1]['pooled_output'].clone()
|
||||
bs, emb_size, emb_dim = text_emb.shape
|
||||
iH, iW = lH * 8, lW * 8
|
||||
|
||||
if not enabled:
|
||||
# Restore original attention functions
|
||||
flux_math.attention = self.original_flux_attention
|
||||
flux_layers.attention = self.original_flux_layers_attention
|
||||
print("Regional control disabled. Restored original attention functions.")
|
||||
return (model, condition) # Return original condition when disabled
|
||||
|
||||
if enabled and not has_xformers:
|
||||
raise RuntimeError("Xformers is required for this node when enabled. Please install xformers.")
|
||||
|
||||
print(f'Region attention Node enabled: {enabled}, regions: {number_of_regions}')
|
||||
|
||||
# Extract dimensions and embeddings
|
||||
latent = latent_dimensions["samples"]
|
||||
bs_l, n_ch, lH, lW = latent.shape
|
||||
text_emb = condition[0][0].clone()
|
||||
clip_emb = condition[0][1]['pooled_output'].clone()
|
||||
bs, emb_size, emb_dim = text_emb.shape
|
||||
iH, iW = lH * 8, lW * 8
|
||||
|
||||
# Process active regions
|
||||
subprompts_embeds = []
|
||||
masks = []
|
||||
region_strengths = []
|
||||
|
||||
# Collect regions and feather radii
|
||||
regions = [region1, region2, region3]
|
||||
feather_radii = [feather_radius1, feather_radius2, feather_radius3]
|
||||
|
||||
for idx, region in enumerate(regions[:number_of_regions]):
|
||||
if region is not None and region.get('conditioning') is not None:
|
||||
# Get 'strength' from region or default to 1.0
|
||||
strength = region.get('strength', 1.0)
|
||||
region_strengths.append(strength)
|
||||
subprompt_emb = region['conditioning'][0][0]
|
||||
subprompts_embeds.append(subprompt_emb)
|
||||
# Use per-region feather_radius
|
||||
feather_radius = feather_radii[idx] if feather_radii[idx] is not None else 0.0
|
||||
masks.append(self.generate_region_mask(region, iW, iH, feather_radius))
|
||||
else:
|
||||
print(f"Region {idx+1} is None or has no conditioning")
|
||||
|
||||
if not subprompts_embeds:
|
||||
print("No active regions with conditioning found.")
|
||||
# Restore original attention functions
|
||||
flux_math.attention = self.original_flux_attention
|
||||
flux_layers.attention = self.original_flux_layers_attention
|
||||
return (model, condition)
|
||||
|
||||
n_regs = len(subprompts_embeds)
|
||||
|
||||
# Generate attention components
|
||||
lin_masks, hH, hW = self.generate_test_mask(masks, iH, iW)
|
||||
Nx = int(hH * hW)
|
||||
emb_len = emb_size * (n_regs + 1) # +1 for main prompt
|
||||
|
||||
# Create attention mask
|
||||
attn_mask, q_scale, k_scale = self.prepare_attention_mask(
|
||||
lin_masks, region_strengths, Nx, emb_size, emb_len)
|
||||
|
||||
# Format for xFormers
|
||||
device = torch.device('cuda')
|
||||
attn_dtype = torch.bfloat16 if model_management.should_use_bf16(device=device) else torch.float16
|
||||
|
||||
if attn_mask is not None:
|
||||
print(f'Applying attention masks: torch.Size([{attn_mask.shape[0]}, {attn_mask.shape[1]}])')
|
||||
L = attn_mask.shape[0]
|
||||
H = 24 # Number of heads in FLUX model
|
||||
pad = (8 - L % 8) % 8 # Ensure pad is between 0 and 7
|
||||
pad_L = L + pad
|
||||
mask_out = torch.zeros([bs, H, pad_L, pad_L], dtype=attn_dtype, device=device)
|
||||
mask_out[:, :, :L, :L] = attn_mask.to(device, dtype=attn_dtype)
|
||||
attn_mask = mask_out[:, :, :pad_L, :pad_L]
|
||||
|
||||
# Prepare final mask
|
||||
attn_mask_bool = attn_mask > 0.5
|
||||
attn_mask.masked_fill_(attn_mask_bool, float('-inf'))
|
||||
|
||||
# Override attention
|
||||
attn_mask_arg = attn_mask if enabled else None
|
||||
override_attention = partial(self.xformers_attention, attn_mask=attn_mask_arg)
|
||||
flux_math.attention = override_attention
|
||||
flux_layers.attention = override_attention
|
||||
|
||||
# Create extended conditioning
|
||||
extended_condition = torch.cat([text_emb] + subprompts_embeds, dim=1)
|
||||
|
||||
return (model, [[extended_condition, {'pooled_output': clip_emb}]])
|
||||
|
||||
# Node class mappings
|
||||
NODE_CLASS_MAPPINGS = {
|
||||
"FluxAttentionControl": FluxAttentionControl
|
||||
}
|
||||
|
||||
NODE_DISPLAY_NAME_MAPPINGS = {
|
||||
"FluxAttentionControl": "Flux Attention Control"
|
||||
}
|
||||
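In `apply_attention_control`, the `pad = (8 - L % 8) % 8` step rounds the attention-mask side length up to a multiple of 8 before handing it to xformers. The same arithmetic in isolation (the helper name is illustrative, not part of the node):

```python
def pad_to_multiple_of_8(L: int):
    """Return (pad, padded_length) with padded_length a multiple of 8."""
    pad = (8 - L % 8) % 8  # the outer % 8 makes pad 0 when L is already aligned
    return pad, L + pad

print(pad_to_multiple_of_8(4602))  # -> (6, 4608)
print(pad_to_multiple_of_8(4608))  # -> (0, 4608)
```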
custom_nodes/controlaltai-nodes/flux_controlnet_node.py (new file, 40 lines)
@@ -0,0 +1,40 @@
class FluxControlNetApply:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "conditioning": ("CONDITIONING", ),
                "control_net": ("CONTROL_NET", ),
                "image": ("IMAGE", ),
                "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01})
            }
        }

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "flux_controlnet"
    CATEGORY = "ControlAltAI Nodes/Flux"

    def flux_controlnet(self, conditioning, control_net, image, strength):
        if strength == 0:
            return (conditioning,)

        c = []
        control_hint = image.movedim(-1, 1)
        for t in conditioning:
            n = [t[0], t[1].copy()]
            c_net = control_net.copy().set_cond_hint(control_hint, strength)
            if 'control' in t[1]:
                c_net.set_previous_controlnet(t[1]['control'])
            n[1]['control'] = c_net
            n[1]['control_apply_to_uncond'] = False  # Ensures it is only applied to the positive conditioning
            c.append(n)

        return (c,)


NODE_CLASS_MAPPINGS = {
    "FluxControlNetApply": FluxControlNetApply,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "FluxControlNetApply": "Flux ControlNet",
}
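The loop in `flux_controlnet` copies each conditioning entry's options dict (`t[1].copy()`) before attaching the ControlNet, so the caller's original conditioning is left untouched. A small sketch of why the shallow copy matters (plain Python, hypothetical data):

```python
# A minimal stand-in for a ComfyUI conditioning list: (embedding, options-dict) pairs
conditioning = [("emb", {"pooled_output": "clip"})]

# Copy the per-entry options dict before mutating, as the node does
entry = conditioning[0]
options = entry[1].copy()
options["control"] = "c_net_instance"

# The original entry is unchanged; only the copy carries the new key
print("control" in entry[1], "control" in options)  # -> False True
```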
custom_nodes/controlaltai-nodes/flux_resolution_cal_node.py (new file, 143 lines)
@@ -0,0 +1,143 @@
from PIL import Image, ImageDraw, ImageFont
import numpy as np
import torch


def pil2tensor(image):
    """Convert a PIL image to a tensor in the expected format."""
    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)


class FluxResolutionNode:
    @classmethod
    def INPUT_TYPES(cls):
        # Generate megapixel options from 0.1 to 2.5 in 0.1 increments
        megapixel_options = [f"{i / 10:.1f}" for i in range(1, 26)]

        return {
            "required": {
                "megapixel": (megapixel_options, {"default": "1.0"}),
                "aspect_ratio": ([
                    "1:1 (Perfect Square)",
                    "2:3 (Classic Portrait)", "3:4 (Golden Ratio)", "3:5 (Elegant Vertical)", "4:5 (Artistic Frame)", "5:7 (Balanced Portrait)", "5:8 (Tall Portrait)",
                    "7:9 (Modern Portrait)", "9:16 (Slim Vertical)", "9:19 (Tall Slim)", "9:21 (Ultra Tall)", "9:32 (Skyline)",
                    "3:2 (Golden Landscape)", "4:3 (Classic Landscape)", "5:3 (Wide Horizon)", "5:4 (Balanced Frame)", "7:5 (Elegant Landscape)", "8:5 (Cinematic View)",
                    "9:7 (Artful Horizon)", "16:9 (Panorama)", "19:9 (Cinematic Ultrawide)", "21:9 (Epic Ultrawide)", "32:9 (Extreme Ultrawide)"
                ], {"default": "1:1 (Perfect Square)"}),
                "divisible_by": (["8", "16", "32", "64"], {"default": "64"}),
                "custom_ratio": ("BOOLEAN", {"default": False, "label_on": "Enable", "label_off": "Disable"}),
            },
            "optional": {
                "custom_aspect_ratio": ("STRING", {"default": "1:1"}),
            }
        }

    RETURN_TYPES = ("INT", "INT", "STRING", "IMAGE")
    RETURN_NAMES = ("width", "height", "resolution", "preview")
    FUNCTION = "calculate_dimensions"
    CATEGORY = "ControlAltAI Nodes/Flux"
    OUTPUT_NODE = True

    def create_preview_image(self, width, height, resolution, ratio_display):
        # 1024x1024 preview canvas
        preview_size = (1024, 1024)
        image = Image.new('RGB', preview_size, (0, 0, 0))  # Black background
        draw = ImageDraw.Draw(image)

        # Draw a grid with grey lines
        grid_color = '#333333'  # Dark grey
        grid_spacing = 50
        for x in range(0, preview_size[0], grid_spacing):
            draw.line([(x, 0), (x, preview_size[1])], fill=grid_color)
        for y in range(0, preview_size[1], grid_spacing):
            draw.line([(0, y), (preview_size[0], y)], fill=grid_color)

        # Calculate the preview box dimensions
        preview_width = 800
        preview_height = int(preview_width * (height / width))

        # Adjust if the box is too tall
        if preview_height > 800:
            preview_height = 800
            preview_width = int(preview_height * (width / height))

        # Center the box
        x_offset = (preview_size[0] - preview_width) // 2
        y_offset = (preview_size[1] - preview_height) // 2

        # Draw the aspect ratio box with a thick outline
        draw.rectangle(
            [(x_offset, y_offset), (x_offset + preview_width, y_offset + preview_height)],
            outline='red',
            width=4
        )

        # Add text; fall back to the default font if TrueType loading fails
        text_y = y_offset + preview_height // 2
        try:
            # Resolution text in red
            draw.text((preview_size[0] // 2, text_y),
                      f"{width}x{height}",
                      fill='red',
                      anchor="mm",
                      font=ImageFont.truetype("arial.ttf", 48))

            # Aspect ratio text in red
            draw.text((preview_size[0] // 2, text_y + 60),
                      f"({ratio_display})",
                      fill='red',
                      anchor="mm",
                      font=ImageFont.truetype("arial.ttf", 36))

            # Resolution text at the bottom in white
            draw.text((preview_size[0] // 2, y_offset + preview_height + 60),
                      f"Resolution: {resolution}",
                      fill='white',
                      anchor="mm",
                      font=ImageFont.truetype("arial.ttf", 32))
        except OSError:
            # Fallback if font loading fails
            draw.text((preview_size[0] // 2, text_y), f"{width}x{height}", fill='red', anchor="mm")
            draw.text((preview_size[0] // 2, text_y + 60), f"({ratio_display})", fill='red', anchor="mm")
            draw.text((preview_size[0] // 2, y_offset + preview_height + 60), f"Resolution: {resolution}", fill='white', anchor="mm")

        # Convert to a tensor using the helper function
        return pil2tensor(image)

    def calculate_dimensions(self, megapixel, aspect_ratio, divisible_by, custom_ratio, custom_aspect_ratio=None):
        megapixel = float(megapixel)
        round_to = int(divisible_by)

        if custom_ratio and custom_aspect_ratio:
            numeric_ratio = custom_aspect_ratio
            ratio_display = custom_aspect_ratio  # Keep the original format for display
        else:
            numeric_ratio = aspect_ratio.split(' ')[0]
            ratio_display = numeric_ratio

        width_ratio, height_ratio = map(int, numeric_ratio.split(':'))

        total_pixels = megapixel * 1_000_000
        dimension = (total_pixels / (width_ratio * height_ratio)) ** 0.5
        width = int(dimension * width_ratio)
        height = int(dimension * height_ratio)

        # Apply the user-selected rounding
        width = round(width / round_to) * round_to
        height = round(height / round_to) * round_to

        resolution = f"{width} x {height}"

        # Generate the preview image with the original ratio format
        preview = self.create_preview_image(width, height, resolution, ratio_display)

        return width, height, resolution, preview


NODE_CLASS_MAPPINGS = {
    "FluxResolutionNode": FluxResolutionNode,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "FluxResolutionNode": "Flux Resolution Calculator",
}
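The math in `calculate_dimensions` solves `w * h = megapixels * 1e6` subject to `w / h = width_ratio / height_ratio`, then snaps both sides to the chosen multiple. A standalone sketch of that arithmetic (plain Python, no ComfyUI; the helper name is illustrative):

```python
def flux_dimensions(megapixel: float, ratio: str, round_to: int = 64):
    """Compute width/height for a target megapixel count and aspect ratio."""
    width_ratio, height_ratio = map(int, ratio.split(':'))
    total_pixels = megapixel * 1_000_000
    # Base unit so that (unit * width_ratio) * (unit * height_ratio) == total_pixels
    unit = (total_pixels / (width_ratio * height_ratio)) ** 0.5
    width = round(int(unit * width_ratio) / round_to) * round_to
    height = round(int(unit * height_ratio) / round_to) * round_to
    return width, height

print(flux_dimensions(1.0, "1:1"))   # -> (1024, 1024): 1000x1000 snapped to 64
print(flux_dimensions(1.0, "16:9"))  # -> (1344, 768)
```

Note the snapping can overshoot the megapixel target slightly (1024 x 1024 is about 1.05 MP), which is the intended trade-off for model-friendly dimensions.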
custom_nodes/controlaltai-nodes/flux_sampler_node.py (new file, 66 lines)
@@ -0,0 +1,66 @@
import comfy.samplers
import torch
import comfy.sample
import comfy.model_management
import comfy.utils
import latent_preview

FLUX_SAMPLER_NAMES = [
    "euler", "heun", "heunpp2", "dpm_2", "lms", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_2m",
    "ipndm", "ipndm_v", "deis", "ddim", "uni_pc", "uni_pc_bh2"
]

FLUX_SCHEDULER_NAMES = ["simple", "normal", "sgm_uniform", "ddim_uniform", "beta"]


class FluxSampler:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "model": ("MODEL",),
                "conditioning": ("CONDITIONING",),
                "latent_image": ("LATENT",),
                "sampler_name": (FLUX_SAMPLER_NAMES, {"default": "euler"}),
                "scheduler": (FLUX_SCHEDULER_NAMES, {"default": "beta"}),
                "steps": ("INT", {"default": 30, "min": 1, "max": 10000}),
                "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "noise_seed": ("INT", {"default": 143220275975594, "min": 0, "max": 0xffffffffffffffff}),
            }
        }

    RETURN_TYPES = ("LATENT",)
    RETURN_NAMES = ("latent",)
    FUNCTION = "sample"
    CATEGORY = "ControlAltAI Nodes/Flux"

    def sample(self, model, conditioning, latent_image, sampler_name, scheduler, steps, denoise, noise_seed):
        device = comfy.model_management.get_torch_device()
        sampler = comfy.samplers.KSampler(model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=denoise)

        latent = latent_image.copy()
        latent_image = latent["samples"]

        # Handle noise_mask if present
        noise_mask = latent.get("noise_mask", None)

        noise = comfy.sample.prepare_noise(latent_image, noise_seed)

        positive = conditioning
        negative = []  # Empty list for negative conditioning

        callback = latent_preview.prepare_callback(model, steps)
        disable_pbar = not comfy.utils.PROGRESS_BAR_ENABLED

        samples = sampler.sample(noise, positive, negative, cfg=1.0, latent_image=latent_image,
                                 force_full_denoise=True, denoise_mask=noise_mask, callback=callback,
                                 disable_pbar=disable_pbar, seed=noise_seed)

        out = latent.copy()
        out["samples"] = samples
        return (out,)


NODE_CLASS_MAPPINGS = {
    "FluxSampler": FluxSampler
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "FluxSampler": "Flux Sampler"
}
@@ -0,0 +1,98 @@
class FluxUnionControlNetApply:
    # Union ControlNet control-type mapping
    UNION_CONTROLNET_TYPES = {
        "canny": 0,
        "tile": 1,
        "depth": 2,
        "blur": 3,
        "pose": 4,
        "gray": 5,
        "low quality": 6,
    }

    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "conditioning": ("CONDITIONING", ),
                "control_net": ("CONTROL_NET", ),
                "image": ("IMAGE", ),
                "union_controlnet_type": (list(s.UNION_CONTROLNET_TYPES.keys()), ),
                "strength": ("FLOAT", {
                    "default": 1.0,
                    "min": 0.0,
                    "max": 10.0,
                    "step": 0.01
                }),
                "start_percent": ("FLOAT", {
                    "default": 0.0,
                    "min": 0.0,
                    "max": 1.0,
                    "step": 0.001
                }),
                "end_percent": ("FLOAT", {
                    "default": 1.0,
                    "min": 0.0,
                    "max": 1.0,
                    "step": 0.001
                }),
                "vae": ("VAE", ),
            },
        }

    RETURN_TYPES = ("CONDITIONING", "VAE")
    FUNCTION = "apply_flux_union_controlnet"
    CATEGORY = "ControlAltAI Nodes/Flux"

    def apply_flux_union_controlnet(self, conditioning, control_net, image, union_controlnet_type, strength, start_percent, end_percent, vae):
        if strength == 0:
            return (conditioning, vae)

        # Map 'union_controlnet_type' to its numeric 'control_type'
        control_type = self.UNION_CONTROLNET_TYPES[union_controlnet_type]
        control_type_list = [control_type]

        # Set the 'control_type' using 'set_extra_arg'
        control_net = control_net.copy()
        control_net.set_extra_arg("control_type", control_type_list)

        # Process the image to get the control hint (BHWC -> BCHW)
        control_hint = image.movedim(-1, 1)

        # Apply the ControlNet to the positive conditioning
        cnets = {}
        c = []
        for t in conditioning:
            d = t[1].copy()
            prev_cnet = d.get('control', None)

            # Create a unique key for caching
            cache_key = (prev_cnet, tuple(control_net.extra_args.get('control_type', [])))

            if cache_key in cnets:
                c_net_instance = cnets[cache_key]
            else:
                # Copy the control_net and set the conditioning hint
                c_net_instance = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae)
                c_net_instance.set_previous_controlnet(prev_cnet)
                cnets[cache_key] = c_net_instance

            d['control'] = c_net_instance
            d['control_apply_to_uncond'] = False

            n = [t[0], d]
            c.append(n)

        return (c, vae)


NODE_CLASS_MAPPINGS = {
    "FluxUnionControlNetApply": FluxUnionControlNetApply,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "FluxUnionControlNetApply": "Flux Union ControlNet",
}
custom_nodes/controlaltai-nodes/get_image_size_ratio_node.py (new file, 38 lines)
@@ -0,0 +1,38 @@
class GetImageSizeRatio:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",)
            }
        }

    RETURN_TYPES = ("INT", "INT", "STRING")
    RETURN_NAMES = ("width", "height", "ratio")
    FUNCTION = "get_image_size_ratio"

    CATEGORY = "ControlAltAI Nodes/Image"

    def get_image_size_ratio(self, image):
        _, height, width, _ = image.shape

        gcd = self.greatest_common_divisor(width, height)
        ratio_width = width // gcd
        ratio_height = height // gcd

        ratio = f"{ratio_width}:{ratio_height}"

        return width, height, ratio

    def greatest_common_divisor(self, a, b):
        while b != 0:
            a, b = b, a % b
        return a


NODE_CLASS_MAPPINGS = {
    "GetImageSizeRatio": GetImageSizeRatio,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "GetImageSizeRatio": "Get Image Size & Ratio",
}
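The node reduces width:height by their greatest common divisor (Euclid's algorithm) to get the simplest aspect-ratio string. The same reduction in isolation, using the stdlib `math.gcd` (the helper name is illustrative):

```python
from math import gcd

def simplify_ratio(width: int, height: int) -> str:
    """Reduce width:height to lowest terms, as the node does."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(simplify_ratio(1920, 1080))  # -> 16:9 (gcd is 120)
print(simplify_ratio(1024, 1024))  # -> 1:1
```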
custom_nodes/controlaltai-nodes/hidream_resolution_node.py (new file, 120 lines)
@@ -0,0 +1,120 @@
from PIL import Image, ImageDraw, ImageFont
import numpy as np
import torch


def pil2tensor(image):
    """Convert a PIL image to a tensor in the expected format."""
    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)


class HiDreamResolutionNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "resolution": ([
                    "1:1 (Perfect Square)",
                    "3:4 (Standard Portrait)",
                    "2:3 (Classic Portrait)",
                    "9:16 (Widescreen Portrait)",
                    "4:3 (Standard Landscape)",
                    "3:2 (Classic Landscape)",
                    "16:9 (Widescreen Landscape)",
                ], {"default": "1:1 (Perfect Square)"}),
            }
        }

    RETURN_TYPES = ("INT", "INT", "STRING", "IMAGE")
    RETURN_NAMES = ("width", "height", "resolution", "preview")
    FUNCTION = "get_dimensions"
    CATEGORY = "ControlAltAI Nodes/HiDream"
    OUTPUT_NODE = True

    def create_preview_image(self, width, height, resolution):
        # 1024x1024 preview canvas
        preview_size = (1024, 1024)
        image = Image.new('RGB', preview_size, (0, 0, 0))  # Black background
        draw = ImageDraw.Draw(image)

        # Draw a grid with grey lines
        grid_color = '#333333'  # Dark grey
        grid_spacing = 50
        for x in range(0, preview_size[0], grid_spacing):
            draw.line([(x, 0), (x, preview_size[1])], fill=grid_color)
        for y in range(0, preview_size[1], grid_spacing):
            draw.line([(0, y), (preview_size[0], y)], fill=grid_color)

        # Calculate the preview box dimensions
        preview_width = 800
        preview_height = int(preview_width * (height / width))

        # Adjust if the box is too tall
        if preview_height > 800:
            preview_height = 800
            preview_width = int(preview_height * (width / height))

        # Center the box
        x_offset = (preview_size[0] - preview_width) // 2
        y_offset = (preview_size[1] - preview_height) // 2

        # Draw the aspect ratio box with a thick outline
        draw.rectangle(
            [(x_offset, y_offset), (x_offset + preview_width, y_offset + preview_height)],
            outline='red',
            width=4
        )

        # Add text; fall back to the default font if TrueType loading fails
        text_y = y_offset + preview_height // 2
        try:
            # Resolution text in red
            draw.text((preview_size[0] // 2, text_y),
                      f"{width}x{height}",
                      fill='red',
                      anchor="mm",
                      font=ImageFont.truetype("arial.ttf", 48))
        except OSError:
            # Fallback if font loading fails
            draw.text((preview_size[0] // 2, text_y), f"{width}x{height}", fill='red', anchor="mm")

        # Convert to a tensor using the helper function
        return pil2tensor(image)

    def get_dimensions(self, resolution):
        # Map each aspect-ratio preset to its actual dimensions
        resolution_map = {
            "1:1 (Perfect Square)": (1024, 1024),
            "3:4 (Standard Portrait)": (880, 1168),
            "2:3 (Classic Portrait)": (832, 1248),
            "9:16 (Widescreen Portrait)": (768, 1360),
            "4:3 (Standard Landscape)": (1168, 880),
            "3:2 (Classic Landscape)": (1248, 832),
            "16:9 (Widescreen Landscape)": (1360, 768)
        }

        # Look up the dimensions
        width, height = resolution_map[resolution]

        # Resolution as a string
        resolution_str = f"{width} x {height}"

        # Generate the preview image
        preview = self.create_preview_image(width, height, resolution_str)

        return width, height, resolution_str, preview


def gcd(a, b):
    """Calculate the greatest common divisor of a and b."""
    while b:
        a, b = b, a % b
    return a


NODE_CLASS_MAPPINGS = {
    "HiDreamResolutionNode": HiDreamResolutionNode,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "HiDreamResolutionNode": "HiDream Resolution",
}
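All of the HiDream presets above land near one megapixel with both sides divisible by 16. A quick standalone check of that property against the same preset table (plain Python):

```python
resolution_map = {
    "1:1": (1024, 1024),
    "3:4": (880, 1168),
    "2:3": (832, 1248),
    "9:16": (768, 1360),
    "4:3": (1168, 880),
    "3:2": (1248, 832),
    "16:9": (1360, 768),
}

for name, (w, h) in resolution_map.items():
    assert w % 16 == 0 and h % 16 == 0, name
    # Every preset stays within about 5% of 1.0 MP
    assert 0.95 <= (w * h) / 1_000_000 <= 1.05, name

print("all presets ~1 MP and divisible by 16")
```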
@@ -0,0 +1,38 @@
class IntegerSettingsAdvanced:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "setting_1": ("BOOLEAN", {"default": True, "label_on": "Enable", "label_off": "Disable"}),
                "setting_2": ("BOOLEAN", {"default": False, "label_on": "Enable", "label_off": "Disable"}),
                "setting_3": ("BOOLEAN", {"default": False, "label_on": "Enable", "label_off": "Disable"}),
            },
        }

    RETURN_TYPES = ("INT",)
    RETURN_NAMES = ("setting_value",)
    FUNCTION = "integer_settings_advanced"

    CATEGORY = "ControlAltAI Nodes/Logic"

    def integer_settings_advanced(self, setting_1, setting_2, setting_3):
        """
        Return an integer based on which setting is enabled.
        Due to mutual exclusion (handled by JS), only one should be True.
        Priority order: setting_3 > setting_2 > setting_1
        """
        if setting_3:
            return (3,)
        elif setting_2:
            return (2,)
        else:
            # Default to 1 (setting_1 or fallback)
            return (1,)


NODE_CLASS_MAPPINGS = {
    "IntegerSettingsAdvanced": IntegerSettingsAdvanced,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "IntegerSettingsAdvanced": "Integer Settings Advanced",
}
custom_nodes/controlaltai-nodes/integer_settings_node.py (new file, 28 lines)
@@ -0,0 +1,28 @@
class IntegerSettings:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "setting": ("BOOLEAN", {"default": False, "label_on": "Enable", "label_off": "Disable"}),
            },
        }

    RETURN_TYPES = ("INT",)
    RETURN_NAMES = ("setting_value",)
    FUNCTION = "integer_settings"

    CATEGORY = "ControlAltAI Nodes/Logic"

    def integer_settings(self, setting):
        # Map the single boolean setting to an integer
        status = 2 if setting else 1
        return (status,)


NODE_CLASS_MAPPINGS = {
    "IntegerSettings": IntegerSettings,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "IntegerSettings": "Integer Settings",
}
custom_nodes/controlaltai-nodes/noise_plus_blend_node.py (new file, 89 lines)
@@ -0,0 +1,89 @@
import numpy as np
from PIL import Image, ImageChops
import torch


class NoisePlusBlend:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "noise_scale": ("FLOAT", {"default": 0.40, "min": 0.00, "max": 100.00, "step": 0.01}),
                "blend_opacity": ("INT", {"default": 20, "min": 0, "max": 100}),
            },
            "optional": {
                "mask": ("MASK",),
            }
        }

    RETURN_TYPES = ("IMAGE", "IMAGE")
    RETURN_NAMES = ("blended_image_output", "noise_output")
    FUNCTION = "noise_plus_blend"

    CATEGORY = "ControlAltAI Nodes/Image"

    def tensor_to_pil(self, tensor_image):
        """Converts a tensor to a PIL Image."""
        tensor_image = tensor_image.squeeze(0)  # Remove batch dimension if it exists
        pil_image = Image.fromarray((tensor_image.cpu().numpy() * 255).astype(np.uint8))
        return pil_image

    def pil_to_tensor(self, pil_image):
        """Converts a PIL image back to a tensor."""
        return torch.from_numpy(np.array(pil_image).astype(np.float32) / 255).unsqueeze(0)

    def generate_gaussian_noise(self, width, height, noise_scale=0.05):
        """Generates Gaussian noise with a given scale."""
        noise = np.random.normal(128, 128 * noise_scale, (height, width, 3))
        # Clip before casting so out-of-range samples saturate instead of wrapping around
        noise = np.clip(noise, 0, 255).astype(np.uint8)
        return Image.fromarray(noise)

    def soft_light_blend(self, base_image, noise_image, mask=None, opacity=15):
        """Blends noise over the base image using soft light, applying the mask if present."""
        # Resize noise to match the base image size
        noise_image = noise_image.resize(base_image.size)

        base_image = base_image.convert('RGB')
        noise_image = noise_image.convert('RGB')

        noise_blended = ImageChops.soft_light(base_image, noise_image)
        blended_image = Image.blend(base_image, noise_blended, opacity / 100)

        # Apply the mask only if one is provided
        if mask is not None:
            mask_pil = self.tensor_to_pil(mask).convert('L')
            mask_resized = mask_pil.resize(base_image.size)

            # Invert the mask so white areas receive noise and black areas are protected
            inverted_mask = ImageChops.invert(mask_resized)

            # Composite: base image where the inverted mask is white, blended result elsewhere
            blended_image = Image.composite(base_image, blended_image, inverted_mask)

        return blended_image

    def noise_plus_blend(self, image, noise_scale=0.05, blend_opacity=15, mask=None):
        """Main function to generate noise, blend, and return results."""
        # Convert tensor image to PIL
        base_image = self.tensor_to_pil(image)
        image_size = base_image.size

        # Generate Gaussian noise matching the size of the input image
        noise_image = self.generate_gaussian_noise(image_size[0], image_size[1], noise_scale)

        # Blend the noise with the base image using soft light
        blended_image = self.soft_light_blend(base_image, noise_image, mask, blend_opacity)

        # Convert the final images back to tensors
        noise_tensor = self.pil_to_tensor(noise_image)
        blended_tensor = self.pil_to_tensor(blended_image)

        # Return both the blended image and the raw noise as tensors
        return blended_tensor, noise_tensor


NODE_CLASS_MAPPINGS = {
    "NoisePlusBlend": NoisePlusBlend,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "NoisePlusBlend": "Noise Plus Blend",
}
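The core of this node (soft-light blend at partial opacity) can be exercised directly with PIL, outside of any ComfyUI graph. A minimal sketch, assuming only numpy and Pillow are installed; the sizes and opacity here are illustrative, not the node's defaults:

```python
import numpy as np
from PIL import Image, ImageChops

rng = np.random.default_rng(0)
# A flat mid-gray base and a clipped Gaussian noise field, both 64x64 RGB
base = Image.fromarray(np.full((64, 64, 3), 120, dtype=np.uint8))
noise = Image.fromarray(np.clip(rng.normal(128, 128 * 0.4, (64, 64, 3)), 0, 255).astype(np.uint8))

# Soft-light blend, then mix back with the base at 20% opacity
blended = Image.blend(base, ImageChops.soft_light(base, noise), 20 / 100)
arr = np.array(blended)
print(arr.shape, arr.dtype)  # (64, 64, 3) uint8
```

At 0% opacity the result is exactly the base image; raising the opacity slider scales how much of the soft-light result shows through.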
custom_nodes/controlaltai-nodes/perturbation_texture_node.py (new file, 229 lines)
@@ -0,0 +1,229 @@
import numpy as np
from PIL import Image, ImageChops
import torch


class PerturbationTexture:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "noise_scale": ("FLOAT", {"default": 0.5, "min": 0.00, "max": 1.00, "step": 0.01}),
                "texture_strength": ("INT", {"default": 50, "min": 0, "max": 100}),
                "texture_type": (["Film Grain", "Skin Pore", "Natural", "Fine Detail"], {"default": "Skin Pore"}),
                "frequency": ("FLOAT", {"default": 1.0, "min": 0.2, "max": 5.0, "step": 0.1}),
                "perturbation_factor": ("FLOAT", {"default": 0.30, "min": 0.01, "max": 0.5, "step": 0.01}),
                "use_mask": ("BOOLEAN", {"default": False}),
            },
            "optional": {
                "mask": ("MASK",),
                "seed": ("INT", {"default": -1}),
            }
        }

    RETURN_TYPES = ("IMAGE", "IMAGE")
    RETURN_NAMES = ("textured_image_output", "texture_layer")
    FUNCTION = "apply_perturbation_texture"

    CATEGORY = "ControlAltAI Nodes/Image"

    def tensor_to_pil(self, tensor_image):
        """Converts a tensor to a PIL Image."""
        tensor_image = tensor_image.squeeze(0)  # Remove batch dimension if it exists
        pil_image = Image.fromarray((tensor_image.cpu().numpy() * 255).astype(np.uint8))
        return pil_image

    def pil_to_tensor(self, pil_image):
        """Converts a PIL image back to a tensor."""
        return torch.from_numpy(np.array(pil_image).astype(np.float32) / 255).unsqueeze(0)

    def generate_adaptive_texture(self, base_image, noise_scale, texture_type, frequency, perturbation_factor, texture_strength, seed=None):
        """Generate texture with adaptive color matching."""
        width, height = base_image.size

        # Set seed for reproducibility if provided
        if seed is not None and seed >= 0:
            np.random.seed(seed)

        # Convert base image to numpy array
        base_np = np.array(base_image).astype(np.float32) / 255.0

        # Generate noise patterns based on texture type
        noise_patterns = self.generate_noise_patterns(width, height, noise_scale, texture_type, frequency)

        # Convert noise to the -1..1 range for proper mixing
        noise_normalized = (noise_patterns.astype(np.float32) - 128.0) / 128.0

        # texture_strength controls the final intensity of the perturbation
        effective_perturbation = perturbation_factor * (texture_strength / 100.0)

        # Apply noise as color-matched variations around the base color
        result = base_np + (noise_normalized * effective_perturbation)

        # Clamp to the valid range
        result = np.clip(result, 0, 1)

        # Create a more visible texture layer for preview/debugging
        texture_layer = base_np + (noise_normalized * perturbation_factor * 2.0)
        texture_layer = np.clip(texture_layer, 0, 1)

        final_image = Image.fromarray((result * 255).astype(np.uint8))
        texture_image = Image.fromarray((texture_layer * 255).astype(np.uint8))

        return final_image, texture_image

    def generate_noise_patterns(self, width, height, noise_scale, texture_type, frequency):
        """Generates noise patterns optimized for each texture type."""

        def safe_resize(arr, target_height, target_width):
            # Min-max normalize to 0-255 before the uint8 round-trip so negative
            # samples saturate instead of wrapping around, then restore the range
            lo, hi = float(arr.min()), float(arr.max())
            scale = (hi - lo) if hi > lo else 1.0
            img = Image.fromarray(((arr - lo) * 255.0 / scale).astype(np.uint8))
            img = img.resize((target_width, target_height), Image.LANCZOS)
            return np.array(img).astype(np.float32) / 255.0 * scale + lo

        if texture_type == "Film Grain":
            # Film grain: larger, more irregular pattern with per-channel variation
            base_noise_r = np.random.normal(128, 64 * noise_scale, (height, width))
            base_noise_g = np.random.normal(128, 64 * noise_scale, (height, width))
            base_noise_b = np.random.normal(128, 64 * noise_scale, (height, width))

            # Add larger-scale variation for film-like clustering
            large_scale_h = max(4, int(height / (4 * frequency)))
            large_scale_w = max(4, int(width / (4 * frequency)))
            large_scale_r = np.random.normal(0, 30 * noise_scale, (large_scale_h, large_scale_w))
            large_scale_g = np.random.normal(0, 30 * noise_scale, (large_scale_h, large_scale_w))
            large_scale_b = np.random.normal(0, 30 * noise_scale, (large_scale_h, large_scale_w))

            large_scale_r = safe_resize(large_scale_r, height, width)
            large_scale_g = safe_resize(large_scale_g, height, width)
            large_scale_b = safe_resize(large_scale_b, height, width)

            combined_r = np.clip(base_noise_r * 0.7 + large_scale_r * 0.3, 0, 255)
            combined_g = np.clip(base_noise_g * 0.7 + large_scale_g * 0.3, 0, 255)
            combined_b = np.clip(base_noise_b * 0.7 + large_scale_b * 0.3, 0, 255)

        elif texture_type == "Skin Pore":
            # Fine, subtle texture optimized for skin, with reduced intensity
            base_scale = noise_scale * 0.6  # More subtle for natural skin texture

            # Subtle per-channel variation for realistic skin texture
            base_noise_r = np.random.normal(128, 32 * base_scale, (height, width))
            base_noise_g = np.random.normal(128, 28 * base_scale, (height, width))
            base_noise_b = np.random.normal(128, 24 * base_scale, (height, width))

            # Fine pore-like details at higher frequency
            fine_h = max(4, int(height * frequency * 1.5))
            fine_w = max(4, int(width * frequency * 1.5))
            fine_noise_r = np.random.normal(0, 20 * base_scale, (fine_h, fine_w))
            fine_noise_g = np.random.normal(0, 18 * base_scale, (fine_h, fine_w))
            fine_noise_b = np.random.normal(0, 16 * base_scale, (fine_h, fine_w))

            fine_noise_r = safe_resize(fine_noise_r, height, width)
            fine_noise_g = safe_resize(fine_noise_g, height, width)
            fine_noise_b = safe_resize(fine_noise_b, height, width)

            combined_r = np.clip(base_noise_r + fine_noise_r * 0.8, 0, 255)
            combined_g = np.clip(base_noise_g + fine_noise_g * 0.8, 0, 255)
            combined_b = np.clip(base_noise_b + fine_noise_b * 0.8, 0, 255)

        elif texture_type == "Natural":
            # Multi-layered natural texture with organic frequency distribution
            base_noise_r = np.random.normal(128, 48 * noise_scale, (height, width))
            base_noise_g = np.random.normal(128, 44 * noise_scale, (height, width))
            base_noise_b = np.random.normal(128, 40 * noise_scale, (height, width))

            # Multiple frequency layers for natural complexity
            frequencies = [frequency * 2, frequency, frequency / 3]
            weights = [0.5, 0.3, 0.2]

            combined_r = base_noise_r.copy()
            combined_g = base_noise_g.copy()
            combined_b = base_noise_b.copy()

            for freq, weight in zip(frequencies, weights):
                f_h = max(4, int(height * freq))
                f_w = max(4, int(width * freq))

                layer_r = np.random.normal(0, 30 * noise_scale * weight, (f_h, f_w))
                layer_g = np.random.normal(0, 28 * noise_scale * weight, (f_h, f_w))
                layer_b = np.random.normal(0, 26 * noise_scale * weight, (f_h, f_w))

                layer_r = safe_resize(layer_r, height, width)
                layer_g = safe_resize(layer_g, height, width)
                layer_b = safe_resize(layer_b, height, width)

                combined_r += layer_r * weight
                combined_g += layer_g * weight
                combined_b += layer_b * weight

            combined_r = np.clip(combined_r, 0, 255)
            combined_g = np.clip(combined_g, 0, 255)
            combined_b = np.clip(combined_b, 0, 255)

        else:  # Fine Detail
            # High-frequency detailed texture for micro-details
            high_freq = frequency * 2.5

            base_noise_r = np.random.normal(128, 40 * noise_scale, (height, width))
            base_noise_g = np.random.normal(128, 38 * noise_scale, (height, width))
            base_noise_b = np.random.normal(128, 36 * noise_scale, (height, width))

            # High-frequency fine details
            fine_h = max(4, int(height * high_freq))
            fine_w = max(4, int(width * high_freq))
            fine_detail_r = np.random.normal(0, 25 * noise_scale, (fine_h, fine_w))
            fine_detail_g = np.random.normal(0, 23 * noise_scale, (fine_h, fine_w))
            fine_detail_b = np.random.normal(0, 21 * noise_scale, (fine_h, fine_w))

            fine_detail_r = safe_resize(fine_detail_r, height, width)
            fine_detail_g = safe_resize(fine_detail_g, height, width)
            fine_detail_b = safe_resize(fine_detail_b, height, width)

            combined_r = np.clip(base_noise_r + fine_detail_r * 0.7, 0, 255)
            combined_g = np.clip(base_noise_g + fine_detail_g * 0.7, 0, 255)
            combined_b = np.clip(base_noise_b + fine_detail_b * 0.7, 0, 255)

        # Stack RGB channels into the final noise pattern
        return np.stack([combined_r, combined_g, combined_b], axis=2)

    def apply_perturbation_texture(self, image, noise_scale=0.5, texture_strength=50, texture_type="Skin Pore",
                                   frequency=1.0, perturbation_factor=0.15, use_mask=False, mask=None, seed=-1):
        """Main function to apply adaptive color-matched texture."""
        # Convert tensor image to PIL
        base_image = self.tensor_to_pil(image)

        # Use the provided seed, or randomize if -1
        seed_value = seed if seed >= 0 else None

        # Generate adaptive texture
        textured_image, texture_layer = self.generate_adaptive_texture(
            base_image, noise_scale, texture_type, frequency,
            perturbation_factor, texture_strength, seed_value
        )

        # Apply mask if specified
        if use_mask and mask is not None:
            mask_pil = self.tensor_to_pil(mask).convert('L')
            mask_resized = mask_pil.resize(base_image.size)
            # Invert mask so white areas get texture, black areas are protected
            inverted_mask = ImageChops.invert(mask_resized)
            # Composite: base where mask is black, textured where mask is white
            textured_image = Image.composite(base_image, textured_image, inverted_mask)

        # Convert results back to tensors
        texture_tensor = self.pil_to_tensor(texture_layer)
        textured_tensor = self.pil_to_tensor(textured_image)

        return textured_tensor, texture_tensor


NODE_CLASS_MAPPINGS = {
    "PerturbationTexture": PerturbationTexture,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "PerturbationTexture": "Perturbation Texture",
}
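The intensity math this node uses is worth seeing in isolation: `texture_strength` (a percentage) scales `perturbation_factor`, and 0-255 noise centered at 128 is mapped to the -1..1 range before mixing. A minimal numpy sketch with the node's default values, detached from any image I/O:

```python
import numpy as np

# texture_strength is a percentage that scales perturbation_factor
perturbation_factor, texture_strength = 0.30, 50
effective = perturbation_factor * (texture_strength / 100.0)

rng = np.random.default_rng(1)
# Noise centered at 128, clipped to 0-255, then normalized to -1..1
noise = np.clip(rng.normal(128, 32, (8, 8, 3)), 0, 255)
noise_normalized = (noise.astype(np.float32) - 128.0) / 128.0

# Mix around a flat mid-gray base and clamp to the valid 0..1 range
base = np.full((8, 8, 3), 0.5, dtype=np.float32)
result = np.clip(base + noise_normalized * effective, 0, 1)
print(effective)  # 0.15
```

So at the defaults, each pixel is nudged by at most ±0.15 around its original value, which keeps the texture color-matched to the base image.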
custom_nodes/controlaltai-nodes/pyproject.toml (new file, 14 lines)
@@ -0,0 +1,14 @@
[project]
name = "controlaltai-nodes"
description = "Quality of Life Nodes from ControlAltAI. Flux Resolution Calculator, Flux Sampler, Flux Union ControlNet Apply, Noise Plus Blend, Boolean Logic, and Flux Region Nodes."
version = "1.1.4"
license = {file = "LICENSE"}

[project.urls]
Repository = "https://github.com/gseth/ControlAltAI-Nodes"
# Used by Comfy Registry https://comfyregistry.org

[tool.comfy]
PublisherId = "controlaltai"
DisplayName = "ControlAltAI_Nodes"
Icon = ""
custom_nodes/controlaltai-nodes/region_mask_conditioning_node.py (new file, 301 lines)
@@ -0,0 +1,301 @@
import torch
import numpy as np
from typing import Dict, List, Optional, Tuple
from PIL import Image, ImageDraw


def pil2tensor(image):
    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)


class RegionMaskConditioning:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "mask1": ("MASK",),
                "bbox1": ("BBOX",),
                "conditioning1": ("CONDITIONING",),
                "number_of_regions": ("INT", {
                    "default": 1,
                    "min": 1,
                    "max": 3,
                    "step": 1,
                    "display": "Number of Regions"
                }),
                "strength1": ("FLOAT", {
                    "default": 1.0,
                    "min": 0.0,
                    "max": 10.0,
                    "step": 0.1,
                    "display": "Strength for Region 1"
                }),
            },
            "optional": {
                "mask2": ("MASK",),
                "bbox2": ("BBOX",),
                "conditioning2": ("CONDITIONING",),
                "strength2": ("FLOAT", {
                    "default": 1.0,
                    "min": 0.0,
                    "max": 10.0,
                    "step": 0.1,
                    "display": "Strength for Region 2"
                }),
                "mask3": ("MASK",),
                "bbox3": ("BBOX",),
                "conditioning3": ("CONDITIONING",),
                "strength3": ("FLOAT", {
                    "default": 1.0,
                    "min": 0.0,
                    "max": 10.0,
                    "step": 0.1,
                    "display": "Strength for Region 3"
                }),
            }
        }

    RETURN_TYPES = ("REGION", "REGION", "REGION", "INT", "IMAGE")
    RETURN_NAMES = ("region1", "region2", "region3",
                    "region_count", "preview_image")
    FUNCTION = "create_conditioned_regions"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def validate_bbox(self, bbox: Dict) -> bool:
        """Validate bbox coordinates and structure."""
        print("\n=== Validating BBox ===")
        print(f"Input bbox: {bbox}")

        if bbox is None or not isinstance(bbox, dict):
            print("Failed: Invalid bbox type")
            return False

        required_keys = ["x1", "y1", "x2", "y2"]
        if not all(k in bbox for k in required_keys):
            print(f"Failed: Missing keys. Required: {required_keys}")
            return False

        # Validate coordinate values
        if not all(isinstance(bbox[k], (int, float)) for k in required_keys):
            print("Failed: Invalid coordinate types")
            return False

        # Validate coordinate ranges
        if not all(0 <= bbox[k] <= 1.0 for k in required_keys):
            print("Failed: Coordinates out of range [0,1]")
            return False

        # Validate proper ordering
        if bbox["x1"] >= bbox["x2"] or bbox["y1"] >= bbox["y2"]:
            print("Failed: Invalid coordinate ordering")
            return False

        print("Passed: BBox validation successful")
        return True

    def scale_conditioning(self, conditioning: List, strength: float) -> List:
        """Scale conditioning tensors by strength."""
        print("\n=== Scaling Conditioning ===")
        print(f"Strength: {strength}")

        try:
            if not conditioning or not isinstance(conditioning, list):
                print("Failed: Invalid conditioning format")
                raise ValueError("Invalid conditioning format")

            # Get the conditioning tensors and dict
            cond_tensors = conditioning[0][0]
            cond_dict = conditioning[0][1]

            print(f"Input tensor shape: {cond_tensors.shape}")
            print(f"Conditioning keys: {list(cond_dict.keys())}")
            print(f"Input tensor stats: min={cond_tensors.min():.3f}, max={cond_tensors.max():.3f}, mean={cond_tensors.mean():.3f}")

            # Scale the tensors
            scaled_tensors = cond_tensors.clone() * strength
            print(f"Scaled tensor stats: min={scaled_tensors.min():.3f}, max={scaled_tensors.max():.3f}, mean={scaled_tensors.mean():.3f}")

            return [[scaled_tensors, cond_dict]]

        except Exception as e:
            print(f"Error in scale_conditioning: {str(e)}")
            import traceback
            traceback.print_exc()
            return conditioning

    def create_region(self, mask: Optional[torch.Tensor], bbox: Optional[Dict],
                      conditioning: Optional[List], strength: float, region_idx: int) -> Dict:
        """Create a single region with its conditioning."""
        print(f"\n=== Creating Region {region_idx} ===")

        # Debug inputs
        print("Input validation:")
        print(f"- Mask: {type(mask)}, shape={mask.shape if mask is not None else None}")
        print(f"- BBox: {bbox}")
        print(f"- Conditioning type: {type(conditioning)}")
        print(f"- Strength: {strength}")

        # Default empty region
        empty_region = {
            "conditioning": None,
            "bbox": [0.0, 0.0, 0.0, 0.0],  # Array format for empty
            "is_active": False,
            "strength": 1.0
        }

        try:
            # Validate inputs
            if mask is None or bbox is None or conditioning is None:
                print(f"Region {region_idx}: Missing components")
                return empty_region

            if not self.validate_bbox(bbox):
                print(f"Region {region_idx}: Invalid bbox")
                return empty_region

            # Scale conditioning
            scaled_conditioning = self.scale_conditioning(conditioning, strength)

            # Create region output: bbox array, conditioning, and strength
            region = {
                "conditioning": scaled_conditioning,
                "bbox": [bbox["x1"], bbox["y1"], bbox["x2"], bbox["y2"]],  # Array format
                "is_active": True,
                "strength": strength
            }

            print(f"\nSuccessfully created region {region_idx}")
            return region

        except Exception as e:
            print(f"Error creating region {region_idx}: {str(e)}")
            import traceback
            traceback.print_exc()
            return empty_region

    def create_preview(self, masks: List[torch.Tensor], bboxes: List[Dict],
                       number_of_regions: int) -> torch.Tensor:
        """Create preview of conditioned regions."""
        print("\n=== Creating Preview ===")

        if not masks:
            print("No masks provided")
            # ComfyUI IMAGE tensors are (batch, height, width, channels)
            return torch.zeros((1, 64, 64, 3), dtype=torch.float32)

        height, width = masks[0].shape
        print(f"Preview dimensions: {width}x{height}")

        # Create PIL Image for preview
        preview = Image.new("RGB", (width, height), (0, 0, 0))
        draw = ImageDraw.Draw(preview)

        # Define colors for 3 regions
        colors = [
            (255, 0, 0),    # Red
            (0, 255, 0),    # Green
            (255, 255, 0),  # Yellow
        ]

        # Draw each region
        for i, (mask, bbox) in enumerate(zip(masks[:number_of_regions], bboxes[:number_of_regions])):
            validation_result = self.validate_bbox(bbox)
            if validation_result and mask is not None:
                print(f"\nDrawing region {i+1}:")
                # Get pixel coordinates
                x1 = int(bbox["x1"] * width)
                y1 = int(bbox["y1"] * height)
                x2 = int(bbox["x2"] * width)
                y2 = int(bbox["y2"] * height)

                print(f"Region {i+1} coordinates: ({x1},{y1}) to ({x2},{y2})")

                # Draw region outline
                draw.rectangle([x1, y1, x2, y2], outline=colors[i], width=4)

        return pil2tensor(preview)

    def create_conditioned_regions(self,
                                   mask1: torch.Tensor,
                                   bbox1: Dict,
                                   conditioning1: List,
                                   number_of_regions: int,
                                   strength1: float,
                                   mask2: Optional[torch.Tensor] = None,
                                   bbox2: Optional[Dict] = None,
                                   conditioning2: Optional[List] = None,
                                   strength2: Optional[float] = 1.0,
                                   mask3: Optional[torch.Tensor] = None,
                                   bbox3: Optional[Dict] = None,
                                   conditioning3: Optional[List] = None,
                                   strength3: Optional[float] = 1.0) -> Tuple:
        print("\n=== Creating Conditioned Regions ===")
        print(f"Number of regions: {number_of_regions}")

        try:
            # Create regions
            regions = []
            active_count = 0

            # Process the requested number of regions
            inputs = [
                (mask1, bbox1, conditioning1, strength1),
                (mask2, bbox2, conditioning2, strength2),
                (mask3, bbox3, conditioning3, strength3)
            ]

            # Store masks and bboxes for preview only
            preview_masks = []
            preview_bboxes = []

            for i, (mask, bbox, conditioning, strength) in enumerate(inputs[:number_of_regions]):
                # Create region with per-region strength
                region = self.create_region(mask, bbox, conditioning, strength, i + 1)
                if region["is_active"]:
                    active_count += 1
                regions.append(region)
                print(f"Processed region {i+1}: active={region['is_active']}")

                # Store for preview
                preview_masks.append(mask)
                preview_bboxes.append(bbox)

            # Fill remaining slots with empty regions
            empty_region = {
                "conditioning": None,
                "bbox": [0.0, 0.0, 0.0, 0.0],  # Array format
                "is_active": False,
                "strength": 1.0
            }

            while len(regions) < 3:
                idx = len(regions) + 1
                print(f"Adding empty region {idx}")
                regions.append(empty_region)

            print(f"\nCreated {active_count} active regions out of {number_of_regions} requested")

            # Create preview using stored masks and bboxes
            preview = self.create_preview(preview_masks, preview_bboxes, number_of_regions)

            return (*regions, active_count, preview)

        except Exception as e:
            print(f"Error in create_conditioned_regions: {str(e)}")
            import traceback
            traceback.print_exc()

            empty_region = {
                "conditioning": None,
                "bbox": [0.0, 0.0, 0.0, 0.0],  # Array format
                "is_active": False,
                "strength": 1.0
            }
            # ComfyUI IMAGE tensors are (batch, height, width, channels)
            empty_preview = torch.zeros((1, mask1.shape[-2], mask1.shape[-1], 3), dtype=torch.float32)
            return (empty_region, empty_region, empty_region, 0, empty_preview)


# Node class mappings
NODE_CLASS_MAPPINGS = {
    "RegionMaskConditioning": RegionMaskConditioning
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "RegionMaskConditioning": "Region Mask Conditioning"
}
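The bbox contract this node enforces (a dict of normalized corner coordinates with `x1 < x2` and `y1 < y2`, all within [0, 1]) can be sketched as a standalone function. This mirrors the checks in `validate_bbox` without the debug printing; the free function is illustrative, not part of the node's API:

```python
def validate_bbox(bbox):
    """Return True only for a well-formed normalized bbox dict."""
    required = ["x1", "y1", "x2", "y2"]
    if not isinstance(bbox, dict) or not all(k in bbox for k in required):
        return False                      # wrong type or missing keys
    if not all(isinstance(bbox[k], (int, float)) for k in required):
        return False                      # non-numeric coordinates
    if not all(0 <= bbox[k] <= 1.0 for k in required):
        return False                      # outside the normalized [0, 1] range
    return bbox["x1"] < bbox["x2"] and bbox["y1"] < bbox["y2"]  # proper ordering

print(validate_bbox({"x1": 0.0, "y1": 0.0, "x2": 0.5, "y2": 0.5}))  # True
print(validate_bbox({"x1": 0.5, "y1": 0.0, "x2": 0.5, "y2": 0.5}))  # False (zero width)
```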
custom_nodes/controlaltai-nodes/region_mask_generator_node.py (new file, 164 lines)
@@ -0,0 +1,164 @@
import torch
import numpy as np
from typing import Dict, List, Optional, Tuple
from PIL import Image, ImageDraw


def pil2tensor(image):
    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)


class RegionMaskGenerator:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "width": ("INT", {"default": 1024}),
                "height": ("INT", {"default": 1024}),
                "number_of_regions": ("INT", {"default": 1, "min": 1, "max": 3}),
                # Region 1
                "region1_x1": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region1_y1": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region1_x2": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region1_y2": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                # Region 2
                "region2_x1": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region2_y1": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region2_x2": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region2_y2": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                # Region 3
                "region3_x1": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region3_y1": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region3_x2": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
                "region3_y2": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
            }
        }

    RETURN_TYPES = ("IMAGE", "IMAGE", "MASK", "MASK", "MASK", "INT", "BBOX", "BBOX", "BBOX")
    RETURN_NAMES = ("colored_regions_image", "bbox_preview",
                    "mask1", "mask2", "mask3",
                    "number_of_regions",
                    "bbox1", "bbox2", "bbox3")
    FUNCTION = "generate_regions"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def create_bbox(self, x1: float, y1: float, x2: float, y2: float) -> Dict:
        """Create bbox with debug output."""
        print(f"Creating BBOX: x1={x1:.3f}, y1={y1:.3f}, x2={x2:.3f}, y2={y2:.3f}")
        return {
            "x1": x1,
            "y1": y1,
            "x2": x2,
            "y2": y2,
            "active": True
        }

    def create_mask_from_bbox(self, bbox: Dict, width: int, height: int) -> torch.Tensor:
        """Create mask from bbox with debug output."""
        mask = torch.zeros((height, width), dtype=torch.float32)
        if bbox["active"]:
            x1 = int(bbox["x1"] * width)
            y1 = int(bbox["y1"] * height)
            x2 = int(bbox["x2"] * width)
            y2 = int(bbox["y2"] * height)
            print(f"Creating mask at pixels: x1={x1}, y1={y1}, x2={x2}, y2={y2}")
            mask[y1:y2, x1:x2] = 1.0
        return mask

    def create_preview(self, masks: List[torch.Tensor], bboxes: List[Dict],
                       number_of_regions: int) -> Tuple[torch.Tensor, torch.Tensor]:
        """Create preview images with debug info."""
        if not masks:
            # ComfyUI IMAGE tensors are (batch, height, width, channels)
            empty = torch.zeros((1, 64, 64, 3), dtype=torch.float32)
            return empty, empty.clone()

        height, width = masks[0].shape

        # Create both preview images
        region_preview = Image.new("RGB", (width, height), (0, 0, 0))
        bbox_preview = Image.new("RGB", (width, height), (0, 0, 0))
        region_draw = ImageDraw.Draw(region_preview)
        bbox_draw = ImageDraw.Draw(bbox_preview)

        colors = [
            (255, 0, 0),    # Red - Region 1
            (0, 255, 0),    # Green - Region 2
            (255, 255, 0),  # Yellow - Region 3
        ]

        # Store regions for ordered preview
        preview_regions = []
        for i in range(number_of_regions):
            if bboxes[i]["active"]:
                mask_np = masks[i].cpu().numpy() > 0.5
                if mask_np.any():
                    preview_regions.append((i, mask_np, bboxes[i]))

        # Draw regions in reverse order (Region 3 first, Region 1 last)
        for i, mask_np, bbox in sorted(preview_regions, key=lambda r: r[0], reverse=True):
            # Draw on region preview
            color_array = np.zeros((height, width, 3), dtype=np.uint8)
            for c in range(3):
                color_array[mask_np, c] = colors[i][c]
            preview_np = np.array(region_preview)
            preview_np[mask_np] = color_array[mask_np]
            region_preview = Image.fromarray(preview_np)

            # Draw on bbox preview, maintaining the original bbox drawing order
            x1 = int(bbox["x1"] * width)
            y1 = int(bbox["y1"] * height)
            x2 = int(bbox["x2"] * width)
            y2 = int(bbox["y2"] * height)
            print(f"Drawing preview for region {i}: x1={x1}, y1={y1}, x2={x2}, y2={y2}")
            bbox_draw.rectangle([x1, y1, x2, y2], outline=colors[i], width=2)

        return pil2tensor(region_preview), pil2tensor(bbox_preview)

    def generate_regions(self,
                         width: int,
                         height: int,
                         number_of_regions: int,
                         **kwargs) -> Tuple:
        try:
            print(f"\nGenerating {number_of_regions} regions for {width}x{height} image")
            bboxes = []
            masks = []

            # Create regions
            for i in range(3):
                if i < number_of_regions:
                    print(f"\nProcessing region {i+1}:")
                    bbox = self.create_bbox(
                        kwargs[f"region{i+1}_x1"],
                        kwargs[f"region{i+1}_y1"],
                        kwargs[f"region{i+1}_x2"],
                        kwargs[f"region{i+1}_y2"]
                    )
                    mask = self.create_mask_from_bbox(bbox, width, height)
                    bboxes.append(bbox)
                    masks.append(mask)
                else:
                    print(f"Creating empty region {i+1}")
                    empty_bbox = {"x1": 0.0, "y1": 0.0, "x2": 0.0, "y2": 0.0, "active": False}
                    bboxes.append(empty_bbox)
                    masks.append(torch.zeros((height, width), dtype=torch.float32))

            # Create previews
            region_preview, bbox_preview = self.create_preview(masks, bboxes, number_of_regions)

            return (region_preview, bbox_preview, *masks, number_of_regions, *bboxes)

        except Exception as e:
            print(f"Error in generate_regions: {str(e)}")
            empty_mask = torch.zeros((height, width), dtype=torch.float32)
            empty_bbox = {"x1": 0.0, "y1": 0.0, "x2": 0.0, "y2": 0.0, "active": False}
            # ComfyUI IMAGE tensors are (batch, height, width, channels)
            empty_preview = torch.zeros((1, height, width, 3), dtype=torch.float32)
            return (empty_preview, empty_preview,
                    empty_mask, empty_mask, empty_mask,
                    0, empty_bbox, empty_bbox, empty_bbox)


# Node class mappings
NODE_CLASS_MAPPINGS = {
    "RegionMaskGenerator": RegionMaskGenerator
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "RegionMaskGenerator": "Region Mask Generator"
}
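The mask construction at the heart of this node is simple: normalized bbox corners scale to pixel coordinates, and the resulting rectangle is filled with ones. A numpy sketch of the same logic (the node itself does this with a torch tensor; `mask_from_bbox` is a hypothetical helper name):

```python
import numpy as np

def mask_from_bbox(bbox, width, height):
    """Fill the pixel rectangle described by a normalized bbox with ones."""
    mask = np.zeros((height, width), dtype=np.float32)
    x1, y1 = int(bbox["x1"] * width), int(bbox["y1"] * height)
    x2, y2 = int(bbox["x2"] * width), int(bbox["y2"] * height)
    mask[y1:y2, x1:x2] = 1.0
    return mask

# Top-left quadrant of a 64x64 canvas
m = mask_from_bbox({"x1": 0.0, "y1": 0.0, "x2": 0.5, "y2": 0.5}, 64, 64)
print(int(m.sum()))  # 1024 -- a 32x32 block of ones
```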
custom_nodes/controlaltai-nodes/region_mask_processor_node.py (new file, 230 lines)
@@ -0,0 +1,230 @@
import torch
import torch.nn.functional as F
from typing import Tuple, Dict, Optional, List
import numpy as np
from PIL import Image, ImageDraw


def pil2tensor(image):
    """Convert a PIL image to a PyTorch tensor in the expected format."""
    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)


class RegionMaskProcessor:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "mask1": ("MASK",),
                "bbox1": ("BBOX",),
                "blur_radius": ("INT", {
                    "default": 5,
                    "min": 0,
                    "max": 32,
                    "step": 1,
                    "display": "Blur Radius"
                }),
                "threshold": ("FLOAT", {
                    "default": 0.5,
                    "min": 0.0,
                    "max": 1.0,
                    "step": 0.1,
                    "display": "Mask Threshold"
                }),
                "feather_edges": ("BOOLEAN", {
                    "default": True,
                    "display": "Feather Edges"
                }),
                "number_of_regions": ("INT", {
                    "default": 1,
                    "min": 1,
                    "max": 3,
                    "display": "Number of Regions"
                }),
            },
            "optional": {
                "mask2": ("MASK",),
                "bbox2": ("BBOX",),
                "mask3": ("MASK",),
                "bbox3": ("BBOX",),
            }
        }

    RETURN_TYPES = ("MASK", "BBOX", "MASK", "BBOX", "MASK", "BBOX", "IMAGE", "INT")
    RETURN_NAMES = ("processed_mask1", "processed_bbox1",
                    "processed_mask2", "processed_bbox2",
                    "processed_mask3", "processed_bbox3",
                    "preview_image", "region_count")
    FUNCTION = "process_regions"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def apply_gaussian_blur(self, mask: torch.Tensor, radius: int) -> torch.Tensor:
        """Apply a separable gaussian blur to mask edges."""
        if radius <= 0:
            return mask

        kernel_size = 2 * radius + 1
        sigma = radius / 3.0

        if len(mask.shape) == 2:
            mask = mask.unsqueeze(0).unsqueeze(0)

        kernel_1d = torch.exp(torch.linspace(-radius, radius, kernel_size).pow(2) / (-2 * sigma ** 2))
        kernel_1d = kernel_1d / kernel_1d.sum()

        padding = radius
        kernel_h = kernel_1d.unsqueeze(0).unsqueeze(0).unsqueeze(0).to(mask.device)
        kernel_v = kernel_1d.unsqueeze(0).unsqueeze(0).unsqueeze(-1).to(mask.device)

        mask = F.pad(mask, (padding, padding, 0, 0), mode='reflect')
        mask = F.conv2d(mask, kernel_h)
        mask = F.pad(mask, (0, 0, padding, padding), mode='reflect')
        mask = F.conv2d(mask, kernel_v)

        return mask.squeeze()
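The kernel construction above (sigma of radius/3, normalized 1-D weights convolved horizontally then vertically) can be checked in isolation. A minimal pure-Python sketch, with no torch dependency; the helper name is illustrative:

```python
import math

def gaussian_kernel_1d(radius):
    # Mirrors the node's kernel: sigma = radius / 3, weights over [-radius, radius],
    # normalized so they sum to 1.
    sigma = radius / 3.0
    xs = [i - radius for i in range(2 * radius + 1)]
    weights = [math.exp(-(x ** 2) / (2 * sigma ** 2)) for x in xs]
    total = sum(weights)
    return [w / total for w in weights]

k = gaussian_kernel_1d(5)
print(len(k))               # 11 taps for radius 5
print(round(sum(k), 6))     # 1.0 (normalized)
```

Because the kernel is separable, running this 1-D kernel along rows and then columns is equivalent to a full 2-D gaussian blur at a fraction of the cost.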
    def apply_feathering(self, mask: torch.Tensor, bbox: Dict, radius: int) -> Tuple[torch.Tensor, Dict]:
        """Apply feathering to mask edges while preserving bbox boundaries."""
        if radius <= 0 or not bbox["active"]:
            return mask, bbox

        height, width = mask.shape
        x1 = int(bbox["x1"] * width)
        y1 = int(bbox["y1"] * height)
        x2 = int(bbox["x2"] * width)
        y2 = int(bbox["y2"] * height)

        inner_mask = torch.zeros_like(mask)
        inner_mask[y1 + radius:y2 - radius, x1 + radius:x2 - radius] = 1.0
        edge_mask = mask - inner_mask

        if edge_mask.any():
            blurred = self.apply_gaussian_blur(mask, radius)
            result = mask.clone()
            result[edge_mask > 0] = blurred[edge_mask > 0]
        else:
            result = mask

        return result, bbox

    def process_single_region(self,
                              mask: torch.Tensor,
                              bbox: Dict,
                              blur_radius: int,
                              threshold: float,
                              feather_edges: bool) -> Tuple[torch.Tensor, Dict]:
        """Process a single mask-bbox pair."""
        if mask is None or not bbox["active"]:
            return mask, bbox

        try:
            processed = (mask > threshold).float()

            if feather_edges and blur_radius > 0:
                processed, bbox = self.apply_feathering(processed, bbox, blur_radius)
            elif blur_radius > 0:
                processed = self.apply_gaussian_blur(processed, blur_radius)

            return processed, bbox

        except Exception as e:
            print(f"Error processing region: {str(e)}")
            return mask, bbox
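The first step of `process_single_region` is a hard binarization, `(mask > threshold).float()`, before any feathering is applied. A pure-Python stand-in (list-of-lists instead of a tensor; the function name is illustrative) makes the semantics explicit:

```python
def binarize(mask, threshold=0.5):
    # Equivalent of (mask > threshold).float(): strictly-greater comparison,
    # so values exactly at the threshold are zeroed.
    return [[1.0 if v > threshold else 0.0 for v in row] for row in mask]

print(binarize([[0.2, 0.7], [0.5, 0.9]]))  # [[0.0, 1.0], [0.0, 1.0]]
```

Note the strict comparison: a pixel exactly at the threshold (0.5 above) is treated as background, matching the tensor expression.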
    def create_preview(self, masks: List[torch.Tensor], bboxes: List[Dict],
                       number_of_regions: int) -> torch.Tensor:
        """Create a preview of processed regions, using PIL for consistent coloring."""
        if not masks:
            # ComfyUI IMAGE tensors are [B, H, W, C]
            return torch.zeros((1, 64, 64, 3), dtype=torch.float32)

        height, width = masks[0].shape

        # Create PIL image for preview
        preview = Image.new("RGB", (width, height), (0, 0, 0))

        colors = [
            (255, 0, 0),    # Red - Region 1
            (0, 255, 0),    # Green - Region 2
            (255, 255, 0),  # Yellow - Region 3
        ]

        # Store regions for ordered preview
        preview_regions = []
        for i in range(number_of_regions):
            if bboxes[i]["active"] and masks[i] is not None:
                mask_np = masks[i].cpu().numpy() > 0.5
                preview_regions.append((i, mask_np))

        # Draw regions in reverse order (Region 3 first, Region 1 last)
        for i, mask_np in sorted(preview_regions, reverse=True):
            color_array = np.zeros((height, width, 3), dtype=np.uint8)
            color_array[mask_np] = colors[i]

            # Convert to PIL and composite
            region_img = Image.fromarray(color_array, 'RGB')
            preview = Image.alpha_composite(
                preview.convert('RGBA'),
                Image.merge('RGBA', (*region_img.split(), Image.fromarray((mask_np * 255).astype(np.uint8))))
            )

        return pil2tensor(preview.convert('RGB'))
    def process_regions(self,
                        mask1: torch.Tensor,
                        bbox1: Dict,
                        blur_radius: int,
                        threshold: float,
                        feather_edges: bool,
                        number_of_regions: int,
                        mask2: Optional[torch.Tensor] = None,
                        bbox2: Optional[Dict] = None,
                        mask3: Optional[torch.Tensor] = None,
                        bbox3: Optional[Dict] = None) -> Tuple:
        try:
            # Process each mask-bbox pair
            mask_bbox_pairs = [
                (mask1, bbox1),
                (mask2, bbox2) if mask2 is not None else (None, None),
                (mask3, bbox3) if mask3 is not None else (None, None),
            ]

            processed_masks = []
            processed_bboxes = []
            active_count = 0

            for i, (mask, bbox) in enumerate(mask_bbox_pairs):
                if i < number_of_regions and mask is not None and bbox is not None:
                    proc_mask, proc_bbox = self.process_single_region(
                        mask, bbox, blur_radius, threshold, feather_edges
                    )
                    if proc_bbox["active"]:
                        active_count += 1
                    processed_masks.append(proc_mask)
                    processed_bboxes.append(proc_bbox)
                else:
                    empty_mask = torch.zeros_like(mask1)
                    empty_bbox = {"x1": 0.0, "y1": 0.0, "x2": 0.0, "y2": 0.0, "active": False}
                    processed_masks.append(empty_mask)
                    processed_bboxes.append(empty_bbox)

            # Create preview
            preview = self.create_preview(processed_masks, processed_bboxes, number_of_regions)

            return (*[item for pair in zip(processed_masks, processed_bboxes) for item in pair],
                    preview, active_count)

        except Exception as e:
            print(f"Error processing regions: {str(e)}")
            empty_mask = torch.zeros_like(mask1)
            empty_bbox = {"x1": 0.0, "y1": 0.0, "x2": 0.0, "y2": 0.0, "active": False}
            # ComfyUI IMAGE tensors are [B, H, W, C]
            empty_preview = torch.zeros((1, mask1.shape[0], mask1.shape[1], 3), dtype=torch.float32)
            return (empty_mask, empty_bbox, empty_mask, empty_bbox,
                    empty_mask, empty_bbox,
                    empty_preview, 0)
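The return statement above interleaves masks and bboxes into the flat `(mask1, bbox1, mask2, bbox2, ...)` output tuple with a zip-and-flatten comprehension. A small sketch with placeholder strings shows the ordering it produces:

```python
# Placeholder values stand in for mask tensors and bbox dicts.
masks = ["m1", "m2", "m3"]
bboxes = ["b1", "b2", "b3"]

# zip pairs each mask with its bbox; the comprehension flattens the pairs in order.
flat = [item for pair in zip(masks, bboxes) for item in pair]
print(flat)  # ['m1', 'b1', 'm2', 'b2', 'm3', 'b3']
```

The star-unpacking in the node then splices this list into the output tuple, so the flat order must match `RETURN_TYPES` exactly.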


# Node class mappings
NODE_CLASS_MAPPINGS = {
    "RegionMaskProcessor": RegionMaskProcessor
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "RegionMaskProcessor": "Region Mask Processor"
}
270  custom_nodes/controlaltai-nodes/region_mask_validator_node.py  Normal file
@@ -0,0 +1,270 @@
import torch
from typing import Tuple, Dict, Optional, List
import numpy as np
from PIL import Image, ImageDraw


def pil2tensor(image):
    return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)


class RegionMaskValidator:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "mask1": ("MASK",),
                "bbox1": ("BBOX",),
                "number_of_regions": ("INT", {
                    "default": 1,
                    "min": 1,
                    "max": 3,
                    "step": 1
                }),
                "min_region_size": ("INT", {
                    "default": 64,
                    "min": 32,
                    "max": 512,
                    "step": 32,
                    "display": "Minimum Region Size (px)"
                }),
                "max_overlap": ("FLOAT", {
                    "default": 0.1,
                    "min": 0.0,
                    "max": 0.5,
                    "step": 0.01,
                    "display": "Maximum Region Overlap"
                }),
            },
            "optional": {
                "mask2": ("MASK",),
                "bbox2": ("BBOX",),
                "mask3": ("MASK",),
                "bbox3": ("BBOX",),
            }
        }

    RETURN_TYPES = ("MASK", "BBOX", "MASK", "BBOX", "MASK", "BBOX",
                    "INT", "BOOLEAN", "STRING", "IMAGE")
    RETURN_NAMES = ("valid_mask1", "valid_bbox1",
                    "valid_mask2", "valid_bbox2",
                    "valid_mask3", "valid_bbox3",
                    "valid_region_count", "is_valid", "validation_message",
                    "validation_preview")
    FUNCTION = "validate_regions"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def get_region_dimensions(self, bbox: Dict, width: int, height: int) -> Tuple[int, Tuple[int, int]]:
        """Calculate region area and dimensions in pixels."""
        if not bbox["active"]:
            return 0, (0, 0)

        x1 = int(bbox["x1"] * width)
        y1 = int(bbox["y1"] * height)
        x2 = int(bbox["x2"] * width)
        y2 = int(bbox["y2"] * height)

        w = x2 - x1
        h = y2 - y1
        area = w * h
        print(f"Region dimensions: {w}x{h} pixels")
        return area, (w, h)
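The bbox coordinates used throughout the validator are normalized fractions in [0, 1], converted to pixels by multiplying by the canvas size. A standalone sketch of that conversion (the function name is illustrative, not part of the node):

```python
def region_dimensions(bbox, width, height):
    # bbox coords are normalized [0, 1] fractions of the canvas.
    x1, y1 = int(bbox["x1"] * width), int(bbox["y1"] * height)
    x2, y2 = int(bbox["x2"] * width), int(bbox["y2"] * height)
    w, h = x2 - x1, y2 - y1
    return w * h, (w, h)

# A centered box covering half of each axis on a 1024x1024 canvas:
area, (w, h) = region_dimensions(
    {"x1": 0.25, "y1": 0.25, "x2": 0.75, "y2": 0.75}, 1024, 1024
)
print(w, h, area)  # 512 512 262144
```

Truncation via `int()` means a region can lose up to a pixel per edge, which is why the size check below operates on the converted pixel dimensions rather than the raw fractions.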
    def calculate_overlap(self, bbox1: Dict, bbox2: Dict, width: int, height: int) -> Tuple[Tuple[int, int], float]:
        """Calculate overlap dimensions and ratio (relative to the smaller region)."""
        if not (bbox1["active"] and bbox2["active"]):
            return (0, 0), 0.0

        # Convert to pixel coordinates
        x1_1 = int(bbox1["x1"] * width)
        y1_1 = int(bbox1["y1"] * height)
        x2_1 = int(bbox1["x2"] * width)
        y2_1 = int(bbox1["y2"] * height)

        x1_2 = int(bbox2["x1"] * width)
        y1_2 = int(bbox2["y1"] * height)
        x2_2 = int(bbox2["x2"] * width)
        y2_2 = int(bbox2["y2"] * height)

        # Calculate intersection
        x_left = max(x1_1, x1_2)
        y_top = max(y1_1, y1_2)
        x_right = min(x2_1, x2_2)
        y_bottom = min(y2_1, y2_2)

        if x_right > x_left and y_bottom > y_top:
            overlap_width = x_right - x_left
            overlap_height = y_bottom - y_top
            overlap_area = overlap_width * overlap_height

            area1 = (x2_1 - x1_1) * (y2_1 - y1_1)
            area2 = (x2_2 - x1_2) * (y2_2 - y1_2)
            smaller_area = min(area1, area2)
            overlap_ratio = overlap_area / smaller_area

            print(f"Overlap dimensions: {overlap_width}x{overlap_height} pixels ({overlap_ratio:.1%})")
            return (overlap_width, overlap_height), overlap_ratio

        return (0, 0), 0.0
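Note the ratio above is intersection over the *smaller* box's area, not IoU, so a small region fully inside a large one scores 1.0. A condensed sketch of the same computation over plain `(x1, y1, x2, y2)` pixel tuples (the function name is illustrative):

```python
def overlap_ratio(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) in pixels; ratio = intersection / smaller box area.
    x_left, y_top = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x_right, y_bottom = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    if x_right <= x_left or y_bottom <= y_top:
        return 0.0  # disjoint boxes
    inter = (x_right - x_left) * (y_bottom - y_top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / min(area_a, area_b)

# Two 100x100 boxes shifted by 50px horizontally share half their area:
print(overlap_ratio((0, 0, 100, 100), (50, 0, 150, 100)))  # 0.5
```

Normalizing by the smaller area makes the `max_overlap` threshold stricter than IoU would be for nested or near-nested regions, which is the right bias for keeping regions spatially distinct.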
    def create_validation_preview(self, masks: List[torch.Tensor], bboxes: List[Dict],
                                  number_of_regions: int, is_valid: bool,
                                  messages: List[str], img_width: int, img_height: int) -> torch.Tensor:
        """Create visual validation feedback with improved text rendering."""
        if not masks:
            # ComfyUI IMAGE tensors are [B, H, W, C]
            return torch.zeros((1, 64, 64, 3), dtype=torch.float32)

        preview = Image.new("RGB", (img_width, img_height), (0, 0, 0))
        draw = ImageDraw.Draw(preview)

        # Colors for valid/invalid regions
        colors = {
            'valid': [(0, 255, 0), (0, 200, 0), (0, 150, 0)],    # Green shades
            'invalid': [(255, 0, 0), (200, 0, 0), (150, 0, 0)]   # Red shades
        }

        # Draw regions with validation status and improved text
        for i, (mask, bbox) in enumerate(zip(masks[:number_of_regions], bboxes[:number_of_regions])):
            if bbox["active"]:
                x1 = int(bbox["x1"] * img_width)
                y1 = int(bbox["y1"] * img_height)
                x2 = int(bbox["x2"] * img_width)
                y2 = int(bbox["y2"] * img_height)

                w = x2 - x1
                h = y2 - y1
                color = colors['valid' if is_valid else 'invalid'][i]

                # Draw thicker rectangle outline
                draw.rectangle([x1, y1, x2, y2], outline=color, width=4)

                # Region label with dimensions
                label = f"R{i+1}: {w}x{h}"
                # Position text with an offset from the corner
                text_x = x1 + 10
                text_y = y1 + 10

                # Draw a text shadow/outline for better contrast
                # (font_size requires Pillow >= 10.1; ImageDraw.text has no "size" keyword)
                shadow_offset = 2
                shadow_color = (0, 0, 0)
                for dx in [-shadow_offset, shadow_offset]:
                    for dy in [-shadow_offset, shadow_offset]:
                        draw.text((text_x + dx, text_y + dy), label, fill=shadow_color, font_size=64)

                # Draw the main text
                draw.text((text_x, text_y), label, fill=color, font_size=64)

                # If the region is invalid, add an error message below the label
                if not is_valid and i < len(messages):
                    error_y = text_y + 30  # Position error message below label
                    # Draw the error message with a shadow for contrast
                    for dx in [-shadow_offset, shadow_offset]:
                        for dy in [-shadow_offset, shadow_offset]:
                            draw.text((text_x + dx, error_y + dy), messages[i], fill=shadow_color, font_size=20)
                    draw.text((text_x, error_y), messages[i], fill=color, font_size=20)

        return pil2tensor(preview)
    def validate_regions(self,
                         mask1: torch.Tensor,
                         bbox1: Dict,
                         number_of_regions: int,
                         min_region_size: int,
                         max_overlap: float,
                         mask2: Optional[torch.Tensor] = None,
                         bbox2: Optional[Dict] = None,
                         mask3: Optional[torch.Tensor] = None,
                         bbox3: Optional[Dict] = None) -> Tuple:
        try:
            print(f"\nValidating {number_of_regions} regions:")
            messages = []
            is_valid = True
            height, width = mask1.shape
            print(f"Canvas size: {width}x{height} pixels")

            # Collect regions
            regions = [
                (mask1, bbox1),
                (mask2, bbox2) if mask2 is not None else (None, None),
                (mask3, bbox3) if mask3 is not None else (None, None),
            ]

            # Validate each region
            valid_regions = []
            valid_count = 0
            for i, (mask, bbox) in enumerate(regions):
                if i < number_of_regions and mask is not None and bbox is not None:
                    print(f"\nValidating Region {i+1}:")
                    # Check region size
                    _, (w, h) = self.get_region_dimensions(bbox, width, height)

                    if w < min_region_size or h < min_region_size:
                        message = f"Region {i+1} too small: {w}x{h} pixels (minimum: {min_region_size}x{min_region_size})"
                        print(f"Failed: {message}")
                        messages.append(message)
                        is_valid = False
                        bbox = bbox.copy()
                        bbox["active"] = False
                    else:
                        print(f"Passed: Region {i+1} size check ({w}x{h} pixels)")
                        valid_count += 1

                    valid_regions.append((mask, bbox))
                else:
                    valid_regions.append((
                        torch.zeros_like(mask1),
                        {"x1": 0.0, "y1": 0.0, "x2": 0.0, "y2": 0.0, "active": False}
                    ))

            # Check overlaps
            if valid_count > 1:
                print("\nChecking region overlaps:")
                for i in range(len(valid_regions)):
                    for j in range(i + 1, len(valid_regions)):
                        mask_i, bbox_i = valid_regions[i]
                        mask_j, bbox_j = valid_regions[j]

                        if bbox_i["active"] and bbox_j["active"]:
                            print(f"Checking overlap between regions {i+1} and {j+1}:")
                            (ow, oh), overlap_ratio = self.calculate_overlap(bbox_i, bbox_j, width, height)

                            if overlap_ratio > max_overlap:
                                message = f"Excessive overlap ({ow}x{oh} pixels, {overlap_ratio:.1%}) between regions {i+1} and {j+1}"
                                print(f"Failed: {message}")
                                messages.append(message)
                                is_valid = False

            # Create validation message
            validation_message = "All regions valid" if is_valid else "\n".join(messages)
            print(f"\nValidation {'passed' if is_valid else 'failed'}:")
            print(validation_message)

            # Create validation preview
            preview = self.create_validation_preview(
                [r[0] for r in valid_regions],
                [r[1] for r in valid_regions],
                number_of_regions,
                is_valid,
                messages,
                width,
                height
            )

            return (*[item for region in valid_regions for item in region],
                    valid_count, is_valid, validation_message, preview)

        except Exception as e:
            print(f"Validation error: {str(e)}")
            empty_mask = torch.zeros_like(mask1)
            empty_bbox = {"x1": 0.0, "y1": 0.0, "x2": 0.0, "y2": 0.0, "active": False}
            # Derive the shape from mask1 directly: height/width may be unbound
            # if the failure happened before they were assigned.
            empty_preview = torch.zeros((1, mask1.shape[0], mask1.shape[1], 3), dtype=torch.float32)
            return (empty_mask, empty_bbox, empty_mask, empty_bbox,
                    empty_mask, empty_bbox,
                    0, False, f"Validation error: {str(e)}", empty_preview)


# Node class mappings
NODE_CLASS_MAPPINGS = {
    "RegionMaskValidator": RegionMaskValidator
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "RegionMaskValidator": "Region Mask Validator"
}
@@ -0,0 +1,90 @@
import torch
import numpy as np
from typing import Tuple


class RegionOverlayVisualizer:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "region_preview": ("IMAGE",),
                "opacity": ("FLOAT", {
                    "default": 0.3,
                    "min": 0.0,
                    "max": 1.0,
                    "step": 0.1,
                    "display": "Overlay Opacity"
                }),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "visualize_regions"
    CATEGORY = "ControlAltAI Nodes/Flux Region"

    def visualize_regions(
        self,
        image: torch.Tensor,
        region_preview: torch.Tensor,
        opacity: float,
    ) -> Tuple[torch.Tensor]:
        try:
            print("\n=== Starting Region Overlay Visualization ===")
            print(f"Initial shapes - Image: {image.shape}, Preview: {region_preview.shape}")

            # Ensure input tensors are in [B, H, W, C] format
            if len(image.shape) == 3:
                image = image.unsqueeze(0)
            if len(region_preview.shape) == 3:
                region_preview = region_preview.unsqueeze(0)

            # Get working copies
            base_image = image.clone()
            preview = region_preview.clone()

            # Convert to numpy for mask creation (keeping batch and HWC format)
            preview_np = (preview * 255).byte().cpu().numpy()

            # Create a mask based on preview content (operating on the channel dimension)
            color_sum = np.sum(preview_np, axis=-1)  # Sum across color channels
            max_channel = np.max(preview_np, axis=-1)
            min_channel = np.min(preview_np, axis=-1)

            # Create a binary mask where content exists
            mask = (
                (color_sum > 50) &
                (max_channel > 30) &
                ((max_channel - min_channel) > 10)
            )

            # Expand mask to match input dimensions
            mask = mask[..., None]  # Add the channel dimension back
            mask = torch.from_numpy(mask).to(image.device)

            print(f"Mask shape: {mask.shape}")
            print(f"Masked pixels: {mask.sum().item()}/{mask.numel()} ({mask.sum().item() / mask.numel() * 100:.2f}%)")

            # Apply blending only where the mask is True
            result = torch.where(
                mask.bool(),
                (1 - opacity) * base_image + opacity * preview,
                base_image
            )

            print(f"Final shape: {result.shape}")
            return (result,)

        except Exception as e:
            print(f"Error in visualization: {str(e)}")
            import traceback
            traceback.print_exc()
            return (image,)


NODE_CLASS_MAPPINGS = {
    "RegionOverlayVisualizer": RegionOverlayVisualizer
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "RegionOverlayVisualizer": "Region Overlay Visualizer"
}
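The core of the visualizer is the masked linear blend, `out = (1 - opacity) * base + opacity * overlay`, applied only where the content mask is set. A pure-Python sketch over flat pixel lists (no torch; the function name is illustrative):

```python
def blend(base, overlay, mask, opacity):
    # Per-pixel linear blend applied only where mask is True;
    # unmasked pixels keep the base value, like torch.where in the node.
    return [
        (1 - opacity) * b + opacity * o if m else b
        for b, o, m in zip(base, overlay, mask)
    ]

# White base, black overlay, middle pixel unmasked, 25% opacity:
print(blend([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [True, False, True], 0.25))
# [0.75, 1.0, 0.75]
```

At `opacity=0.0` the base image passes through untouched; at `1.0` masked pixels are replaced by the overlay entirely.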
40  custom_nodes/controlaltai-nodes/text_bridge_node.py  Normal file
@@ -0,0 +1,40 @@
class TextBridge:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "text_input": ("STRING", {"default": "", "multiline": True}),
            },
            "optional": {
                "passthrough_text": ("STRING", {"forceInput": True}),
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("text_output",)
    FUNCTION = "bridge_text"

    CATEGORY = "ControlAltAI Nodes/Utility"

    def bridge_text(self, text_input="", passthrough_text=""):
        """
        Bridge function that allows editing of input text and passes it through as output.
        If passthrough_text is connected, it is used as the base text.
        The text_input field allows manual editing/override.
        """
        # If passthrough_text is provided and text_input is empty, use the passthrough
        if passthrough_text and not text_input:
            output_text = passthrough_text
        else:
            # Use the manually entered/edited text
            output_text = text_input

        return (output_text,)


NODE_CLASS_MAPPINGS = {
    "TextBridge": TextBridge,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "TextBridge": "Text Bridge",
}
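The precedence rule in `bridge_text` is: the connected passthrough wins only while the text box is empty; any manual edit overrides it. A standalone sketch of that rule (a free function rather than the node class, for illustration):

```python
def bridge_text(text_input="", passthrough_text=""):
    # Mirrors the node's precedence: passthrough is used only when the
    # editable text box is empty; otherwise the manual edit wins.
    if passthrough_text and not text_input:
        return passthrough_text
    return text_input

print(bridge_text("", "upstream prompt"))        # upstream prompt
print(bridge_text("my edit", "upstream prompt"))  # my edit
```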
62  custom_nodes/controlaltai-nodes/three_way_switch_node.py  Normal file
@@ -0,0 +1,62 @@
class AnyType(str):
    """A special string subclass that equals any other type for ComfyUI type checking."""
    def __ne__(self, __value: object) -> bool:
        return False


# Create an instance to use as the "any" type
any_type = AnyType("*")
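The trick here is that ComfyUI's link validation compares type strings with `!=`; by making `__ne__` always return `False`, the wildcard never fails that comparison against any concrete type. A minimal self-contained demonstration:

```python
class AnyType(str):
    # Never "not equal" to anything, so a `type_a != type_b` check always passes.
    def __ne__(self, _other) -> bool:
        return False

any_type = AnyType("*")
print(any_type != "IMAGE", any_type != "MASK")  # False False
print(str(any_type))  # "*" - still behaves as a plain string elsewhere
```

Note that `__eq__` is untouched, so ordinary string equality still works; only the inequality path used by the type check is short-circuited.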


class ThreeWaySwitch:
    """Three-way switch that selects between three inputs based on a selection setting."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "selection_setting": ("INT", {"default": 1, "min": 1, "max": 3}),
            },
            "optional": {
                "input_1": (any_type,),
                "input_2": (any_type,),
                "input_3": (any_type,),
            }
        }

    RETURN_TYPES = (any_type,)
    RETURN_NAMES = ("output",)
    FUNCTION = "switch_inputs"

    CATEGORY = "ControlAltAI Nodes/Logic"

    @classmethod
    def VALIDATE_INPUTS(cls, **kwargs):
        """Allow any input types."""
        return True

    def switch_inputs(self, selection_setting=1, input_1=None, input_2=None, input_3=None):
        """
        Three-way switch that selects between three inputs based on selection_setting.
        Compatible with the IntegerSettingsAdvanced node:
        - selection_setting = 1: selects input_1
        - selection_setting = 2: selects input_2
        - selection_setting = 3: selects input_3
        """
        if selection_setting == 2:
            # Second option - select input_2, fall back to input_1, then input_3
            selected_output = input_2 if input_2 is not None else (input_1 if input_1 is not None else input_3)
        elif selection_setting == 3:
            # Third option - select input_3, fall back to input_1, then input_2
            selected_output = input_3 if input_3 is not None else (input_1 if input_1 is not None else input_2)
        else:
            # Default/first option (1) - select input_1, fall back to input_2, then input_3
            selected_output = input_1 if input_1 is not None else (input_2 if input_2 is not None else input_3)

        return (selected_output,)


NODE_CLASS_MAPPINGS = {
    "ThreeWaySwitch": ThreeWaySwitch,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "ThreeWaySwitch": "Switch (Three Way)",
}
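The nested conditionals above encode a fallback order per selection (the selected slot first, then the others). The same logic can be written as a table-driven sketch, which makes the fallback order explicit (the function name and dict shape are illustrative):

```python
def three_way_select(selection, inputs):
    # inputs: {1: value_or_None, 2: ..., 3: ...}
    # Fallback order mirrors the node: selected slot first, then slot 1, then the rest.
    order = {1: (1, 2, 3), 2: (2, 1, 3), 3: (3, 1, 2)}.get(selection, (1, 2, 3))
    for idx in order:
        if inputs.get(idx) is not None:
            return inputs[idx]
    return None

print(three_way_select(2, {1: "a", 2: None, 3: "c"}))  # a  (slot 2 empty, falls back to 1)
print(three_way_select(3, {1: None, 2: "b", 3: None}))  # b  (slots 3 and 1 empty)
```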
57  custom_nodes/controlaltai-nodes/two_way_switch_node.py  Normal file
@@ -0,0 +1,57 @@
class AnyType(str):
    """A special string subclass that equals any other type for ComfyUI type checking."""
    def __ne__(self, __value: object) -> bool:
        return False


# Create an instance to use as the "any" type
any_type = AnyType("*")


class TwoWaySwitch:
    """Two-way switch that selects between two inputs based on a selection setting."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "selection_setting": ("INT", {"default": 1, "min": 1, "max": 2}),
            },
            "optional": {
                "input_1": (any_type,),
                "input_2": (any_type,),
            }
        }

    RETURN_TYPES = (any_type,)
    RETURN_NAMES = ("output",)
    FUNCTION = "switch_inputs"

    CATEGORY = "ControlAltAI Nodes/Logic"

    @classmethod
    def VALIDATE_INPUTS(cls, **kwargs):
        """Allow any input types."""
        return True

    def switch_inputs(self, selection_setting=1, input_1=None, input_2=None):
        """
        Two-way switch that selects between two inputs based on selection_setting.
        Compatible with the IntegerSettings node:
        - selection_setting = 1 (Disable): selects input_1
        - selection_setting = 2 (Enable): selects input_2
        """
        if selection_setting == 2:
            # Enable state - select the second input
            selected_output = input_2 if input_2 is not None else input_1
        else:
            # Disable state (1) or any other value - select the first input
            selected_output = input_1 if input_1 is not None else input_2

        return (selected_output,)


NODE_CLASS_MAPPINGS = {
    "TwoWaySwitch": TwoWaySwitch,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "TwoWaySwitch": "Switch (Two Way)",
}
@@ -0,0 +1,78 @@
import { app } from "/scripts/app.js";

// Register the extension for the IntegerSettingsAdvanced node
app.registerExtension({
    name: "ControlAltAI.IntegerSettingsAdvanced",

    async beforeRegisterNodeDef(nodeType, nodeData, app) {
        if (nodeData.name === "IntegerSettingsAdvanced") {
            console.log("Registering IntegerSettingsAdvanced mutual exclusion behavior");

            const onNodeCreated = nodeType.prototype.onNodeCreated;
            nodeType.prototype.onNodeCreated = function () {
                const result = onNodeCreated?.apply(this, arguments);

                // Store a reference to the node
                const node = this;

                // Function to enforce mutual exclusion
                function enforceMutualExclusion(activeWidget) {
                    // Get all boolean widgets
                    const booleanWidgets = node.widgets.filter(w =>
                        w.type === "toggle" &&
                        (w.name === "setting_1" || w.name === "setting_2" || w.name === "setting_3")
                    );

                    // If a widget is being set to true, set the others to false
                    if (activeWidget.value === true) {
                        booleanWidgets.forEach(widget => {
                            if (widget !== activeWidget) {
                                widget.value = false;
                            }
                        });
                    }

                    // Always ensure at least one is true ("always one" behavior)
                    const anyEnabled = booleanWidgets.some(w => w.value === true);
                    if (!anyEnabled) {
                        // If none are enabled, enable setting_1 as the default
                        const setting1Widget = booleanWidgets.find(w => w.name === "setting_1");
                        if (setting1Widget) {
                            setting1Widget.value = true;
                        }
                    }

                    // Trigger a canvas redraw
                    if (app.graph) {
                        app.graph.setDirtyCanvas(true, false);
                    }
                }

                // Hook into widget callbacks after the node is fully created
                setTimeout(() => {
                    node.widgets.forEach(widget => {
                        if (widget.type === "toggle" &&
                            (widget.name === "setting_1" || widget.name === "setting_2" || widget.name === "setting_3")) {

                            // Store the original callback
                            const originalCallback = widget.callback;

                            // Override with mutual exclusion logic
                            widget.callback = function (value) {
                                // Call the original callback first
                                if (originalCallback) {
                                    originalCallback.call(this, value);
                                }

                                // Apply mutual exclusion
                                enforceMutualExclusion(this);
                            };
                        }
                    });
                }, 10);

                return result;
            };
        }
    }
});
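Stripped of the ComfyUI widget plumbing, the rule the extension enforces is a radio-button invariant: turning one toggle on turns the others off, and if everything ends up off, `setting_1` is re-enabled. A language-neutral sketch over plain dicts (function name and dict shape are illustrative, not the extension's API):

```python
def enforce_mutual_exclusion(widgets, active_name):
    # widgets: list of {"name": ..., "value": bool}; mirrors the JS rule above.
    active = next((w for w in widgets if w["name"] == active_name), None)
    if active and active["value"]:
        for w in widgets:
            if w is not active:
                w["value"] = False
    # "Always one" behavior: fall back to the first toggle when all are off.
    if not any(w["value"] for w in widgets):
        widgets[0]["value"] = True
    return widgets

widgets = [
    {"name": "setting_1", "value": True},
    {"name": "setting_2", "value": True},   # just toggled on
    {"name": "setting_3", "value": False},
]
print([w["value"] for w in enforce_mutual_exclusion(widgets, "setting_2")])
# [False, True, False]
```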
26  custom_nodes/controlaltai-nodes/xformers_instructions.txt  Normal file
@@ -0,0 +1,26 @@
XFormers is now needed for the Flux region spatial control to work.

Go to your python_embeded folder and check your PyTorch and CUDA version:

python.exe -c "import torch; print(torch.__version__)"

Check whether xformers is installed:

python.exe -m pip show xformers

Then go to: https://github.com/facebookresearch/xformers/releases

Check for the latest xformers version that is compatible with your installed PyTorch version.

You can install that version of xformers using this command:

python.exe -m pip install xformers==PUTVERSIONHERE --index-url https://download.pytorch.org/whl/cuVERSION

Example for PyTorch 2.5.1 with CUDA 12.4:
python.exe -m pip install xformers==0.0.28.post3 --index-url https://download.pytorch.org/whl/cu124

As of 8th December 2024, recommended:
xformers==0.0.28.post3
PyTorch 2.5.1
CUDA version: cu124 (for CUDA 12.4)