Compare commits

...

13 Commits

Author SHA1 Message Date
cabacaece8 Never halt on errors, log and continue
Some checks failed
Python Linting / Run Ruff (push) Has been cancelled
Python Linting / Run Pylint (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.10, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.11, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.12, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-unix-nightly (12.1, , linux, 3.11, [self-hosted Linux], nightly) (push) Has been cancelled
Execution Tests / test (macos-latest) (push) Has been cancelled
Execution Tests / test (ubuntu-latest) (push) Has been cancelled
Execution Tests / test (windows-latest) (push) Has been cancelled
Test server launches without errors / test (push) Has been cancelled
Unit Tests / test (macos-latest) (push) Has been cancelled
Unit Tests / test (ubuntu-latest) (push) Has been cancelled
Unit Tests / test (windows-2022) (push) Has been cancelled
Close stale issues / stale (push) Has been cancelled
Generate Pydantic Stubs from api.comfy.org / generate-models (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 02:05:36 +00:00
9e06eb3bce Harden setup script with better error handling
- Downloads to .part files, moves on success (no corrupt files)
- Skips existing non-empty files, removes empty/corrupt ones
- Ctrl+C trap cleans up partial downloads
- Preflight checks for required commands and directory
- CUDA verification after PyTorch install
- Symlink helper that warns if target is missing
- Summary at end lists all failed downloads and node deps
- Removed set -e in favor of explicit error handling
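
The download flow in the bullets above can be sketched as follows. This is a Python illustration of the pattern only (the actual script is bash, and names like `download_file` and `failed_downloads` are assumptions, not taken from it):

```python
import os
import urllib.request

failed_downloads = []   # summarized at the end instead of aborting the run

def download_file(url, dest):
    # Skip existing non-empty files; remove empty/corrupt leftovers.
    if os.path.exists(dest):
        if os.path.getsize(dest) > 0:
            return True
        os.remove(dest)
    part = dest + ".part"
    try:
        # Download to a .part file; move into place only on success, so an
        # interrupted transfer never leaves a corrupt final file behind.
        urllib.request.urlretrieve(url, part)
        os.replace(part, dest)
        return True
    except KeyboardInterrupt:
        if os.path.exists(part):
            os.remove(part)      # the Ctrl+C cleanup trap in the bash version
        raise
    except OSError:
        if os.path.exists(part):
            os.remove(part)
        failed_downloads.append(url)
        return False
```

At the end of a run, the script can report `failed_downloads` as its summary; continuing past individual failures instead of halting is what replacing `set -e` with explicit error handling buys.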

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 02:04:57 +00:00
700d6ead21 Add --upgrade to pip installs in setup script
Ensures packages are updated if already present on vast.ai instances.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 01:58:21 +00:00
f09734b0ee Add custom nodes, Civitai loras (LFS), and vast.ai setup script
Includes 30 custom nodes committed directly, 7 Civitai-exclusive
loras stored via Git LFS, and a setup script that installs all
dependencies and downloads HuggingFace-hosted models on vast.ai.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 00:56:42 +00:00
AustinMroz
2b70ab9ad0 Add a Create List node (#12173) 2026-02-05 01:18:21 -05:00
Comfy Org PR Bot
00efcc6cd0 Bump comfyui-frontend-package to 1.38.13 (#12238) 2026-02-05 01:17:37 -05:00
comfyanonymous
cb459573c8 ComfyUI v0.12.3 2026-02-05 01:13:35 -05:00
comfyanonymous
35183543e0 Add VAE tiled decode node for audio. (#12299) 2026-02-05 01:12:04 -05:00
blepping
a246cc02b2 Improvements to ACE-Steps 1.5 text encoding (#12283) 2026-02-05 00:17:37 -05:00
comfyanonymous
a50c32d63f Disable sage attention on ace step 1.5 (#12297) 2026-02-04 22:15:30 -05:00
comfyanonymous
6125b80979 Add llm sampling options and make reference audio work on ace step 1.5 (#12295) 2026-02-04 21:29:22 -05:00
comfyanonymous
c8fcbd66ee Try to fix ace text encoder slowness on some configs. (#12290) 2026-02-04 19:37:05 -05:00
comfyanonymous
26dd7eb421 Fix ace step nan issue on some hardware/pytorch configs. (#12289) 2026-02-04 18:25:06 -05:00
2288 changed files with 748843 additions and 50 deletions

.gitattributes vendored

@@ -1,3 +1,4 @@
/web/assets/** linguist-generated
/web/** linguist-vendored
comfy_api_nodes/apis/__init__.py linguist-generated
models/loras/*.safetensors filter=lfs diff=lfs merge=lfs -text

.gitignore vendored

@@ -3,10 +3,19 @@ __pycache__/
/output/
/input/
!/input/example.png
/models/
/models/*
!/models/loras/
/models/loras/*
!/models/loras/Anime_CRABDM.safetensors
!/models/loras/CharacterDesign-FluxV2.safetensors
!/models/loras/FluxMythSharpL1nes.safetensors
!/models/loras/ILLUSTRATION (FLUX) - V3.1.safetensors
!/models/loras/Mezzotint_Artstyle_for_Flux_-_by_Ethanar.safetensors
!/models/loras/oil-paint.safetensors
!/models/loras/Phandigrams_III.safetensors
/temp/
/custom_nodes/
!custom_nodes/example_node.py.example
/custom_nodes/**/.*
!/custom_nodes/**/.gitignore
extra_model_paths.yaml
/.vs
.vscode/


@@ -183,7 +183,7 @@ class AceStepAttention(nn.Module):
else:
attn_bias = window_bias
attn_output = optimized_attention(query_states, key_states, value_states, self.num_heads, attn_bias, skip_reshape=True)
attn_output = optimized_attention(query_states, key_states, value_states, self.num_heads, attn_bias, skip_reshape=True, low_precision_attention=False)
attn_output = self.o_proj(attn_output)
return attn_output
@@ -1035,8 +1035,7 @@ class AceStepConditionGenerationModel(nn.Module):
audio_codes = torch.nn.functional.pad(audio_codes, (0, math.ceil(src_latents.shape[1] / 5) - audio_codes.shape[1]), "constant", 35847)
lm_hints_5Hz = self.tokenizer.quantizer.get_output_from_indices(audio_codes, dtype=text_hidden_states.dtype)
else:
assert False
# TODO ?
lm_hints_5Hz, indices = self.tokenizer.tokenize(refer_audio_acoustic_hidden_states_packed)
lm_hints = self.detokenizer(lm_hints_5Hz)


@@ -524,6 +524,9 @@ def attention_pytorch(q, k, v, heads, mask=None, attn_precision=None, skip_resha
@wrap_attn
def attention_sage(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False, skip_output_reshape=False, **kwargs):
if kwargs.get("low_precision_attention", True) is False:
return attention_pytorch(q, k, v, heads, mask=mask, skip_reshape=skip_reshape, skip_output_reshape=skip_output_reshape, **kwargs)
exception_fallback = False
if skip_reshape:
b, _, _, dim_head = q.shape
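
The guard above lets callers that pass `low_precision_attention=False` (such as the ACE-Step attention layer in the first hunk) bypass the sage kernel and take the full-precision PyTorch path. A minimal numpy stand-in for that dispatch (illustrative only, not the real kernels):

```python
import numpy as np

def sdpa(q, k, v):
    # Plain scaled dot-product attention over (batch, seq, dim) arrays.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def attention(q, k, v, low_precision_attention=True):
    # Mirrors the diff: when the caller forbids low precision, fall back
    # to the standard full-precision path instead of the fast kernel.
    if not low_precision_attention:
        return sdpa(q, k, v)
    # Stand-in for the low-precision (sage) kernel: compute in float16.
    out = sdpa(q.astype(np.float16), k.astype(np.float16), v.astype(np.float16))
    return out.astype(q.dtype)
```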


@@ -1548,6 +1548,7 @@ class ACEStep15(BaseModel):
def extra_conds(self, **kwargs):
out = super().extra_conds(**kwargs)
device = kwargs["device"]
noise = kwargs["noise"]
cross_attn = kwargs.get("cross_attn", None)
if cross_attn is not None:
@@ -1571,15 +1572,19 @@ class ACEStep15(BaseModel):
1.4844e-01, 9.4727e-02, 3.8477e-01, -1.2578e+00, -3.3203e-01,
-8.5547e-01, 4.3359e-01, 4.2383e-01, -8.9453e-01, -5.0391e-01,
-5.6152e-02, -2.9219e+00, -2.4658e-02, 5.0391e-01, 9.8438e-01,
7.2754e-02, -2.1582e-01, 6.3672e-01, 1.0000e+00]]], device=device).movedim(-1, 1).repeat(1, 1, 750)
7.2754e-02, -2.1582e-01, 6.3672e-01, 1.0000e+00]]], device=device).movedim(-1, 1).repeat(1, 1, noise.shape[2])
pass_audio_codes = True
else:
refer_audio = refer_audio[-1]
refer_audio = refer_audio[-1][:, :, :noise.shape[2]]
pass_audio_codes = False
if pass_audio_codes:
audio_codes = kwargs.get("audio_codes", None)
if audio_codes is not None:
out['audio_codes'] = comfy.conds.CONDRegular(torch.tensor(audio_codes, device=device))
refer_audio = refer_audio[:, :, :750]
out['refer_audio'] = comfy.conds.CONDRegular(refer_audio)
audio_codes = kwargs.get("audio_codes", None)
if audio_codes is not None:
out['audio_codes'] = comfy.conds.CONDRegular(torch.tensor(audio_codes, device=device))
return out
class Omnigen2(BaseModel):


@@ -54,6 +54,8 @@ try:
SDPA_BACKEND_PRIORITY.insert(0, SDPBackend.CUDNN_ATTENTION)
def scaled_dot_product_attention(q, k, v, *args, **kwargs):
if q.nelement() < 1024 * 128: # arbitrary number, for small inputs cudnn attention seems slower
return torch.nn.functional.scaled_dot_product_attention(q, k, v, *args, **kwargs)
with sdpa_kernel(SDPA_BACKEND_PRIORITY, set_priority=True):
return torch.nn.functional.scaled_dot_product_attention(q, k, v, *args, **kwargs)
else:


@@ -976,7 +976,7 @@ class VAE:
if overlap is not None:
args["overlap"] = overlap
if dims == 1:
if dims == 1 or self.extra_1d_channel is not None:
args.pop("tile_y")
output = self.decode_tiled_1d(samples, **args)
elif dims == 2:
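
For 1-D (audio) latents, the tiled path decodes overlapping windows and blends them. A hedged sketch of that idea, assuming a length-preserving `decode` callable and linear cross-fade weights (ComfyUI's real `decode_tiled_1d` differs):

```python
import numpy as np

def decode_tiled_1d(decode, latent, tile=256, overlap=64):
    # Sketch only: `decode` maps a (channels, n) latent window to a
    # (channels, n) signal window; overlapping windows are cross-faded.
    channels, length = latent.shape
    out = np.zeros((channels, length))
    acc = np.zeros(length)          # accumulated blend weight per sample
    step = tile - overlap
    for start in range(0, length, step):
        end = min(start + tile, length)
        window = decode(latent[:, start:end])
        n = end - start
        # Triangular ramp clipped at `overlap`: full weight in the middle,
        # fading at both edges so adjacent tiles cross-fade smoothly.
        w = np.minimum(np.arange(1, n + 1), np.arange(n, 0, -1))
        w = np.minimum(w, max(overlap, 1)).astype(float)
        out[:, start:end] += window * w
        acc[start:end] += w
        if end == length:
            break
    return out / acc
```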


@@ -3,6 +3,7 @@ import comfy.text_encoders.llama
from comfy import sd1_clip
import torch
import math
import yaml
import comfy.utils
@@ -101,9 +102,7 @@ def sample_manual_loop_no_classes(
return output_audio_codes
def generate_audio_codes(model, positive, negative, min_tokens=1, max_tokens=1024, seed=0):
cfg_scale = 2.0
def generate_audio_codes(model, positive, negative, min_tokens=1, max_tokens=1024, seed=0, cfg_scale=2.0, temperature=0.85, top_p=0.9, top_k=0):
positive = [[token for token, _ in inner_list] for inner_list in positive]
negative = [[token for token, _ in inner_list] for inner_list in negative]
positive = positive[0]
@@ -120,34 +119,80 @@ def generate_audio_codes(model, positive, negative, min_tokens=1, max_tokens=102
positive = [model.special_tokens["pad"]] * pos_pad + positive
paddings = [pos_pad, neg_pad]
return sample_manual_loop_no_classes(model, [positive, negative], paddings, cfg_scale=cfg_scale, seed=seed, min_tokens=min_tokens, max_new_tokens=max_tokens)
return sample_manual_loop_no_classes(model, [positive, negative], paddings, cfg_scale=cfg_scale, temperature=temperature, top_p=top_p, top_k=top_k, seed=seed, min_tokens=min_tokens, max_new_tokens=max_tokens)
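
The new `temperature`, `top_p`, and `top_k` parameters are standard LLM sampling controls. A generic sketch of top-k plus nucleus (top-p) logit filtering as such samplers commonly apply it (the actual `sample_manual_loop_no_classes` may differ):

```python
import numpy as np

def filter_logits(logits, top_k=0, top_p=0.9):
    # Keep only the top_k highest logits (0 disables the filter), then keep
    # the smallest set of tokens whose cumulative probability exceeds top_p
    # (nucleus sampling); everything else is masked to -inf.
    logits = logits.copy()
    if top_k > 0:
        kth = np.sort(logits)[-min(top_k, logits.size)]
        logits[logits < kth] = -np.inf
    if 0.0 < top_p < 1.0:
        order = np.argsort(logits)[::-1]
        probs = np.exp(logits[order] - logits[order][0])
        probs /= probs.sum()
        cutoff = np.searchsorted(np.cumsum(probs), top_p) + 1
        mask = np.full_like(logits, -np.inf)
        mask[order[:cutoff]] = logits[order[:cutoff]]
        logits = mask
    return logits
```

Temperature is typically applied before this step by dividing the logits, and `cfg_scale` blends the conditional (positive) and unconditional (negative) logits before sampling.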
class ACE15Tokenizer(sd1_clip.SD1Tokenizer):
def __init__(self, embedding_directory=None, tokenizer_data={}):
super().__init__(embedding_directory=embedding_directory, tokenizer_data=tokenizer_data, name="qwen3_06b", tokenizer=Qwen3Tokenizer)
def _metas_to_cot(self, *, return_yaml: bool = False, **kwargs) -> str:
user_metas = {
k: kwargs.pop(k)
for k in ("bpm", "duration", "keyscale", "timesignature", "language", "caption")
if k in kwargs
}
timesignature = user_metas.get("timesignature")
if isinstance(timesignature, str) and timesignature.endswith("/4"):
user_metas["timesignature"] = timesignature.rsplit("/", 1)[0]
user_metas = {
k: v if not isinstance(v, str) or not v.isdigit() else int(v)
for k, v in user_metas.items()
if v not in {"unspecified", None}
}
if len(user_metas):
meta_yaml = yaml.dump(user_metas, allow_unicode=True, sort_keys=True).strip()
else:
meta_yaml = ""
return f"<think>\n{meta_yaml}\n</think>" if not return_yaml else meta_yaml
def _metas_to_cap(self, **kwargs) -> str:
use_keys = ("bpm", "duration", "keyscale", "timesignature")
user_metas = { k: kwargs.pop(k, "N/A") for k in use_keys }
duration = user_metas["duration"]
if duration == "N/A":
user_metas["duration"] = "30 seconds"
elif isinstance(duration, (str, int, float)):
user_metas["duration"] = f"{math.ceil(float(duration))} seconds"
else:
raise TypeError("Unexpected type for duration key, must be str, int or float")
return "\n".join(f"- {k}: {user_metas[k]}" for k in use_keys)
def tokenize_with_weights(self, text, return_word_ids=False, **kwargs):
out = {}
lyrics = kwargs.get("lyrics", "")
bpm = kwargs.get("bpm", 120)
duration = kwargs.get("duration", 120)
keyscale = kwargs.get("keyscale", "C major")
timesignature = kwargs.get("timesignature", 2)
language = kwargs.get("language", "en")
language = kwargs.get("language")
seed = kwargs.get("seed", 0)
generate_audio_codes = kwargs.get("generate_audio_codes", True)
cfg_scale = kwargs.get("cfg_scale", 2.0)
temperature = kwargs.get("temperature", 0.85)
top_p = kwargs.get("top_p", 0.9)
top_k = kwargs.get("top_k", 0.0)
duration = math.ceil(duration)
meta_lm = 'bpm: {}\nduration: {}\nkeyscale: {}\ntimesignature: {}'.format(bpm, duration, keyscale, timesignature)
lm_template = "<|im_start|>system\n# Instruction\nGenerate audio semantic tokens based on the given conditions:\n\n<|im_end|>\n<|im_start|>user\n# Caption\n{}\n{}\n<|im_end|>\n<|im_start|>assistant\n<think>\n{}\n</think>\n\n<|im_end|>\n"
kwargs["duration"] = duration
meta_cap = '- bpm: {}\n- timesignature: {}\n- keyscale: {}\n- duration: {}\n'.format(bpm, timesignature, keyscale, duration)
out["lm_prompt"] = self.qwen3_06b.tokenize_with_weights(lm_template.format(text, lyrics, meta_lm), disable_weights=True)
out["lm_prompt_negative"] = self.qwen3_06b.tokenize_with_weights(lm_template.format(text, lyrics, ""), disable_weights=True)
cot_text = self._metas_to_cot(caption = text, **kwargs)
meta_cap = self._metas_to_cap(**kwargs)
out["lyrics"] = self.qwen3_06b.tokenize_with_weights("# Languages\n{}\n\n# Lyric{}<|endoftext|><|endoftext|>".format(language, lyrics), return_word_ids, disable_weights=True, **kwargs)
out["qwen3_06b"] = self.qwen3_06b.tokenize_with_weights("# Instruction\nGenerate audio semantic tokens based on the given conditions:\n\n# Caption\n{}# Metas\n{}<|endoftext|>\n<|endoftext|>".format(text, meta_cap), return_word_ids, **kwargs)
out["lm_metadata"] = {"min_tokens": duration * 5, "seed": seed}
lm_template = "<|im_start|>system\n# Instruction\nGenerate audio semantic tokens based on the given conditions:\n\n<|im_end|>\n<|im_start|>user\n# Caption\n{}\n# Lyric\n{}\n<|im_end|>\n<|im_start|>assistant\n{}\n<|im_end|>\n"
out["lm_prompt"] = self.qwen3_06b.tokenize_with_weights(lm_template.format(text, lyrics, cot_text), disable_weights=True)
out["lm_prompt_negative"] = self.qwen3_06b.tokenize_with_weights(lm_template.format(text, lyrics, "<think>\n</think>"), disable_weights=True)
out["lyrics"] = self.qwen3_06b.tokenize_with_weights("# Languages\n{}\n\n# Lyric\n{}<|endoftext|><|endoftext|>".format(language if language is not None else "", lyrics), return_word_ids, disable_weights=True, **kwargs)
out["qwen3_06b"] = self.qwen3_06b.tokenize_with_weights("# Instruction\nGenerate audio semantic tokens based on the given conditions:\n\n# Caption\n{}\n# Metas\n{}\n<|endoftext|>\n<|endoftext|>".format(text, meta_cap), return_word_ids, **kwargs)
out["lm_metadata"] = {"min_tokens": duration * 5,
"seed": seed,
"generate_audio_codes": generate_audio_codes,
"cfg_scale": cfg_scale,
"temperature": temperature,
"top_p": top_p,
"top_k": top_k,
}
return out
@@ -203,10 +248,14 @@ class ACE15TEModel(torch.nn.Module):
self.qwen3_06b.set_clip_options({"layer": [0]})
lyrics_embeds, _, extra_l = self.qwen3_06b.encode_token_weights(token_weight_pairs_lyrics)
lm_metadata = token_weight_pairs["lm_metadata"]
audio_codes = generate_audio_codes(getattr(self, self.lm_model, self.qwen3_06b), token_weight_pairs["lm_prompt"], token_weight_pairs["lm_prompt_negative"], min_tokens=lm_metadata["min_tokens"], max_tokens=lm_metadata["min_tokens"], seed=lm_metadata["seed"])
out = {"conditioning_lyrics": lyrics_embeds[:, 0]}
return base_out, None, {"conditioning_lyrics": lyrics_embeds[:, 0], "audio_codes": [audio_codes]}
lm_metadata = token_weight_pairs["lm_metadata"]
if lm_metadata["generate_audio_codes"]:
audio_codes = generate_audio_codes(getattr(self, self.lm_model, self.qwen3_06b), token_weight_pairs["lm_prompt"], token_weight_pairs["lm_prompt_negative"], min_tokens=lm_metadata["min_tokens"], max_tokens=lm_metadata["min_tokens"], seed=lm_metadata["seed"], cfg_scale=lm_metadata["cfg_scale"], temperature=lm_metadata["temperature"], top_p=lm_metadata["top_p"], top_k=lm_metadata["top_k"])
out["audio_codes"] = [audio_codes]
return base_out, None, out
def set_clip_options(self, options):
self.qwen3_06b.set_clip_options(options)


@@ -651,10 +651,10 @@ class Llama2_(nn.Module):
mask = None
if attention_mask is not None:
mask = 1.0 - attention_mask.to(x.dtype).reshape((attention_mask.shape[0], 1, -1, attention_mask.shape[-1])).expand(attention_mask.shape[0], 1, seq_len, attention_mask.shape[-1])
mask = mask.masked_fill(mask.to(torch.bool), torch.finfo(x.dtype).min)
mask = mask.masked_fill(mask.to(torch.bool), torch.finfo(x.dtype).min / 4)
if seq_len > 1:
causal_mask = torch.empty(past_len + seq_len, past_len + seq_len, dtype=x.dtype, device=x.device).fill_(torch.finfo(x.dtype).min).triu_(1)
causal_mask = torch.empty(past_len + seq_len, past_len + seq_len, dtype=x.dtype, device=x.device).fill_(torch.finfo(x.dtype).min / 4).triu_(1)
if mask is not None:
mask += causal_mask
else:
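
The `/ 4` headroom matters because the padding mask and the causal mask are summed (`mask += causal_mask`): two values at `torch.finfo(dtype).min` overflow float16 to `-inf`, and fully masked rows can then produce NaNs through softmax. A numpy illustration of the overflow:

```python
import numpy as np

fmin = float(np.finfo(np.float16).min)   # -65504.0, float16's most negative value
overflow = np.float16(fmin + fmin)       # -131008 does not fit float16 -> -inf
safe = np.float16(fmin / 4 + fmin / 4)   # -32752 still fits -> finite
```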


@@ -44,13 +44,18 @@ class TextEncodeAceStepAudio15(io.ComfyNode):
io.Combo.Input("timesignature", options=['2', '3', '4', '6']),
io.Combo.Input("language", options=["en", "ja", "zh", "es", "de", "fr", "pt", "ru", "it", "nl", "pl", "tr", "vi", "cs", "fa", "id", "ko", "uk", "hu", "ar", "sv", "ro", "el"]),
io.Combo.Input("keyscale", options=[f"{root} {quality}" for quality in ["major", "minor"] for root in ["C", "C#", "Db", "D", "D#", "Eb", "E", "F", "F#", "Gb", "G", "G#", "Ab", "A", "A#", "Bb", "B"]]),
io.Boolean.Input("generate_audio_codes", default=True, tooltip="Enable the LLM that generates audio codes. This can be slow but will increase the quality of the generated audio. Turn this off if you are giving the model an audio reference.", advanced=True),
io.Float.Input("cfg_scale", default=2.0, min=0.0, max=100.0, step=0.1, advanced=True),
io.Float.Input("temperature", default=0.85, min=0.0, max=2.0, step=0.01, advanced=True),
io.Float.Input("top_p", default=0.9, min=0.0, max=2000.0, step=0.01, advanced=True),
io.Int.Input("top_k", default=0, min=0, max=100, advanced=True),
],
outputs=[io.Conditioning.Output()],
)
@classmethod
def execute(cls, clip, tags, lyrics, seed, bpm, duration, timesignature, language, keyscale) -> io.NodeOutput:
tokens = clip.tokenize(tags, lyrics=lyrics, bpm=bpm, duration=duration, timesignature=int(timesignature), language=language, keyscale=keyscale, seed=seed)
def execute(cls, clip, tags, lyrics, seed, bpm, duration, timesignature, language, keyscale, generate_audio_codes, cfg_scale, temperature, top_p, top_k) -> io.NodeOutput:
tokens = clip.tokenize(tags, lyrics=lyrics, bpm=bpm, duration=duration, timesignature=int(timesignature), language=language, keyscale=keyscale, seed=seed, generate_audio_codes=generate_audio_codes, cfg_scale=cfg_scale, temperature=temperature, top_p=top_p, top_k=top_k)
conditioning = clip.encode_from_tokens_scheduled(tokens)
return io.NodeOutput(conditioning)
@@ -100,14 +105,15 @@ class EmptyAceStep15LatentAudio(io.ComfyNode):
latent = torch.zeros([batch_size, 64, length], device=comfy.model_management.intermediate_device())
return io.NodeOutput({"samples": latent, "type": "audio"})
class ReferenceTimbreAudio(io.ComfyNode):
class ReferenceAudio(io.ComfyNode):
@classmethod
def define_schema(cls):
return io.Schema(
node_id="ReferenceTimbreAudio",
display_name="Reference Audio",
category="advanced/conditioning/audio",
is_experimental=True,
description="This node sets the reference audio for timbre (for ace step 1.5)",
description="This node sets the reference audio for ace step 1.5",
inputs=[
io.Conditioning.Input("conditioning"),
io.Latent.Input("latent", optional=True),
@@ -131,7 +137,7 @@ class AceExtension(ComfyExtension):
EmptyAceStepLatentAudio,
TextEncodeAceStepAudio15,
EmptyAceStep15LatentAudio,
ReferenceTimbreAudio,
ReferenceAudio,
]
async def comfy_entrypoint() -> AceExtension:


@@ -94,6 +94,19 @@ class VAEEncodeAudio(IO.ComfyNode):
encode = execute # TODO: remove
def vae_decode_audio(vae, samples, tile=None, overlap=None):
if tile is not None:
audio = vae.decode_tiled(samples["samples"], tile_y=tile, overlap=overlap).movedim(-1, 1)
else:
audio = vae.decode(samples["samples"]).movedim(-1, 1)
std = torch.std(audio, dim=[1, 2], keepdim=True) * 5.0
std[std < 1.0] = 1.0
audio /= std
vae_sample_rate = getattr(vae, "audio_sample_rate", 44100)
return {"waveform": audio, "sample_rate": vae_sample_rate if "sample_rate" not in samples else samples["sample_rate"]}
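
The std-based scaling in the helper above is a loudness guard: divide by five times the per-item standard deviation, but never amplify quiet audio (the divisor is clamped to at least 1). A numpy stand-in, using `ddof=1` to match `torch.std`'s default unbiased estimator:

```python
import numpy as np

def normalize_audio(audio):
    # audio: (batch, channels, samples). Loud outputs are scaled down by
    # 5x the per-item std; the clamp leaves quiet audio untouched.
    std = audio.std(axis=(1, 2), keepdims=True, ddof=1) * 5.0
    std = np.maximum(std, 1.0)
    return audio / std
```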
class VAEDecodeAudio(IO.ComfyNode):
@classmethod
def define_schema(cls):
@@ -111,16 +124,33 @@ class VAEDecodeAudio(IO.ComfyNode):
@classmethod
def execute(cls, vae, samples) -> IO.NodeOutput:
audio = vae.decode(samples["samples"]).movedim(-1, 1)
std = torch.std(audio, dim=[1,2], keepdim=True) * 5.0
std[std < 1.0] = 1.0
audio /= std
vae_sample_rate = getattr(vae, "audio_sample_rate", 44100)
return IO.NodeOutput({"waveform": audio, "sample_rate": vae_sample_rate if "sample_rate" not in samples else samples["sample_rate"]})
return IO.NodeOutput(vae_decode_audio(vae, samples))
decode = execute # TODO: remove
class VAEDecodeAudioTiled(IO.ComfyNode):
@classmethod
def define_schema(cls):
return IO.Schema(
node_id="VAEDecodeAudioTiled",
search_aliases=["latent to audio"],
display_name="VAE Decode Audio (Tiled)",
category="latent/audio",
inputs=[
IO.Latent.Input("samples"),
IO.Vae.Input("vae"),
IO.Int.Input("tile_size", default=512, min=32, max=8192, step=8),
IO.Int.Input("overlap", default=64, min=0, max=1024, step=8),
],
outputs=[IO.Audio.Output()],
)
@classmethod
def execute(cls, vae, samples, tile_size, overlap) -> IO.NodeOutput:
return IO.NodeOutput(vae_decode_audio(vae, samples, tile_size, overlap))
class SaveAudio(IO.ComfyNode):
@classmethod
def define_schema(cls):
@@ -675,6 +705,7 @@ class AudioExtension(ComfyExtension):
EmptyLatentAudio,
VAEEncodeAudio,
VAEDecodeAudio,
VAEDecodeAudioTiled,
SaveAudio,
SaveAudioMP3,
SaveAudioOpus,


@@ -0,0 +1,47 @@
from __future__ import annotations
from typing_extensions import override
from comfy_api.latest import ComfyExtension, io
class CreateList(io.ComfyNode):
@classmethod
def define_schema(cls):
template_matchtype = io.MatchType.Template("type")
template_autogrow = io.Autogrow.TemplatePrefix(
input=io.MatchType.Input("input", template=template_matchtype),
prefix="input",
)
return io.Schema(
node_id="CreateList",
display_name="Create List",
category="logic",
is_input_list=True,
search_aliases=["Image Iterator", "Text Iterator", "Iterator"],
inputs=[io.Autogrow.Input("inputs", template=template_autogrow)],
outputs=[
io.MatchType.Output(
template=template_matchtype,
is_output_list=True,
display_name="list",
),
],
)
@classmethod
def execute(cls, inputs: io.Autogrow.Type) -> io.NodeOutput:
output_list = []
for input in inputs.values():
output_list += input
return io.NodeOutput(output_list)
class ToolkitExtension(ComfyExtension):
@override
async def get_node_list(self) -> list[type[io.ComfyNode]]:
return [
CreateList,
]
async def comfy_entrypoint() -> ToolkitExtension:
return ToolkitExtension()
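
Functionally, `CreateList.execute` concatenates the per-input lists (each input arrives as a list because `is_input_list=True`) in insertion order. A plain-Python equivalent of that semantics:

```python
def create_list(inputs: dict) -> list:
    # Same idea as CreateList.execute above: flatten the per-input lists
    # into one output list, preserving input order.
    output_list = []
    for values in inputs.values():
        output_list += values
    return output_list
```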


@@ -1,3 +1,3 @@
# This file is automatically generated by the build process when version is
# updated in pyproject.toml.
__version__ = "0.12.2"
__version__ = "0.12.3"

custom_nodes/ComfyMath/.gitignore vendored Normal file

@@ -0,0 +1 @@
**/__pycache__


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,19 @@
# ComfyMath
Provides Math Nodes for [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
## Features
Provides nodes for:
* Boolean Logic
* Integer Arithmetic
* Floating Point Arithmetic and Functions
* Vec2, Vec3, and Vec4 Arithmetic and Functions
## Installation
From the `custom_nodes` directory in your ComfyUI installation, run:
```sh
git clone https://github.com/evanspearman/ComfyMath.git
```

View File

@@ -0,0 +1,29 @@
from .src.comfymath.convert import NODE_CLASS_MAPPINGS as convert_NCM
from .src.comfymath.bool import NODE_CLASS_MAPPINGS as bool_NCM
from .src.comfymath.int import NODE_CLASS_MAPPINGS as int_NCM
from .src.comfymath.float import NODE_CLASS_MAPPINGS as float_NCM
from .src.comfymath.number import NODE_CLASS_MAPPINGS as number_NCM
from .src.comfymath.vec import NODE_CLASS_MAPPINGS as vec_NCM
from .src.comfymath.control import NODE_CLASS_MAPPINGS as control_NCM
from .src.comfymath.graphics import NODE_CLASS_MAPPINGS as graphics_NCM
NODE_CLASS_MAPPINGS = {
**convert_NCM,
**bool_NCM,
**int_NCM,
**float_NCM,
**number_NCM,
**vec_NCM,
**control_NCM,
**graphics_NCM,
}
def remove_cm_prefix(node_mapping: str) -> str:
if node_mapping.startswith("CM_"):
return node_mapping[3:]
return node_mapping
NODE_DISPLAY_NAME_MAPPINGS = {key: remove_cm_prefix(key) for key in NODE_CLASS_MAPPINGS}
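The display names shown in the ComfyUI menu are derived by stripping the `CM_` prefix from each mapping key. A quick standalone check of that helper (re-declared here so the snippet runs on its own):

```python
def remove_cm_prefix(node_mapping: str) -> str:
    # Strip the "CM_" namespace prefix used by the class-mapping keys.
    if node_mapping.startswith("CM_"):
        return node_mapping[3:]
    return node_mapping

# Prefixed keys lose the prefix; anything else passes through unchanged.
print(remove_cm_prefix("CM_BoolToInt"))  # BoolToInt
print(remove_cm_prefix("BoolToInt"))     # BoolToInt
```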

View File

@@ -0,0 +1,19 @@
[tool.poetry]
name = "comfymath"
version = "0.1.0"
description = "Math nodes for ComfyUI"
authors = ["Evan Spearman <evan@spearman.mb.ca>"]
license = "Apache-2.0"
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.25.1"
[tool.poetry.group.dev.dependencies]
mypy = "^1.4.1"
black = "^23.7.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -0,0 +1 @@
numpy

View File

@@ -0,0 +1,59 @@
from typing import Any, Callable, Mapping
DEFAULT_BOOL = ("BOOLEAN", {"default": False})
BOOL_UNARY_OPERATIONS: Mapping[str, Callable[[bool], bool]] = {
"Not": lambda a: not a,
}
BOOL_BINARY_OPERATIONS: Mapping[str, Callable[[bool, bool], bool]] = {
"Nor": lambda a, b: not (a or b),
"Xor": lambda a, b: a ^ b,
"Nand": lambda a, b: not (a and b),
"And": lambda a, b: a and b,
"Xnor": lambda a, b: not (a ^ b),
"Or": lambda a, b: a or b,
"Eq": lambda a, b: a == b,
"Neq": lambda a, b: a != b,
}
class BoolUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {"op": (list(BOOL_UNARY_OPERATIONS.keys()),), "a": DEFAULT_BOOL}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/bool"
def op(self, op: str, a: bool) -> tuple[bool]:
return (BOOL_UNARY_OPERATIONS[op](a),)
class BoolBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(BOOL_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_BOOL,
"b": DEFAULT_BOOL,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/bool"
def op(self, op: str, a: bool, b: bool) -> tuple[bool]:
return (BOOL_BINARY_OPERATIONS[op](a, b),)
NODE_CLASS_MAPPINGS = {
"CM_BoolUnaryOperation": BoolUnaryOperation,
"CM_BoolBinaryOperation": BoolBinaryOperation,
}
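Each node class dispatches through its operation table via the method named by its `FUNCTION` attribute. A self-contained sketch of that pattern with a trimmed two-entry table; the call convention shown is an assumption based on how ComfyUI invokes `FUNCTION` with the widget values:

```python
from typing import Callable, Mapping

# Trimmed mirror of the dispatch-table pattern used by the bool nodes above.
BOOL_BINARY_OPERATIONS: Mapping[str, Callable[[bool, bool], bool]] = {
    "And": lambda a, b: a and b,
    "Xor": lambda a, b: a ^ b,
}

class BoolBinaryOperation:
    FUNCTION = "op"

    def op(self, op: str, a: bool, b: bool) -> tuple[bool]:
        # Look up the selected operation name and apply it to the inputs.
        return (BOOL_BINARY_OPERATIONS[op](a, b),)

node = BoolBinaryOperation()
result = getattr(node, node.FUNCTION)("Xor", True, True)
print(result)  # (False,)
```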

View File

@@ -0,0 +1,3 @@
from typing import Any, Mapping
NODE_CLASS_MAPPINGS: Mapping[str, Any] = {}

View File

@@ -0,0 +1,273 @@
from typing import Any, Mapping
from .vec import VEC2_ZERO, VEC3_ZERO, VEC4_ZERO
from .types import Number, Vec2, Vec3, Vec4
class BoolToInt:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("BOOLEAN", {"default": False})}}
RETURN_TYPES = ("INT",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: bool) -> tuple[int]:
return (int(a),)
class IntToBool:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("INT", {"default": 0})}}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: int) -> tuple[bool]:
return (a != 0,)
class FloatToInt:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("FLOAT", {"default": 0.0, "round": False})}}
RETURN_TYPES = ("INT",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: float) -> tuple[int]:
return (int(a),)
class IntToFloat:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("INT", {"default": 0})}}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: int) -> tuple[float]:
return (float(a),)
class IntToNumber:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("INT", {"default": 0})}}
RETURN_TYPES = ("NUMBER",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: int) -> tuple[Number]:
return (a,)
class NumberToInt:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("NUMBER", {"default": 0.0})}}
RETURN_TYPES = ("INT",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: Number) -> tuple[int]:
return (int(a),)
class FloatToNumber:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("FLOAT", {"default": 0.0, "round": False})}}
RETURN_TYPES = ("NUMBER",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: float) -> tuple[Number]:
return (a,)
class NumberToFloat:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("NUMBER", {"default": 0.0})}}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: Number) -> tuple[float]:
return (float(a),)
class ComposeVec2:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"x": ("FLOAT", {"default": 0.0, "round": False}),
"y": ("FLOAT", {"default": 0.0, "round": False}),
}
}
RETURN_TYPES = ("VEC2",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, x: float, y: float) -> tuple[Vec2]:
return ((x, y),)
class FillVec2:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"a": ("FLOAT", {"default": 0.0, "round": False}),
}
}
RETURN_TYPES = ("VEC2",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: float) -> tuple[Vec2]:
return ((a, a),)
class BreakoutVec2:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("VEC2", {"default": VEC2_ZERO})}}
RETURN_TYPES = ("FLOAT", "FLOAT")
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: Vec2) -> tuple[float, float]:
return (a[0], a[1])
class ComposeVec3:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"x": ("FLOAT", {"default": 0.0}),
"y": ("FLOAT", {"default": 0.0}),
"z": ("FLOAT", {"default": 0.0}),
}
}
RETURN_TYPES = ("VEC3",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, x: float, y: float, z: float) -> tuple[Vec3]:
return ((x, y, z),)
class FillVec3:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"a": ("FLOAT", {"default": 0.0}),
}
}
RETURN_TYPES = ("VEC3",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: float) -> tuple[Vec3]:
return ((a, a, a),)
class BreakoutVec3:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("VEC3", {"default": VEC3_ZERO})}}
RETURN_TYPES = ("FLOAT", "FLOAT", "FLOAT")
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: Vec3) -> tuple[float, float, float]:
return (a[0], a[1], a[2])
class ComposeVec4:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"x": ("FLOAT", {"default": 0.0}),
"y": ("FLOAT", {"default": 0.0}),
"z": ("FLOAT", {"default": 0.0}),
"w": ("FLOAT", {"default": 0.0}),
}
}
RETURN_TYPES = ("VEC4",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, x: float, y: float, z: float, w: float) -> tuple[Vec4]:
return ((x, y, z, w),)
class FillVec4:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"a": ("FLOAT", {"default": 0.0}),
}
}
RETURN_TYPES = ("VEC4",)
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: float) -> tuple[Vec4]:
return ((a, a, a, a),)
class BreakoutVec4:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"a": ("VEC4", {"default": VEC4_ZERO})}}
RETURN_TYPES = ("FLOAT", "FLOAT", "FLOAT", "FLOAT")
FUNCTION = "op"
CATEGORY = "math/conversion"
def op(self, a: Vec4) -> tuple[float, float, float, float]:
return (a[0], a[1], a[2], a[3])
NODE_CLASS_MAPPINGS = {
"CM_BoolToInt": BoolToInt,
"CM_IntToBool": IntToBool,
"CM_FloatToInt": FloatToInt,
"CM_IntToFloat": IntToFloat,
"CM_IntToNumber": IntToNumber,
"CM_NumberToInt": NumberToInt,
"CM_FloatToNumber": FloatToNumber,
"CM_NumberToFloat": NumberToFloat,
"CM_ComposeVec2": ComposeVec2,
"CM_ComposeVec3": ComposeVec3,
"CM_ComposeVec4": ComposeVec4,
"CM_BreakoutVec2": BreakoutVec2,
"CM_BreakoutVec3": BreakoutVec3,
"CM_BreakoutVec4": BreakoutVec4,
}
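The Compose/Breakout pairs are inverses: a VECn value is a plain tuple of floats, so composing and then breaking out returns the original components. A minimal round-trip sketch using hypothetical standalone helpers that mirror the node `op` methods:

```python
def compose_vec3(x: float, y: float, z: float) -> tuple[float, float, float]:
    # Mirrors ComposeVec3.op: pack three floats into a VEC3 tuple.
    return (x, y, z)

def breakout_vec3(a: tuple[float, float, float]) -> tuple[float, float, float]:
    # Mirrors BreakoutVec3.op: unpack a VEC3 tuple into its components.
    return (a[0], a[1], a[2])

v = compose_vec3(1.0, 2.0, 3.0)
print(breakout_vec3(v))  # (1.0, 2.0, 3.0)
```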

View File

@@ -0,0 +1,159 @@
import math
from typing import Any, Callable, Mapping
DEFAULT_FLOAT = ("FLOAT", {"default": 0.0, "step": 0.001, "round": False})
FLOAT_UNARY_OPERATIONS: Mapping[str, Callable[[float], float]] = {
"Neg": lambda a: -a,
"Inc": lambda a: a + 1,
"Dec": lambda a: a - 1,
"Abs": lambda a: abs(a),
"Sqr": lambda a: a * a,
"Cube": lambda a: a * a * a,
"Sqrt": lambda a: math.sqrt(a),
"Exp": lambda a: math.exp(a),
"Ln": lambda a: math.log(a),
"Log10": lambda a: math.log10(a),
"Log2": lambda a: math.log2(a),
"Sin": lambda a: math.sin(a),
"Cos": lambda a: math.cos(a),
"Tan": lambda a: math.tan(a),
"Asin": lambda a: math.asin(a),
"Acos": lambda a: math.acos(a),
"Atan": lambda a: math.atan(a),
"Sinh": lambda a: math.sinh(a),
"Cosh": lambda a: math.cosh(a),
"Tanh": lambda a: math.tanh(a),
"Asinh": lambda a: math.asinh(a),
"Acosh": lambda a: math.acosh(a),
"Atanh": lambda a: math.atanh(a),
"Round": lambda a: round(a),
"Floor": lambda a: math.floor(a),
"Ceil": lambda a: math.ceil(a),
"Trunc": lambda a: math.trunc(a),
"Erf": lambda a: math.erf(a),
"Erfc": lambda a: math.erfc(a),
"Gamma": lambda a: math.gamma(a),
"Radians": lambda a: math.radians(a),
"Degrees": lambda a: math.degrees(a),
}
FLOAT_UNARY_CONDITIONS: Mapping[str, Callable[[float], bool]] = {
"IsZero": lambda a: a == 0.0,
"IsPositive": lambda a: a > 0.0,
"IsNegative": lambda a: a < 0.0,
"IsNonZero": lambda a: a != 0.0,
"IsPositiveInfinity": lambda a: math.isinf(a) and a > 0.0,
"IsNegativeInfinity": lambda a: math.isinf(a) and a < 0.0,
"IsNaN": lambda a: math.isnan(a),
"IsFinite": lambda a: math.isfinite(a),
"IsInfinite": lambda a: math.isinf(a),
"IsEven": lambda a: a % 2 == 0.0,
"IsOdd": lambda a: a % 2 != 0.0,
}
FLOAT_BINARY_OPERATIONS: Mapping[str, Callable[[float, float], float]] = {
"Add": lambda a, b: a + b,
"Sub": lambda a, b: a - b,
"Mul": lambda a, b: a * b,
"Div": lambda a, b: a / b,
"Mod": lambda a, b: a % b,
"Pow": lambda a, b: a**b,
"FloorDiv": lambda a, b: a // b,
"Max": lambda a, b: max(a, b),
"Min": lambda a, b: min(a, b),
"Log": lambda a, b: math.log(a, b),
"Atan2": lambda a, b: math.atan2(a, b),
}
FLOAT_BINARY_CONDITIONS: Mapping[str, Callable[[float, float], bool]] = {
"Eq": lambda a, b: a == b,
"Neq": lambda a, b: a != b,
"Gt": lambda a, b: a > b,
"Gte": lambda a, b: a >= b,
"Lt": lambda a, b: a < b,
"Lte": lambda a, b: a <= b,
}
class FloatUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_UNARY_OPERATIONS.keys()),),
"a": DEFAULT_FLOAT,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/float"
def op(self, op: str, a: float) -> tuple[float]:
return (FLOAT_UNARY_OPERATIONS[op](a),)
class FloatUnaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_UNARY_CONDITIONS.keys()),),
"a": DEFAULT_FLOAT,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/float"
def op(self, op: str, a: float) -> tuple[bool]:
return (FLOAT_UNARY_CONDITIONS[op](a),)
class FloatBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_FLOAT,
"b": DEFAULT_FLOAT,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/float"
def op(self, op: str, a: float, b: float) -> tuple[float]:
return (FLOAT_BINARY_OPERATIONS[op](a, b),)
class FloatBinaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_BINARY_CONDITIONS.keys()),),
"a": DEFAULT_FLOAT,
"b": DEFAULT_FLOAT,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/float"
def op(self, op: str, a: float, b: float) -> tuple[bool]:
return (FLOAT_BINARY_CONDITIONS[op](a, b),)
NODE_CLASS_MAPPINGS = {
"CM_FloatUnaryOperation": FloatUnaryOperation,
"CM_FloatUnaryCondition": FloatUnaryCondition,
"CM_FloatBinaryOperation": FloatBinaryOperation,
"CM_FloatBinaryCondition": FloatBinaryCondition,
}
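Several of these operations have restricted domains: `Sqrt`, `Ln`, `Asin`, and `Gamma` raise `ValueError` outside them, and `Div`, `Mod`, and `FloorDiv` raise `ZeroDivisionError` for `b = 0`. A sketch of a log-and-continue wrapper in the spirit of the "never halt on errors" commit; the `safe_apply` name and the `0.0` fallback are assumptions, not part of the node code:

```python
import logging
import math
from typing import Callable

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("comfymath")

def safe_apply(fn: Callable[[float], float], a: float, fallback: float = 0.0) -> float:
    """Apply a float operation, logging domain errors instead of halting."""
    try:
        return fn(a)
    except (ValueError, ZeroDivisionError, OverflowError) as exc:
        log.warning("operation failed for a=%r: %s; returning %r", a, exc, fallback)
        return fallback

print(safe_apply(math.sqrt, 4.0))   # 2.0
print(safe_apply(math.sqrt, -1.0))  # falls back to 0.0 after logging
```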

View File

@@ -0,0 +1,153 @@
from abc import ABC, abstractmethod
from typing import Any, Mapping, Sequence, Tuple
SDXL_SUPPORTED_RESOLUTIONS = [
(1024, 1024, 1.0),
(1152, 896, 1.2857142857142858),
(896, 1152, 0.7777777777777778),
(1216, 832, 1.4615384615384615),
(832, 1216, 0.6842105263157895),
(1344, 768, 1.75),
(768, 1344, 0.5714285714285714),
(1536, 640, 2.4),
(640, 1536, 0.4166666666666667),
]
SDXL_EXTENDED_RESOLUTIONS = [
(512, 2048, 0.25),
(512, 1984, 0.26),
(512, 1920, 0.27),
(512, 1856, 0.28),
(576, 1792, 0.32),
(576, 1728, 0.33),
(576, 1664, 0.35),
(640, 1600, 0.4),
(640, 1536, 0.42),
(704, 1472, 0.48),
(704, 1408, 0.5),
(704, 1344, 0.52),
(768, 1344, 0.57),
(768, 1280, 0.6),
(832, 1216, 0.68),
(832, 1152, 0.72),
(896, 1152, 0.78),
(896, 1088, 0.82),
(960, 1088, 0.88),
(960, 1024, 0.94),
(1024, 1024, 1.0),
(1024, 960, 1.07),
(1088, 960, 1.14),
(1088, 896, 1.22),
(1152, 896, 1.30),
(1152, 832, 1.39),
(1216, 832, 1.47),
(1280, 768, 1.68),
(1344, 768, 1.76),
(1408, 704, 2.0),
(1472, 704, 2.10),
(1536, 640, 2.4),
(1600, 640, 2.5),
(1664, 576, 2.90),
(1728, 576, 3.0),
(1792, 576, 3.12),
(1856, 512, 3.63),
(1920, 512, 3.76),
(1984, 512, 3.89),
(2048, 512, 4.0),
]
class Resolution(ABC):
@classmethod
@abstractmethod
def resolutions(cls) -> Sequence[Tuple[int, int, float]]: ...
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"resolution": ([f"{res[0]}x{res[1]}" for res in cls.resolutions()],)
}
}
RETURN_TYPES = ("INT", "INT")
RETURN_NAMES = ("width", "height")
FUNCTION = "op"
CATEGORY = "math/graphics"
def op(self, resolution: str) -> tuple[int, int]:
width, height = resolution.split("x")
return (int(width), int(height))
class NearestResolution(ABC):
@classmethod
@abstractmethod
def resolutions(cls) -> Sequence[Tuple[int, int, float]]: ...
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {"required": {"image": ("IMAGE",)}}
RETURN_TYPES = ("INT", "INT")
RETURN_NAMES = ("width", "height")
FUNCTION = "op"
CATEGORY = "math/graphics"
def op(self, image) -> tuple[int, int]:
image_width = image.size()[2]
image_height = image.size()[1]
print(f"Input image resolution: {image_width}x{image_height}")
image_ratio = image_width / image_height
differences = [
(abs(image_ratio - resolution[2]), resolution)
for resolution in self.resolutions()
]
smallest = None
for difference in differences:
if smallest is None:
smallest = difference
else:
if difference[0] < smallest[0]:
smallest = difference
if smallest is not None:
width = smallest[1][0]
height = smallest[1][1]
else:
width = 1024
height = 1024
print(f"Selected resolution: {width}x{height}")
return (width, height)
class SDXLResolution(Resolution):
@classmethod
def resolutions(cls):
return SDXL_SUPPORTED_RESOLUTIONS
class SDXLExtendedResolution(Resolution):
@classmethod
def resolutions(cls):
return SDXL_EXTENDED_RESOLUTIONS
class NearestSDXLResolution(NearestResolution):
@classmethod
def resolutions(cls):
return SDXL_SUPPORTED_RESOLUTIONS
class NearestSDXLExtendedResolution(NearestResolution):
@classmethod
def resolutions(cls):
return SDXL_EXTENDED_RESOLUTIONS
NODE_CLASS_MAPPINGS = {
"CM_SDXLResolution": SDXLResolution,
"CM_NearestSDXLResolution": NearestSDXLResolution,
"CM_SDXLExtendedResolution": SDXLExtendedResolution,
"CM_NearestSDXLExtendedResolution": NearestSDXLExtendedResolution,
}
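The selection loop in `NearestResolution.op` is equivalent to taking the `min` over absolute aspect-ratio differences. A standalone sketch with a trimmed resolution list (the subset of triples here is illustrative, not the full supported set):

```python
# Each triple is (width, height, width/height aspect ratio).
RESOLUTIONS = [
    (1024, 1024, 1.0),
    (1216, 832, 1216 / 832),
    (832, 1216, 832 / 1216),
    (1344, 768, 1344 / 768),
]

def nearest_resolution(width: int, height: int) -> tuple[int, int]:
    ratio = width / height
    # Pick the triple whose aspect ratio is closest to the input image's.
    w, h, _ = min(RESOLUTIONS, key=lambda res: abs(ratio - res[2]))
    return (w, h)

print(nearest_resolution(1920, 1080))  # (1344, 768), closest to 16:9
print(nearest_resolution(1024, 1024))  # (1024, 1024)
```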

View File

@@ -0,0 +1,129 @@
import math
from typing import Any, Callable, Mapping
DEFAULT_INT = ("INT", {"default": 0})
INT_UNARY_OPERATIONS: Mapping[str, Callable[[int], int]] = {
"Abs": lambda a: abs(a),
"Neg": lambda a: -a,
"Inc": lambda a: a + 1,
"Dec": lambda a: a - 1,
"Sqr": lambda a: a * a,
"Cube": lambda a: a * a * a,
"Not": lambda a: ~a,
"Factorial": lambda a: math.factorial(a),
}
INT_UNARY_CONDITIONS: Mapping[str, Callable[[int], bool]] = {
"IsZero": lambda a: a == 0,
"IsNonZero": lambda a: a != 0,
"IsPositive": lambda a: a > 0,
"IsNegative": lambda a: a < 0,
"IsEven": lambda a: a % 2 == 0,
"IsOdd": lambda a: a % 2 == 1,
}
INT_BINARY_OPERATIONS: Mapping[str, Callable[[int, int], int]] = {
"Add": lambda a, b: a + b,
"Sub": lambda a, b: a - b,
"Mul": lambda a, b: a * b,
"Div": lambda a, b: a // b,
"Mod": lambda a, b: a % b,
"Pow": lambda a, b: a**b,
"And": lambda a, b: a & b,
"Nand": lambda a, b: ~(a & b),
"Or": lambda a, b: a | b,
"Nor": lambda a, b: ~(a | b),
"Xor": lambda a, b: a ^ b,
"Xnor": lambda a, b: ~a ^ b,
"Shl": lambda a, b: a << b,
"Shr": lambda a, b: a >> b,
"Max": lambda a, b: max(a, b),
"Min": lambda a, b: min(a, b),
}
INT_BINARY_CONDITIONS: Mapping[str, Callable[[int, int], bool]] = {
"Eq": lambda a, b: a == b,
"Neq": lambda a, b: a != b,
"Gt": lambda a, b: a > b,
"Lt": lambda a, b: a < b,
"Geq": lambda a, b: a >= b,
"Leq": lambda a, b: a <= b,
}
class IntUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {"op": (list(INT_UNARY_OPERATIONS.keys()),), "a": DEFAULT_INT}
}
RETURN_TYPES = ("INT",)
FUNCTION = "op"
CATEGORY = "math/int"
def op(self, op: str, a: int) -> tuple[int]:
return (INT_UNARY_OPERATIONS[op](a),)
class IntUnaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {"op": (list(INT_UNARY_CONDITIONS.keys()),), "a": DEFAULT_INT}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/int"
def op(self, op: str, a: int) -> tuple[bool]:
return (INT_UNARY_CONDITIONS[op](a),)
class IntBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(INT_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_INT,
"b": DEFAULT_INT,
}
}
RETURN_TYPES = ("INT",)
FUNCTION = "op"
CATEGORY = "math/int"
def op(self, op: str, a: int, b: int) -> tuple[int]:
return (INT_BINARY_OPERATIONS[op](a, b),)
class IntBinaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(INT_BINARY_CONDITIONS.keys()),),
"a": DEFAULT_INT,
"b": DEFAULT_INT,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/int"
def op(self, op: str, a: int, b: int) -> tuple[bool]:
return (INT_BINARY_CONDITIONS[op](a, b),)
NODE_CLASS_MAPPINGS = {
"CM_IntUnaryOperation": IntUnaryOperation,
"CM_IntUnaryCondition": IntUnaryCondition,
"CM_IntBinaryOperation": IntBinaryOperation,
"CM_IntBinaryCondition": IntBinaryCondition,
}
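For bitwise integers, `Nand` and `Nor` are the complements of `And` and `Or`, i.e. `~(a & b)` and `~(a | b)`. A standalone truth-table check, masking to the low bit since `~` operates on the full two's-complement integer:

```python
# Bitwise NAND/NOR over ints: complement of AND, complement of OR.
nand = lambda a, b: ~(a & b)
nor = lambda a, b: ~(a | b)

# For single-bit operands, mask to the low bit to read off the truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand(a, b) & 1, nor(a, b) & 1)
# NAND is 0 only for (1, 1); NOR is 1 only for (0, 0).
```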

View File

@@ -0,0 +1,94 @@
from typing import Any, Callable, Mapping
from .float import (
FLOAT_UNARY_OPERATIONS,
FLOAT_UNARY_CONDITIONS,
FLOAT_BINARY_OPERATIONS,
FLOAT_BINARY_CONDITIONS,
)
from .types import Number
DEFAULT_NUMBER = ("NUMBER", {"default": 0.0})
class NumberUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_UNARY_OPERATIONS.keys()),),
"a": DEFAULT_NUMBER,
}
}
RETURN_TYPES = ("NUMBER",)
FUNCTION = "op"
CATEGORY = "math/number"
def op(self, op: str, a: Number) -> tuple[float]:
return (FLOAT_UNARY_OPERATIONS[op](float(a)),)
class NumberUnaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_UNARY_CONDITIONS.keys()),),
"a": DEFAULT_NUMBER,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/number"
def op(self, op: str, a: Number) -> tuple[bool]:
return (FLOAT_UNARY_CONDITIONS[op](float(a)),)
class NumberBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_NUMBER,
"b": DEFAULT_NUMBER,
}
}
RETURN_TYPES = ("NUMBER",)
FUNCTION = "op"
CATEGORY = "math/number"
def op(self, op: str, a: Number, b: Number) -> tuple[float]:
return (FLOAT_BINARY_OPERATIONS[op](float(a), float(b)),)
class NumberBinaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(FLOAT_BINARY_CONDITIONS.keys()),),
"a": DEFAULT_NUMBER,
"b": DEFAULT_NUMBER,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/number"
def op(self, op: str, a: Number, b: Number) -> tuple[bool]:
return (FLOAT_BINARY_CONDITIONS[op](float(a), float(b)),)
NODE_CLASS_MAPPINGS = {
"CM_NumberUnaryOperation": NumberUnaryOperation,
"CM_NumberUnaryCondition": NumberUnaryCondition,
"CM_NumberBinaryOperation": NumberBinaryOperation,
"CM_NumberBinaryCondition": NumberBinaryCondition,
}

View File

@@ -0,0 +1,16 @@
import sys
if sys.version_info < (3, 10):
from typing import Tuple, Union
Number = Union[int, float]
Vec2 = Tuple[float, float]
Vec3 = Tuple[float, float, float]
Vec4 = Tuple[float, float, float, float]
else:
from typing import TypeAlias
Number: TypeAlias = int | float
Vec2: TypeAlias = tuple[float, float]
Vec3: TypeAlias = tuple[float, float, float]
Vec4: TypeAlias = tuple[float, float, float, float]

View File

@@ -0,0 +1,500 @@
import numpy
from typing import Any, Callable, Mapping
from .types import Vec2, Vec3, Vec4
VEC2_ZERO = (0.0, 0.0)
DEFAULT_VEC2 = ("VEC2", {"default": VEC2_ZERO})
VEC3_ZERO = (0.0, 0.0, 0.0)
DEFAULT_VEC3 = ("VEC3", {"default": VEC3_ZERO})
VEC4_ZERO = (0.0, 0.0, 0.0, 0.0)
DEFAULT_VEC4 = ("VEC4", {"default": VEC4_ZERO})
VEC_UNARY_OPERATIONS: Mapping[str, Callable[[numpy.ndarray], numpy.ndarray]] = {
"Neg": lambda a: -a,
"Normalize": lambda a: a / numpy.linalg.norm(a),
}
VEC_TO_SCALAR_UNARY_OPERATION: Mapping[str, Callable[[numpy.ndarray], float]] = {
"Norm": lambda a: numpy.linalg.norm(a).astype(float),
}
VEC_UNARY_CONDITIONS: Mapping[str, Callable[[numpy.ndarray], bool]] = {
"IsZero": lambda a: not numpy.any(a).astype(bool),
"IsNotZero": lambda a: numpy.any(a).astype(bool),
"IsNormalized": lambda a: numpy.allclose(a, a / numpy.linalg.norm(a)),
"IsNotNormalized": lambda a: not numpy.allclose(a, a / numpy.linalg.norm(a)),
}
VEC_BINARY_OPERATIONS: Mapping[
str, Callable[[numpy.ndarray, numpy.ndarray], numpy.ndarray]
] = {
"Add": lambda a, b: a + b,
"Sub": lambda a, b: a - b,
"Cross": lambda a, b: numpy.cross(a, b),
}
VEC_TO_SCALAR_BINARY_OPERATION: Mapping[
str, Callable[[numpy.ndarray, numpy.ndarray], float]
] = {
"Dot": lambda a, b: float(numpy.dot(a, b)),
"Distance": lambda a, b: numpy.linalg.norm(a - b).astype(float),
}
VEC_BINARY_CONDITIONS: Mapping[str, Callable[[numpy.ndarray, numpy.ndarray], bool]] = {
"Eq": lambda a, b: numpy.allclose(a, b),
"Neq": lambda a, b: not numpy.allclose(a, b),
}
VEC_SCALAR_OPERATION: Mapping[str, Callable[[numpy.ndarray, float], numpy.ndarray]] = {
"Mul": lambda a, b: a * b,
"Div": lambda a, b: a / b,
}
def _vec2_from_numpy(a: numpy.ndarray) -> Vec2:
return (
float(a[0]),
float(a[1]),
)
def _vec3_from_numpy(a: numpy.ndarray) -> Vec3:
return (
float(a[0]),
float(a[1]),
float(a[2]),
)
def _vec4_from_numpy(a: numpy.ndarray) -> Vec4:
return (
float(a[0]),
float(a[1]),
float(a[2]),
float(a[3]),
)
class Vec2UnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_UNARY_OPERATIONS.keys()),),
"a": DEFAULT_VEC2,
}
}
RETURN_TYPES = ("VEC2",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2) -> tuple[Vec2]:
return (_vec2_from_numpy(VEC_UNARY_OPERATIONS[op](numpy.array(a))),)
class Vec2ToScalarUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_TO_SCALAR_UNARY_OPERATION.keys()),),
"a": DEFAULT_VEC2,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2) -> tuple[float]:
return (VEC_TO_SCALAR_UNARY_OPERATION[op](numpy.array(a)),)
class Vec2UnaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_UNARY_CONDITIONS.keys()),),
"a": DEFAULT_VEC2,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2) -> tuple[bool]:
return (VEC_UNARY_CONDITIONS[op](numpy.array(a)),)
class Vec2BinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_VEC2,
"b": DEFAULT_VEC2,
}
}
RETURN_TYPES = ("VEC2",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2, b: Vec2) -> tuple[Vec2]:
return (
_vec2_from_numpy(VEC_BINARY_OPERATIONS[op](numpy.array(a), numpy.array(b))),
)
class Vec2ToScalarBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_TO_SCALAR_BINARY_OPERATION.keys()),),
"a": DEFAULT_VEC2,
"b": DEFAULT_VEC2,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2, b: Vec2) -> tuple[float]:
return (VEC_TO_SCALAR_BINARY_OPERATION[op](numpy.array(a), numpy.array(b)),)
class Vec2BinaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_BINARY_CONDITIONS.keys()),),
"a": DEFAULT_VEC2,
"b": DEFAULT_VEC2,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2, b: Vec2) -> tuple[bool]:
return (VEC_BINARY_CONDITIONS[op](numpy.array(a), numpy.array(b)),)
class Vec2ScalarOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_SCALAR_OPERATION.keys()),),
"a": DEFAULT_VEC2,
"b": ("FLOAT",),
}
}
RETURN_TYPES = ("VEC2",)
FUNCTION = "op"
CATEGORY = "math/vec2"
def op(self, op: str, a: Vec2, b: float) -> tuple[Vec2]:
return (_vec2_from_numpy(VEC_SCALAR_OPERATION[op](numpy.array(a), b)),)
class Vec3UnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_UNARY_OPERATIONS.keys()),),
"a": DEFAULT_VEC3,
}
}
RETURN_TYPES = ("VEC3",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3) -> tuple[Vec3]:
return (_vec3_from_numpy(VEC_UNARY_OPERATIONS[op](numpy.array(a))),)
class Vec3ToScalarUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_TO_SCALAR_UNARY_OPERATION.keys()),),
"a": DEFAULT_VEC3,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3) -> tuple[float]:
return (VEC_TO_SCALAR_UNARY_OPERATION[op](numpy.array(a)),)
class Vec3UnaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_UNARY_CONDITIONS.keys()),),
"a": DEFAULT_VEC3,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3) -> tuple[bool]:
return (VEC_UNARY_CONDITIONS[op](numpy.array(a)),)
class Vec3BinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_VEC3,
"b": DEFAULT_VEC3,
}
}
RETURN_TYPES = ("VEC3",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3, b: Vec3) -> tuple[Vec3]:
return (
_vec3_from_numpy(VEC_BINARY_OPERATIONS[op](numpy.array(a), numpy.array(b))),
)
class Vec3ToScalarBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_TO_SCALAR_BINARY_OPERATION.keys()),),
"a": DEFAULT_VEC3,
"b": DEFAULT_VEC3,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3, b: Vec3) -> tuple[float]:
return (VEC_TO_SCALAR_BINARY_OPERATION[op](numpy.array(a), numpy.array(b)),)
class Vec3BinaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_BINARY_CONDITIONS.keys()),),
"a": DEFAULT_VEC3,
"b": DEFAULT_VEC3,
}
}
RETURN_TYPES = ("BOOLEAN",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3, b: Vec3) -> tuple[bool]:
return (VEC_BINARY_CONDITIONS[op](numpy.array(a), numpy.array(b)),)
class Vec3ScalarOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_SCALAR_OPERATION.keys()),),
"a": DEFAULT_VEC3,
"b": ("FLOAT",),
}
}
RETURN_TYPES = ("VEC3",)
FUNCTION = "op"
CATEGORY = "math/vec3"
def op(self, op: str, a: Vec3, b: float) -> tuple[Vec3]:
return (_vec3_from_numpy(VEC_SCALAR_OPERATION[op](numpy.array(a), b)),)
class Vec4UnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_UNARY_OPERATIONS.keys()),),
"a": DEFAULT_VEC4,
}
}
RETURN_TYPES = ("VEC4",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4) -> tuple[Vec4]:
return (_vec4_from_numpy(VEC_UNARY_OPERATIONS[op](numpy.array(a))),)
class Vec4ToScalarUnaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_TO_SCALAR_UNARY_OPERATION.keys()),),
"a": DEFAULT_VEC4,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4) -> tuple[float]:
return (VEC_TO_SCALAR_UNARY_OPERATION[op](numpy.array(a)),)
class Vec4UnaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_UNARY_CONDITIONS.keys()),),
"a": DEFAULT_VEC4,
}
}
RETURN_TYPES = ("BOOL",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4) -> tuple[bool]:
return (VEC_UNARY_CONDITIONS[op](numpy.array(a)),)
class Vec4BinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_BINARY_OPERATIONS.keys()),),
"a": DEFAULT_VEC4,
"b": DEFAULT_VEC4,
}
}
RETURN_TYPES = ("VEC4",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4, b: Vec4) -> tuple[Vec4]:
return (
_vec4_from_numpy(VEC_BINARY_OPERATIONS[op](numpy.array(a), numpy.array(b))),
)
class Vec4ToScalarBinaryOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_TO_SCALAR_BINARY_OPERATION.keys()),),
"a": DEFAULT_VEC4,
"b": DEFAULT_VEC4,
}
}
RETURN_TYPES = ("FLOAT",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4, b: Vec4) -> tuple[float]:
return (VEC_TO_SCALAR_BINARY_OPERATION[op](numpy.array(a), numpy.array(b)),)
class Vec4BinaryCondition:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_BINARY_CONDITIONS.keys()),),
"a": DEFAULT_VEC4,
"b": DEFAULT_VEC4,
}
}
RETURN_TYPES = ("BOOL",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4, b: Vec4) -> tuple[bool]:
return (VEC_BINARY_CONDITIONS[op](numpy.array(a), numpy.array(b)),)
class Vec4ScalarOperation:
@classmethod
def INPUT_TYPES(cls) -> Mapping[str, Any]:
return {
"required": {
"op": (list(VEC_SCALAR_OPERATION.keys()),),
"a": DEFAULT_VEC4,
"b": ("FLOAT",),
}
}
RETURN_TYPES = ("VEC4",)
FUNCTION = "op"
CATEGORY = "math/vec4"
def op(self, op: str, a: Vec4, b: float) -> tuple[Vec4]:
return (_vec4_from_numpy(VEC_SCALAR_OPERATION[op](numpy.array(a), b)),)
NODE_CLASS_MAPPINGS = {
"CM_Vec2UnaryOperation": Vec2UnaryOperation,
"CM_Vec2UnaryCondition": Vec2UnaryCondition,
"CM_Vec2ToScalarUnaryOperation": Vec2ToScalarUnaryOperation,
"CM_Vec2BinaryOperation": Vec2BinaryOperation,
"CM_Vec2BinaryCondition": Vec2BinaryCondition,
"CM_Vec2ToScalarBinaryOperation": Vec2ToScalarBinaryOperation,
"CM_Vec2ScalarOperation": Vec2ScalarOperation,
"CM_Vec3UnaryOperation": Vec3UnaryOperation,
"CM_Vec3UnaryCondition": Vec3UnaryCondition,
"CM_Vec3ToScalarUnaryOperation": Vec3ToScalarUnaryOperation,
"CM_Vec3BinaryOperation": Vec3BinaryOperation,
"CM_Vec3BinaryCondition": Vec3BinaryCondition,
"CM_Vec3ToScalarBinaryOperation": Vec3ToScalarBinaryOperation,
"CM_Vec3ScalarOperation": Vec3ScalarOperation,
"CM_Vec4UnaryOperation": Vec4UnaryOperation,
"CM_Vec4UnaryCondition": Vec4UnaryCondition,
"CM_Vec4ToScalarUnaryOperation": Vec4ToScalarUnaryOperation,
"CM_Vec4BinaryOperation": Vec4BinaryOperation,
"CM_Vec4BinaryCondition": Vec4BinaryCondition,
"CM_Vec4ToScalarBinaryOperation": Vec4ToScalarBinaryOperation,
"CM_Vec4ScalarOperation": Vec4ScalarOperation,
}
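The Vec2/Vec3/Vec4 node classes above all follow the same pattern: an `INPUT_TYPES` classmethod, `RETURN_TYPES`/`FUNCTION`/`CATEGORY` attributes, and an `op` method that looks up the operation by name. As a hypothetical sketch (not part of this repo), the repetition could be factored into a small class factory; `make_unary_node` and the sample `NEG` table are invented names, and the sketch applies the operation per component instead of via numpy:

```python
from typing import Any, Callable, Mapping


def make_unary_node(ops: Mapping[str, Callable[[float], float]],
                    vec_type: str, default: Any) -> type:
    """Build a ComfyUI-style unary-operation node class for one vector type."""
    class UnaryNode:
        @classmethod
        def INPUT_TYPES(cls) -> Mapping[str, Any]:
            return {"required": {"op": (list(ops.keys()),), "a": default}}

        RETURN_TYPES = (vec_type,)
        FUNCTION = "op"
        CATEGORY = f"math/{vec_type.lower()}"

        def op(self, op: str, a: tuple) -> tuple:
            # Apply the chosen operation component-wise (the real nodes
            # above do this through numpy instead).
            return (tuple(ops[op](x) for x in a),)

    return UnaryNode
```

A registry could then be filled with calls like `make_unary_node(VEC_UNARY_OPERATIONS, "VEC3", DEFAULT_VEC3)`, cutting most of the per-class boilerplate.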


@@ -0,0 +1,156 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
*.pyc
.idea
/node_modules
/node.zip


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Crystian
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,672 @@
# comfyui-crystools [![Donate](https://img.shields.io/badge/Donate-PayPal-blue.svg)](https://paypal.me/crystian77) <a href="https://colab.research.google.com/drive/1xiTiPmZkcIqNOsLQPO1UNCdJZqgK3U5k?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
**_🪛 A powerful set of tools for your belt when you work with ComfyUI 🪛_**
With this suite, you can see a resources monitor, a progress bar with elapsed time, and image metadata; compare two images or two JSONs; show any value on the console/display; organize links with pipes; and more!
It also provides better nodes to load/save images, previews, etc., and lets you see "hidden" data without loading a new workflow.
![Show metadata](./docs/jake.gif)
# Table of contents
- [General](#general)
- [Metadata](#metadata)
- [Debugger](#debugger)
- [Image](#image)
- [Pipe](#pipe)
- [Utils](#utils)
- [Primitives](#primitives)
- [List](#list)
- [Switch](#switch)
- Others: [About](#about), [To do](#to-do), [Changelog](#changelog), [Installation](#installation), [Use](#use)
---
## General
### Resources monitor
**🎉Finally, you can see the resources used by ComfyUI (CPU, GPU, RAM, VRAM, GPU Temp and space) on the menu in real-time!**
Horizontal:
![Monitors](./docs/monitor1.webp)
Vertical:
![Monitors](./docs/monitor3.webp)
Now you can identify the bottlenecks in your workflow and know when it's time to restart the server, unload models or even close some tabs!
You can configure the refresh rate and which resources to show:
![Monitors](./docs/monitor-settings.png)
> **Notes:**
> - The GPU data is only available when you use CUDA (only NVIDIA cards, sorry AMD users).
> - This extension needs ComfyUI 1915 (or higher).
> - The cost of the monitor is low (0.1 to 0.5% of utilization); you can disable it from settings (set `Refresh rate` to `0`).
> - Data comes from these libraries:
> - [psutil](https://pypi.org/project/psutil/)
> - [torch](https://pytorch.org/)
> - [pynvml](https://pypi.org/project/pynvml/) (official NVIDIA library)
### Progress bar
You can see the progress of your workflow with a progress bar on the menu!
![Progress bar](./docs/progress-bar.png)
https://github.com/crystian/comfyui-crystools/assets/3886806/35cc1257-2199-4b85-936e-2e31d892959c
Additionally, it shows the time elapsed at the end of the workflow, and you can `click` on it to see the **current working node.**
> **Notes:**
> - If you don't want to see it, you can turn it off from settings (`Show progress bar in menu`)
## Metadata
### Node: Metadata extractor
This node is used to extract the metadata from the image and handle it as a JSON source for other nodes.
You can see **all information**, even metadata from other sources (like Photoshop, see sample).
The input comes from the [load image with metadata](#node-load-image-with-metadata) or [preview from image](#node-preview-from-image) nodes (and others in the future).
![Metadata extractor](./docs/metadata-extractor.png)
**Sample:** [metadata-extractor.json](./samples/metadata-extractor.json)
><details>
> <summary>Other metadata sample (photoshop)</summary>
>
> With metadata from Photoshop
![Metadata extractor](./docs/metadata-extractor-photoshop.png)
></details>
><details>
> <summary><i>Parameters</i></summary>
>
> - input:
> - metadata_raw: The metadata raw from the image or preview node
> - Output:
> - prompt: The prompt used to produce the image.
> - workflow: The workflow used to produce the image (all information about nodes, values, etc).
> - file info: The file info of the image/metadata (resolution, size, etc.) in human-readable form.
> - raw to JSON: The entire raw metadata, formatted to be readable.
> - raw to property: The entire raw metadata in "properties" format.
> - raw to csv: The entire raw metadata in "csv" format.
></details>
<br />
### Node: Metadata comparator
This node is very useful for comparing two sets of metadata and seeing the differences (**the main reason why I created this extension!**)
You can compare 3 inputs: "Prompt", "Workflow" and "Fileinfo"
There are three potential "outputs": `values_changed`, `dictionary_item_added`, and `dictionary_item_removed` (in this order of priority).
![Metadata extractor](./docs/metadata-comparator.png)
**Sample:** [metadata-comparator.json](./samples/metadata-comparator.json)
**Notes:**
- I use [DeepDiff](https://pypi.org/project/deepdiff) for that. For more info check the link.
- If you want to compare two JSONs, you can use the [JSON comparator](#node-JSON-comparator) node.
><details>
> <summary><i>Parameters</i></summary>
>
> - options:
> - what: What to compare, you can choose between "Prompt", "Workflow" and "Fileinfo"
> - input:
> - metadata_raw_old: The metadata raw to start comparing
> - metadata_raw_new: The metadata raw to compare
> - Output:
> - diff: This is the same output you can see in the display of the node; you can use it on other nodes.
></details>
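DeepDiff does the heavy lifting here; as a simplified pure-Python illustration of the three buckets named above (this is not the DeepDiff API, and `shallow_diff` is an invented helper that only handles flat dictionaries):

```python
def shallow_diff(old: dict, new: dict) -> dict:
    """Rough stand-in for DeepDiff's top-level report, for flat dicts only."""
    report = {
        "values_changed": {
            key: {"old_value": old[key], "new_value": new[key]}
            for key in old.keys() & new.keys()
            if old[key] != new[key]
        },
        "dictionary_item_added": sorted(new.keys() - old.keys()),
        "dictionary_item_removed": sorted(old.keys() - new.keys()),
    }
    # DeepDiff omits empty buckets from its report, so do the same.
    return {name: bucket for name, bucket in report.items() if bucket}
```

The real comparator walks nested prompt/workflow structures, which is exactly what DeepDiff provides out of the box.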
<br />
---
## Debugger
### Node: Show Metadata
With this node, you will be able to see the JSON produced from your entire prompt and workflow so that you can know all the values (and more) of your prompt quickly without opening the file (PNG or JSON).
![Show metadata](./docs/debugger-show-metadata.png)
**Sample:** [debugger-metadata.json](./samples/debugger-metadata.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Options:
> - Active: Enable/disable the node
> - Parsed: Show the parsed JSON or plain text
> - What: Show the prompt or workflow (the prompt holds the values used to produce the image; the workflow is the entire ComfyUI workflow)
></details>
<br />
### Node: Show any
You can see on the console or display any text or data from the nodes. Connect it to what you want to inspect, and you will see it.
![Show any](./docs/debugger-show-any.png)
**Sample:** [debugger-any.json](./samples/debugger-any.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - any_value: Any value to show, which can be a string, number, etc.
> - Options:
> - Console: Enable/disable write to console
> - Display: Enable/disable write on this node
> - Prefix: Prefix to console
></details>
<br />
### Node: Show any to JSON
It is the same as the previous one, but it formats the value as JSON (display only).
![Show any](./docs/debugger-show-json.png)
**Sample:** [debugger-json.json](./samples/debugger-json.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - any_value: Any value to try to convert to JSON
> - Output:
> - string: The same string is shown on the display
></details>
<br />
---
## Image
### Node: Load image with metadata
This node is the same as the default one, but it adds three features: Prompt, Metadata, and supports **subfolders** of the "input" folder.
![Load image with metadata](./docs/image-load.png)
**Sample:** [image-load.json](./samples/image-load.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - image: Read the images from the input folder (and subfolder) (you can drop the image here or even paste an image from the clipboard)
> - Output:
> - Image/Mask: The same as the default node
> - Prompt: The prompt used to produce the image (not the workflow)
> - Metadata RAW: The metadata raw of the image (full workflow) as string
></details>
**Note:** The subfolder support was inspired by: [comfyui-imagesubfolders](https://github.com/catscandrive/comfyui-imagesubfolders)
<br />
### Node: Save image with extra metadata
This node is the same as the default one, but it adds two features: you can choose whether to save the workflow in the PNG, and you can add any extra piece of metadata (as JSON).
This saves custom data on the image, so you can share it with others, and they can see the workflow and metadata (see [preview from metadata](#node-preview-from-metadata)), even your custom data.
It can be any type of information that supports text and JSON.
![Save image with extra metadata](./docs/image-save.png)
**Sample:** [image-save.json](./samples/image-save.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - options:
> - with_workflow: If you want to save into the image workflow (special to share the workflow with others)
> - Input:
> - image: The image to save (same as the default node)
> - Output:
> - Metadata RAW: The metadata raw of the image (full workflow) as string
></details>
**Note:** The data is saved as special "exif" (as ComfyUI does) in the png file; you can read it with [Load image with metadata](#node-load-image-with-metadata).
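ComfyUI keeps this information in PNG text chunks next to the image data. As a stdlib-only sketch of how such chunks can be read back (a simplified stand-in for what the load node does; `read_text_chunks` is an invented helper that only handles uncompressed `tEXt` chunks and skips CRC validation):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"


def read_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks (keyword -> value) from raw PNG bytes."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    chunks = {}
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        chunk_type = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if chunk_type == b"tEXt":
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
    return chunks
```

In practice a library such as Pillow exposes this data directly when opening the image, which is the usual way to read it.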
> **Important:**
> - If you want to save your workflow with a particular name and your data as creator, you need to use the [ComfyUI-Crystools-save](https://github.com/crystian/ComfyUI-Crystools-save) extension; try it!
![Crystools-save](./docs/crystools-save.png)
<br />
### Node: Preview from image
This node is used to preview the image with the **current prompt** and additional features.
![Preview from image](./docs/image-preview.png)
**Feature:** It supports a cache (shown as "CACHED"; not permanent yet!), so you can disconnect the node and still see the image and data, and use it to compare with others!
![Preview from image](./docs/image-preview-diff.png)
As you can see, the seed, steps, and cfg were changed
**Sample:** [image-preview-image.json](./samples/image-preview-image.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - image: Any kind of image link
> - Output:
> - Metadata RAW: The metadata raw of the image and full workflow.
> - You can use it to **compare with others** (see [metadata comparator](#node-metadata-comparator))
> - The file info like filename, resolution, datetime and size with **the current prompt, not the original one!** (see important note)
></details>
> **Important:**
> - If you want to read the metadata of the image, you need to use the [load image with metadata](#node-load-image-with-metadata) and use the output "metadata RAW" not the image link.
> - To show a preview, the image must first be saved to the temporary folder; the data shown comes from that temporary image, **not the original one**, including **the prompt!**
<br />
### Node: Preview from metadata
This node is used to preview the image from the metadata and shows additional data (all around this one).
It supports the same features as [preview from image](#node-preview-from-image) (cache, metadata raw, etc.). The important difference is that you see **real data from the image** (not the temporary one or the current prompt).
![Preview from metadata](./docs/image-preview-metadata.png)
**Sample:** [image-preview-metadata.json](./samples/image-preview-metadata.json)
<br />
### Node: Show resolution
This node is used to show the resolution of an image.
> Can be used with any image link.
![Show resolution](./docs/image-resolution.png)
**Sample:** [image-resolution.json](./samples/image-resolution.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - image: Any kind of image link
> - Output:
> - Width: The width of the image
> - Height: The height of the image
></details>
<br />
---
## Pipe
### Nodes: Pipe to/edit any, Pipe from any
This powerful set of nodes is used to better organize your pipes.
The "Pipe to/edit any" node is used to encapsulate multiple links into a single one. It includes support for editing and easily adding the modified content back to the same pipe number.
The "Pipe from any" node is used to extract the content of a pipe.
Typical example:
![Pipes](./docs/pipe-0.png)
With pipes:
![Pipes](./docs/pipe-1.png)
**Sample:** [pipe-1.json](./samples/pipe-1.json)
Editing pipes:
![Pipes](./docs/pipe-2.png)
**Sample:** [pipe-2.json](./samples/pipe-2.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - CPipeAny: This is the type of this pipe you can use to edit (see the sample)
> - any_*: 6 possible inputs to use
> - Output:
> - CPipeAny: You can continue the pipe with this output; you can use it to bifurcate the pipe (see the sample)
></details>
>**Important:**
> - Please note that it supports "any," meaning it does not validate (not yet!) the correspondence of input nodes with the output ones. When creating the link, it is recommended to link consciously number by number.
> - "RecursionError" It's crucial to note that the flow of links **must be in the same direction**, and they cannot be mixed with other flows that use the result of this one. Otherwise, this may lead to recursion and block the server (you need to restart it!)
><details>
> <summary><i>Bad example with "RecursionError: maximum recursion depth exceeded"</i></summary>
>
> If you see something like this on your console, you need to check your pipes. That is a bad example of pipes; you can't mix the flows.
![Pipes](./docs/pipe-3.png)
></details>
<br />
---
## Utils
Some useful nodes to use in your workflow.
### Node: JSON comparator
This node is very useful for comparing two JSONs and seeing the differences.
![JSON comparator](./docs/utils-json-comparator.png)
**Sample:** [utils-json-comparator.json](./samples/utils-json-comparator.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - input:
> - json_old: The first JSON to compare against
> - json_new: The JSON to compare
> - Output:
> - diff: A new JSON with the differences
></details>
**Notes:**
As you can see, it is the same as the [metadata comparator](#node-metadata-comparator) but for JSONs.
That node is intentionally limited to comparing the metadata of two images; this one is more generic and can compare any JSON.
<br />
### Node: Stats system
This node is used to show the system stats (RAM, VRAM, and Space).
It **should** be connected as a pipe.
![JSON comparator](./docs/utils-stats.png)
**Sample:** [utils-stats.json](./samples/utils-stats.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - input:
> - latent: The latent to use to measure the stats
> - Output:
> - latent: Return the same latent to continue the pipe
></details>
**Notes:** The original is from [WAS](https://github.com/WASasquatch/was-node-suite-comfyui); I only show it on the display.
<br />
---
## Primitives
### Nodes: Primitive boolean, Primitive integer, Primitive float, Primitive string, Primitive string multiline
A set of nodes with primitive values to use in your prompts.
![Primitives](./docs/primitives.png)
<br />
---
## List
A set of nodes with lists of values (any type, or strings/texts) for any purpose (new nodes that use them are coming soon!).
> **Important:** You can use other nodes like "Show any" to see the values of the list
### Node: List of strings
**Feature:** You can concatenate them.
![Lists](./docs/list-string.png)
**Sample:** [list-strings.json](./samples/list-strings.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - string_*: 8 possible inputs to use
> - delimiter: Use to concatenate the values on the output
> - Output:
> - concatenated: A string with all values concatenated
> - list_string: The list of strings (only with values)
></details>
<br />
### Node: List of any
You can concatenate any value (it will try to convert it to a string and show the value), so it is useful for seeing several values at the same time.
![Lists](./docs/list-any.png)
**Sample:** [list-any.json](./samples/list-any.json)
><details>
> <summary><i>Parameters</i></summary>
>
> - Input:
> - any_*: 8 possible inputs to use
> - Output:
> - list_any: The list of any elements (only with values)
></details>
<br />
---
## Switch
A set of nodes to switch between flows.
All switches are boolean; you can switch between flows by simply changing the value of the switch.
You have predefined switches (string, latent, image, conditioning) but you can use "Switch any" for any value/type.
![Switches](./docs/switches.png)
**Sample:** [switch.json](./samples/switch.json)
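Conceptually, a boolean switch node is tiny: one boolean widget selects which of two inputs is forwarded. The following minimal sketch shows the idea; it is not the actual Crystools implementation, and the class and input names are invented:

```python
from typing import Any


class SwitchBooleanAnySketch:
    """Forward one of two inputs depending on a boolean widget."""

    @classmethod
    def INPUT_TYPES(cls) -> dict:
        return {
            "required": {
                "boolean": ("BOOLEAN", {"default": True}),
                "on_true": ("*",),
                "on_false": ("*",),
            }
        }

    RETURN_TYPES = ("*",)
    FUNCTION = "execute"
    CATEGORY = "crystools/switch"

    def execute(self, boolean: bool, on_true: Any, on_false: Any) -> tuple:
        # Only the selected branch is passed downstream.
        return (on_true if boolean else on_false,)
```

The typed variants (latent, image, conditioning, etc.) differ only in the declared input/output types.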
<br />
---
## About
**Notes from the author:**
- This is my first project in python ¯\\_(ツ)_/¯ (PR are welcome!)
- I'm a software engineer but in other languages (web technologies)
- My Instagram is: https://www.instagram.com/crystian.ia I'll publish my works on it, so consider following me for news! :)
- I'm not a native English speaker, so sorry for my English :P
---
## To do
- [ ] Several unit tests
- [ ] Add permanent cache for preview/metadata image (to survive to F5! or restart the server)
---
## Changelog
### Crystools
### 1.27.0 (17/08/2025)
- revert the lower case on name, cannot change on registry ¯\_(ツ)_/¯
- zluda check removed, it is not necessary anymore
### 1.25.3 (27/07/2025)
- change the name to lower case
### 1.25.1 (02/06/2025)
- fix issues with switches on settings menu
- node: "Switch from any" added
- load image with metadata: filtered (excludes hidden folders and typical metadata files)
- other fixes
### 1.24.0 (02/06/2025)
- PRs by community merged
- Improved VRAM usage/readout
- HDD error handling
- Lazy switches
### 1.23.0 (02/06/2025)
- Jetson support added by @johnnynunez
- some ui fixes
### 1.20.0 (21/10/2024)
- BETA of JSON file reader and extractor, to allow you to read your own JSON files and extract the values to use in your workflow
### 1.19.0 (06/10/2024)
- HORIZONTAL UI! New version is ready! 🎉
### 1.18.0 (21/09/2024)
- HORIZONTAL UI! 🎉
- Configurable size of monitors on settings menu
### 1.17.0 (21/09/2024)
- Settings menu reorganized
- Preparing for horizontal UI
- Update from ComfyUI (typescript and new features)
### 1.16.0 (31/07/2024)
- Rollback of AMD support: the manager does not support an extra repository parameter (https://test.pypi.org/simple, needed by pyrsmi)
### 1.15.0 (21/07/2024)
- AMD Branch merged to the main branch, should work for AMD users on **Linux**
### 1.14.0 (15/07/2024)
- Tried to use AMD info, but it breaks installation on windows, so I removed it ¯\_(ツ)_/¯
- AMD branch added; if you use AMD on Linux, you can try it (not tested by me)
### 1.13.0 (01/07/2024)
- Integrate with new ecosystem of ComfyUI
- Webp support added on load image with metadata node
### 1.12.0 (27/03/2024)
- GPU Temperature added
### 1.10.0 (17/01/2024)
- Multi-gpu added
### 1.9.2 (15/01/2024)
- Big refactor on hardwareInfo and monitor.ts, gpu was separated on another file, preparing for multi-gpu support
### 1.8.0 (14/01/2024) - internal
- HDD monitor selector on settings
### 1.7.0 (11/01/2024) - internal
- Typescript added!
### 1.6.0 (11/01/2024)
- Fix issue [#7](https://github.com/crystian/comfyui-crystools/issues/7) to the thread deadlock on concurrency
### 1.5.0 (10/01/2024)
- Improvements on the resources monitor and how threads are handled
- Some fixes
### 1.3.0 (08/01/2024)
- Added in general Resources monitor (CPU, GPU, RAM, VRAM, and space)
- Added this icon to identify this set of tools: 🪛
### 1.2.0 (05/01/2024)
- progress bar added
### 1.1.0 (29/12/2023)
- Node added: "Save image with extra metadata"
- Support to **read** Jpeg metadata added (not save)
### 1.0.0 (26/12/2023)
- First release
### Crystools-save - DEPRECATED (01/06/2025)
### 1.1.0 (07/01/2024)
- Labeling updated according to the new version of Crystools (this project)
### 1.0.0 (29/12/2023)
- Created another extension to save the info about the author on workflow: [ComfyUI-Crystools-save](https://github.com/crystian/ComfyUI-Crystools-save)
---
## Installation
### Install from GitHub
1. Install [ComfyUI](https://github.com/comfyanonymous/ComfyUI).
2. Clone this repo into `custom_nodes`:
```
cd ComfyUI/custom_nodes
git clone https://github.com/crystian/comfyui-crystools.git
cd comfyui-crystools
pip install -r requirements.txt
```
3. Start up ComfyUI.
#### For AMD users
If you are an AMD user with Linux, you can try the AMD branch:
**ATTENTION:** Don't install it with the manager; you need to install it manually:
```
cd ComfyUI/custom_nodes
git clone -b AMD https://github.com/crystian/comfyui-crystools.git
cd comfyui-crystools
pip install -r requirements.txt
```
### Install from manager
Search for `crystools` in the [manager](https://github.com/ltdrdata/ComfyUI-Manager.git) and install it.
### Using on Google Colab
You can use it on Google Colab, but you need to install it manually:
[Google Colab](https://colab.research.google.com/drive/1xiTiPmZkcIqNOsLQPO1UNCdJZqgK3U5k?usp=sharing)
* Run the first cell to install ComfyUI and launch the server
* After it finishes, open a new tab using the link; look for a line like this:
```
This is the URL to access ComfyUI: https://identifying-complications-fw-some.trycloudflare.com
```
---
## Use
You can use it like any other node: use the menu in the category `crystools` or double-click on the canvas (I recommend typing "oo" to filter quickly); all nodes are postfixed with `[Crystools]`.
![Menu](./docs/menu.png)
![shortcut](./docs/shortcut.png)
If for some reason you need to see the logs, you can define the environment variable `CRYSTOOLS_LOGLEVEL` and set the [value](https://docs.python.org/es/3/howto/logging.html).
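Reading such a variable typically looks like the following sketch (the variable name comes from above, but the `make_logger` helper is invented and is not the extension's actual logger setup):

```python
import logging
import os


def make_logger(name: str = "Crystools") -> logging.Logger:
    """Create a logger whose level follows CRYSTOOLS_LOGLEVEL (default INFO)."""
    level_name = os.environ.get("CRYSTOOLS_LOGLEVEL", "INFO").upper()
    logger = logging.getLogger(name)
    # Fall back to INFO if the variable holds an unknown level name.
    logger.setLevel(getattr(logging, level_name, logging.INFO))
    return logger
```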
---
Made with ❤️ by Crystian.


@@ -0,0 +1,108 @@
"""
@author: Crystian
@title: Crystools
@nickname: Crystools
@version: 1.27.4
@project: "https://github.com/crystian/comfyui-crystools",
@description: Plugins for multiple uses, mainly for debugging; you need them! IG: https://www.instagram.com/crystian.ia
"""
from .core import version, logger
logger.info(f'Crystools version: {version}')
from .nodes._names import CLASSES
from .nodes.primitive import CBoolean, CText, CTextML, CInteger, CFloat
from .nodes.switch import CSwitchBooleanAny, CSwitchBooleanLatent, CSwitchBooleanConditioning, CSwitchBooleanImage, \
CSwitchBooleanString, CSwitchBooleanMask, CSwitchFromAny
from .nodes.debugger import CConsoleAny, CConsoleAnyToJson
from .nodes.image import CImagePreviewFromImage, CImageLoadWithMetadata, CImageGetResolution, CImagePreviewFromMetadata, \
CImageSaveWithExtraMetadata
from .nodes.list import CListAny, CListString
from .nodes.pipe import CPipeToAny, CPipeFromAny
from .nodes.utils import CUtilsCompareJsons, CUtilsStatSystem
from .nodes.metadata import CMetadataExtractor, CMetadataCompare
from .nodes.parameters import CJsonFile, CJsonExtractor
from .server import *
from .general import *
NODE_CLASS_MAPPINGS = {
CLASSES.CBOOLEAN_NAME.value: CBoolean,
CLASSES.CTEXT_NAME.value: CText,
CLASSES.CTEXTML_NAME.value: CTextML,
CLASSES.CINTEGER_NAME.value: CInteger,
CLASSES.CFLOAT_NAME.value: CFloat,
CLASSES.CDEBUGGER_CONSOLE_ANY_NAME.value: CConsoleAny,
CLASSES.CDEBUGGER_CONSOLE_ANY_TO_JSON_NAME.value: CConsoleAnyToJson,
CLASSES.CLIST_ANY_NAME.value: CListAny,
CLASSES.CLIST_STRING_NAME.value: CListString,
CLASSES.CSWITCH_FROM_ANY_NAME.value: CSwitchFromAny,
CLASSES.CSWITCH_ANY_NAME.value: CSwitchBooleanAny,
CLASSES.CSWITCH_LATENT_NAME.value: CSwitchBooleanLatent,
CLASSES.CSWITCH_CONDITIONING_NAME.value: CSwitchBooleanConditioning,
CLASSES.CSWITCH_IMAGE_NAME.value: CSwitchBooleanImage,
CLASSES.CSWITCH_MASK_NAME.value: CSwitchBooleanMask,
CLASSES.CSWITCH_STRING_NAME.value: CSwitchBooleanString,
CLASSES.CPIPE_TO_ANY_NAME.value: CPipeToAny,
CLASSES.CPIPE_FROM_ANY_NAME.value: CPipeFromAny,
CLASSES.CIMAGE_LOAD_METADATA_NAME.value: CImageLoadWithMetadata,
CLASSES.CIMAGE_GET_RESOLUTION_NAME.value: CImageGetResolution,
CLASSES.CIMAGE_PREVIEW_IMAGE_NAME.value: CImagePreviewFromImage,
CLASSES.CIMAGE_PREVIEW_METADATA_NAME.value: CImagePreviewFromMetadata,
CLASSES.CIMAGE_SAVE_METADATA_NAME.value: CImageSaveWithExtraMetadata,
CLASSES.CMETADATA_EXTRACTOR_NAME.value: CMetadataExtractor,
CLASSES.CMETADATA_COMPARATOR_NAME.value: CMetadataCompare,
CLASSES.CUTILS_JSON_COMPARATOR_NAME.value: CUtilsCompareJsons,
CLASSES.CUTILS_STAT_SYSTEM_NAME.value: CUtilsStatSystem,
CLASSES.CJSONFILE_NAME.value: CJsonFile,
CLASSES.CJSONEXTRACTOR_NAME.value: CJsonExtractor,
}
NODE_DISPLAY_NAME_MAPPINGS = {
CLASSES.CBOOLEAN_NAME.value: CLASSES.CBOOLEAN_DESC.value,
CLASSES.CTEXT_NAME.value: CLASSES.CTEXT_DESC.value,
CLASSES.CTEXTML_NAME.value: CLASSES.CTEXTML_DESC.value,
CLASSES.CINTEGER_NAME.value: CLASSES.CINTEGER_DESC.value,
CLASSES.CFLOAT_NAME.value: CLASSES.CFLOAT_DESC.value,
CLASSES.CDEBUGGER_CONSOLE_ANY_NAME.value: CLASSES.CDEBUGGER_ANY_DESC.value,
CLASSES.CDEBUGGER_CONSOLE_ANY_TO_JSON_NAME.value: CLASSES.CDEBUGGER_CONSOLE_ANY_TO_JSON_DESC.value,
CLASSES.CLIST_ANY_NAME.value: CLASSES.CLIST_ANY_DESC.value,
CLASSES.CLIST_STRING_NAME.value: CLASSES.CLIST_STRING_DESC.value,
CLASSES.CSWITCH_FROM_ANY_NAME.value: CLASSES.CSWITCH_FROM_ANY_DESC.value,
CLASSES.CSWITCH_ANY_NAME.value: CLASSES.CSWITCH_ANY_DESC.value,
CLASSES.CSWITCH_LATENT_NAME.value: CLASSES.CSWITCH_LATENT_DESC.value,
CLASSES.CSWITCH_CONDITIONING_NAME.value: CLASSES.CSWITCH_CONDITIONING_DESC.value,
CLASSES.CSWITCH_IMAGE_NAME.value: CLASSES.CSWITCH_IMAGE_DESC.value,
CLASSES.CSWITCH_MASK_NAME.value: CLASSES.CSWITCH_MASK_DESC.value,
CLASSES.CSWITCH_STRING_NAME.value: CLASSES.CSWITCH_STRING_DESC.value,
CLASSES.CPIPE_TO_ANY_NAME.value: CLASSES.CPIPE_TO_ANY_DESC.value,
CLASSES.CPIPE_FROM_ANY_NAME.value: CLASSES.CPIPE_FROM_ANY_DESC.value,
CLASSES.CIMAGE_LOAD_METADATA_NAME.value: CLASSES.CIMAGE_LOAD_METADATA_DESC.value,
CLASSES.CIMAGE_GET_RESOLUTION_NAME.value: CLASSES.CIMAGE_GET_RESOLUTION_DESC.value,
CLASSES.CIMAGE_PREVIEW_IMAGE_NAME.value: CLASSES.CIMAGE_PREVIEW_IMAGE_DESC.value,
CLASSES.CIMAGE_PREVIEW_METADATA_NAME.value: CLASSES.CIMAGE_PREVIEW_METADATA_DESC.value,
CLASSES.CIMAGE_SAVE_METADATA_NAME.value: CLASSES.CIMAGE_SAVE_METADATA_DESC.value,
CLASSES.CMETADATA_EXTRACTOR_NAME.value: CLASSES.CMETADATA_EXTRACTOR_DESC.value,
CLASSES.CMETADATA_COMPARATOR_NAME.value: CLASSES.CMETADATA_COMPARATOR_DESC.value,
CLASSES.CUTILS_JSON_COMPARATOR_NAME.value: CLASSES.CUTILS_JSON_COMPARATOR_DESC.value,
CLASSES.CUTILS_STAT_SYSTEM_NAME.value: CLASSES.CUTILS_STAT_SYSTEM_DESC.value,
CLASSES.CJSONFILE_NAME.value: CLASSES.CJSONFILE_DESC.value,
CLASSES.CJSONEXTRACTOR_NAME.value: CLASSES.CJSONEXTRACTOR_DESC.value,
}
WEB_DIRECTORY = "./web"
__all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"]


@@ -0,0 +1,6 @@
from .logger import *
from .keys import *
from .types import *
from .config import *
from .common import *
from .version import *


@@ -0,0 +1,119 @@
import os
import json
import torch
from deepdiff import DeepDiff
from ..core import CONFIG, logger
# just a helper function to set the widget values (or clear them)
def setWidgetValues(value=None, unique_id=None, extra_pnginfo=None) -> None:
if unique_id and extra_pnginfo:
workflow = extra_pnginfo["workflow"]
node = next((x for x in workflow["nodes"] if str(x["id"]) == unique_id), None)
if node:
node["widgets_values"] = value
return None
# find difference between two jsons
def findJsonStrDiff(json1, json2):
msgError = "Could not compare JSONs"
returnJson = {"error": msgError}
try:
# TODO review this
# dict1 = json.loads(json1)
# dict2 = json.loads(json2)
returnJson = findJsonsDiff(json1, json2)
returnJson = json.dumps(returnJson, indent=CONFIG["indent"])
except Exception as e:
logger.warning(f"{msgError}: {e}")
return returnJson
def findJsonsDiff(json1, json2):
msgError = "Could not compare JSONs"
returnJson = {"error": msgError}
try:
diff = DeepDiff(json1, json2, ignore_order=True, verbose_level=2)
returnJson = {k: v for k, v in diff.items() if
k in ('dictionary_item_added', 'dictionary_item_removed', 'values_changed')}
# just for print "values_changed" at first
returnJson = dict(reversed(returnJson.items()))
except Exception as e:
logger.warning(f"{msgError}: {e}")
return returnJson
# powered by:
# https://github.com/WASasquatch/was-node-suite-comfyui/blob/main/WAS_Node_Suite.py
# class: WAS_Samples_Passthrough_Stat_System
def get_system_stats():
import psutil
# RAM
ram = psutil.virtual_memory()
ram_used = ram.used / (1024 ** 3)
ram_total = ram.total / (1024 ** 3)
ram_stats = f"Used RAM: {ram_used:.2f} GB / Total RAM: {ram_total:.2f} GB"
# VRAM (with PyTorch); skipped on CPU-only systems, where torch.cuda calls would fail
if torch.cuda.is_available():
device = torch.device("cuda")
vram_used = torch.cuda.memory_allocated(device) / (1024 ** 3)
vram_total = torch.cuda.get_device_properties(device).total_memory / (1024 ** 3)
vram_stats = f"Used VRAM: {vram_used:.2f} GB / Total VRAM: {vram_total:.2f} GB"
else:
vram_stats = "VRAM: N/A (no CUDA device)"
# Hard Drive Space
hard_drive = psutil.disk_usage("/")
used_space = hard_drive.used / (1024 ** 3)
total_space = hard_drive.total / (1024 ** 3)
hard_drive_stats = f"Used Space: {used_space:.2f} GB / Total Space: {total_space:.2f} GB"
return [ram_stats, vram_stats, hard_drive_stats]
# return x and y resolution of an image (torch tensor)
def getResolutionByTensor(image=None) -> dict:
res = {"x": 0, "y": 0}
if image is not None:
img = image.movedim(-1, 1)
res["x"] = img.shape[3]
res["y"] = img.shape[2]
return res
# by https://stackoverflow.com/questions/6080477/how-to-get-the-size-of-tar-gz-in-mb-file-in-python
def get_size(path):
size = os.path.getsize(path)
if size < 1024:
return f"{size} bytes"
elif size < pow(1024, 2):
return f"{round(size / 1024, 2)} KB"
elif size < pow(1024, 3):
return f"{round(size / pow(1024, 2), 2)} MB"
elif size < pow(1024, 4):
return f"{round(size / pow(1024, 3), 2)} GB"
else:
return f"{round(size / pow(1024, 4), 2)} TB"
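As a quick sanity check, the size formatter can be exercised against a temporary file. The helper name `human_size` and the 2 KiB payload are made up for this sketch; it mirrors `get_size`, with an explicit TB fallback so every size gets a return value:

```python
import os
import tempfile

def human_size(path):
    # Mirrors get_size above, with an explicit TB fallback.
    size = os.path.getsize(path)
    for limit, unit in ((1024, "bytes"), (1024 ** 2, "KB"), (1024 ** 3, "MB"), (1024 ** 4, "GB")):
        if size < limit:
            if unit == "bytes":
                return f"{size} bytes"
            return f"{round(size / (limit // 1024), 2)} {unit}"
    return f"{round(size / 1024 ** 4, 2)} TB"

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 2048)
print(human_size(f.name))  # → 2.0 KB
```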
def get_nested_value(data, dotted_key, default=None):
keys = dotted_key.split('.')
for key in keys:
if isinstance(data, str):
data = json.loads(data)
if isinstance(data, dict) and key in data:
data = data[key]
else:
return default
return data
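The dotted-key walk above also transparently parses any JSON string it meets along the path. A self-contained copy illustrates this; the `doc` dict and its keys are invented examples:

```python
import json

def get_nested_value(data, dotted_key, default=None):
    # Walk "a.b.c", parsing any JSON string encountered along the way.
    for key in dotted_key.split('.'):
        if isinstance(data, str):
            data = json.loads(data)
        if isinstance(data, dict) and key in data:
            data = data[key]
        else:
            return default
    return data

doc = {"model": {"name": "sdxl", "config": json.dumps({"steps": 30})}}
print(get_nested_value(doc, "model.config.steps"))    # → 30
print(get_nested_value(doc, "model.missing", "n/a"))  # → n/a
```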


@@ -0,0 +1,7 @@
import os
import logging
CONFIG = {
"loglevel": int(os.environ.get("CRYSTOOLS_LOGLEVEL", logging.INFO)),
"indent": int(os.environ.get("CRYSTOOLS_INDENT", 2))
}
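Both settings can be overridden through the environment before the module is imported. A minimal sketch of the same lookup, with an illustrative value exported in-process:

```python
import logging
import os

# Simulate a user exporting CRYSTOOLS_LOGLEVEL before launch.
os.environ["CRYSTOOLS_LOGLEVEL"] = str(logging.DEBUG)

CONFIG = {
    "loglevel": int(os.environ.get("CRYSTOOLS_LOGLEVEL", logging.INFO)),
    "indent": int(os.environ.get("CRYSTOOLS_INDENT", 2)),
}
print(CONFIG)  # → {'loglevel': 10, 'indent': 2}
```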


@@ -0,0 +1,29 @@
from enum import Enum
class TEXTS(Enum):
CUSTOM_NODE_NAME = "Crystools"
LOGGER_PREFIX = "Crystools"
CONCAT = "concatenated"
INACTIVE_MSG = "inactive"
INVALID_METADATA_MSG = "Invalid metadata raw"
FILE_NOT_FOUND = "File not found!"
class CATEGORY(Enum):
TESTING = "_for_testing"
MAIN = "crystools 🪛"
PRIMITIVE = "/Primitive"
DEBUGGER = "/Debugger"
LIST = "/List"
SWITCH = "/Switch"
PIPE = "/Pipe"
IMAGE = "/Image"
UTILS = "/Utils"
METADATA = "/Metadata"
# remember, all keys should be in lowercase!
class KEYS(Enum):
LIST = "list_string"
PREFIX = "prefix"


@@ -0,0 +1,39 @@
# by https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet/blob/main/control/logger.py
import sys
import copy
import logging
from .keys import TEXTS
from .config import CONFIG
class ColoredFormatter(logging.Formatter):
COLORS = {
"DEBUG": "\033[0;36m", # CYAN
"INFO": "\033[0;32m", # GREEN
"WARNING": "\033[0;33m", # YELLOW
"ERROR": "\033[0;31m", # RED
"CRITICAL": "\033[0;37;41m", # WHITE ON RED
"RESET": "\033[0m", # RESET COLOR
}
def format(self, record):
colored_record = copy.copy(record)
levelname = colored_record.levelname
seq = self.COLORS.get(levelname, self.COLORS["RESET"])
colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}"
return super().format(colored_record)
# Create a new logger
logger = logging.getLogger(TEXTS.LOGGER_PREFIX.value)
logger.propagate = False
# Add handler if we don't have one.
if not logger.handlers:
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(ColoredFormatter("[%(name)s %(levelname)s] %(message)s"))
logger.addHandler(handler)
# Configure logger
loglevel = CONFIG["loglevel"]
logger.setLevel(loglevel)
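The formatter colors only a copy of the record, so other handlers sharing the same record are unaffected. A trimmed, standalone sketch of that pattern (the record values are made up):

```python
import copy
import logging

COLORS = {"INFO": "\033[0;32m", "RESET": "\033[0m"}

class ColoredFormatter(logging.Formatter):
    def format(self, record):
        # Color a copy so other handlers see the untouched record.
        colored = copy.copy(record)
        seq = COLORS.get(colored.levelname, COLORS["RESET"])
        colored.levelname = f"{seq}{colored.levelname}{COLORS['RESET']}"
        return super().format(colored)

record = logging.LogRecord("Crystools", logging.INFO, "demo.py", 1, "hello", None, None)
line = ColoredFormatter("[%(name)s %(levelname)s] %(message)s").format(record)
print("\033[0;32m" in line, record.levelname)  # → True INFO
```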


@@ -0,0 +1,36 @@
import sys
FLOAT = ("FLOAT", {"default": 1,
"min": -sys.float_info.max,
"max": sys.float_info.max,
"step": 0.01})
BOOLEAN = ("BOOLEAN", {"default": True})
BOOLEAN_FALSE = ("BOOLEAN", {"default": False})
INT = ("INT", {"default": 1,
"min": -sys.maxsize,
"max": sys.maxsize,
"step": 1})
STRING = ("STRING", {"default": ""})
STRING_ML = ("STRING", {"multiline": True, "default": ""})
STRING_WIDGET = ("STRING", {"forceInput": True})
JSON_WIDGET = ("JSON", {"forceInput": True})
METADATA_RAW = ("METADATA_RAW", {"forceInput": True})
class AnyType(str):
"""A str subclass that always compares equal (and never unequal), so it matches any socket type. Credit to pythongosssss."""
def __eq__(self, _) -> bool:
return True
def __ne__(self, __value: object) -> bool:
return False
any = AnyType("*")
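Because both rich comparisons are overridden, an AnyType instance passes ComfyUI's type check against any socket name. The socket names below are arbitrary examples:

```python
class AnyType(str):
    # Equality always succeeds, inequality always fails.
    def __eq__(self, _):
        return True
    def __ne__(self, _):
        return False

any_type = AnyType("*")
print(any_type == "IMAGE", any_type == "LATENT", any_type != "MASK")  # → True True False
```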


@@ -0,0 +1 @@
version = "1.27.4"

Binary files not shown (33 image files added, 2.6 KiB to 343 KiB).


@@ -0,0 +1,3 @@
from .monitor import *
from .hdd import *
from .gpu import *


@@ -0,0 +1,300 @@
import torch
import comfy.model_management
from ..core import logger
import os
import platform
def is_jetson() -> bool:
"""
Determines if the Python environment is running on a Jetson device by checking the device model
information or the platform release.
"""
PROC_DEVICE_MODEL = ''
try:
with open('/proc/device-tree/model', 'r') as f:
PROC_DEVICE_MODEL = f.read().strip()
logger.info(f"Device model: {PROC_DEVICE_MODEL}")
return "NVIDIA" in PROC_DEVICE_MODEL
except Exception as e:
# logger.warning(f"JETSON: Could not read /proc/device-tree/model: {e} (If you're not using Jetson, ignore this warning)")
# If /proc/device-tree/model is not available, check platform.release()
platform_release = platform.release()
logger.info(f"Platform release: {platform_release}")
if 'tegra' in platform_release.lower():
logger.info("Detected 'tegra' in platform release. Assuming Jetson device.")
return True
else:
logger.info("JETSON: Not detected.")
return False
IS_JETSON = is_jetson()
class CGPUInfo:
"""
This class is responsible for gathering GPU information only.
"""
cuda = False
pynvmlLoaded = False
jtopLoaded = False
cudaAvailable = False
torchDevice = 'cpu'
cudaDevice = 'cpu'
cudaDevicesFound = 0
switchGPU = True
switchVRAM = True
switchTemperature = True
gpus = []
gpusUtilization = []
gpusVRAM = []
gpusTemperature = []
def __init__(self):
if IS_JETSON:
# Try to import jtop for Jetson devices
try:
from jtop import jtop
self.jtopInstance = jtop()
self.jtopInstance.start()
self.jtopLoaded = True
logger.info('jtop initialized on Jetson device.')
except ImportError as e:
logger.error('jtop is not installed. ' + str(e))
except Exception as e:
logger.error('Could not initialize jtop. ' + str(e))
else:
# Try to import pynvml for non-Jetson devices
try:
import pynvml
self.pynvml = pynvml
self.pynvml.nvmlInit()
self.pynvmlLoaded = True
logger.info('pynvml (NVIDIA) initialized.')
except ImportError as e:
logger.error('pynvml is not installed. ' + str(e))
except Exception as e:
logger.error('Could not init pynvml (NVIDIA). ' + str(e))
self.anygpuLoaded = self.pynvmlLoaded or self.jtopLoaded
try:
self.torchDevice = comfy.model_management.get_torch_device_name(comfy.model_management.get_torch_device())
except Exception as e:
logger.error('Could not pick default device. ' + str(e))
if self.pynvmlLoaded and not self.jtopLoaded and not self.deviceGetCount():
logger.warning('No GPU detected, disabling GPU monitoring.')
self.anygpuLoaded = False
self.pynvmlLoaded = False
self.jtopLoaded = False
if self.anygpuLoaded:
if self.deviceGetCount() > 0:
self.cudaDevicesFound = self.deviceGetCount()
logger.info("GPU(s):")
for deviceIndex in range(self.cudaDevicesFound):
deviceHandle = self.deviceGetHandleByIndex(deviceIndex)
gpuName = self.deviceGetName(deviceHandle, deviceIndex)
logger.info(f"{deviceIndex}) {gpuName}")
self.gpus.append({
'index': deviceIndex,
'name': gpuName,
})
# Same index as gpus, with default values
self.gpusUtilization.append(True)
self.gpusVRAM.append(True)
self.gpusTemperature.append(True)
self.cuda = True
logger.info(self.systemGetDriverVersion())
else:
logger.warning('No GPU with CUDA detected.')
else:
logger.warning('No GPU monitoring libraries available.')
self.cudaDevice = 'cpu' if self.torchDevice == 'cpu' else 'cuda'
self.cudaAvailable = torch.cuda.is_available()
if self.cuda and self.cudaAvailable and self.torchDevice == 'cpu':
logger.warning('CUDA is available, but torch is using CPU.')
def getInfo(self):
logger.debug('Getting GPUs info...')
return self.gpus
def getStatus(self):
gpuUtilization = -1
gpuTemperature = -1
vramUsed = -1
vramTotal = -1
vramPercent = -1
gpuType = ''
gpus = []
if self.cudaDevice == 'cpu':
gpuType = 'cpu'
gpus.append({
'gpu_utilization': -1,
'gpu_temperature': -1,
'vram_total': -1,
'vram_used': -1,
'vram_used_percent': -1,
})
else:
gpuType = self.cudaDevice
if self.anygpuLoaded and self.cuda and self.cudaAvailable:
for deviceIndex in range(self.cudaDevicesFound):
deviceHandle = self.deviceGetHandleByIndex(deviceIndex)
gpuUtilization = -1
vramPercent = -1
vramUsed = -1
vramTotal = -1
gpuTemperature = -1
# GPU Utilization
if self.switchGPU and self.gpusUtilization[deviceIndex]:
try:
gpuUtilization = self.deviceGetUtilizationRates(deviceHandle)
except Exception as e:
logger.error('Could not get GPU utilization. ' + str(e))
logger.error('Monitor of GPU is turning off.')
self.switchGPU = False
if self.switchVRAM and self.gpusVRAM[deviceIndex]:
try:
memory = self.deviceGetMemoryInfo(deviceHandle)
vramUsed = memory['used']
vramTotal = memory['total']
# Check if vramTotal is not zero or None
if vramTotal and vramTotal != 0:
vramPercent = vramUsed / vramTotal * 100
except Exception as e:
logger.error('Could not get GPU memory info. ' + str(e))
self.switchVRAM = False
# Temperature
if self.switchTemperature and self.gpusTemperature[deviceIndex]:
try:
gpuTemperature = self.deviceGetTemperature(deviceHandle)
except Exception as e:
logger.error('Could not get GPU temperature. Turning off this feature. ' + str(e))
self.switchTemperature = False
gpus.append({
'gpu_utilization': gpuUtilization,
'gpu_temperature': gpuTemperature,
'vram_total': vramTotal,
'vram_used': vramUsed,
'vram_used_percent': vramPercent,
})
return {
'device_type': gpuType,
'gpus': gpus,
}
def deviceGetCount(self):
if self.pynvmlLoaded:
return self.pynvml.nvmlDeviceGetCount()
elif self.jtopLoaded:
# For Jetson devices, we assume there's one GPU
return 1
else:
return 0
def deviceGetHandleByIndex(self, index):
if self.pynvmlLoaded:
return self.pynvml.nvmlDeviceGetHandleByIndex(index)
elif self.jtopLoaded:
return index # On Jetson, index acts as handle
else:
return 0
def deviceGetName(self, deviceHandle, deviceIndex):
if self.pynvmlLoaded:
gpuName = 'Unknown GPU'
try:
gpuName = self.pynvml.nvmlDeviceGetName(deviceHandle)
try:
gpuName = gpuName.decode('utf-8', errors='ignore')
except AttributeError:
pass
except UnicodeDecodeError as e:
gpuName = 'Unknown GPU (decoding error)'
logger.error(f"UnicodeDecodeError: {e}")
return gpuName
elif self.jtopLoaded:
# Access the GPU name from self.jtopInstance.gpu
try:
gpu_info = self.jtopInstance.gpu
gpu_name = next(iter(gpu_info.keys()))
return gpu_name
except Exception as e:
logger.error('Could not get GPU name. ' + str(e))
return 'Unknown GPU'
else:
return ''
def systemGetDriverVersion(self):
if self.pynvmlLoaded:
return f'NVIDIA Driver: {self.pynvml.nvmlSystemGetDriverVersion()}'
elif self.jtopLoaded:
# No direct method to get driver version from jtop
return 'NVIDIA Driver: unknown'
else:
return 'Driver unknown'
def deviceGetUtilizationRates(self, deviceHandle):
if self.pynvmlLoaded:
return self.pynvml.nvmlDeviceGetUtilizationRates(deviceHandle).gpu
elif self.jtopLoaded:
# GPU utilization from jtop stats
try:
gpu_util = self.jtopInstance.stats.get('GPU', -1)
return gpu_util
except Exception as e:
logger.error('Could not get GPU utilization. ' + str(e))
return -1
else:
return 0
def deviceGetMemoryInfo(self, deviceHandle):
if self.pynvmlLoaded:
mem = self.pynvml.nvmlDeviceGetMemoryInfo(deviceHandle)
return {'total': mem.total, 'used': mem.used}
elif self.jtopLoaded:
mem_data = self.jtopInstance.memory['RAM']
total = mem_data['tot']
used = mem_data['used']
return {'total': total, 'used': used}
else:
return {'total': 1, 'used': 1}
def deviceGetTemperature(self, deviceHandle):
if self.pynvmlLoaded:
return self.pynvml.nvmlDeviceGetTemperature(deviceHandle, self.pynvml.NVML_TEMPERATURE_GPU)
elif self.jtopLoaded:
try:
temperature = self.jtopInstance.stats.get('Temp gpu', -1)
return temperature
except Exception as e:
logger.error('Could not get GPU temperature. ' + str(e))
return -1
else:
return 0
def close(self):
if self.jtopLoaded and self.jtopInstance is not None:
self.jtopInstance.close()


@@ -0,0 +1,132 @@
import platform
import re
import cpuinfo
from cpuinfo import DataSource
import psutil
from .gpu import CGPUInfo
from .hdd import getDrivesInfo
from ..core import logger
class CHardwareInfo:
"""
This is the only class that gathers hardware information,
intended to be shared with other software.
"""
switchCPU = False
switchHDD = False
switchRAM = False
whichHDD = '/'  # default mount point to monitor
@property
def switchGPU(self):
return self.GPUInfo.switchGPU
@switchGPU.setter
def switchGPU(self, value):
self.GPUInfo.switchGPU = value
@property
def switchVRAM(self):
return self.GPUInfo.switchVRAM
@switchVRAM.setter
def switchVRAM(self, value):
self.GPUInfo.switchVRAM = value
def __init__(self, switchCPU=False, switchGPU=False, switchHDD=False, switchRAM=False, switchVRAM=False):
self.switchCPU = switchCPU
self.switchHDD = switchHDD
self.switchRAM = switchRAM
self.print_sys_info()
self.GPUInfo = CGPUInfo()
self.switchGPU = switchGPU
self.switchVRAM = switchVRAM
def print_sys_info(self):
brand = None
if DataSource.is_windows: # Windows
brand = DataSource.winreg_processor_brand().strip()
elif DataSource.has_proc_cpuinfo(): # Linux
return_code, output = DataSource.cat_proc_cpuinfo()
if return_code == 0 and output is not None:
for line in output.splitlines():
r = re.search(r'model name\s*:\s*(.+)', line)
if r:
brand = r.group(1)
break
elif DataSource.has_sysctl(): # macOS
return_code, output = DataSource.sysctl_machdep_cpu_hw_cpufrequency()
if return_code == 0 and output is not None:
for line in output.splitlines():
r = re.search(r'machdep\.cpu\.brand_string\s*:\s*(.+)', line)
if r:
brand = r.group(1)
break
# fallback to use cpuinfo.get_cpu_info()
if not brand:
brand = cpuinfo.get_cpu_info().get('brand_raw', "Unknown")
arch_string_raw = 'Arch unknown'
try:
arch_string_raw = DataSource.arch_string_raw
except Exception:
pass
specName = 'CPU: ' + brand
specArch = 'Arch: ' + arch_string_raw
specOs = 'OS: ' + str(platform.system()) + ' ' + str(platform.release())
logger.info(f"{specName} - {specArch} - {specOs}")
def getHDDsInfo(self):
return getDrivesInfo()
def getGPUInfo(self):
return self.GPUInfo.getInfo()
def getStatus(self):
cpu = -1
ramTotal = -1
ramUsed = -1
ramUsedPercent = -1
hddTotal = -1
hddUsed = -1
hddUsedPercent = -1
if self.switchCPU:
cpu = psutil.cpu_percent()
if self.switchRAM:
ram = psutil.virtual_memory()
ramTotal = ram.total
ramUsed = ram.used
ramUsedPercent = ram.percent
if self.switchHDD:
try:
hdd = psutil.disk_usage(self.whichHDD)
hddTotal = hdd.total
hddUsed = hdd.used
hddUsedPercent = hdd.percent
except Exception as e:
logger.error(f"Error getting disk usage for {self.whichHDD}: {e}")
hddTotal = -1
hddUsed = -1
hddUsedPercent = -1
getStatus = self.GPUInfo.getStatus()
return {
'cpu_utilization': cpu,
'ram_total': ramTotal,
'ram_used': ramUsed,
'ram_used_percent': ramUsedPercent,
'hdd_total': hddTotal,
'hdd_used': hddUsed,
'hdd_used_percent': hddUsedPercent,
'device_type': getStatus['device_type'],
'gpus': getStatus['gpus'],
}


@@ -0,0 +1,10 @@
import psutil
from ..core import logger
def getDrivesInfo():
hdds = []
logger.debug('Getting HDDs info...')
for partition in psutil.disk_partitions():
hdds.append(partition.mountpoint)
return hdds


@@ -0,0 +1,67 @@
import asyncio
import server
import time
import threading
from .hardware import CHardwareInfo
from ..core import logger
lock = threading.Lock()
class CMonitor:
monitorThread = None
threadController = threading.Event()
rate = 0
hardwareInfo = None
def __init__(self, rate=5, switchCPU=False, switchGPU=False, switchHDD=False, switchRAM=False, switchVRAM=False):
self.rate = rate
self.hardwareInfo = CHardwareInfo(switchCPU, switchGPU, switchHDD, switchRAM, switchVRAM)
self.startMonitor()
async def send_message(self, data) -> None:
# Not sure this is the cleanest approach, but it works ¯\_(ツ)_/¯
# Async send_json eventually stopped delivering messages, so send_sync is used instead
server.PromptServer.instance.send_sync('crystools.monitor', data)
def startMonitorLoop(self):
# logger.debug('Starting monitor loop...')
asyncio.run(self.MonitorLoop())
async def MonitorLoop(self):
while self.rate > 0 and not self.threadController.is_set():
data = self.hardwareInfo.getStatus()
# logger.debug('data to send' + str(data))
await self.send_message(data)
await asyncio.sleep(self.rate)
def startMonitor(self):
if self.monitorThread is not None:
self.stopMonitor()
logger.debug('Restarting monitor...')
else:
if self.rate == 0:
logger.debug('Monitor rate is 0, not starting monitor.')
return None
logger.debug('Starting monitor...')
self.threadController.clear()
if self.monitorThread is None or not self.monitorThread.is_alive():
lock.acquire()
self.monitorThread = threading.Thread(target=self.startMonitorLoop)
lock.release()
self.monitorThread.daemon = True
self.monitorThread.start()
def stopMonitor(self):
logger.debug('Stopping monitor...')
self.threadController.set()
cmonitor = CMonitor(1, True, True, True, True, True)
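CMonitor's polling loop boils down to a daemon thread gated by a threading.Event. A synchronous sketch of that shutdown pattern, with invented names and rates, and Event.wait standing in for the async sleep:

```python
import threading
import time

stop = threading.Event()
ticks = []

def monitor_loop(rate=0.01):
    # Poll until the controlling Event is set, like CMonitor's MonitorLoop.
    while not stop.is_set():
        ticks.append(time.time())
        stop.wait(rate)  # interruptible sleep: wakes early when stop is set

t = threading.Thread(target=monitor_loop, daemon=True)
t.start()
time.sleep(0.05)
stop.set()   # the equivalent of stopMonitor()
t.join(timeout=1)
print(not t.is_alive(), len(ticks) > 0)  # → True True
```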


@@ -0,0 +1 @@
# intentionally left blank


@@ -0,0 +1,80 @@
from enum import Enum
prefix = '🪛 '
# IMPORTANT: DON'T CHANGE THE 'NAME' AND 'TYPE' OF THE ENUMS, IT WILL BREAK COMPATIBILITY!
# remember: NAME is for search, DESC is for the contextual menu
class CLASSES(Enum):
CBOOLEAN_NAME = 'Primitive boolean [Crystools]'
CBOOLEAN_DESC = prefix + 'Primitive boolean'
CTEXT_NAME = 'Primitive string [Crystools]'
CTEXT_DESC = prefix + 'Primitive string'
CTEXTML_NAME = 'Primitive string multiline [Crystools]'
CTEXTML_DESC = prefix + 'Primitive string multiline'
CINTEGER_NAME = 'Primitive integer [Crystools]'
CINTEGER_DESC = prefix + 'Primitive integer'
CFLOAT_NAME = 'Primitive float [Crystools]'
CFLOAT_DESC = prefix + 'Primitive float'
CDEBUGGER_CONSOLE_ANY_NAME = 'Show any [Crystools]'
CDEBUGGER_ANY_DESC = prefix + 'Show any value to console/display'
CDEBUGGER_CONSOLE_ANY_TO_JSON_NAME = 'Show any to JSON [Crystools]'
CDEBUGGER_CONSOLE_ANY_TO_JSON_DESC = prefix + 'Show any to JSON'
CLIST_ANY_TYPE = 'ListAny'
CLIST_ANY_NAME = 'List of any [Crystools]'
CLIST_ANY_DESC = prefix + 'List of any'
CLIST_STRING_TYPE = 'ListString'
CLIST_STRING_NAME = 'List of strings [Crystools]'
CLIST_STRING_DESC = prefix + 'List of strings'
CSWITCH_FROM_ANY_NAME = 'Switch from any [Crystools]'
CSWITCH_FROM_ANY_DESC = prefix + 'Switch from any'
CSWITCH_ANY_NAME = 'Switch any [Crystools]'
CSWITCH_ANY_DESC = prefix + 'Switch any'
CSWITCH_STRING_NAME = 'Switch string [Crystools]'
CSWITCH_STRING_DESC = prefix + 'Switch string'
CSWITCH_CONDITIONING_NAME = 'Switch conditioning [Crystools]'
CSWITCH_CONDITIONING_DESC = prefix + 'Switch conditioning'
CSWITCH_IMAGE_NAME = 'Switch image [Crystools]'
CSWITCH_IMAGE_DESC = prefix + 'Switch image'
CSWITCH_MASK_NAME = 'Switch mask [Crystools]'
CSWITCH_MASK_DESC = prefix + 'Switch mask'
CSWITCH_LATENT_NAME = 'Switch latent [Crystools]'
CSWITCH_LATENT_DESC = prefix + 'Switch latent'
CPIPE_ANY_TYPE = 'CPipeAny'
CPIPE_TO_ANY_NAME = 'Pipe to/edit any [Crystools]'
CPIPE_TO_ANY_DESC = prefix + 'Pipe to/edit any'
CPIPE_FROM_ANY_NAME = 'Pipe from any [Crystools]'
CPIPE_FROM_ANY_DESC = prefix + 'Pipe from any'
CIMAGE_LOAD_METADATA_NAME = 'Load image with metadata [Crystools]'
CIMAGE_LOAD_METADATA_DESC = prefix + 'Load image with metadata'
CIMAGE_GET_RESOLUTION_NAME = 'Get resolution [Crystools]'
CIMAGE_GET_RESOLUTION_DESC = prefix + 'Get resolution'
CIMAGE_PREVIEW_IMAGE_NAME = 'Preview from image [Crystools]'
CIMAGE_PREVIEW_IMAGE_DESC = prefix + 'Preview from image'
CIMAGE_PREVIEW_METADATA_NAME = 'Preview from metadata [Crystools]'
CIMAGE_PREVIEW_METADATA_DESC = prefix + 'Preview from metadata'
CIMAGE_SAVE_METADATA_NAME = 'Save image with extra metadata [Crystools]'
CIMAGE_SAVE_METADATA_DESC = prefix + 'Save image with extra metadata'
CMETADATA_EXTRACTOR_NAME = 'Metadata extractor [Crystools]'
CMETADATA_EXTRACTOR_DESC = prefix + 'Metadata extractor'
CMETADATA_COMPARATOR_NAME = 'Metadata comparator [Crystools]'
CMETADATA_COMPARATOR_DESC = prefix + 'Metadata comparator'
CUTILS_JSON_COMPARATOR_NAME = 'JSON comparator [Crystools]'
CUTILS_JSON_COMPARATOR_DESC = prefix + 'JSON comparator'
CUTILS_STAT_SYSTEM_NAME = 'Stats system [Crystools]'
CUTILS_STAT_SYSTEM_DESC = prefix + 'Stats system (powered by WAS)'
# CPARAMETERS_NAME = 'External parameter from JSON file [Crystools]'
# CPARAMETERS_DESC = prefix + 'External parameters from JSON file'
CJSONFILE_NAME = 'Read JSON file [Crystools]'
CJSONFILE_DESC = prefix + 'Read JSON file (BETA)'
CJSONEXTRACTOR_NAME = 'JSON extractor [Crystools]'
CJSONEXTRACTOR_DESC = prefix + 'JSON extractor (BETA)'


@@ -0,0 +1,123 @@
import json
from ..core import CONFIG, CATEGORY, BOOLEAN, BOOLEAN_FALSE, KEYS, TEXTS, STRING, logger, any
class CConsoleAny:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
},
"optional": {
"any_value": (any,),
"console": BOOLEAN_FALSE,
"display": BOOLEAN,
KEYS.PREFIX.value: STRING,
},
"hidden": {
# "unique_id": "UNIQUE_ID",
# "extra_pnginfo": "EXTRA_PNGINFO",
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.DEBUGGER.value
INPUT_IS_LIST = True
RETURN_TYPES = ()
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, any_value=None, console=False, display=True, prefix=None):
console = console[0]
display = display[0]
prefix = prefix[0]
text = ""
textToDisplay = TEXTS.INACTIVE_MSG.value
if any_value is not None:
try:
if isinstance(any_value, list):
for item in any_value:
try:
text += str(item)
except Exception as e:
text += "source exists, but could not be serialized.\n"
logger.warning(e)
else:
logger.warning("any_value is not a list")
except Exception:
try:
text = json.dumps(any_value)[1:-1]
except Exception:
text = 'source exists, but could not be serialized.'
logger.debug("Show any to console is running...")
if console:
if prefix is not None and prefix != "":
print(f"{prefix}: {text}")
else:
print(text)
if display:
textToDisplay = text
value = [console, display, prefix, textToDisplay]
# setWidgetValues(value, unique_id, extra_pnginfo)
return {"ui": {"text": value}}
class CConsoleAnyToJson:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
},
"optional": {
"any_value": (any,),
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.DEBUGGER.value
INPUT_IS_LIST = True
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("string",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, any_value=None):
text = TEXTS.INACTIVE_MSG.value
if any_value is not None and isinstance(any_value, list):
item = any_value[0]
if isinstance(item, dict):
try:
text = json.dumps(item, indent=CONFIG["indent"])
except Exception as e:
text = "The input is a dict, but could not be serialized.\n"
logger.warning(e)
elif isinstance(item, list):
try:
text = json.dumps(item, indent=CONFIG["indent"])
except Exception as e:
text = "The input is a list, but could not be serialized.\n"
logger.warning(e)
else:
text = str(item)
logger.debug("Show any-json to console is running...")
return {"ui": {"text": [text]}, "result": (text,)}


@@ -0,0 +1,517 @@
import fnmatch
import os
import random
import sys
import json
import piexif
import hashlib
from datetime import datetime
import torch
import numpy as np
from pathlib import Path
from PIL import Image, ImageOps
from PIL.ExifTags import TAGS, GPSTAGS, IFD
from PIL.PngImagePlugin import PngImageFile
from PIL.JpegImagePlugin import JpegImageFile
from nodes import PreviewImage, SaveImage
import folder_paths
from ..core import CATEGORY, CONFIG, BOOLEAN, METADATA_RAW, TEXTS, setWidgetValues, logger, getResolutionByTensor, get_size
sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), "comfy"))
class CImagePreviewFromImage(PreviewImage):
def __init__(self):
self.output_dir = folder_paths.get_temp_directory()
self.type = "temp"
self.prefix_append = "_" + ''.join(random.choice("abcdefghijklmnopqrstuvwxyz") for x in range(5))
self.compress_level = 1
self.data_cached = None
self.data_cached_text = None
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
# if this were required, the next node would not receive any value, not even the cached one!
},
"optional": {
"image": ("IMAGE",),
},
"hidden": {
"prompt": "PROMPT",
"extra_pnginfo": "EXTRA_PNGINFO",
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.IMAGE.value
RETURN_TYPES = ("METADATA_RAW",)
RETURN_NAMES = ("Metadata RAW",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, image=None, prompt=None, extra_pnginfo=None):
text = ""
title = ""
data = {
"result": [''],
"ui": {
"text": [''],
"images": [],
}
}
if image is not None:
saved = self.save_images(image, "crystools/i", prompt, extra_pnginfo)
image = saved["ui"]["images"][0]
image_path = Path(self.output_dir).joinpath(image["subfolder"], image["filename"])
img, promptFromImage, metadata = buildMetadata(image_path)
images = [image]
result = metadata
data["result"] = [result]
data["ui"]["images"] = images
title = "Source: Image link \n"
text += buildPreviewText(metadata)
text += "Current prompt (NOT from the image!):\n"
text += json.dumps(promptFromImage, indent=CONFIG["indent"])
self.data_cached_text = text
self.data_cached = data
elif image is None and self.data_cached is not None:
title = "Source: Image link - CACHED\n"
data = self.data_cached
text = self.data_cached_text
else:
logger.debug("Source: Empty on CImagePreviewFromImage")
text = "Source: Empty"
data['ui']['text'] = [title + text]
return data
class CImagePreviewFromMetadata(PreviewImage):
def __init__(self):
self.data_cached = None
self.data_cached_text = None
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
# if this were required, the next node would not receive any value, not even the cached one!
},
"optional": {
"metadata_raw": METADATA_RAW,
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.IMAGE.value
RETURN_TYPES = ("METADATA_RAW",)
RETURN_NAMES = ("Metadata RAW",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, metadata_raw=None):
text = ""
title = ""
data = {
"result": [''],
"ui": {
"text": [''],
"images": [],
}
}
if metadata_raw is not None and metadata_raw != '':
promptFromImage = {}
if "prompt" in metadata_raw:
promptFromImage = metadata_raw["prompt"]
title = "Source: Metadata RAW\n"
text += buildPreviewText(metadata_raw)
text += "Prompt from image:\n"
text += json.dumps(promptFromImage, indent=CONFIG["indent"])
images = self.resolveImage(metadata_raw["fileinfo"]["filename"])
result = metadata_raw
data["result"] = [result]
data["ui"]["images"] = images
self.data_cached_text = text
self.data_cached = data
elif metadata_raw is None and self.data_cached is not None:
title = "Source: Metadata RAW - CACHED\n"
data = self.data_cached
text = self.data_cached_text
else:
logger.debug("Source: Empty on CImagePreviewFromMetadata")
text = "Source: Empty"
data["ui"]["text"] = [title + text]
return data
def resolveImage(self, filename=None):
images = []
if filename is not None:
image_input_folder = os.path.normpath(folder_paths.get_input_directory())
image_input_folder_abs = Path(image_input_folder).resolve()
image_path = os.path.normpath(filename)
image_path_abs = Path(image_path).resolve()
if not image_path_abs.is_file():
raise Exception(TEXTS.FILE_NOT_FOUND.value)
try:
# get common path, should be input/output/temp folder
common = os.path.commonpath([image_input_folder_abs, image_path_abs])
if common != str(image_input_folder_abs):
raise Exception("Path invalid (should be in the input folder)")
relative = os.path.normpath(os.path.relpath(image_path_abs, image_input_folder_abs))
images.append({
"filename": Path(relative).name,
"subfolder": os.path.dirname(relative),
"type": "input"
})
except Exception as e:
logger.warn(e)
return images
class CImageGetResolution:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"image": ("IMAGE",),
},
"hidden": {
"unique_id": "UNIQUE_ID",
"extra_pnginfo": "EXTRA_PNGINFO",
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.IMAGE.value
RETURN_TYPES = ("INT", "INT",)
RETURN_NAMES = ("width", "height",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, image, extra_pnginfo=None, unique_id=None):
res = getResolutionByTensor(image)
text = [f"{res['x']}x{res['y']}"]
setWidgetValues(text, unique_id, extra_pnginfo)
logger.debug(f"Resolution: {text}")
return {"ui": {"text": text}, "result": (res["x"], res["y"])}
# subfolders based on: https://github.com/catscandrive/comfyui-imagesubfolders
class CImageLoadWithMetadata:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
input_dir = folder_paths.get_input_directory()
exclude_files = {"Thumbs.db", "*.DS_Store", "desktop.ini", "*.lock" }
exclude_folders = {"clipspace", ".*"}
file_list = []
for root, dirs, files in os.walk(input_dir, followlinks=True):
# Exclude specific folders
dirs[:] = [d for d in dirs if not any(fnmatch.fnmatch(d, exclude) for exclude in exclude_folders)]
files = [f for f in files if not any(fnmatch.fnmatch(f, exclude) for exclude in exclude_files)]
for file in files:
relpath = os.path.relpath(os.path.join(root, file), start=input_dir)
# fix for windows
relpath = relpath.replace("\\", "/")
file_list.append(relpath)
return {
"required": {
"image": (sorted(file_list), {"image_upload": True})
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.IMAGE.value
RETURN_TYPES = ("IMAGE", "MASK", "JSON", "METADATA_RAW")
RETURN_NAMES = ("image", "mask", "prompt", "Metadata RAW")
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, image):
image_path = folder_paths.get_annotated_filepath(image)
imgF = Image.open(image_path)
img, prompt, metadata = buildMetadata(image_path)
if imgF.format == 'WEBP':
# use piexif to extract EXIF data from the WebP image
try:
exif_data = piexif.load(image_path)
# process_exif_data returns a single metadata dict; pull the prompt out of it
metadata = self.process_exif_data(exif_data)
prompt = metadata.get("prompt", {})
except ValueError:
prompt = {}
img = ImageOps.exif_transpose(img)
image = img.convert("RGB")
image = np.array(image).astype(np.float32) / 255.0
image = torch.from_numpy(image)[None,]
if 'A' in img.getbands():
mask = np.array(img.getchannel('A')).astype(np.float32) / 255.0
mask = 1. - torch.from_numpy(mask)
else:
mask = torch.zeros((64, 64), dtype=torch.float32, device="cpu")
return image, mask.unsqueeze(0), prompt, metadata
def process_exif_data(self, exif_data):
metadata = {}
# check tag 271 (Make) under the '0th' IFD for the prompt info
if '0th' in exif_data and 271 in exif_data['0th']:
prompt_data = exif_data['0th'][271].decode('utf-8')
# strip an optional 'Prompt:' prefix
prompt_data = prompt_data.replace('Prompt:', '', 1)
# prompt_data should be a string; try to parse it as a JSON object
try:
metadata['prompt'] = json.loads(prompt_data)
except json.JSONDecodeError:
metadata['prompt'] = prompt_data
# check tag 270 (ImageDescription) under the '0th' IFD for the workflow info
if '0th' in exif_data and 270 in exif_data['0th']:
workflow_data = exif_data['0th'][270].decode('utf-8')
# strip an optional 'Workflow:' prefix
workflow_data = workflow_data.replace('Workflow:', '', 1)
try:
# try to parse the string as a JSON object
metadata['workflow'] = json.loads(workflow_data)
except json.JSONDecodeError:
# if parsing fails, store the raw string instead
metadata['workflow'] = workflow_data
metadata.update(exif_data)
return metadata
@classmethod
def IS_CHANGED(cls, image):
image_path = folder_paths.get_annotated_filepath(image)
m = hashlib.sha256()
with open(image_path, 'rb') as f:
m.update(f.read())
return m.digest().hex()
@classmethod
def VALIDATE_INPUTS(cls, image):
if not folder_paths.exists_annotated_filepath(image):
return "Invalid image file: {}".format(image)
return True
class CImageSaveWithExtraMetadata(SaveImage):
def __init__(self):
super().__init__()
self.data_cached = None
self.data_cached_text = None
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
# if this were required, the next node would not receive any value, not even the cached one!
"image": ("IMAGE",),
"filename_prefix": ("STRING", {"default": "ComfyUI"}),
"with_workflow": BOOLEAN,
},
"optional": {
"metadata_extra": ("STRING", {"multiline": True, "default": json.dumps({
"Title": "Image generated by Crystian",
"Description": "More info: https://www.instagram.com/crystian.ia",
"Author": "crystian.ia",
"Software": "ComfyUI",
"Category": "StableDiffusion",
"Rating": 5,
"UserComment": "",
"Keywords": [
""
],
"Copyrights": "",
}, indent=CONFIG["indent"]),
}),
},
"hidden": {
"prompt": "PROMPT",
"extra_pnginfo": "EXTRA_PNGINFO",
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.IMAGE.value
RETURN_TYPES = ("METADATA_RAW",)
RETURN_NAMES = ("Metadata RAW",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, image=None, filename_prefix="ComfyUI", with_workflow=True, metadata_extra=None, prompt=None, extra_pnginfo=None):
data = {
"result": [''],
"ui": {
"text": [''],
"images": [],
}
}
if image is not None:
if with_workflow:
extra_pnginfo_new = extra_pnginfo.copy() if extra_pnginfo is not None else None
prompt = prompt.copy() if prompt is not None else None
else:
extra_pnginfo_new = None
prompt = None
if metadata_extra is not None and metadata_extra != 'undefined':
try:
# metadata_extra = json.loads(f"{{{metadata_extra}}}") // a fix?
metadata_extra = json.loads(metadata_extra)
except Exception as e:
logger.error(f"Error parsing metadata_extra (it will be sent as a string), error: {e}")
metadata_extra = {"extra": str(metadata_extra)}
if isinstance(metadata_extra, dict):
for k, v in metadata_extra.items():
if extra_pnginfo_new is None:
extra_pnginfo_new = {}
extra_pnginfo_new[k] = v
saved = super().save_images(image, filename_prefix, prompt, extra_pnginfo_new)
image = saved["ui"]["images"][0]
image_path = Path(self.output_dir).joinpath(image["subfolder"], image["filename"])
img, promptFromImage, metadata = buildMetadata(image_path)
images = [image]
result = metadata
data["result"] = [result]
data["ui"]["images"] = images
else:
logger.debug("Source: Empty on CImageSaveWithExtraMetadata")
return data
def buildMetadata(image_path):
if not Path(image_path).is_file():
raise Exception(TEXTS.FILE_NOT_FOUND.value)
img = Image.open(image_path)
metadata = {}
prompt = {}
metadata["fileinfo"] = {
"filename": Path(image_path).as_posix(),
"resolution": f"{img.width}x{img.height}",
"date": str(datetime.fromtimestamp(os.path.getmtime(image_path))),
"size": str(get_size(image_path)),
}
# only for png files
if isinstance(img, PngImageFile):
metadataFromImg = img.info
# for all metadataFromImg convert to string (but not for workflow and prompt!)
for k, v in metadataFromImg.items():
# from ComfyUI
if k == "workflow":
try:
metadata["workflow"] = json.loads(metadataFromImg["workflow"])
except Exception as e:
logger.warn(f"Error parsing metadataFromImg 'workflow': {e}")
# from ComfyUI
elif k == "prompt":
try:
metadata["prompt"] = json.loads(metadataFromImg["prompt"])
# extract prompt to use on metadataFromImg
prompt = metadata["prompt"]
except Exception as e:
logger.warn(f"Error parsing metadataFromImg 'prompt': {e}")
else:
try:
# for all possible metadataFromImg by user
metadata[str(k)] = json.loads(v)
except Exception as e:
logger.debug(f"Error parsing {k} as json, trying as string: {e}")
try:
metadata[str(k)] = str(v)
except Exception as e:
logger.debug(f"Error parsing {k} it will be skipped: {e}")
if isinstance(img, JpegImageFile):
exif = img.getexif()
for k, v in exif.items():
tag = TAGS.get(k, k)
if v is not None:
metadata[str(tag)] = str(v)
for ifd_id in IFD:
try:
if ifd_id == IFD.GPSInfo:
resolve = GPSTAGS
else:
resolve = TAGS
ifd = exif.get_ifd(ifd_id)
ifd_name = str(ifd_id.name)
metadata[ifd_name] = {}
for k, v in ifd.items():
tag = resolve.get(k, k)
metadata[ifd_name][str(tag)] = str(v)
except KeyError:
pass
return img, prompt, metadata
def buildPreviewText(metadata):
text = f"File: {metadata['fileinfo']['filename']}\n"
text += f"Resolution: {metadata['fileinfo']['resolution']}\n"
text += f"Date: {metadata['fileinfo']['date']}\n"
text += f"Size: {metadata['fileinfo']['size']}\n"
return text


@@ -0,0 +1,149 @@
from ..core import STRING, TEXTS, KEYS, CATEGORY, any, logger
from ._names import CLASSES
class CListAny:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
},
"optional": {
"any_1": (any,),
"any_2": (any,),
"any_3": (any,),
"any_4": (any,),
"any_5": (any,),
"any_6": (any,),
"any_7": (any,),
"any_8": (any,),
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.LIST.value
RETURN_TYPES = (any,)
RETURN_NAMES = ("any_list",)
OUTPUT_IS_LIST = (True,)
FUNCTION = "execute"
def execute(self,
any_1=None,
any_2=None,
any_3=None,
any_4=None,
any_5=None,
any_6=None,
any_7=None,
any_8=None):
# collect whichever inputs are connected, preserving their order
list_any = [value for value in (any_1, any_2, any_3, any_4, any_5, any_6, any_7, any_8) if value is not None]
# yes, double brackets are needed because of the OUTPUT_IS_LIST... ¯\_(ツ)_/¯
return [[list_any]]
class CListString:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
},
"optional": {
"string_1": STRING,
"string_2": STRING,
"string_3": STRING,
"string_4": STRING,
"string_5": STRING,
"string_6": STRING,
"string_7": STRING,
"string_8": STRING,
"delimiter": ("STRING", {"default": " "}),
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.LIST.value
RETURN_TYPES = ("STRING", CLASSES.CLIST_STRING_TYPE.value,)
RETURN_NAMES = (TEXTS.CONCAT.value, KEYS.LIST.value)
OUTPUT_IS_LIST = (False, True, )
FUNCTION = "execute"
def execute(self,
string_1=None,
string_2=None,
string_3=None,
string_4=None,
string_5=None,
string_6=None,
string_7=None,
string_8=None,
delimiter=""):
# keep only the strings that are connected and non-empty
list_str = [s for s in (string_1, string_2, string_3, string_4, string_5, string_6, string_7, string_8) if s]
return delimiter.join(list_str), [list_str]
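The concat-plus-list return of `CListString` can be sketched outside of ComfyUI as a small helper (the function name is illustrative, not from the codebase):

```python
def concat_strings(*strings, delimiter=" "):
    # mirror of CListString.execute: drop None/empty inputs, then
    # return both the joined string and the list form
    list_str = [s for s in strings if s]
    return delimiter.join(list_str), list_str

concat, as_list = concat_strings("masterpiece", "", None, "best quality", delimiter=", ")
print(concat)  # -> masterpiece, best quality
```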


@@ -0,0 +1,164 @@
import json
import re
from ..core import CATEGORY, CONFIG, METADATA_RAW, TEXTS, findJsonsDiff, logger
class CMetadataExtractor:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"metadata_raw": METADATA_RAW,
},
"optional": {
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.METADATA.value
RETURN_TYPES = ("JSON", "JSON", "JSON", "JSON", "STRING", "STRING")
RETURN_NAMES = ("prompt", "workflow", "file info", "raw to JSON", "raw to property", "raw to csv")
# OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, metadata_raw=None):
prompt = {}
workflow = {}
fileinfo = {}
text = ""
csv = ""
if metadata_raw is not None and isinstance(metadata_raw, dict):
try:
for key, value in metadata_raw.items():
if isinstance(value, dict):
# yes, double json.dumps is needed for jsons
value = json.dumps(json.dumps(value))
else:
value = json.dumps(value)
text += f"\"{key}\"={value}\n"
# remove spaces
# value = re.sub(' +', ' ', value)
value = re.sub('\n', ' ', value)
csv += f'"{key}"\t{value}\n'
if csv != "":
csv = '"key"\t"value"\n' + csv
except Exception as e:
logger.warn(e)
try:
if "prompt" in metadata_raw:
prompt = metadata_raw["prompt"]
else:
raise Exception("Prompt not found in metadata_raw")
except Exception as e:
logger.warn(e)
try:
if "workflow" in metadata_raw:
workflow = metadata_raw["workflow"]
else:
raise Exception("Workflow not found in metadata_raw")
except Exception as e:
logger.warn(e)
try:
if "fileinfo" in metadata_raw:
fileinfo = metadata_raw["fileinfo"]
else:
raise Exception("Fileinfo not found in metadata_raw")
except Exception as e:
logger.warn(e)
elif metadata_raw is None:
logger.debug("metadata_raw is None")
else:
logger.warn(TEXTS.INVALID_METADATA_MSG.value)
return (json.dumps(prompt, indent=CONFIG["indent"]),
json.dumps(workflow, indent=CONFIG["indent"]),
json.dumps(fileinfo, indent=CONFIG["indent"]),
json.dumps(metadata_raw, indent=CONFIG["indent"]),
text, csv)
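The "double json.dumps is needed for jsons" trick above is easy to verify in isolation: the inner dump serializes the dict, and the outer dump escapes that JSON text into a single string literal so the whole blob survives as one property/CSV field.

```python
import json

value = {"seed": 42, "steps": 20}
# inner dumps serializes the dict; outer dumps escapes the resulting JSON text
escaped = json.dumps(json.dumps(value))
print(escaped)  # -> "{\"seed\": 42, \"steps\": 20}"
# two rounds of loads recover the original structure
assert json.loads(json.loads(escaped)) == value
```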
class CMetadataCompare:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"metadata_raw_old": METADATA_RAW,
"metadata_raw_new": METADATA_RAW,
"what": (["Prompt", "Workflow", "Fileinfo"],),
},
"optional": {
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.METADATA.value
RETURN_TYPES = ("JSON",)
RETURN_NAMES = ("diff",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, what, metadata_raw_old=None, metadata_raw_new=None):
prompt_old = {}
workflow_old = {}
fileinfo_old = {}
prompt_new = {}
workflow_new = {}
fileinfo_new = {}
diff = ""
if isinstance(metadata_raw_old, dict) and isinstance(metadata_raw_new, dict):
if "prompt" in metadata_raw_old:
prompt_old = metadata_raw_old["prompt"]
else:
logger.warn("Prompt not found in metadata_raw_old")
if "workflow" in metadata_raw_old:
workflow_old = metadata_raw_old["workflow"]
else:
logger.warn("Workflow not found in metadata_raw_old")
if "fileinfo" in metadata_raw_old:
fileinfo_old = metadata_raw_old["fileinfo"]
else:
logger.warn("Fileinfo not found in metadata_raw_old")
if "prompt" in metadata_raw_new:
prompt_new = metadata_raw_new["prompt"]
else:
logger.warn("Prompt not found in metadata_raw_new")
if "workflow" in metadata_raw_new:
workflow_new = metadata_raw_new["workflow"]
else:
logger.warn("Workflow not found in metadata_raw_new")
if "fileinfo" in metadata_raw_new:
fileinfo_new = metadata_raw_new["fileinfo"]
else:
logger.warn("Fileinfo not found in metadata_raw_new")
if what == "Prompt":
diff = findJsonsDiff(prompt_old, prompt_new)
elif what == "Workflow":
diff = findJsonsDiff(workflow_old, workflow_new)
else:
diff = findJsonsDiff(fileinfo_old, fileinfo_new)
diff = json.dumps(diff, indent=CONFIG["indent"])
else:
invalid_msg = TEXTS.INVALID_METADATA_MSG.value
logger.warn(invalid_msg)
diff = invalid_msg
return {"ui": {"text": [diff]}, "result": (diff,)}
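The diff step delegates to `findJsonsDiff` from `..core` (the project depends on deepdiff for deep comparison). As a rough illustration only, a top-level-only version of the idea looks like this:

```python
def shallow_diff(old, new):
    # illustrative sketch only -- the real findJsonsDiff is a deep comparison;
    # this one reports keys whose top-level values differ
    return {k: {"old": old.get(k), "new": new.get(k)}
            for k in set(old) | set(new)
            if old.get(k) != new.get(k)}

diff = shallow_diff({"steps": 20, "cfg": 8}, {"steps": 30, "cfg": 8})
print(diff)  # -> {'steps': {'old': 20, 'new': 30}}
```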


@@ -0,0 +1,170 @@
import json
from ..core import CONFIG, any, JSON_WIDGET, CATEGORY, STRING, INT, FLOAT, BOOLEAN, logger, get_nested_value
# class CParameter:
# def __init__(self):
# pass
#
# @classmethod
# def INPUT_TYPES(cls):
# return {
# "required": {
# },
# "optional": {
# "path_to_json": STRING,
# "key": STRING,
# "default": STRING,
# },
# }
#
# CATEGORY = CATEGORY.MAIN.value + CATEGORY.UTILS.value
# INPUT_IS_LIST = False
#
# RETURN_TYPES = (any,)
# RETURN_NAMES = ("any",)
#
# FUNCTION = "execute"
#
# def execute(self, path_to_json=None, key=True, default=None):
# text = default
# value = text
#
# if path_to_json is not None and path_to_json != "":
# logger.debug(f"External parameter from: '{path_to_json}'")
# try:
# with open(path_to_json, 'r') as file:
# data = json.load(file)
# logger.debug(f"File found, data: '{data}'")
#
# result = get_value(data, key, default)
# text = result["text"]
# value = result["value"]
#
# except Exception as e:
# logger.error(e)
# text = f"Error reading file: {e}\nReturning default value: '{default}'"
# value = default
#
# return {"ui": {"text": [text]}, "result": [value]}
class CJsonFile:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
},
"optional": {
"path_to_json": STRING,
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.UTILS.value
INPUT_IS_LIST = False
RETURN_TYPES = ("JSON",)
RETURN_NAMES = ("json",)
FUNCTION = "execute"
@staticmethod
def IS_CHANGED(path_to_json=None):
# NaN is never equal to itself, so ComfyUI re-runs this node and re-reads the file every time
return float("NaN")
def execute(self, path_to_json=None):
text = ""
data = {}
if path_to_json is not None and path_to_json != "":
logger.debug(f"Open json file: '{path_to_json}'")
try:
with open(path_to_json, 'r') as file:
data = json.load(file)
text = json.dumps(data, indent=CONFIG["indent"])
logger.debug(f"File found, data: '{str(data)}'")
except Exception as e:
logger.error(e)
text = f"Error reading file: {e}"
return {"ui": {"text": [text]}, "result": [data]}
class CJsonExtractor:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"json": JSON_WIDGET,
},
"optional": {
"key": STRING,
"default": STRING,
},
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.UTILS.value
INPUT_IS_LIST = False
RETURN_TYPES = (any, "STRING", "INT", "FLOAT", "BOOLEAN")
RETURN_NAMES = ("any", "string", "int", "float", "boolean")
# OUTPUT_IS_LIST = (False,)
FUNCTION = "execute"
def execute(self, json=None, key=None, default=None):
result = get_value(json, key, default)
result["any"] = result["value"]
try:
result["string"] = str(result["value"])
except Exception as e:
result["string"] = result["value"]
try:
result["int"] = int(result["value"])
except Exception as e:
result["int"] = result["value"]
try:
result["float"] = float(result["value"])
except Exception as e:
result["float"] = result["value"]
try:
result["boolean"] = result["value"].lower() == "true"
except Exception as e:
result["boolean"] = result["value"]
return {
"ui": {"text": [result["text"]]},
"result": [
result["any"],
result["string"],
result["int"],
result["float"],
result["boolean"]
]
}
def get_value(data, key, default=None):
text = ""
val = ""
if key is not None and key != "":
val = get_nested_value(data, key, default)
if default != val:
text = f"Key found, returning value: '{val}'"
else:
text = f"Key not found, returning default value: '{val}'"
else:
text = f"Key is empty, returning default value: '{val}'"
return {
"text": text,
"value": val
}
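`get_value` relies on `get_nested_value` from `..core`, whose implementation is not shown in this diff. A hypothetical stand-in, assuming dot-separated keys, would behave like this:

```python
def get_nested_value_sketch(data, key, default=None):
    # hypothetical stand-in for ..core.get_nested_value; assumes dot-separated keys
    node = data
    for part in key.split("."):
        if isinstance(node, dict) and part in node:
            node = node[part]
        else:
            return default
    return node

config = {"sampler": {"name": "euler", "steps": 20}}
print(get_nested_value_sketch(config, "sampler.name", "ddim"))  # -> euler
print(get_nested_value_sketch(config, "sampler.cfg", 8))        # -> 8
```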


@@ -0,0 +1,74 @@
from ..core import CATEGORY, any
from ._names import CLASSES
class CPipeToAny:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {},
"optional": {
CLASSES.CPIPE_ANY_TYPE.value: (CLASSES.CPIPE_ANY_TYPE.value,),
"any_1": (any,),
"any_2": (any,),
"any_3": (any,),
"any_4": (any,),
"any_5": (any,),
"any_6": (any,),
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PIPE.value
RETURN_TYPES = (CLASSES.CPIPE_ANY_TYPE.value,)
FUNCTION = "execute"
def execute(self, CPipeAny=None, any_1=None, any_2=None, any_3=None, any_4=None, any_5=None, any_6=None):
any_1_original = None
any_2_original = None
any_3_original = None
any_4_original = None
any_5_original = None
any_6_original = None
if CPipeAny is not None:
any_1_original, any_2_original, any_3_original, any_4_original, any_5_original, any_6_original = CPipeAny
CAnyPipeMod = []
CAnyPipeMod.append(any_1 if any_1 is not None else any_1_original)
CAnyPipeMod.append(any_2 if any_2 is not None else any_2_original)
CAnyPipeMod.append(any_3 if any_3 is not None else any_3_original)
CAnyPipeMod.append(any_4 if any_4 is not None else any_4_original)
CAnyPipeMod.append(any_5 if any_5 is not None else any_5_original)
CAnyPipeMod.append(any_6 if any_6 is not None else any_6_original)
return (CAnyPipeMod,)
class CPipeFromAny:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
CLASSES.CPIPE_ANY_TYPE.value: (CLASSES.CPIPE_ANY_TYPE.value,),
},
"optional": {
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PIPE.value
RETURN_TYPES = (CLASSES.CPIPE_ANY_TYPE.value, any, any, any, any, any, any,)
RETURN_NAMES = (CLASSES.CPIPE_ANY_TYPE.value, "any_1", "any_2", "any_3", "any_4", "any_5", "any_6",)
FUNCTION = "execute"
def execute(self, CPipeAny=None, ):
any_1, any_2, any_3, any_4, any_5, any_6 = CPipeAny
return CPipeAny, any_1, any_2, any_3, any_4, any_5, any_6
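The pack/override/unpack cycle of `CPipeToAny` and `CPipeFromAny` can be sketched without ComfyUI (the helper name is illustrative): start from the incoming pipe (or six empty slots) and let each connected input override its slot.

```python
def pipe_to_any(pipe_in, *overrides):
    # sketch of CPipeToAny.execute: six slots, each override wins when connected
    base = list(pipe_in) if pipe_in is not None else [None] * 6
    return [o if o is not None else b for o, b in zip(overrides, base)]

pipe = pipe_to_any(None, "model", "clip", None, None, None, None)
pipe = pipe_to_any(pipe, None, None, "vae", None, None, None)
print(pipe[:3])  # -> ['model', 'clip', 'vae']
```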


@@ -0,0 +1,111 @@
from ..core import BOOLEAN, CATEGORY, STRING, INT, FLOAT, STRING_ML
class CBoolean:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PRIMITIVE.value
RETURN_TYPES = ("BOOLEAN",)
RETURN_NAMES = ("boolean",)
FUNCTION = "execute"
def execute(self, boolean=True):
return (boolean,)
class CText:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"string": STRING,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PRIMITIVE.value
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("string",)
FUNCTION = "execute"
def execute(self, string=""):
return (string,)
class CTextML:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"string": STRING_ML,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PRIMITIVE.value
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("string",)
FUNCTION = "execute"
def execute(self, string=""):
return (string,)
class CInteger:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"int": INT,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PRIMITIVE.value
RETURN_TYPES = ("INT",)
RETURN_NAMES = ("int",)
FUNCTION = "execute"
def execute(self, int=0):
return (int,)
class CFloat:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"float": FLOAT,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.PRIMITIVE.value
RETURN_TYPES = ("FLOAT",)
RETURN_NAMES = ("float",)
FUNCTION = "execute"
def execute(self, float=0.0):
return (float,)


@@ -0,0 +1,225 @@
from ..core import BOOLEAN, STRING, CATEGORY, any, logger
class CSwitchFromAny:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"any": (any, ),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = (any, any,)
RETURN_NAMES = ("on_true", "on_false",)
FUNCTION = "execute"
def execute(self, any, boolean=True):
logger.debug("Any switch: " + str(boolean))
if boolean:
return any, None
else:
return None, any
class CSwitchBooleanAny:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"on_true": (any, {"lazy": True}),
"on_false": (any, {"lazy": True}),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = (any,)
FUNCTION = "execute"
def check_lazy_status(self, on_true=None, on_false=None, boolean=True):
needed = "on_true" if boolean else "on_false"
return [needed]
def execute(self, on_true, on_false, boolean=True):
logger.debug("Any switch: " + str(boolean))
if boolean:
return (on_true,)
else:
return (on_false,)
class CSwitchBooleanString:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"on_true": ("STRING", {"default": "", "lazy": True}),
"on_false": ("STRING", {"default": "", "lazy": True}),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("string",)
FUNCTION = "execute"
def check_lazy_status(self, on_true=None, on_false=None, boolean=True):
needed = "on_true" if boolean else "on_false"
return [needed]
def execute(self, on_true, on_false, boolean=True):
logger.debug("String switch: " + str(boolean))
if boolean:
return (on_true,)
else:
return (on_false,)
class CSwitchBooleanConditioning:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"on_true": ("CONDITIONING", {"lazy": True}),
"on_false": ("CONDITIONING", {"lazy": True}),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = ("CONDITIONING",)
RETURN_NAMES = ("conditioning",)
FUNCTION = "execute"
def check_lazy_status(self, on_true=None, on_false=None, boolean=True):
needed = "on_true" if boolean else "on_false"
return [needed]
def execute(self, on_true, on_false, boolean=True):
logger.debug("Conditioning switch: " + str(boolean))
if boolean:
return (on_true,)
else:
return (on_false,)
class CSwitchBooleanImage:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"on_true": ("IMAGE", {"lazy": True}),
"on_false": ("IMAGE", {"lazy": True}),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = ("IMAGE",)
RETURN_NAMES = ("image",)
FUNCTION = "execute"
def check_lazy_status(self, on_true=None, on_false=None, boolean=True):
needed = "on_true" if boolean else "on_false"
return [needed]
def execute(self, on_true, on_false, boolean=True):
logger.debug("Image switch: " + str(boolean))
if boolean:
return (on_true,)
else:
return (on_false,)
class CSwitchBooleanLatent:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"on_true": ("LATENT", {"lazy": True}),
"on_false": ("LATENT", {"lazy": True}),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = ("LATENT",)
RETURN_NAMES = ("latent",)
FUNCTION = "execute"
def check_lazy_status(self, on_true=None, on_false=None, boolean=True):
needed = "on_true" if boolean else "on_false"
return [needed]
def execute(self, on_true, on_false, boolean=True):
logger.debug("Latent switch: " + str(boolean))
if boolean:
return (on_true,)
else:
return (on_false,)
class CSwitchBooleanMask:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"on_true": ("MASK", {"lazy": True}),
"on_false": ("MASK", {"lazy": True}),
"boolean": BOOLEAN,
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.SWITCH.value
RETURN_TYPES = ("MASK",)
RETURN_NAMES = ("mask",)
FUNCTION = "execute"
def check_lazy_status(self, on_true=None, on_false=None, boolean=True):
needed = "on_true" if boolean else "on_false"
return [needed]
def execute(self, on_true, on_false, boolean=True):
logger.debug("Mask switch: " + str(boolean))
if boolean:
return (on_true,)
else:
return (on_false,)
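All of these switch nodes share the same `check_lazy_status` pattern: ComfyUI calls it to learn which lazy inputs still need evaluating, and because only the selected branch is listed, the other branch's upstream graph is never executed. Stripped of the node boilerplate, the logic is just:

```python
def check_lazy_status(boolean):
    # mirror of the switch nodes' lazy logic: report which lazy input is needed;
    # the unselected branch is never computed upstream
    return ["on_true" if boolean else "on_false"]

print(check_lazy_status(True))   # -> ['on_true']
print(check_lazy_status(False))  # -> ['on_false']
```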


@@ -0,0 +1,55 @@
from ..core import CATEGORY, JSON_WIDGET, findJsonStrDiff, get_system_stats, logger
class CUtilsCompareJsons:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"json_old": JSON_WIDGET,
"json_new": JSON_WIDGET,
},
"optional": {
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.UTILS.value
RETURN_TYPES = ("JSON",)
RETURN_NAMES = ("json_compared",)
OUTPUT_NODE = True
FUNCTION = "execute"
def execute(self, json_old, json_new):
json = findJsonStrDiff(json_old, json_new)
return (str(json),)
# Credits to: https://github.com/WASasquatch/was-node-suite-comfyui for the following node!
class CUtilsStatSystem:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"latent": ("LATENT",),
}
}
CATEGORY = CATEGORY.MAIN.value + CATEGORY.UTILS.value
RETURN_TYPES = ("LATENT",)
RETURN_NAMES = ("latent",)
FUNCTION = "execute"
def execute(self, latent):
log = "Samples Passthrough:\n"
for stat in get_system_stats():
log += stat + "\n"
logger.debug(log)
return {"ui": {"text": [log]}, "result": (latent,)}

File diff suppressed because it is too large


@@ -0,0 +1,20 @@
{
"name": "crystools-typescript",
"private": true,
"version": "1.0.0",
"type": "module",
"scripts": {
"start": "tsc --watch",
"tsc": "tsc",
"lint": "eslint web --ext ts --report-unused-disable-directives --max-warnings 0",
"lint-fix": "eslint web --ext ts --fix",
"validate": "npm run tsc && npm run lint"
},
"devDependencies": {
"@typescript-eslint/eslint-plugin": "6.18.1",
"@typescript-eslint/parser": "6.18.1",
"eslint": "8.56.0",
"eslint-plugin-import": "2.29.1",
"typescript": "5.3.3"
}
}


@@ -0,0 +1,21 @@
[project]
name = "ComfyUI-Crystools"
description = "With this suite, you can see the resources monitor, progress bar & time elapsed, metadata and compare between two images, compare between two JSONs, show any value to console/display, pipes, and more!\nThis provides better nodes to load/save images, previews, etc, and see \"hidden\" data without loading a new workflow."
version = "1.27.4"
license = { file = "LICENSE" }
dependencies = ["deepdiff", "torch", "numpy", "Pillow", "pynvml", "py-cpuinfo"]
classifiers = [
"Operating System :: OS Independent",
"Environment :: GPU :: NVIDIA CUDA",
]
[project.urls]
Repository = "https://github.com/crystian/ComfyUI-Crystools"
Documentation = "https://github.com/crystian/ComfyUI-Crystools/blob/main/README.md"
"Bug Tracker" = "https://github.com/crystian/ComfyUI-Crystools/issues"
[tool.comfy]
PublisherId = "crystian"
DisplayName = "ComfyUI-Crystools"
Icon = "https://raw.githubusercontent.com/crystian/ComfyUI-Crystools/main/docs/screwdriver.png"


@@ -0,0 +1,8 @@
deepdiff
torch
numpy
Pillow
pynvml; platform_machine != 'aarch64'
py-cpuinfo
piexif
jetson-stats; platform_machine == 'aarch64'


@@ -0,0 +1,102 @@
{
"last_node_id": 4,
"last_link_id": 4,
"nodes": [
{
"id": 3,
"type": "Show any [Crystools]",
"pos": [
475,
250
],
"size": {
"0": 400,
"1": 750
},
"flags": {},
"order": 1,
"mode": 0,
"inputs": [
{
"name": "any_value",
"type": "*",
"link": 3
}
],
"properties": {
"Node name for S&R": "Show any [Crystools]"
},
"widgets_values": [
false,
true,
""
]
},
{
"id": 1,
"type": "Load image with metadata [Crystools]",
"pos": [
100,
75
],
"size": [
325,
350
],
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "image",
"type": "IMAGE",
"links": null,
"shape": 3
},
{
"name": "mask",
"type": "MASK",
"links": null,
"shape": 3
},
{
"name": "prompt",
"type": "JSON",
"links": [
3
],
"shape": 3,
"slot_index": 2
},
{
"name": "Metadata RAW",
"type": "METADATA_RAW",
"links": [],
"shape": 3,
"slot_index": 3
}
],
"properties": {
"Node name for S&R": "Load image with metadata [Crystools]"
},
"widgets_values": [
"tests/ComfyUI_00314_20-8-euler.png",
"image"
]
}
],
"links": [
[
3,
1,
2,
3,
0,
"*"
]
],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}


@@ -0,0 +1,182 @@
{
"last_node_id": 4,
"last_link_id": 4,
"nodes": [
{
"id": 4,
"type": "Show any to JSON [Crystools]",
"pos": [
900,
175
],
"size": {
"0": 400,
"1": 825
},
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "any_value",
"type": "*",
"link": 4
}
],
"outputs": [
{
"name": "string",
"type": "STRING",
"links": null,
"shape": 3
}
],
"properties": {
"Node name for S&R": "Show any to JSON [Crystools]"
}
},
{
"id": 2,
"type": "Show any to JSON [Crystools]",
"pos": [
1350,
50
],
"size": [
400,
950
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [
{
"name": "any_value",
"type": "*",
"link": 2
}
],
"outputs": [
{
"name": "string",
"type": "STRING",
"links": null,
"shape": 3
}
],
"properties": {
"Node name for S&R": "Show any to JSON [Crystools]"
}
},
{
"id": 3,
"type": "Show any [Crystools]",
"pos": [
475,
250
],
"size": [
400,
750
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [
{
"name": "any_value",
"type": "*",
"link": 3
}
],
"properties": {
"Node name for S&R": "Show any [Crystools]"
}
},
{
"id": 1,
"type": "Load image with metadata [Crystools]",
"pos": [
100,
75
],
"size": [
325,
350
],
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "image",
"type": "IMAGE",
"links": null,
"shape": 3
},
{
"name": "mask",
"type": "MASK",
"links": null,
"shape": 3
},
{
"name": "prompt",
"type": "JSON",
"links": [
2,
3
],
"shape": 3,
"slot_index": 2
},
{
"name": "Metadata RAW",
"type": "METADATA_RAW",
"links": [
4
],
"shape": 3,
"slot_index": 3
}
],
"properties": {
"Node name for S&R": "Load image with metadata [Crystools]"
},
"widgets_values": [
"tests/ComfyUI_00314_20-8-euler.png",
"image"
]
}
],
"links": [
[
2,
1,
2,
2,
0,
"*"
],
[
3,
1,
2,
3,
0,
"*"
],
[
4,
1,
3,
4,
0,
"*"
]
],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}


@@ -0,0 +1,382 @@
{
"last_node_id": 13,
"last_link_id": 11,
"nodes": [
{
"id": 7,
"type": "CLIPTextEncode",
"pos": [
413,
389
],
"size": {
"0": 425.27801513671875,
"1": 180.6060791015625
},
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 5
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
6
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"text, watermark"
]
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
415,
186
],
"size": {
"0": 422.84503173828125,
"1": 164.31304931640625
},
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 3
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
4
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"
]
},
{
"id": 5,
"type": "EmptyLatentImage",
"pos": [
473,
609
],
"size": {
"0": 315,
"1": 106
},
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
2
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
512,
512,
1
]
},
{
"id": 8,
"type": "VAEDecode",
"pos": [
1209,
188
],
"size": {
"0": 210,
"1": 46
},
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 8
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
9
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "VAEDecode"
}
},
{
"id": 9,
"type": "SaveImage",
"pos": [
1451,
189
],
"size": [
200,
275
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"properties": {},
"widgets_values": [
"ComfyUI"
]
},
{
"id": 3,
"type": "KSampler",
"pos": [
863,
186
],
"size": [
325,
475
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 1
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 4
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 2
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
7
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
156680208700286,
"fixed",
20,
8,
"euler",
"normal",
1
]
},
{
"id": 10,
"type": "Show Metadata [Crystools]",
"pos": [
1225,
525
],
"size": [
450,
525
],
"flags": {},
"order": 1,
"mode": 0,
"properties": {}
},
{
"id": 4,
"type": "CheckpointLoaderSimple",
"pos": [
26,
474
],
"size": {
"0": 315,
"1": 98
},
"flags": {},
"order": 2,
"mode": 0,
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
1
],
"slot_index": 0
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
3,
5
],
"slot_index": 1
},
{
"name": "VAE",
"type": "VAE",
"links": [
8
],
"slot_index": 2
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"sd-v1-5-pruned-emaonly.safetensors"
]
}
],
"links": [
[
1,
4,
0,
3,
0,
"MODEL"
],
[
2,
5,
0,
3,
3,
"LATENT"
],
[
3,
4,
1,
6,
0,
"CLIP"
],
[
4,
6,
0,
3,
1,
"CONDITIONING"
],
[
5,
4,
1,
7,
0,
"CLIP"
],
[
6,
7,
0,
3,
2,
"CONDITIONING"
],
[
7,
3,
0,
8,
0,
"LATENT"
],
[
8,
4,
2,
8,
1,
"VAE"
],
[
9,
8,
0,
9,
0,
"IMAGE"
]
],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}


@@ -0,0 +1,59 @@
{
"last_node_id": 1,
"last_link_id": 0,
"nodes": [
{
"id": 1,
"type": "Load image with metadata [Crystools]",
"pos": [
150,
200
],
"size": [
325,
350
],
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "image",
"type": "IMAGE",
"links": null,
"shape": 3
},
{
"name": "mask",
"type": "MASK",
"links": null,
"shape": 3
},
{
"name": "prompt",
"type": "JSON",
"links": null,
"shape": 3
},
{
"name": "Metadata RAW",
"type": "METADATA_RAW",
"links": null,
"shape": 3
}
],
"properties": {
"Node name for S&R": "Load image with metadata [Crystools]"
},
"widgets_values": [
"tests/ComfyUI_00314_20-8-euler.png",
"image"
]
}
],
"links": [],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}


@@ -0,0 +1,373 @@
{
"last_node_id": 14,
"last_link_id": 12,
"nodes": [
{
"id": 7,
"type": "CLIPTextEncode",
"pos": [
413,
389
],
"size": {
"0": 425.27801513671875,
"1": 180.6060791015625
},
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 5
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
6
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"text, watermark"
]
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
415,
186
],
"size": {
"0": 422.84503173828125,
"1": 164.31304931640625
},
"flags": {},
"order": 2,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 3
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"links": [
4
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"
]
},
{
"id": 5,
"type": "EmptyLatentImage",
"pos": [
473,
609
],
"size": {
"0": 315,
"1": 106
},
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
2
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
512,
512,
1
]
},
{
"id": 8,
"type": "VAEDecode",
"pos": [
1209,
188
],
"size": {
"0": 210,
"1": 46
},
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 8
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"links": [
12
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "VAEDecode"
}
},
{
"id": 4,
"type": "CheckpointLoaderSimple",
"pos": [
26,
474
],
"size": {
"0": 315,
"1": 98
},
"flags": {},
"order": 1,
"mode": 0,
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
1
],
"slot_index": 0
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
3,
5
],
"slot_index": 1
},
{
"name": "VAE",
"type": "VAE",
"links": [
8
],
"slot_index": 2
}
],
"properties": {
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"sd-v1-5-pruned-emaonly.safetensors"
]
},
{
"id": 3,
"type": "KSampler",
"pos": [
863,
186
],
"size": {
"0": 325,
"1": 475
},
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 1
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 4
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 2
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"links": [
7
],
"slot_index": 0
}
],
"properties": {
"Node name for S&R": "KSampler"
},
"widgets_values": [
156680208700286,
"fixed",
20,
8,
"euler",
"normal",
1
]
},
{
"id": 14,
"type": "Preview from image [Crystools]",
"pos": [
1525,
175
],
"size": [
400,
800
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "image",
"type": "IMAGE",
"link": 12
}
],
"outputs": [
{
"name": "Metadata RAW",
"type": "METADATA_RAW",
"links": null,
"shape": 3
}
],
"properties": {
"Node name for S&R": "Preview from image [Crystools]"
}
}
],
"links": [
[
1,
4,
0,
3,
0,
"MODEL"
],
[
2,
5,
0,
3,
3,
"LATENT"
],
[
3,
4,
1,
6,
0,
"CLIP"
],
[
4,
6,
0,
3,
1,
"CONDITIONING"
],
[
5,
4,
1,
7,
0,
"CLIP"
],
[
6,
7,
0,
3,
2,
"CONDITIONING"
],
[
7,
3,
0,
8,
0,
"LATENT"
],
[
8,
4,
2,
8,
1,
"VAE"
],
[
12,
8,
0,
14,
0,
"IMAGE"
]
],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}


@@ -0,0 +1,104 @@
{
"last_node_id": 2,
"last_link_id": 1,
"nodes": [
{
"id": 1,
"type": "Load image with metadata [Crystools]",
"pos": [
150,
200
],
"size": [
325,
350
],
"flags": {},
"order": 0,
"mode": 0,
"outputs": [
{
"name": "image",
"type": "IMAGE",
"links": null,
"shape": 3
},
{
"name": "mask",
"type": "MASK",
"links": null,
"shape": 3
},
{
"name": "prompt",
"type": "JSON",
"links": null,
"shape": 3
},
{
"name": "Metadata RAW",
"type": "METADATA_RAW",
"links": [
1
],
"shape": 3,
"slot_index": 3
}
],
"properties": {
"Node name for S&R": "Load image with metadata [Crystools]"
},
"widgets_values": [
"tests/ComfyUI_00314_20-8-euler.png",
"image"
]
},
{
"id": 2,
"type": "Preview from metadata [Crystools]",
"pos": [
575,
200
],
"size": [
425,
675
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [
{
"name": "metadata_raw",
"type": "METADATA_RAW",
"link": 1
}
],
"outputs": [
{
"name": "Metadata RAW",
"type": "METADATA_RAW",
"links": null,
"shape": 3
}
],
"properties": {
"Node name for S&R": "Preview from metadata [Crystools]"
}
}
],
"links": [
[
1,
1,
3,
2,
0,
"METADATA_RAW"
]
],
"groups": [],
"config": {},
"extra": {},
"version": 0.4
}
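The fixture files above all use the same `links` encoding. As a reading aid only, here is a minimal sketch of decoding it; the 6-tuple order (`[link_id, origin_node, origin_slot, target_node, target_slot, type]`, with `"*"` as a wildcard type) is inferred from the data in these fixtures, not from a spec, and the toy workflow below is hypothetical:

```python
def describe_links(workflow):
    """Render each links entry as 'origin[slot] -> target[slot] (type)'.

    Assumed tuple order (inferred from the fixtures):
    [link_id, origin_node, origin_slot, target_node, target_slot, type].
    """
    # Map node id -> node type for readable output
    nodes = {n["id"]: n["type"] for n in workflow["nodes"]}
    return [
        f"link {lid}: {nodes[src]}[{s_slot}] -> {nodes[dst]}[{d_slot}] ({t})"
        for lid, src, s_slot, dst, d_slot, t in workflow["links"]
    ]

# Toy workflow shaped like the fixtures above (hypothetical, trimmed fields)
wf = {
    "nodes": [
        {"id": 1, "type": "Load image with metadata [Crystools]"},
        {"id": 2, "type": "Preview from metadata [Crystools]"},
    ],
    "links": [[1, 1, 3, 2, 0, "METADATA_RAW"]],
}
print(describe_links(wf))
```

Under that assumed ordering, the single entry reads as node 1's slot 3 (`Metadata RAW` output) feeding node 2's slot 0 (`metadata_raw` input), which matches the `outputs`/`inputs` wiring in the last fixture.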
