Add custom nodes, Civitai loras (LFS), and vast.ai setup script
Some checks failed
Python Linting / Run Ruff (push) Has been cancelled
Python Linting / Run Pylint (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.10, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.11, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-stable (12.1, , linux, 3.12, [self-hosted Linux], stable) (push) Has been cancelled
Full Comfy CI Workflow Runs / test-unix-nightly (12.1, , linux, 3.11, [self-hosted Linux], nightly) (push) Has been cancelled
Execution Tests / test (macos-latest) (push) Has been cancelled
Execution Tests / test (ubuntu-latest) (push) Has been cancelled
Execution Tests / test (windows-latest) (push) Has been cancelled
Test server launches without errors / test (push) Has been cancelled
Unit Tests / test (macos-latest) (push) Has been cancelled
Unit Tests / test (ubuntu-latest) (push) Has been cancelled
Unit Tests / test (windows-2022) (push) Has been cancelled
Includes 30 custom nodes committed directly, 7 Civitai-exclusive LoRAs stored via Git LFS, and a setup script that installs all dependencies and downloads HuggingFace-hosted models on vast.ai.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
custom_nodes/comfyui-image-saver/.gitignore (vendored, new file, 3 lines)
@@ -0,0 +1,3 @@
__pycache__
.pytest_cache
.vscode
custom_nodes/comfyui-image-saver/CHANGELOG.md (new file, 203 lines)
@@ -0,0 +1,203 @@
# v1.21.0

- Cleaner naming for batch saves.
- Allow custom time_format via prompt parameters in image saver.

# v1.20.0

- Add RandomShapeGenerator

# v1.19.0

- Bring sampler/scheduler selectors back.

# v1.18.0

- Add WorkflowInputValue node to retrieve input values from nodes in workflow.

# v1.17.0

- BREAKING CHANGE: sampler/scheduler loaders are removed. Instead, an AnyToString node is added to help convert sampler/scheduler types to string, which works with native loaders. Besides the loader node removal, Input Parameters has to be recreated and reconnected with the saver node(s). Check the example workflow for reference.

# v1.16.0

- Improved Civitai Hash Fetcher search reliability with smart matching and fallbacks
- Added NSFW model search support
- Fixed Civitai Hash Fetcher caching bug
- Refactored file matching with multi-level fallback strategy
- Added GGUF model format support
- Case-insensitive extension check for checkpoints
- Skip resources with missing hashes

# v1.15.2

- Bugfix: sanitize filename only, without the path

# v1.15.1

- Bugfix: Add missing parameter
- Bugfix: Don't sanitize slashes in filenames

# v1.15.0

- Allow custom info to be added to metadata, inserted into the a1111 string between clip skip and model hash
- Sanitize filenames
- Fixed timeout exception to prevent network timeout crashes

# v1.14.2

- Update list of schedulers

# v1.14.1

- Expose ConditioningConcatOptional utility

# v1.14.0

- Add ConditioningConcatOptional utility

# v1.13.1

- Fix parameter name mismatch

# v1.13.0

- Add support for Efficiency node pack's schedulers

# v1.12.0

- Schedulers list for KSampler (inspire) has been updated.
- BREAKING CHANGE: To avoid confusion, the following nodes have been renamed:
  - SchedulerSelector -> SchedulerSelectorInspire
  - SchedulerSelectorComfy -> SchedulerSelector
  - SchedulerToString -> SchedulerInspireToString
  - SchedulerComfyToString -> SchedulerToString

# v1.11.1

- Place preview switch at the end

# v1.11.0

- Allow disabling the previews

# v1.10.1

- Fix regression with path handling

# v1.10.0

- Provide 'Image Saver Simple' & 'Image Saver Metadata' that can be used together, separating the metadata node from the image saver node
- `scheduler` input has been renamed to `scheduler_name`

# v1.9.2

- Do not override proxy settings of requests.get

# v1.9.1

- Bugfix: handle network connection error for civitai

# v1.9.0

- Allow multiple comma-separated model names
- Add debug a111_params output

# v1.8.0

- Allow workflow embed for all file formats.
- Added optional version field for Civitai Hash Fetcher.
- Added InputParameters node to simplify common KSampler parameters input.

# v1.7.0

- Add hash output for optional chaining of additional hashes.
- Add tests for image saving.
- Fix f-string failure.

# v1.6.0

- Add Civitai download option for LoRA weight saving (#68).
- Add easy_remix option for stripping LoRAs from prompt (#68).
- Add width/height filename variables (#67).
- Add progress bar for sha256 calculation (#70).
- Add "jpg" extension to the list for more control over the target filename (#69).

# v1.5.2

- Reverted experimental webp support for the moment. Needs more testing.
- Fix putting "prompt" into JPEGs.

# v1.5.1

- Fix workflow storage in lossless webp

# v1.5.0

- New lines are no longer removed from prompts.
- Added Civitai Hash Fetcher node that can retrieve a resource hash from civitai based on its name.
- Added an "additional hashes" input that accepts a comma-separated list of resource hashes that will be stored in the image metadata.
- Experimental support for storing workflow in webp.

# v1.4.0

- Add UNETLoaderWithName
- Also check the unet directory (if not found in checkpoints) when calculating model hash
- Add tooltips
- Image Saver: Add clip skip parameter
- Add the suffix _0x to the file name if a file with that name already exists (#40)
- Remove strip_a1111_params option
- Bugfix: Fix the output names of the SchedulerToString, SchedulerComfyToString and SamplerToString nodes

# v1.3.0

- Saver node: converted sampler input to string
- SamplerSelector node: output sampler name also as a string
- Add SamplerToString util node
- Fixed converter nodes
- Change min value for widgets with fixed steps

# v1.2.1

- Update Impact Pack scheduler list

# v1.2.0

- Add option to strip positive/negative prompt from the a1111 parameters comment (hashes for loras/embeddings are still always added)
- Add option for embedding prompt/workflow in PNG
- Add 'AYS SDXL', 'AYS SD1' and 'AYS SVD' to scheduler selectors
- Added dpmpp_3m_sde sampler
- Added exponential scheduler
- Fix suffix for batches
- Save json for each image in batch
- Allow leaving modelname empty

# v1.1.0

- Fix extension check in full_lora_path_for
- Add 'save_workflow_as_json', which allows saving an additional file with the json workflow included

# v1.0.0

- **BREAKING CHANGE**: Convert CheckpointSelector to CheckpointLoaderWithName (571fcfa319438a32e051f90b32827363bccbd2ef). Fixes 2 issues:
  - oversized search fields (https://github.com/giriss/comfy-image-saver/issues/5)
  - selector breaking when model files are added/removed at runtime
- Try to find loras with incomplete paths (002471d95078d8b2858afc92bc4589c8c4e8d459):
  - `<lora:asdf:1.2>` will be found and hashed if the actual location is `<lora:subdirectory/asdf:1.2>`
- Update default filename pattern from `%time_%seed` to `%time_%basemodelname_%seed` (72f17f0a4e97a7c402806cc21e9f564a5209073d)
- Include embedding, lora and model information in the metadata in civitai format (https://github.com/alexopus/ComfyUI-Image-Saver/pull/2)
- Rename all nodes to avoid conflicts with the forked repo
- Make PNG optimization optional and off by default (c760e50b62701af3d44edfb69d3776965a645406)
- Calculate model hash only if there is no calculated one on disk already. Store on disk after calculation (96df2c9c74c089a8cca811ccf7aaa72f68faf9db)
- Fix civitai sampler/scheduler name (af4eec9bc1cc55643c0df14aaf3a446fbbc3d86d)
- Fix metadata format according to https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/5ef669de080814067961f28357256e8fe27544f4/modules/processing.py#L673 (https://github.com/giriss/comfy-image-saver/pull/11)
- Add input `denoise` (https://github.com/Danand/comfy-image-saver/commit/37fc8903e05c0d70a7b7cfb3a4bcc51f4f464637)
- Add resolving of more placeholders for file names (https://github.com/giriss/comfy-image-saver/pull/16)
  - `%sampler_name`
  - `%steps`
  - `%cfg`
  - `%scheduler`
  - `%basemodelname`

Changes since the fork from https://github.com/giriss/comfy-image-saver.
custom_nodes/comfyui-image-saver/CLAUDE.md (new file, 128 lines)
@@ -0,0 +1,128 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

ComfyUI-Image-Saver is a ComfyUI custom node plugin that saves images with generation metadata compatible with Civitai. It supports PNG, JPEG, and WebP formats, storing model, LoRA, and embedding hashes for proper resource recognition.

## Development Commands

### Testing

```bash
cd saver
python -m pytest
```

### Installation for Development

```bash
pip install -r requirements.txt
```

### Package Information

- Main dependency: `piexif` (for EXIF metadata handling)
- Version defined in `pyproject.toml`
- ComfyUI plugin structure with node registration

## Architecture

### Core Components

1. **Node Registration (`__init__.py`)**
   - Registers all custom nodes with ComfyUI
   - Maps node class names to implementations
   - Defines `WEB_DIRECTORY` for JavaScript assets

2. **Image Saving System (`nodes.py`)**
   - `ImageSaver`: Main node for saving images with full metadata
   - `ImageSaverSimple`: Simplified version for basic usage
   - `ImageSaverMetadata`: Metadata-only node for separation of concerns
   - `Metadata` dataclass: Structured metadata container

3. **Core Saver Logic (`saver/saver.py`)**
   - `save_image()`: Handles different image formats (PNG, JPEG, WebP)
   - PNG: Uses `PngInfo` for metadata storage
   - JPEG/WebP: Uses EXIF format via `piexif`
   - Workflow embedding with size limits (65535 bytes for JPEG)

4. **Utility Modules**
   - `utils.py`: File operations, hashing, path resolution
   - `utils_civitai.py`: Civitai API integration and metadata formatting
   - `prompt_metadata_extractor.py`: Extracts LoRAs and embeddings from prompts

5. **Node Types**
   - `nodes_loaders.py`: Checkpoint and UNet loaders with name tracking
   - `nodes_selectors.py`: Sampler and scheduler selection utilities
   - `nodes_literals.py`: Literal value generators (seed, strings, etc.)
   - `civitai_nodes.py`: Civitai hash fetching functionality

### Key Features

- **Metadata Support**: A1111-compatible parameters with Civitai resource hashes
- **Multi-format**: PNG (full workflow), JPEG/WebP (parameters only)
- **Hash Calculation**: SHA256 hashing with file caching (`.sha256` files)
- **Resource Detection**: Automatic LoRA, embedding, and model hash extraction
- **Civitai Integration**: Downloads resource metadata for proper attribution
- **Filename Templating**: Supports variables like `%date`, `%time`, `%seed`, `%model`, `%width`, `%height`, `%counter`, `%sampler_name`, `%steps`, `%cfg`, `%scheduler_name`, `%basemodelname`, `%denoise`, `%clip_skip`, `%custom`

### Advanced Features

- **Multiple Model Support**: ModelName parameter accepts comma-separated model names. Primary model hash is used in metadata, additional models are added to `additional_hashes`
- **Easy Remix Mode**: When enabled, automatically cleans prompts by removing LoRA tags and simplifying embeddings for better Civitai remix compatibility
- **Custom Metadata Field**: Arbitrary string can be inserted into A1111 parameters via the `custom` parameter
- **Manual Hash Management**: User-added resource hashes stored in `/models/image-saver/manual-hashes.json` for resources not found via Civitai API
- **File Path Matching**: Three-level fallback strategy for finding resources:
  1. Exact path match
  2. Filename stem match (without extension)
  3. Base name match (case-insensitive)
- **Civitai Hash Fetcher Node**: Dedicated node (`CivitaiHashFetcher`) for looking up model hashes directly from Civitai by username and model name
- **Caching Strategy**:
  - `.sha256` files: SHA256 hashes cached alongside model files
  - `.civitai.info` files: Civitai metadata cached to reduce API calls
  - Internal cache: CivitaiHashFetcher maintains runtime cache to avoid redundant lookups
### Data Flow

1. **Input Processing**: Parameters and images received from ComfyUI workflow
2. **Metadata Extraction**: Prompts parsed for LoRAs, embeddings, model references
3. **Hash Generation**: SHA256 hashes calculated for all resources
4. **Civitai Lookup**: Resource metadata fetched from Civitai API
5. **Metadata Assembly**: A1111-compatible parameter string generated
6. **Image Saving**: Metadata embedded in image files based on format
7. **Output**: Saved images with proper metadata for sharing/recognition

### File Structure

```
ComfyUI-Image-Saver/
├── __init__.py                   # Node registration
├── nodes.py                      # Main image saver nodes
├── saver/                        # Core saving logic
│   ├── saver.py                  # Image format handling
│   └── test_saver.py             # Unit tests
├── utils.py                      # File operations and hashing
├── utils_civitai.py              # Civitai API integration
├── prompt_metadata_extractor.py  # Prompt parsing
├── nodes_*.py                    # Specialized node types
├── civitai_nodes.py              # Civitai functionality
└── js/                           # Frontend JavaScript
    ├── read_exif_workflow.js     # ComfyUI extension for reading EXIF workflows from dropped images
    └── lib/exif-reader.js        # EXIF reading utilities (ExifReader v4.26.2)
```

## Testing

Tests are located in `saver/test_saver.py` and use pytest. The test configuration is in `saver/pytest.ini`.

Run tests with:

```bash
cd saver && python -m pytest
```

## Important Notes

- Hash files (`.sha256`) are cached alongside model files to avoid recalculation
- JPEG format has a 65535-byte limit for EXIF data
- WebP workflow embedding is experimental
- Resource paths are resolved through ComfyUI's folder_paths system
- Civitai integration can be disabled via `download_civitai_data` parameter
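The `.sha256` sidecar caching noted above can be sketched as follows. The sidecar naming scheme and chunk size are assumptions for illustration; the plugin's real helper lives in `utils.py`:

```python
import hashlib
from pathlib import Path

def cached_sha256(model_path: str) -> str:
    """Return the file's SHA256, reusing a '.sha256' sidecar file when present."""
    path = Path(model_path)
    # e.g. model.safetensors -> model.safetensors.sha256 (assumed naming)
    sidecar = path.with_suffix(path.suffix + ".sha256")
    if sidecar.exists():
        return sidecar.read_text().strip()
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in 1 MiB chunks so multi-GB model files are not loaded into memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    result = digest.hexdigest()
    sidecar.write_text(result)  # cache for the next run
    return result
```

The second call for the same file then only reads the small sidecar instead of re-hashing the model.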
custom_nodes/comfyui-image-saver/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Girish Gopaul

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
custom_nodes/comfyui-image-saver/README.md (new file, 74 lines)
@@ -0,0 +1,74 @@
[!] Forked from https://github.com/giriss/comfy-image-saver, which seems to have been inactive for a while.

# Save image with generation metadata in ComfyUI

Allows you to save images with their **generation metadata**, compatible with *Civitai* geninfo auto-detection. Works with PNG, JPG and WEBP. PNG stores both the full workflow in comfy format and a1111-style parameters; JPEG/WEBP store only the a1111-style parameters. **Includes hashes of models, LoRAs and embeddings for proper resource linking** on civitai.

You can find the example workflow file named `example-workflow.json`.
<img width="1288" height="1039" alt="workflow" src="https://github.com/user-attachments/assets/dbbb9f67-afa3-48a2-8cd3-e4116393f8e0" />

You can also add LoRAs to the prompt in \<lora:name:weight\> format, which will be translated into hashes and stored together with the metadata. For this it is recommended to use `ImpactWildcardEncode` from the fantastic [ComfyUI-Impact-Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack). It will allow you to convert the LoRAs directly to proper conditioning without having to worry about avoiding/concatenating lora strings, which have no effect in standard conditioning nodes. Here is an example:


This lets Civitai auto-detect all of the resources (assuming the model/lora/embedding hashes match):


## How to install?

### Method 1: Manager (Recommended)

If you have *ComfyUI-Manager*, you can simply search "**ComfyUI Image Saver**" and install these custom nodes.

### Method 2: Easy

If you don't have *ComfyUI-Manager*, then:

- Using CLI, go to the ComfyUI folder
- `cd custom_nodes`
- `git clone git@github.com:alexopus/ComfyUI-Image-Saver.git`
- `cd ComfyUI-Image-Saver`
- `pip install -r requirements.txt`
- Start/restart ComfyUI

## Customization of file/folder names

You can use the following placeholders:

- `%date`
- `%time` *– format taken from `time_format`*
- `%time_format<format>` *– custom datetime format using Python strftime codes*
- `%model` *– full name of model file*
- `%basemodelname` *– name of model (without file extension)*
- `%seed`
- `%counter`
- `%sampler_name`
- `%scheduler`
- `%steps`
- `%cfg`
- `%denoise`

Example:

| `filename` value | Result file name |
| --- | --- |
| `%time-%basemodelname-%cfg-%steps-%sampler_name-%scheduler-%seed` | `2023-11-16-131331-Anything-v4.5-pruned-mergedVae-7.0-25-dpm_2-normal-1_01.png` |
| `%time_format<%Y%m%d_%H%M%S>-%seed` | `20231116_131331-1.png` |
| `%time_format<%B %d, %Y> %basemodelname` | `November 16, 2023 Anything-v4.5.png` |
| `img_%time_format<%Y-%m-%d>_%seed` | `img_2023-11-16_1.png` |

**Common strftime format codes for `%time_format<format>`:**

| Code | Meaning | Example |
|------|---------|---------|
| `%Y` | Year (4-digit) | 2023 |
| `%y` | Year (2-digit) | 23 |
| `%m` | Month (01-12) | 11 |
| `%B` | Month name (full) | November |
| `%b` | Month name (short) | Nov |
| `%d` | Day (01-31) | 16 |
| `%H` | Hour 24h | 13 |
| `%I` | Hour 12h | 01 |
| `%M` | Minute | 13 |
| `%S` | Second | 31 |
| `%p` | AM/PM | PM |
| `%A` | Weekday (full) | Thursday |
| `%a` | Weekday (short) | Thu |
| `%F` | YYYY-MM-DD | 2023-11-16 |
| `%T` | HH:MM:SS | 13:13:31 |
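The `%time_format<…>` placeholder can be resolved with a single regex substitution over the pattern. `resolve_time_format` below is a hypothetical helper for illustration, not the node's actual code:

```python
import re
from datetime import datetime

def resolve_time_format(pattern: str, now: datetime) -> str:
    # Substitute each %time_format<...> placeholder with its strftime result;
    # the other placeholders (%seed, %basemodelname, ...) are resolved elsewhere.
    return re.sub(r"%time_format<([^>]*)>",
                  lambda m: now.strftime(m.group(1)),
                  pattern)

resolve_time_format("%time_format<%Y%m%d_%H%M%S>-%seed",
                    datetime(2023, 11, 16, 13, 13, 31))
# -> '20231116_131331-%seed'
```

This matches the table above: the format inside the angle brackets is passed verbatim to Python's `strftime`.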
custom_nodes/comfyui-image-saver/__init__.py (new file, 35 lines)
@@ -0,0 +1,35 @@
from typing import Any

from .nodes import ImageSaver, ImageSaverSimple, ImageSaverMetadata
from .nodes_literals import SeedGenerator, StringLiteral, SizeLiteral, IntLiteral, FloatLiteral, CfgLiteral, ConditioningConcatOptional, RandomShapeGenerator
from .nodes_loaders import CheckpointLoaderWithName, UNETLoaderWithName
from .nodes_selectors import SamplerSelector, SchedulerSelector, SchedulerSelectorInspire, SchedulerSelectorEfficiency, InputParameters, AnyToString, WorkflowInputValue
from .civitai_nodes import CivitaiHashFetcher

NODE_CLASS_MAPPINGS: dict[str, Any] = {
    "Checkpoint Loader with Name (Image Saver)": CheckpointLoaderWithName,
    "UNet loader with Name (Image Saver)": UNETLoaderWithName,
    "Image Saver": ImageSaver,
    "Image Saver Simple": ImageSaverSimple,
    "Image Saver Metadata": ImageSaverMetadata,
    "Sampler Selector (Image Saver)": SamplerSelector,
    "Scheduler Selector (Image Saver)": SchedulerSelector,
    "Scheduler Selector (inspire) (Image Saver)": SchedulerSelectorInspire,
    "Scheduler Selector (Eff.) (Image Saver)": SchedulerSelectorEfficiency,
    "Input Parameters (Image Saver)": InputParameters,
    "Any to String (Image Saver)": AnyToString,
    "Workflow Input Value (Image Saver)": WorkflowInputValue,
    "Seed Generator (Image Saver)": SeedGenerator,
    "String Literal (Image Saver)": StringLiteral,
    "Width/Height Literal (Image Saver)": SizeLiteral,
    "Cfg Literal (Image Saver)": CfgLiteral,
    "Int Literal (Image Saver)": IntLiteral,
    "Float Literal (Image Saver)": FloatLiteral,
    "Conditioning Concat Optional (Image Saver)": ConditioningConcatOptional,
    "RandomShapeGenerator": RandomShapeGenerator,
    "Civitai Hash Fetcher (Image Saver)": CivitaiHashFetcher,
}

WEB_DIRECTORY = "js"

__all__ = ['NODE_CLASS_MAPPINGS', 'WEB_DIRECTORY']
custom_nodes/comfyui-image-saver/civitai_nodes.py (new file, 135 lines)
@@ -0,0 +1,135 @@
import requests


class CivitaiHashFetcher:
    """
    A ComfyUI custom node that fetches the AutoV3 hash of a model from Civitai
    based on the provided username and model name.
    """

    def __init__(self):
        self.last_username = None
        self.last_model_name = None
        self.last_version = None
        self.last_hash = None  # Store the last fetched hash

    RETURN_TYPES = ("STRING",)  # The node outputs a string (AutoV3 hash)
    FUNCTION = "get_autov3_hash"
    CATEGORY = "CivitaiAPI"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "username": ("STRING", {"default": "", "multiline": False}),
                "model_name": ("STRING", {"default": "", "multiline": False}),
            },
            "optional": {
                "version": ("STRING", {"default": "", "multiline": False, "tooltip": "Specify version keyword to fetch a particular model version (optional)"}),
            }
        }

    def get_autov3_hash(self, username, model_name, version=""):
        """
        Fetches the latest model version from Civitai and extracts its AutoV3 hash.
        Uses caching to avoid redundant API calls.
        """
        # Check if inputs are the same as last time
        if (self.last_username is not None and self.last_model_name is not None and self.last_version is not None and
                username == self.last_username and model_name == self.last_model_name and version == self.last_version):
            return (self.last_hash,)  # Return the cached hash as a tuple, matching RETURN_TYPES

        base_url = "https://civitai.com/api/v1/models"
        params = {
            "username": username,
            "query": model_name,
            "limit": 20,  # Fetch more results due to API ranking issues
            "nsfw": "true"  # Include NSFW models in search results
        }

        try:
            # Fetch models by username and model name
            response = requests.get(base_url, params=params, timeout=10)
            if response.status_code != 200:
                return (f"Error: API request failed with status {response.status_code}",)

            data = response.json()
            items = data.get("items", [])

            # If no results with query, try without query (fallback for API search issues)
            if not items and params.get("query"):
                print("ComfyUI-Image-Saver: No results with query, trying without query parameter...")
                params_no_query = {
                    "username": username,
                    "limit": 100,
                    "nsfw": "true"
                }
                response = requests.get(base_url, params=params_no_query, timeout=10)
                if response.status_code == 200:
                    data = response.json()
                    items = data.get("items", [])

            if not items:
                return (f"No models found for user '{username}' with name '{model_name}'",)

            # Find best matching model (prefer exact/partial matches)
            model_name_lower = model_name.lower()
            best_match = None

            # Try exact match first
            for item in items:
                if item.get("name", "").lower() == model_name_lower:
                    best_match = item
                    break

            # If no exact match, try partial match
            if not best_match:
                for item in items:
                    item_name_lower = item.get("name", "").lower()
                    if model_name_lower in item_name_lower or item_name_lower.startswith(model_name_lower):
                        best_match = item
                        break

            # Fall back to first result if no good match
            if not best_match:
                best_match = items[0]

            model = best_match
            model_versions = model.get("modelVersions", [])
            if not model_versions:
                return ("No model versions found.",)

            # If a version keyword is provided, search for a model version whose name contains it (case-insensitive).
            chosen_version = None
            if version:
                for v in model_versions:
                    if version.lower() in v.get("name", "").lower():
                        chosen_version = v
                        break
            # If no version is provided or no match was found, use the first (latest) version.
            if chosen_version is None:
                chosen_version = model_versions[0]
            version_id = chosen_version.get("id")

            # Fetch detailed version info
            version_url = f"https://civitai.com/api/v1/model-versions/{version_id}"
            version_response = requests.get(version_url, timeout=10)
            if version_response.status_code != 200:
                return (f"Error: Version API request failed with status {version_response.status_code}",)

            version_data = version_response.json()

            # Extract the AutoV3 hash from the model version files
            for file_info in version_data.get("files", []):
                autov3_hash = file_info.get("hashes", {}).get("AutoV3")
                if autov3_hash:
                    # Cache the result before returning
                    self.last_username = username
                    self.last_model_name = model_name
                    self.last_version = version  # Store version to track changes
                    self.last_hash = autov3_hash
                    return (autov3_hash,)  # Return the first found hash

            return ("No AutoV3 hash found in version files.",)

        except Exception as e:
            return (f"Error: {e}",)
custom_nodes/comfyui-image-saver/example-workflow.json (new file, 1 line)
File diff suppressed because one or more lines are too long
custom_nodes/comfyui-image-saver/js/lib/exif-reader.js (new file, 3 lines)
File diff suppressed because one or more lines are too long
75
custom_nodes/comfyui-image-saver/js/read_exif_workflow.js
Normal file
75
custom_nodes/comfyui-image-saver/js/read_exif_workflow.js
Normal file
@@ -0,0 +1,75 @@
import { app } from '../../scripts/app.js'
import { ExifReader } from './lib/exif-reader.js' // https://github.com/mattiasw/ExifReader v4.26.2

const SETTING_CATEGORY_NAME = "Image Saver";
const SETTING_SECTION_FILE_HANDLING = "File Handling";

app.registerExtension({
    name: "ComfyUI-Image-Saver",
    settings: [
        {
            id: "ImageSaver.HandleImageWorkflowDrop",
            name: "Use a custom file drop handler to load workflows from JPEG and WEBP files",
            type: "boolean",
            defaultValue: true,
            category: [SETTING_CATEGORY_NAME, SETTING_SECTION_FILE_HANDLING, "Custom File Drop Handler"],
            tooltip:
                "Use a custom file handler for dropped JPEG and WEBP files.\n" +
                "This is needed to load embedded workflows.\n" +
                "Only disable this if it interferes with another extension's file drop handler.",
        },
    ],
    async setup() {
        // Save the original function, then reassign to our own handler
        const handleFileOriginal = app.handleFile;
        app.handleFile = async function (file) {
            if (app.ui.settings.getSettingValue("ImageSaver.HandleImageWorkflowDrop") && (file.type === "image/jpeg" || file.type === "image/webp")) {
                try {
                    const exifTags = await ExifReader.load(file);

                    const workflowString = "workflow:";
                    const promptString = "prompt:";
                    let workflow;
                    let prompt;
                    // Search Exif tag data for workflow and prompt
                    Object.values(exifTags).some(value => {
                        try {
                            const description = `${value.description}`;
                            if (workflow === undefined && description.slice(0, workflowString.length).toLowerCase() === workflowString) {
                                workflow = JSON.parse(description.slice(workflowString.length));
                            } else if (prompt === undefined && description.slice(0, promptString.length).toLowerCase() === promptString) {
                                prompt = JSON.parse(description.slice(promptString.length));
                            }
                        } catch (error) {
                            if (!(error instanceof SyntaxError)) {
                                console.error(`ComfyUI-Image-Saver: Error reading Exif value: ${error}`);
                            }
                        }

                        return workflow !== undefined;
                    });

                    if (workflow !== undefined) {
                        // Remove the file extension
                        let filename = file.name;
                        let dot = filename.lastIndexOf('.');
                        if (dot !== -1) {
                            filename = filename.slice(0, dot);
                        }

                        app.loadGraphData(workflow, true, true, filename);
                        return;
                    } else if (prompt !== undefined) {
                        app.loadApiJson(prompt);
                        return;
                    }
                } catch (error) {
                    console.error(`ComfyUI-Image-Saver: Error parsing Exif: ${error}`);
                }
            }

            // Fall back to the original function
            handleFileOriginal.call(this, file);
        }
    },
})
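The Exif scan above looks for description strings whose payload is JSON prefixed with `workflow:` or `prompt:`. The same prefix-matching logic can be sketched standalone in Python; `find_embedded_json` is our illustrative name, not part of the extension:

```python
import json

def find_embedded_json(descriptions):
    """Scan Exif-style description strings for 'workflow:'/'prompt:' JSON payloads."""
    workflow = prompt = None
    for description in descriptions:
        lowered = description.lower()
        try:
            if workflow is None and lowered.startswith("workflow:"):
                workflow = json.loads(description[len("workflow:"):])
            elif prompt is None and lowered.startswith("prompt:"):
                prompt = json.loads(description[len("prompt:"):])
        except json.JSONDecodeError:
            continue  # Malformed payloads are skipped, mirroring the extension's SyntaxError check
        if workflow is not None:
            break  # A workflow takes priority; stop scanning early
    return workflow, prompt
```

As in the extension, a found workflow wins over a prompt-only payload, and tags that merely resemble the prefix but hold invalid JSON are ignored.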
584
custom_nodes/comfyui-image-saver/nodes.py
Normal file
@@ -0,0 +1,584 @@
import os
from datetime import datetime
from dataclasses import dataclass
from pathlib import Path
from typing import Any
import json
import numpy as np
import re

from PIL import Image
import torch

import folder_paths
from nodes import MAX_RESOLUTION

from .saver.saver import save_image
from .utils import sanitize_filename, get_sha256, full_checkpoint_path_for
from .utils_civitai import get_civitai_sampler_name, get_civitai_metadata, MAX_HASH_LENGTH
from .prompt_metadata_extractor import PromptMetadataExtractor

def parse_checkpoint_name(ckpt_name: str) -> str:
    return os.path.basename(ckpt_name)

def parse_checkpoint_name_without_extension(ckpt_name: str) -> str:
    filename = parse_checkpoint_name(ckpt_name)
    name_without_ext, ext = os.path.splitext(filename)
    supported_extensions = folder_paths.supported_pt_extensions | {".gguf"}

    # Only remove the extension if it's a known model file extension
    if ext.lower() in supported_extensions:
        return name_without_ext
    else:
        return filename  # Keep the full name if the extension isn't recognized

def get_timestamp(time_format: str) -> str:
    now = datetime.now()
    try:
        timestamp = now.strftime(time_format)
    except ValueError:
        timestamp = now.strftime("%Y-%m-%d-%H%M%S")

    return timestamp

def apply_custom_time_format(filename: str) -> str:
    """
    Replace %time_format<strftime_format> patterns with the formatted datetime.
    Example: %time_format<%Y-%m-%d> becomes 2026-01-17
    """
    now = datetime.now()
    # Pattern to match %time_format<XXX> where XXX is any strftime format string
    pattern = r'%time_format<([^>]*)>'
    def replace_format(match):
        format_str = match.group(1)
        try:
            return now.strftime(format_str)
        except ValueError:
            # If the format is invalid, keep the original text
            return match.group(0)

    return re.sub(pattern, replace_format, filename)
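For illustration, the `%time_format<...>` expansion above can be exercised in isolation. This standalone sketch takes the datetime as a parameter instead of calling `datetime.now()`, so the output is deterministic; `expand_time_format` is our hypothetical name:

```python
import re
from datetime import datetime

def expand_time_format(filename: str, now: datetime) -> str:
    """Expand %time_format<...> placeholders, leaving invalid formats untouched."""
    def repl(match: re.Match) -> str:
        try:
            return now.strftime(match.group(1))
        except ValueError:
            return match.group(0)  # Invalid strftime format: keep the original text
    return re.sub(r'%time_format<([^>]*)>', repl, filename)

print(expand_time_format("img_%time_format<%Y-%m-%d>_final", datetime(2026, 1, 17)))
# img_2026-01-17_final
```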
def save_json(image_info: dict[str, Any] | None, filename: str) -> None:
    try:
        workflow = (image_info or {}).get('workflow')
        if workflow is None:
            print('No image info found, skipping saving of JSON')
            return
        with open(f'{filename}.json', 'w') as workflow_file:
            json.dump(workflow, workflow_file)
        print(f'Saved workflow to {filename}.json')
    except Exception as e:
        print(f'Failed to save workflow as JSON due to: {e}, proceeding with the remainder of saving execution')
def make_pathname(filename: str, width: int, height: int, seed: int, modelname: str, counter: int, time_format: str, sampler_name: str, steps: int, cfg: float, scheduler_name: str, denoise: float, clip_skip: int, custom: str) -> str:
    # Process custom %time_format<...> patterns first
    filename = apply_custom_time_format(filename)
    filename = filename.replace("%date", get_timestamp("%Y-%m-%d"))
    filename = filename.replace("%time", get_timestamp(time_format))
    filename = filename.replace("%model", parse_checkpoint_name(modelname))
    filename = filename.replace("%width", str(width))
    filename = filename.replace("%height", str(height))
    filename = filename.replace("%seed", str(seed))
    filename = filename.replace("%counter", str(counter))
    filename = filename.replace("%sampler_name", sampler_name)
    filename = filename.replace("%steps", str(steps))
    filename = filename.replace("%cfg", str(cfg))
    filename = filename.replace("%scheduler_name", scheduler_name)
    filename = filename.replace("%basemodelname", parse_checkpoint_name_without_extension(modelname))
    filename = filename.replace("%denoise", str(denoise))
    filename = filename.replace("%clip_skip", str(clip_skip))
    filename = filename.replace("%custom", custom)

    directory, basename = os.path.split(filename)
    sanitized_basename = sanitize_filename(basename)
    return os.path.join(directory, sanitized_basename)

def make_filename(filename: str, width: int, height: int, seed: int, modelname: str, counter: int, time_format: str, sampler_name: str, steps: int, cfg: float, scheduler_name: str, denoise: float, clip_skip: int, custom: str) -> str:
    filename = make_pathname(filename, width, height, seed, modelname, counter, time_format, sampler_name, steps, cfg, scheduler_name, denoise, clip_skip, custom)
    return get_timestamp(time_format) if filename == "" else filename
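The variable substitution in make_pathname is plain string replacement, applied token by token. A minimal standalone illustration with a hypothetical `expand_tokens` helper covering just three of the variables:

```python
def expand_tokens(pattern: str, tokens: dict[str, str]) -> str:
    """Replace each %token in the pattern with its string value, in order."""
    for token, value in tokens.items():
        pattern = pattern.replace(token, value)
    return pattern

print(expand_tokens("%time_%basemodelname_%seed",
                    {"%time": "2026-01-17-120000", "%basemodelname": "sdxl", "%seed": "42"}))
# 2026-01-17-120000_sdxl_42
```

Note that because this is sequential literal replacement, replacement order matters when one token is a prefix of another (the real node replaces `%time_format<...>` before `%time` for exactly this reason).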
@dataclass
class Metadata:
    modelname: str
    positive: str
    negative: str
    width: int
    height: int
    seed: int
    steps: int
    cfg: float
    sampler_name: str
    scheduler_name: str
    denoise: float
    clip_skip: int
    custom: str
    additional_hashes: str
    ckpt_path: str
    a111_params: str
    final_hashes: str
class ImageSaverMetadata:
    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "optional": {
                "modelname": ("STRING", {"default": '', "multiline": False, "tooltip": "model name (can be multiple, separated by commas)"}),
                "positive": ("STRING", {"default": 'unknown', "multiline": True, "tooltip": "positive prompt"}),
                "negative": ("STRING", {"default": 'unknown', "multiline": True, "tooltip": "negative prompt"}),
                "width": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, "tooltip": "image width"}),
                "height": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, "tooltip": "image height"}),
                "seed_value": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "tooltip": "seed"}),
                "steps": ("INT", {"default": 20, "min": 1, "max": 10000, "tooltip": "number of steps"}),
                "cfg": ("FLOAT", {"default": 7.0, "min": 0.0, "max": 100.0, "tooltip": "CFG value"}),
                "sampler_name": ("STRING", {"default": '', "multiline": False, "tooltip": "sampler name (as string)"}),
                "scheduler_name": ("STRING", {"default": 'normal', "multiline": False, "tooltip": "scheduler name (as string)"}),
                "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "tooltip": "denoise value"}),
                "clip_skip": ("INT", {"default": 0, "min": -24, "max": 24, "tooltip": "skip last CLIP layers (positive or negative value, 0 for no skip)"}),
                "additional_hashes": ("STRING", {"default": "", "multiline": False, "tooltip": "hashes separated by commas, optionally with names. 'Name:HASH' (e.g., 'MyLoRA:FF735FF83F98')\nWith download_civitai_data set to true, weights can be added as well. (e.g., 'HASH:Weight', 'Name:HASH:Weight')"}),
                "download_civitai_data": ("BOOLEAN", {"default": True, "tooltip": "Download and cache data from civitai.com to save correct metadata. Allows LoRA weights to be saved to the metadata."}),
                "easy_remix": ("BOOLEAN", {"default": True, "tooltip": "Strip LoRAs and simplify 'embedding:path' from the prompt to make the Remix option on civitai.com more seamless."}),
                "custom": ("STRING", {"default": "", "multiline": False, "tooltip": "custom string to add to the metadata, inserted into the a111 string between clip skip and model hash"}),
            },
        }

    RETURN_TYPES = ("METADATA", "STRING", "STRING")
    RETURN_NAMES = ("metadata", "hashes", "a1111_params")
    OUTPUT_TOOLTIPS = ("metadata for Image Saver Simple", "Comma-separated list of the hashes to chain with other Image Saver additional_hashes", "Written parameters to the image metadata")
    FUNCTION = "get_metadata"
    CATEGORY = "ImageSaver"
    DESCRIPTION = "Prepare metadata for Image Saver Simple"
    def get_metadata(
        self,
        modelname: str = "",
        positive: str = "unknown",
        negative: str = "unknown",
        width: int = 512,
        height: int = 512,
        seed_value: int = 0,
        steps: int = 20,
        cfg: float = 7.0,
        sampler_name: str = "",
        scheduler_name: str = "normal",
        denoise: float = 1.0,
        clip_skip: int = 0,
        custom: str = "",
        additional_hashes: str = "",
        download_civitai_data: bool = True,
        easy_remix: bool = True,
    ) -> tuple[Metadata, str, str]:
        metadata = ImageSaverMetadata.make_metadata(modelname, positive, negative, width, height, seed_value, steps, cfg, sampler_name, scheduler_name, denoise, clip_skip, custom, additional_hashes, download_civitai_data, easy_remix)
        return (metadata, metadata.final_hashes, metadata.a111_params)
    @staticmethod
    def make_metadata(modelname: str, positive: str, negative: str, width: int, height: int, seed_value: int, steps: int, cfg: float, sampler_name: str, scheduler_name: str, denoise: float, clip_skip: int, custom: str, additional_hashes: str, download_civitai_data: bool, easy_remix: bool) -> Metadata:
        modelname, additional_hashes = ImageSaver.get_multiple_models(modelname, additional_hashes)

        ckpt_path = full_checkpoint_path_for(modelname)
        if ckpt_path:
            modelhash = get_sha256(ckpt_path)[:10]
        else:
            modelhash = ""

        metadata_extractor = PromptMetadataExtractor([positive, negative])
        embeddings = metadata_extractor.get_embeddings()
        loras = metadata_extractor.get_loras()
        civitai_sampler_name = get_civitai_sampler_name(sampler_name.replace('_gpu', ''), scheduler_name)
        basemodelname = parse_checkpoint_name_without_extension(modelname)

        # Get existing hashes from the model, loras, and embeddings
        existing_hashes = {modelhash.lower()} | {t[2].lower() for t in loras.values()} | {t[2].lower() for t in embeddings.values()}
        # Parse manual hashes
        manual_entries = ImageSaver.parse_manual_hashes(additional_hashes, existing_hashes, download_civitai_data)
        # Get Civitai metadata
        civitai_resources, hashes, add_model_hash = get_civitai_metadata(modelname, ckpt_path, modelhash, loras, embeddings, manual_entries, download_civitai_data)

        if easy_remix:
            positive = ImageSaver.clean_prompt(positive, metadata_extractor)
            negative = ImageSaver.clean_prompt(negative, metadata_extractor)

        positive_a111_params = positive.strip()
        negative_a111_params = f"\nNegative prompt: {negative.strip()}"
        clip_skip_str = f", Clip skip: {abs(clip_skip)}" if clip_skip != 0 else ""
        custom_str = f", {custom}" if custom else ""
        model_hash_str = f", Model hash: {add_model_hash}" if add_model_hash else ""
        hashes_str = f", Hashes: {json.dumps(hashes, separators=(',', ':'))}" if hashes else ""

        a111_params = (
            f"{positive_a111_params}{negative_a111_params}\n"
            f"Steps: {steps}, Sampler: {civitai_sampler_name}, CFG scale: {cfg}, Seed: {seed_value}, "
            f"Size: {width}x{height}{clip_skip_str}{custom_str}{model_hash_str}, Model: {basemodelname}{hashes_str}, Version: ComfyUI"
        )

        # Add the Civitai resource listing
        if download_civitai_data and civitai_resources:
            a111_params += f", Civitai resources: {json.dumps(civitai_resources, separators=(',', ':'))}"

        # Combine all resources (model, loras, embeddings, manual entries) for the final hash string
        all_resources = {modelname: (ckpt_path, None, modelhash)} | loras | embeddings | manual_entries

        hash_parts = []
        for name, (_, weight, hash_value) in all_resources.items():
            # Skip entries without a valid hash
            if not hash_value:
                continue

            # Format: "name:hash" or "name:hash:weight" depending on download_civitai_data
            if name:
                # Extract a clean name (only remove actual model file extensions, preserve dots in model names)
                filename = name.split(':')[-1]
                name_without_ext, ext = os.path.splitext(filename)
                supported_extensions = folder_paths.supported_pt_extensions | {".gguf"}

                # Only remove the extension if it's a known model file extension
                if ext.lower() in supported_extensions:
                    clean_name = name_without_ext
                else:
                    clean_name = filename  # Keep the full name if the extension isn't recognized

                name_part = f"{clean_name}:"
            else:
                name_part = ""

            weight_part = f":{weight}" if weight is not None and download_civitai_data else ""
            hash_parts.append(f"{name_part}{hash_value}{weight_part}")

        final_hashes = ",".join(hash_parts)

        metadata = Metadata(modelname, positive, negative, width, height, seed_value, steps, cfg, sampler_name, scheduler_name, denoise, clip_skip, custom, additional_hashes, ckpt_path, a111_params, final_hashes)
        return metadata
class ImageSaverSimple:
    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "images": ("IMAGE", {"tooltip": "image(s) to save"}),
                "filename": ("STRING", {"default": '%time_%basemodelname_%seed', "multiline": False, "tooltip": "filename (available variables: %date, %time, %time_format<format>, %model, %width, %height, %seed, %counter, %sampler_name, %steps, %cfg, %scheduler_name, %basemodelname, %denoise, %clip_skip)"}),
                "path": ("STRING", {"default": '', "multiline": False, "tooltip": "path to save the images (under Comfy's save directory)"}),
                "extension": (['png', 'jpeg', 'jpg', 'webp'], {"tooltip": "file extension/type to save image as"}),
                "lossless_webp": ("BOOLEAN", {"default": True, "tooltip": "if True, saved WEBP files will be lossless"}),
                "quality_jpeg_or_webp": ("INT", {"default": 100, "min": 1, "max": 100, "tooltip": "quality setting of JPEG/WEBP"}),
                "optimize_png": ("BOOLEAN", {"default": False, "tooltip": "if True, saved PNG files will be optimized (can reduce file size but is slower)"}),
                "embed_workflow": ("BOOLEAN", {"default": True, "tooltip": "if True, embeds the workflow in the saved image files.\nStable for PNG, experimental for WEBP.\nJPEG experimental and only if metadata size is below 65535 bytes"}),
                "save_workflow_as_json": ("BOOLEAN", {"default": False, "tooltip": "if True, also saves the workflow as a separate JSON file"}),
            },
            "optional": {
                "metadata": ("METADATA", {"default": None, "tooltip": "metadata to embed in the image"}),
                "counter": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "tooltip": "counter"}),
                "time_format": ("STRING", {"default": "%Y-%m-%d-%H%M%S", "multiline": False, "tooltip": "timestamp format"}),
                "show_preview": ("BOOLEAN", {"default": True, "tooltip": "if True, displays saved images in the UI preview"}),
            },
            "hidden": {
                "prompt": "PROMPT",
                "extra_pnginfo": "EXTRA_PNGINFO",
            },
        }

    RETURN_TYPES = ("STRING", "STRING")
    RETURN_NAMES = ("hashes", "a1111_params")
    OUTPUT_TOOLTIPS = ("Comma-separated list of the hashes to chain with other Image Saver additional_hashes", "Written parameters to the image metadata")
    FUNCTION = "save_images"

    OUTPUT_NODE = True

    CATEGORY = "ImageSaver"
    DESCRIPTION = "Save images with civitai-compatible generation metadata"
    def save_images(self,
                    images: list[torch.Tensor],
                    filename: str,
                    path: str,
                    extension: str,
                    lossless_webp: bool,
                    quality_jpeg_or_webp: int,
                    optimize_png: bool,
                    embed_workflow: bool = True,
                    save_workflow_as_json: bool = False,
                    show_preview: bool = True,
                    metadata: Metadata | None = None,
                    counter: int = 0,
                    time_format: str = "%Y-%m-%d-%H%M%S",
                    prompt: dict[str, Any] | None = None,
                    extra_pnginfo: dict[str, Any] | None = None,
                    ) -> dict[str, Any]:
        if metadata is None:
            metadata = Metadata('', '', '', 512, 512, 0, 20, 7.0, '', 'normal', 1.0, 0, '', '', '', '', '')

        path = make_pathname(path, metadata.width, metadata.height, metadata.seed, metadata.modelname, counter, time_format, metadata.sampler_name, metadata.steps, metadata.cfg, metadata.scheduler_name, metadata.denoise, metadata.clip_skip, metadata.custom)

        filenames = ImageSaver.save_images(images, filename, extension, path, quality_jpeg_or_webp, lossless_webp, optimize_png, prompt, extra_pnginfo, save_workflow_as_json, embed_workflow, counter, time_format, metadata)

        subfolder = os.path.normpath(path)

        result: dict[str, Any] = {
            "result": (metadata.final_hashes, metadata.a111_params),
        }

        if show_preview:
            result["ui"] = {"images": [{"filename": filename, "subfolder": subfolder if subfolder != '.' else '', "type": 'output'} for filename in filenames]}

        return result
class ImageSaver:
    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "images": ("IMAGE", {"tooltip": "image(s) to save"}),
                "filename": ("STRING", {"default": '%time_%basemodelname_%seed', "multiline": False, "tooltip": "filename (available variables: %date, %time, %time_format<format>, %model, %width, %height, %seed, %counter, %sampler_name, %steps, %cfg, %scheduler_name, %basemodelname, %denoise, %clip_skip)"}),
                "path": ("STRING", {"default": '', "multiline": False, "tooltip": "path to save the images (under Comfy's save directory)"}),
                "extension": (['png', 'jpeg', 'jpg', 'webp'], {"tooltip": "file extension/type to save image as"}),
            },
            "optional": {
                "steps": ("INT", {"default": 20, "min": 1, "max": 10000, "tooltip": "number of steps"}),
                "cfg": ("FLOAT", {"default": 7.0, "min": 0.0, "max": 100.0, "tooltip": "CFG value"}),
                "modelname": ("STRING", {"default": '', "multiline": False, "tooltip": "model name (can be multiple, separated by commas)"}),
                "sampler_name": ("STRING", {"default": '', "multiline": False, "tooltip": "sampler name (as string)"}),
                "scheduler_name": ("STRING", {"default": 'normal', "multiline": False, "tooltip": "scheduler name (as string)"}),
                "positive": ("STRING", {"default": 'unknown', "multiline": True, "tooltip": "positive prompt"}),
                "negative": ("STRING", {"default": 'unknown', "multiline": True, "tooltip": "negative prompt"}),
                "seed_value": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "tooltip": "seed"}),
                "width": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, "tooltip": "image width"}),
                "height": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, "tooltip": "image height"}),
                "lossless_webp": ("BOOLEAN", {"default": True, "tooltip": "if True, saved WEBP files will be lossless"}),
                "quality_jpeg_or_webp": ("INT", {"default": 100, "min": 1, "max": 100, "tooltip": "quality setting of JPEG/WEBP"}),
                "optimize_png": ("BOOLEAN", {"default": False, "tooltip": "if True, saved PNG files will be optimized (can reduce file size but is slower)"}),
                "counter": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "tooltip": "counter"}),
                "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "tooltip": "denoise value"}),
                "clip_skip": ("INT", {"default": 0, "min": -24, "max": 24, "tooltip": "skip last CLIP layers (positive or negative value, 0 for no skip)"}),
                "time_format": ("STRING", {"default": "%Y-%m-%d-%H%M%S", "multiline": False, "tooltip": "timestamp format"}),
                "save_workflow_as_json": ("BOOLEAN", {"default": False, "tooltip": "if True, also saves the workflow as a separate JSON file"}),
                "embed_workflow": ("BOOLEAN", {"default": True, "tooltip": "if True, embeds the workflow in the saved image files.\nStable for PNG, experimental for WEBP.\nJPEG experimental and only if metadata size is below 65535 bytes"}),
                "additional_hashes": ("STRING", {"default": "", "multiline": False, "tooltip": "hashes separated by commas, optionally with names. 'Name:HASH' (e.g., 'MyLoRA:FF735FF83F98')\nWith download_civitai_data set to true, weights can be added as well. (e.g., 'HASH:Weight', 'Name:HASH:Weight')"}),
                "download_civitai_data": ("BOOLEAN", {"default": True, "tooltip": "Download and cache data from civitai.com to save correct metadata. Allows LoRA weights to be saved to the metadata."}),
                "easy_remix": ("BOOLEAN", {"default": True, "tooltip": "Strip LoRAs and simplify 'embedding:path' from the prompt to make the Remix option on civitai.com more seamless."}),
                "show_preview": ("BOOLEAN", {"default": True, "tooltip": "if True, displays saved images in the UI preview"}),
                "custom": ("STRING", {"default": "", "multiline": False, "tooltip": "custom string to add to the metadata, inserted into the a111 string between clip skip and model hash"}),
            },
            "hidden": {
                "prompt": "PROMPT",
                "extra_pnginfo": "EXTRA_PNGINFO",
            },
        }

    RETURN_TYPES = ("STRING", "STRING")
    RETURN_NAMES = ("hashes", "a1111_params")
    OUTPUT_TOOLTIPS = ("Comma-separated list of the hashes to chain with other Image Saver additional_hashes", "Written parameters to the image metadata")
    FUNCTION = "save_files"

    OUTPUT_NODE = True

    CATEGORY = "ImageSaver"
    DESCRIPTION = "Save images with civitai-compatible generation metadata"
    def save_files(
        self,
        images: list[torch.Tensor],
        filename: str,
        path: str,
        extension: str,
        steps: int = 20,
        cfg: float = 7.0,
        modelname: str = "",
        sampler_name: str = "",
        scheduler_name: str = "normal",
        positive: str = "unknown",
        negative: str = "unknown",
        seed_value: int = 0,
        width: int = 512,
        height: int = 512,
        lossless_webp: bool = True,
        quality_jpeg_or_webp: int = 100,
        optimize_png: bool = False,
        counter: int = 0,
        denoise: float = 1.0,
        clip_skip: int = 0,
        time_format: str = "%Y-%m-%d-%H%M%S",
        save_workflow_as_json: bool = False,
        embed_workflow: bool = True,
        additional_hashes: str = "",
        download_civitai_data: bool = True,
        easy_remix: bool = True,
        show_preview: bool = True,
        custom: str = "",
        prompt: dict[str, Any] | None = None,
        extra_pnginfo: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        metadata = ImageSaverMetadata.make_metadata(modelname, positive, negative, width, height, seed_value, steps, cfg, sampler_name, scheduler_name, denoise, clip_skip, custom, additional_hashes, download_civitai_data, easy_remix)

        path = make_pathname(path, metadata.width, metadata.height, metadata.seed, metadata.modelname, counter, time_format, metadata.sampler_name, metadata.steps, metadata.cfg, metadata.scheduler_name, metadata.denoise, metadata.clip_skip, metadata.custom)

        filenames = ImageSaver.save_images(images, filename, extension, path, quality_jpeg_or_webp, lossless_webp, optimize_png, prompt, extra_pnginfo, save_workflow_as_json, embed_workflow, counter, time_format, metadata)

        subfolder = os.path.normpath(path)

        result: dict[str, Any] = {
            "result": (metadata.final_hashes, metadata.a111_params),
        }

        if show_preview:
            result["ui"] = {"images": [{"filename": filename, "subfolder": subfolder if subfolder != '.' else '', "type": 'output'} for filename in filenames]}

        return result
    @staticmethod
    def save_images(
        images: list[torch.Tensor],
        filename_pattern: str,
        extension: str,
        path: str,
        quality_jpeg_or_webp: int,
        lossless_webp: bool,
        optimize_png: bool,
        prompt: dict[str, Any] | None,
        extra_pnginfo: dict[str, Any] | None,
        save_workflow_as_json: bool,
        embed_workflow: bool,
        counter: int,
        time_format: str,
        metadata: Metadata
    ) -> list[str]:
        filename_prefix = make_filename(filename_pattern, metadata.width, metadata.height, metadata.seed, metadata.modelname, counter, time_format, metadata.sampler_name, metadata.steps, metadata.cfg, metadata.scheduler_name, metadata.denoise, metadata.clip_skip, metadata.custom)

        output_path = os.path.join(folder_paths.output_directory, path)

        if output_path.strip() != '':
            if not os.path.exists(output_path.strip()):
                print(f"The specified path `{output_path.strip()}` doesn't exist! Creating the directory.")
                os.makedirs(output_path, exist_ok=True)

        result_paths: list[str] = []
        num_images = len(images)
        for idx, image in enumerate(images):
            i = 255. * image.cpu().numpy()
            img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))

            current_filename_prefix = ImageSaver.get_unique_filename(output_path, filename_prefix, extension, batch_size=num_images, batch_index=idx)
            final_filename = f"{current_filename_prefix}.{extension}"
            filepath = os.path.join(output_path, final_filename)

            save_image(img, filepath, extension, quality_jpeg_or_webp, lossless_webp, optimize_png, metadata.a111_params, prompt, extra_pnginfo, embed_workflow)

            if save_workflow_as_json:
                save_json(extra_pnginfo, os.path.join(output_path, current_filename_prefix))

            result_paths.append(final_filename)
        return result_paths
    # Match 'anything' or 'anything:anything' with trimmed white space
    re_manual_hash = re.compile(r'^\s*([^:]+?)(?:\s*:\s*([^\s:][^:]*?))?\s*$')
    # Match 'anything', 'anything:anything' or 'anything:anything:number' with trimmed white space
    re_manual_hash_weights = re.compile(r'^\s*([^:]+?)(?:\s*:\s*([^\s:][^:]*?))?(?:\s*:\s*([-+]?(?:\d+(?:\.\d*)?|\.\d+)))?\s*$')

    @staticmethod
    def get_multiple_models(modelname: str, additional_hashes: str) -> tuple[str, str]:
        model_names = [m.strip() for m in modelname.split(',')]
        modelname = model_names[0]  # Use the first model as the primary one

        # Process additional model names and add them to additional_hashes
        for additional_model in model_names[1:]:
            additional_ckpt_path = full_checkpoint_path_for(additional_model)
            if additional_ckpt_path:
                additional_modelhash = get_sha256(additional_ckpt_path)[:10]
                # Add to additional_hashes in "name:HASH" format
                if additional_hashes:
                    additional_hashes += ","
                additional_hashes += f"{additional_model}:{additional_modelhash}"
        return modelname, additional_hashes
    @staticmethod
    def parse_manual_hashes(additional_hashes: str, existing_hashes: set[str], download_civitai_data: bool) -> dict[str, tuple[str | None, float | None, str]]:
        """Process the additional_hashes input (a string) by normalizing, removing extra spaces/newlines, and splitting by comma"""
        manual_entries: dict[str, tuple[str | None, float | None, str]] = {}
        unnamed_count = 0

        additional_hash_split = additional_hashes.replace("\n", ",").split(",") if additional_hashes else []
        for entry in additional_hash_split:
            match = (ImageSaver.re_manual_hash_weights if download_civitai_data else ImageSaver.re_manual_hash).search(entry)
            if match is None:
                print(f"ComfyUI-Image-Saver: Invalid additional hash string: '{entry}'")
                continue

            groups = tuple(group for group in match.groups() if group)

            # Read the weight and remove it from groups, if needed
            weight = None
            if download_civitai_data and len(groups) > 1:
                try:
                    weight = float(groups[-1])
                    groups = groups[:-1]
                except (ValueError, TypeError):
                    pass

            # Read the hash, optionally preceded by a name
            name, hash = groups if len(groups) > 1 else (None, groups[0])

            if len(hash) > MAX_HASH_LENGTH:
                print(f"ComfyUI-Image-Saver: Skipping hash. Length exceeds maximum of {MAX_HASH_LENGTH} characters: {hash}")
                continue

            if any(hash.lower() == existing_hash.lower() for _, _, existing_hash in manual_entries.values()):
                print(f"ComfyUI-Image-Saver: Skipping duplicate hash: {hash}")
                continue  # Skip duplicates

            if hash.lower() in existing_hashes:
                print(f"ComfyUI-Image-Saver: Skipping manual hash already present in resources: {hash}")
                continue

            if name is None:
                unnamed_count += 1
                name = f"manual{unnamed_count}"
            elif name in manual_entries:
                print(f"ComfyUI-Image-Saver: Duplicate manual hash name '{name}' is being overwritten.")

            manual_entries[name] = (None, weight, hash)

            if len(manual_entries) > 29:
                print("ComfyUI-Image-Saver: Reached the maximum limit of 30 manual hashes. Skipping the rest.")
                break

        return manual_entries
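The weights regex above accepts 'hash', 'name:hash', and 'name:hash:weight', with surrounding whitespace trimmed. A quick standalone check of the same pattern:

```python
import re

# Same pattern as ImageSaver.re_manual_hash_weights above
re_manual_hash_weights = re.compile(r'^\s*([^:]+?)(?:\s*:\s*([^\s:][^:]*?))?(?:\s*:\s*([-+]?(?:\d+(?:\.\d*)?|\.\d+)))?\s*$')

for entry in ("FF735FF83F98", "MyLoRA:FF735FF83F98", " MyLoRA : FF735FF83F98 : 0.8 "):
    match = re_manual_hash_weights.search(entry)
    print(match.groups())
# ('FF735FF83F98', None, None)
# ('MyLoRA', 'FF735FF83F98', None)
# ('MyLoRA', 'FF735FF83F98', '0.8')
```

Unmatched optional groups come back as None, which is why parse_manual_hashes filters `match.groups()` down to the truthy entries before deciding which piece is the name, hash, or weight.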
    @staticmethod
    def clean_prompt(prompt: str, metadata_extractor: PromptMetadataExtractor) -> str:
        """Clean prompts for easier remixing by removing LoRAs and simplifying embeddings."""
        # Strip loras
        prompt = re.sub(metadata_extractor.LORA, "", prompt)
        # Shorten 'embedding:path/to/my_embedding' -> 'my_embedding'
        # Note: the embedding name may be inaccurate if the file has been renamed from the default
        prompt = re.sub(metadata_extractor.EMBEDDING, lambda match: Path(match.group(1)).stem, prompt)
        # Remove prompt control edits, e.g. 'STYLE(A1111, mean)', 'SHIFT(1)', etc.
        prompt = re.sub(r'\b[A-Z]+\([^)]*\)', "", prompt)
        return prompt
    @staticmethod
    def get_unique_filename(output_path: str, filename_prefix: str, extension: str, batch_size: int = 1, batch_index: int = 0) -> str:
        existing_files = [f for f in os.listdir(output_path) if f.startswith(filename_prefix) and f.endswith(extension)]

        # For single images with no existing files, return the plain filename
        if batch_size == 1 and not existing_files:
            return filename_prefix

        # For batches, or when files already exist, always use a numbered suffix
        suffixes: list[int] = []
        for f in existing_files:
            name, _ = os.path.splitext(f)
            parts = name.split('_')
            if parts[-1].isdigit():
                suffixes.append(int(parts[-1]))

        if suffixes:
            # Start numbering after the highest existing suffix
            base_suffix = max(suffixes) + 1
        else:
            # No numbered files exist yet; start at 1
            # (an existing plain file is effectively number 0)
            base_suffix = 1

        return f"{filename_prefix}_{base_suffix + batch_index:02d}"
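The suffix rule above (continue after the highest existing `_NN` suffix, else start at 1) can be exercised in isolation. This is a hypothetical standalone helper `next_suffix`, not part of the node, that mirrors that rule on a plain list of filenames:

```python
import os

def next_suffix(existing_files: list[str]) -> int:
    """Hypothetical mirror of the node's rule: continue after the highest
    numeric suffix, or start at 1 when no numbered file exists yet."""
    suffixes = []
    for f in existing_files:
        name, _ = os.path.splitext(f)
        parts = name.split('_')
        if parts[-1].isdigit():
            suffixes.append(int(parts[-1]))
    return max(suffixes) + 1 if suffixes else 1

print(next_suffix([]))                            # 1 (no files)
print(next_suffix(["img.png"]))                   # 1 (plain file counts as 0)
print(next_suffix(["img_01.png", "img_07.png"]))  # 8 (continue after 7)
```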
custom_nodes/comfyui-image-saver/nodes_literals.py (new file, 358 lines)
@@ -0,0 +1,358 @@
from sys import float_info
from typing import Any
from nodes import MAX_RESOLUTION
import torch

import numpy as np
from PIL import Image, ImageDraw
import random
import math

class SeedGenerator:
    RETURN_TYPES = ("INT",)
    OUTPUT_TOOLTIPS = ("seed (INT)",)
    FUNCTION = "get_seed"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Provides seed as integer"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "control_after_generate": True, "tooltip": "The random seed used for creating the noise."}),
                "increment": ("INT", {"default": 0, "min": -0xffffffffffffffff, "max": 0xffffffffffffffff, "tooltip": "number to add to the final seed value"}),
            }
        }

    def get_seed(self, seed: int, increment: int) -> tuple[int,]:
        return (seed + increment,)

class StringLiteral:
    RETURN_TYPES = ("STRING",)
    OUTPUT_TOOLTIPS = ("string (STRING)",)
    FUNCTION = "get_string"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Provides a string"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "string": ("STRING", {"default": "", "multiline": True, "tooltip": "string"}),
            }
        }

    def get_string(self, string: str) -> tuple[str,]:
        return (string,)

class SizeLiteral:
    RETURN_TYPES = ("INT",)
    RETURN_NAMES = ("size",)
    OUTPUT_TOOLTIPS = ("size (INT)",)
    FUNCTION = "get_int"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = f"Provides integer number between 0 and {MAX_RESOLUTION} (step=8)"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "size": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 8, "tooltip": "size as integer (in steps of 8)"}),
            }
        }

    def get_int(self, size: int) -> tuple[int,]:
        return (size,)

class IntLiteral:
    RETURN_TYPES = ("INT",)
    OUTPUT_TOOLTIPS = ("int (INT)",)
    FUNCTION = "get_int"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Provides integer number between 0 and 1000000"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "int": ("INT", {"default": 0, "min": 0, "max": 1000000, "tooltip": "integer number"}),
            }
        }

    def get_int(self, int: int) -> tuple[int,]:
        # parameter name matches the widget key, so it intentionally shadows the builtin
        return (int,)

class FloatLiteral:
    RETURN_TYPES = ("FLOAT",)
    OUTPUT_TOOLTIPS = ("float (FLOAT)",)
    FUNCTION = "get_float"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = f"Provides a floating point number between {float_info.min} and {float_info.max} (step=0.01)"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "float": ("FLOAT", {"default": 1.0, "min": float_info.min, "max": float_info.max, "step": 0.01, "tooltip": "floating point number"}),
            }
        }

    def get_float(self, float: float) -> tuple[float,]:
        # parameter name matches the widget key, so it intentionally shadows the builtin
        return (float,)

class CfgLiteral:
    RETURN_TYPES = ("FLOAT",)
    RETURN_NAMES = ("value",)
    OUTPUT_TOOLTIPS = ("cfg (FLOAT)",)
    FUNCTION = "get_float"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Provides CFG value between 0.0 and 100.0"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "cfg": ("FLOAT", {"default": 7.0, "min": 0.0, "max": 100.0, "tooltip": "CFG as a floating point number"}),
            }
        }

    def get_float(self, cfg: float) -> tuple[float,]:
        return (cfg,)

class ConditioningConcatOptional:
    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "conditioning_to": ("CONDITIONING", {"tooltip": "base conditioning to concat to (or pass through, if second is empty)"}),
            },
            "optional": {
                "conditioning_from": ("CONDITIONING", {"tooltip": "conditioning to concat to conditioning_to; if empty, conditioning_to is passed through unchanged"}),
            }
        }

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "concat"
    CATEGORY = "conditioning"

    def concat(self, conditioning_to, conditioning_from=None):
        if conditioning_from is None:
            return (conditioning_to,)

        out = []
        if len(conditioning_from) > 1:
            print("Warning: ConditioningConcat conditioning_from contains more than 1 cond, only the first one will actually be applied to conditioning_to.")

        cond_from = conditioning_from[0][0]
        for i in range(len(conditioning_to)):
            t1 = conditioning_to[i][0]
            tw = torch.cat((t1, cond_from), 1)
            n = [tw, conditioning_to[i][1].copy()]
            out.append(n)

        return (out,)

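The `torch.cat((t1, cond_from), 1)` above joins the two conditioning tensors along dim 1, the token axis. A minimal sketch of the same shape arithmetic, with NumPy standing in for torch and illustrative `[batch, tokens, channels]` shapes:

```python
import numpy as np

# Two stand-in "conditioning" tensors of shape [batch, tokens, channels]
cond_to = np.zeros((1, 77, 768), dtype=np.float32)
cond_from = np.ones((1, 77, 768), dtype=np.float32)

# torch.cat((t1, cond_from), 1) corresponds to concatenating on axis 1:
joined = np.concatenate((cond_to, cond_from), axis=1)
print(joined.shape)  # (1, 154, 768)
```

The batch and channel dimensions are unchanged; only the token count grows, which is why the two inputs must agree on every axis except the concat axis.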
class RandomShapeGenerator:
    """
    A ComfyUI node that generates images with random shapes.
    """

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "width": ("INT", { "default": 512, "min": 64, "max": 4096, "step": 64, "tooltip": "Width of the generated image in pixels" }),
                "height": ("INT", { "default": 512, "min": 64, "max": 4096, "step": 64, "tooltip": "Height of the generated image in pixels" }),
                "bg_color": (["random", "white", "black", "red", "green", "blue", "yellow", "cyan", "magenta"], { "tooltip": "Background color preset or random" }),
                "fg_color": (["random", "black", "white", "red", "green", "blue", "yellow", "cyan", "magenta"], { "tooltip": "Foreground shape color preset or random" }),
                "shape_type": (["random", "circle", "oval", "triangle", "square", "rectangle", "rhombus", "pentagon", "hexagon"], { "tooltip": "Type of shape to generate or random" }),
                "seed": ("INT", { "default": 0, "min": 0, "max": 0xffffffffffffffff, "control_after_generate": True, "tooltip": "Random seed for reproducible shape generation" }),
            },
            "optional": {
                "bg_color_override": ("STRING", { "default": "", "multiline": False, "tooltip": "Override background color with hex (#AABBCC) or RGB(r, g, b) format" }),
                "fg_color_override": ("STRING", { "default": "", "multiline": False, "tooltip": "Override foreground color with hex (#AABBCC) or RGB(r, g, b) format" }),
            },
        }

    RETURN_TYPES = ("IMAGE", "STRING", "STRING")
    RETURN_NAMES = ("image", "bg_rgb", "fg_rgb")
    OUTPUT_TOOLTIPS = ("Generated image with random shape", "Background color as RGB/hex", "Foreground color as RGB/hex")
    FUNCTION = "generate_shape"
    CATEGORY = "image/generators"
    DESCRIPTION = "Generates images with random shapes for testing and prototyping"

    def __init__(self):
        self.color_map = {
            "white": (255, 255, 255),
            "black": (0, 0, 0),
            "red": (255, 0, 0),
            "green": (0, 255, 0),
            "blue": (0, 0, 255),
            "yellow": (255, 255, 0),
            "cyan": (0, 255, 255),
            "magenta": (255, 0, 255),
        }

    def parse_rgb_string(self, rgb_str: str) -> tuple[int, int, int] | None:
        """Parse a color string like 'RGB(123, 45, 67)' or '#AABBCC' into a tuple (123, 45, 67)."""
        if not rgb_str or rgb_str.strip() == "":
            return None

        rgb_str = rgb_str.strip()

        try:
            # Try hex format first (#AABBCC or AABBCC)
            if rgb_str.startswith("#"):
                hex_str = rgb_str[1:]
            else:
                hex_str = rgb_str

            # Check if it's a valid hex string (6 characters)
            if len(hex_str) == 6 and all(c in '0123456789ABCDEFabcdef' for c in hex_str):
                r = int(hex_str[0:2], 16)
                g = int(hex_str[2:4], 16)
                b = int(hex_str[4:6], 16)
                return (r, g, b)

            # Try RGB(r, g, b) format
            rgb_str_upper = rgb_str.upper()
            if rgb_str_upper.startswith("RGB(") and rgb_str_upper.endswith(")"):
                values = rgb_str[4:-1].split(",")
                r, g, b = [int(v.strip()) for v in values]
                # Validate range
                if all(0 <= val <= 255 for val in [r, g, b]):
                    return (r, g, b)
        except (ValueError, IndexError):
            return None

        return None

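The parsing rules above (6-digit hex first, then `RGB(r, g, b)` with a 0-255 range check) can be shown as a self-contained sketch; `parse_color` is an illustrative re-implementation, not the node's method:

```python
def parse_color(s: str):
    """Illustrative sketch of the same parsing rules: 6-digit hex first, then RGB(r, g, b)."""
    s = s.strip()
    hex_str = s[1:] if s.startswith("#") else s
    if len(hex_str) == 6 and all(c in "0123456789abcdefABCDEF" for c in hex_str):
        return tuple(int(hex_str[i:i + 2], 16) for i in (0, 2, 4))
    if s.upper().startswith("RGB(") and s.endswith(")"):
        try:
            r, g, b = (int(v.strip()) for v in s[4:-1].split(","))
        except ValueError:
            return None
        if all(0 <= v <= 255 for v in (r, g, b)):
            return (r, g, b)
    return None

print(parse_color("#FF8000"))          # (255, 128, 0)
print(parse_color("rgb(12, 34, 56)"))  # (12, 34, 56)
print(parse_color("not-a-color"))      # None
```

Note that the `RGB(` prefix check is case-insensitive while the digits are sliced from the original string, matching the node's `rgb_str_upper` trick.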
    def draw_shape(self, draw: ImageDraw.ImageDraw, img_width: int, img_height: int, shape_type: str, shape_color: tuple[int, int, int]) -> None:
        """Draw a random shape on the image."""

        # Random size - prefer larger sizes (40-70% of image dimensions)
        size_factor = random.uniform(0.4, 0.7)
        shape_width = int(img_width * size_factor)
        shape_height = int(img_height * size_factor)

        # Random position (ensure shape stays fully within bounds)
        x = random.randint(0, max(0, img_width - shape_width))
        y = random.randint(0, max(0, img_height - shape_height))

        # Draw the shape based on type
        if shape_type == 'circle':
            # Make it a perfect circle using the minimum dimension
            radius = min(shape_width, shape_height) // 2
            draw.ellipse([x, y, x + radius * 2, y + radius * 2], fill=shape_color)

        elif shape_type == 'oval':
            draw.ellipse([x, y, x + shape_width, y + shape_height], fill=shape_color)

        elif shape_type == 'square':
            # Make it a perfect square
            side = min(shape_width, shape_height)
            draw.rectangle([x, y, x + side, y + side], fill=shape_color)

        elif shape_type == 'rectangle':
            draw.rectangle([x, y, x + shape_width, y + shape_height], fill=shape_color)

        elif shape_type == 'triangle':
            # Equilateral-ish triangle
            points = [
                (x + shape_width // 2, y),            # top
                (x, y + shape_height),                # bottom left
                (x + shape_width, y + shape_height),  # bottom right
            ]
            draw.polygon(points, fill=shape_color)

        elif shape_type == 'rhombus':
            # Diamond shape
            points = [
                (x + shape_width // 2, y),                 # top
                (x + shape_width, y + shape_height // 2),  # right
                (x + shape_width // 2, y + shape_height),  # bottom
                (x, y + shape_height // 2),                # left
            ]
            draw.polygon(points, fill=shape_color)

        elif shape_type == 'pentagon':
            # Regular pentagon
            cx, cy = x + shape_width // 2, y + shape_height // 2
            radius = min(shape_width, shape_height) // 2
            points = []
            for i in range(5):
                angle = i * 2 * math.pi / 5 - math.pi / 2
                px = cx + radius * math.cos(angle)
                py = cy + radius * math.sin(angle)
                points.append((px, py))
            draw.polygon(points, fill=shape_color)

        elif shape_type == 'hexagon':
            # Regular hexagon
            cx, cy = x + shape_width // 2, y + shape_height // 2
            radius = min(shape_width, shape_height) // 2
            points = []
            for i in range(6):
                angle = i * 2 * math.pi / 6
                px = cx + radius * math.cos(angle)
                py = cy + radius * math.sin(angle)
                points.append((px, py))
            draw.polygon(points, fill=shape_color)

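The pentagon and hexagon branches place vertices on a circle: vertex i of a regular n-gon sits at angle 2πi/n, with a -π/2 phase offset for the pentagon so one vertex points straight up. A standalone sketch of that vertex computation (`regular_polygon` is an illustrative helper, not part of the node):

```python
import math

def regular_polygon(cx: float, cy: float, radius: float, n: int, phase: float = 0.0):
    """Vertices of a regular n-gon centered at (cx, cy); vertex i at angle 2*pi*i/n + phase."""
    return [(cx + radius * math.cos(2 * math.pi * i / n + phase),
             cy + radius * math.sin(2 * math.pi * i / n + phase))
            for i in range(n)]

# Pentagon with phase -pi/2: the first vertex lies straight "up" from the center
# (smaller y in image coordinates, since PIL's y axis points down).
pts = regular_polygon(100, 100, 50, 5, phase=-math.pi / 2)
print(len(pts), pts[0])  # 5 vertices; first is (100.0, 50.0) up to float error
```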
    def generate_shape(self, width: int, height: int, bg_color: str, fg_color: str, shape_type: str, seed: int, bg_color_override: str = "", fg_color_override: str = "") -> tuple[torch.Tensor, str, str]:
        """Generate an image with a random shape."""

        # Set random seed for reproducibility
        random.seed(seed)

        # Get colors from map or generate random RGB values
        # Check for override first
        bg_override = self.parse_rgb_string(bg_color_override)
        if bg_override is not None:
            bg_rgb = bg_override
        elif bg_color == "random":
            bg_rgb = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
        else:
            bg_rgb = self.color_map.get(bg_color, (255, 255, 255))

        fg_override = self.parse_rgb_string(fg_color_override)
        if fg_override is not None:
            fg_rgb = fg_override
        elif fg_color == "random":
            fg_rgb = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
        else:
            fg_rgb = self.color_map.get(fg_color, (0, 0, 0))

        # Create image
        img = Image.new('RGB', (width, height), bg_rgb)
        draw = ImageDraw.Draw(img)

        # Select shape type
        if shape_type == "random":
            shapes = ['circle', 'oval', 'triangle', 'square', 'rectangle', 'rhombus', 'pentagon', 'hexagon']
            selected_shape = random.choice(shapes)
        else:
            selected_shape = shape_type

        # Draw the shape
        self.draw_shape(draw, width, height, selected_shape, fg_rgb)

        # Convert PIL Image to torch tensor (ComfyUI format)
        # ComfyUI expects images in format [batch, height, width, channels] with values 0-1
        img_array = np.array(img).astype(np.float32) / 255.0
        img_tensor = torch.from_numpy(img_array)[None,]

        # Format RGB values as strings for output (both formats)
        bg_hex = f"#{bg_rgb[0]:02X}{bg_rgb[1]:02X}{bg_rgb[2]:02X}"
        fg_hex = f"#{fg_rgb[0]:02X}{fg_rgb[1]:02X}{fg_rgb[2]:02X}"
        bg_rgb_str = f"RGB({bg_rgb[0]}, {bg_rgb[1]}, {bg_rgb[2]}) / {bg_hex}"
        fg_rgb_str = f"RGB({fg_rgb[0]}, {fg_rgb[1]}, {fg_rgb[2]}) / {fg_hex}"

        return (img_tensor, bg_rgb_str, fg_rgb_str)

custom_nodes/comfyui-image-saver/nodes_loaders.py (new file, 60 lines)
@@ -0,0 +1,60 @@
import torch
import folder_paths
import comfy.sd

class CheckpointLoaderWithName:
    RETURN_TYPES = ("MODEL", "CLIP", "VAE", "STRING")
    RETURN_NAMES = ("MODEL", "CLIP", "VAE", "model_name")
    OUTPUT_TOOLTIPS = ("U-Net model (denoising latents)", "CLIP (Contrastive Language-Image Pre-Training) model (encoding text prompts)", "VAE (Variational autoencoder) model (latent<->pixel encoding/decoding)", "checkpoint name")
    FUNCTION = "load_checkpoint"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Loads U-Net model, CLIP model and VAE model from a checkpoint file"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "ckpt_name": (folder_paths.get_filename_list("checkpoints"), {"tooltip": "checkpoint"}),
            }
        }

    def load_checkpoint(self, ckpt_name, output_vae=True, output_clip=True):
        ckpt_path = folder_paths.get_full_path("checkpoints", ckpt_name)
        out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=output_vae, output_clip=output_clip, embedding_directory=folder_paths.get_folder_paths("embeddings"))

        # add checkpoint name to the output tuple (without the ClipVisionModel)
        out = (*out[:3], ckpt_name)
        return out

class UNETLoaderWithName:
    RETURN_TYPES = ("MODEL", "STRING")
    RETURN_NAMES = ("model", "filename")
    OUTPUT_TOOLTIPS = ("U-Net model (denoising latents)", "model filename")
    FUNCTION = "load_unet"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Loads U-Net model and outputs its filename"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "unet_name": (folder_paths.get_filename_list("diffusion_models"),),
                "weight_dtype": (["default", "fp8_e4m3fn", "fp8_e4m3fn_fast", "fp8_e5m2"],)
            }
        }

    def load_unet(self, unet_name, weight_dtype):
        model_options = {}
        if weight_dtype == "fp8_e4m3fn":
            model_options["dtype"] = torch.float8_e4m3fn
        elif weight_dtype == "fp8_e4m3fn_fast":
            model_options["dtype"] = torch.float8_e4m3fn
            model_options["fp8_optimizations"] = True
        elif weight_dtype == "fp8_e5m2":
            model_options["dtype"] = torch.float8_e5m2

        unet_path = folder_paths.get_full_path_or_raise("diffusion_models", unet_name)
        model = comfy.sd.load_diffusion_model(unet_path, model_options=model_options)
        return (model, unet_name)
custom_nodes/comfyui-image-saver/nodes_selectors.py (new file, 196 lines)
@@ -0,0 +1,196 @@
from typing import Any
import comfy

INSPIRE_SCHEDULERS = comfy.samplers.KSampler.SCHEDULERS + ['AYS SDXL', 'AYS SD1', 'AYS SVD', "GITS[coeff=1.2]", 'OSS FLUX', 'OSS Wan', 'OSS Chroma']
EFF_SCHEDULERS = comfy.samplers.KSampler.SCHEDULERS + ['AYS SD1', 'AYS SDXL', 'AYS SVD', 'GITS']

class AnyToString:
    """Converts any input type to a string. Useful for connecting sampler/scheduler outputs from various custom nodes."""

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("string",)
    OUTPUT_TOOLTIPS = ("String representation of the input",)
    FUNCTION = "convert"
    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Converts any input type to string"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "value": ("*",),
            }
        }

    @classmethod
    def VALIDATE_INPUTS(cls, input_types):
        return True

    def convert(self, value: Any) -> tuple[str,]:
        return (str(value),)


class WorkflowInputValue:
    """Extracts an input value from the workflow by node ID and input name."""

    RETURN_TYPES = ("*",)
    RETURN_NAMES = ("value",)
    OUTPUT_TOOLTIPS = ("Input value from the specified node",)
    FUNCTION = "get_input_value"
    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Extract an input value from the workflow by node ID and input name"

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "node_id": ("STRING", {"default": "", "multiline": False, "tooltip": "The ID of the node to extract from"}),
                "input_name": ("STRING", {"default": "", "multiline": False, "tooltip": "The name of the input to extract"}),
            },
            "hidden": {
                "prompt": "PROMPT",
                "extra_pnginfo": "EXTRA_PNGINFO",
            },
        }

    def get_input_value(self, node_id: str, input_name: str, prompt: dict[str, Any] | None = None, extra_pnginfo: dict[str, Any] | None = None):
        if prompt is None:
            return (None,)

        # Verify the node exists in the workflow structure
        if extra_pnginfo and "workflow" in extra_pnginfo:
            workflow = extra_pnginfo["workflow"]
            node_exists = any(str(node.get("id")) == node_id for node in workflow.get("nodes", []))
            if not node_exists:
                print(f"WorkflowInputValue: Node {node_id} not found in workflow structure")
                return (None,)

        # Get the node from the prompt (execution values)
        node = prompt.get(node_id)
        if node is None:
            print(f"WorkflowInputValue: Node {node_id} not found in prompt")
            return (None,)

        # Get the inputs from the node
        inputs = node.get("inputs", {})
        if input_name not in inputs:
            print(f"WorkflowInputValue: Input '{input_name}' not found in node {node_id}")
            print(f"WorkflowInputValue: Available inputs: {list(inputs.keys())}")
            return (None,)

        value = inputs[input_name]
        return (value,)


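The hidden `PROMPT` this node walks is a dict mapping node IDs to entries with a `class_type` and an `inputs` dict. A minimal standalone lookup over hand-made illustrative data (`lookup_input` and the sample prompt are hypothetical, not part of the node):

```python
def lookup_input(prompt: dict, node_id: str, input_name: str):
    """Minimal version of the lookup: node by ID, then the named input, else None."""
    node = prompt.get(node_id)
    if node is None:
        return None
    return node.get("inputs", {}).get(input_name)

# Illustrative prompt structure, keyed by node ID as a string
prompt = {
    "3": {"class_type": "KSampler", "inputs": {"steps": 20, "cfg": 8.0, "seed": 42}},
}
print(lookup_input(prompt, "3", "steps"))    # 20
print(lookup_input(prompt, "3", "missing"))  # None
print(lookup_input(prompt, "99", "steps"))   # None
```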
class SamplerSelector:
    RETURN_TYPES = (comfy.samplers.KSampler.SAMPLERS, "STRING")
    RETURN_NAMES = ("sampler", "sampler_name")
    OUTPUT_TOOLTIPS = ("sampler (SAMPLERS)", "sampler name (STRING)")
    FUNCTION = "get_names"

    CATEGORY = 'ImageSaver/utils'
    DESCRIPTION = 'Provides one of the available ComfyUI samplers'

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "sampler_name": (comfy.samplers.KSampler.SAMPLERS, {"tooltip": "sampler (Comfy's standard)"}),
            }
        }

    def get_names(self, sampler_name: str) -> tuple[str, str]:
        return (sampler_name, sampler_name)

class SchedulerSelector:
    RETURN_TYPES = (comfy.samplers.KSampler.SCHEDULERS, "STRING")
    RETURN_NAMES = ("scheduler", "scheduler_name")
    OUTPUT_TOOLTIPS = ("scheduler (SCHEDULERS)", "scheduler name (STRING)")
    FUNCTION = "get_names"

    CATEGORY = 'ImageSaver/utils'
    DESCRIPTION = 'Provides one of the standard KSampler schedulers'

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "scheduler": (comfy.samplers.KSampler.SCHEDULERS, {"tooltip": "scheduler (Comfy's standard)"}),
            }
        }

    def get_names(self, scheduler: str) -> tuple[str, str]:
        return (scheduler, scheduler)

class SchedulerSelectorInspire:
    RETURN_TYPES = (INSPIRE_SCHEDULERS, "STRING")
    RETURN_NAMES = ("scheduler", "scheduler_name")
    OUTPUT_TOOLTIPS = ("scheduler (ComfyUI + Inspire Pack Schedulers)", "scheduler name (STRING)")
    FUNCTION = "get_names"

    CATEGORY = 'ImageSaver/utils'
    DESCRIPTION = 'Provides one of the KSampler (inspire) schedulers'

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "scheduler": (INSPIRE_SCHEDULERS, {"tooltip": "scheduler (Comfy's standard + extras)"}),
            }
        }

    def get_names(self, scheduler: str) -> tuple[str, str]:
        return (scheduler, scheduler)

class SchedulerSelectorEfficiency:
    RETURN_TYPES = (EFF_SCHEDULERS, "STRING")
    RETURN_NAMES = ("scheduler", "scheduler_name")
    OUTPUT_TOOLTIPS = ("scheduler (ComfyUI + Efficiency Pack Schedulers)", "scheduler name (STRING)")
    FUNCTION = "get_names"

    CATEGORY = 'ImageSaver/utils'
    DESCRIPTION = 'Provides one of the KSampler (Eff.) schedulers'

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "scheduler": (EFF_SCHEDULERS, {"tooltip": "scheduler (Comfy's standard + Efficiency nodes)"}),
            }
        }

    def get_names(self, scheduler: str) -> tuple[str, str]:
        return (scheduler, scheduler)


class InputParameters:
    RETURN_TYPES = ("INT", "INT", "FLOAT", comfy.samplers.KSampler.SAMPLERS, comfy.samplers.KSampler.SCHEDULERS, "FLOAT")
    RETURN_NAMES = ("seed", "steps", "cfg", "sampler", "scheduler", "denoise")
    OUTPUT_TOOLTIPS = (
        "seed (INT)",
        "steps (INT)",
        "cfg (FLOAT)",
        "sampler (SAMPLERS)",
        "scheduler (SCHEDULERS)",
        "denoise (FLOAT)",
    )
    FUNCTION = "get_values"

    CATEGORY = "ImageSaver/utils"
    DESCRIPTION = "Combined node for seed, steps, cfg, sampler, scheduler and denoise."

    @classmethod
    def INPUT_TYPES(cls) -> dict[str, Any]:
        return {
            "required": {
                "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff, "control_after_generate": True, "tooltip": "The random seed used for creating the noise."}),
                "steps": ("INT", {"default": 20, "min": 1, "max": 10000, "tooltip": "The number of steps used in the denoising process."}),
                "cfg": ("FLOAT", {"default": 7.0, "min": 0.0, "max": 100.0, "step": 0.1, "round": 0.01, "tooltip": "The Classifier-Free Guidance scale balances creativity and adherence to the prompt. Higher values produce images that match the prompt more closely, but values that are too high will hurt quality."}),
                "sampler": (comfy.samplers.KSampler.SAMPLERS, {"tooltip": "The algorithm used when sampling; this can affect the quality, speed, and style of the generated output."}),
                "scheduler": (comfy.samplers.KSampler.SCHEDULERS, {"tooltip": "The scheduler controls how noise is gradually removed to form the image."}),
                "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01, "tooltip": "The amount of denoising applied; lower values preserve the structure of the initial image, allowing for image-to-image sampling."}),
            }
        }

    def get_values(self, seed: int, steps: int, cfg: float, sampler: str, scheduler: str, denoise: float) -> tuple[int, int, float, str, str, float]:
        return (seed, steps, cfg, sampler, scheduler, denoise)
@@ -0,0 +1,77 @@
import re
from typing import List, Dict, Tuple
from comfy.sd1_clip import escape_important, unescape_important, token_weights

from .utils import full_embedding_path_for, full_lora_path_for, get_sha256
from .utils_civitai import civitai_embedding_key_name, civitai_lora_key_name

"""
Extracts embeddings and LoRAs from the given prompts
and allows asking for their SHAs.
This module is based on Civitai's plugin and website implementations.
On Civitai, the image saver node goes through the automatic (A1111) flow, not the comfy one.
see: https://github.com/civitai/sd_civitai_extension/blob/2008ba9126ddbb448f23267029b07e4610dffc15/scripts/gen_hashing.py
see: https://github.com/civitai/civitai/blob/d83262f401fb372c375e6222d8c2413fa221c2c5/src/utils/metadata/automatic.metadata
"""
class PromptMetadataExtractor:
    # Anything that follows 'embedding:', up to a comma, whitespace, parenthesis or colon
    EMBEDDING: str = r'embedding:([^,\s\(\)\:]+)'
    # Anything that matches <lora:NAME>, with allowance for a :weight (possibly fractional) or LBW suffix
    LORA: str = r'<lora:([^>:]+)(?::([^>]+))?>'

    def __init__(self, prompts: List[str]) -> None:
        self.__embeddings: Dict[str, Tuple[str, float, str]] = {}
        self.__loras: Dict[str, Tuple[str, float, str]] = {}
        self.__perform(prompts)

    def get_embeddings(self) -> Dict[str, Tuple[str, float, str]]:
        """
        Returns the embeddings used in the given prompts in a format as known by civitAI
        Example output: {"embed:EasyNegative": "66a7279a88", "embed:FastNegativeEmbedding": "687b669d82", "embed:ng_deepnegative_v1_75t": "54e7e4826d", "embed:imageSharpener": "fe5a4dfc4a"}
        """
        return self.__embeddings

    def get_loras(self) -> Dict[str, Tuple[str, float, str]]:
        """
        Returns the LoRAs used in the given prompts in a format as known by civitAI
        Example output: {"LORA:epi_noiseoffset2": "81680c064e", "LORA:GoodHands-beta2": "ba43b0efee"}
        """
        return self.__loras

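What the two class patterns match can be shown directly with `re.findall` (the sample prompt is illustrative; the patterns are copied verbatim from the class above):

```python
import re

EMBEDDING = r'embedding:([^,\s\(\)\:]+)'
LORA = r'<lora:([^>:]+)(?::([^>]+))?>'

prompt = "masterpiece, embedding:EasyNegative, <lora:epi_noiseoffset2:0.8> scenery"
print(re.findall(EMBEDDING, prompt, re.IGNORECASE))  # ['EasyNegative']
print(re.findall(LORA, prompt, re.IGNORECASE))       # [('epi_noiseoffset2', '0.8')]
```

Because `LORA` has two capture groups, `re.findall` returns (name, weight) tuples; the weight group is `''` when the `<lora:NAME>` tag carries no explicit weight.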
    # Private API
    def __perform(self, prompts: List[str]) -> None:
        for prompt in prompts:
            # Use ComfyUI's built-in attention parser to get accurate weights for embeddings
            parsed = ((unescape_important(value), weight) for value, weight in token_weights(escape_important(prompt), 1.0))
            for text, weight in parsed:
                embeddings = re.findall(self.EMBEDDING, text, re.IGNORECASE | re.MULTILINE)
                for embedding in embeddings:
                    self.__extract_embedding_information(embedding, weight)
            loras = re.findall(self.LORA, prompt, re.IGNORECASE | re.MULTILINE)
            for lora in loras:
                self.__extract_lora_information(lora)

    def __extract_embedding_information(self, embedding: str, weight: float) -> None:
        embedding_name = civitai_embedding_key_name(embedding)
        embedding_path = full_embedding_path_for(embedding)
        if embedding_path is None:
            return
        sha = self.__get_shortened_sha(embedding_path)
        # Based on https://github.com/civitai/sd_civitai_extension/blob/2008ba9126ddbb448f23267029b07e4610dffc15/scripts/gen_hashing.py#L53
        self.__embeddings[embedding_name] = (embedding_path, weight, sha)

    def __extract_lora_information(self, lora: Tuple[str, str]) -> None:
        lora_name = civitai_lora_key_name(lora[0])
        lora_path = full_lora_path_for(lora[0])
        if lora_path is None:
            return
        try:
            lora_weight = float(lora[1].split(':')[0])
        except (ValueError, TypeError):
            lora_weight = 1.0
        sha = self.__get_shortened_sha(lora_path)
        # Based on https://github.com/civitai/sd_civitai_extension/blob/2008ba9126ddbb448f23267029b07e4610dffc15/scripts/gen_hashing.py#L63
        self.__loras[lora_name] = (lora_path, lora_weight, sha)

    def __get_shortened_sha(self, file_path: str) -> str:
        return get_sha256(file_path)[:10]
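`get_sha256` comes from the package's own utils, but the shortened hash is simply the first 10 hex digits of a SHA-256 digest. A self-contained sketch of the same idea over in-memory bytes (the real helper would hash file contents, typically chunk-wise):

```python
import hashlib

def shortened_sha(data: bytes) -> str:
    """First 10 hex digits of the SHA-256 digest, matching the [:10] slice above."""
    return hashlib.sha256(data).hexdigest()[:10]

print(shortened_sha(b"test"))  # 9f86d08188
```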
custom_nodes/comfyui-image-saver/pyproject.toml (new file, 15 lines)
@@ -0,0 +1,15 @@
[project]
name = "comfyui-image-saver"
description = "Save images with generation metadata compatible with Civitai. Works with png, jpeg and webp. Stores LoRAs, models and embeddings hashes for resource recognition."
version = "1.21.0"
license = { file = "LICENSE" }
dependencies = ["piexif"]

[project.urls]
Repository = "https://github.com/alexopus/ComfyUI-Image-Saver"
# Used by Comfy Registry https://comfyregistry.org

[tool.comfy]
PublisherId = "alexopus"
DisplayName = "ComfyUI Image Saver"
Icon = ""
custom_nodes/comfyui-image-saver/requirements.txt (new file, 2 lines)
@@ -0,0 +1,2 @@

piexif
custom_nodes/comfyui-image-saver/saver/__init__.py (new empty file)
custom_nodes/comfyui-image-saver/saver/default_workflow.json (new file, 385 lines)
@@ -0,0 +1,385 @@
{
  "last_node_id": 9,
  "last_link_id": 9,
  "nodes": [
    {
      "id": 7,
      "type": "CLIPTextEncode",
      "pos": [413, 389],
      "size": {"0": 425.27801513671875, "1": 180.6060791015625},
      "flags": {},
      "order": 3,
      "mode": 0,
      "inputs": [{"name": "clip", "type": "CLIP", "link": 5}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [6], "slot_index": 0}],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": ["text, watermark"]
    },
    {
      "id": 6,
      "type": "CLIPTextEncode",
      "pos": [415, 186],
      "size": {"0": 422.84503173828125, "1": 164.31304931640625},
      "flags": {},
      "order": 2,
      "mode": 0,
      "inputs": [{"name": "clip", "type": "CLIP", "link": 3}],
      "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [4], "slot_index": 0}],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": ["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"]
    },
    {
      "id": 5,
      "type": "EmptyLatentImage",
      "pos": [473, 609],
      "size": {"0": 315, "1": 106},
      "flags": {},
      "order": 0,
      "mode": 0,
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [2], "slot_index": 0}],
      "properties": {"Node name for S&R": "EmptyLatentImage"},
      "widgets_values": [512, 512, 1]
    },
    {
      "id": 3,
      "type": "KSampler",
      "pos": [863, 186],
      "size": {"0": 315, "1": 262},
      "flags": {},
      "order": 4,
      "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 1},
        {"name": "positive", "type": "CONDITIONING", "link": 4},
        {"name": "negative", "type": "CONDITIONING", "link": 6},
        {"name": "latent_image", "type": "LATENT", "link": 2}
      ],
      "outputs": [{"name": "LATENT", "type": "LATENT", "links": [7], "slot_index": 0}],
      "properties": {"Node name for S&R": "KSampler"},
      "widgets_values": [156680208700286, "randomize", 20, 8, "euler", "normal", 1]
    },
    {
"id": 8,
|
||||
"type": "VAEDecode",
|
||||
"pos": [
|
||||
1209,
|
||||
188
|
||||
],
|
||||
"size": {
|
||||
"0": 210,
|
||||
"1": 46
|
||||
},
|
||||
"flags": {},
|
||||
"order": 5,
|
||||
"mode": 0,
|
||||
"inputs": [
|
||||
{
|
||||
"name": "samples",
|
||||
"type": "LATENT",
|
||||
"link": 7
|
||||
},
|
||||
{
|
||||
"name": "vae",
|
||||
"type": "VAE",
|
||||
"link": 8
|
||||
}
|
||||
],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "IMAGE",
|
||||
"type": "IMAGE",
|
||||
"links": [
|
||||
9
|
||||
],
|
||||
"slot_index": 0
|
||||
}
|
||||
],
|
||||
"properties": {
|
||||
"Node name for S&R": "VAEDecode"
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": 9,
|
||||
"type": "SaveImage",
|
||||
"pos": [
|
||||
1451,
|
||||
189
|
||||
],
|
||||
"size": {
|
||||
"0": 210,
|
||||
"1": 58
|
||||
},
|
||||
"flags": {},
|
||||
"order": 6,
|
||||
"mode": 0,
|
||||
"inputs": [
|
||||
{
|
||||
"name": "images",
|
||||
"type": "IMAGE",
|
||||
"link": 9
|
||||
}
|
||||
],
|
||||
"properties": {
|
||||
"Node name for S&R": "SaveImage"
|
||||
},
|
||||
"widgets_values": [
|
||||
"ComfyUI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": 4,
|
||||
"type": "CheckpointLoaderSimple",
|
||||
"pos": [
|
||||
26,
|
||||
474
|
||||
],
|
||||
"size": {
|
||||
"0": 315,
|
||||
"1": 98
|
||||
},
|
||||
"flags": {},
|
||||
"order": 1,
|
||||
"mode": 0,
|
||||
"outputs": [
|
||||
{
|
||||
"name": "MODEL",
|
||||
"type": "MODEL",
|
||||
"links": [
|
||||
1
|
||||
],
|
||||
"slot_index": 0
|
||||
},
|
||||
{
|
||||
"name": "CLIP",
|
||||
"type": "CLIP",
|
||||
"links": [
|
||||
3,
|
||||
5
|
||||
],
|
||||
"slot_index": 1
|
||||
},
|
||||
{
|
||||
"name": "VAE",
|
||||
"type": "VAE",
|
||||
"links": [
|
||||
8
|
||||
],
|
||||
"slot_index": 2
|
||||
}
|
||||
],
|
||||
"properties": {
|
||||
"Node name for S&R": "CheckpointLoaderSimple"
|
||||
},
|
||||
"widgets_values": [
|
||||
"3Guofeng3_v32Light.safetensors"
|
||||
]
|
||||
}
|
||||
],
|
||||
"links": [
|
||||
[
|
||||
1,
|
||||
4,
|
||||
0,
|
||||
3,
|
||||
0,
|
||||
"MODEL"
|
||||
],
|
||||
[
|
||||
2,
|
||||
5,
|
||||
0,
|
||||
3,
|
||||
3,
|
||||
"LATENT"
|
||||
],
|
||||
[
|
||||
3,
|
||||
4,
|
||||
1,
|
||||
6,
|
||||
0,
|
||||
"CLIP"
|
||||
],
|
||||
[
|
||||
4,
|
||||
6,
|
||||
0,
|
||||
3,
|
||||
1,
|
||||
"CONDITIONING"
|
||||
],
|
||||
[
|
||||
5,
|
||||
4,
|
||||
1,
|
||||
7,
|
||||
0,
|
||||
"CLIP"
|
||||
],
|
||||
[
|
||||
6,
|
||||
7,
|
||||
0,
|
||||
3,
|
||||
2,
|
||||
"CONDITIONING"
|
||||
],
|
||||
[
|
||||
7,
|
||||
3,
|
||||
0,
|
||||
8,
|
||||
0,
|
||||
"LATENT"
|
||||
],
|
||||
[
|
||||
8,
|
||||
4,
|
||||
2,
|
||||
8,
|
||||
1,
|
||||
"VAE"
|
||||
],
|
||||
[
|
||||
9,
|
||||
8,
|
||||
0,
|
||||
9,
|
||||
0,
|
||||
"IMAGE"
|
||||
]
|
||||
],
|
||||
"groups": [],
|
||||
"config": {},
|
||||
"extra": {
|
||||
"ds": {
|
||||
"scale": 0.8264462809917354,
|
||||
"offset": [
|
||||
565.6800000000005,
|
||||
-43.919999999999995
|
||||
]
|
||||
},
|
||||
"info": {
|
||||
"name": "workflow",
|
||||
"author": "",
|
||||
"description": "",
|
||||
"version": "1",
|
||||
"created": "2024-06-02T20:17:02.243Z",
|
||||
"modified": "2024-06-02T20:17:11.438Z",
|
||||
"software": "ComfyUI"
|
||||
}
|
||||
},
|
||||
"version": 0.4
|
||||
}
|
||||
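The workflow JSON above uses ComfyUI's graph schema: each entry in "links" is a 6-tuple of [link_id, from_node, from_slot, to_node, to_slot, type]. As a minimal sketch (using a two-node stand-in rather than the full file above), the node graph can be walked like this:

```python
import json

# A tiny stand-in for default_workflow.json: same schema, two nodes, one link.
workflow = json.loads("""
{
  "nodes": [
    {"id": 4, "type": "CheckpointLoaderSimple"},
    {"id": 3, "type": "KSampler"}
  ],
  "links": [[1, 4, 0, 3, 0, "MODEL"]]
}
""")

node_types = {n["id"]: n["type"] for n in workflow["nodes"]}
# Each link is [link_id, from_node, from_slot, to_node, to_slot, type].
for _link_id, src, _src_slot, dst, _dst_slot, ltype in workflow["links"]:
    print(f"{node_types[src]} -> {node_types[dst]} ({ltype})")
# prints: CheckpointLoaderSimple -> KSampler (MODEL)
```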
3
custom_nodes/comfyui-image-saver/saver/pytest.ini
Normal file
@@ -0,0 +1,3 @@
[pytest]
testpaths = .
python_files = test_*.py
63
custom_nodes/comfyui-image-saver/saver/saver.py
Normal file
@@ -0,0 +1,63 @@
from typing import Any, cast
from PIL.PngImagePlugin import PngInfo
from PIL.Image import Image

import json
import piexif
import piexif.helper

def save_image(image: Image, filepath: str, extension: str, quality_jpeg_or_webp: int, lossless_webp: bool, optimize_png: bool, a111_params: str, prompt: dict[str, Any] | None, extra_pnginfo: dict[str, Any] | None, embed_workflow: bool) -> None:
    if extension == 'png':
        metadata = PngInfo()
        if a111_params:
            metadata.add_text("parameters", a111_params)

        if embed_workflow:
            if extra_pnginfo is not None:
                for k, v in extra_pnginfo.items():
                    metadata.add_text(k, json.dumps(v, separators=(',', ':')))
            if prompt is not None:
                metadata.add_text("prompt", json.dumps(prompt, separators=(',', ':')))

        image.save(filepath, pnginfo=metadata, optimize=optimize_png)
    else:  # webp & jpeg
        image.save(filepath, optimize=True, quality=quality_jpeg_or_webp, lossless=lossless_webp)

        # Native example adding workflow to exif:
        # https://github.com/comfyanonymous/ComfyUI/blob/095610717000bffd477a7e72988d1fb2299afacb/comfy_extras/nodes_images.py#L113
        pnginfo_json = {}
        prompt_json = {}
        if embed_workflow:
            if extra_pnginfo is not None:
                pnginfo_json = {piexif.ImageIFD.Make - i: f"{k}:{json.dumps(v, separators=(',', ':'))}" for i, (k, v) in enumerate(extra_pnginfo.items())}
            if prompt is not None:
                prompt_json = {piexif.ImageIFD.Model: f"prompt:{json.dumps(prompt, separators=(',', ':'))}"}

        def get_exif_bytes() -> bytes:
            exif_dict = ({
                "0th": pnginfo_json | prompt_json
            } if pnginfo_json or prompt_json else {}) | ({
                "Exif": {
                    piexif.ExifIFD.UserComment: cast(bytes, piexif.helper.UserComment.dump(a111_params, encoding="unicode"))
                },
            } if a111_params else {})
            return cast(bytes, piexif.dump(exif_dict))

        exif_bytes = get_exif_bytes()

        # JPEG format limits the EXIF bytes to a maximum of 65535 bytes
        if extension == "jpg" or extension == "jpeg":
            MAX_EXIF_SIZE = 65535
            if len(exif_bytes) > MAX_EXIF_SIZE and embed_workflow:
                print("ComfyUI-Image-Saver: Error: Workflow is too large, removing client request prompt.")
                prompt_json = {}
                exif_bytes = get_exif_bytes()
                if len(exif_bytes) > MAX_EXIF_SIZE:
                    print("ComfyUI-Image-Saver: Error: Workflow is still too large, cannot embed workflow!")
                    pnginfo_json = {}
                    exif_bytes = get_exif_bytes()
                    if len(exif_bytes) > MAX_EXIF_SIZE:
                        print("ComfyUI-Image-Saver: Error: Metadata exceeds maximum size for JPEG. Cannot save metadata.")
                        return

        piexif.insert(exif_bytes, filepath)
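The JPEG branch of save_image() above degrades gracefully when the serialized metadata exceeds the 65535-byte EXIF cap: it drops the prompt first, then the workflow, before giving up on metadata entirely. A minimal standalone sketch of that fallback cascade (fit_metadata is an illustrative name, not part of the module's API):

```python
MAX_EXIF_SIZE = 65535  # JPEG APP1 segment payload limit

def fit_metadata(params: str, prompt: str, workflow: str) -> dict:
    """Return the largest subset of metadata fields that fits in MAX_EXIF_SIZE bytes."""
    candidates = [
        {"params": params, "prompt": prompt, "workflow": workflow},
        {"params": params, "workflow": workflow},  # drop the prompt first
        {"params": params},                        # then drop the workflow too
    ]
    for fields in candidates:
        if sum(len(v.encode()) for v in fields.values()) <= MAX_EXIF_SIZE:
            return fields
    return {}  # nothing fits; the caller saves the image without metadata

small = fit_metadata("Steps: 20", "a prompt", "a workflow")
huge = fit_metadata("Steps: 20", "p" * 100_000, "w" * 100_000)
print(sorted(small), sorted(huge))
# prints: ['params', 'prompt', 'workflow'] ['params']
```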
197
custom_nodes/comfyui-image-saver/saver/test_saver.py
Normal file
@@ -0,0 +1,197 @@
import os
import itertools
import json
import tempfile
import shutil
import pytest
from PIL import Image
import piexif
import piexif.helper
from .saver import save_image

def get_default_workflow():
    """Read the default workflow from the JSON file."""
    current_dir = os.path.dirname(os.path.abspath(__file__))
    default_workflow_path = os.path.join(current_dir, "default_workflow.json")
    with open(default_workflow_path, 'r') as f:
        return json.load(f)


def get_large_workflow(padding_size: int):
    """Create a large workflow by padding the default workflow with `padding_size` extra characters."""
    default_workflow = get_default_workflow()
    large_workflow = default_workflow.copy()
    large_workflow["padding"] = "x" * padding_size
    workflow_size = len(json.dumps(large_workflow)) / 1024  # Size in KB
    print(f"Large workflow size: {workflow_size:.2f} KB")
    return large_workflow


@pytest.fixture(
    params=list(itertools.product(
        ["simple", "default", "large", "huge"],  # workflow_type
        [True, False]  # embed_workflow
    )),
    ids=lambda param: f"workflow-{param[0]}_embed-{param[1]}"
)
def setup_test_env(request):
    """Setup test environment with temp directory and test image, parameterized by workflow type."""
    temp_dir = tempfile.mkdtemp()

    test_image = Image.new('RGB', (100, 100), color='red')

    a111_params = """
beautiful scenery nature glass bottle landscape, purple galaxy bottle, low key
Negative prompt: (worst quality, low quality, bad quality:1.3), embedding:ng_deepnegative_v1_75t, embedding:EasyNegative, embedding:badhandv4
Steps: 30, Sampler: DPM++ 2M SDE, CFG scale: 7.0, Seed: 42, Size: 512x512, Model: , Version: ComfyUI,
Civitai resources: [
{"modelName":"Deep Negative V1.x","versionName":"V1 75T","weight":1.0,"air":"urn:air:sd1:embedding:civitai:4629@5637"},
{"modelName":"EasyNegative","versionName":"EasyNegative_pt","weight":1.0,"air":"urn:air:sd1:embedding:civitai:7808@9536"},
{"modelName":"badhandv4","versionName":"badhandv4","weight":1.0,"air":"urn:air:other:embedding:civitai:16993@20068"}]
"""

    prompt = {"prompt": "test prompt", "negative_prompt": "test negative prompt"}

    workflow_type, embed_workflow = request.param

    if workflow_type == "simple":
        extra_pnginfo = {"workflow": {"version": "1.0", "nodes": []}}
    elif workflow_type == "default":
        default_workflow = get_default_workflow()
        extra_pnginfo = {"workflow": default_workflow}
    elif workflow_type == "large":
        large_workflow = get_large_workflow(524288)
        extra_pnginfo = {"workflow": large_workflow}
        # Check the size for debugging purposes
        workflow_size = len(json.dumps(large_workflow)) / 1024  # Size in KB
        print(f"Large workflow size: {workflow_size:.2f} KB")
    elif workflow_type == "huge":
        huge_workflow = get_large_workflow(2097152)
        extra_pnginfo = {"workflow": huge_workflow}
        # Check the size for debugging purposes
        workflow_size = len(json.dumps(huge_workflow)) / 1024  # Size in KB
        print(f"Large workflow size: {workflow_size:.2f} KB")

    yield temp_dir, test_image, a111_params, prompt, extra_pnginfo, workflow_type, embed_workflow

    shutil.rmtree(temp_dir)


@pytest.mark.parametrize(
    "optimize",
    [True, False],
    ids=["optimize", "no-optimize"]
)
def test_save_png(setup_test_env, optimize):
    """Test that complete metadata is correctly saved and can be retrieved for PNG format."""
    temp_dir, test_image, a111_params, prompt, extra_pnginfo, workflow_type, embed_workflow = setup_test_env
    image_path = os.path.join(temp_dir, f"test_with_workflow_{workflow_type}.png")
    save_image(test_image, image_path, "png", 100, True, optimize, a111_params, prompt, extra_pnginfo, embed_workflow)
    saved_image = Image.open(image_path)
    try:
        assert saved_image.info.get("parameters") == a111_params
        if embed_workflow:
            assert json.loads(saved_image.info.get("prompt")) == prompt
            assert json.loads(saved_image.info.get("workflow")) == extra_pnginfo["workflow"]
        else:
            assert set(saved_image.info.keys()) == {"parameters"}, "PNG should not contain prompt or workflow data"
    finally:
        saved_image.close()

def test_save_jpeg(setup_test_env):
    """Test that metadata is correctly saved and can be retrieved for JPEG format."""
    temp_dir, test_image, a111_params, prompt, extra_pnginfo, workflow_type, embed_workflow = setup_test_env
    jpeg_path = os.path.join(temp_dir, f"test_{workflow_type}.jpeg")
    save_image(test_image, jpeg_path, "jpeg", 90, False, False, a111_params, prompt, extra_pnginfo, embed_workflow)
    saved_image = Image.open(jpeg_path)
    try:
        exif_dict = piexif.load(saved_image.info["exif"])
        user_comment = piexif.helper.UserComment.load(exif_dict["Exif"][piexif.ExifIFD.UserComment])
        assert user_comment == a111_params

        if embed_workflow:
            if workflow_type == "simple" or workflow_type == "default":
                assert "0th" in exif_dict, "Expected workflow data in EXIF"
                # verify that prompt and workflow data are in EXIF
                expected_keys = {piexif.ImageIFD.Make, piexif.ImageIFD.Model}
                found_keys = set(exif_dict["0th"].keys()) & expected_keys
                assert len(found_keys) > 0, "Expected workflow or prompt data in EXIF"

                if piexif.ImageIFD.Make in exif_dict["0th"]:
                    make_data = exif_dict["0th"][piexif.ImageIFD.Make]
                    make_str = make_data.decode('utf-8')
                    # Check that workflow matches
                    if make_str.startswith("workflow:"):
                        make_str = make_str[len("workflow:"):]
                        saved_workflow = json.loads(make_str)
                        original_workflow = extra_pnginfo["workflow"]

                        assert saved_workflow == original_workflow, "Saved workflow content doesn't match original"

                if piexif.ImageIFD.Model in exif_dict["0th"]:
                    model_data = exif_dict["0th"][piexif.ImageIFD.Model]
                    model_str = model_data.decode('utf-8')
                    # Check that "prompt" matches
                    if model_str.startswith("prompt:"):
                        model_str = model_str[len("prompt:"):]
                        saved_prompt = json.loads(model_str)
                        assert saved_prompt == prompt, "Saved prompt content doesn't match original"
            else:
                # When workflow_type is "large" or "huge", verify that the workflow was too large to embed
                if "0th" in exif_dict:
                    assert not any(k in exif_dict["0th"] for k in (piexif.ImageIFD.Make, piexif.ImageIFD.Model)), "JPEG should not contain prompt or workflow data"
        else:
            # When embed_workflow is False, verify no prompt or workflow in EXIF
            if "0th" in exif_dict:
                assert not any(k in exif_dict["0th"] for k in (piexif.ImageIFD.Make, piexif.ImageIFD.Model)), "JPEG should not contain prompt or workflow data"
    finally:
        saved_image.close()

@pytest.mark.parametrize(
    "lossless,quality",
    [(True, 100), (False, 90)],
    ids=["lossless-max", "lossy-90"]
)
def test_save_webp(setup_test_env, lossless, quality):
    """Test that metadata is correctly saved and can be retrieved for WebP format (lossless and lossy)."""
    temp_dir, test_image, a111_params, prompt, extra_pnginfo, workflow_type, embed_workflow = setup_test_env
    image_path = os.path.join(temp_dir, f"test_lossless_{workflow_type}.webp")
    save_image(test_image, image_path, "webp", quality, lossless, False, a111_params, prompt, extra_pnginfo, embed_workflow)
    saved_image = Image.open(image_path)
    try:
        # Verify a111_params is correctly stored in EXIF UserComment
        exif_dict = piexif.load(saved_image.info["exif"])
        user_comment = piexif.helper.UserComment.load(exif_dict["Exif"][piexif.ExifIFD.UserComment])
        assert user_comment == a111_params

        if embed_workflow:
            assert "0th" in exif_dict, "Expected workflow data in EXIF"
            # When embed_workflow is True, verify that prompt and workflow data are in EXIF
            expected_keys = {piexif.ImageIFD.Make, piexif.ImageIFD.Model}
            found_keys = set(exif_dict["0th"].keys()) & expected_keys
            assert len(found_keys) > 0, "Expected workflow or prompt data in EXIF"

            if piexif.ImageIFD.Make in exif_dict["0th"]:
                make_data = exif_dict["0th"][piexif.ImageIFD.Make]
                make_str = make_data.decode('utf-8')
                # Check that workflow matches
                if make_str.startswith("workflow:"):
                    make_str = make_str[len("workflow:"):]
                    saved_workflow = json.loads(make_str)
                    original_workflow = extra_pnginfo["workflow"]

                    assert saved_workflow == original_workflow, "Saved workflow content doesn't match original"

            if piexif.ImageIFD.Model in exif_dict["0th"]:
                model_data = exif_dict["0th"][piexif.ImageIFD.Model]
                model_str = model_data.decode('utf-8')
                # Check that "prompt" matches
                if model_str.startswith("prompt:"):
                    model_str = model_str[len("prompt:"):]
                    saved_prompt = json.loads(model_str)
                    assert saved_prompt == prompt, "Saved prompt content doesn't match original"
        else:
            # When embed_workflow is False, verify no prompt or workflow in EXIF
            if "0th" in exif_dict:
                assert not any(k in exif_dict["0th"] for k in (piexif.ImageIFD.Make, piexif.ImageIFD.Model)), "WEBP should not contain prompt or workflow data"
    finally:
        saved_image.close()
151
custom_nodes/comfyui-image-saver/utils.py
Normal file
@@ -0,0 +1,151 @@
import hashlib
import os
import requests
from typing import Optional, Any
from collections.abc import Collection, Iterator
from pathlib import Path
from tqdm import tqdm
import folder_paths
import re

def sanitize_filename(filename: str) -> str:
    """Remove characters that are unsafe for filenames."""
    # Remove characters that are generally unsafe across file systems
    unsafe_chars = r'[<>:"|?*\x00-\x1f]'
    sanitized = re.sub(unsafe_chars, '', filename)

    # Remove trailing periods and spaces (problematic on Windows)
    sanitized = sanitized.rstrip('. ')
    return sanitized

def get_sha256(file_path: str) -> str:
    """
    Given the file path, returns the hash from a matching .sha256 file,
    or computes it from the file contents and caches it in one
    """
    file_no_ext = os.path.splitext(file_path)[0]
    hash_file = file_no_ext + ".sha256"

    if os.path.exists(hash_file):
        try:
            with open(hash_file, "r") as f:
                return f.read().strip()
        except OSError as e:
            print(f"ComfyUI-Image-Saver: Error reading existing hash file: {e}")

    sha256_hash = hashlib.sha256()
    with open(file_path, "rb") as f:
        file_size = os.fstat(f.fileno()).st_size
        block_size = 1048576  # 1 MB

        print(f"ComfyUI-Image-Saver: Calculating sha256 for '{Path(file_path).stem}'")
        with tqdm(total=file_size, unit="B", unit_scale=True, unit_divisor=1024) as progress_bar:
            for byte_block in iter(lambda: f.read(block_size), b""):
                progress_bar.update(len(byte_block))
                sha256_hash.update(byte_block)

    try:
        with open(hash_file, "w") as f:
            f.write(sha256_hash.hexdigest())
    except OSError as e:
        print(f"ComfyUI-Image-Saver: Error writing hash to {hash_file}: {e}")

    return sha256_hash.hexdigest()

def full_embedding_path_for(embedding: str) -> Optional[str]:
    """
    Based on an embedding name, e.g. EasyNegative, finds the path as known in comfy, including extension
    """
    matching_embedding = get_file_path_match("embeddings", embedding)
    if matching_embedding is None:
        print(f'ComfyUI-Image-Saver: could not find full path to embedding "{embedding}"')
        return None
    return folder_paths.get_full_path("embeddings", matching_embedding)

def full_lora_path_for(lora: str) -> Optional[str]:
    """
    Based on a lora name, e.g. 'epi_noise_offset2', finds the path as known in comfy, including extension.
    """
    # Find the matching lora path
    matching_lora = get_file_path_match("loras", lora)
    if matching_lora is None:
        print(f'ComfyUI-Image-Saver: could not find full path to lora "{lora}"')
        return None
    return folder_paths.get_full_path("loras", matching_lora)

def full_checkpoint_path_for(model_name: str) -> str:
    if not model_name:
        return ''

    supported_extensions = set(folder_paths.supported_pt_extensions) | {".gguf"}

    matching_checkpoint = get_file_path_match("checkpoints", model_name, supported_extensions)
    if matching_checkpoint is not None:
        return folder_paths.get_full_path("checkpoints", matching_checkpoint)

    matching_model = get_file_path_match("diffusion_models", model_name, supported_extensions)
    if matching_model:
        return folder_paths.get_full_path("diffusion_models", matching_model)

    print(f'ComfyUI-Image-Saver: could not find full path to checkpoint "{model_name}"')
    return ''

def get_file_path_iterator(folder_name: str, supported_extensions: Optional[Collection[str]] = None) -> Iterator[Path]:
    """
    Returns an iterator over valid file paths for the specified model folder.
    """
    if supported_extensions is None:
        return (Path(x) for x in folder_paths.get_filename_list(folder_name))
    else:
        return custom_file_path_generator(folder_name, supported_extensions)

def custom_file_path_generator(folder_name: str, supported_extensions: Collection[str]) -> Iterator[Path]:
    """
    Generator function for file paths, allowing for a customized extension check.
    """
    model_paths = folder_paths.folder_names_and_paths.get(folder_name, [[], set()])[0]
    for path in model_paths:
        if os.path.exists(path):
            base_path = Path(path)
            for root, _, files in os.walk(path):
                root_path = Path(root).relative_to(base_path)
                for file in files:
                    file_path = root_path / file
                    if file_path.suffix.lower() in supported_extensions:
                        yield file_path

def get_file_path_match(folder_name: str, file_name: str, supported_extensions: Optional[Collection[str]] = None) -> Optional[str]:
    supported_extensions_fallback = supported_extensions if supported_extensions is not None else folder_paths.supported_pt_extensions
    file_path = Path(file_name)

    # first try a full path match, then fall back to a name-only match, matching the extension if appropriate
    if file_path.suffix.lower() not in supported_extensions_fallback:
        matching_file_path = next((p for p in get_file_path_iterator(folder_name, supported_extensions) if p.with_suffix('') == file_path), None)
        matching_file_path = (matching_file_path if matching_file_path is not None else
                              next((p for p in get_file_path_iterator(folder_name, supported_extensions) if p.stem == file_path.name), None))
    else:
        matching_file_path = next((p for p in get_file_path_iterator(folder_name, supported_extensions) if p == file_path), None)
        matching_file_path = (matching_file_path if matching_file_path is not None else
                              next((p for p in get_file_path_iterator(folder_name, supported_extensions) if p.name == file_path.name), None))

    return str(matching_file_path) if matching_file_path is not None else None

def http_get_json(url: str) -> dict[str, Any] | None:
    try:
        response = requests.get(url, timeout=300)
    except requests.exceptions.Timeout:
        print(f"ComfyUI-Image-Saver: HTTP GET Request timed out for {url}")
        return None
    except requests.exceptions.ConnectionError as e:
        print(f"ComfyUI-Image-Saver: Warning - Network connection error for {url}: {e}")
        return None

    if not response.ok:
        print(f"ComfyUI-Image-Saver: HTTP GET Request failed with error code: {response.status_code}: {response.reason}")
        return None

    try:
        return response.json()
    except ValueError as e:
        print(f"ComfyUI-Image-Saver: HTTP Response JSON error: {e}")
        return None
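The sanitize_filename() helper in utils.py strips characters that are unsafe across file systems and then trims trailing periods and spaces (which Windows rejects). A self-contained sketch re-implementing the same two rules, so the behavior can be checked without the ComfyUI-only folder_paths import:

```python
import re

def sanitize_filename(filename: str) -> str:
    # Same rules as utils.sanitize_filename: drop characters that are unsafe
    # across file systems, then trailing periods/spaces (problematic on Windows).
    sanitized = re.sub(r'[<>:"|?*\x00-\x1f]', '', filename)
    return sanitized.rstrip('. ')

print(sanitize_filename('model: "v2" <final>?.safetensors.  '))
# prints: model v2 final.safetensors
```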
205
custom_nodes/comfyui-image-saver/utils_civitai.py
Normal file
@@ -0,0 +1,205 @@
import json
from pathlib import Path
from typing import List, Dict, Tuple, Any

import folder_paths

from .utils import http_get_json

MAX_HASH_LENGTH = 16  # skip larger unshortened hashes, such as full sha256 or blake3

def civitai_embedding_key_name(embedding: str) -> str:
    """Represent the given embedding name as a key as detected by Civitai"""
    return f'embed:{embedding}'

def civitai_lora_key_name(lora: str) -> str:
    """
    Represent the given lora name as a key as detected by Civitai
    NB: this should also work fine for Lycoris
    """
    return f'LORA:{lora}'

CIVITAI_SAMPLER_MAP = {
    'euler_ancestral': 'Euler a',
    'euler': 'Euler',
    'lms': 'LMS',
    'heun': 'Heun',
    'dpm_2': 'DPM2',
    'dpm_2_ancestral': 'DPM2 a',
    'dpmpp_2s_ancestral': 'DPM++ 2S a',
    'dpmpp_2m': 'DPM++ 2M',
    'dpmpp_sde': 'DPM++ SDE',
    'dpmpp_2m_sde': 'DPM++ 2M SDE',
    'dpmpp_3m_sde': 'DPM++ 3M SDE',
    'dpm_fast': 'DPM fast',
    'dpm_adaptive': 'DPM adaptive',
    'ddim': 'DDIM',
    'plms': 'PLMS',
    'uni_pc_bh2': 'UniPC',
    'uni_pc': 'UniPC',
    'lcm': 'LCM',
}

def get_civitai_sampler_name(sampler_name: str, scheduler: str) -> str:
    # based on: https://github.com/civitai/civitai/blob/main/src/server/common/constants.ts#L122
    if sampler_name in CIVITAI_SAMPLER_MAP:
        civitai_name = CIVITAI_SAMPLER_MAP[sampler_name]

        if scheduler == "karras":
            civitai_name += " Karras"
        elif scheduler == "exponential":
            civitai_name += " Exponential"

        return civitai_name
    else:
        if scheduler != 'normal':
            return f"{sampler_name}_{scheduler}"
        else:
            return sampler_name

def get_civitai_metadata(
        modelname: str,
        ckpt_path: str,
        modelhash: str,
        loras: Dict[str, Tuple[str, float, str]],
        embeddings: Dict[str, Tuple[str, float, str]],
        manual_entries: Dict[str, tuple[str | None, float | None, str]],
        download_civitai_data: bool) -> Tuple[List[Dict[str, str | float]], Dict[str, str], str | None]:
    """Download or load cache of Civitai data, save specially-formatted data to metadata"""
    civitai_resources: List[Dict[str, str | float]] = []
    hashes = {}
    add_model_hash = None

    if download_civitai_data:
        for name, (filepath, weight, hash) in ({ modelname: ( ckpt_path, None, modelhash ) } | loras | embeddings | manual_entries).items():
            civitai_info = get_civitai_info(filepath, hash)
            if civitai_info is not None:
                resource_data: Dict[str, str | float] = {}

                # Optional data - modelName, versionName
                resource_data["modelName"] = civitai_info["model"]["name"]
                resource_data["versionName"] = civitai_info["name"]

                # Weight/strength (for LoRA or embedding)
                if weight is not None:
                    resource_data["weight"] = weight

                # Required data - AIR or modelVersionId (unique resource identifier)
                # https://github.com/civitai/civitai/wiki/AIR-%E2%80%90-Uniform-Resource-Names-for-AI
                if "air" in civitai_info:
                    resource_data["air"] = civitai_info["air"]
                else:
                    # Fallback if AIR is not found
                    resource_data["modelVersionId"] = civitai_info["id"]
                civitai_resources.append(resource_data)
            else:
                # Fallback in case the data wasn't loaded: add to the "Hashes" section
                if name == modelname:
                    add_model_hash = hash.upper()
                else:
                    hashes[name] = hash.upper()
    else:
        # Convert all hashes to JSON format
        hashes = {key: value[2] for key, value in embeddings.items()} | {key: value[2] for key, value in loras.items()} | {key: value[2] for key, value in manual_entries.items()} | {"model": modelhash}
        add_model_hash = modelhash

    return civitai_resources, hashes, add_model_hash

def get_civitai_info(path: Path | str | None, model_hash: str) -> dict[str, Any] | None:
    try:
        if not model_hash:
            print("ComfyUI-Image-Saver: Error: Missing hash.")
            return None

        # path is None for additional hashes added by the user - caches manually added hash data in the "image-saver" folder
        if path is None:
            manual_list = get_manual_list()
            manual_data = manual_list.get(model_hash.upper(), None)
            if manual_data is None:
                content = download_model_info(path, model_hash)
                if content is None:
                    return None

                # dynamically receive filename from the website to save the metadata
                file = next((file for file in content["files"] if any(len(value) <= MAX_HASH_LENGTH and value.upper() == model_hash.upper() for value in file["hashes"].values())), None)
                if file is None:
                    print(f"ComfyUI-Image-Saver: ({model_hash}) No file hash matched in metadata (should be impossible)")
                    return content
                filename = file["name"]

                # Cache data in a local file, removing the need for repeat http requests
                for hash_value in file["hashes"].values():
                    if len(hash_value) <= MAX_HASH_LENGTH:
                        manual_list = append_manual_list(hash_value.upper(), { "filename": filename, "type": content["model"]["type"] })

                save_civitai_info_file(content, get_manual_folder() / filename)
                return content
            else:
                path = get_manual_folder() / manual_data["filename"]

        info_path = Path(path).with_suffix(".civitai.info").absolute()
        with open(info_path, 'r') as file:
            return json.load(file)
    except FileNotFoundError:
        return download_model_info(path, model_hash)
    except Exception as e:
        print(f"ComfyUI-Image-Saver: Civitai info error: {e}")
        return None

def download_model_info(path: Path | str | None, model_hash: str) -> dict[str, object] | None:
    model_label = model_hash if path is None else f"{Path(path).stem}:{model_hash}"
    print(f"ComfyUI-Image-Saver: Downloading model info for '{model_label}'.")

    content = http_get_json(f'https://civitai.com/api/v1/model-versions/by-hash/{model_hash.upper()}')
    if content is None:
        return None
    model_id = content["modelId"]
    parent_model = http_get_json(f'https://civitai.com/api/v1/models/{model_id}')
    if not parent_model:
        parent_model = {}

    content["creator"] = parent_model.get("creator", "{}")
    model_metadata = content["model"]
    for metadata in [ "description", "tags", "allowNoCredit", "allowCommercialUse", "allowDerivatives", "allowDifferentLicense" ]:
        model_metadata[metadata] = parent_model.get(metadata, "")

    if path is not None:
        save_civitai_info_file(content, path)

    return content

def save_civitai_info_file(content: dict[str, object], path: Path | str) -> bool:
    try:
        with open(Path(path).with_suffix(".civitai.info").absolute(), 'w') as info_file:
            info_file.write(json.dumps(content, indent=4))
    except Exception as e:
        print(f"ComfyUI-Image-Saver: Save Civitai info error '{path}': {e}")
        return False
    return True

def get_manual_folder() -> Path:
    return Path(folder_paths.models_dir) / "image-saver"

def get_manual_list() -> dict[str, dict[str, Any]]:
    folder = get_manual_folder()
    folder.mkdir(parents=True, exist_ok=True)
    try:
        manual_path = (folder / "manual-hashes.json").absolute()
        with open(manual_path, 'r') as file:
            return json.load(file)
    except FileNotFoundError:
        return {}
    except Exception as e:
        print(f"ComfyUI-Image-Saver: Manual list get error: {e}")
        return {}

def append_manual_list(key: str, value: dict[str, Any]) -> dict[str, dict[str, Any]]:
    manual_list = get_manual_list() | { key: value }
    try:
        with open((get_manual_folder() / "manual-hashes.json").absolute(), 'w') as file:
            file.write(json.dumps(manual_list, indent=4))
    except Exception as e:
        print(f"ComfyUI-Image-Saver: Manual list append error: {e}")
    return manual_list
|
||||