Add custom nodes, Civitai loras (LFS), and vast.ai setup script

Includes 30 custom nodes committed directly, 7 Civitai-exclusive
LoRAs stored via Git LFS, and a setup script that installs all
dependencies and downloads Hugging Face-hosted models on vast.ai.
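The model downloads described above follow Hugging Face's public resolve-URL scheme. A minimal stdlib sketch of the URL construction (the repo and file names below are placeholders, not the actual models in this commit):

```python
from urllib.parse import quote

def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the huggingface.co download URL for one file in a model repo."""
    return (
        "https://huggingface.co/"
        f"{repo_id}/resolve/{quote(revision)}/{quote(filename)}"
    )

# Placeholder example; the setup script downloads its own model list.
print(hf_resolve_url("black-forest-labs/FLUX.1-dev", "flux1-dev.safetensors"))
# → https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors
```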

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 00:55:26 +00:00
parent 2b70ab9ad0
commit f09734b0ee
2274 changed files with 748556 additions and 3 deletions


@@ -0,0 +1,164 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
ファイル構成.txt


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,147 @@
# ComfyUI-Fal-API-Flux
![example workflow](examples/workflow_fal_api_flux_dev_with_lora_and_controlnet_image_to_image_node.png "Example workflow")
This repository contains custom nodes for ComfyUI that integrate the fal.ai FLUX.1 APIs for text-to-image and image-to-image generation. These nodes allow you to use the FLUX.1 models directly within your ComfyUI workflows.
## Features
- Text-to-image generation using fal.ai's FLUX.1 [dev] and FLUX.1 [pro] models
- Image-to-image generation using FLUX.1 [dev] model
- Support for LoRA models
- ControlNet and ControlNet Union support
- Customizable generation parameters (image size, inference steps, guidance scale)
- Multiple image generation in a single request
- Seed support for reproducible results
- Safety tolerance settings for FLUX.1 [pro]
## Prerequisites
- ComfyUI installed and set up
- Python 3.7+
- PyTorch 2.0.1 or later
- A fal.ai API key with access to the FLUX.1 models
## Installation
There are two ways to install ComfyUI-Fal-API-Flux:
### Method 1: Using ComfyUI Manager (Recommended)
1. Install [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager) if you haven't already.
2. Open ComfyUI and navigate to the "Manager" tab.
3. Search for "ComfyUI-Fal-API-Flux" in the custom nodes section.
4. Click "Install" to automatically download and install the custom nodes.
### Method 2: Manual Installation
1. Clone this repository into your ComfyUI's `custom_nodes` directory:
```
cd /path/to/ComfyUI/custom_nodes
git clone https://github.com/your-username/ComfyUI-Fal-API-Flux.git
```
2. Navigate to the cloned directory:
```
cd ComfyUI-Fal-API-Flux
```
3. Install the required dependencies:
```
pip install -r requirements.txt
```
After installation using either method:
1. Configure your API key (see the Configuration section below).
2. Restart ComfyUI if it is already running.
## Configuration
To use these custom nodes, you need to set up your fal.ai API key:
1. Create a `config.ini` file in the root directory of the project.
2. Add the following content to `config.ini`:
```ini
[falai]
api_key = your_api_key_here
```
3. Replace `your_api_key_here` with your actual fal.ai API key.
4. Save the file.
Keep your `config.ini` file secure and do not share it publicly.
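The nodes load this key with Python's `configparser` (see `get_api_key` in the base node class). A minimal stdlib sketch of the lookup, writing the file to a temporary directory purely for illustration:

```python
import configparser
import os
import tempfile

# Write a sample config.ini (the real file lives in the repository root).
cfg_dir = tempfile.mkdtemp()
path = os.path.join(cfg_dir, "config.ini")
with open(path, "w") as f:
    f.write("[falai]\napi_key = your_api_key_here\n")

config = configparser.ConfigParser()
config.read(path)
# fallback=None mirrors the node's behaviour when the key is absent.
api_key = config.get("falai", "api_key", fallback=None)
print(api_key)  # → your_api_key_here
```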
## Usage
After installation, you'll find the following new nodes in the ComfyUI interface:
1. "Fal API Flux Dev": The main node for text-to-image generation using FLUX.1 [dev].
2. "Fal API Flux Dev Image-to-Image": A node for image-to-image generation using FLUX.1 [dev].
3. "Fal API Flux Dev with LoRA": A node for text-to-image generation using FLUX.1 [dev] with LoRA support.
4. "Fal API Flux Dev with LoRA Image-to-Image": A node for image-to-image generation using FLUX.1 [dev] with LoRA support.
5. "Fal API Flux with LoRA and ControlNet": A node for text-to-image generation using FLUX.1 [dev] with LoRA and ControlNet support.
6. "Fal API Flux with LoRA and ControlNet Image-to-Image": A node for image-to-image generation using FLUX.1 [dev] with LoRA and ControlNet support.
7. "Fal API Flux Pro": A node for text-to-image generation using FLUX.1 [pro].
8. "Fal API Flux Pro V1.1": An updated node for text-to-image generation using FLUX.1 [pro] V1.1.
9. "Fal API Flux LoRA Config": A node for configuring LoRA models.
10. "Fal API Flux ControlNet Config": A node for configuring ControlNet.
11. "Fal API Flux ControlNet Union Config": A node for configuring ControlNet Union.
### Basic Usage
1. Add one of the Fal API Flux nodes to your workflow.
2. Configure the node parameters (prompt, image size, etc.).
3. Connect the output to a "Preview Image" or "Save Image" node to see the results.
### Using LoRA
1. Add a "Fal API Flux LoRA Config" node to your workflow.
2. Configure the LoRA URL and scale.
3. Connect the output of the LoRA Config node to the `lora` input of a compatible Fal API Flux node.
### Using ControlNet
1. Add a "Fal API Flux ControlNet Config" or "Fal API Flux ControlNet Union Config" node to your workflow.
2. Configure the ControlNet parameters.
3. Connect the output to the `controlnet` or `controlnet_union` input of a compatible Fal API Flux node.
### Image-to-Image Generation
1. Use a ComfyUI image loader node to load an input image.
2. Connect the loaded image to an image-to-image node (e.g., "Fal API Flux Dev Image-to-Image").
3. Configure the node parameters, including the strength of the transformation.
## Example Workflows
Example workflows are provided in the `examples` folder of this repository. To use them:
1. Locate the desired workflow image in the `examples` folder.
2. Open ComfyUI in your web browser.
3. Drag and drop the workflow image directly onto the ComfyUI canvas.
These example workflows provide starting points for using the Fal API Flux nodes in your own projects.
## Troubleshooting
If you encounter issues:
1. Ensure you have access to the FLUX.1 models on fal.ai.
2. Check the ComfyUI console for detailed error messages and logs.
3. Verify that your API key is correctly set in the `config.ini` file.
4. Make sure your LoRA URL is correct and compatible with FLUX.1 [dev].
5. For persistent issues, enable debug logging and check the logs for API responses and image processing details.
## Contributing
Contributions to improve the nodes or extend their functionality are welcome! Please feel free to submit issues or pull requests.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgements
- [fal.ai](https://fal.ai) for providing the FLUX.1 APIs
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for the extensible UI framework
## Disclaimer
This project is not officially affiliated with or endorsed by fal.ai or ComfyUI. Use it at your own risk and be sure to comply with fal.ai's terms of service when using their API.


@@ -0,0 +1,86 @@
from .modules.fal_api_flux_dev_node import FalAPIFluxDevNode
from .modules.fal_api_flux_dev_image_to_image_node import FalAPIFluxDevImageToImageNode
from .modules.fal_api_flux_dev_with_lora_node import FalAPIFluxDevWithLoraNode
from .modules.fal_api_flux_dev_with_lora_image_to_image_node import FalAPIFluxDevWithLoraImageToImageNode
from .modules.fal_api_flux_dev_with_lora_inpaint_node import FalAPIFluxDevWithLoraInpaintNode
from .modules.fal_api_flux_dev_with_lora_and_controlnet_node import FalAPIFluxDevWithLoraAndControlNetNode
from .modules.fal_api_flux_dev_with_lora_and_controlnet_image_to_image_node import FalAPIFluxDevWithLoraAndControlNetImageToImageNode
from .modules.fal_api_flux_dev_with_lora_and_controlnet_inpaint_node import FalAPIFluxDevWithLoraAndControlNetInpaintNode
from .modules.fal_api_flux_pro_node import FalAPIFluxProNode
from .modules.fal_api_flux_pro_v11_node import FalAPIFluxProV11Node
from .modules.fal_api_flux_pro_v11_ultra_node import FalAPIFluxProV11UltraNode
from .modules.fal_api_flux_lora_config_node import FalAPIFluxLoraConfigNode
from .modules.fal_api_flux_controlnet_config_node import FalAPIFluxControlNetConfigNode
from .modules.fal_api_flux_controlnet_union_config_node import FalAPIFluxControlNetUnionConfigNode
from .modules.fal_api_flux_pro_canny_node import FalAPIFluxProCannyNode
from .modules.fal_api_flux_pro_depth_node import FalAPIFluxProDepthNode
from .modules.fal_api_flux_pro_fill_node import FalAPIFluxProFillNode
from .modules.fal_api_flux_pro_redux_node import FalAPIFluxProReduxNode
from .modules.fal_api_flux_dev_canny_with_lora_node import FalAPIFluxDevCannyWithLoraNode
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevNode": FalAPIFluxDevNode,
"FalAPIFluxDevImageToImageNode": FalAPIFluxDevImageToImageNode,
"FalAPIFluxDevWithLoraNode": FalAPIFluxDevWithLoraNode,
"FalAPIFluxDevWithLoraImageToImageNode": FalAPIFluxDevWithLoraImageToImageNode,
"FalAPIFluxDevWithLoraInpaintNode": FalAPIFluxDevWithLoraInpaintNode,
"FalAPIFluxDevWithLoraAndControlNetNode": FalAPIFluxDevWithLoraAndControlNetNode,
"FalAPIFluxDevWithLoraAndControlNetImageToImageNode": FalAPIFluxDevWithLoraAndControlNetImageToImageNode,
"FalAPIFluxDevWithLoraAndControlNetInpaintNode": FalAPIFluxDevWithLoraAndControlNetInpaintNode,
"FalAPIFluxProNode": FalAPIFluxProNode,
"FalAPIFluxProV11Node": FalAPIFluxProV11Node,
"FalAPIFluxProV11UltraNode": FalAPIFluxProV11UltraNode,
"FalAPIFluxLoraConfigNode": FalAPIFluxLoraConfigNode,
"FalAPIFluxControlNetConfigNode": FalAPIFluxControlNetConfigNode,
"FalAPIFluxControlNetUnionConfigNode": FalAPIFluxControlNetUnionConfigNode,
"FalAPIFluxProCannyNode": FalAPIFluxProCannyNode,
"FalAPIFluxProDepthNode": FalAPIFluxProDepthNode,
"FalAPIFluxProFillNode": FalAPIFluxProFillNode,
"FalAPIFluxProReduxNode": FalAPIFluxProReduxNode,
"FalAPIFluxDevCannyWithLoraNode": FalAPIFluxDevCannyWithLoraNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevNode": "Fal API Flux Dev",
"FalAPIFluxDevImageToImageNode": "Fal API Flux Dev Image-to-Image",
"FalAPIFluxDevWithLoraNode": "Fal API Flux Dev with LoRA",
"FalAPIFluxDevWithLoraImageToImageNode": "Fal API Flux Dev with LoRA Image-to-Image",
"FalAPIFluxDevWithLoraInpaintNode": "Fal API Flux Dev with LoRA Inpaint",
"FalAPIFluxDevWithLoraAndControlNetNode": "Fal API Flux with LoRA and ControlNet",
"FalAPIFluxDevWithLoraAndControlNetImageToImageNode": "Fal API Flux with LoRA and ControlNet Image-to-Image",
"FalAPIFluxDevWithLoraAndControlNetInpaintNode": "Fal API Flux with LoRA and ControlNet Inpaint",
"FalAPIFluxProNode": "Fal API Flux Pro",
"FalAPIFluxProV11Node": "Fal API Flux Pro V1.1",
"FalAPIFluxProV11UltraNode": "Fal API Flux Pro v1.1 Ultra",
"FalAPIFluxLoraConfigNode": "Fal API Flux LoRA Config",
"FalAPIFluxControlNetConfigNode": "Fal API Flux ControlNet Config",
"FalAPIFluxControlNetUnionConfigNode": "Fal API Flux ControlNet Union Config",
"FalAPIFluxProCannyNode": "Fal API Flux Pro Canny",
"FalAPIFluxProDepthNode": "Fal API Flux Pro Depth",
"FalAPIFluxProFillNode": "Fal API Flux Pro Fill",
"FalAPIFluxProReduxNode": "Fal API Flux Pro Redux",
"FalAPIFluxDevCannyWithLoraNode": "Fal API Flux Dev Canny With LoRA"
}
__all__ = [
'FalAPIFluxDevNode',
'FalAPIFluxDevImageToImageNode',
'FalAPIFluxDevWithLoraNode',
'FalAPIFluxDevWithLoraImageToImageNode',
'FalAPIFluxDevWithLoraInpaintNode',
'FalAPIFluxDevWithLoraAndControlNetNode',
'FalAPIFluxDevWithLoraAndControlNetImageToImageNode',
'FalAPIFluxDevWithLoraAndControlNetInpaintNode',
'FalAPIFluxProNode',
'FalAPIFluxProV11Node',
'FalAPIFluxProV11UltraNode',
'FalAPIFluxLoraConfigNode',
'FalAPIFluxControlNetConfigNode',
'FalAPIFluxControlNetUnionConfigNode',
'FalAPIFluxProCannyNode',
'FalAPIFluxProDepthNode',
'FalAPIFluxProFillNode',
'FalAPIFluxProReduxNode',
'FalAPIFluxDevCannyWithLoraNode'
]
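ComfyUI discovers these dictionaries when it imports each package under `custom_nodes`: the class mappings register the node implementations and the display-name mappings label them in the UI. The merge it performs can be sketched roughly as follows (the `register` helper is illustrative, not ComfyUI's actual loader code):

```python
# Illustrative mappings, mirroring the structure exported above.
NODE_CLASS_MAPPINGS = {"FalAPIFluxDevNode": object}
NODE_DISPLAY_NAME_MAPPINGS = {"FalAPIFluxDevNode": "Fal API Flux Dev"}

def register(global_classes, global_names, module_classes, module_names):
    """Merge one custom-node package's mappings into global registries."""
    for key, cls in module_classes.items():
        global_classes[key] = cls
        # Fall back to the internal key when no display name is provided.
        global_names[key] = module_names.get(key, key)
    return global_classes, global_names

classes, names = register({}, {}, NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS)
print(names["FalAPIFluxDevNode"])  # → Fal API Flux Dev
```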


@@ -0,0 +1,2 @@
[falai]
api_key = your_api_key_here

Six binary example workflow images added (not shown): 223 KiB, 163 KiB, 394 KiB, 347 KiB, 218 KiB, and 162 KiB.


@@ -0,0 +1,246 @@
import os
import fal_client
import folder_paths
import configparser
import base64
import io
from PIL import Image
import logging
import json
import requests
import numpy as np
import torch
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
class BaseFalAPIFluxNode:
def __init__(self):
self.api_key = self.get_api_key()
os.environ['FAL_KEY'] = self.api_key
self.api_endpoint = None
def get_api_key(self):
config = configparser.ConfigParser()
config_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'config.ini')
if os.path.exists(config_path):
config.read(config_path)
return config.get('falai', 'api_key', fallback=None)
return None
def set_api_endpoint(self, endpoint):
self.api_endpoint = endpoint
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"prompt": ("STRING", {"multiline": True}),
"width": ("INT", {"default": 1024, "step": 8}),
"height": ("INT", {"default": 1024, "step": 8}),
"num_inference_steps": ("INT", {"default": 28, "min": 1, "max": 100}),
"guidance_scale": ("FLOAT", {"default": 3.5, "min": 0.1, "max": 40.0}),
"num_images": ("INT", {"default": 1, "min": 1, "max": 4}),
"enable_safety_checker": ("BOOLEAN", {"default": True}),
},
"optional": {
"seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
}
}
RETURN_TYPES = ("IMAGE",)
FUNCTION = "generate"
CATEGORY = "image generation"
def prepare_arguments(self, prompt, width, height, num_inference_steps, guidance_scale, num_images, enable_safety_checker, seed=None, **kwargs):
if not self.api_key:
raise ValueError("API key is not set. Please check your config.ini file.")
arguments = {
"prompt": prompt,
"num_inference_steps": num_inference_steps,
"guidance_scale": guidance_scale,
"num_images": num_images,
"enable_safety_checker": enable_safety_checker
}
# Handle custom image size
if width is None or height is None:
raise ValueError("Width and height must be provided when using custom image size")
arguments["image_size"] = {
"width": width,
"height": height
}
if seed is not None and seed != 0:
arguments["seed"] = seed
return arguments
def call_api(self, arguments):
logger.debug(f"Full API request payload: {json.dumps(arguments, indent=2)}")
if not self.api_endpoint:
raise ValueError("API endpoint is not set. Please set it using set_api_endpoint() method.")
try:
handler = fal_client.submit(
self.api_endpoint,
arguments=arguments,
)
result = handler.get()
logger.debug(f"API response: {json.dumps(result, indent=2)}")
return result
except Exception as e:
logger.error(f"API error details: {str(e)}")
if hasattr(e, 'response'):
logger.error(f"API error response: {e.response.text}")
raise RuntimeError(f"An error occurred when calling the fal.ai API: {str(e)}") from e
def process_images(self, result):
if "images" not in result or not result["images"]:
logger.error("No images were generated by the API.")
raise RuntimeError("No images were generated by the API.")
output_images = []
for index, img_info in enumerate(result["images"]):
try:
logger.debug(f"Processing image {index}: {json.dumps(img_info, indent=2)}")
if not isinstance(img_info, dict) or "url" not in img_info or not img_info["url"]:
logger.error(f"Invalid image info for image {index}")
continue
img_url = img_info["url"]
logger.debug(f"Image URL: {img_url[:100]}...") # Log the first 100 characters of the URL
if img_url.startswith("data:image"):
# Handle Base64 encoded image
try:
_, img_data = img_url.split(",", 1)
img_data = base64.b64decode(img_data)
except ValueError:
logger.error(f"Failed to split image URL for image {index}")
continue
else:
# Handle regular URL
try:
response = requests.get(img_url, timeout=60)  # avoid hanging the workflow on a stalled download
response.raise_for_status()
img_data = response.content
except requests.RequestException as e:
logger.error(f"Failed to download image from URL for image {index}: {str(e)}")
continue
# Log the first few bytes of the image data
logger.debug(f"First 20 bytes of image data: {img_data[:20]}")
# Try to interpret the data as an image
try:
img = Image.open(io.BytesIO(img_data))
logger.debug(f"Opened image with size: {img.size} and mode: {img.mode}")
except Exception as e:
logger.error(f"Failed to open image data: {str(e)}")
# If opening as an image fails, try to interpret it as raw pixel data
img_np = np.frombuffer(img_data, dtype=np.uint8)
logger.debug(f"Interpreted as raw pixel data with shape: {img_np.shape}")
# If the shape is (1024,), reshape it to a more sensible image size
if img_np.shape == (1024,):
img_np = img_np.reshape(32, 32) # Reshape to 32x32 image
elif img_np.shape == (1, 1, 1024):
img_np = img_np.reshape(32, 32)
# Normalize the data to 0-255 range
img_np = ((img_np - img_np.min()) / (img_np.max() - img_np.min()) * 255).astype(np.uint8)
img = Image.fromarray(img_np, 'L') # Create grayscale image
img = img.convert('RGB') # Convert to RGB
# Ensure image is in RGB mode
if img.mode != 'RGB':
img = img.convert('RGB')
# Convert PIL Image to NumPy array
img_np = np.array(img).astype(np.float32) / 255.0
# Create tensor with batch dimension (1, H, W, C)
img_tensor = torch.from_numpy(img_np)
img_tensor = img_tensor.unsqueeze(0) # (1, H, W, C)
output_images.append(img_tensor)
except Exception as e:
logger.error(f"Failed to process image {index}: {str(e)}")
if not output_images:
logger.error("Failed to process any of the generated images.")
raise RuntimeError("Failed to process any of the generated images.")
# Stack all images into a single batch tensor
if output_images:
output_tensor = torch.cat(output_images, dim=0)
logger.debug(f"Returning batched tensor with shape: {output_tensor.shape}")
return [output_tensor]
else:
logger.error("No images were successfully processed")
raise RuntimeError("No images were successfully processed")
def upload_image(self, image):
try:
# Convert PyTorch tensor to numpy array
if isinstance(image, torch.Tensor):
image = image.cpu().numpy()
# Handle different shapes of numpy arrays
if isinstance(image, np.ndarray):
if image.ndim == 4 and image.shape[0] == 1: # (1, H, W, 3) or (1, H, W, 1)
image = image.squeeze(0)
if image.ndim == 3:
if image.shape[2] == 3: # (H, W, 3) RGB image
pass
elif image.shape[2] == 1: # (H, W, 1) grayscale
image = np.repeat(image, 3, axis=2)
elif image.shape == (1, 1, 1536):  # Special case for (1, 1, 1536) shape
# must be checked before the generic (1, H, W) branch, or it is unreachable
image = image.reshape(32, 48)
image = np.repeat(image[..., np.newaxis], 3, axis=2)
elif image.shape[0] == 3:  # (3, H, W) RGB
image = np.transpose(image, (1, 2, 0))
elif image.shape[0] == 1:  # (1, H, W) grayscale
image = np.repeat(image.squeeze(0)[..., np.newaxis], 3, axis=2)
else:
raise ValueError(f"Unsupported image shape: {image.shape}")
# Normalize to 0-255 range if not already (the small epsilon guards
# against division by zero on a constant image)
if image.dtype != np.uint8:
image = (image - image.min()) / max(image.max() - image.min(), 1e-8) * 255
image = image.astype(np.uint8)
image = Image.fromarray(image)
# Ensure image is in RGB mode
if image.mode != 'RGB':
image = image.convert('RGB')
# Resize image if it's too large (optional, adjust max_size as needed)
max_size = 1024 # Example max size
if max(image.size) > max_size:
image.thumbnail((max_size, max_size), Image.LANCZOS)
# Convert PIL Image to bytes
buffered = io.BytesIO()
image.save(buffered, format="PNG")
img_byte = buffered.getvalue()
# Upload the image using fal_client
url = fal_client.upload(img_byte, "image/png")
logger.info(f"Image uploaded successfully. URL: {url}")
return url
except Exception as e:
logger.error(f"Failed to process or upload image: {str(e)}")
raise
def generate(self, **kwargs):
arguments = self.prepare_arguments(**kwargs)
result = self.call_api(arguments)
output_images = self.process_images(result)
return tuple(output_images)
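The min-max normalization used when converting raw buffers and tensors above can be sketched in isolation. A minimal sketch on plain Python lists (`to_uint8` is an illustrative name, not part of the node code); a flat input is mapped to zeros rather than dividing by zero:

```python
def to_uint8(values):
    # Min-max normalize a flat list of pixel values to the 0-255 range;
    # a constant input is mapped to zeros instead of dividing by zero.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0] * len(values)
    return [int((v - lo) / (hi - lo) * 255) for v in values]

print(to_uint8([0.0, 0.5, 1.0]))  # [0, 127, 255]
print(to_uint8([7, 7, 7]))        # [0, 0, 0]
```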

View File

@@ -0,0 +1,49 @@
class FalAPIFluxControlNetConfigNode:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"path": ("STRING", {
"multiline": False,
"default": "lllyasviel/sd-controlnet-canny"
}),
"control_image": ("IMAGE",),
"conditioning_scale": ("FLOAT", {
"default": 1.0,
"min": 0.1,
"max": 2.0,
"step": 0.1
}),
},
"optional": {
"config_url": ("STRING", {
"multiline": False,
"default": ""
}),
"variant": ("STRING", {
"multiline": False,
"default": ""
}),
}
}
RETURN_TYPES = ("CONTROLNET_CONFIG",)
FUNCTION = "configure_controlnet"
CATEGORY = "image generation"
def configure_controlnet(self, path, control_image, conditioning_scale, config_url="", variant=""):
return ({
"path": path,
"control_image": control_image,
"conditioning_scale": conditioning_scale,
"config_url": config_url if config_url else None,
"variant": variant if variant else None
},)
NODE_CLASS_MAPPINGS = {
"FalAPIFluxControlNetConfigNode": FalAPIFluxControlNetConfigNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxControlNetConfigNode": "Fal API Flux ControlNet Config"
}

View File

@@ -0,0 +1,53 @@
class FalAPIFluxControlNetUnionConfigNode:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"path": ("STRING", {
"multiline": False,
"default": "https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/resolve/main/diffusion_pytorch_model.safetensors"
}),
"control_image": ("IMAGE",),
"control_mode": (["canny", "tile", "depth", "blur", "pose", "gray", "lq"],),
"conditioning_scale": ("FLOAT", {
"default": 1.0,
"min": 0.1,
"max": 2.0,
"step": 0.1
}),
},
"optional": {
"config_url": ("STRING", {
"multiline": False,
"default": ""
}),
"variant": ("STRING", {
"multiline": False,
"default": ""
}),
}
}
RETURN_TYPES = ("CONTROLNET_UNION_CONFIG",)
FUNCTION = "configure_controlnet_union"
CATEGORY = "image generation"
def configure_controlnet_union(self, path, control_image, control_mode, conditioning_scale, config_url="", variant=""):
return ({
"path": path,
"controls": [{
"control_image": control_image,
"control_mode": control_mode,
"conditioning_scale": conditioning_scale
}],
"config_url": config_url if config_url else None,
"variant": variant if variant else None
},)
NODE_CLASS_MAPPINGS = {
"FalAPIFluxControlNetUnionConfigNode": FalAPIFluxControlNetUnionConfigNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxControlNetUnionConfigNode": "Fal API Flux ControlNet Union Config"
}
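The `control_mode` choices above feed the `controls` list that the downstream flux-general node sends to the API. A minimal sketch, assuming illustrative names (`build_union_controls`; `control_image_url` stands in for the URL returned by `upload_image`):

```python
ALLOWED_MODES = {"canny", "tile", "depth", "blur", "pose", "gray", "lq"}

def build_union_controls(control_image_url, control_mode, conditioning_scale):
    # Reject modes the union controlnet dropdown does not offer, then build
    # the single-entry controls list used by the union config.
    if control_mode not in ALLOWED_MODES:
        raise ValueError(f"Unsupported control_mode: {control_mode}")
    return [{
        "control_image_url": control_image_url,
        "control_mode": control_mode,
        "conditioning_scale": conditioning_scale,
    }]

print(build_union_controls("https://fal.media/files/edge.png", "canny", 1.0))
```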

View File

@@ -0,0 +1,65 @@
import logging
from .base_fal_api_flux_node import BaseFalAPIFluxNode
logger = logging.getLogger(__name__)
class FalAPIFluxDevCannyWithLoraNode(BaseFalAPIFluxNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-lora-canny")
def set_api_endpoint(self, endpoint):
super().set_api_endpoint(endpoint)
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
# Add control image input
input_types["required"].update({
"control_image": ("IMAGE",), # Accept input from another node
})
input_types["optional"].update({
"lora_1": ("LORA_CONFIG",),
"lora_2": ("LORA_CONFIG",),
"lora_3": ("LORA_CONFIG",),
"lora_4": ("LORA_CONFIG",),
"lora_5": ("LORA_CONFIG",),
})
return input_types
def prepare_arguments(self, control_image, lora_1=None, lora_2=None, lora_3=None, lora_4=None, lora_5=None,
**kwargs):
# Get base arguments from parent class
arguments = super().prepare_arguments(**kwargs)
# Upload the control image and get its URL
control_image_url = self.upload_image(control_image)
logger.info(f"Uploaded control image. URL: {control_image_url}")
# Update arguments with Canny-specific parameters
arguments.update({
"image_url": control_image_url
})
# Collect all provided LoRA configurations
loras = []
for lora in [lora_1, lora_2, lora_3, lora_4, lora_5]:
if lora is not None:
loras.append(lora)
if loras:
arguments["loras"] = loras
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevCannyWithLoraNode": FalAPIFluxDevCannyWithLoraNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevCannyWithLoraNode": "Fal API Flux Dev Canny With LoRA"
}
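The `lora_1` through `lora_5` handling above (repeated in the other LoRA-aware nodes) reduces to filtering out unconnected inputs. A minimal sketch with an illustrative helper name:

```python
def collect_loras(*loras):
    # Mirror the lora_1..lora_5 handling: keep only inputs that are connected.
    return [lora for lora in loras if lora is not None]

print(collect_loras({"path": "https://example.com/a.safetensors", "scale": 1.0},
                    None, None, None, None))
```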

View File

@@ -0,0 +1,42 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
import fal_client
import logging
import os
logger = logging.getLogger(__name__)
class FalAPIFluxDevImageToImageNode(BaseFalAPIFluxNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux/dev/image-to-image")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["required"].update({
"image": ("IMAGE",), # This makes it accept input from another node
"strength": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 1.0}),
})
return input_types
def prepare_arguments(self, image, strength, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Upload the image and get the URL
image_url = self.upload_image(image)
logger.info(f"Uploaded image. URL: {image_url}")
arguments.update({
"image_url": image_url,
"strength": strength,
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevImageToImageNode": FalAPIFluxDevImageToImageNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevImageToImageNode": "Fal API Flux Dev Image-to-Image"
}

View File

@@ -0,0 +1,18 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
class FalAPIFluxDevNode(BaseFalAPIFluxNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux/dev")
@classmethod
def INPUT_TYPES(cls):
return super().INPUT_TYPES()
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevNode": FalAPIFluxDevNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevNode": "Fal API Flux Dev"
}

View File

@@ -0,0 +1,50 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_dev_with_lora_and_controlnet_node import FalAPIFluxDevWithLoraAndControlNetNode
from PIL import Image
import torch
import io
import base64
import fal_client
import logging
import numpy as np
logger = logging.getLogger(__name__)
class FalAPIFluxDevWithLoraAndControlNetImageToImageNode(FalAPIFluxDevWithLoraAndControlNetNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-general/image-to-image")
def set_api_endpoint(self, endpoint):
super().set_api_endpoint(endpoint)
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["required"].update({
"image": ("IMAGE",), # This makes it accept input from another node
"strength": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 1.0}),
})
return input_types
def prepare_arguments(self, image, strength, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Upload the image and get the URL
image_url = self.upload_image(image)
logger.info(f"Uploaded image. URL: {image_url}")
arguments.update({
"image_url": image_url,
"strength": strength,
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevWithLoraAndControlNetImageToImageNode": FalAPIFluxDevWithLoraAndControlNetImageToImageNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevWithLoraAndControlNetImageToImageNode": "Fal API Flux with LoRA and ControlNet Image-to-Image"
}

View File

@@ -0,0 +1,48 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_dev_with_lora_and_controlnet_image_to_image_node import FalAPIFluxDevWithLoraAndControlNetImageToImageNode
from PIL import Image
import torch
import io
import base64
import fal_client
import logging
import numpy as np
logger = logging.getLogger(__name__)
class FalAPIFluxDevWithLoraAndControlNetInpaintNode(FalAPIFluxDevWithLoraAndControlNetImageToImageNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-general/inpainting")
def set_api_endpoint(self, endpoint):
super().set_api_endpoint(endpoint)
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["required"].update({
"mask_image": ("IMAGE",), # This makes it accept input from another node
})
return input_types
def prepare_arguments(self, mask_image, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Upload the mask image and get the URL
mask_url = self.upload_image(mask_image)
logger.info(f"Uploaded mask image. URL: {mask_url}")
arguments.update({
"mask_url": mask_url,
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevWithLoraAndControlNetInpaintNode": FalAPIFluxDevWithLoraAndControlNetInpaintNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevWithLoraAndControlNetInpaintNode": "Fal API Flux with LoRA and ControlNet Inpaint"
}

View File

@@ -0,0 +1,80 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from PIL import Image
import torch
import io
import base64
import fal_client
import logging
import numpy as np
logger = logging.getLogger(__name__)
class FalAPIFluxDevWithLoraAndControlNetNode(BaseFalAPIFluxNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-general")
def set_api_endpoint(self, endpoint):
super().set_api_endpoint(endpoint)
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["optional"].update({
"lora_1": ("LORA_CONFIG",),
"lora_2": ("LORA_CONFIG",),
"lora_3": ("LORA_CONFIG",),
"lora_4": ("LORA_CONFIG",),
"lora_5": ("LORA_CONFIG",),
"controlnet": ("CONTROLNET_CONFIG",),
"controlnet_union": ("CONTROLNET_UNION_CONFIG",),
})
return input_types
def prepare_arguments(self, lora_1=None, lora_2=None, lora_3=None, lora_4=None, lora_5=None,
controlnet=None, controlnet_union=None, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Collect all provided LoRA configurations
loras = []
for lora in [lora_1, lora_2, lora_3, lora_4, lora_5]:
if lora is not None:
loras.append(lora)
if loras:
arguments["loras"] = loras
if controlnet:
arguments["controlnets"] = [{
"path": controlnet["path"],
"control_image_url": self.upload_image(controlnet["control_image"]),
"conditioning_scale": controlnet["conditioning_scale"]
}]
if controlnet["config_url"]:
arguments["controlnets"][0]["config_url"] = controlnet["config_url"]
if controlnet["variant"]:
arguments["controlnets"][0]["variant"] = controlnet["variant"]
if controlnet_union:
arguments["controlnet_unions"] = [{
"path": controlnet_union["path"],
"controls": [{
"control_image_url": self.upload_image(control["control_image"]),
"control_mode": control["control_mode"],
"conditioning_scale": control["conditioning_scale"]
} for control in controlnet_union["controls"]]
}]
if controlnet_union["config_url"]:
arguments["controlnet_unions"][0]["config_url"] = controlnet_union["config_url"]
if controlnet_union["variant"]:
arguments["controlnet_unions"][0]["variant"] = controlnet_union["variant"]
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevWithLoraAndControlNetNode": FalAPIFluxDevWithLoraAndControlNetNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevWithLoraAndControlNetNode": "Fal API Flux with LoRA and ControlNet"
}
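The `controlnets` payload assembled above can be sketched independently. In this sketch `build_controlnets` and `upload_image_stub` are illustrative names; the stub stands in for the real `upload_image`, which uploads the tensor and returns a hosted URL:

```python
def upload_image_stub(image):
    # Stand-in for BaseFalAPIFluxNode.upload_image, which uploads the
    # image via fal_client and returns a hosted URL.
    return "https://fal.media/files/placeholder.png"

def build_controlnets(controlnet, upload=upload_image_stub):
    entry = {
        "path": controlnet["path"],
        "control_image_url": upload(controlnet["control_image"]),
        "conditioning_scale": controlnet["conditioning_scale"],
    }
    # config_url and variant are optional and only included when provided
    if controlnet.get("config_url"):
        entry["config_url"] = controlnet["config_url"]
    if controlnet.get("variant"):
        entry["variant"] = controlnet["variant"]
    return [entry]

config = {"path": "lllyasviel/sd-controlnet-canny", "control_image": None,
          "conditioning_scale": 0.8, "config_url": None, "variant": None}
print(build_controlnets(config))
```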

View File

@@ -0,0 +1,46 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_dev_with_lora_node import FalAPIFluxDevWithLoraNode
import fal_client
import logging
import os
logger = logging.getLogger(__name__)
class FalAPIFluxDevWithLoraImageToImageNode(FalAPIFluxDevWithLoraNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-lora/image-to-image")
def set_api_endpoint(self, endpoint):
super().set_api_endpoint(endpoint)
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["required"].update({
"image": ("IMAGE",), # This makes it accept input from another node
"strength": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 1.0}),
})
return input_types
def prepare_arguments(self, image, strength, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Upload the image and get the URL
image_url = self.upload_image(image)
logger.info(f"Uploaded image. URL: {image_url}")
arguments.update({
"image_url": image_url,
"strength": strength,
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevWithLoraImageToImageNode": FalAPIFluxDevWithLoraImageToImageNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevWithLoraImageToImageNode": "Fal API Flux Dev with LoRA Image-to-Image"
}

View File

@@ -0,0 +1,41 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_dev_with_lora_image_to_image_node import FalAPIFluxDevWithLoraImageToImageNode
import fal_client
import logging
import os
logger = logging.getLogger(__name__)
class FalAPIFluxDevWithLoraInpaintNode(FalAPIFluxDevWithLoraImageToImageNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-lora/inpainting")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["required"].update({
"mask_image": ("IMAGE",), # This makes it accept input from another node
})
return input_types
def prepare_arguments(self, mask_image, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Upload the mask image and get the URL
mask_url = self.upload_image(mask_image)
logger.info(f"Uploaded mask image. URL: {mask_url}")
arguments.update({
"mask_url": mask_url,
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevWithLoraInpaintNode": FalAPIFluxDevWithLoraInpaintNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevWithLoraInpaintNode": "Fal API Flux Dev with LoRA Inpaint"
}

View File

@@ -0,0 +1,43 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
class FalAPIFluxDevWithLoraNode(BaseFalAPIFluxNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-lora")
def set_api_endpoint(self, endpoint):
super().set_api_endpoint(endpoint)
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["optional"].update({
"lora_1": ("LORA_CONFIG",),
"lora_2": ("LORA_CONFIG",),
"lora_3": ("LORA_CONFIG",),
"lora_4": ("LORA_CONFIG",),
"lora_5": ("LORA_CONFIG",),
})
return input_types
def prepare_arguments(self, lora_1=None, lora_2=None, lora_3=None, lora_4=None, lora_5=None, **kwargs):
arguments = super().prepare_arguments(**kwargs)
# Collect all provided LoRA configurations
loras = []
for lora in [lora_1, lora_2, lora_3, lora_4, lora_5]:
if lora is not None:
loras.append(lora)
if loras:
arguments["loras"] = loras
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxDevWithLoraNode": FalAPIFluxDevWithLoraNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxDevWithLoraNode": "Fal API Flux Dev With LoRA"
}

View File

@@ -0,0 +1,35 @@
class FalAPIFluxLoraConfigNode:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"lora_url": ("STRING", {
"multiline": False,
"default": "https://example.com/path/to/lora.safetensors"
}),
"scale": ("FLOAT", {
"default": 1.0,
"min": 0.1,
"max": 2.0,
"step": 0.1
}),
}
}
RETURN_TYPES = ("LORA_CONFIG",)
FUNCTION = "configure_lora"
CATEGORY = "image generation"
def configure_lora(self, lora_url, scale):
if not lora_url.startswith(('http://', 'https://')):
raise ValueError("Invalid LoRA URL. Please enter a valid HTTP or HTTPS URL.")
return ({"path": lora_url, "scale": float(scale)},)
NODE_CLASS_MAPPINGS = {
"FalAPIFluxLoraConfigNode": FalAPIFluxLoraConfigNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxLoraConfigNode": "Fal API Flux LoRA Config"
}
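The URL check in `configure_lora` accepts only HTTP(S) URLs, so a local `.safetensors` path must be hosted somewhere reachable first. A minimal sketch mirroring that validation (`validate_lora_url` is an illustrative name):

```python
def validate_lora_url(lora_url):
    # Mirrors the check in configure_lora: only HTTP(S) URLs are accepted,
    # so local file paths are rejected before any API call is made.
    if not lora_url.startswith(("http://", "https://")):
        raise ValueError("Invalid LoRA URL. Please enter a valid HTTP or HTTPS URL.")
    return lora_url

validate_lora_url("https://example.com/path/to/lora.safetensors")  # accepted
try:
    validate_lora_url("loras/my_lora.safetensors")  # local path: rejected
except ValueError as exc:
    print(exc)
```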

View File

@@ -0,0 +1,44 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_pro_node import FalAPIFluxProNode
import logging
import torch
logger = logging.getLogger(__name__)
class FalAPIFluxProCannyNode(FalAPIFluxProNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/v1/canny")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
# Add control image input
input_types["required"].update({
"control_image": ("IMAGE",), # Accept input from another node
})
return input_types
def prepare_arguments(self, control_image, **kwargs):
# Get base arguments from parent class
arguments = super().prepare_arguments(**kwargs)
# Upload the control image and get its URL
control_image_url = self.upload_image(control_image)
logger.info(f"Uploaded control image. URL: {control_image_url}")
# Update arguments with Canny-specific parameters
arguments.update({
"control_image_url": control_image_url
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProCannyNode": FalAPIFluxProCannyNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProCannyNode": "Fal API Flux Pro Canny"
}

View File

@@ -0,0 +1,17 @@
from .fal_api_flux_pro_canny_node import FalAPIFluxProCannyNode
import logging
logger = logging.getLogger(__name__)
class FalAPIFluxProDepthNode(FalAPIFluxProCannyNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/v1/depth")
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProDepthNode": FalAPIFluxProDepthNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProDepthNode": "Fal API Flux Pro Depth"
}

View File

@@ -0,0 +1,47 @@
from .fal_api_flux_pro_node import FalAPIFluxProNode
import logging
import torch
logger = logging.getLogger(__name__)
class FalAPIFluxProFillNode(FalAPIFluxProNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/v1/fill")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
# Add image and mask inputs
input_types["required"].update({
"image": ("IMAGE",),
"mask_image": ("IMAGE",),
})
return input_types
def prepare_arguments(self, image, mask_image, **kwargs):
# Get base arguments from parent class
arguments = super().prepare_arguments(**kwargs)
# Upload the image and mask and get their URLs
image_url = self.upload_image(image)
mask_image_url = self.upload_image(mask_image)
logger.info(f"Uploaded target image. URL: {image_url}")
logger.info(f"Uploaded mask image. URL: {mask_image_url}")
# Update arguments with Fill-specific parameters
arguments.update({
"image_url": image_url,
"mask_url": mask_image_url
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProFillNode": FalAPIFluxProFillNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProFillNode": "Fal API Flux Pro Fill"
}

View File

@@ -0,0 +1,28 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
import logging
logger = logging.getLogger(__name__)
class FalAPIFluxProNode(BaseFalAPIFluxNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/new")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
input_types["required"]["safety_tolerance"] = (["1", "2", "3", "4", "5", "6"],)
return input_types
def prepare_arguments(self, safety_tolerance, **kwargs):
arguments = super().prepare_arguments(**kwargs)
arguments["safety_tolerance"] = safety_tolerance
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProNode": FalAPIFluxProNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProNode": "Fal API Flux Pro"
}

View File

@@ -0,0 +1,44 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_pro_node import FalAPIFluxProNode
import logging
import torch
logger = logging.getLogger(__name__)
class FalAPIFluxProReduxNode(FalAPIFluxProNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/v1/redux")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
# Add input image
input_types["required"].update({
"image": ("IMAGE",), # Accept input from another node
})
return input_types
def prepare_arguments(self, image, **kwargs):
# Get base arguments from parent class
arguments = super().prepare_arguments(**kwargs)
# Upload the input image and get its URL
image_url = self.upload_image(image)
logger.info(f"Uploaded target image. URL: {image_url}")
# Update arguments with Redux-specific parameters
arguments.update({
"image_url": image_url
})
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProReduxNode": FalAPIFluxProReduxNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProReduxNode": "Fal API Flux Pro Redux"
}

View File

@@ -0,0 +1,27 @@
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_pro_node import FalAPIFluxProNode
import logging
logger = logging.getLogger(__name__)
class FalAPIFluxProV11Node(FalAPIFluxProNode):
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/v1.1")
@classmethod
def INPUT_TYPES(cls):
input_types = super().INPUT_TYPES()
return input_types
def prepare_arguments(self, **kwargs):
arguments = super().prepare_arguments(**kwargs)
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProV11Node": FalAPIFluxProV11Node
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProV11Node": "Fal API Flux Pro v1.1"
}

View File

@@ -0,0 +1,65 @@
import logging
from .base_fal_api_flux_node import BaseFalAPIFluxNode
from .fal_api_flux_pro_v11_node import FalAPIFluxProV11Node
logger = logging.getLogger(__name__)
class FalAPIFluxProV11UltraNode(FalAPIFluxProV11Node):
"""
See https://fal.ai/models/fal-ai/flux-pro/v1.1-ultra/api
"""
def __init__(self):
super().__init__()
self.set_api_endpoint("fal-ai/flux-pro/v1.1-ultra")
@classmethod
def INPUT_TYPES(cls):
# get input types from Flux Pro 1.1
input_types = super().INPUT_TYPES()
# remove `width` and `height` from inputs
# the 1.1 ultra API replaces these with `aspect_ratio`
del input_types["required"]["width"]
del input_types["required"]["height"]
# remove `num_inference_steps` and `guidance_scale`
del input_types["required"]["num_inference_steps"]
del input_types["required"]["guidance_scale"]
# add `aspect_ratio`
input_types["required"]["aspect_ratio"] = (["16:9", "4:3", "21:9", "1:1", "3:4", "9:16", "9:21"],)
# add `raw`
input_types["required"]["raw"] = ("BOOLEAN", {"default": True})
# add `output_format`
input_types["required"]["output_format"] = (["jpeg", "png"],)
return input_types
def prepare_arguments(self, prompt, aspect_ratio, num_images, safety_tolerance,
enable_safety_checker, output_format, raw, seed=None, **kwargs):
# override from base since we don't have width and height
if not self.api_key:
raise ValueError("API key is not set. Please check your config.ini file.")
arguments = {"prompt": prompt, "raw": raw, "num_images": num_images,
"enable_safety_checker": enable_safety_checker,
"safety_tolerance": safety_tolerance, "aspect_ratio": aspect_ratio, "output_format": output_format}
if seed is not None and seed != 0:
arguments["seed"] = seed
return arguments
NODE_CLASS_MAPPINGS = {
"FalAPIFluxProV11UltraNode": FalAPIFluxProV11UltraNode
}
NODE_DISPLAY_NAME_MAPPINGS = {
"FalAPIFluxProV11UltraNode": "Fal API Flux Pro v1.1 Ultra"
}
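The Ultra variant drops `width`/`height` in favor of `aspect_ratio`, and treats a seed of 0 as unset. A minimal sketch of the resulting payload (`build_ultra_arguments` is an illustrative name; the real method also checks that the API key is configured):

```python
def build_ultra_arguments(prompt, aspect_ratio, num_images, safety_tolerance,
                          enable_safety_checker, output_format, raw, seed=None):
    # width/height are replaced by aspect_ratio for the v1.1-ultra endpoint,
    # and a seed of 0 is treated as "unset" (the API picks one).
    arguments = {"prompt": prompt, "raw": raw, "num_images": num_images,
                 "enable_safety_checker": enable_safety_checker,
                 "safety_tolerance": safety_tolerance,
                 "aspect_ratio": aspect_ratio, "output_format": output_format}
    if seed is not None and seed != 0:
        arguments["seed"] = seed
    return arguments

args = build_ultra_arguments("a cat", "16:9", 1, "2", True, "jpeg", False, seed=0)
print("seed" in args)  # False
```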

View File

@@ -0,0 +1,15 @@
[project]
name = "comfyui-fal-api-flux"
description = "Custom nodes for ComfyUI that integrate the fal.ai FLUX.1 [pro] and FLUX.1 [dev] APIs, including LoRA and ControlNet support, so the FLUX.1 models can be used for text-to-image generation directly within ComfyUI workflows."
version = "1.5.2"
license = {file = "LICENSE"}
dependencies = ["requests==2.32.3", "configparser==7.1.0", "fal-client==0.4.1", "numpy==2.1.1", "torch"]
[project.urls]
Repository = "https://github.com/yhayano-ponotech/ComfyUI-Fal-API-Flux"
# Used by Comfy Registry https://comfyregistry.org
[tool.comfy]
PublisherId = "yas-ponotech"
DisplayName = "ComfyUI-Fal-API-Flux"
Icon = ""

View File

@@ -0,0 +1,5 @@
requests==2.32.3
configparser==7.1.0
fal-client==0.4.1
numpy==2.1.1
torch