Commit

CI/CD Bot committed Mar 19, 2024
1 parent 529449e commit f7de496
Showing 156 changed files with 10,300 additions and 0 deletions.
63 changes: 63 additions & 0 deletions docs/md/Comfy/BasicScheduler.md
@@ -0,0 +1,63 @@
# BasicScheduler
## Documentation
- Class name: `BasicScheduler`
- Category: `sampling/custom_sampling/schedulers`
- Output node: `False`

The BasicScheduler node computes the sequence of sigma values that drives a diffusion model's sampling, using the selected scheduler, the model, a step count, and a denoise factor. When denoise is below 1.0 it inflates the internal step count to steps / denoise, generates sigmas for that longer schedule, and returns only the final steps + 1 values, so partial denoising reuses the tail of a full schedule.
## Input types
### Required
- **`model`**
- The model parameter specifies the diffusion model for which the sigma values are to be calculated. It is crucial for determining the behavior of the diffusion process.
- Python dtype: `comfy.model_patcher.ModelPatcher`
- Comfy dtype: `MODEL`
- **`scheduler`**
- The scheduler parameter determines the method used to calculate the sigma values. It affects the diffusion process by altering the noise levels at each step.
- Python dtype: `str`
- Comfy dtype: `COMBO[STRING]`
- **`steps`**
- Specifies the number of diffusion steps. It directly influences the granularity of the diffusion process.
- Python dtype: `int`
- Comfy dtype: `INT`
- **`denoise`**
- A factor in (0, 1] that rescales the schedule: values below 1.0 inflate the internally computed step count to steps / denoise, so the returned sigmas cover only the final portion of a full schedule, as in img2img-style partial denoising.
- Python dtype: `float`
- Comfy dtype: `FLOAT`
## Output types
- **`sigmas`**
- A sequence of sigma values calculated for the diffusion model. These values are essential for controlling the noise level throughout the diffusion process.
- Python dtype: `torch.Tensor`
- Comfy dtype: `SIGMAS`
## Usage tips
- Infra type: `GPU`
- Common nodes: `SamplerCustom`, `SplitSigmas`, `Reroute`


## Source code
```python
class BasicScheduler:
@classmethod
def INPUT_TYPES(s):
return {"required":
{"model": ("MODEL",),
"scheduler": (comfy.samplers.SCHEDULER_NAMES, ),
"steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
"denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
}
}
RETURN_TYPES = ("SIGMAS",)
CATEGORY = "sampling/custom_sampling/schedulers"

FUNCTION = "get_sigmas"

def get_sigmas(self, model, scheduler, steps, denoise):
total_steps = steps
if denoise < 1.0:
total_steps = int(steps/denoise)

comfy.model_management.load_models_gpu([model])
sigmas = comfy.samplers.calculate_sigmas_scheduler(model.model, scheduler, total_steps).cpu()
sigmas = sigmas[-(steps + 1):]
return (sigmas, )

```
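To make the denoise arithmetic concrete: with `steps=20` and `denoise=0.5`, the node computes sigmas for `int(20 / 0.5) = 40` steps and returns only the last 21 values, i.e. the tail of a 40-step schedule. Below is a minimal standalone sketch of that logic; the toy linear schedule stands in for `comfy.samplers.calculate_sigmas_scheduler` and is an assumption, not the real scheduler math.

```python
import torch

def toy_sigmas(total_steps: int) -> torch.Tensor:
    # Placeholder schedule: linearly decreasing sigmas ending at 0.
    # Real schedulers (normal, karras, exponential, ...) differ.
    return torch.linspace(14.6, 0.0, total_steps + 1)

def basic_scheduler_sigmas(steps: int, denoise: float) -> torch.Tensor:
    total_steps = steps
    if denoise < 1.0:
        total_steps = int(steps / denoise)  # inflate the schedule
    sigmas = toy_sigmas(total_steps)
    return sigmas[-(steps + 1):]  # keep only the final steps + 1 values

print(basic_scheduler_sigmas(20, 0.5).shape)  # torch.Size([21])
```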
50 changes: 50 additions & 0 deletions docs/md/Comfy/CLIPLoader.md
@@ -0,0 +1,50 @@
# Load CLIP
## Documentation
- Class name: `CLIPLoader`
- Category: `advanced/loaders`
- Output node: `False`

The CLIPLoader node is responsible for loading a CLIP model based on the specified name and type. It supports loading different types of CLIP models, such as 'stable_diffusion' and 'stable_cascade', by fetching the model from a specified path and applying the appropriate CLIP type configuration.
## Input types
### Required
- **`clip_name`**
- Specifies the name of the CLIP model to be loaded. This name is used to locate the model file in a predefined directory structure.
- Python dtype: `str`
- Comfy dtype: `COMBO[STRING]`
- **`type`**
- Selects the CLIP variant to load, either 'stable_diffusion' or 'stable_cascade', which determines how the model is initialized.
- Python dtype: `str`
- Comfy dtype: `COMBO[STRING]`
## Output types
- **`clip`**
- The loaded CLIP model, ready for use in further processing or analysis.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
## Usage tips
- Infra type: `GPU`
- Common nodes: unknown


## Source code
```python
class CLIPLoader:
@classmethod
def INPUT_TYPES(s):
return {"required": { "clip_name": (folder_paths.get_filename_list("clip"), ),
"type": (["stable_diffusion", "stable_cascade"], ),
}}
RETURN_TYPES = ("CLIP",)
FUNCTION = "load_clip"

CATEGORY = "advanced/loaders"

def load_clip(self, clip_name, type="stable_diffusion"):
clip_type = comfy.sd.CLIPType.STABLE_DIFFUSION
if type == "stable_cascade":
clip_type = comfy.sd.CLIPType.STABLE_CASCADE

clip_path = folder_paths.get_full_path("clip", clip_name)
clip = comfy.sd.load_clip(ckpt_paths=[clip_path], embedding_directory=folder_paths.get_folder_paths("embeddings"), clip_type=clip_type)
return (clip,)

```
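A hedged usage sketch, assuming a ComfyUI runtime and a hypothetical file `my_clip.safetensors` placed in the configured `clip` models folder:

```python
# Hypothetical call from custom code running inside ComfyUI;
# "my_clip.safetensors" is an assumed file name, not shipped with ComfyUI.
loader = CLIPLoader()
(clip,) = loader.load_clip("my_clip.safetensors", type="stable_diffusion")
```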
55 changes: 55 additions & 0 deletions docs/md/Comfy/CLIPMergeSimple.md
@@ -0,0 +1,55 @@
# CLIPMergeSimple
## Documentation
- Class name: `CLIPMergeSimple`
- Category: `advanced/model_merging`
- Output node: `False`

The `CLIPMergeSimple` node merges two CLIP models by linearly interpolating their weights at the specified ratio, skipping parameters for position IDs and logit scale. The result blends the characteristics of both input models in a single CLIP.
## Input types
### Required
- **`clip1`**
- The first CLIP model to be merged. It serves as the base model for the merging process.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
- **`clip2`**
- The second CLIP model to be merged. Its key patches, except for those related to position IDs and logit scale, are added to the first model based on the specified ratio.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
- **`ratio`**
- Determines the blend between the two models: merged weights are `clip1 * ratio + clip2 * (1 - ratio)`. A ratio of 1.0 keeps the first model's weights unchanged, while 0.0 fully adopts the second model's.
- Python dtype: `float`
- Comfy dtype: `FLOAT`
## Output types
- **`clip`**
- The resulting CLIP model after merging the specified models according to the given ratio.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
## Usage tips
- Infra type: `GPU`
- Common nodes: `CR Apply LoRA Stack`, `CLIPSetLastLayer`


## Source code
```python
class CLIPMergeSimple:
@classmethod
def INPUT_TYPES(s):
return {"required": { "clip1": ("CLIP",),
"clip2": ("CLIP",),
"ratio": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
}}
RETURN_TYPES = ("CLIP",)
FUNCTION = "merge"

CATEGORY = "advanced/model_merging"

def merge(self, clip1, clip2, ratio):
m = clip1.clone()
kp = clip2.get_key_patches()
for k in kp:
if k.endswith(".position_ids") or k.endswith(".logit_scale"):
continue
m.add_patches({k: kp[k]}, 1.0 - ratio, ratio)
return (m, )

```
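To make the weighting explicit, here is a minimal sketch of the per-tensor interpolation that `add_patches(..., 1.0 - ratio, ratio)` performs, using plain tensors rather than ComfyUI's patch tuples (an assumption for illustration):

```python
import torch

def merge_weight(w1: torch.Tensor, w2: torch.Tensor, ratio: float) -> torch.Tensor:
    # The existing weight (clip1) is scaled by strength_model=ratio and
    # the patch (clip2) is added with strength_patch=1.0 - ratio.
    return w1 * ratio + w2 * (1.0 - ratio)

w1, w2 = torch.ones(2), torch.zeros(2)
print(merge_weight(w1, w2, 1.0))  # tensor([1., 1.]) -> pure clip1
print(merge_weight(w1, w2, 0.0))  # tensor([0., 0.]) -> pure clip2
```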
83 changes: 83 additions & 0 deletions docs/md/Comfy/CLIPSave.md
@@ -0,0 +1,83 @@
# CLIPSave
## Documentation
- Class name: `CLIPSave`
- Category: `advanced/model_merging`
- Output node: `True`

The `CLIPSave` node saves a CLIP model to disk together with optional prompt and extra PNG metadata. It serializes the model's state dict to `.safetensors` files, splitting `clip_l.` and `clip_g.` components into separate files when present, which makes merged or customized encoders easy to manage and reuse.
## Input types
### Required
- **`clip`**
- The CLIP model to be saved. This parameter is crucial as it represents the model whose state is being serialized for future use.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
- **`filename_prefix`**
- A prefix for the filename under which the model and its associated data will be saved. This allows for organized storage and easy retrieval of saved models.
- Python dtype: `str`
- Comfy dtype: `STRING`
## Output types
This node has no outputs; it writes checkpoint files to the output directory as a side effect.
## Usage tips
- Infra type: `CPU`
- Common nodes: unknown


## Source code
```python
class CLIPSave:
def __init__(self):
self.output_dir = folder_paths.get_output_directory()

@classmethod
def INPUT_TYPES(s):
return {"required": { "clip": ("CLIP",),
"filename_prefix": ("STRING", {"default": "clip/ComfyUI"}),},
"hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"},}
RETURN_TYPES = ()
FUNCTION = "save"
OUTPUT_NODE = True

CATEGORY = "advanced/model_merging"

def save(self, clip, filename_prefix, prompt=None, extra_pnginfo=None):
prompt_info = ""
if prompt is not None:
prompt_info = json.dumps(prompt)

metadata = {}
if not args.disable_metadata:
metadata["prompt"] = prompt_info
if extra_pnginfo is not None:
for x in extra_pnginfo:
metadata[x] = json.dumps(extra_pnginfo[x])

comfy.model_management.load_models_gpu([clip.load_model()])
clip_sd = clip.get_sd()

for prefix in ["clip_l.", "clip_g.", ""]:
k = list(filter(lambda a: a.startswith(prefix), clip_sd.keys()))
current_clip_sd = {}
for x in k:
current_clip_sd[x] = clip_sd.pop(x)
if len(current_clip_sd) == 0:
continue

p = prefix[:-1]
replace_prefix = {}
filename_prefix_ = filename_prefix
if len(p) > 0:
filename_prefix_ = "{}_{}".format(filename_prefix_, p)
replace_prefix[prefix] = ""
replace_prefix["transformer."] = ""

full_output_folder, filename, counter, subfolder, filename_prefix_ = folder_paths.get_save_image_path(filename_prefix_, self.output_dir)

output_checkpoint = f"{filename}_{counter:05}_.safetensors"
output_checkpoint = os.path.join(full_output_folder, output_checkpoint)

current_clip_sd = comfy.utils.state_dict_prefix_replace(current_clip_sd, replace_prefix)

comfy.utils.save_torch_file(current_clip_sd, output_checkpoint, metadata=metadata)
return {}

```
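The loop above splits a combined state dict into per-encoder files (`clip_l`, `clip_g`, and a remainder) and strips prefixes before saving. A toy approximation of the key rewriting, assuming `state_dict_prefix_replace` behaves like a simple prefix substitution:

```python
def strip_prefixes(sd: dict, replace_prefix: dict) -> dict:
    # Approximation of comfy.utils.state_dict_prefix_replace, for illustration.
    out = {}
    for key, value in sd.items():
        for old, new in replace_prefix.items():
            if key.startswith(old):
                key = new + key[len(old):]
        out[key] = value
    return out

sd = {"clip_g.transformer.text_model.embeddings.weight": 0}
print(strip_prefixes(sd, {"clip_g.": "", "transformer.": ""}))
# {'text_model.embeddings.weight': 0}
```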
46 changes: 46 additions & 0 deletions docs/md/Comfy/CLIPSetLastLayer.md
@@ -0,0 +1,46 @@
# CLIP Set Last Layer
## Documentation
- Class name: `CLIPSetLastLayer`
- Category: `conditioning`
- Output node: `False`

The `CLIPSetLastLayer` node clones a CLIP model and truncates text encoding at the specified layer index, the setting commonly known as 'CLIP skip'. Stopping before the final layer changes the character of the resulting embeddings and can noticeably alter generation results.
## Input types
### Required
- **`clip`**
- The `clip` parameter represents the CLIP model to be modified. It is crucial for defining the model on which the layer adjustment will be performed.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
- **`stop_at_clip_layer`**
- The `stop_at_clip_layer` parameter specifies the layer at which CLIP encoding stops, counted from the end: -1 uses every layer, -2 stops one layer early, and so on down to -24. Earlier stopping points change the embeddings and therefore the model's output.
- Python dtype: `int`
- Comfy dtype: `INT`
## Output types
- **`clip`**
- Returns the modified CLIP model with the last layer set to the specified index. This adjusted model can then be used for further processing or embedding generation.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
## Usage tips
- Infra type: `GPU`
- Common nodes: `CLIPTextEncode`, `LoraLoader`, `CR Apply LoRA Stack`, `Text to Conditioning`, `Reroute`, `PromptControlSimple`, `BatchPromptSchedule`, `CLIPTextEncodeA1111`, `FaceDetailer`, `BNK_CutoffBasePrompt`


## Source code
```python
class CLIPSetLastLayer:
@classmethod
def INPUT_TYPES(s):
return {"required": { "clip": ("CLIP", ),
"stop_at_clip_layer": ("INT", {"default": -1, "min": -24, "max": -1, "step": 1}),
}}
RETURN_TYPES = ("CLIP",)
FUNCTION = "set_last_layer"

CATEGORY = "conditioning"

def set_last_layer(self, clip, stop_at_clip_layer):
clip = clip.clone()
clip.clip_layer(stop_at_clip_layer)
return (clip,)

```
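A brief usage sketch: the popular 'CLIP skip 2' setting corresponds to `stop_at_clip_layer=-2`. The `clip` object here is assumed to come from a loader node such as CLIPLoader above:

```python
# Hypothetical: `clip` was produced by a loader node earlier in the workflow.
node = CLIPSetLastLayer()
(clip_skipped,) = node.set_last_layer(clip, stop_at_clip_layer=-2)
```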
44 changes: 44 additions & 0 deletions docs/md/Comfy/CLIPTextEncode.md
@@ -0,0 +1,44 @@
# CLIP Text Encode (Prompt)
## Documentation
- Class name: `CLIPTextEncode`
- Category: `conditioning`
- Output node: `False`

The `CLIPTextEncode` node encodes text inputs using a CLIP model to produce conditioning information. It tokenizes the input text and then encodes these tokens to generate a conditioning vector and a pooled output, which are used for further processing or generation tasks.
## Input types
### Required
- **`text`**
- The text input to be encoded. This is tokenized and encoded to produce the conditioning information.
- Python dtype: `str`
- Comfy dtype: `STRING`
- **`clip`**
- The CLIP model used for text tokenization and encoding. It plays a crucial role in transforming the input text into a format suitable for generating conditioning information.
- Python dtype: `comfy.sd.CLIP`
- Comfy dtype: `CLIP`
## Output types
- **`conditioning`**
- The output conditioning information, consisting of a conditioning vector and a pooled output, derived from the encoded text. This information is crucial for guiding the generation process in tasks such as image synthesis.
- Python dtype: `List[Tuple[torch.Tensor, Dict[str, torch.Tensor]]]`
- Comfy dtype: `CONDITIONING`
## Usage tips
- Infra type: `GPU`
- Common nodes: `KSampler`, `ControlNetApplyAdvanced`, `KSampler //Inspire`, `SamplerCustom`, `Reroute`, `KSamplerAdvanced`, `ACN_AdvancedControlNetApply`, `ToBasicPipe`, `FaceDetailer`


## Source code
```python
class CLIPTextEncode:
@classmethod
def INPUT_TYPES(s):
return {"required": {"text": ("STRING", {"multiline": True}), "clip": ("CLIP", )}}
RETURN_TYPES = ("CONDITIONING",)
FUNCTION = "encode"

CATEGORY = "conditioning"

def encode(self, clip, text):
tokens = clip.tokenize(text)
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
return ([[cond, {"pooled_output": pooled}]], )

```
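A hedged usage sketch, assuming a `clip` object from one of the loader nodes above; the returned list-of-pairs structure is what downstream samplers consume as `CONDITIONING` (the printed shape is indicative of an SD1.x text encoder, not guaranteed):

```python
# Hypothetical: `clip` was produced by CLIPLoader / CLIPSetLastLayer above.
encoder = CLIPTextEncode()
(conditioning,) = encoder.encode(clip, "a photo of an astronaut riding a horse")
cond, extras = conditioning[0]
print(cond.shape, extras.keys())  # e.g. torch.Size([1, 77, 768]) dict_keys(['pooled_output'])
```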