Stable Diffusion | ComfyUI API Workflow Auto-Optimization


ComfyUI can save a generation workflow directly in API format, but the resulting file has many lines and its node order does not match the logical execution order, which makes writing or modifying API-calling code inconvenient.

The previous article, Stable Cascade | ComfyUI API 工作流格式优化 (Stable Cascade | ComfyUI API Workflow Format Optimization), covered the structure of API workflows and how to optimize them by hand.

This article presents Python code that optimizes ComfyUI API workflows automatically.

1. Topological Sort Function

A ComfyUI API workflow can be viewed as a directed graph, so its nodes can be topologically sorted (Kahn's algorithm) using the deque class from the collections module. The sort function:

```python
from collections import deque

def topological_sort(graph):
    # Kahn's algorithm: count incoming edges for every node
    in_degree = {v: 0 for v in graph}
    for node in graph:
        for neighbor in graph[node]:
            in_degree[neighbor] += 1
    # Start from nodes with no incoming edges
    queue = deque([node for node in in_degree if in_degree[node] == 0])
    result = []
    while queue:
        node = queue.popleft()
        result.append(node)
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    # If not every node was emitted, the graph contains a cycle
    if len(result) != len(graph):
        return None
    return result
```

The topological sort function is adapted from the article Python深度学习-有向图合并、排序、最长路径计算 (Python deep learning: merging, sorting, and longest-path computation for directed graphs).
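As a quick check, the function can be exercised on a toy graph (the function is repeated here so the snippet runs standalone); a cyclic graph makes it return None:

```python
from collections import deque

def topological_sort(graph):
    in_degree = {v: 0 for v in graph}
    for node in graph:
        for neighbor in graph[node]:
            in_degree[neighbor] += 1
    queue = deque([n for n in in_degree if in_degree[n] == 0])
    result = []
    while queue:
        node = queue.popleft()
        result.append(node)
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    return result if len(result) == len(graph) else None

# 'a' points to 'b' and 'c'; both point to 'd'
print(topological_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}))
# → ['a', 'b', 'c', 'd']

# A cycle leaves every node with a nonzero in-degree, so None is returned
print(topological_sort({'x': ['y'], 'y': ['x']}))
# → None
```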

2. Converting the Workflow to a Directed Graph

```python
import json

# Load the API-format workflow
with open('workflow_api.json', 'r', encoding='utf-8') as file:
    workflow = json.load(file)

# Convert the workflow into a directed graph:
# an input given as a list ["node_id", slot] is a link to another node
link_list = {}
for node in workflow:
    node_link = []
    for input_name in workflow[node]["inputs"]:
        if isinstance(workflow[node]["inputs"][input_name], list):
            node_link.append(workflow[node]["inputs"][input_name][0])
    link_list[node] = node_link
print(link_list)
```

Sample workflow (contents of workflow_api.json):

{ "3": { "inputs": { "seed": 314307448448003, "steps": 20, "cfg": 4, "sampler_name": "euler_ancestral", "scheduler": "simple", "denoise": 1, "model": [ "41", 0 ], "positive": [ "6", 0 ], "negative": [ "7", 0 ], "latent_image": [ "34", 0 ] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } }, "6": { "inputs": { "text": "evening sunset scenery blue sky nature, glass bottle with a fizzy ice cold freezing rainbow liquid in it", "clip": [ "41", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Positive Prompt)" } }, "7": { "inputs": { "text": "text, watermark", "clip": [ "41", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Negative Prompt)" } }, "8": { "inputs": { "samples": [ "33", 0 ], "vae": [ "42", 2 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } }, "9": { "inputs": { "filename_prefix": "ComfyUI", "images": [ "8", 0 ] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } }, "33": { "inputs": { "seed": 183495397600639, "steps": 10, "cfg": 1.1, "sampler_name": "euler_ancestral", "scheduler": "simple", "denoise": 1, "model": [ "42", 0 ], "positive": [ "36", 0 ], "negative": [ "7", 0 ], "latent_image": [ "34", 1 ] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } }, "34": { "inputs": { "width": 1024, "height": 1024, "compression": 42, "batch_size": 1 }, "class_type": "StableCascade_EmptyLatentImage", "_meta": { "title": "StableCascade_EmptyLatentImage" } }, "36": { "inputs": { "conditioning": [ "6", 0 ], "stage_c": [ "3", 0 ] }, "class_type": "StableCascade_StageB_Conditioning", "_meta": { "title": "StableCascade_StageB_Conditioning" } }, "41": { "inputs": { "ckpt_name": "stable_cascade_stage_c.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load Checkpoint" } }, "42": { "inputs": { "ckpt_name": "stable_cascade_stage_b.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load Checkpoint" } } }

Resulting directed graph:

```
{'3': ['41', '6', '7', '34'], '6': ['41'], '7': ['41'], '8': ['33', '42'], '9': ['8'], '33': ['42', '36', '7', '34'], '34': [], '36': ['6', '3'], '41': [], '42': []}
```

Take node 3 as an example: the graph shows that node 3 points to nodes 41, 6, 7 and 34. Note that these edges point toward a node's inputs, whereas execution flows from inputs to outputs, so the node sequence must be reversed once after the topological sort.
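To make the edge direction concrete, here is the same conversion applied to a hypothetical two-node workflow (node ids and values are made up for illustration): the consumer node "2" ends up pointing back at the producer node "1".

```python
import json

# A minimal two-node workflow: node "2" consumes node "1"'s output
mini_workflow = json.loads('''{
  "1": {"inputs": {"ckpt_name": "model.safetensors"}, "class_type": "CheckpointLoaderSimple"},
  "2": {"inputs": {"text": "a cat", "clip": ["1", 1]}, "class_type": "CLIPTextEncode"}
}''')

# Same conversion rule as above: list-valued inputs are links
link_list = {}
for node in mini_workflow:
    link_list[node] = [v[0] for v in mini_workflow[node]["inputs"].values()
                       if isinstance(v, list)]
print(link_list)  # → {'1': [], '2': ['1']}
```

The edge runs from consumer to producer, which is exactly why the sorted sequence has to be reversed to obtain execution order.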

3. Topologically Sorting the Workflow

Call the topological sort function on the workflow graph, then reverse the resulting node sequence.

```python
# Topologically sort the workflow graph, then reverse the order
order_list = topological_sort(link_list)[::-1]
print(f'Topological sort of the original workflow: {order_list}')
```

Sort result:

```
Topological sort of the original workflow: ['41', '34', '7', '6', '3', '36', '42', '33', '8', '9']
```

4. Renumbering the Workflow Nodes

Renumbering steps:

1. Following the topologically sorted node order, replace each node id with a larger number that cannot collide with any existing id (here the ids are temporarily renumbered to 1001, 1002, 1003, ...);
2. sort the workflow with OrderedDict from the collections module;
3. renumber the nodes from 1.
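The reason for starting the temporary ids at 1001 can be seen directly from how Python sorts id strings: small mixed-length ids sort lexicographically, while equal-length ids starting at 1001 sort in numeric order.

```python
# Plain string sort puts "10" before "2"
print(sorted(['1', '2', '10', '11']))            # → ['1', '10', '11', '2']

# Equal-length ids starting at 1001 keep numeric order
print(sorted(['1001', '1002', '1010', '1011']))  # → ['1001', '1002', '1010', '1011']
```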
```python
from collections import OrderedDict

# Temporary node ids: start at max node count * 10 + 1 so new ids cannot
# collide with existing ids, and so the later string sort cannot produce
# an order like 1, 10, 11, ..., 2
max_nodes = 100
new_node_id = max_nodes * 10 + 1
workflow_string = json.dumps(workflow)
for node in order_list:
    workflow_string = workflow_string.replace(f'"{node}"', f'"{new_node_id}"')
    new_node_id += 1
workflow = json.loads(workflow_string)

# Sort the workflow by node id
sorted_data = OrderedDict(sorted(workflow.items()))
workflow = json.loads(json.dumps(sorted_data))

# Final renumbering, starting from 1
new_node_id = 1
workflow_string = json.dumps(workflow)
for node in workflow:
    workflow_string = workflow_string.replace(f'"{node}"', f'"{new_node_id}"')
    new_node_id += 1
workflow = json.loads(workflow_string)
print(workflow)
```

Output:

```
{'1': {'inputs': {'ckpt_name': 'stable_cascade_stage_c.safetensors'}, 'class_type': 'CheckpointLoaderSimple', '_meta': {'title': 'Load Checkpoint'}}, '2': {'inputs': {'width': 1024, 'height': 1024, 'compression': 42, 'batch_size': 1}, 'class_type': 'StableCascade_EmptyLatentImage', '_meta': {'title': 'StableCascade_EmptyLatentImage'}}, '3': {'inputs': {'text': 'text, watermark', 'clip': ['1', 1]}, 'class_type': 'CLIPTextEncode', '_meta': {'title': 'CLIP Text Encode (Negative Prompt)'}}, '4': {'inputs': {'text': 'evening sunset scenery blue sky nature, glass bottle with a fizzy ice cold freezing rainbow liquid in it', 'clip': ['1', 1]}, 'class_type': 'CLIPTextEncode', '_meta': {'title': 'CLIP Text Encode (Positive Prompt)'}}, '5': {'inputs': {'seed': 314307448448003, 'steps': 20, 'cfg': 4, 'sampler_name': 'euler_ancestral', 'scheduler': 'simple', 'denoise': 1, 'model': ['1', 0], 'positive': ['4', 0], 'negative': ['3', 0], 'latent_image': ['2', 0]}, 'class_type': 'KSampler', '_meta': {'title': 'KSampler'}}, '6': {'inputs': {'conditioning': ['4', 0], 'stage_c': ['5', 0]}, 'class_type': 'StableCascade_StageB_Conditioning', '_meta': {'title': 'StableCascade_StageB_Conditioning'}}, '7': {'inputs': {'ckpt_name': 'stable_cascade_stage_b.safetensors'}, 'class_type': 'CheckpointLoaderSimple', '_meta': {'title': 'Load Checkpoint'}}, '8': {'inputs': {'seed': 183495397600639, 'steps': 10, 'cfg': 1.1, 'sampler_name': 'euler_ancestral', 'scheduler': 'simple', 'denoise': 1, 'model': ['7', 0], 'positive': ['6', 0], 'negative': ['3', 0], 'latent_image': ['2', 1]}, 'class_type': 'KSampler', '_meta': {'title': 'KSampler'}}, '9': {'inputs': {'samples': ['8', 0], 'vae': ['7', 2]}, 'class_type': 'VAEDecode', '_meta': {'title': 'VAE Decode'}}, '10': {'inputs': {'filename_prefix': 'ComfyUI', 'images': ['9', 0]}, 'class_type': 'SaveImage', '_meta': {'title': 'Save Image'}}}
```

5. Simplifying the Workflow

The '_meta': {'title': '...'} field in each node has no effect when calling the API, so it can be deleted to simplify the workflow.

```python
# Remove the "_meta" sub-key from every node
for node in workflow:
    del workflow[node]["_meta"]
print(workflow)
```

Output:

```
{'1': {'inputs': {'ckpt_name': 'stable_cascade_stage_c.safetensors'}, 'class_type': 'CheckpointLoaderSimple'}, '2': {'inputs': {'width': 1024, 'height': 1024, 'compression': 42, 'batch_size': 1}, 'class_type': 'StableCascade_EmptyLatentImage'}, '3': {'inputs': {'text': 'text, watermark', 'clip': ['1', 1]}, 'class_type': 'CLIPTextEncode'}, '4': {'inputs': {'text': 'evening sunset scenery blue sky nature, glass bottle with a fizzy ice cold freezing rainbow liquid in it', 'clip': ['1', 1]}, 'class_type': 'CLIPTextEncode'}, '5': {'inputs': {'seed': 314307448448003, 'steps': 20, 'cfg': 4, 'sampler_name': 'euler_ancestral', 'scheduler': 'simple', 'denoise': 1, 'model': ['1', 0], 'positive': ['4', 0], 'negative': ['3', 0], 'latent_image': ['2', 0]}, 'class_type': 'KSampler'}, '6': {'inputs': {'conditioning': ['4', 0], 'stage_c': ['5', 0]}, 'class_type': 'StableCascade_StageB_Conditioning'}, '7': {'inputs': {'ckpt_name': 'stable_cascade_stage_b.safetensors'}, 'class_type': 'CheckpointLoaderSimple'}, '8': {'inputs': {'seed': 183495397600639, 'steps': 10, 'cfg': 1.1, 'sampler_name': 'euler_ancestral', 'scheduler': 'simple', 'denoise': 1, 'model': ['7', 0], 'positive': ['6', 0], 'negative': ['3', 0], 'latent_image': ['2', 1]}, 'class_type': 'KSampler'}, '9': {'inputs': {'samples': ['8', 0], 'vae': ['7', 2]}, 'class_type': 'VAEDecode'}, '10': {'inputs': {'filename_prefix': 'ComfyUI', 'images': ['9', 0]}, 'class_type': 'SaveImage'}}
```
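A side note: if a workflow could contain nodes without a _meta key, dict.pop with a default is a safer variant of the deletion, since del raises a KeyError on missing keys. A minimal sketch with a made-up two-node workflow:

```python
workflow = {
    "1": {"inputs": {}, "class_type": "LoadImage", "_meta": {"title": "Load"}},
    "2": {"inputs": {}, "class_type": "SaveImage"},  # no _meta key at all
}
for node in workflow:
    workflow[node].pop("_meta", None)  # no-op when the key is absent
print(workflow)
```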

6. Saving the Workflow

The processed workflow is written to workflow_api_ordered.json with one node per line.

```python
# Save the processed workflow, one node per line
with open('workflow_api_ordered.json', 'w', encoding='utf-8') as f:
    f.write('{')
    line = 1
    for key in workflow:
        f.write('\n')
        # json.dumps is safer than str().replace("'", '"'), which would
        # break if a prompt text contained an apostrophe
        node_info = json.dumps(workflow[key], ensure_ascii=False)
        if line != len(workflow):
            f.write(f'"{key}": {node_info},')
        else:
            f.write(f'"{key}": {node_info}')
        line += 1
    f.write('\n}')
```

Contents of workflow_api_ordered.json:

{ "1": {"inputs": {"ckpt_name": "stable_cascade_stage_c.safetensors"}, "class_type": "CheckpointLoaderSimple"}, "2": {"inputs": {"width": 1024, "height": 1024, "compression": 42, "batch_size": 1}, "class_type": "StableCascade_EmptyLatentImage"}, "3": {"inputs": {"text": "text, watermark", "clip": ["1", 1]}, "class_type": "CLIPTextEncode"}, "4": {"inputs": {"text": "evening sunset scenery blue sky nature, glass bottle with a fizzy ice cold freezing rainbow liquid in it", "clip": ["1", 1]}, "class_type": "CLIPTextEncode"}, "5": {"inputs": {"seed": 314307448448003, "steps": 20, "cfg": 4, "sampler_name": "euler_ancestral", "scheduler": "simple", "denoise": 1, "model": ["1", 0], "positive": ["4", 0], "negative": ["3", 0], "latent_image": ["2", 0]}, "class_type": "KSampler"}, "6": {"inputs": {"conditioning": ["4", 0], "stage_c": ["5", 0]}, "class_type": "StableCascade_StageB_Conditioning"}, "7": {"inputs": {"ckpt_name": "stable_cascade_stage_b.safetensors"}, "class_type": "CheckpointLoaderSimple"}, "8": {"inputs": {"seed": 183495397600639, "steps": 10, "cfg": 1.1, "sampler_name": "euler_ancestral", "scheduler": "simple", "denoise": 1, "model": ["7", 0], "positive": ["6", 0], "negative": ["3", 0], "latent_image": ["2", 1]}, "class_type": "KSampler"}, "9": {"inputs": {"samples": ["8", 0], "vae": ["7", 2]}, "class_type": "VAEDecode"}, "10": {"inputs": {"filename_prefix": "ComfyUI", "images": ["9", 0]}, "class_type": "SaveImage"} }

This completes the ComfyUI API workflow optimization.
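As a final sanity check, a small helper (is_execution_ordered, a hypothetical name, not part of ComfyUI) can confirm that every link in a renumbered workflow points back to an earlier node, i.e. the nodes really are in execution order:

```python
def is_execution_ordered(workflow):
    # Every linked input must reference a node with a smaller id
    for node_id, node in workflow.items():
        for value in node["inputs"].values():
            if isinstance(value, list) and int(value[0]) >= int(node_id):
                return False
    return True

# Made-up two-node workflow in execution order
ordered = {
    "1": {"inputs": {"ckpt_name": "model.safetensors"}, "class_type": "CheckpointLoaderSimple"},
    "2": {"inputs": {"clip": ["1", 1], "text": "a cat"}, "class_type": "CLIPTextEncode"},
}
print(is_execution_ordered(ordered))  # → True
```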

7. Complete Python Code

The complete ComfyUI API workflow auto-optimization code follows (it only prints the final workflow).

```python
import json
from collections import deque, OrderedDict

# Topological sort for a directed graph (Kahn's algorithm)
def topological_sort(graph):
    in_degree = {v: 0 for v in graph}
    for node in graph:
        for neighbor in graph[node]:
            in_degree[neighbor] += 1
    queue = deque([node for node in in_degree if in_degree[node] == 0])
    result = []
    while queue:
        node = queue.popleft()
        result.append(node)
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    if len(result) != len(graph):
        return None  # cycle detected
    return result

# Load the workflow file
with open('workflow_api.json', 'r', encoding='utf-8', errors='ignore') as file:
    workflow = json.loads(file.read())

# Convert the workflow into a directed graph
link_list = {}
for node in workflow:
    node_link = []
    for input_name in workflow[node]["inputs"]:
        if isinstance(workflow[node]["inputs"][input_name], list):
            node_link.append(workflow[node]["inputs"][input_name][0])
    link_list[node] = node_link

# Topologically sort the workflow graph, then reverse
order_list = topological_sort(link_list)[::-1]

# Temporary node ids: start at max node count * 10 + 1 to avoid collisions
# with existing ids and lexicographic orderings like 1, 10, 11, ..., 2
max_nodes = 100
new_node_id = max_nodes * 10 + 1
workflow_string = json.dumps(workflow)
for node in order_list:
    workflow_string = workflow_string.replace(f'"{node}"', f'"{new_node_id}"')
    new_node_id += 1
workflow = json.loads(workflow_string)

# Sort nodes by id
sorted_data = OrderedDict(sorted(workflow.items()))
workflow = json.loads(json.dumps(sorted_data))

# Final renumbering, starting from 1
new_node_id = 1
workflow_string = json.dumps(workflow)
for node in workflow:
    workflow_string = workflow_string.replace(f'"{node}"', f'"{new_node_id}"')
    new_node_id += 1
workflow = json.loads(workflow_string)

# Remove the "_meta" sub-key
for node in workflow:
    del workflow[node]["_meta"]

# Save the processed workflow, one node per line
with open('workflow_api_ordered.json', 'w', encoding='utf-8') as f:
    f.write('{')
    line = 1
    for key in workflow:
        f.write('\n')
        node_info = json.dumps(workflow[key], ensure_ascii=False)
        if line != len(workflow):
            f.write(f'"{key}": {node_info},')
        else:
            f.write(f'"{key}": {node_info}')
        line += 1
    f.write('\n}')

# Print the processed workflow
with open('workflow_api_ordered.json', 'r', encoding='utf-8', errors='ignore') as file:
    print("Optimized workflow:")
    print(file.read())
```

Output:

```
Optimized workflow:
{
"1": {"inputs": {"ckpt_name": "stable_cascade_stage_c.safetensors"}, "class_type": "CheckpointLoaderSimple"},
"2": {"inputs": {"width": 1024, "height": 1024, "compression": 42, "batch_size": 1}, "class_type": "StableCascade_EmptyLatentImage"},
"3": {"inputs": {"text": "text, watermark", "clip": ["1", 1]}, "class_type": "CLIPTextEncode"},
"4": {"inputs": {"text": "evening sunset scenery blue sky nature, glass bottle with a fizzy ice cold freezing rainbow liquid in it", "clip": ["1", 1]}, "class_type": "CLIPTextEncode"},
"5": {"inputs": {"seed": 314307448448003, "steps": 20, "cfg": 4, "sampler_name": "euler_ancestral", "scheduler": "simple", "denoise": 1, "model": ["1", 0], "positive": ["4", 0], "negative": ["3", 0], "latent_image": ["2", 0]}, "class_type": "KSampler"},
"6": {"inputs": {"conditioning": ["4", 0], "stage_c": ["5", 0]}, "class_type": "StableCascade_StageB_Conditioning"},
"7": {"inputs": {"ckpt_name": "stable_cascade_stage_b.safetensors"}, "class_type": "CheckpointLoaderSimple"},
"8": {"inputs": {"seed": 183495397600639, "steps": 10, "cfg": 1.1, "sampler_name": "euler_ancestral", "scheduler": "simple", "denoise": 1, "model": ["7", 0], "positive": ["6", 0], "negative": ["3", 0], "latent_image": ["2", 1]}, "class_type": "KSampler"},
"9": {"inputs": {"samples": ["8", 0], "vae": ["7", 2]}, "class_type": "VAEDecode"},
"10": {"inputs": {"filename_prefix": "ComfyUI", "images": ["9", 0]}, "class_type": "SaveImage"}
}
```

  • Author: 李琛
  • Permalink: https://wapzz.net/post-17583.html
  • License: Unless otherwise stated, all articles on this blog are licensed under CC BY-NC-SA 4.0.