
ComfyUI Pony workflows (Reddit)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created; belittling their efforts will get you banned. And above all, BE NICE. Also, if this is new and exciting to you, feel free to post.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not. So, up until today, I figured the "default workflow" was still always the best thing to use. After all, the default workflow still uses the general CLIP encoder, CLIPTextEncode.

I have a question about how to use Pony V6 XL in ComfyUI. SD generates blurry images for me. What samplers should I use? How many steps? What am I doing wrong? Any suggestions?

Pony is weird. It's become such a different model that most of the LoRAs don't work with it. It shines with LoRAs, but I personally haven't used Pony itself for months.

At least with Pony, Hyper seems better than Lightning: 1-step LoRAs are unusable on both; 2-step LoRAs @ 2 steps are very bland, and 4-step LoRAs @ 4 steps the same; and to my eyes a 2-step LoRA @ 5 steps is better than a 4-step LoRA @ 5 steps. This is gonna replace Lightning LoRAs when working with Pony, at least for me.

(Jul 9, 2024) How the workflow progresses: initial image generation -> hands fix -> watermark removal -> Ultimate SD Upscale -> eye detailer -> save image. This workflow contains custom nodes from various sources, and they can all be found using ComfyUI Manager; if you see any red nodes, I recommend using the manager's "install missing custom nodes" function.

In your workflow, HandsRefiner works as a detailer for properly generated hands; it is not a "fixer" for wrong anatomy. I say it because I have the same workflow myself (unless you are trying to connect some depth ControlNet to that detailer node).

Here goes the philosophical thought of the day: yesterday I blew up my ComfyUI (gazillions of custom nodes that had wrecked the install; half of the workflows did not work because the dependency differences between their packages were so huge that I had to do basically a full-blown reinstall). The problem with relying on ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it. ComfyUI needs a standalone node manager imo, something that can do the whole install process and make sure the correct install paths are being used for modules.

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image).

I share many results, and many ask me to share the workflow. So they download the workflow picture and drag it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. I was also confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas they simply drop PNGs into an empty ComfyUI. Thank you very much! I understand now that I have to put the downloaded JSONs into the custom nodes folder and load them from there.
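For what it's worth, that drag-and-drop loading works because ComfyUI embeds the graph as JSON in the PNG's text metadata; if an image host re-encodes or strips the file, there is nothing left to load. A minimal sketch for checking whether a picture still carries its workflow, assuming Pillow is installed (the filename is just an example):

```python
import json
from PIL import Image  # pip install pillow

def read_embedded_workflow(path):
    """Return the workflow graph ComfyUI embedded in a PNG, or None."""
    info = Image.open(path).info
    raw = info.get("workflow")  # "prompt" holds the API-format graph instead
    return json.loads(raw) if raw else None

wf = read_embedded_workflow("pony_render.png")  # hypothetical file
if wf is None:
    print("No workflow metadata; the image was probably re-encoded.")
else:
    print(f"{len(wf.get('nodes', []))} nodes found")
```

If this prints a node count, dragging that same file into ComfyUI should load the workflow; if not, share the original PNG (or the exported JSON) instead.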
ComfyUI is a completely different conceptual approach to generative art. It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes. I really, really love how lightweight and flexible it is. It's simple and straight to the point, and the UI feels professional and directed. Its default workflow works out of the box, and I definitely appreciate all the examples for different workflows.

ComfyUI is usually on the cutting edge of new stuff; it was one of the earliest to add support for Turbo, for example. ComfyUI had been one of the two repos I keep installed, the SD-UX fork of Auto and this.

It's not for beginners, but that's OK. The "workflow" is different, but if you're willing to put in the effort to thoroughly learn a game like that and enjoy the process, then learning ComfyUI shouldn't be that much of a challenge.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched. If it's something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

A sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint: a CosXL Edit model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image, similar to InstructPix2Pix. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

For your all-in-one workflow, use the Generate tab. It'll add nodes as needed if you enable LoRAs or ControlNet, want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

Allo! I am beginning to work with ComfyUI, moving from A1111. I know there are so, so many workflows published to Civitai and other sites; I am hoping to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and I'm hoping someone can point me toward a resource with some of the better-developed Comfy workflows. (Some people just post a lot of very similar workflows to show off the picture, which makes it a bit annoying when you want to find new, interesting ways to do things in ComfyUI.)

You can just use someone else's workflow for 0.9 (just search YouTube for "SDXL 0.9 workflow"; the one from Olivio Sarikas' video works just fine) and replace the models with 1.0 and its upscalers.

What I'm thinking of is setting up a workflow that uses Pony, then running it back again for a second pass with IPAdapter img2img using the image from the Pony pipeline, and seeing how that goes. I'm not sure if IPAdapter will, though.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. There are plenty of ways, and it depends on your needs; too many to count. I don't have much time to type, but the first is to use a model upscaler, which works off your image node; you can download those from a website that lists dozens of upscaling models, a popular one being ESRGAN 4x.

I'm finding it hard to stick with one model, and I'm constantly trying different combinations of LoRAs with checkpoints. Pony Diffusion and EpicRealism seem to be my "go to" options, but then I try something like Juggernaut or RealVis and I'm back to racking my brain. It's becoming very overwhelming and counterproductive to my workflow.

I use a lot of the merges on CivitAI, and one other key I've found is using a low CFG, like 2.5-5 most of the time.
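To make that low-CFG advice concrete, here is a rough sketch outside ComfyUI using the diffusers library; Pony V6 XL is an SDXL-class checkpoint, so the SDXL pipeline applies. The filename and prompt are placeholders, and guidance_scale is the CFG knob these comments are talking about:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder filename; Pony Diffusion V6 XL is an SDXL-class checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="score_9, score_8_up, 1girl, sundress, garden",  # Pony-style quality tags
    negative_prompt="score_4, score_3, blurry",
    guidance_scale=4.0,          # the "low CFG, like 2.5-5" advice above
    num_inference_steps=25,
).images[0]
image.save("pony_low_cfg.png")
```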
YMMV, but lower CFG with Pony has TREMENDOUSLY reduced my frustration with it.

Anyone have a workflow to do the following: take a LoRA of person A and a LoRA of person B and place them into the same photo (SD1.5, not XL)? I know you can do this by generating an image of two people using one LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / Regional Prompter.

For a dozen days, I've been working on a simple but efficient workflow for upscaling. Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes. But it's reasonably clean to be used as a learning tool, which is and will always remain the main goal of this workflow. Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description as my account awaits verification. (I've also edited the post to include a link to the workflow.) Hopefully this will be useful to you.

Flux Schnell is a distilled 4-step model. There's also an all-in-one FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and txt2img. (Aug 2, 2024) You can load or drag the following image into ComfyUI to get the workflow; the image itself contains it: https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_dev_example.png

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

ComfyUI's inpainting and masking ain't perfect. I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. I wanted a very simple but efficient and flexible workflow, nothing fancy; a less-is-more approach. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. It's the kind of thing that's a bit fiddly to use, so someone else's workflow might be of limited use to you, but mine do include workflows, for the most part in the video description.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it. I call it "The Ultimate ComfyUI Workflow": easily switch from txt2img to img2img, with a built-in refiner, LoRA selector, upscaler & sharpener. I've color-coded all related windows so you always know what's going on. Number 1: this will be the main control center.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones. The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces and a single OpenPose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder) | Tutorial / Guide: I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use. What's new in 4.0? I just released version 4.0 of my AP Workflow for ComfyUI. Help me make it better!

(May 19, 2024) Download the workflow and open it in ComfyUI. This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Using the basic Comfy workflow from Hugging Face, the sd3_medium_incl_clips model, the latest version of Comfy, and all default workflow settings, on an M3 Max MBP, all I can produce are these noise images.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from there.

You can't change clipskip and get anything useful from some models (SD2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded. A higher clipskip (in A1111 terms; lower, or more negative, in ComfyUI's terms) equates to LESS detail in CLIP (not to be confused with detail in the image).
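For reference, a sketch of what that looks like in ComfyUI's API-format graph, written out as a Python dict; node ids and the checkpoint filename are placeholders. ComfyUI counts CLIP layers from the end, so A1111's "clipskip 2" corresponds to stop_at_clip_layer = -2:

```python
# Hypothetical fragment of an API-format graph showing the clip-skip node.
clip_skip_graph = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "ponyDiffusionV6XL.safetensors"},  # placeholder
    },
    "2": {
        "class_type": "CLIPSetLastLayer",
        "inputs": {
            "stop_at_clip_layer": -2,  # A1111's "clipskip 2" in ComfyUI terms
            "clip": ["1", 1],          # CLIP is the loader's second output
        },
    },
}
```

The rest of the graph (sampler, VAE decode, save) would hang off these two nodes in the same way.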
A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). I've been especially digging the detail in the clothing more than anything else. As for the graphic style, I think it was 3DS Max.

(Mar 23, 2024) My review of Pony Diffusion XL: skilled in NSFW content; offers various art styles; specializes in adorable anime characters; very proficient in furry, feet, and almost every kind of NSFW stuff. I need an img2img Pony workflow.

Besides that, if you have a large workflow built out but want to add in a section from someone else's workflow: open the other workflow in another tab, hold Shift and select each node individually to select a bunch (or hold down Ctrl and drag around a group of nodes you want to copy), Ctrl+C, then in your workflow Ctrl+V.

Not a specialist, just a knowledgeable beginner. Just load your image and prompt, and go. Hi, I hope that having a comparison was useful nevertheless. So, I just made this workflow in ComfyUI, and I'm happy to announce today: my tutorial and workflow are available. Just my two cents.

Hello good people! I need your advice, or some ready-to-go workflow, to recreate one of my A1111 workflows in Comfy. Step 1: generating images with some (2-3) additional LoRAs added.
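One hedged sketch of that LoRA-stacking step, in the same API-format-dict style as above: each LoraLoader consumes the previous node's MODEL and CLIP outputs, so two or three LoRAs simply chain one after another. Node ids, filenames, and strengths are all placeholders:

```python
# Hypothetical chain: checkpoint -> LoRA A -> LoRA B (extend the same way for a third).
lora_chain = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd15_base.safetensors"},
    },
    "2": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "styleA.safetensors",
            "strength_model": 0.8, "strength_clip": 0.8,
            "model": ["1", 0], "clip": ["1", 1],
        },
    },
    "3": {
        "class_type": "LoraLoader",  # stacks on the first LoRA's outputs
        "inputs": {
            "lora_name": "characterB.safetensors",
            "strength_model": 0.6, "strength_clip": 0.6,
            "model": ["2", 0], "clip": ["2", 1],
        },
    },
}
```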
Hey Reddit! I built a free website where you can share, discover, and run thousands of ComfyUI workflows: https://comfyworkflows.com/ How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them. Run any ComfyUI workflow with zero setup (free and open source). That's awesome!

Hey everyone, we've built a quick way to share ComfyUI workflows through an API and an interactive widget. Just upload the JSON file, and we'll automatically download the custom nodes and models for you, plus offer online editing if necessary.
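Neither post gives implementation details, but for anyone wondering what "through an API" can mean here: ComfyUI itself exposes a small local HTTP API, and an API-format graph (like the dict sketches above) can be queued with a single POST. A minimal sketch, assuming a default install listening on 127.0.0.1:8188:

```python
import json
import urllib.request

def queue_prompt(graph, host="127.0.0.1:8188"):
    """POST an API-format graph to a local ComfyUI instance and return its reply."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # includes the prompt_id of the queued job

# e.g. queue_prompt(lora_chain), once the graph is completed with a sampler,
# a VAE decode, and a save node; bare fragments will be rejected.
```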
