

Personally, I won't suggest using an arbitrary initial resolution. It's a long topic in itself, but the point is that we should stick to the recommended resolutions from SDXL's training set (taken from the SDXL paper).

Please see the original 512px image: Original 512px, 4x_UltraSharp upscale to 1536x1536px, ESRGAN_4x upscale to 1536x1536px.

Exactly: why are you testing with LoRAs, IP-Adapters, and other stuff that inherently overwrites or adds to the model's capabilities? Test with just a prompt, or if you need control of the face angle, a ControlNet, but only one that uses facial landmarks (OpenPose) and nothing else, not even negative embeddings (or if any, one negative one, the same for all models).

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Using SDXL, I can upscale 2x one time and the result will look really good after that. This matches knowledge gained from other resources (for example, that resolutions around 1024 are good enough for SDXL). Tried it with SDXL-base and SDXL-Turbo.

Any idea? (Already tried SIAX_200K: good details, but it adds too much texture/noise to the image.) I just started generating landscapes; any tips and advice are welcome.

A 0.447 downscale from a 4x upscale model works for reaching 1600x2000 resolution. Basically, from the target final resolution, it tells you what SDXL ratio and resolution to choose.

It definitely alters the image a lot more, even making the flying car kind of blend in with the buildings, but it also GREATLY adds interesting, clear lettering to the signs. The best approach here might be to run both ways, then combine them in a photo app, masking out some sections of the image.
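The "stick to a training resolution, then upscale" advice above reduces to simple arithmetic: pick the SDXL training bucket whose aspect ratio is closest to the target, then compute the factor needed to reach the final size. A minimal sketch, assuming the commonly circulated ~1-megapixel SDXL bucket table; the helper names are mine, not from any tool:

```python
# Hypothetical helpers illustrating the resolution advice above.
# Bucket list: common ~1MP SDXL training resolutions (width, height).
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def pick_base_resolution(target_w, target_h):
    """Training bucket whose aspect ratio best matches the target."""
    ratio = target_w / target_h
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - ratio))

def upscale_factor(base, target):
    """Scale factor that takes the base render up to the target size."""
    return max(target[0] / base[0], target[1] / base[1])

base = pick_base_resolution(1920, 1080)
print(base, round(upscale_factor(base, (1920, 1080)), 3))  # (1344, 768) 1.429
```

For the 1600x2000 example, the same arithmetic picks 896x1152, and a 4x model's output then needs roughly a 0.45x downscale, in line with the figure quoted above.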
SDXL is significantly better at prompt comprehension and image composition, but 1.5 still has better fine details.

SDXL 1.0 Refiner. Automatic calculation of the steps required for both the Base and the Refiner models. Quick selection of image width and height based on the SDXL training set. XY Plot. ControlNet with the XL OpenPose model (released by Thibaud Zamora).

Do a basic Nearest-Exact upscale to 1600x900 (no upscaler model).

Having played around with SDXL for a bit, as well as this new Dreamshaper, I'm quite positive that NSFW isn't going to be hard to do.

When upscaling, you need to remove all parts of your prompt that relate to content (e.g. 1girl, woman, man) and just leave those that describe the style (best quality, masterpiece). 0.35, Ultimate SD upscale upscaler: 4x-UltraSharp, Ultimate SD upscale tile_width: 896, Ultimate SD upscale tile_height: 896.

And it is often less good than a normal image upscaler because:
- It adds much more detail, and with that the possibility of defects in the picture; don't go crazy with the final resolution.
- It only works at high denoising, and so is not good for a fine upscale of a picture whose details you want to preserve.

Step 1 - Text to image: The prompt varies a bit from picture to picture, but here is the first one: high resolution photo of a transparent porcelain android man with glowing backlit panels, closeup on face, anatomical plants, dark swedish forest, night, darkness, grainy, shiny, fashion, intricate plant details, detailed, (composition:1.3).

Is this the best way to install ControlNet? When I tried doing it manually, it didn't work. Sounds like the multipurpose choice?

Here's a link to The List. I tried lots of them, but I wasn't looking for anime-specific results and haven't really tried upscaling too many anime pics yet. I saw in some posts that some people do it iteratively or mix many samplers, but I don't understand how to do that.
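The 896-pixel tile settings above imply a simple cost model: Ultimate SD Upscale diffuses the upscaled image one tile at a time, so the tile count is what a pass's runtime scales with. A sketch of that arithmetic (my own helper, not the extension's code; it ignores the extension's tile padding and seam-fix passes):

```python
from math import ceil

def tile_grid(width, height, tile_w=896, tile_h=896):
    """Tiles needed to cover an image of the given size (no overlap)."""
    cols, rows = ceil(width / tile_w), ceil(height / tile_h)
    return cols, rows, cols * rows

# A 2x upscale of a 1216x832 render (2432x1664) needs a 3x2 grid:
print(tile_grid(2432, 1664))  # (3, 2, 6)
```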
4x ESRGAN works fine from the ones installed by default, or download 4x-UltraSharp and put it in your models/ESRGAN folder. 1.5 needs to get more detail.

Comparison of using DDIM as the base sampler with different schedulers: 25 steps on the base model (left) and refiner (right). I believe the left one has more detail, so back to testing. Comparison grid: 24/30 steps (left) using the refiner versus 30 steps on the base only. Refiner on SDXL 0.9 (right) compared to base only, working as intended.
Here's what I typed last time this question was asked: AFAIK, for Automatic1111, only the "SD upscaler" script.

No problem. I think when ControlNet is compatible with SDXL you will be able to upscale and get it even more watercolor-like, as SDXL is really good at styles like watercolors.

When using Roop (the face-swapping extension) on SDXL and even some non-XL models, I discovered that the face in the resulting image was always blurry. A normal plain upscaler can interpolate and kind of guess details, but won't create new ones.

Same with SDXL: you can use any two SDXL models as the base model and refiner pair.

Which upscaler do you use to upscale your latent before passing it to the second KSampler? I wouldn't HAVE to use an SD15 checkpoint for that second KSampler, right? What are the benefits you see in using an SD15 there instead of the same original SDXL checkpoint? Also, would you happen to have a clean workflow that demonstrates this idea?

Here is the txt2img window. Okay. Jokes aside, now we'll finally know how well SDXL 1.0 can be fine-tuned.
This new comparison now should be more accurate for seeing which is the best. TLDR: Best settings for SDXL are as follows. I tried with --medvram-sdxl and --xformers, but there was no difference with both or with just --medvram-sdxl.

Yes, I agree with your theory. Unlike SD1.5, SDXL base is already "fine-tuned", so training most LoRAs on it should not be any harder than training on a specific model.

This is done after the refined image is upscaled and encoded into a latent.

I like to create images like that one: end result. I'm currently running SD on my laptop using the Easy Stable Diffusion UI and my laptop's CPU.

The left side is my "control group" - ESRGAN upscaler, denoise 0.3.

What are the best SDXL-based models out there? How is the SDXL fine-tuned models scene doing? I hear there are some fine-tuned models on Hugging Face and Civitai?

For example, using WebUI, it is best to generate small 512x512 images, then upscale the one you like best.

Is this supposed to be used with a tile upscaler like UltimateSDUpscale, or is it just for img2img upscale and HiRes fix? Edit: I tested it out myself, and it does control the output in tiled upscaling like UltimateSDUpscale.

TY :) Use a DPM-family sampler. Use the same seed as your base img. Denoising strength is important; you need to test what's best for you.

Welcome to the unofficial ComfyUI subreddit.

I'm currently running into certain prompts where latent just looks awful. K-DPM schedulers also work well with higher step counts. There's a custom node that basically acts as Ultimate SD Upscale.
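The "do a 2x run, then drag the output back to the input" trick mentioned in this thread composes multiplicatively: two 2x passes give the 4x result. A sketch of the resolution progression (the function name is mine, not from any tool):

```python
def iterative_upscale(width, height, factor=2.0, passes=2):
    """Resolutions produced by repeatedly feeding the output back as input."""
    sizes = [(width, height)]
    for _ in range(passes):
        w, h = sizes[-1]
        sizes.append((int(w * factor), int(h * factor)))
    return sizes

print(iterative_upscale(1024, 1024))
# [(1024, 1024), (2048, 2048), (4096, 4096)]
```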
SD Upscaler doesn't just upscale the picture like Photoshop would do (which you can also do in Automatic1111 in the "Extras" tab); it regenerates the image, so new detail can be added in the higher resolution which didn't exist in the lower one.

Directly returned the SR image. What methods are people using to create 4K+ resolutions?

Change the model from the SDXL base to the refiner and process the raw picture in img2img using the Ultimate SD upscale extension with the following settings: 235745af8d, VAE: sdxl_vae.safetensors, Denoising strength: 0.35.

This is the concept: Generate your usual 1024x1024 image.

Next time, just ask me before assuming SAI has directly told us not to help individuals who may be using leaked models, which is a bit of a shame (since that is the opposite of true).

Only InstantID works like magic, but I can't run it on my laptop because it can't handle SDXL; I only use the demo version, and it can't make realistic images.

After that, repeat again.

So, I'd conclude that hires steps don't make the picture better or worse but allow you to upscale your images to a higher magnification.
Options: can use prompt, positive and negative terms, style, and negative style. You don't select an upscaler by hand.

So, I just 4x upscaled the original pic with 0.3 denoise.

Best way to use SDXL? Hey, I use my 7900 XTX to generate images with SDXL, and now my question is: would it be better for me to use some Linux distro, or rather Microsoft Olive, to get the most out of my GPU?

It's not a new base model; it's simply using SDXL base as a jumping-off point again, like all other Juggernaut versions (and any other SDXL model, really). The only difference is that it doesn't continue on from Juggernaut 9's training; it went back to the start.

(There are custom nodes for pretty much everything, including ADetailer.)

My goal is to upscale the images generated by the 1.5 model (1st image attached above) using the SDXL model (2nd image attached above) to add realism to the low-quality faces while preserving the emotional qualities of the faces, their bone structure, etc.

The asphalt on SD3 was the first thing where I noticed a big improvement, but not only that: on SDXL the shadow under the car is too dark, uncanny if anything, and it doesn't feel like the car is placed on the asphalt properly, kind of like in games where the characters feel floaty or weightless due to how they step on the ground and move. SD3 in comparison looks better.

Thanks for getting this out, and for clearing everything up.

In the ComfyUI Manager, select "Install Models" and then scroll down to see the ControlNet models; download the second ControlNet tile model (it specifically says in the description that you need this for tile upscale).

I've been meaning to make sure my SDXL workflow itself didn't need fine-tuning before I post something just like this. I wanted to title it "What is the BEST upscaler?", but figured the answer may vary and depend on the use case.

One second before.

Is there any way I can iterate on the output of SDXL Turbo using ComfyUI? Upscale while adding a "detailed faces" positive clip to an upscaler as input?
Do you have ComfyUI Manager?

What is considered the best artistic checkpoint (no anime) for SDXL at this time and age?

Download the kohya_controllllite_xl_blur model from the link I provided in the last post, put it in your ControlNet model directory, then fire up ControlNet.

Comparison of using DDIM as the base sampler and using different schedulers.

The right side uses the Siax upscaler and the above settings. Which one is best depends on the image type.

I tried the old method with ControlNet, Ultimate SD Upscale with 4x-UltraSharp, but it returned errors like "mat1 and mat2 shapes cannot be multiplied". (SDXL) is mostly subscription based? How long will they last?

The more anyone engages with me and my opinions on the matter, the more people are introduced to who he is and what he has to offer.

Not knowing what the correct way to use this was, I tried this out: loaded up the new SDXL model 1.0 and ran the following prompt in a batch of 10 at 1024x1024. Uses base, refiner, and upscale model. Meantime: 38 sec. Results: workflow v1.0.

HiRes Fix uses latent upscaling (it changes the size of the noise), so instead of getting a 512x512 pixel image you get a 1024x1024 image with more details.

4x-UltraSharp is a decent general-purpose upscaler. SwinIR_4x shows stable average results in all tests.

Uses both base and refiner models, two upscale models, and the VAE model.

That's why I need a good upscaler that will smooth over those areas, but also not compromise my character's features.

Be respectful and follow Reddit's Content Policy. This subreddit is a place for respectful discussion.
Euler is unusable for anything photorealistic. I have switched over to the Ultimate SD Upscale as well, and it works the same for the most part, only with better results. Then click on the option "just resize (latent upscale)". 0.5 denoise works very nicely.

Most SDXL fine-tunes are tuned for photo-style images anyway, so not that many new concepts are added.

SDXL 1.0 Alpha + SD XL Refiner 1.0. 0.45 denoise. The blurred latent mask does its best to prevent ugly seams.

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.

Increase the hires steps to 75, and you can increase the upscale to 2 before you run out of memory.

Artists: "they stole my pics!" Programmers: "they stole my code!" Writers: "they stole my book!" Fashion models: "they stole my look!"

I have been generally pleased with the results I get from simply using additional samplers. But in SDXL, I find the ESRGAN models tend to oversharpen in places. UniversalUpscalerV2-Sharper provides a nice amount of high-frequency artifacts, which when img2img'd or hires-fixed turn into detail, since they are treated as noise.

Img2img using SDXL Refiner, DPM++ 2M, 20 steps, 0.3 denoise.
Reactor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them.

For latent upscalers you need at least 0.7 denoise; for non-latent upscalers you will get the best results under 0.4.

The Upscaler function of my AP Workflow 8.0 for ComfyUI, which is free, uses the CCSR node, and it can upscale 8x and 10x without even the need for any noise injection (assuming you don't want "creative upscaling").

You have a bunch of custom things in here that aren't necessary to demonstrate "TurboSDXL + 1 Step Hires Fix Upscaler", and we basically waste time trying to find things because you don't even provide a clean workflow.

If you're using SDXL, you can try experimenting and seeing which LoRAs can achieve a similar effect. The model is trained on 20 million high-resolution images.

Experimental LCM workflow "The Ravens" for Würstchen v3, aka Stable Cascade, is up and ready for download.

AP Workflow 6.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder). I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

With 1.5 we had ControlNet and tiling etc., which, last I checked, isn't viable with SDXL. The noise you're seeing from the latent upscaler is from giving it the same role in the workflow as the image upscaler. The higher the denoise number, the more things it tries to change.

Then I send the latent to a SD1.5 model. I can't find the original of the one I posted above, but this one is a parent/child of the prompt trail I wandered down: bits of the prompt have been ignored, three women are named, but the intent was to morph their features rather than get three women.
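The denoise rules of thumb quoted in this thread (latent upscales need high denoise to repair interpolation noise; pixel-space ESRGAN-style upscales need low denoise so the image is preserved) can be captured as ranges. These numbers summarize the advice in this thread, not any tool's defaults:

```python
def suggested_denoise(upscaler_kind):
    """Rule-of-thumb denoise range for the sampler pass after upscaling."""
    ranges = {
        "latent": (0.5, 0.7),  # lower leaves latent-interpolation noise visible
        "pixel":  (0.2, 0.4),  # higher starts repainting the image's content
    }
    return ranges[upscaler_kind]

print(suggested_denoise("latent"))  # (0.5, 0.7)
print(suggested_denoise("pixel"))   # (0.2, 0.4)
```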
Base SDXL is so well tuned already for coherency that most other fine-tuned models are basically only adding a "style" to it.

Keep "From..." settings as they are. So my favorite so far is ESRGAN_4x, but I am willing to try another upscaler good for adding fine detail and sharpness.

All --medvram-sdxl did was make RAM use more memory; GPU memory utilization is the same at 90%+, and there is no benefit in generation time. At 0.3 I already got so much more detail without stupid things appearing. It seems good, but I'm still testing.

In Script, select Ultimate SD Upscale. And then find the optimum denoising: too high and you can get hallucinated faces, too low and you don't upscale enough, so it remains blurry.

The best result I have gotten so far is from the regional sampler from Impact Pack, but it doesn't support SDE or UniPC samplers, unfortunately.

Footnote: the SD upscale used conservative settings (low CFG scale, low denoising, 20 steps).

So, at least for SDXL, training with base SDXL is the right choice most of the time.

The need arises from the fact that in the last training I did at 1024x1024, the resolution I obtain is too small, and I can't find any way to upscale without introducing visible artifacts, because it is a concept that SDXL never saw before.

The right upscaler will always depend on the model and style of image you are generating. In SD 1.5, using one of the ESRGAN models usually gives a better result in Hires Fix. That should stop it being distorted; you can also switch the upscale method to bilinear, as that may work a bit better.

Opinion: too expensive, and it sometimes ignores some part of the input. SergeSDXL.
I think you'd just pipe the latent image into the sampler node that receives the SDXL model? The upscaler can blend seams, but it can't account for the differing ways things are changed between tiles.

Thanks. Recently, other than SDXL, I just use Juggernaut and DreamShaper. Juggernaut is for realistic work, but it can handle basically anything; DreamShaper excels at artistic styles, but also handles everything else well.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

SDXL uses positional encoding; SD1.5 does not.

A 1.786x upscale (or using a 0.447 downscale from a 4x upscale model).

Is there any workflow preferred when dealing with SDXL and ComfyUI? Furthermore, Reddit compresses images.

Only issue I'm having is with upscaling. My current workflow involves going back and forth between a regional sampler, an upscaler, and Krita (for inpainting to fix errors and fill in the details) to refine the output iteratively.
Set denoise around 0.25, then up the CFG scale to 8 to 12 or so, and you will get a lot of micro-detail without random houses and people (or whatever is in your prompt). Play around with denoise: if it's too high, you will get floaties; if it's too low, the upscaling will look grainy. Also use 768 or 1024 tile size.

I understand that I can get the CN Tile to work with a KSampler (non-upscale), but our goal has always been to be able to use it with the Ultimate SD Upscaler like we used the 1.5 version in Automatic1111. (0.236 strength and 89 steps, which will take 21 steps total.)

EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES.

It depends on what image you are trying to make.

Yes, really: when it first came out, DALL-E 3 was spitting out insanely good stuff with basic prompts that SDXL could not handle without constant modifications and tweaks. For example, it could do celebrity swaps with insane accuracy.

ComfyUI SDXL upscaler / hires fix question: sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated as 832x1216.

All three are good for hires-fix and upscaling workflows; the best one will depend on your model and prompt, since they handle certain textures and styles differently.

For latent upscales, use at least 0.5 denoising, and for best results closer to 0.7. This can let you really play around with the refiner step much, MUCH further than with the standard SDXL refiner model, depending on how well your model choices play together. I also see the Automatic1111 Ultimate SD Upscaler extension.
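A detail behind the step counts discussed in this thread: A1111-style img2img only runs roughly `denoise × steps` of the sampler schedule, which is how 89 steps at 0.236 strength end up executing about 21 actual steps. A sketch of the arithmetic (not A1111's exact rounding):

```python
def effective_steps(steps, denoise):
    """Sampler steps that actually execute in an img2img/hires pass."""
    return round(steps * denoise)

print(effective_steps(89, 0.236))  # 21
print(effective_steps(20, 0.35))   # 7
```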
Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image).

With Turbo SDXL I get images like these, which are awesome for a one-second generation, but they are not usable in my project because of the disfigured, deformed faces.

This is the image I created using ComfyUI, utilizing Dream ShaperXL 1.0 + Refiner.

There is no best-at-everything option, IMO.

Found a pretty good way to get good pics without highres fix: you generate a normal 512 img and send it to img2img.

Hey, all. I try to use ComfyUI to upscale (using SDXL 1.0).
I mostly go for realism/people with my gens, and for that I really like 4x_NMKD-Siax_200k. It handles skin texture quite well but does some weird things with hair if the upscale factor is too large.

But if the eyes are both on the same wide tile, the changes to both eyes will tend to be consistent.

Thanks, I thought I was going crazy when I did my own testing. It is tuned for anime-like images, which TBH is kind of bland for base SDXL, because it was tuned mostly for non-anime.

My goal is to upscale the images generated by the 1.5 model using the SDXL 0.9 model to act as an upscaler.

Jokes aside, now we'll finally know how well SDXL 1.0 can be fine-tuned. But in this post the OP is using the leaked SDXL 0.9.

I upscaled it to a resolution of 10240x6144 px for us to examine the results.

(A1111, combined SDXL styles + Hires fix + SD Upscale.) The reason was that the SD Ultimate Upscale workflow I used had a really good 2K upscale step, but the 2nd 4K upscale step gave bad results, so I replaced the 4K step with SUPIR.

I've been seeing news lately about ControlNet's tile model for upscaling. I managed to make a very good workflow with IP-Adapter with regional masks and ControlNet, and it's just missing a good upscale.
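The seam complaint in this thread (tiles diffused independently can disagree where they meet) is why tiled upscalers feather the overlap instead of butting tiles together: each pixel in the seam is a weighted mix of the two tiles, with the weight ramping linearly, so there is no hard edge. A pure-Python illustration of the general technique, not Ultimate SD Upscale's actual code:

```python
def blend_weights(overlap):
    """Left-tile weight at each pixel across an overlap region."""
    return [1 - (i + 1) / (overlap + 1) for i in range(overlap)]

def blend_seam(left_vals, right_vals):
    """Mix two tiles' pixel values across their shared overlap."""
    w = blend_weights(len(left_vals))
    return [wl * l + (1 - wl) * r for wl, l, r in zip(w, left_vals, right_vals)]

# The left tile fades out and the right tile fades in across the seam:
print(blend_seam([1.0] * 4, [0.0] * 4))  # ≈ [0.8, 0.6, 0.4, 0.2]
```

It blends the values, but it cannot reconcile semantic disagreements between tiles, which is the failure mode described above.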
Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.

Workflow and process are included in the comment below.

If you have a very small face or multiple small faces in the image, you can get better results fixing faces after the upscaler. It takes a few seconds more, but gives much better results (v2.0 faces fix QUALITY); recommended if you have a good GPU.

For some context, I am trying to upscale images of an anime village, something like Ghibli style.

I'm revising the workflow below to include a non-latent option. SDXL is the best one to get a base image, IMO, and later I just use img2img with another model to hires-fix it.

For example, if a seam divides a face, one tile may move an eye slightly up, while another may move the other eye slightly down.

I've seen the best results ranging anywhere from 40 to maybe 100 or 120 steps. How many you need is extremely dependent on your subject matter, how many LoRAs you use, the complexity of the prompt, and the resolution.

Latent upscalers are pure latent-data expanders and don't do pixel-level interpolation like image upscalers do. 1.5 still has better fine details.

My workflow runs about like this: [KSampler] [VAE decode] [Resize] [VAE encode] [KSampler #2 through #n]. I typically use the same or a closely related prompt for the additional KSamplers, the same seed and most other settings, with the only differences among my (for example) four KSamplers in #2 onward.

I'd never ignore a post I saw asking for help :D. So when I refer to denoising it, I am referring to the fact that the lower-resolution faces caused by using Reactor need to be denoised if you want to add more resolution. This requires passing through a sampler with denoising; the higher the denoising is on this sampler, the more it will change and mess the face back up again.
Image size: 832x1216, upscale by 2. DPM++ 2M, DPM++ 2M SDE Heun Exponential (these are just my usuals, but I have tried others).

Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so no negative prompt), otherwise the picture gets baked instantly. You can't go higher than 512, up to 768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

Also, top tip that I didn't realise until reading the wiki properly.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. That also explains why SDXL Niji SE is so different.

If you get a crap result, try generating at low CFG and size.

Use 0.3 denoise with the normal scheduler, or 0.4 denoise with the Karras scheduler.

Hello Everyone! I generated different world concepts using SD.

Usually, directly returning the SR image at 4K resolution yields better results than SDXL diffusion.

For example, it could do celebrity swaps with insane accuracy. Please do not use Reddit's NSFW tag to try and skirt this rule.

With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.

Doesn't seem to have the issue some other models have where some areas break down.

The 4X-NMKD-Superscale-SP_178000_G model has always been my favorite for upscaling SD1.5 images. Adjust the width and height to double the dimensions. Upscale (I go for 1848x1848, since this somehow results from a 1.8x resize in my upscaler).

SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it.
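The odd-looking 1848x1848 figure above is just rounding: SD latents are 1/8 of image resolution, so upscale targets get snapped to a multiple of 8, and 1024 x 1.8 = 1843.2 snaps up to 1848. A sketch (the helper name is mine):

```python
from math import ceil

def snap8(x):
    """Round a dimension up to the next multiple of 8 (latent grid)."""
    return ceil(x / 8) * 8

print(snap8(1024 * 1.8))  # 1848
```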
The image is probably quite nice now, but it's not huge yet. For the upscale pass I've seen the best results anywhere from 40 to maybe 100 or 120 steps. Note that latent upscalers are pure latent-data expanders; they don't do pixel-level interpolation like image upscalers do. SD1.5 still has better fine details.

My workflow runs about like this: [KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, the same seed, and most other settings, with only small differences among my (for example) four KSamplers.

I'd never ignore a post I saw asking for help :D So when I refer to denoising, I am referring to the fact that the lower-resolution faces caused by using Reactor need to be denoised if you want to add more resolution. This requires passing through a sampler with denoising; the higher the denoising on that sampler, the more it will change and mess the face back up again.

Here's a sample I made while experimenting with Hires. fix. I can't find the original of the one I posted above, but this one is a parent/child of the prompt trail I wandered down: bits of the prompt have been ignored, and three women are named, but the intent was to morph their features rather than produce three women.

It's hard to suggest feedback without knowing your workflow; image style is a big factor that will determine the best upscaler and workflow, as is how much upscale is needed to reach your target resolution. Load the upscaled image into the workflow, then use ComfyShop to draw a mask and inpaint. Unless you are going to downscale the final result, do not use a sharp upscaler (i.e. LDSR, ESRGAN) as the final step.

Hi guys, today Stability AI released their new SDXL Turbo model that can inference an image in as little as 1 step.
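The chained-KSampler workflow described above amounts to a resize schedule between sampling passes. A rough sketch of the arithmetic, assuming a 1.5x step per pass and snapping to multiples of 8 (SD VAEs work on an 8-pixel latent grid); both numbers are assumptions, not settings from the comment:

```python
def iterative_upscale(width, height, passes, scale=1.5, multiple=8):
    """Resize schedule for a chained-KSampler upscale: each pass
    VAE-decodes, resizes by `scale` (snapped to `multiple`), re-encodes,
    and resamples. Returns the list of (width, height) per pass."""
    sizes = [(width, height)]
    for _ in range(passes):
        width = round(width * scale / multiple) * multiple
        height = round(height * scale / multiple) * multiple
        sizes.append((width, height))
    return sizes
```

Keeping the per-pass factor modest is what lets each KSampler refine detail instead of inventing a new composition.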
SDXL 1.0 3x Ultimate SD Upscaler denoise comparison. So I spent 30 minutes coming up with a workflow that would fix the faces by upscaling them (Roop in Auto1111 has it by default). A 1.55 upscale is the most you can do; increase the hires steps to 75 and you can increase the upscale to 2 before you will run out of memory. If it wasn't for the licensing issues, I would still have hope for future SD3 model releases. A normal SDXL workflow (without refiner) also works; I think it has a better flow of prompting than SD1.5. Under this reply, user "PsychologicalView605" confirmed this discovery in their own experiment.
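Since VRAM use during sampling grows roughly with pixel count, the "how far can I upscale before running out of memory" question is approximately a pixel-budget calculation. A hypothetical helper; the budget is whatever your card empirically handles, not a real constant:

```python
def max_upscale(width, height, pixel_budget):
    """Largest uniform scale factor whose output stays within a pixel
    budget; sampling memory grows roughly with output pixel count."""
    return (pixel_budget / (width * height)) ** 0.5
```

This is why nudging hires steps changes nothing memory-wise, while nudging the upscale factor quickly does: doubling the scale quadruples the pixels.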
A recommendation: ddim_uniform has an issue where the time schedule doesn't start at 999.

Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses a SD1.5 checkpoint.
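Tiled upscalers like the Ultimate SD Upscale node mentioned in this thread process the image in overlapping tiles. A sketch of the tile-count math; the overlap handling below describes how such tilers generally work, not that node's exact algorithm:

```python
import math

def tile_grid(width, height, tile=1024, overlap=64):
    """Tiles advance by (tile - overlap) so neighbouring tiles share a
    blendable seam region; returns (columns, rows) of tiles needed."""
    step = tile - overlap
    cols = max(1, math.ceil((width - overlap) / step))
    rows = max(1, math.ceil((height - overlap) / step))
    return cols, rows
```

More overlap means softer seams (fewer artifacts like the shifted-eye example earlier) but more tiles, and therefore more sampling time.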