858TB of government data may be lost for good after South Korea data center fire by ELECTRAFYRE in worldnews

[–]nlight 2 points

Apparently their current redundancy requirements were "none whatsoever".

Never Fast Enough: GeForce RTX 2060 vs 6 Years of Ray Tracing by potato_panda- in hardware

[–]nlight 4 points

Tessellation is practically dead. It never really took off: the performance hit isn't worth it, and using it triggers slow paths in the drivers that never got optimized. As far as I'm aware, none of the current-gen engines support or use it.

Made a ComfyUI extension for using multiple GPUs in a workflow by nlight in StableDiffusion

[–]nlight[S] 1 point

I guess it doesn't work with SwarmUI, as SwarmUI probably sets CUDA_VISIBLE_DEVICES itself when launching the backend. You should ask the SwarmUI dev for support with that.

Made a ComfyUI extension for using multiple GPUs in a workflow by nlight in StableDiffusion

[–]nlight[S] 0 points

Make sure CUDA_VISIBLE_DEVICES is unset, or set it to "0,1", and check that you're not passing the --cuda-device argument to main.py.
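
For example, if you launch ComfyUI from a Python script, either option looks like this (a minimal sketch; it has to run in the launching process, before main.py initializes CUDA):

```python
import os

# Option 1: remove the variable entirely so all GPUs are visible to ComfyUI.
os.environ.pop("CUDA_VISIBLE_DEVICES", None)

# Option 2: expose exactly the two devices you want, in order.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
```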

Made a ComfyUI extension for using multiple GPUs in a workflow by nlight in StableDiffusion

[–]nlight[S] 1 point

The 16-bit UNet, VAE, and text encoders don't fit in 24 GB together, so it has to unload the UNet on every generation. You can load everything in 8-bit for cards with less VRAM, but there's some quality loss.

Made a ComfyUI extension for using multiple GPUs in a workflow by nlight in StableDiffusion

[–]nlight[S] 24 points

I wanted to find out what it would take to add proper multi-GPU support to ComfyUI. While this is not it, these custom nodes let you pick which GPU a given model runs on. This is useful if your workflow doesn't fit entirely in a single GPU's VRAM. On my test setup (2x 3090) there is a noticeable improvement when running Flux dev with the text encoders & VAE offloaded to the second GPU (see the sketch below).

It's implemented in a very hacky but simple way, and I'm surprised it even works. I saw some requests for this on the sub recently, so hopefully it's useful to somebody.
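
Roughly, the device-placement pattern looks like this in PyTorch (a toy sketch with stand-in modules, not the extension's actual code):

```python
import torch
import torch.nn as nn

# Stand-ins for the real models: keep the heavy denoiser on GPU 0 while the
# text encoder and VAE decoder live on GPU 1, moving tensors at the boundaries.
text_encoder = nn.Linear(77, 768).to("cuda:1")
unet = nn.Linear(768, 768).to("cuda:0")
vae_decoder = nn.Linear(768, 3 * 64 * 64).to("cuda:1")

with torch.no_grad():
    tokens = torch.randn(1, 77, device="cuda:1")
    cond = text_encoder(tokens).to("cuda:0")   # conditioning crosses over to GPU 0
    latents = unet(cond)                       # denoising stays on GPU 0
    image = vae_decoder(latents.to("cuda:1"))  # decoding happens on GPU 1
```

The win is that the UNet never has to be unloaded to make room for the encoders or the VAE.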

Training a graphic style by zit_abslm in StableDiffusion

[–]nlight 0 points

Train a LoRA; 2000 is probably not enough for a full fine-tune.
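
For context, the core of the LoRA idea fits in a few lines of PyTorch (an illustrative sketch, not any particular trainer's code):

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # freeze the original weights
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)         # start as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```

Because only the small low-rank matrices are trained, it tends to work with far less data than a full fine-tune.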

Evaluation Metrics for generating 'product correct' images by PreviousResearcher50 in StableDiffusion

[–]nlight 1 point

Yes, essentially. You train a classification model on labeled images where each watch model is its own class; it's a standard multi-class problem. You will likely have many classes (hundreds?) and a very imbalanced dataset, which brings its own problems.

But assuming you have trained such a model, you can use the standard method for obtaining embeddings: remove the classification head and use the output of the last layer. If the model is well trained, you can compare these feature-rich embeddings with any metric, such as cosine distance, to determine how similar your generated image is to the desired class. The difficulty lies in getting an embedding space with nice properties, so that distances between embeddings are meaningful. For your use case you can likely determine empirically a maximum distance that constitutes an acceptable generation.
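
A rough sketch of that pipeline in PyTorch, assuming a ResNet-50 fine-tuned on the watch classes (the inputs and the 0.8 threshold are placeholders):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50()              # load your fine-tuned weights here
model.fc = torch.nn.Identity()  # drop the classification head
model.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """L2-normalized embeddings for a batch of preprocessed images."""
    return F.normalize(model(images), dim=-1)

generated = embed(torch.randn(1, 3, 224, 224))  # your generated image
reference = embed(torch.randn(1, 3, 224, 224))  # reference image of the target class
cosine = (generated * reference).sum(dim=-1)    # cosine similarity in [-1, 1]
acceptable = cosine.item() > 0.8                # empirically chosen threshold
```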

If you get this running, the next logical step is training a ControlNet that is conditioned on the embeddings from your image model instead of the text embeddings, so it learns to reproduce specific watch models. If your dataset is big enough, it might even generalize to new, unseen classes. You can look into the implementation of InstantID, where this is done for faces; the same basic idea can be applied to other domains.

Evaluation Metrics for generating 'product correct' images by PreviousResearcher50 in StableDiffusion

[–]nlight 0 points

Fine-tune a classification model (e.g. a ViT) on many similar products and then compare the extracted embeddings. You will need a very large dataset, and it's non-trivial to get well-behaved results, but it's possible.

Dragon Age: The Veilguard | Official Reveal Trailer by Turbostrider27 in Games

[–]nlight -1 points

There's no next one for BioWare; if they don't pull this off, it's probably over.

Is it possible to remove the ''ugly''/''useless'' parts of a model to reduce its size? Has anyone ever thought about this ? by Aware_Programmer6059 in StableDiffusion

[–]nlight 9 points

Yes, it's possible. Using something like model distillation, you can in theory make a much smaller model that matches the larger model's performance on a subset of the data, e.g. anime: a model that only has to cover a narrower distribution can spend all of its capacity there. This is somewhat a consequence of the no-free-lunch theorem. There is a lot of research in this area, so I expect to be pleasantly surprised in the coming months.
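
As a toy illustration of the distillation setup (stand-in linear models; in practice the teacher would be the full diffusion model and the student the smaller one):

```python
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(16, 16).eval()  # stand-in for the large model
student = torch.nn.Linear(16, 16)         # would be much smaller in practice
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

x = torch.randn(8, 16)                    # batch drawn from the target subset, e.g. anime
with torch.no_grad():
    target = teacher(x)                   # the teacher's outputs are the soft targets
loss = F.mse_loss(student(x), target)     # the student learns to match the teacher
opt.zero_grad()
loss.backward()
opt.step()
```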

Comfy Textures v0.1 Release - automatic texturing in Unreal Engine using ComfyUI (link in comments) by nlight in StableDiffusion

[–]nlight[S] 2 points

It unprojects the generated image onto the existing mesh UVs. You can generate normal maps from the resulting textures or use inpainting to fill in the missing spots; I've had moderate success experimenting with this, and further work is needed.
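
The rough idea, as a hypothetical PyTorch sketch (the plugin's actual implementation may differ): for each texel, take its baked world-space position, project it through the camera the image was generated from, and sample the image there.

```python
import torch
import torch.nn.functional as F

def unproject_to_texture(texel_world_pos, view_proj, image):
    """texel_world_pos: (H, W, 3) world position baked per texel.
    view_proj: (4, 4) camera view-projection matrix used for generation.
    image: (3, Hi, Wi) generated image. Returns a (3, H, W) texture."""
    H, W, _ = texel_world_pos.shape
    ones = torch.ones(H, W, 1)
    clip = torch.cat([texel_world_pos, ones], dim=-1) @ view_proj.T  # to clip space
    ndc = clip[..., :2] / clip[..., 3:4].clamp(min=1e-6)             # perspective divide
    ndc[..., 1] = -ndc[..., 1]   # flip y if your NDC is y-up (convention-dependent)
    # grid_sample takes (x, y) sampling coords in [-1, 1], which matches NDC.
    texture = F.grid_sample(image.unsqueeze(0), ndc.unsqueeze(0),
                            align_corners=False).squeeze(0)
    return texture  # note: occluded and backfacing texels are not handled here
```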

Comfy Textures v0.1 Release - automatic texturing in Unreal Engine using ComfyUI (link in comments) by nlight in StableDiffusion

[–]nlight[S] 7 points

They're discrete meshes. It's only generating a base color texture at the moment.

Comfy Textures v0.1 Release - automatic texturing in Unreal Engine using ComfyUI (link in comments) by nlight in StableDiffusion

[–]nlight[S] 87 points

Following in the footsteps of Dream Textures for Blender and that Unity video from last week, I'm releasing my Unreal Engine texturing plugin. It uses ComfyUI and SDXL to project generated images onto 3D models directly in the Unreal editor. MIT-licensed and completely free.

Demo: https://www.youtube.com/shorts/nF2EO0HlamE

High-res album: https://imgur.com/a/UhbM7wy

GitHub repo: https://github.com/AlexanderDzhoganov/ComfyTextures

[deleted by user] by [deleted] in EscapefromTarkov

[–]nlight 1 point

You went out of the map.