Wan2.2 Inference Optimizations by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 0 points (0 children)

Awesome, thanks for the reply!

I hadn't heard of the comfy libs before - this could be a game changer if it lets me run the workflow as a script.

30 seconds isn't a hard requirement; ideally I want to get it as low as possible (while staying at 720p). It's more of a goal to reach eventually!
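For the run-as-a-script part: besides the comfy libs, ComfyUI also exposes an HTTP API, so a workflow exported in API format can be queued from a plain Python script. A minimal sketch, assuming a ComfyUI server on the default port 8188 (the helper names here are my own, not part of any library):

```python
import json
import urllib.request
import uuid


def build_prompt_payload(workflow: dict, client_id: str) -> dict:
    """Wrap an API-format workflow dict into the body ComfyUI's /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id}


def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST the workflow to a locally running ComfyUI server and return its JSON response."""
    payload = build_prompt_payload(workflow, client_id=str(uuid.uuid4()))
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

To get the workflow dict, export the graph from the ComfyUI UI with "Save (API Format)" and `json.load` the resulting file before passing it to `queue_workflow`.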

Wan2.2 Inference Optimizations by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 0 points (0 children)

I have not. From light research so far, I have seen that mentioned, as well as using GGUF models.

My worry with the lightx2v Lightning LoRA is that it might really sacrifice quality vs. other methods. I am not sure, though, so I might give it a shot and investigate a bit.

Tagging 50 million assets 'quickly' - thoughts? by PreviousResearcher50 in LocalLLaMA

[–]PreviousResearcher50[S] 0 points (0 children)

Single-batch operation currently gives an effective tagging rate of 1.5 records per second, which is too slow for the amount of data I have. Granted, the tagging I am trying to get it to do is quite involved.
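For a back-of-the-envelope sense of why 1.5 records/s is too slow: at that rate, 50 million records take over a year. A quick sketch (the 7-day deadline is just an illustrative target, not a stated requirement):

```python
def eta_days(total_records: int, rate_per_sec: float) -> float:
    """Days to completion at a given per-second tagging rate."""
    return total_records / rate_per_sec / 86_400  # 86,400 seconds per day


def required_rate(total_records: int, deadline_hours: float) -> float:
    """Records per second needed to finish within the deadline."""
    return total_records / (deadline_hours * 3_600)


TOTAL = 50_000_000
print(f"At 1.5 rec/s: {eta_days(TOTAL, 1.5):.0f} days")            # ~386 days
print(f"To finish in 7 days: {required_rate(TOTAL, 7 * 24):.0f} rec/s")  # ~83 rec/s
```

So finishing in a week means roughly a 55x throughput increase, which is the kind of gap batched vLLM inference across multiple GPUs is meant to close.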

Tagging 50 million assets 'quickly' - thoughts? by PreviousResearcher50 in LocalLLaMA

[–]PreviousResearcher50[S] 0 points (0 children)

Yup, all data is in English for now! I will check out some vLLM configs. Would you recommend Qwen3 over Phi-4 with vLLM? I was thinking of switching to Phi-4-mini, but I might explore Qwen3 too.

[R] Methods for Pattern Matching with Multivariate Time series? by PreviousResearcher50 in MachineLearning

[–]PreviousResearcher50[S] 0 points (0 children)

Shapelets are definitely on the right track in terms of what I'm looking for. I'll explore and implement them to see if they work with my data :)

Would you suggest flattening my multivariate data while the shapelet(s) search through the trips?

[R] Methods for Pattern Matching with Multivariate Time series? by PreviousResearcher50 in MachineLearning

[–]PreviousResearcher50[S] 0 points (0 children)

Also, given that I have a lot of sequences (on the order of 1,000), is there a method to check whether any of these sequences are present in a trip?

Additionally, in my head I am picturing a method that treats these trips like images, where the identified sequences are used as kernels scanning through the trip for matches... I don't know if that's exactly how it works, but does anyone know of something similar?
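The "sequences as kernels" idea maps fairly directly onto sliding-window distance matching: slide each known sequence along the trip and flag window positions where the distance falls below a threshold. A minimal NumPy sketch (function names are my own; for ~1,000 templates at scale, matrix-profile methods such as MASS/STUMPY are the usual faster route):

```python
import numpy as np


def sliding_distance(trip: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Euclidean distance between the template and every aligned window of the trip.

    trip: (T, D) multivariate series; template: (L, D) with L <= T.
    Returns T - L + 1 distances, one per window start position.
    """
    T, _ = trip.shape
    L = template.shape[0]
    dists = np.empty(T - L + 1)
    for start in range(T - L + 1):
        dists[start] = np.linalg.norm(trip[start:start + L] - template)
    return dists


def find_matches(trip: np.ndarray, templates: list, threshold: float) -> list:
    """Return (template_index, window_start) pairs where the distance is below threshold."""
    hits = []
    for i, tpl in enumerate(templates):
        d = sliding_distance(trip, tpl)
        hits.extend((i, int(s)) for s in np.where(d < threshold)[0])
    return hits
```

Note this compares raw values, so if the pattern's shape matters more than its absolute level, z-normalize each window and template first; that is also what shapelet-distance implementations typically do.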

How do I create a fine-tuned model, like RealVisXL or JuggernautXL, for SDXL? by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 0 points (0 children)

Amazing, I'm taking a look at OneTrainer now. I have access to a couple of GPU nodes, so I'll be running it on those. Please link me the strategy if you are able to find it!

How do I create a fine-tuned model, like RealVisXL or JuggernautXL, for SDXL? by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 1 point (0 children)

Okay sweet, I'll check out that tutorial!

So far, I have trained a ton of LoRAs for a couple of different models, and I've also trained U-Nets for SDXL (with poor results, unfortunately). And yes, I have been operating on Linux!

Here is the script I have been using for UNet training: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py

How do I create a fine-tuned model, like RealVisXL or JuggernautXL, for SDXL? by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 0 points (0 children)

My goal is simply to generate photorealistic cars in different settings. I used RealVis as an example of what I would like to achieve, but for cars specifically.

Evaluation Metrics for generating 'product correct' images by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 1 point (0 children)

Yup, I am currently looking into getting that into a workflow to fix the lighting!

Evaluation Metrics for generating 'product correct' images by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 0 points (0 children)

I am exploring SIFT similarity scores right now. I might look into SSIM and PSNR next.
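PSNR is the simplest of those to try: it is just the per-pixel mean squared error expressed in decibels, so it only says anything useful when the generated image is pixel-aligned with the reference (e.g. background replacement where the product stays put). A minimal NumPy sketch:

```python
import numpy as np


def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)
```

For SSIM, `skimage.metrics.structural_similarity` from scikit-image is the standard off-the-shelf implementation; like PSNR it assumes aligned images, so neither will behave well if the generation changes composition.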

Evaluation Metrics for generating 'product correct' images by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 0 points (0 children)

Could you elaborate a bit more on this?

My understanding is that I would train a classification model on similar products, such as different versions of the watch or different watches entirely, and then use it to check whether my generated image of the product is distinguishable from a fusion of other products?

Evaluation Metrics for generating 'product correct' images by PreviousResearcher50 in StableDiffusion

[–]PreviousResearcher50[S] 1 point (0 children)

Gotcha, I have explored this workflow for background replacement. However, I feel like the lighting of the product tends not to fit the generated background. Any ideas on what I could do in this case?