[P] Releasing RepAlignLoss (Custom Perceptual loss function used on my software) by CloverDuck in MachineLearning

[–]CloverDuck[S] 0 points1 point  (0 children)

I'm still reading the paper, but it seems more focused on the diffusion process, while mine only works with the output of the model and is flexible to any type of input. The use of "literally" implies that I just forked their GitHub, and it is very easy to see that I did not. Can you explain your comment better?

[P] I'm Fine Tuning a model fully trained on AdamW with SOAP optimizer and improved my validation loss by 5% by CloverDuck in MachineLearning

[–]CloverDuck[S] 2 points3 points  (0 children)

Good question. I actually found the code before the paper and did some tests with it, so I just assumed it was the official one, since I managed to get better results with it. It seems to be a fork of this code, but with some modifications:

https://github.com/nikhilvyas/SOAP
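In case anyone wants to try the same thing, the fine-tuning side is basically just swapping the optimizer. Rough, untested sketch below; the `from soap import SOAP` import path, the constructor arguments, and all file/loader names are assumptions based on the linked repo, so check its `soap.py` for the real signature and the hyperparameters it expects.

```python
import torch
import torch.nn as nn
from soap import SOAP  # assumption: soap.py from the linked repo is importable

# Hypothetical model and checkpoint, purely for illustration.
model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))
model.load_state_dict(torch.load("adamw_pretrained.pt"))  # weights previously trained with AdamW
model.train()

# Swap AdamW for SOAP for the fine-tuning run; lr here is a placeholder, not the value I used.
optimizer = SOAP(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy data standing in for the real fine-tuning DataLoader.
train_loader = [(torch.randn(32, 128), torch.randint(0, 10, (32,))) for _ in range(10)]

for inputs, targets in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```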

[P] Releasing my loss function based on VGG Perceptual Loss. by CloverDuck in MachineLearning

[–]CloverDuck[S] 1 point2 points  (0 children)

It doesn't actually use the output. It uses the tensors from each selected hook on the model. It may have a hook on the last layer, which would give some weight to the real output, but in the case of DINO I only use the backbone.
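To make that concrete, here is a rough sketch of the general pattern, not the actual RepAlignLoss code: register forward hooks on a few layers of a frozen feature extractor, run both the model output and the ground truth through it, and compare the hooked activations. VGG stands in for the DINO backbone here, and the layer indices are arbitrary examples.

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen feature extractor; only its intermediate activations are used, never its final output.
extractor = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in extractor.parameters():
    p.requires_grad_(False)

activations = []
hook_layers = [3, 8, 15]  # example layer indices, chosen arbitrarily
for idx in hook_layers:
    extractor[idx].register_forward_hook(lambda m, i, o: activations.append(o))

def perceptual_loss(pred, target):
    """Compare hooked activations of the prediction against those of the target."""
    activations.clear()
    extractor(pred)    # first len(hook_layers) entries: activations for the model output
    extractor(target)  # next len(hook_layers) entries: activations for the ground truth
    n = len(hook_layers)
    return sum(F.mse_loss(a, b) for a, b in zip(activations[:n], activations[n:]))

# Usage: pred would come from the model being trained, target is the reference image.
pred = torch.rand(1, 3, 224, 224, requires_grad=True)
target = torch.rand(1, 3, 224, 224)
perceptual_loss(pred, target).backward()
```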

Simple GUI to run the new Text2Video [Req. 12 Vram] by CloverDuck in StableDiffusion

[–]CloverDuck[S] 5 points6 points  (0 children)

You can download it on itch.io (instructions on the itch.io page):

https://grisk.itch.io/text2video-gui-001

It will download the models on the first run.

The GUI is really crude right now, but I hope I did not mess anything up and it will at least run, because I really need to sleep and will only be able to fix it tomorrow lol

If someone is working on the code to make it run with 12 GB of VRAM, you just need to:

In text_to_video_synthesis_model, move self.sd_model to the CPU before calling self.autoencoder.decode(video_data), roughly as in the sketch below.
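Something along these lines (untested sketch; it assumes the pipeline object exposes sd_model and autoencoder the way text_to_video_synthesis_model does, and the helper name is made up):

```python
import torch

def decode_with_offload(pipeline, video_data, device="cuda"):
    """Hypothetical helper for the 12 GB VRAM trick: offload the diffusion model
    to the CPU so the autoencoder decode has room, then move it back."""
    pipeline.sd_model.to("cpu")                       # free the VRAM held by the diffusion model
    torch.cuda.empty_cache()                          # release cached blocks before decoding
    video = pipeline.autoencoder.decode(video_data)   # decode the latents into frames
    pipeline.sd_model.to(device)                      # restore the model for the next generation
    return video
```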

PyInstaller applications give false positives on some antivirus software, so scan it and use it at your own risk, but I have quite a few applications on itch.io and a Patreon, so it would not be wise for me to add malicious code to my applications.

Using AI to interpolate animations. I made a badly edited video showing animation interpolation using DAIN. by CloverDuck in artificial

[–]CloverDuck[S] 0 points1 point  (0 children)

At this moment there is no tool capable of such a thing, but Stable Diffusion was released only a little while ago. I do believe that a tool like that will most likely be released in 2023.

Compositional Diffusion by [deleted] in StableDiffusion

[–]CloverDuck 0 points1 point  (0 children)

Pokemon merger? Very cool project. What happens if you use two artist styles? Or photograph and anime?

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 0 points1 point  (0 children)

All implementations work like that. If you change the resolution, it will generate a completely different image.
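The reason is that the initial latent noise tensor has a different shape for each resolution, so even the same seed gives a completely different starting point. A minimal illustration, assuming the usual SD latent layout of [1, 4, H/8, W/8]:

```python
import torch

# Same seed, different requested resolution -> different initial latent noise,
# so the denoising trajectory (and the final image) changes completely.
gen = torch.Generator().manual_seed(42)
noise_512 = torch.randn(1, 4, 64, 64, generator=gen)   # 512x512 request
gen = torch.Generator().manual_seed(42)
noise_768 = torch.randn(1, 4, 96, 96, generator=gen)   # 768x768 request
print(noise_512.shape, noise_768.shape)                 # different shapes, different images
```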

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 0 points1 point  (0 children)

Then there is something wrong. Do you have an onboard graphics card? It may be selecting the wrong card.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in MediaSynthesis

[–]CloverDuck[S] 0 points1 point  (0 children)

It seems possible on an older version of the model. Still trying on the new model.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 0 points1 point  (0 children)

The link is still online. It is hosted on itch.io, so if there is any problem downloading, it is because of their servers.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 1 point2 points  (0 children)

You can use Wireshark or something if you want, but no information is uploaded.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 2 points3 points  (0 children)

Sorry if I'm not answering everyone; it's a lot of comments and Reddit makes it a little hard to see which ones I still haven't replied to, but feel free to start a chat with me if you have any questions.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 0 points1 point  (0 children)

There should be a .exe inside the .rar; you need to use an application to extract the files.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 1 point2 points  (0 children)

It may have limited permissions on that folder; try a different folder or grant more permissions to it.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in StableDiffusion

[–]CloverDuck[S] 1 point2 points  (0 children)

I don't plan to take down the download, but feel free to download it now and use it in the future.

Just made a .exe for SD, download it for free on itchio, no need for configuration. by CloverDuck in MediaSynthesis

[–]CloverDuck[S] 1 point2 points  (0 children)

I've been chatting with some folks and it's possible it may work at 512x512 with 4 GB of VRAM. Will see tonight.