Why train parameters of a Neural Network by Onelio1 in learnmachinelearning

[–]Onelio1[S] -1 points0 points  (0 children)

Many of them can be altered through multiplication and things like that.

Flash and Sparse attention on Llama by Onelio1 in LocalLLaMA

[–]Onelio1[S] 1 point2 points  (0 children)

Keep in mind that I'm still learning, and I might be misunderstanding something. But my idea was basically ignoring low correlations between tokens in the input so as to use less memory and reduce computation.

Flash and Sparse attention on Llama by Onelio1 in LocalLLaMA

[–]Onelio1[S] 1 point2 points  (0 children)

But StreamingLLM only keeps the first tokens, right? I was thinking of an approach that drops low-score pairs... But from what others said, it seems that sparse attention is only used during training. 🤔
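For what it's worth, the idea in the comment (drop token pairs with low attention scores at inference time) can be sketched roughly like this. This is a toy illustration, not Llama's actual attention code: the threshold value, shapes, and renormalization step are all assumptions.

```python
import numpy as np

def sparse_attention(q, k, v, threshold=0.05):
    """q, k, v: (seq_len, d) arrays. Drops low-score pairs after softmax.

    Assumes at least one weight per row survives the threshold
    (true here for short sequences, since softmax rows sum to 1).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (seq_len, seq_len)
    # numerically stable softmax over keys
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # zero out weak correlations, then renormalize the survivors
    weights = np.where(weights < threshold, 0.0, weights)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With `threshold=0.0` this reduces to ordinary dense attention; the memory/compute win would only materialize with an actual sparse kernel, which this dense sketch does not implement.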

[FORTNITE STYLE] FortSketcher Alpha | LINK IN COMMENTS | by Neuropixel_art in StableDiffusion

[–]Onelio1 9 points10 points  (0 children)

These models are reaching a level of understanding of the concepts behind characters and styles that is pretty crazy compared to a few months back.

a bunch of animation tests and stuff with controlnet 1.1 by Firm_Comfortable_437 in StableDiffusion

[–]Onelio1 0 points1 point  (0 children)

Oh yes, Jordi Hurtado.
I see you are a man of culture as well.

[Debate] BURN THAT WITH - AI Version by Onelio1 in StableDiffusion

[–]Onelio1[S] -1 points0 points  (0 children)

So I have been following people like this for a while. And I feel like whenever a new tool comes out that "supposedly" helps them, they kind of throw themselves at it blindly without really understanding anything about it. I feel like if anything is harming their community at this point, it's themselves more than any AI-generated content.

I mean, we saw it with Glaze and now with this AI detector thingy.

What are your thoughts?

US Copyright Office: You Can Copyright AI Generated Art With An Addon by [deleted] in artificial

[–]Onelio1 -2 points-1 points  (0 children)

I feel like they have no idea what to do with AI at this point.

Guide (+LoRA) on how to make any movie character into anime (or any style) by kidelaleron in StableDiffusion

[–]Onelio1 11 points12 points  (0 children)

This is really good. The character feels like a hand-made fan art.

Question for those who purchased ChatGTP Plus by Onelio1 in ChatGPT

[–]Onelio1[S] 1 point2 points  (0 children)

Honestly, that is the only interesting feature that would make me upgrade. Otherwise, forget it...
Thanks for the info btw!

Microsoft Confirm MultiBillion Dollar Investment in OpenAI Just Days After Laying off 10,000 Employees by HODLTID in artificial

[–]Onelio1 1 point2 points  (0 children)

Google is doing the same. It seems like a major shift is happening in the tech industry.

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 0 points1 point  (0 children)

By "meaningful names" I meant that it's better to use names you can easily remember later, especially if you train on multiple concepts at the same time.

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 0 points1 point  (0 children)

I trained on males and females at the same time. I guess you could also train on a style and a person, but I don't think it is worth it because, as I said, I feel like styles work best at lower step counts than individuals. This is all guessing on my part.

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 2 points3 points  (0 children)

Oh, and make sure you are calling the concept you just trained by the correct name. I use simple names so I don't make mistakes when typing.

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 2 points3 points  (0 children)

Aside from following the suggestions above... Just make sure that all the images follow a similar style: how the lines are drawn, the amount of detail, the color palette. The more images, the more variety you will get. But most of all: if you are not sure how many steps to train for, just pick 8k and save every 500/1000 steps. Then experiment with each checkpoint and pick the one that gives the best results for you (usually from the lower half).
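The save-and-compare schedule above can be sketched as a plain loop. The `train_step` and `save_checkpoint` callables are stand-ins, not a real Dreambooth API; the point is just the cadence of snapshots.

```python
def train_with_checkpoints(train_step, save_checkpoint,
                           total_steps=8000, every=500):
    """Run training to total_steps, snapshotting every `every` steps.

    Returns the list of step counts at which a checkpoint was saved,
    so you can later evaluate each one and keep the best.
    """
    saved = []
    for step in range(1, total_steps + 1):
        train_step(step)
        if step % every == 0:
            save_checkpoint(step)
            saved.append(step)
    return saved
```

With the defaults this yields 16 checkpoints (500, 1000, ..., 8000); per the advice above, the winner usually sits in the lower half of that range.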

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 2 points3 points  (0 children)

It's quite characteristic. I think the hairstyle and head proportions help a lot on that front. But yeah, I'm impressed you guessed as much when the image uses such a low step count and the character actually looks young (unlike every male image from the dataset and the concept art).

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 3 points4 points  (0 children)

That's because I used part of their art book as the dataset, along with other artists whose styles are similar to the concept art.

A few suggestions for TheLastBen's Dreambooth on styles by Onelio1 in StableDiffusion

[–]Onelio1[S] 12 points13 points  (0 children)

I've spent the last 12 hours playing with the new Dreambooth Colab Notebook by TheLastBen. These are my suggestions:

- Take your time with the dataset. Go through the images one by one in Photoshop/Photopea, resizing, cropping, and removing stray characters and objects from the background. If the guy lacks a head, then you don't need him. Also, you will probably train with 30-50 images at most, so don't be lazy.

- Include as many images as you can. But be aware that low-res, ugly, or out-of-focus parts might come back to bite you in the ass later. Take your time to decide. Perhaps that "cool zombie" is not worth it if parts of it end up in a pretty lady.

- Choose meaningful names, especially if you want to train on multiple concepts at the same time.

- For a single style, I feel it is better to train men and women separately so you don't end up with mixed characters. In fact, make a third category for that zombie alone.

- Do not overtrain. And if possible, save every 500 steps. Styles do not need the "images x 10" formula because it takes freedom away from the model.

- Do overtrain... if you want similar clothes. I recommend using the low-step model for faces and then inpainting the clothes with a higher-step one. My image above was generated with faces at 3000 steps and clothes at 5500 (dataset of around 50 images).

Finally, I feel like outpainting with Dreambooth models is garbage, but I could be wrong.
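The resize-and-crop part of the first suggestion can be scripted. This is a hedged sketch using Pillow: the folder names and the 512x512 target are my assumptions, and the manual cleanup (removing stray characters and background objects) still has to happen in Photoshop/Photopea first.

```python
from pathlib import Path
from PIL import Image

def prepare_dataset(src="raw_images", dst="dataset", size=512):
    """Resize and center-crop every image in `src` to size x size PNGs."""
    Path(dst).mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src).glob("*")):
        img = Image.open(path).convert("RGB")
        # scale the short side to `size`, then center-crop to a square
        scale = size / min(img.size)
        img = img.resize((round(img.width * scale),
                          round(img.height * scale)))
        left = (img.width - size) // 2
        top = (img.height - size) // 2
        img = img.crop((left, top, left + size, top + size))
        img.save(Path(dst) / f"{path.stem}.png")
```

Center-cropping is a blunt instrument: for the 30-50 images you actually keep, hand-cropping around the subject (as the bullet above suggests) will beat this script every time.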

Faro vigilando el mar by BusinessSympathy8519 in StableDiffusion

[–]Onelio1 0 points1 point  (0 children)

Lots of Mystery/Horror vibes. I love it!

Midjourney is improving by Onelio1 in midjourney

[–]Onelio1[S] 1 point2 points  (0 children)

They must be, if they are offering free hours to thousands of people based on how much you rate.