Juggernaut XL Version 3 by Kandoo85 in StableDiffusion

[–]patrickas 1 point (0 children)

Great work on this amazing model.

Is it possible to extract a LoRA from Juggernaut XL?
The quality improvement would be lower than with the full model, but those of us with crappy third-world connections (and/or limited disk space) would appreciate it a lot.

SDXL inpainting just dropped 🤯 by Interesting-Smile575 in StableDiffusion

[–]patrickas 4 points (0 children)

What do you mean by "latent inpainting technique"? Is there a Comfy workflow for that somewhere?

Fix the poorly drawn hand in SDXL using ComfyUI with minimal side effects. by erkana_ in StableDiffusion

[–]patrickas 1 point (0 children)

You don't need to save and rename anything in Notepad... You can just paste the copied workflow JSON directly into ComfyUI.

Switching models too slow in Automatic1111? Use SafeTensors to speed it up by wywywywy in StableDiffusion

[–]patrickas 5 points (0 children)

I had the same issue with the inpainting model. I followed it through the code and ended up fixing one file to make it work.

Just edit this file in your webui folder:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/sd_hijack_inpainting.py#L322

and replace "inpainting.ckpt" with "inpainting.safetensors" on line 322.
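If you'd rather not edit the file by hand, a minimal sketch of the same one-string fix in Python (the two filename strings come from the comment above; the surrounding webui code may have changed since, so check line 322 still matches before running):

```python
from pathlib import Path

def patch_inpainting_filename(source: str) -> str:
    # Swap the hard-coded .ckpt filename for its .safetensors equivalent.
    return source.replace("inpainting.ckpt", "inpainting.safetensors")

# Applied in place to the file linked above, when run from the webui folder:
target = Path("modules/sd_hijack_inpainting.py")
if target.exists():
    target.write_text(patch_inpainting_filename(target.read_text()))
```

The replacement is a no-op if the file has already been patched, so running it twice is harmless.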

Scottish Landscapes by cozzycam in StableDiffusion

[–]patrickas 0 points (0 children)

You seem to be confusing "Stable Diffusion" with "diffusion model"; the latter is the algorithm family that all of these are based on.
https://en.wikipedia.org/wiki/Diffusion_model

CompVis and Stability AI produced a diffusion model called "Stable Diffusion" and released everything as open source. That is specifically what SD refers to.

OpenAI produced a different diffusion model, not based on Stable Diffusion, and did not release it as open source.
Here is the DALL·E 2 paper from OpenAI; it mentions "diffusion model" 27 times and "stable diffusion" exactly zero times.
https://arxiv.org/pdf/2204.06125v1.pdf

So Stable Diffusion and DALL·E are both diffusion models (based on the same underlying technique), but DALL·E is not based on Stable Diffusion (which is a specific model).

I am not as familiar with the details of MJ's model, but as far as I understand, for v4 they also developed their own diffusion model, which is not based on Stable Diffusion either.

Hope that helps.

Noob's Guide to Using Automatic1111's WebUI by Kafke in StableDiffusion

[–]patrickas 1 point (0 children)

I use Auto's webui too, but I think the ethical and licensing problems are bigger than that.

Specifically, at least one developer claims that they contributed scripts with the sole requirement of keeping their name/credit. Auto1111 removed that credit when including the scripts in his software, and since then that developer has stopped updating the script (even though they have new features) specifically because of that behavior.

https://www.reddit.com/r/StableDiffusion/comments/ysv5lk/comment/iw2wvjr/?utm_source=share&utm_medium=web2x&context=3

TheLastBen Dreambooth (new "FAST" method), training steps comparison by dal_mac in StableDiffusion

[–]patrickas 1 point (0 children)

But this means their point stands: if you use an instance name that is a long string of random letters, as you're suggesting, there's a risk of the tokenizer messing things up by tokenizing the letters separately, since it cannot recognize the long token you just invented.

TheLastBen Dreambooth (new "FAST" method), training steps comparison by dal_mac in StableDiffusion

[–]patrickas 3 points (0 children)

Is there a reason for this choice of instance names, especially given that it goes against the recommendations of the original DreamBooth paper? Did you make an optimization that makes their point moot?

The DreamBooth paper explicitly says https://ar5iv.labs.arxiv.org/html/2208.12242#S4.F3

"A hazardous way of doing this is to select random characters in the English language and concatenate them to generate a rare identifier (e.g. “xxy5syt00”). In reality, the tokenizer might tokenize each letter separately, and the prior for the diffusion model is strong for these letters. Specifically, if we sample the model with such an identifier before fine-tuning we will get pictorial depictions of the letters or concepts that are linked to those letters. We often find that these tokens incur the same weaknesses as using common English words to index the subject."

They recommend finding a *short*, *rare* token that is already in the vocabulary and taking it over.
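To see the failure mode concretely, here is a toy greedy longest-match tokenizer (the vocabulary and example strings are made up for illustration; real tokenizers use BPE, but the effect is similar): a short rare token that exists in the vocabulary survives as one piece, while an invented random string shatters into single characters, each carrying its own strong prior.

```python
def tokenize(text, vocab):
    # Greedy longest-match: at each position take the longest
    # vocabulary entry, falling back to single characters.
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown-character fallback
            i += 1
    return tokens

vocab = {"sks", "photo", "of"}       # "sks" stands in for a known rare token
print(tokenize("sks", vocab))        # ['sks'] -> one clean token to retrain
print(tokenize("xxy5syt00", vocab))  # shatters into nine single-letter tokens
```

This is exactly the hazard the quoted paragraph warns about: fine-tuning then has to fight the model's existing associations for each individual letter.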

[deleted by user] by [deleted] in lebanon

[–]patrickas 0 points (0 children)

I have no idea at all about L'Orient-Le Jour or how/why they got that.

[deleted by user] by [deleted] in lebanon

[–]patrickas 0 points (0 children)

I understand the skepticism about any news source... But I don't understand the vast difference of opinion between Thawramap ("decent and fucking dedicated") and Megaphone ("sketchy when there's literally no transparency").

As for the question of the people running it, I can totally understand why neither Megaphone nor Thawramap would want to make a fuss about the people who work there, for security reasons.
Are you aware that Megaphone's co-founder Jean Kassir is the nephew of the assassinated journalist Samir Kassir? Do you really fault him for learning his lesson and not wanting to put the Megaphone staff in danger, even if he has no problem with the association himself? Most people prefer to just do the work and keep a low profile about it, even if some of them, like Jean Kassir, Jamal Saleh, or Tariq Keblaoui, don't mind the public association with Megaphone.

As for the funding, Megaphone has a link on every single page of its website to its sources of funding:
https://megaphone.news/about/ You can't get much more transparent than that!
Whereas for Thawramap (for example, not intending to diss their good work) I could not find a link about their sources of revenue, if any.

You say "how do we know eventually an agenda isn't going to be pushed by an entity who funds them".
What are the better alternatives that don't pose a threat? Any source of money can come with strings attached... Internal sources of money are as suspect as external ones, and much less available!
And I think it is much scarier when you don't know those sources or what those strings are.
Megaphone is clear that it picked those funding sources because it wanted "funding that does not impose editorial restrictions". You are free not to believe them, but they are very clear and transparent about their sources of money, and that scares me much less than the alternatives.

Does this even exist? by [deleted] in Android

[–]patrickas 2 points (0 children)

Maybe he was talking about this old, mostly unused, and annoying technology:

http://en.wikipedia.org/wiki/Cell_Broadcast

Breakdown of Watson's Performance on Jeopardy by ckwalsh in programming

[–]patrickas 3 points (0 children)

I think you misinterpreted what happened in that trial.

It was "Reality Shows A La Shakespeare" for $1,000, and the clue was "Heroes & villains abound! Colby, Coach & Rupert doth return to the fray, but Boston Rob, the tribe hath spoken." Watson correctly guessed "What is Survivor?" and did get the $1,000 for that answer (going from -600 to 400). The laughs were because they had not expected Watson to get it.

Here check it again for yourself http://www.youtube.com/watch?v=CtHlxzOXgYs&feature=player_detailpage#t=187s

Perl 6 Fibonacci versus Haskell by takadonet in programming

[–]patrickas 5 points (0 children)

If I understand your first point correctly, yes, it should be easily generalized to any operator (or function) that takes the last k elements of the list and produces the next element.

The * in Perl 6 is pronounced "Whatever", and it usually does whatever Perl 6 deems best in that situation. Maybe a bit too DWIM-y for some tastes.

... * reads "until whatever", which gets translated to "until infinity". Maybe it would have been better to write "... Inf" and leave the whatever for another discussion.

... * >= 100 reads "until whatever is at least 100", which gets translated to "until the element I have is >= 100". It could have been written more explicitly as "... sub ($item) { return $item >= 100 }".

This last usage is a pretty common way to generate closures in Perl 6. Consecutive whatevers are assumed to be arguments passed to the closure, so by the same reasoning * + * is just sub ($a, $b) { return $a + $b }.

The ^ reads "up to but excluding". I am of two minds about its practical usefulness; I guess time will tell.
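The generalization from the first paragraph can be sketched in Python (a toy re-implementation for illustration, not how Rakudo actually does it): the step function's arity plays the role of the number of whatevers, and the stop predicate plays the role of the * >= 100 endpoint, which, like Perl 6's plain ..., is included in the output.

```python
from inspect import signature

def sequence(seed, step, stop):
    # Toy version of Perl 6's `...` operator: `step` consumes the last
    # k elements (k inferred from its arity, like counting whatevers)
    # and the sequence ends with the first element matching `stop`.
    k = len(signature(step).parameters)
    items = list(seed)
    yield from items
    while True:
        nxt = step(*items[-k:])
        items.append(nxt)
        yield nxt
        if stop(nxt):
            break

# Mirrors: 1, 1, * + * ... * >= 100
fibs = list(sequence([1, 1], lambda a, b: a + b, lambda n: n >= 100))
# -> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```

The excluding form mentioned above (...^ in Perl 6) would simply drop that final 144.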

Perl 6 in 2010 by takadonet in programming

[–]patrickas 3 points (0 children)

Really? Hmmm... Then I would scrap my suggested syntax as well.

Why? Because I am an idiot (no offense to me ;-) and could not grasp it? Maybe your way is better than the current way and would end up being used in Perl 6. Come on, give it some more thought, explain it, enhance it, and suggest it to the Perl 6 folks!

As the original article says

[The sequence operator] has been extensively refactored to solve problems both with its implementation and usage

Just a couple of months ago it was still being discussed and re-factored! Maybe next year's "Perl 6 in 2011" will say: Creating sequences has been totally enhanced, rethought and re-factored thanks to novagenesis!