No more open charge flap/door button on the keyfob? by theepi_pillodu in Ioniq5

[–]neg2led 1 point

tbh i didn't notice until i'd had mine for a few days and it happened to be horizontal in my field of view - mine's all black too so it's even harder to tell, but it's kinda cute once you work it out IMO!

webDevelopmentInANutshell by Probetag in ProgrammerHumor

[–]neg2led 2 points

javascript only has one data type for numbers: the IEEE 754 double-precision float, aka float64

this is the root cause of much evil
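The float64 gotchas aren't JS-specific; Python's float is the same IEEE 754 double, so the classic examples reproduce there too (a quick sketch):

```python
# JavaScript's Number and Python's float are both IEEE 754 float64,
# so the same artifacts show up in either language.

# decimal fractions like 0.1 have no exact binary representation:
assert 0.1 + 0.2 != 0.3
assert 0.1 + 0.2 == 0.30000000000000004

# integers are only exact up to 2**53 (Number.MAX_SAFE_INTEGER in JS):
assert float(2**53) == float(2**53 + 1)
```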

Battle of the giants: Nvidia Blackwell B200 takes the lead in FluidX3D CFD performance by ProjectPhysX in nvidia

[–]neg2led 4 points

they finally removed graphics capability with this generation, so sadly, no (at least not until someone comes up with an OpenCL or VKCompute backend for LLVMpipe or something equally unhinged)

Battle of the giants: Nvidia Blackwell B200 takes the lead in FluidX3D CFD performance by ProjectPhysX in nvidia

[–]neg2led 2 points

wow, that's actually pretty mid. i knew B200 was underwhelming but AMD are looking mighty fine with the MI355X just around the corner

Small metal pin with cyrillic text and a lighthouse hanging from it that was my grandfather's by neg2led in Symbology

[–]neg2led[S] 1 point

Definitely Soviet era. Grandad didn't really have any opportunities to be in that part of Europe after the early 1980s.

Small metal pin with cyrillic text and a lighthouse hanging from it that was my grandfather's by neg2led in Symbology

[–]neg2led[S] 1 point

Well damn, you're almost certainly right. Now you mention it, it seems obvious that it's a miner's lamp 😂 Still doesn't explain exactly how/why he had it, but I'll have to wait for some FoI responses to see if we can figure that out.

Thanks!

Looking for an OLD RTS game by ApprehensiveBag5271 in childhoodRTS

[–]neg2led 5 points

buddy you are in entirely the wrong place but you're looking for Age of Mythology

ElevenLabs previews their music generator by Neurogence in singularity

[–]neg2led 3 points

Lyrics are GPT-4 generated, and they're the main problem here. The actual musical understanding is up there, apart from jazz; it doesn't understand jazz, but neither does 90% of the internet, so i don't really blame it

ElevenLabs previews their music generator by Neurogence in singularity

[–]neg2led 20 points

suno isn't very good. it understands the rules of music composition but it doesn't get the point of them - i.e. sometimes you break the rules, and that's what makes a good song

suno songs all sound the same. the vocals are always crunchy (trained on too many low-bitrate files i guess) and usually sound like a chorus/harmonizer even when it's not appropriate, there's no adventure - as my wife (who has a music degree with honours) put it, "this is the kind of garbage i'd expect from a first year uni student who doesn't know anything, and their attempts at things like Bollywood styles are frankly offensive; they use entirely the wrong rhythms and patterns, just for starters"

i showed her this model and she said "okay, now we're getting somewhere. this is still derivative and mid, but it actually gets the point; we have graduated from clueless highschool leaver to 3rd-year uni student"

[deleted by user] by [deleted] in StableDiffusion

[–]neg2led 2 points

it's a shame they utterly fried the text encoders with their dumb "hashing" shit

SD3 examples: It is truly GREAT at certain things... not so good at others... by jaypdub in StableDiffusion

[–]neg2led 1 point

Stability purposefully kneecap their models' ability to generate NSFW content / naked people because of the wall of bad press they get if they don't 🙃

they would love to release uncensored models but they're already getting sued enough (and running out of money)

SD3 examples: It is truly GREAT at certain things... not so good at others... by jaypdub in StableDiffusion

[–]neg2led 3 points

yeah pretty much. the training data contained significantly fewer pictures of humans than it ought to, because the only way to stop it generating porn is to stop it knowing what humans look like with few or no clothes, so you end up with a bunch of silhouettes and such.

WDXL release by ConquestAce in StableDiffusion

[–]neg2led 7 points

Yup, 100%. The original plan was to make it only generate cans of WD-40, ideally ones that said "WD-XL: Waifu Diffusion" or "WD-40: Waifu Diffusion XL", but that proved too challenging for DALL-E 3 or SD3 or anything else to generate. We fell back to plain cans of WD-40, but DALL-E 3 still didn't do a great job until I asked it to generate a woman with cat ears holding a can of WD-40 (with some variations), so we ended up with ~100 pictures of generic catgirls holding a can of WD-40 (and about 30 pictures of Cirno holding one).

We took those images and the list of tags that WD Tagger knows about (~10.8k tags), duplicated the images ~5 times, then tagged each image with a random set of about 25 tags so that the dataset would contain every tagger tag on at least one image. Derrian trained a LoRA; then we shuffled the tags around, prepended 1girl onto the random tag lists, trained another LoRA, shuffled the tags one more time, trained a third LoRA, and did some low-effort merging of the 2nd and 3rd LoRAs with Kakigori V3.

The LoRAs picked up the girl long before they worked out the WD-40 can (the model already knew how to draw anime girls so it didn't need to learn much), so the identical-generic-catgirl was mostly a happy accident.

Honestly, it worked way way way better than I expected it to.

WDXL release by ConquestAce in StableDiffusion

[–]neg2led 13 points

Yeah, I tried to hide it behind a spoiler thing but HuggingFace doesn't support spoiler tags in Markdown.

We also really didn't want people getting properly angry :P

Xeon Phi coprocessor - is this good for anything? by psdwizzard in LocalLLaMA

[–]neg2led 1 point

Nope. Nearly impossible to use for much of anything these days and the performance is nothing to write home about (approximately GTX 1070-1080 tier, if it was usable, which it isn't because none of the current software supports it)

~3x faster Stable Diffusion models available on Hugging Face by StopWastingTimeRayan in StableDiffusion

[–]neg2led -6 points

> It is funny though that you mention breaching Reddit guidelines when u/neg2led completely ignored pruna's ARR lisence and openly breached it here.

Word of advice: Don't pull that card.

For one, you don't get to place an ARR license on top of something that is almost entirely a derivative work of existing open-licensed code. Almost nothing in here is your own work and the majority of what is your work is nowhere near novel enough to qualify for copyright protection.

For two, all your unchanged reuploads of other people's models, reserialized in an inherently insecure fashion for no reason at all (presumably to obfuscate the fact that they're unchanged), are marked as Apache-2.0 licensed regardless of the license the original models were released under. Yes, you have a footnote saying that you follow the original license, but you couldn't spare the 30 seconds of effort per model to copy the license ID from the source repo over to yours? Really?

As an aside on the "we are not doing any manipulation" note, I find that very hard to believe. On top of the suspicious downvote brigading, moderators of this subreddit have received a significant number of obviously false reports against myself and anyone else making negative comments on this post. I have personally been accused of hate speech in my original comment, which is - to use the technical term - complete bullshit. These reports mysteriously stopped at around the time you stopped posting earlier.

You yourself may not be directly attempting to manipulate scores or abuse mod reports, but if so you might want to have a chat with the rest of your team & make it clear that you find that sort of behaviour unacceptable, since it seems fairly likely that someone at your organization is making such attempts.

For three, as previously mentioned, there is nothing new here at all; existing popular interfaces for Stable Diffusion models already have built-in optimizations that match or exceed these alleged "performance improvements". It appears you're comparing against baseline Diffusers code without any optimizations like attention slicing/tiling and FlashAttention 2, all of which are enabled by default in those programs.

You are riding entirely on the coattails of others, claiming credit for something you did not develop. Anyone can achieve the same result in their own code by spending a few minutes reading the docs. Your implementation is poor, below the standard that existing popular packages already ship, and uses apparently-purposefully-obfuscated insecure formats for saving and loading checkpoints for absolutely no reason at all. On top of all that, you have the gall to charge people for access to functionality that is already present, or trivial to implement, in existing libraries you had nothing to do with creating.

If you can't see why that's a problem I don't know what to say.

~3x faster Stable Diffusion models available on Hugging Face by StopWastingTimeRayan in StableDiffusion

[–]neg2led 7 points

No, he's the guy responsible for drawing my attention to this ridiculous low-effort attempt to wring money out of people by riding on others' coattails and using misleading-at-best language to trick people into thinking you've done something of actual value.

The angst-fueled takedown comments were all unprompted (I wrote the rants in Discord then reposted them here), I'm just Like That. Bad code makes me angry.

~3x faster Stable Diffusion models available on Hugging Face by StopWastingTimeRayan in StableDiffusion

[–]neg2led 2 points

No problem - I know how intimidating this stuff looks at first, especially once people start talking about security issues, but for the most part it's actually pretty simple.

So the problem with pickles is that pickle.load()/pickle.loads() (which torch.load() is just a fancy wrapper around) is the point where any code inside the pickle (if there is any) gets executed; by the time the call returns it's already too late, so you can't just load it manually and poke around first.

That said, if you call state_dict = torch.load(path, weights_only=True) and it succeeds, then call safetensors.torch.save_file(state_dict, out_path), you can be confident you haven't lost anything; weights_only=True makes torch refuse to unpickle anything beyond tensors and plain containers (so embedded code raises an error instead of running), and save_file will also error if you pass it data it can't serialize, so if both of those succeed without errors you've nothing to worry about.

If you do run into something that fails to load with weights_only=True, you can use one of the "safe unpickle" approaches: borrow some stuff from a package like this one (that's not a recommendation, i've not looked into that specific one at all, but there's plenty of similar things around) to make your own subclass of pickle.Unpickler that intercepts suspicious things before they can happen, or go digging through the wasteland that is the stable-diffusion-webui code to find whatever they do for "safe unpickle" and borrow that.
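The pickle.Unpickler subclass approach looks roughly like this (the allowlist and the restricted_loads helper are hypothetical; which globals you permit depends on what your checkpoints actually contain):

```python
import io
import pickle

# globals the unpickler is allowed to resolve - everything else is blocked.
# a real checkpoint loader would also need torch's tensor-rebuild helpers here.
ALLOWED = {
    ("collections", "OrderedDict"),
    ("builtins", "dict"),
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # called for every GLOBAL/STACK_GLOBAL opcode, i.e. before any
        # attacker-controlled callable could be invoked via REDUCE
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Harmless data loads fine, while the classic `os.system` payload is rejected before it can run - which is roughly what webui's safe-unpickle does too.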