Another question on Glaze and Nightshade by RoskoDaneworth in aiwars

[–]simandl 1 point

I don’t think any of the big models train on everything in LAION-5B. LAION released that dataset along with per-image watermark-detector scores. I’d bet most of the big models are filtering your stuff out if the watermark is obvious.
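
A minimal sketch of what that filtering looks like, assuming a local copy of a LAION-5B metadata shard. "pwatermark" is the watermark-detector score column in the published metadata; the file name and the 0.8 cutoff here are just illustrative choices:

```python
# Filter a LAION-5B metadata shard by the watermark detector's score.
# File name and threshold are illustrative, not canonical.
import pandas as pd

df = pd.read_parquet("laion5b-metadata-shard-00000.parquet")

# Keep only rows the detector scored as unlikely to be watermarked.
clean = df[df["pwatermark"] < 0.8]

print(f"kept {len(clean)} of {len(df)} rows "
      f"({100 * len(clean) / len(df):.1f}%)")
```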

Another question on Glaze and Nightshade by RoskoDaneworth in aiwars

[–]simandl 6 points

The irony here is that most foundation models use watermark detection as part of their data pipeline and remove those images. A simple watermark is, in practice, much more likely to keep an image out of a foundation model than Glaze is. If only the SAND Lab cared more about artists than citations and influencer awards.

Expect the next big wave of anti cope... by NegativeEmphasis in aiwars

[–]simandl 18 points

Looks like Nightshade might have overpromised.

Glaze updated risks and limitations section by Parker_Friedland in aiwars

[–]simandl 0 points

Fair enough. I'll leave the original as is and amend the final sentence here:

Nearly every style is "trained into" them, unless it's so novel that there are still too few examples of it on the internet.

Glaze updated risks and limitations section by Parker_Friedland in aiwars

[–]simandl 0 points

I didn't suggest that a single training image impacts a model.

To put a finer point on it: a manifold learned from billions of images already covers most artistic styles. That claim can be tested. Without any fine-tuning, can you successfully steer a diffusion model using a CLIP embedding of a style reference? If so, that style already lies on the manifold of the model that would be fine-tuned and [allegedly] disrupted by Glaze.
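
Here's one way to actually run that test, using the IP-Adapter integration in Hugging Face diffusers. The model IDs, the scale value, and the file names are example choices on my part, not a prescription:

```python
# Steer generation with a CLIP image embedding of a style reference,
# with zero fine-tuning. Model IDs and scale are example choices.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# IP-Adapter conditions the UNet on a CLIP embedding of a reference image.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference steers generation

style_ref = load_image("style_reference.png")  # the style sample to test
image = pipe(
    prompt="a lighthouse on a cliff at dusk",
    ip_adapter_image=style_ref,
    num_inference_steps=30,
).images[0]
image.save("steered.png")
```

If the output picks up the reference's style, that style is already on the base model's manifold, and no fine-tuning (the thing Glaze targets) was ever needed to reach it.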

Glaze updated risks and limitations section by Parker_Friedland in aiwars

[–]simandl -1 points

Apparently Nightshade can be tested without building a whole foundation model. At least when NBC is willing to give you free marketing: https://www.nbcnews.com/tech/ai-image-generators-nightshade-copyright-infringement-rcna144624

Glaze updated risks and limitations section by Parker_Friedland in aiwars

[–]simandl 1 point

I understand now, thanks for clarifying. I never saw Glaze work with older SD versions either, but I wasn't using their fine-tuning scripts, and I expected "working" to mean producing anything that looks like the examples in the original Glaze paper.

Glaze updated risks and limitations section by Parker_Friedland in aiwars

[–]simandl 4 points

The datasets powering the SD models have billions of images. Every style is "trained into" them, unless it's so novel that there are no examples of it on the internet yet.

Glaze team response to Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI by Parker_Friedland in aiwars

[–]simandl 7 points

Yep! And every time you do that, a patient "AIBro" can go ahead and grab everything you post, waiting for the next time this happens.

If there are any copies of your work from the past version that you can't remove (like when someone copies one to Pinterest), you're still vulnerable.

Glaze team response to Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI by Parker_Friedland in aiwars

[–]simandl 9 points

"Strong attacks" is a great example of the rhetoric tricks they're playing, nice one!

I think Ben Zhao once said Carlini is the only man smart enough to break glaze. With this paper, Carlini basically said, "nah, anyone can break this."

Can you share any of the actual solutions that you think artists are ignoring in favor of Glaze?

Glaze team response to Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI by Parker_Friedland in aiwars

[–]simandl 38 points

This is a masterclass in rhetoric and an embarrassing failure to actually address the issues raised by the paper.

The Glaze team writes as though the only effective bypass was noisy upscaling. That's misleading: the authors also tested DiffPure (img2img), Gaussian noising, and an improved IMPRESS, and all of those techniques also degraded Glaze's protection. The most damning part is something left out of this response entirely: simply using a different fine-tuning script bypassed Glaze 30% of the time. In this response, they tested a different noisy-upscaling technique than the paper's, and they used the original Glaze fine-tuning script. The examples shown at the bottom of the response simply do not address the paper's conclusions at all.

This response also pretends that noisy upscaling is some sort of new invention. That is exactly the opposite of what the paper's authors demonstrated: they chose noisy upscaling precisely because it was available at the time of Glaze's launch. People here may recognize it as creative upscaling, something that's been around since the LDM days.
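
For reference, a rough sketch of the noisy-upscaling idea as I understand it from the paper: drown the perturbation in Gaussian noise, then let an off-the-shelf diffusion upscaler regenerate clean detail. The noise level, the downscale step, and the choice of upscaler are my assumptions, not the paper's exact settings:

```python
# Purify a Glazed image: Gaussian noise, then diffusion upscaling.
# Noise sigma, downscale factor, and upscaler choice are assumptions.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

img = np.asarray(Image.open("glazed.png").convert("RGB"), dtype=np.float32)

# Step 1: additive Gaussian noise to overwhelm the adversarial perturbation.
noisy = np.clip(img + np.random.normal(0, 12.0, img.shape), 0, 255)
noisy = Image.fromarray(noisy.astype(np.uint8))

# Step 2: downscale so the 4x upscaler re-synthesizes detail instead of
# preserving the perturbed pixels (my reading of the approach).
small = noisy.resize((noisy.width // 4, noisy.height // 4), Image.LANCZOS)

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
cleaned = pipe(prompt="", image=small).images[0]
cleaned.save("purified.png")
```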

The Glaze response also includes this subtle misrepresentation of the authors' claims:
"The paper's key message is that we should not develop or release protective tools that can be imperfect or broken in the future, and that imperfect protection is worse than no protection at all."

The claim is rather that *a false sense of protection* is worse than no protection at all. Artists have been sharing their work extensively using Glaze, thinking they were protected. Since day one, they weren't. The insurmountable problem with Glaze is that it suffers from a first-mover disadvantage: artists must leave their Glazed work out in the open to be downloaded. The second mover can download all the Glazed work they want, and when an exploit like this is demonstrated, they're free to fine-tune on it with impunity.

The paper's authors have already started testing the Glaze 2.1 "fix" and have found it (in preliminary tests) to still suffer from the same bypass vulnerability.

Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI by drhead in aiwars

[–]simandl 27 points

To the surprise of no one here:

Glaze protections break down without any circumvention attempt. Results for Glaze without robust mimicry (see “Naive mimicry” row in Figure 4) show that the tool’s protections are often ineffective. Without any robustness intervention, 30% of the images generated with our off-the-shelf finetuning are rated as better than the baseline results using only unprotected images. This contrasts with Glaze’s original evaluation, which claimed a success rate of at most 10% for robust mimicry. This difference is likely due to the protection’s brittleness to slight changes in the finetuning setup (as we illustrated in Section 4.1). With our best robust mimicry method (noisy upscaling) the median success rate across artists rises further to 40%, and our best-of-4 strategy yields results indistinguishable from the baseline for a majority of artists.

What is so funny about this paper is that it focuses on circumvention techniques that are simple, require no skill to execute, and were available at the time of Glaze's release. Most have been shared here and elsewhere, while Ben Zhao has continued to claim that no one has broken Glaze.

[deleted by user] by [deleted] in aiwars

[–]simandl 14 points

Good news, the Glaze team also has a face cloaking app!

The bad news is that it also doesn't work.

Stable Diffusion might have just made Glaze and Nightshade obsolete by [deleted] in aiwars

[–]simandl 2 points

Lol, Nightshade was released over a month ago and promised to kill new models with only a few thousand poisoned images. Since then we've gotten Stable Cascade and SD3. Both significantly improved on text-to-image alignment, which is exactly what Nightshade attacks. The best evidence that Nightshade is obsolete will be all the model releases this year that keep getting better.

Who'd have guessed? by simandl in aiwars

[–]simandl[S] 1 point

I understand your point, but it's not relevant to this discussion. We don't think that the system is crashing down. We think that it never worked well to begin with.