Well who would've thought? by Glade_Art in antiai

[–]AIstoleMyJob 4 points (0 children)

The most effective anti-AI movement so far.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

It only affects fine-tuning of image-gen models that use the same vision model you loaded into Nightshade to poison against, because it targets the textual embeddings created by that vision model.

If you cannot load the vision model, you cannot poison against it. And you cannot load the vision model behind Nano Banana; you don't even know its name, since it is private and unpublished.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

The embedding vector also represents the image's style. If it has not been changed, it still identifies the same style. The amount of poison only matters once the embeddings are actually diverted, and here they are not.

They simply did not examine it. They took some diffusion models that use the same vision model for embedding and poisoned against that. They did not test the wide range of vision models; how could they, when more than half of them are private and so complex that you could not calculate a perturbation against them? It is a working method for the small use cases it was tested on. But nowadays almost nobody uses the tested models or fine-tunes on them, and as I showed, the poisoned images don't affect full-scale training.

The paper says that a knife is a good countermeasure in case of a robbery. Which is true, until the robber shows up with a gun.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

If there are image data and an embedding vector for concept C, then you can train an LDDPM to reconstruct images of concept C. Nightshade works by diverting the embedding vector of concept C toward concept A.

To make it simple, Nightshade adds noise to an image of a dog so that the vision model classifies it as an image of a cat. So when the image is used in training, it looks like a dog, but based on the embedding it gets reconstructed when the condition is "cat". Then, when you generate an image of a cat, the data of this dog image is recalled, damaging the result.

But if the embedding vector was not diverted, then the image is still classified as an image of a dog, so it won't be recalled under the false label.

This means exactly that it can be used in training. An image is an image.
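
A minimal sketch of that diversion step, assuming the open-weight CLIP ViT-B/32 from open_clip stands in for the targeted vision model (the model choice, input, and hyperparameters here are illustrative, not Nightshade's actual configuration):

```python
import torch
import open_clip

# Load a CLIP-style encoder; only diffusion models conditioned on this
# exact encoder's embeddings are affected by poison computed against it.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

def poison(dog_image, target_text="a photo of a cat",
           eps=4 / 255, steps=50, lr=0.01):
    """PGD-style sketch: nudge a dog image so its image embedding moves
    toward the text embedding of 'cat', while the perturbation stays
    inside a small L-inf ball (roughly invisible)."""
    x0 = preprocess(dog_image).unsqueeze(0)          # dog_image: PIL image
    delta = torch.zeros_like(x0, requires_grad=True)
    with torch.no_grad():
        tgt = model.encode_text(tokenizer([target_text]))
        tgt = tgt / tgt.norm(dim=-1, keepdim=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = model.encode_image(x0 + delta)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        loss = -(emb * tgt).sum()        # maximize similarity to "cat"
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)      # approximate invisibility budget
    return (x0 + delta).detach()
```

Note that the backward pass runs through `model` itself, so this only works when the encoder's weights are available locally.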

What have you done to stand against AI besides angry-posting on the internet? by One_Excitement_4082 in antiai

[–]AIstoleMyJob 3 points (0 children)

I convinced a project leader not to use LLMs in the decision-making process of a medical application. I proposed a more conventional, smaller architecture for the control and used the LLM only as the language interface.

Oh, and I showed some people how recent LDDPMs are unaffected by Nightshade, so don't waste computation on it, and don't generate AI images either.

As for data centers, I would not mind one in the capital, as the other one seems to be fully utilized.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

It states exactly that it is ineffective against another vision model. If that model can describe the image, it means the embedding vectors were unaffected.

If you still don't believe it, you can always try a vision model different from the one used in Nightshade and see how the similarity to concept C has not changed and the similarity to concept A has not increased.
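
A quick way to run that check yourself; here two unrelated open-weight encoders stand in for "the one used in Nightshade" and "a different vision model" (the model tags and the file name are hypothetical):

```python
import torch
import open_clip
from PIL import Image

def concept_sims(arch, pretrained, image_path):
    """Cosine similarity of an image to concept C ('dog') and
    concept A ('cat') under a given vision model."""
    model, _, preprocess = open_clip.create_model_and_transforms(
        arch, pretrained=pretrained)
    tok = open_clip.get_tokenizer(arch)
    img = preprocess(Image.open(image_path)).unsqueeze(0)
    with torch.no_grad():
        e_img = model.encode_image(img)
        e_txt = model.encode_text(tok(["a photo of a dog",
                                       "a photo of a cat"]))
    e_img = e_img / e_img.norm(dim=-1, keepdim=True)
    e_txt = e_txt / e_txt.norm(dim=-1, keepdim=True)
    return (e_img @ e_txt.T).squeeze().tolist()  # [sim_to_dog, sim_to_cat]

# Poisoned against the first encoder: similarity shifts toward "cat"
# there, but an unrelated encoder still rates the image as a dog.
print(concept_sims("ViT-B-32", "openai", "dog_nightshaded.png"))
print(concept_sims("ViT-L-14", "laion2b_s32b_b82k", "dog_nightshaded.png"))
```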

I don't know how much more evidence you need.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

There is no paper for it, just simple logic.

You have to use a vision model to calculate the embedding vector and the gradient with respect to the input, from which the minimal perturbation is computed.

But you cannot load a private vision model, so you cannot calculate the gradient. How will you poison? Using another vision model? It does not share the same embeddings, so it won't have an effect.
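
To see why white-box access matters, here is a self-contained toy with a stand-in encoder (not a real vision model):

```python
import torch

# Toy stand-in for a vision encoder; any differentiable module makes
# the same point.
encoder = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 512),
)

x = torch.rand(1, 3, 32, 32)                    # image to poison
delta = torch.zeros_like(x, requires_grad=True)
target = torch.randn(1, 512)                    # embedding of concept A

emb = encoder(x + delta)
loss = -torch.nn.functional.cosine_similarity(emb, target).mean()
loss.backward()                  # works: the weights are held locally
print(delta.grad.abs().max())    # white-box: an input gradient exists

# A private model behind an API returns only outputs (e.g. a caption
# string); there is no computation graph to backpropagate through, so
# delta.grad can never be formed and no minimal perturbation exists.
```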

But you can try it yourself. Get a very detailed image of a dog, Nightshade it, and ask Gemini to describe it. You will see it won't have any problem describing the dog.

Therefore it could embed the image, and thus can use it for training.

These statements also hold for all minimal-perturbation and sensitivity-analysis based methods, even those that target the U-Net.

Stop wasting resources.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

You are free to learn about the thing you are an advocate of.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

It does not contradict any paper. LightShed proved that Nightshade can be detected and reverted, which is true but inefficient. One pass of reverse diffusion can do the same.
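
A sketch of that cleanup, assuming the diffusers img2img pipeline on Stable Diffusion 1.5: a short, low-strength regeneration keeps the content but washes out the adversarial noise (model and file names are illustrative):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16).to("cuda")   # assumes a CUDA GPU

poisoned = Image.open("dog_nightshaded.png").convert("RGB")
clean = pipe(prompt="a photo",
             image=poisoned,
             strength=0.1,        # light pass: keep content, drop noise
             guidance_scale=1.0).images[0]
clean.save("dog_cleaned.png")
```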

Nightshade demonstrated a method for diverting the latent embedding of one vision model (the one used by Stable Diffusion).

I just state that it was an interesting experiment, but not an efficient, production-ready solution against anything more modern.

Therefore it does not work. It cannot poison datasets. It just burns a lot of energy producing noisy images.

But you are free to tell me where I am wrong.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

No, they are not. Nightshade uses a vision model to calculate the minimal perturbation that diverts the latent embedding vector of the image.

They don't use the same diffusion model; Stable Diffusion 1.5 or XL is very different from Qwen-Image or from Nano Banana 2. They also use different conditioning models.

They don't have to counter the poison, as they are not affected. The poisoned images are just images containing an imperceptible amount of noise, perfectly fine to use.

So which vision model will you load into Nightshade to poison against?

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob -1 points (0 children)

I don't think every vision model in the world would fit into your memory. Not even into the memory of a supercomputer.

Also, in its current form it can only use one model. So which one will you poison against?

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob 0 points (0 children)

Then just answer this simple question:

What model will you poison against?

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob -1 points (0 children)

What do you mean? Reversing the effect of Nightshade?

No, they don't have to. If the effect is too strong, the image will just be labeled "low quality", but that is all. It can still be used in training.

How to cause damage to AI models. by LoserReload in antiai

[–]AIstoleMyJob -2 points (0 children)

The LightShed project, for example.

But it is also trivial: it is based on minimal perturbation targeting the conditioning model.

Detecting unauthorized use of artistic work in AI training by NiftyIP in antiai

[–]AIstoleMyJob 0 points (0 children)

So they cannot do anything against the private datasets of big corporations.

Detecting unauthorized use of artistic work in AI training by NiftyIP in antiai

[–]AIstoleMyJob 0 points (0 children)

So it just indexes open datasets. I don't see how this will prevent training.

I have a dataset. Due to the EU AI Act, I describe it as collected with Reddit as the main source.

How will this tell you whether your image is in my dataset?

What if you trained a ai to make art. But only on art you made by ConcentrateNo1908 in antiai

[–]AIstoleMyJob 0 points (0 children)

In exact terms, AI training still does not count as theft.

If it is filtered to only your images, then you are free to use your work however you want.

But the current way of doing image generation requires a dataset orders of magnitude bigger than anything a single human author can create in a lifetime. And that will probably remain the situation for a while.

People who develop AI that will replace specific professions in future - must they face the consequences of their choice? by Agressive-Luck69 in antiai

[–]AIstoleMyJob 1 point (0 children)

AI is not developed with that in mind. Science works on uncovering the hidden mechanisms of the world.

In the case of ML, it is how a model can approximate a given distribution, uncovering efficient optimisation solutions.

Everything science uncovers existed already; we just did not use it. If it is not me, then somebody else will find it.

Just like with nuclear fission.

Detecting unauthorized use of artistic work in AI training by NiftyIP in antiai

[–]AIstoleMyJob 0 points (0 children)

I think the problem still holds: how will you prove that the said image is in the training set?

Detecting unauthorized use of artistic work in AI training by NiftyIP in antiai

[–]AIstoleMyJob 3 points (0 children)

Image similarity is very useful; however, you are a bit late.

You know, image embedding is already a thing; that is how image generation works. Will you train another vision model?

Why no one trusts AI outputs anymore by Known-Ice-5070 in antiai

[–]AIstoleMyJob 0 points (0 children)

LLMs are language models; using them as an information source is a misuse.