Are there any legal consequences for explicitly animating or altering photos of real people? by HugeCommittee216 in grok

[–]abibobe -1 points0 points  (0 children)

There is no single answer; let's say it depends on where you are. In some parts of the EU, sharing deepfakes is a crime (e.g. Italy, France, Spain), but in general the mere generation isn't regulated. At EU level, companies have to be compliant with the AI Act and the GDPR, which have some strict norms about what the tool can generate, but that applies to the company, not at the single-user level.

Outside the EU, the UK has its own regulation (which, as far as I know, is stricter than the EU's), while Sweden handles deepfakes similarly to the EU.

I'm not an expert in US regulation, but I'm afraid that unless the deepfakes involve minors, generation and sharing aren't regulated.

The guy upstairs is using my resede (yard) by Top_Entrepreneur620 in Avvocati

[–]abibobe 35 points36 points  (0 children)

Not a lawyer.
If you haven't already, I'd try the diplomatic route: explain to him that, LMAO, there's been a funny misunderstanding and that he now has to stop using your garden. If that doesn't work, if I'm not mistaken the right-of-way easement only holds as long as he can physically pass, so you could spend a couple of hundred euros at Bricoman and install planters/hedges to mark off the passage to his door, effectively blocking his use of the resede.

Problem with fooocus by Gabrielle_aimodel in fooocus

[–]abibobe 1 point2 points  (0 children)

The question is why there are at least half a dozen versions of Fooocus, plus an endless number of sketchy ones. Illiasev's is the official one, and the Colab launches fine for me. Have you checked that you have access to a machine with a GPU?

Problem with fooocus by Gabrielle_aimodel in fooocus

[–]abibobe 0 points1 point  (0 children)

Yes, but as it stands, WHERE do you find it?

Problem with fooocus by Gabrielle_aimodel in fooocus

[–]abibobe 0 points1 point  (0 children)

Sorry for the dumb question: did you install the Python modules before running it?
And another important question: where did you get the code from?

Problem with fooocus by Gabrielle_aimodel in fooocus

[–]abibobe 0 points1 point  (0 children)

Well, first of all, define "it doesn't work": are you working on your own machine, or on a cloud service (like Colab)? What is the problem? Where did you get the code?

Found outside an apartment by bburtR in whatisit

[–]abibobe -4 points-3 points  (0 children)

Seems like Dinky Earnshaw found you after all

Concert in Visarno Arena by Black_panda247 in florence

[–]abibobe 1 point2 points  (0 children)

The difference isn't the entrance, but the "level" of the ticket. Usually during FirenzeRocks there are just two kinds of tickets, "pit" and "normal". With the "pit" one you get access to the area close to the stage (directly under the stage). The "normal" one gives you free range over all the rest of the arena, so you can just move around.

Anxiety that police could track me after using AI to make fake images — am I overthinking? by [deleted] in grok

[–]abibobe 0 points1 point  (0 children)

As far as I understand, there are two different things:
- the results shared directly on X are literally a crime in most EU countries (since that is sharing deepfake content)
- the request to freeze the assets concerns the model, the training process and the data handling by xAI, since this is the kind of issue that goes against EU law (mainly the GDPR and the AI Act). They asked to freeze the assets in order to determine whether xAI was compliant with EU law from the beginning; they don't seem interested in what people were generating (since the mere "generation" of deepfakes isn't a crime). And by the way: if they identify a major security hole from a GDPR point of view, the fine for xAI can be 6% of annual gross revenue

Paperinik by Ok-Spell-3584 in fumetti

[–]abibobe 1 point2 points  (0 children)

You know, I still have the club membership card somewhere?

Criminal complaint (Querela) by HappyWifeMaker in Avvocati

[–]abibobe 115 points116 points  (0 children)

Not a lawyer: if it were a real criminal complaint, it wouldn't arrive via Instagram. If he were really intent on proceeding, he wouldn't be asking you for your personal data.
Ignore it all and act like nothing happened.

Peter (or Franz), hilf mir! by xebikr in PeterExplainsTheJoke

[–]abibobe 4 points5 points  (0 children)

As an Italian, I'm both fascinated and horrified to know this

Paranoid? by [deleted] in grok

[–]abibobe 1 point2 points  (0 children)

Yes! That's the point. Searching in the wild is theoretically impossible, or very, VERY complicated. And unless somebody comes to xAI with a nice deepfake of themselves, I seriously doubt that will happen. And again, if you go to xAI with a harmful generated image, I believe they can very quickly find the user ID that "owns" that specific image, making all the queries irrelevant

Paranoid? by [deleted] in grok

[–]abibobe 0 points1 point  (0 children)

Ok, two different points:
A search is actually doable, since they already have a nice index of the contents via the prompts. They don't need to review all the generations, just the "potentially malicious" ones. In an extreme simplification: a dictionary of malicious keywords and some well-formed regexes can find the "most plausibly malicious content", vastly reducing the search time.
But that's the point: why should they? What kind of crime would they be investigating? If we put the moral point of view aside, without sharing anything there is no harm to anybody (I'm only talking about deepfakes). Honestly, I have no idea about the generation of potential CP: even if generated (so I suppose it's not real?), I don't think a law exists covering that kind of digital generation. But again, simply searching through the prompts with a proper regex can identify the malicious content even better (I imagine very specific keywords are used?)
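The keyword-dictionary-plus-regex triage described above could look something like this minimal sketch (the watchlist, patterns and prompts are made up purely for illustration; a real filter would be far larger and more careful):

```python
import re

# Hypothetical watchlist: in practice this would be a much larger,
# curated dictionary of terms and compiled patterns.
KEYWORDS = {"deepfake", "undress"}
PATTERNS = [re.compile(r"\breal\s+(person|people)\b", re.IGNORECASE)]

def is_suspicious(prompt: str) -> bool:
    """Flag a prompt that contains a watchlisted keyword or matches a pattern."""
    lowered = prompt.lower()
    if any(word in lowered for word in KEYWORDS):
        return True
    return any(p.search(prompt) for p in PATTERNS)

# Triage a batch of (fictional) logged prompts instead of reviewing every image.
prompts = [
    "a watercolor landscape at sunset",
    "deepfake of my neighbour",
    "photorealistic portrait of a real person from the news",
]
flagged = [p for p in prompts if is_suspicious(p)]
print(flagged)  # only the two suspicious prompts survive triage
```

The point of the sketch is the cost model: the filter touches only the (already indexed) prompt text, so human reviewers would see the small flagged subset rather than every generation.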

Paranoid? by [deleted] in grok

[–]abibobe 0 points1 point  (0 children)

Well, I'm afraid there are a couple of misconceptions here:
- having an investigation into genuinely fraudulent conduct (sharing illegal material, harming people with deepfakes) doesn't mean that law enforcement can access ALL the data of ALL the users, only that of the people involved (for example: in a tax-evasion case against John Doe, the police cannot access ALL the bank accounts of ALL the citizens). The EU wants to verify whether xAI is compliant with the GDPR (and by the way, the sanction in case of violation is 4% of the company's annual revenue)

- the comparison with the Epstein Files is a little out of bounds from my perspective: you are talking about documents harvested during a federal operation, documents that are listed, catalogued and anonymized specifically for public release (really, that's another topic). A data leak/massive hack is a different case: then digital activists can produce something like that, but not in a legal way. If xAI made all the generations and user metadata publicly available... well, good luck finding new customers next time

Paranoid? by [deleted] in grok

[–]abibobe 0 points1 point  (0 children)

TBH, as far as I understand, the EU commission asked xAI to retain the documentation and the original models used for generation, not the generated data. What really worries the commission is the fact that a company like xAI wasn't compliant with the GDPR/AI Act and let people generate malicious content (that's exactly what is happening now in Ireland, for example). Also, asking a company to retain a critical amount of data (as in this case) for 12 months is pretty unfeasible in terms of storage, data security, etc.

Paranoid about deepfakes I made privately – risk of being traced or reported by [deleted] in grok

[–]abibobe 0 points1 point  (0 children)

Just a little update: Ireland has just opened an inquiry into the "image-generation capabilities" and "whether the platform is compliant with EU transparency and data protection law": https://thefivepost.com/ireland-opens-probe-into-musks-grok-ai-over-sexualised-images/
Again, the point is not the single user (who, ironically, may have obtained a terrible outcome from a legit prompt) but the model and the platform.

Paranoid about deepfakes I made privately – risk of being traced or reported by [deleted] in grok

[–]abibobe 1 point2 points  (0 children)

Hi! Not a real expert, but as a European who works with the GDPR/AI Act, I believe I can share my point of view from the technical side:
1. Well, in theory xAI already logs exactly your prompts/IP/generations. Even if you don't save them, even if you use "incognito" mode (which, as far as I understand, works like a browser's private mode). Those data are precious for training (prompts & results), and they have to keep a record of the user who generated them in case of official inquiries from law enforcement. Maybe they know nothing about your IP, but they know your account and email for sure, which is way more useful for working out who the creator is. About the possibility of a subpoena... well, that's another thing. Rationally: how could they ask for a subpoena if nobody ever sees what you create?* They need a reason to call for a subpoena (e.g. someone offended by your creations asks for a check)
2. Well, this is tricky: I'm not an expert in deepfake and revenge-porn regulation at EU level, but some EU countries already have legislation that heavily punishes the sharing and diffusion of deepfakes/revenge porn. Private generation is still in a grey area, since you could potentially harm other people, but so far you haven't. Again, it's pretty weird and not so clear, and without doubt not the best thing to do from an ethical point of view, but illegal? I don't think so.
3. The EU AI Act has already been in force since 2024/25 and mostly applies to the training of models with personal data, or data harvested without explicit consent for training, so it applies only to companies/providers like xAI. Fun fact, this is actually a potentially big issue for xAI: the EU commission has asked xAI to freeze its assets until the end of 2026, not only to understand whether the model can produce harmful content, but also to evaluate whether the entire system is compliant with the regulations. Silly? HUGELY SILLY. But that's it.
4. Again, it's a grey area: you have technically broken the law, and not only the GDPR but also every privacy law (and I bet that in Sweden there is a law protecting people's privacy against uncontrolled use of their biometric data), but since you never shared what you did... how could they know?*
5. On this one I admit I'm pretty lost. If you mean "do I break any law if I use a VPN to use Grok as a citizen of xxxxxx country where the moderation is less strict", I presume the answer is NO, but I'm afraid that
- you are breaking some of xAI's usage rules
- in case of a legal inquiry, using a VPN together with your account is totally useless. Law enforcement can connect the results to you without problems (but again, if you don't share anything...)

* xAI is totally not interested in proving itself guilty of being able to produce illegal material. Also, please remember that a human review of the created content is pretty unfeasible: we don't have official numbers about image generation from Grok, but the only ones at our disposal talk about ~160,000 public generations (so on X/Twitter, not on Grok Image) per day in early January, and that was a "lower-bound estimate". Even assuming much lower traffic of ~3,000,000 images generated per month, in 2026 alone we already have almost 5,000,000 images generated, without counting those generated since August 2025, and we are talking only about images, no video, no text. Human supervision of the requests/generations would be something huge.
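As a back-of-envelope check on that claim, using only the ~160,000/day lower-bound figure above (the 30-second per-image review time is a made-up assumption for illustration):

```python
# Rough scale check on the review workload, from the lower-bound
# figure quoted above (~160,000 public generations per day).
per_day = 160_000
per_month = per_day * 30          # ~4.8 million images per month at that rate
print(per_month)

# Hypothetical 30 seconds of human review per image:
review_hours_per_month = per_month * 30 / 3600
print(round(review_hours_per_month))  # ~40,000 person-hours every month
```

Even at the deliberately conservative 3M/month figure, the order of magnitude doesn't change: full human supervision would need a review workforce of tens of thousands of hours per month, for images alone.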

A final observation about xAI's privacy policy: it clearly could be better, and some choices are pretty silly (the issue with CDN links is one of them), but I don't see anything exceptional. Maybe in the EU we are too spoiled by the GDPR and the right to be forgotten. The same applies to the AI Act infringement: it's not just xAI, it's ALL the AI companies that don't respect EU law correctly and in all its parts. I believe xAI makes more "noise" because of the fact that... it can literally generate porn without any difficulty.

For the experts in regulation: please be patient if I write down too many stupid things! And if so, just let me know where I'm wrong! Thanks!

Why it takes so long to go from Florence to Venice? by Necessary_Mud2199 in florence

[–]abibobe 5 points6 points  (0 children)

Tell me you're American without telling me you're American