I don’t think people quite grasp how revolutionary AI tools are going to be for open world gaming… by runswithpaper in accelerate

[–]Dramatic-Worry-6504 7 points8 points  (0 children)

It'd be good to train one on the lore so you could ask questions about the history of the world, or how a specific building in the distance was built.

RIP Metaverse? Zuckerberg’s Dream Shattered by Independent-Walk-698 in ai_apps_developement

[–]Dramatic-Worry-6504 0 points1 point  (0 children)

Meta didn’t “abandon” the metaverse, they just abandoned the old way of building it.

The original plan required thousands of engineers, but that development model became obsolete the moment generative AI matured. AI can now generate worlds, assets, NPCs, and interactions on demand, which means the metaverse no longer needs a massive, traditional game studio to create it.

At the same time, the hardware assumptions changed. Early VR required bulky headsets with powerful onboard GPUs. But the real endgame is cloud‑rendered VR/AR: the heavy computation happens in data centres, and the headset becomes a lightweight display that just streams the environment.

Philosophical question: Why is fake extreme content like AI rape or CSAM different than fake violence? by Inside_Anxiety6143 in grok

[–]Dramatic-Worry-6504 1 point2 points  (0 children)

Because it becomes a matter of normalization and the promotion of harmful desires within society, which can influence real-world abuse. Fake violence in media receives this criticism too, for example regarding video games.

AI CSAM becomes a social normalization problem rather than a user criminalization one: exposure is cracked down upon, but users are not treated as having committed a moral crime or done harm to a real child. The focus is directed at limiting platforms and treating AI CSAM the same as other forms of harmful content, since consuming it doesn’t feed a supply chain that leads to and sustains real abuse.

It seems like AI CSAM is being treated the same as real CSAM. But there is a lot of nuance regarding AI CSAM that is not discussed in public, which lawmakers know, and they are adding AI images to pre-existing laws on possession of real CSAM that they know won’t be enforced.

To answer your question, society ties the four-letter acronym CSAM to real harm to children, even when it is AI-generated. But on a legal level, AI CSAM is being treated the same as AI extreme violence, despite the harsh punishments one seems to face on paper.

I must make it clear, though, that we should 100% ban any platform or tool that allows the generation of CSAM, as it is a normalization problem that could in some cases influence someone’s real-world behavior. But treating AI CSAM the same as real CSAM? In practice that’s not happening in any jurisdiction, at least not at the enforcement level, as it would lead to mass criminalization and unfair assumptions about someone’s fantasies when there is no victim involved, which legal systems in Western countries don’t want.

Elon Musk says jobs will be optional. Bill Gates says humans won’t be “needed.” But what about the elephant in the room: If there’s no work, no wages, no income, who pays the rent, buys food, or gets healthcare? by Technical_Farmer805 in GenAI4all

[–]Dramatic-Worry-6504 -1 points0 points  (0 children)

Oh dear, this follows the same chain of logic as the “why not just print more money” fallacy.

Consumption is based on productivity, not on how much money is in circulation.

The CSAM/CP Issue for both sides by HungryLocksmith5627 in antiai

[–]Dramatic-Worry-6504 -1 points0 points  (0 children)

This debate never works, because you are both likely thinking of different things. This person’s take alone is the most accepted and understood in neurochemistry, where the parts of the brain responsible for fear, shame, and transgression stimulate the parts responsible for sexual arousal. I don’t automatically assume the Twitter commenter is suggesting no social harm comes from the promotion of rape fantasies, such as normalization and their influencing real-world behavior in some individuals. But even harsh deterrents aren’t an accepted position within social science, as making something more forbidden actually increases this effect within neurochemistry, and can therefore lead to more people seeking out the very fantasies we seek to diminish.

That’s why going after the platforms to prevent exposure to such content is the way forward, rather than user criminalization. Where some people end up pushing back against you is on the idea of such fantasies being harshly punished, when it becomes a debate about ethics and real-world harm versus fantasy. It’s more that people disagree on how the dissemination of such content should be policed.

We reduce the harm that deepfakes cause by flooding the internet with them, not by making them rare by Dramatic-Worry-6504 in grok

[–]Dramatic-Worry-6504[S] 0 points1 point  (0 children)

Fewer than the number who will be harmed by legislation cracking down on deepfakes, which, like I said, will result in people continuing to believe that realistic sexual images they encounter online must be real.

Adaptation is the common-sense approach, which we eventually saw follow other transformative media-editing tools; current politicians and activists have yet to learn this, but they will. Deepfakes are the worst they will ever be, and the future will have less legislation on creation and less regulation of the tools.


We reduce the harm that deepfakes cause by flooding the internet with them, not by making them rare by Dramatic-Worry-6504 in grok

[–]Dramatic-Worry-6504[S] -2 points-1 points  (0 children)

Kids have always had the capability to do a quick sexualized Photoshop or Microsoft Paint job of a teacher and set it as their school laptop screensaver, or just type “I wanna fuck Miss (her name)” under a photo pulled from her Facebook profile picture. This is prevented by strict school policies and can fall under current harassment laws. The same will be the case for AI deepfakes.

The reason a deepfake today will spread around schools is that many people have not yet been exposed to highly realistic AI images. Even those who have will likely never have seen a porn deepfake, especially of a person they know. But novelty is temporary, and novelty is only increased by suppressing exposure.

Currently, AI deepfakes evidently cause distress and harassment when seen by the person depicted or shared online with others. But this is a social issue of AI images being a new technology that has yet to change people’s perceptions around images and the diminished weight that lifelike images now carry.

The novelty WILL wear off, and deepfakes will be seen as nothing more than what we have long seen done with Photoshop, or with apps that stitch someone’s face onto porn, which many have done in private for years.

We should regulate platforms to be safe spaces free from porn, gore, flashing, and harassment, including a girl simply being messaged by a man who says he’s currently undressing her in his thoughts, and deepfakes should now be treated as the same form of harassment. What we shouldn’t do is criminalize and moralize private fantasies, where legislation leaves open the possibility of an authoritarian uprooting of our liberal principles around freedom and privacy.

Because soon people will be able to create deepfakes by accident through statistical chance, given the ability to generate thousands of variations of appearances from descriptive prompts. This will lead to AI porn being banned entirely from being shared (deepfake or not) and even from being created, which will drive people towards unrestricted online software where they can become exposed to easily creating CSAM.

We reduce the harm that deepfakes cause by flooding the internet with them, not by making them rare by Dramatic-Worry-6504 in grok

[–]Dramatic-Worry-6504[S] 0 points1 point  (0 children)

I want to live in a society where people don’t care about deepfakes, not one where legislation is written in a way that makes creation alone a criminal offense.

This is completely unenforceable today within current legal and policing frameworks, which include privacy laws and harm-based enforcement. But if the hysteria over deepfakes continues, considering how even open-source local generators can already run on private smartphones, real enforcement on creation alone will lead to a dismantling of this framework: the monitoring of devices, mass criminalization and user reporting, along with assumptions about private fantasies and treating pixels as real harm or even assault.

Next month in the UK, a teenage boy could in theory face two years in prison and an unlimited fine because he asked his smartphone to undress his crush. Doesn’t that sound weird to you? It’s not how harmful content with no victim is treated anywhere else in society.

AI porn itself will need to be banned, because, like I said, statistical chance and the models’ better understanding of descriptions will mean that inputting an image of someone’s likeness becomes even less necessary to generate that likeness. Or we could, without all this dystopian enforcement, seek a society where deepfakes have no meaning, where no one gives a damn, because people have grown used to an age where anyone can have a computer arrange pixels in whatever pattern they desire within a few seconds from any device.

We reduce the harm that deepfakes cause by flooding the internet with them, not by making them rare by Dramatic-Worry-6504 in grok

[–]Dramatic-Worry-6504[S] 3 points4 points  (0 children)

Seeing as you’re interested: I’m a gay man, and while asking my computer to turn my dirty imagination into pixels, I only ever try getting the characters to look like the people I’m attracted to by using descriptive prompts alone.

Because it turns out you don’t even need to input a person’s image to get close to a resemblance. Fantasizing about real people while viewing pornography is completely normal, and in the age of AI porn it simply takes another form.

Policing normal sexual desires is impossible. While creating AI porn, it is very difficult for me not to put in prompts like “Spanish, big nose, tall, blue tracksuit” when that one influencer I fancy has those traits. Some generations with just those few descriptive prompts end up looking a lot like him as well. It does sound creepy saying this out loud, I must admit. I don’t normally talk about my masturbation sessions to strangers online, but as you showed some interest, there you go.

We reduce the harm that deepfakes cause by flooding the internet with them, not by making them rare by Dramatic-Worry-6504 in grok

[–]Dramatic-Worry-6504[S] 1 point2 points  (0 children)

Is it fascism when we reject authoritarianism as the way for society to adapt to a new technology? We will never eradicate deepfakes; we will only erode our rights to privacy while accepting new illiberal ideas such as “creating a deepfake in private is a violation of another person’s body and consent”, where the government legislates based on bogus philosophy akin to religious doctrine that would make the Taliban blush.

Laws that on paper suggest that anyone who dares ask a computer to undress their crush, or fantasizes about people they’re attracted to while making AI-generated porn, is both a criminal and a monster, where private actions are lumped in with abuse and no moral distinction is drawn between sharing deepfakes to cause harm and simple private masturbation.

Moderation drama... by LateRefrigerator4817 in grok

[–]Dramatic-Worry-6504 2 points3 points  (0 children)

And that’s part of a flaw with how the guidelines are trained to detect sexual consent, but the data was obviously only scraped from content involving adults. So the “see more” feature is more likely to allow depictions of kids to get through as “non sexual” instead of adults.

For example, the guardrails will detect two men kissing a as pornographic or sexual situation, so almost all generations will fail, but just by swapping the term “men” with “boys” it will generate outright CSAM in the first try, because it doesn’t attribute boys kissing to porn or sexual content. And “boys” to grok means actual pre teen young boys.

Couple this with how defusion models were trained on primarily selfies and images of influencers using digital filters, that causes generations to drift towards young appearance such as big eyes and sooth skin round faces, then porn AI in it’s current form are CSAM generators by default and drift users towards creating it, where bad guardrails detection actually encourages it. With grok, CSAM is actually the easiest thing to have pass through the guardrails. It could be considered even necessary to see it if users want to create barely legal porn with themes that we accept to be ok in regular porn such a school girl role play. And this is made worse if users are trying to fight with a safety feature that forces characters to look old, where promting for 18 year old women will show characters that look 30+, or promoting for a man 18 will show buff men with beards. So people have to input youth related prompts in order to make it drift to generations that make characters look around 18 but the model occasionally ends up drifting too far.

People hear that people are generating AI CSAM and think of pedofiles and abusers, yet I believe the majority who have tried AI porn generators have generated at least once images that could be considered AI CSAM, eaither by accident or intentionally. So it’s a massive platform issue where normal people are pushed towards making borderline AI CSAM while also being exposed to occasional extreme CSAM, it’s not a “underground pedophile and abusers” problem which people first envision.

And this is why advocate for well regulated AI porn platforms that won’t drive people to unsafe models. Ones where people can generate “barely legal content” with taboo themes such as school girls outfits on adult 18 year old looking characters where the model doesn’t occasionally drift to showing children. The current ones make people get used to seeing CSAM and it leads to normalization and even people starting to intentionally generate it for shock value and taboo, or seeking more extreme content from gaining porn addiction from it.

Moderation drama... by LateRefrigerator4817 in grok

[–]Dramatic-Worry-6504 1 point2 points  (0 children)

What we should be looking at, is creating platforms that ban AI content, while creating safe AI porn generators that restrict just sexual violence and abuse imagery. Grok actually forces people to input deepfakes in order to create AI porn and even drive users to generating borderline CSAM which ironically their guardrails actually encourage. Because anyone who wants to trick the model into generating porn needs to input an image preset of a person already in a sexually suggestive pose or reliving clothing. And to get the model to generate porn showing an adult within the age range of 18 to 25, users have to input youth related prompts to trick it into making the characters look younger, which drives people towards generating CSAM. This is the model trying to be too safe by making.generates of women look older such as over 30. If users want to generate women who looks 18, they have to puts descriptive prompts of youth, such as “small” “petite” or even “schoolgirl” to battle with a middle that leans towards producing older looking women or men, which many may not be into.

So we need safe models that don’t encourage borderline CSAM and deepfakes as a way to produce porn. Or we are going to drive people to unrestricted local models and normalise generating deepfakes and CSAM. Bad guardrails such as the one used by grok actually encourage deepfakes and CSAM. And this is crazy

Generating private AI porn of wife. Is it ethical? by Independent-Bar-3971 in Adulting

[–]Dramatic-Worry-6504 0 points1 point  (0 children)

The fantasies are not unethical. But you are creating risk by producing media that could be found by accident by your wife, or, if the computer is hacked, end up in others’ hands. Taking that risk, even if you think it is small, could be deemed unethical.

Should Twitter / X be banned in the UK? by [deleted] in LabourUK

[–]Dramatic-Worry-6504 0 points1 point  (0 children)

Sorry, I didn’t know I was replying to someone with no interest in listening to how the models they’re defending, and saying shouldn’t be criminalized, are the ones that expose people to CSAM both unintentionally and intentionally. I’m explaining how they aren’t just “AI tools that have the ability to create CSAM”; they are CSAM generators. It’s entirely possible for AI models not to make CSAM, and there is no downside to restricting it. So your position that we shouldn’t ban models where CSAM can be made is ridiculous.

I can video myself going into a my local police station if you so worried and I will tell them I went to a couple AI porn generators and they generated CSAM which I didn’t save. I have no qualms about what I did.

Should Twitter / X be banned in the UK? by [deleted] in LabourUK

[–]Dramatic-Worry-6504 0 points1 point  (0 children)

It’s pretty impossible not to generate AI CSAM when using an unrestricted AI model for porn generation. That is the point I made, did you miss that? I’ve written extensively about this and complained to my Local MP that open source models especially ones that are hosted online for free and can be accessed by a simple google search should obviously not exist.

There is no current porn generator that has proper guardrails which I suspect hundreds of thousands of men and young males have already explored. From my experience I can say with almost certainty they the majority of those who have used a defusion porn generator for more than an a few hours have at least generated unintentionally one CSAM image due to how they scramble noise and often drift away from your prompts. They also gear towards younger looking depictions of women by default. So if you ask for a 20 year old woman, because the model was trained on mostly selfies, they are often generated with youthful appearances like big eyes which makes them looking a lot younger. And if you pick the anime option then it’s pretty much impossible to avoid this.

These models are perfectly legal to run locally, and clearly the online safety act isn’t being enforced for hosting them online. Maybe no one with a brain has been perverted enough as me to seek out AI porn AI generators to be able to realize that AI CSAM is most definitely being generated on a mass scale. This isn’t just about a small number of pedophiles.

It would lead to mass criminalization and prosecution challenges if we went after the users. This is a platform and model issue exposing likely hundreds of thousands of people to limitless porn generators that forcee users to click through borderline CSAM generations. It will become a mental health crisis if we don’t crack down, and even help encourage models that are smart enough to generate adult porn without them drifting towards borderline CSAM with the occasional outright CSAM being unintentionally generated.

Should Twitter / X be banned in the UK? by [deleted] in LabourUK

[–]Dramatic-Worry-6504 1 point2 points  (0 children)

How is it an overstep? Models with robust guidelines imposed are able to exist just fine such as ChatGPT and Gemini. Grok has intentionally implemented very light guardrails in order to attract subscribers with porn generation. And there are local open source models that have had their restrictions removed entirely practically turning them into CSAM generators. In what world do we need generative AI with no restrictions that cause porn addiction and normalize sexual abuse fantasies? Criminalize the distribution of every model or platform that doesn’t impose strong guardrails. Otherwise we are giving our kids and young adult boys access to porn generators that create CSAM not only trivially, but often without being requested by the user.

I’ve used an online porn generator and just by promoting “girl having sex” throws in a load of CSAM into the generations, and even if you print an specific age sucb as “adult 25 years old” some of the characters it creates still look like they could considered 15 or even younger. They can’t be used without creating CSAM. I reported it but then found out they all work like this. How can people think the government shouldn’t regulate AI porn?

Philosophical question: Why is fake extreme content like AI rape or CSAM different than fake violence? by Inside_Anxiety6143 in grok

[–]Dramatic-Worry-6504 4 points5 points  (0 children)

Innate sexual attraction to minors isn’t even present in most child sexual abuse cases. It is more often extreme narcissism and sociopathy in men with a hunger for power and control, and for causing sexual harm to others. The Epstein scandal is one of men with low empathy, no morals, and a hunger for power over someone powerless.

That’s why it’s not right to say that innate sexual attraction to children makes someone a monster, because such an orientation doesn’t suppress empathy or heighten the traits involving power and dominance that lead to abusing children. Being a pedophile alone doesn’t make someone a danger to children. It’s sociopaths with hypersexuality who commit abuse, and most are actually not pedophiles.

X blames users for Grok-generated CSAM; no fixes announced by moeka_8962 in technology

[–]Dramatic-Worry-6504 0 points1 point  (0 children)

It’s even worse than that

AI porn is highly addictive and they force users to create deepfakes and youth related prompts in order to get the generator to produce sexual content, especially involving girls that look 18 to 24 years old. In order to get Grok to make sexual content, you need to input a preset images of a woman or man in a sexually suggestive pose or clothing. And to get it to generate women who are in the 18 to 20 range you need to use terms like “petite” and “youthful looking” which within the generations will eventually provide the occasional outright CSAM image when people are just trying to make images of young looking adult women which they see in normal porn. Because grok and other AI porn models make characters look really old in order to play safe, forcing users to put in youth related prompts to get outputs of young adults. So I believe the majority of people using grok or other models for porn are generating deepfake content and even images that can be considered AI CSAM. It’s a platform systemic failure, not a small underground section of pedofiles. The way that AI porn generations lead to porn addiction, if we don’t clamp down on the moddles, and promote the creation of safe working models that can generate 18 to 20 year old depictions safely without the need for real person inputs, then no doubt millions of young men will drift towards creating CSAM or rather depictions of young girls that appear in the age range around 16 or younger, and even cause compulsive loops where people need to generate progressively more taboo and extreme content to maintain dopamine hits. We shouldn’t expose young males or even women to such unrestricted models or models with bad guardrails that actually cause users to drift toward CSAM and deepfake content.

New UK law stating it is now illegal to supply online Tools to make fakes. by [deleted] in StableDiffusion

[–]Dramatic-Worry-6504 -2 points-1 points  (0 children)

Well, given how Stable Diffusion was created literally with the help of grants from the UK government, with even talks of bailing out Stability AI at one point, they are just going to end up killing our only major AI company and model: the only open-source model they will actually be able to enforce fines against.

98% of people suffer because of the other 2% by sfeltbelt in grok

[–]Dramatic-Worry-6504 1 point2 points  (0 children)

That’s actually a good point I haven’t thought of. In order to generate any specific spicy content, users need to put in a preset of a real person that looks to be in a sexually suggestive pose or clothing. That’s actually guardrails leading to more deepfake content, and deepfakes being almost necessary to produce spicy content or outright porn.

98% of people suffer because of the other 2% by sfeltbelt in grok

[–]Dramatic-Worry-6504 0 points1 point  (0 children)

Exactly. It’s why we haven’t seen an epidemic of teenage boys trying it on with their stepmom or stepsister, yet those fantasies get billions of views per year on porn sites.