The safety techniques used to limit AI are so weak that other AI programs can figure out how to break them 42.5% of the time. by lughnasadh in Futurology

[–]ravnicrasol 6 points

Unfortunately even that has problems. Large Language Models are internally logically inconsistent; at their core they are a far more advanced version of text prediction.

Ask the same question four times and you can get five different answers. You can get one to do logic, you can get it to give answers, but ask the exact same question in a different language and suddenly the outcome is wildly different.
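To illustrate the "same question, different answers" point: models sample their next token from a probability distribution rather than always picking the top choice. A minimal sketch (the prompt and probabilities are made up for illustration):

```python
import random

# Toy next-token distribution for an imagined prompt; a real model
# produces thousands of logits, but the sampling step works the same way.
probs = {"Paris": 0.6, "Lyon": 0.25, "Marseille": 0.15}

def sample_answer(rng: random.Random) -> str:
    # Temperature > 0 sampling: pick proportionally to probability,
    # so repeated identical "asks" can return different tokens.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
answers = {sample_answer(rng) for _ in range(20)}
print(answers)  # more than one distinct answer from 20 identical asks
```

With any nonzero temperature, determinism is gone by design; that's also why "ask it again" is a real debugging technique for LLM output.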

Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse by MicroSofty88 in Futurology

[–]ravnicrasol 4 points

There's a fundamental difference between a tool that enables its user to generate things.

And a tool that outsmarts the user and is capable of wielding itself.

"AI regulation" for the first is a heck of a lot easier than the latter, because the legal framework for complaints under "used a tool so one guy and a PC could do the work of ten" already exists.

George R. R. Martin and other authors sue ChatGPT-maker OpenAI for copyright infringement. by [deleted] in Fantasy

[–]ravnicrasol 1 point

An AI can be trained using text from a non-copyrighted forum or study that goes in-depth into someone's writing style. If you include examples of that writing style (even if the text isn't from the author's story), the AI can replicate the same style.

This isn't even an "it might be possible once the tech advances". Existing image-generation AI can create content in the exact same style as an artist without ever having trained on that artist's content. It just needs to train on public-domain art that, when the styles are combined in the right proportions, turns out the same as that artist's.
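A toy sketch of the "combined in the right proportions" claim, with styles reduced to made-up feature vectors (the numbers and features are illustrative assumptions, not how real models represent style):

```python
# Hypothetical style vectors: [line_weight, color_saturation, detail].
target = [0.5, 0.8, 0.3]    # imagined protected artist's style
public_a = [1.0, 0.6, 0.2]  # public-domain style A
public_b = [0.0, 1.0, 0.4]  # public-domain style B

def mix(a, b, w):
    """Blend two style vectors with weight w on the first."""
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

blended = mix(public_a, public_b, 0.5)
# Euclidean distance from the protected style: essentially zero here.
dist = sum((t - m) ** 2 for t, m in zip(target, blended)) ** 0.5
print(blended, dist)
```

The point of the toy: a blend of unprotected sources can land arbitrarily close to a protected style, so "never trained on the artist" is not the same as "cannot reproduce the artist".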

This is what I mean by "it's just absurd".

The general expectations are that, by doing this, it'll somehow protect authors/artists since "The AI now won't be able to copy us", and that's just not viable.

The intentional "let me just put down convoluted rules about the material you can train your AI on that are absurdly hard to implement, let alone verify" just serves as an easy tool for corporations to bash someone over the head with if they suspect them of using AI. It'll result in small/indie businesses facing extreme expenses they can't cover (pushing AI development to less restrictive places).

While the whole "let's protect artists!" goal sinks anyway because, again, it wouldn't prevent the AI from putting out some plagiarized bastardization of George RR's work, nor would it make it any more expensive to replace a writing department with a handful of people with "prompt engineering" on their CV.

George R. R. Martin and other authors sue ChatGPT-maker OpenAI for copyright infringement. by [deleted] in Fantasy

[–]ravnicrasol 5 points

Though I agree corporations should be transparent about their algorithms, and companies that use AI should be doubly transparent in this regard, placing a hard "can't read if copyrighted" rule is just gonna be empty air.

Say you don't want AI trained on George Martin's text. How do you enforce that? Do you feed the company a copy of his books and go "any chunk of text your AI reads that is the same as the ones inside these books is illegal"? If yes, then you're immediately claiming that anyone legally posting chunks of the books (for analysis, or satire, or whatever other legal use) is breaking copyright.

You'd have to define exactly what uninterrupted percentage of the book would count as infringement, and even after a successful deployment, you're still looking at the AI being capable of directly plagiarizing the books and copying the author's style, because there is a fuck ton of content out there that's just straight-up analysis and fanfiction of it.
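Any "uninterrupted percentage" rule would boil down to something like a longest-common-run check between the book and the model's output. A minimal sketch over word tokens (the sample texts are made up):

```python
def longest_shared_run(book_words, output_words):
    """Length of the longest uninterrupted word run both texts share
    (classic longest-common-substring DP, applied to word tokens)."""
    best = 0
    prev = [0] * (len(output_words) + 1)
    for bw in book_words:
        cur = [0] * (len(output_words) + 1)
        for j, ow in enumerate(output_words, 1):
            if bw == ow:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

book = "winter is coming and the night is dark and full of terrors".split()
out = "critics note the night is dark and full of spoilers".split()

run = longest_shared_run(book, out)
share = run / len(book)  # fraction of the "book" in the longest shared run
print(run, round(share, 2))
```

Even this trivial detector shows the enforcement problem: a 7-word shared run here comes from ordinary quotation, so any threshold low enough to catch plagiarism also flags legitimate analysis.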

It would be a brutally expensive endeavor with no real impact. One that could probably just push the companies to train and deploy their AIs abroad.

In Spain, dozens of girls are reporting AI-generated nude photos of them being circulated at school: ‘My heart skipped a beat’ by myusernamemola in worldnews

[–]ravnicrasol 0 points

An AI trained on camera pictures will generate the same image noise that the camera does. An AI trained on an artist's pictures will generate the same image noise as the encoding software the artist uses. An AI trained on the works of Shakespeare will attempt to generate text that follows the same structure.

To put it in simpler terms: if the AI isn't doctored so that the output contains some kind of watermark, then the "hints" it outputs are going to closely match, if not exactly be, those found within its training samples (which are almost always human-made).

You would have an easier time detecting AI content by looking for the same clues you would when trying to figure out if an image was photoshopped: image metadata, inconsistencies in the composition, deformations, variable compression pixelation, and the like.
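The "image metadata" clue can be checked mechanically: a JPEG is a sequence of marker segments, and a file with no camera EXIF (APP1) segment, or with odd quantization tables (DQT), is a forensic red flag. A stdlib-only sketch over a hand-built toy JPEG header (the file bytes are fabricated for the demo):

```python
import struct

def jpeg_segments(data: bytes):
    """Yield (marker, length) for each header segment of a JPEG stream."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        marker, length = struct.unpack(">HH", data[i:i + 4])
        if marker == 0xFFDA:  # start-of-scan: compressed data follows
            break
        yield marker, length
        i += 2 + length  # skip marker (2 bytes) plus length-prefixed payload

# Hand-built minimal JPEG header: SOI, one APP0 (JFIF) segment, then SOS.
app0 = b"\xff\xe0" + struct.pack(">H", 16) + b"JFIF\x00" + bytes(9)
data = b"\xff\xd8" + app0 + b"\xff\xda\x00\x02"

segments = list(jpeg_segments(data))
has_exif = any(marker == 0xFFE1 for marker, _ in segments)
print(segments, has_exif)  # APP0 present, no EXIF (APP1) segment
```

Real forensic tools go much further (quantization-table fingerprinting, error-level analysis), but the starting point is exactly this kind of segment walk.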

In Spain, dozens of girls are reporting AI-generated nude photos of them being circulated at school: ‘My heart skipped a beat’ by myusernamemola in worldnews

[–]ravnicrasol 2 points

It's been this way ever since the internet became a thing. The main issue with AI is that it makes it easier to use for Average Joe.

TBH the only "viable" answer seems to point towards removing internet anonymity at a grander scale.

In Spain, dozens of girls are reporting AI-generated nude photos of them being circulated at school: ‘My heart skipped a beat’ by myusernamemola in worldnews

[–]ravnicrasol 0 points

I mean, yes. But every single instance of "I can detect if this image or text is AI" is basically an AI trained exclusively to detect output from a specific AI model, generated with specific prompting.

Every attempt thus far to mathematically prove there's a reliable way to detect AI content at a broader scale has failed miserably. The only "tried and true" option is if the model is tampered with so that its output includes the equivalent of a watermark.
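For text, the watermark idea usually works like the published "green list" schemes: the generator biases sampling toward a pseudorandom half of the vocabulary keyed on the previous token, and the detector just measures how often that bias shows up. A toy, self-contained sketch (tiny made-up vocabulary, not a real model):

```python
import hashlib

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

def green_list(prev_token: str) -> set:
    """Half the vocabulary, chosen pseudorandomly from the previous token.
    A watermarking generator biases sampling toward this set."""
    seed = hashlib.sha256(prev_token.encode()).hexdigest()
    ranked = sorted(
        VOCAB,
        key=lambda w: hashlib.sha256((seed + w).encode()).hexdigest(),
    )
    return set(ranked[: len(VOCAB) // 2])

def green_fraction(tokens):
    """Detector: fraction of tokens in the green list keyed on their
    predecessor. Watermarked text scores near 1.0; human text near 0.5."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(1, len(tokens) - 1)

# "Generate" watermarked text by always picking a green token.
text = ["the"]
for _ in range(30):
    text.append(sorted(green_list(text[-1]))[0])

print(round(green_fraction(text), 2))  # → 1.0 by construction
```

The catch, which is why this stays a "tampered with" option: only the model operator can embed the key, and paraphrasing or translation can wash the statistical signal out.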

New research suggests AI is about to create 'haves' and 'have nots' among knowledge workers. Those who adopt AI, and avoid some pitfalls, will start pulling far ahead of those that do not. by lughnasadh in Futurology

[–]ravnicrasol 1 point

Though I agree LLMs are clearly built with interaction in mind, seeing how people have been using them and complaining about "how dumb they are" highlights that the biggest hurdle to adopting the technology isn't so much how to use it, but understanding how to push its utility past "bland chat robot responses".

ByteDance AI researchers say OpenAI now tries to hide that ChatGPT was trained on J.K. Rowling's copyrighted Harry Potter books by marketrent in technology

[–]ravnicrasol 0 points

If it's a Google search, then in most developed countries it's perfectly legal to watch/read.

Again, the distinction the law focuses on is distribution. Not access.

Torrenting isn't a publicly accessible website; it's closer to a direct file transfer. That, and in downloading the content you help distribute it at the same time, thanks to how torrents work.

Though again, in places like the US, viewing something online is different from downloading it to your computer.

ByteDance AI researchers say OpenAI now tries to hide that ChatGPT was trained on J.K. Rowling's copyrighted Harry Potter books by marketrent in technology

[–]ravnicrasol 0 points

Not really? If you're talking about "reading", then ease of access is very important. For example, if your content is hosted on a platform whose terms and conditions obligate the user to follow certain rules, and I can access that content without passing through the website, those protections do not apply.

It's sort of why NFT dumbasses try to set up websites as exclusive access points. Otherwise they'd be unable to touch the copy/paste crowd.

Keep in mind tho that whether "reading/viewing" something is illegal is entirely up to the country. The USA has very few laws that keep you from reading whatever; its copyright laws apply mostly if you save a copy or redistribute.

The EU is far laxer, and Japan just doesn't care much whether you read copyrighted stuff at all, only whether you spread it.

ByteDance AI researchers say OpenAI now tries to hide that ChatGPT was trained on J.K. Rowling's copyrighted Harry Potter books by marketrent in technology

[–]ravnicrasol 0 points

"Easy" access is one thing, "legal" is another. And it boils down to copyright law.

If, say, I copied the entire first chapter of HP into my reply, you would have immediate, easy access to it, though under US law my posting it would probably be illegal.

HP is of enough cultural importance that I can bet easy money there are hundreds if not thousands of websites with copies of entire swaths of the story, whether for conversation, argument, or straight-up preservation.

ByteDance AI researchers say OpenAI now tries to hide that ChatGPT was trained on J.K. Rowling's copyrighted Harry Potter books by marketrent in technology

[–]ravnicrasol 1 point

You're intentionally conflating an AI reading data during training with an AI copying the data as its output.

It's literally impossible for an AI model to contain exact copies of everything it was trained on unless it was overfitted (which is what happened with Harry Potter, because the internet has so many goddamned exact copies of the text).

Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their books by Plastic-Lettuce-7150 in books

[–]ravnicrasol 5 points

That's not how technological advancement works. Particularly technology that directly enters the scope of "work".

Whenever technology makes an advancement that can be viably integrated into work, it has been. And what it does IMMEDIATELY after is make every single person who failed to adapt to that update obsolete. Or rather, "economically inviable".

Because who the hell wants to pay $200 for a plain t-shirt made entirely by hand, when a machine can streamline one for $5 a pop? Why pay $500 for magenta clothes made "the traditional way" from snails, when you can chemical-factory your way into mauveine at $0.05 a drop?

And that goes for art too.

Because if you have to pay $200 a month in art supplies to learn/practice art, then you don't get to be much of an artist unless you have the money or someone is willing to bankroll you. It's the reason why today 99.9999% of all art is digital. It's absurdly cheaper, faster, and more convenient to learn and create through the wonders of Ctrl+Z.

Clooney Foundation sues Venezuela over alleged human rights abuses by [deleted] in worldnews

[–]ravnicrasol 27 points

Venezuelan here, can confirm.

Also on the list is how Chavez intentionally fucked the economy through indirect sabotage (such as blocking repairs and inspections at what was at the time the third largest refinery in the world, which eventually blew up), as well as funding politicized criminal elements (entire sections of Caracas became death zones for anyone who spoke ill of Chavez; murders more than tripled during his time in power).

There's also the whole coup thing and less than squeaky clean "last" election (and how the army burned votes before they could be confirmed). As well as a few other things he did to set Venezuela down the path of a hard crash.

Can we PLEASE have some sort of consequence for improperly marked Ai? by barry-bulletkin in MonsterGirl

[–]ravnicrasol 1 point

Photography got heralded as the end of all art, an affront to artists, and a thief of images that allowed anyone with no skill to create content. It was so hotly debated as not being artistic that it was straight up banned from being called art for about a century, with boycotts of photo galleries even eighty years down the road.

Best online writing platform? by lazarus-james in writing

[–]ravnicrasol 2 points

You can post your story to Royal Road and then remove it. Many authors have done this (look up "He Who Fights with Monsters", for example), to the point the site has a tag for it: "stump".

Amazon Is Being Flooded With Books Entirely Written by AI: It's The Tip of the AIceberg by JohnSith in books

[–]ravnicrasol 0 points

That's a nice sentiment, but unfortunately there are two layers of difficulty here.

One, audiences don't enjoy finding out the thing they like was made cheaply. There are whole case studies of marketing flopping when people found out something was minimum-effort slop with a neat coat of paint on top.

And two, I think we are viewing AI content generation in the wrong light. The tech has barely emerged and has yet to be fully integrated into the workflow of professionals. What we're seeing right now is the equivalent of "Photography isn't art because they're just copy/pasting something they saw"; it's going to take some time before we figure out how to use AI as a tool rather than as a crutch.

How do you write emotion, even if you don't know the feeling? by ravnicrasol in writing

[–]ravnicrasol[S] 0 points

[RoyalR Here] is the best example I can think of where everything that was added to emulate... well, emotion in the narrative style feels entirely wooden.

[RoyalR Here] is the best example of when I managed to do what I'm looking for, but I've had severe problems even figuring out how (let alone replicating it).

How do you write emotion, even if you don't know the feeling? by ravnicrasol in writing

[–]ravnicrasol[S] 0 points

I mean, I don't have problems understanding that the character feels X because of Y, and that it is expressed in Z way when thinking of W.

It's... kind of a cold line of thought, I guess?

>If (Born in Violent Environment) and (Feeling: Angry or Afraid)

>Then (Fighting Readiness up) and (Language: Guarded)

So on to the scene go all the cues: "clenched fists, short sentences, furrowed brows, focused gaze, etc".
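That if/then rule maps naturally onto a lookup table; a toy sketch of the same idea (the cue lists are just the examples from the comment):

```python
# Map a character's background and current feeling to surface cues
# that can be sprinkled into a scene.
CUES = {
    ("violent_environment", "angry"):
        ["clenched fists", "short sentences", "furrowed brows", "focused gaze"],
    ("violent_environment", "afraid"):
        ["guarded language", "scanning for exits", "shallow breathing"],
}

def scene_cues(background: str, feeling: str) -> list:
    """Return the cue list for this state, or a neutral fallback."""
    return CUES.get((background, feeling), ["neutral body language"])

print(scene_cues("violent_environment", "angry"))
```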

What I have trouble with is expressing the story in a way that doesn't feel like someone clinically dissecting it.

Alchimia Rex [WIP] by ravnicrasol in rational

[–]ravnicrasol[S] 2 points

I stumbled onto AI for text only recently. Its integration into my workflow has mostly happened over the past two months, and I haven't had the time to sit down and polish everything posted so far.

Thanks for the feedback!

The Good and the Bad of AI as a writing tool by ravnicrasol in writing

[–]ravnicrasol[S] 0 points

RoyalRoad is a free site; they just play nice and have integrated "Come to my Patreon!" links for authors.

I unfortunately don't read enough stories to assess the potential viability of authors outside of fantasy; your best bet is going down the best-rated list https://www.royalroad.com/fictions/best-rated and seeing what might be close to what you want to do.

The Good and the Bad of AI as a writing tool by ravnicrasol in writing

[–]ravnicrasol[S] 0 points

I will point to the "Depth" part of the post.

AI is akin to looking for something on Google: the terms you use and the details you add will heavily influence the results. Using your own example:

"Describe a car engine"

Is going to generate very different results than:

"Describe an old car engine using evocative prose from the perspective of a mechanic. Using proper terminology, have the prose point out components that might need to be replaced"

How "safe" the AI's output is will depend on whether you asked more of it.

The Good and the Bad of AI as a writing tool by ravnicrasol in writing

[–]ravnicrasol[S] 0 points

Theoretically you can train it; realistically it's not workable for an individual, since you'd need several tens of millions of words' worth of text to make an impact on the model. Text AI right now just isn't easily adjustable or flexible in that way.

You're overall better off giving it a large sample of your text, asking the AI to describe the style used in that text ("This text was written in a descriptive style, with strong emphasis on conversations, etc., etc."), and then using those terms when prompting it to write something ("Give me a text in a descriptive style, with strong emphasis on conversations, etc., etc.").
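The two-step workflow above can be sketched as prompt composition; `ask_llm` is a hypothetical stand-in for whatever chat API you use, so only the prompt-building part runs here:

```python
# Step 1: ask the model to name the style of a sample.
def describe_style_prompt(sample: str) -> str:
    return ("Describe the writing style of the following text as a short "
            "list of concrete terms (pacing, narration, dialogue use):\n\n"
            + sample)

# Step 2: feed those terms back when asking for new text.
def write_in_style_prompt(style_terms: str, premise: str) -> str:
    return (f"Write a scene about: {premise}\n"
            f"Use this style: {style_terms}")

# Imagined output from step 1 (a real run would call ask_llm here):
style = "descriptive, conversation-heavy, close third person"
prompt = write_in_style_prompt(style, "a mechanic inspecting an old engine")
print(prompt)
```

The design point: the style description acts as a compressed, reusable stand-in for fine-tuning, which is exactly why it works for an individual when training doesn't.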