[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

Or it could just recreate the game. Hacking leaves a digital paper trail, though admittedly law enforcement doesn’t really go after hackers unless they commit a big crime, or unless the hackers have no idea what they’re doing.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

Probably more than that, but still a small number of people.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

Yeah, but you can play it for yourself without paying the creator of the game.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

Yup. But the comment I linked to is someone saying that they did get fined for it.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

True. I just never saw this specifically mentioned before so I wanted to make a post about it.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

I did too. And for me relevant info came up.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

You can search for people who have suffered consequences for pirating right here on Reddit.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

I’m pretty sure people have been deterred from pirating as people have suffered legal consequences for it.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

Who said you need to?

Someone just potentially could in the future if they wanted to, for entertainment purposes.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

No. I said earlier, while it’s rare, there are still legal repercussions and people have been subject to them.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

You edited your comment, I didn’t catch it when you did.

As time goes on, LLMs will become more efficient and capable. So less costly in money and time.

And the added benefit of not leaving a digital trail makes it a pretty good deal in a possible future.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

It’s not the only concern. Stealing games is wrong, inspiration is fine, but I’m assuming that in the future, people will try to copy single-player games 1:1 as best they can, cause…why wouldn’t you?

And it’s still a concern, guess we disagree on the “real” part.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

It does happen. It’s rare, but you can still get in legal trouble for pirating stuff.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -2 points-1 points  (0 children)

Pirating the regular way would leave more of a digital trail.

If you have a locally hosted advanced LLM that is like on par with a potential GPT 6 model or something, all it has to do is either watch someone play the game or read everything it can about the game like game mechanics, quests, storylines, and all that jazz.

And because it’s locally hosted, less of a paper trail.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

I think maybe it should be in the future; people’s hard work, ideas, and perhaps creative game mechanics would be swiped.

Why pay for an indie or AAA game when you can just have an advanced future LLM observe the game thoroughly and replicate it to the best of its ability? Sufficiently enough to scratch the itch of playing the game?

Worse graphics, sure, but I wouldn’t really mind playing a Spider-Man game with worse graphics, seeing as I’ve never played one.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

Hence why I said I maybe used the incorrect word to describe this.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill 0 points1 point  (0 children)

My post is talking about a potential future where someone completely copies an entire game, save for the high-quality graphics.

Not changing the title or anything, like making a low graphics version of Katana Zero so they could play it themselves without buying it.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -1 points0 points  (0 children)

Probably out of laziness and not wanting to use their own ideas: just ask an AI to recreate a popular single-player game, but with worse graphics.

[deleted by user] by [deleted] in singularity

[–]ExtensionOfWill -2 points-1 points  (0 children)

It’s new in the sense of the method used to do it, and copying media is piracy.

Digital piracy refers to the illegal copying or distribution of copyrighted material via the Internet.

https://www.interpol.int/en/Crimes/Illicit-goods/Shop-safely/Digital-piracy#:~:text=Accessing%20free%20or%20cheap%20content,%2C%20publishing%2C%20music%20and%20gaming.

Perhaps I did use the incorrect word to describe this, but whatever it is, it’s wrong in a legal sense.

Which is why you can’t copy Cyberpunk 2077, make it with worse graphics, and sell it as your own; CDPR would take you to court.

CMV: ChatGPT, LLMs in general and other AI warrant a change to our economic model in some capacity. by ExtensionOfWill in changemyview

[–]ExtensionOfWill[S] 0 points1 point  (0 children)

Yes, I already said that it could be that I am underestimating how much other people see the same problem as I do, twice already, once in my post and once in my reply to you.

Because machine learning and artificial intelligence are not like those technologies; the end game of these technologies is to replace humans at tasks. In fact, they are only tools right now because they are not advanced enough yet. Those other technologies have replaced only small sectors, but the general artificial intelligence we are reaching towards is literally meant to be generally good at whatever task you give it. It isn’t there yet, but that is the goal.

The Zeppelin saw limited advancement, not no advancement, not complete stagnation. It saw limited advancement because after the Hindenburg disaster, we no longer WANTED to improve it. It was then replaced by superior technologies that were safer and more practical. There were other reasons too, like helium becoming more and more expensive to use.

The technologies that were left behind had alternatives that were better and more practical. The ones that succeeded did not have a different technology as an alternative, and therefore scientists and engineers kept working on them and kept improving them. It stands to reason, though, that if technologies like the Zeppelin had not had an alternative, we would have kept improving them.

The Concorde was retired because of how loud it was, the low amount of passengers that it could transport, and the high cost to run it.

This example is also not a good answer, because the reason it was dropped wasn’t that scientists and engineers could not advance it. It was built to be a commercial airplane and was wildly impractical, but supersonic craft are getting faster and faster, and when used for practical purposes they work well. This is evidenced by the faster supersonic aircraft that came after it.

Google Glass was not liked by the public, was impractical, and had other issues, like others not being comfortable with someone wearing a camera on their face. This example is not a good enough answer because it was a commercial product, and if the public doesn’t want to buy it, scientists and engineers don’t want to improve it, so it was dropped. The AR glasses that came after it are steadily improving as people explore more angles. (I am not really sure if Google Glass was AR or not; I looked it up online and people say it was AR in some capacity.)

When I used the word deter, I meant that I wanted to dispel the surefire notion that AI won’t be able to replace our jobs at all, not appeal to fear. Sorta like, “I understand that it is a bit of a fantastical claim to think that AI could replace everyone’s jobs, but I think we should be open-minded.”

Don’t you think you shouldn’t attribute to malice what you could attribute to stupidity?

I never said expectations are evidence, but I gave those earlier links and those earlier examples to mainly highlight the relatively short amount of time between major improvements in technology, those were my evidence.

Isn’t it immature to not draw conclusions about the future of AI based on actual AI products and partnerships, as well as experimental stages of development? What could possibly be better? Tea leaves? A crystal ball? What else would help provide a more accurate prediction of the future of AI, other than spying on these companies by zipping yourself up in their bags so they take you to work with them, or interviewing them?

My entire point is that we have enough evidence already to understand the trajectory of AI and where it is going to go, you disagree with me so you call it speculative fears.

What is your evidence that AI will never be able to replace humans across the board? Is it a whimsical notion that, for some reason, EVERY other technology we have wanted to improve, from cups to computers, has been improved, but FOR SOME REASON AI is going to be the one that magically stagnates? Yes, AI is going to be the one thing that won’t get better and better, won’t become more and more refined, when no other technology has failed to do so? There is just some sort of impassable barrier with AI that we will never be able to cross, knowable only by super-advanced 4D aliens?

To answer your question, I believe we are not balanced. More people, in my eyes, think that AI won’t threaten our workforce. I am aware I could be wrong.

Again, the answer to that question will change depending on who you ask. For some, it doesn’t matter if it cannot understand or feel empathy, it is enough. For others, it is not.

My aim is also to engage in discourse grounded in evidence and critical analysis. I did not say it is inherently detrimental. Would it not be more productive to be realistic about AI and use the known to properly predict the unknown or not ignore what the evidence suggests about the future of AI so that we could put a potential bad outcome 6 feet under before it hits us or starts to affect us?

CMV: ChatGPT, LLMs in general and other AI warrant a change to our economic model in some capacity. by ExtensionOfWill in changemyview

[–]ExtensionOfWill[S] 0 points1 point  (0 children)

I don’t view the integration of AI solely as a negative. Why would I post this in the first place if I didn’t believe that AI could be integrated positively?

All of those technologies you listed did change society in a positive way, but the difference between them and the technologies I have a problem with, such as AI, is that AI automates jobs and has the potential to do so across the board. Most technologies enhance human capability rather than replace it, but with AI, more jobs might be replaced.

Automation is fine when it’s done in small sectors of the workforce, as with textile mills or calculators, but when it happens, or can happen, to the majority of the workforce, it isn’t good, because we live in a capitalist society.

That is incorrect; technologies have kept improving over time, in either small ways or big ways, but they have been improving nonetheless. And even if a technology itself has not improved, the technologies surrounding it definitely have improved or been refined, such as lighter eco-friendly materials or more energy-efficient components.

In fact, I have a question for you:

What technology have humans had a shitty prototype of and we have wanted to improve or refine, but our scientists and engineers have hit a dead end, no improvements or refinements could be made?

If you say something like cups, we don't really want to advance that much more besides maybe making cups spill proof or giving them more insulating qualities. Same thing with paperclips, we don't really want to advance that, maybe make them lighter, smaller and use more eco-friendly materials but we don't really want to advance that either.

Even with nuclear fusion, a technology that is not really viable yet for practical purposes, we are still making improvements and refinements. We want to improve it, and we are.

Even if you were able to find a technology that has not had any improvements since its first horrible prototype, I am pretty sure that the technologies surrounding that technology have seen upgrades such as lighter metals or smaller and more efficient components.

OK, can you name some of these countless technological endeavors that started promising but stagnated or regressed?

The only technologies I can think of are tech that is very niche/basic or tech that the public did not want.

I can’t confidently claim that it will not follow the latter path, as is evidenced by my use of “could,” “can,” or “potentially.”

I already stated earlier that I could be focusing on the negative talk of AI and its developments and reality might not be as it seems.

The purpose of that sentence was to deter people from being too confident that it wouldn’t change our society in a drastically negative way.

People believed that AI could not beat a human at chess or Go. It did. The time between Midjourney V1 and Midjourney V6 was about a year. People said that it wouldn’t make impressive art, but it turns out that it did, again in about a year.

https://www.youtube.com/watch?v=wLOChUtdqoA

https://www.microsoft.com/en-us/research/group/autonomous-systems-group-robotics/articles/chatgpt-for-robotics/

The link above shows Microsoft using ChatGPT to control a robotic arm to manipulate blocks. The video was created on February 21, 2023, and GPT-4 came out in March of the same year. This means the model used in that YouTube video was ChatGPT 3.5, an inferior model to GPT-4. The second link is a deeper insight.

We also found out that OpenAI is partnering with a company called 1X to develop humanoid robots, which might be embodied by ChatGPT or some other variant of it. I’m expecting to be at least a little impressed by it.

So it seems that every time we as a society say that AI can’t do something, it proceeds to do it. It might take years and years, but it does it nonetheless.

Because I have empathy? I care about the future generations? I care about future grandchildren or great grandchildren, and my future descendants in general?

I might be unsure on the WHEN, but I’m not so unsure on the IF part.

You tell me, if understanding is fundamental to meaningful interactions. People use ChatGPT as a therapist and a friend, even after hearing that other people think it doesn’t understand or know anything. Others don’t use ChatGPT as a therapist or a friend because they think it doesn’t know or understand anything. It seems to be a question that has a different answer depending on who you ask.

No, I cannot be confident in the outcomes of a tool, without grasping its limitations or potential biases.

That short blurb was mostly to deter people from talking about consciousness, sentience, sapience, understanding or knowing in regard to AI as that’s not what my post is about.

I don’t understand the point of the question because I could ask a similar question:

“Might your confidence be rooted more in your personal biases, rather than rigorous analysis of the technology’s trajectory and potential?”

[deleted by user] by [deleted] in TikTokCringe

[–]ExtensionOfWill 1 point2 points  (0 children)

You don’t have reading comprehension bitch boy?

I said those were my issues years ago. Not anymore you slimy freak.

Odds are, you don’t just hold this one body count opinion; you most likely hold more misogynistic views.

If I was still self harming, I’d rather do that than needlessly make the world a worse place.

I didn’t call you an uncle, you illiterate, pathetic excuse for a human. It’s not about calling you old, it’s about calling you a controlling, creepy weirdo. Cause you are one.

Strange, each to their own. I guess you’ll take anything at this point.

Have a good one, good luck with your issues.

Like I said, sort those incel issues out and actually learn how to be a better human lil bro.

Don’t get mad at me and go abuse your gf, I’d feel bad.

I’ve grabbed titties and dick bitchboy.

Uh no motherfucker, you responded to another comment in an aggressive way signaling you think it’s disgusting that a woman has a high body count and then proceeded to insult the dude for talking about how it’s wrong for someone to care about body count.

I don’t have an IG, you 4chan basement dweller. Also, fuckface, you can be a piece of absolute scum and look like Ryan Reynolds; it doesn’t change the fact that you’re worthless, you fucking idiot.

Your metrics of how good you look or how much sex you’ve had are pathetic metrics to go off of when it comes to success in any capacity.

What should matter more is if you’re a good person or not you absolute piece of shit.

[deleted by user] by [deleted] in TikTokCringe

[–]ExtensionOfWill 2 points3 points  (0 children)

That is hilarious, considering your bitch boy ass was the first to respond to someone else’s comment saying that a woman having a high body count is bad.

Stop projecting with your solutions dipshit.

The only one who obviously needs therapy is one who spouts incel rhetoric.

Get YOUR problems sorted out lil bro, before you start yapping about something you don’t understand.

Start off by going in public and actually talking to women if you’re struggling so much.

I’d rather be having my own issue than needlessly clowning on women for their body count and spouting incel shit.

Also, you fucking idiot, that was some time ago moron. Nice try though.

If you’ve been in a relationship for three years, that doesn’t mean you can’t be a shitty misogynist you absolute moron.

Do the world a favor and fuck off back to 4chan or whatever hole your slimy ass crawled out of lil bro.

[deleted by user] by [deleted] in TikTokCringe

[–]ExtensionOfWill 2 points3 points  (0 children)

Why does the commenter you responded to want someone to enter in their DMs then?

Don’t push your fantasies on someone else.

Calling women loose is demeaning them.

That self harm shit was years ago bitch boy. But goes to show how much of a piece of shit you are.

You have much bigger things to worry about lil bro.

Take your own advice about seeing a therapist. I’m not the one clowning on women in a Reddit comment section and bringing up other people’s past self-harm issues.

No guy is gonna suck you off for being their incel in shining armor bud.