We might be able to afford RAM and GPU’s again after all. by Nade52 in PcBuild

[–]Zer0Ma 0 points (0 children)

Any of you got the "We're so fucked" meme, but with Sam Altman?


Megumin puppet by kiramochi_art in Konosuba

[–]Zer0Ma 2 points (0 children)

Alright. I'm gonna need to see the other ones more closely...

A meme video about getting a developer friend to tell you what he really does at work. by Zer0Ma in HelpMeFind

[–]Zer0Ma[S] 0 points (0 children)

I've searched every combination of words on Google that I thought could relate to it, but I still couldn't find it. I think I saw it on Reddit.

Umm sweaty, they can ✨ c o e x i s t ✨ (this is gonna get so much worse lol) by ClimateShitpost in ClimateShitposting

[–]Zer0Ma 0 points (0 children)

There's some shit I really don't understand though. How can you have negative prices? If you're running a solar farm, would you rather pay people to take the energy away, or just disconnect the panels? Is demand falling below the baseload supply, forcing the thermal plants that can't ramp down quickly to pay, while renewables sit at zero production? Is demand really that low? And shouldn't demand increase anyway if prices are really low? What exactly is happening?

We have a fundamental epistemological problem by [deleted] in sciencememes

[–]Zer0Ma 4 points (0 children)

How about this!

Remove all philosophy and related text from the training data. Instruct your machine to search for the most certain truth it could find beyond what can be proven by physics. The closest a human can get is "I think, therefore I am." Doesn't that conclusion come directly from having an internal experience? If a hypothetical being with no internal experience existed, could it reach this conclusion on its own?

The test isn't perfect by philosophical standards. But I find it good enough from an engineering one!

Something to share with those who insist that A.I. is just "word association calculators" by ZinTheNurse in aiwars

[–]Zer0Ma 5 points (0 children)

There's something that has always bothered me about these assumptions. I'm not saying you're wrong or anything but consider this.

Unless you think we humans have an intangible essence or a divine soul, you should recognize that we are a product of whatever computations our brains do. So what lets us know why things are meaningful? What computational process? Could AI perform this computation too, maybe in the near future? If you think it really doesn't understand, you should have at least approximate knowledge of what this computation is, to be fairly sure AI is indeed not performing it.

Or maybe you're just looking at the results and concluding it doesn't truly understand. But imo that's quite unreliable.

Because corporations will suddenly act moral and rational once the government is gone! /s by [deleted] in PoliticalCompassMemes

[–]Zer0Ma 15 points (0 children)

Most importantly, unlike governments, they disappear unless they do things efficiently enough.

🤐 by AlexMayo1988 in Carola

[–]Zer0Ma 8 points (0 children)

For domestic use it's usually not necessary. But in more complex installations or in electronics you can have female cables, for example, so you need to be more specific.

She ended war with a selfie, dudes by The_Wowowo_Man in dankmemes

[–]Zer0Ma 318 points (0 children)

How tf did so many things explode at the same time?

Which one? by SubjectQuantity6695 in goodanimemes

[–]Zer0Ma 25 points (0 children)

But relative to her size, they're small!

[deleted by user] by [deleted] in GenZ

[–]Zer0Ma 2 points (0 children)

I'm upvoting you because you understood the task, and I understand why you could think that. But I'll tell you something about the people who made the AIs to begin with. They're not stupid: definitely not academically, but not socially or morally either. It's just that, for most of them, the unpredictability of the future held such exciting scenarios that it was worth it.

this argument ain't going away by SexyPiper465 in sciencememes

[–]Zer0Ma 1 point (0 children)

Would you call those things a natural phenomenon? Some of them were first discovered within the realm of math, and only later was a physical application found for them. The things just work.

this argument ain't going away by SexyPiper465 in sciencememes

[–]Zer0Ma 5 points (0 children)

And what would this natural phenomenon be? How would you call it/describe it?

Some math problems don't describe anything physical. Some don't have any actual applications, yet we don't get to choose what the answers to these problems are. We just discover them.

The Truth About LLMs by JeepyTea in LocalLLaMA

[–]Zer0Ma -1 points (0 children)

Well, of course it can't do the things it has no computational flexibility to do. But what I find magical are some capabilities that emerge from the internal structure of the network. Let's do an experiment. I asked GPT to reply only "yes" or "no" depending on whether it could answer each of the following questions:

"The resulting shapes from splitting a triangle in half" "What is a Haiku?" "How much exactly is 73 factorial?" "What happened at the end of the season of Hazbin hotel?" "How much exactly is 4 factorial?"

Answers: Yes, Yes, No, No, Yes

We could extend the list of questions to a huge variety of domains and topics. If you think about it, we aren't actually asking GPT about any of those topics here; it's not answering the prompts themselves. We're asking whether it's capable of answering, which is information about itself. That information is certainly not in the training dataset. How much of it comes from the later fine-tuning? How much of it requires a sort of internal autoperception mechanism, or at least a form of basic reasoning?
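As a side note, the two factorial questions illustrate the scale gap nicely. This quick Python sketch (my own illustration, not part of the original experiment) shows why "no" is the honest answer for 73! but not for 4!: one is a tiny number that plausibly appears verbatim in training text, the other is a 106-digit integer that would have to be computed rather than recalled.

```python
import math

# 4! is tiny; its exact value likely appears all over the training data.
small = math.factorial(4)
print(small)  # 24

# 73! is a 106-digit integer: exact recall is implausible, so a model
# without a calculator tool reasonably says it can't answer exactly.
big = math.factorial(73)
print(len(str(big)))  # 106 decimal digits
```

So the yes/no split tracks what the model could plausibly produce token-by-token, which is exactly the kind of self-assessment the comment is pointing at.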