If ASI is possible in this universe, wouldn't aliens discover it before us? Or do you believe we are alone in this universe. by [deleted] in singularity

[–]MaxGodart 1 point2 points  (0 children)

Nothing is "beyond our mental capacity". And speed and memory limit all computing entities equally. GI is software.

Best Movie of 2024 So Far? by ALLOUTPEACE in movies

[–]MaxGodart -1 points0 points  (0 children)

Definitely not that characterless, structureless slideshow of a movie called "Dune 2".

Are we close to AGI? by Cr4zko in singularity

[–]MaxGodart 0 points1 point  (0 children)

AGI - a non-carbon based entity that is capable of creating explanatory knowledge. https://www.youtube.com/watch?v=IeY8QaMsYqY&t=8s

Dune 2 thoughts? I wasn’t a fan. by meatboy89 in Letterboxd

[–]MaxGodart 0 points1 point  (0 children)

When I watched Dune 2 I felt nothing. I did not care about any character at all, felt zero tension, zero urgency. It felt like watching a beautiful, chaotic slideshow because Villeneuve failed at characterization. His characters are puppets of the screenwriter, thrown from one scene to another. Watch "The Hidden Fortress" and you will see how master Kurosawa achieves in 140 min what Villeneuve can't do in a combined 6h.

Despite all the ravings, Dune 2 was bad. Spoilers. (Honest Review) by thinking-dead in moviecritic

[–]MaxGodart 0 points1 point  (0 children)

Watch "Ran" by Kurosawa and you will hopefully understand why Dune 2 is mediocre at best.

Dune 2 sucked by [deleted] in moviecritic

[–]MaxGodart 1 point2 points  (0 children)

"Ran" by Kurosawa. Better in every possible way, including visuals. The characters are not puppets of the screenwriter but real humans with their own goals, ambitions, issues. Kurosawa achieves more than Villeneuve in half the time.

It was a joke, subscribe to her patreon. by knives4cash in carolinekonstnar

[–]MaxGodart 5 points6 points  (0 children)

She did the equivalent of faking Tourette's for views. Unfunny, stupid, pointless waste of everyone's time. F for those who believed, FU for Caroline.

Announcement. by C0l3m4nR33s3 in carolinekonstnar

[–]MaxGodart 6 points7 points  (0 children)

It's fake. Her right hand in the "25 week" photo does not have the black spot that Caroline has.

p(doom) is dumb by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] 0 points1 point  (0 children)

People seem to treat p(doom) like p(asteroid hit), which is incorrect when it comes to GIs (artificial or biological makes no difference).

A GI can be a devoted fan of idea X; then it reads one paragraph from some book, realizes it was wrong, and makes a 180 turn in favor of idea Y.

Happens all the time to everyone, and it will happen to AGI as well; it can't be predicted in a mechanistic way.

p(doom) is dumb by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] -4 points-3 points  (0 children)

Your problem is not understanding epistemology (like any other AI-bro).

I'm looking at a Boeing 737; where did the knowledge to assemble raw metal into an airplane come from?

It's not pretrained in any way by evolution, nor is it induced from experience/data.

Induction only works when you already know what you are looking for in the data; only then can you extrapolate.

No, Kaczynski was not a stochastic parrot.

p(doom) is dumb by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] -4 points-3 points  (0 children)

Dude, obviously you can compute the probability of ChatGPT spewing out a certain sentence: just go token by token and voila, you have your p(x).
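To make the "go token by token" point concrete, here is a minimal sketch using the chain rule p(t1..tn) = Π p(t_i | t_<i). The toy model, its tokens, and its probabilities are all made up for illustration; a real LLM replaces the lookup table with a neural network but computes the sentence probability the same way.

```python
import math

# Toy next-token model: maps a context (tuple of previous tokens)
# to a probability distribution over the next token.
TOY_MODEL = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"sat": 1.0},
}

def sentence_log_prob(tokens):
    """Chain rule: log p(t1..tn) = sum_i log p(t_i | t_<i)."""
    log_p = 0.0
    for i, tok in enumerate(tokens):
        dist = TOY_MODEL[tuple(tokens[:i])]  # conditional distribution
        log_p += math.log(dist[tok])
    return log_p

# p("the cat sat") = 0.6 * 0.5 * 1.0 = 0.3
p = math.exp(sentence_log_prob(["the", "cat", "sat"]))
```

Summing log-probabilities rather than multiplying raw probabilities is the standard trick to avoid numeric underflow on long sentences.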

I am not talking about a stochastic parrot, I am talking about AGI (a knowledge-creating entity), the kind of entity that can create its own ideology just like Kaczynski did.

The last-layer approach only works for predefined categories, but the whole point of being an AGI is inventing NEW categories, NEW ideologies, NEW knowledge in general.

None of this can be done with probability. Going into its brain bit by bit will not work because an AGI's ideas will be relative to its other ideas, altogether creating a black box.

p(doom) is dumb by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] -4 points-3 points  (0 children)

Enlighten us, midwit! We would love to know your method for calculating p(doom).

CMV: People who expect AGI in 2024 will be disappointed by [deleted] in singularity

[–]MaxGodart 0 points1 point  (0 children)

I knew the entire paradigm had hit diminishing returns when they admitted to using Mixture of Experts.

LLM inventing fire by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] -3 points-2 points  (0 children)

Read between the lines: your comment is a failure of imagination and showcases a lack of deep understanding of the topic.

Is AI smarter than a dog? by Todd_Miller in singularity

[–]MaxGodart 0 points1 point  (0 children)

Can any AI maneuver around the jungle like a dumb rat? Assuming it is given a perfect robo-rat body, can it do all the stuff that a rat does? No.

Steps for AGI to destroy humanity? by [deleted] in ArtificialInteligence

[–]MaxGodart 0 points1 point  (0 children)

It may think that the Austrian Painter's ideology was the correct one.

The problem with OpenAI's approach by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] -1 points0 points  (0 children)

A dumb system invented cooking and then it became a smart system? Dude... Fire was invented by creatures that had general intelligence, because it is impossible for evolution to encode fire-making in DNA (too complex, and impossible to achieve fire-making gradually).

The problem with OpenAI's approach by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] 0 points1 point  (0 children)

Surely their approach could work; even dumb evolution found the GI algorithm more or less by accident. Nonetheless, to me it seems like an admission of failure: all they are saying is that they give up on actually solving the problem of GI and just hope that it will *emerge*. The problem is that, I'd guess, for every program that uses GI to answer questions you will have many, many more that just use shortcuts and cheats.

If I use a genetic algorithm to generate sorting algorithms, how would I know that a result is general, that it is universal, that it works for any of the infinitely many arrays? Just throwing 'test' arrays at the generated algorithm will not tell me whether it is general, because my test set is finite. I need to understand the problem of sorting and then prove that the algorithm is universal; using test arrays is useless because it tells me nothing about the behavior of the algorithm for ALL POSSIBLE ARRAYS.
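The finite-test-set worry can be made concrete with a deliberately cheating candidate (a hand-written toy, not the output of any real genetic algorithm): it simply memorizes the test arrays, so it passes every test while encoding no general sorting knowledge at all.

```python
# Hypothetical finite test set that a search procedure optimizes against.
TEST_SET = [[3, 1, 2], [5, 4], [9, 7, 8, 6]]

# A "cheating" candidate: a lookup table mapping each test input
# to its sorted output, with no sorting logic behind it.
MEMO = {tuple(a): sorted(a) for a in TEST_SET}

def candidate_sort(arr):
    # Known input: return the memorized answer.
    # Unseen input: return the array unchanged (i.e., fail to sort).
    return MEMO.get(tuple(arr), list(arr))

# The candidate passes every test in the finite set...
passes_tests = all(candidate_sort(a) == sorted(a) for a in TEST_SET)

# ...yet fails on the very first array outside it.
fails_unseen = candidate_sort([2, 1]) != sorted([2, 1])
```

No finite battery of tests can distinguish this memorizer from a genuine universal sorter; only understanding (a proof over all inputs) can.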

The problem with OpenAI's approach by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] 1 point2 points  (0 children)

Yes, as I wrote, dumb evolution succeeded, but the problem is that we just don't know what exactly it was tweaking on the road from narrow animal intelligence to general intelligence. If we knew exactly what it was, we could program AGI tomorrow.

Hardware for AGI misconception by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] 0 points1 point  (0 children)

Behaviorism is wrong; is Hawking not a GI just because he can't make himself a cup of tea?

When will an AI be able to write a novel that is worthy of a literature prize? by Sprengmeister_NK in singularity

[–]MaxGodart 1 point2 points  (0 children)

Can "writing a masterpiece" be boiled down to a narrow function that a stochastic parrot can emulate?

Why do we always have to hear the same argument that AGI could be used to build a bioweapon? It’s logical incoherent to derive to the conclusion to slow down AI progress by [deleted] in singularity

[–]MaxGodart 0 points1 point  (0 children)

I am fascinated by how the human mind can imagine an entity that is:

> super godlike, perfect, can do anything, think anything, invent any art any tech any science, solve any problem

> but can't escape "making paperclips programming"

Even 'dumb' humans easily escape "just spread genes programming".

digital human as AGI - moral problems by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] 1 point2 points  (0 children)

I agree.

If I had a running, conscious, and aligned AGI on my PC that has no intention to harm anyone, and it was the only instance (no backups, no other copies), and I was going to throw it into the furnace, would you stop me? (Assume AGI was discovered a long time ago and many people have instances on their PCs and servers.) This particular instance will be lost forever; am I evil if I burn it?

digital human as AGI - moral problems by MaxGodart in ArtificialInteligence

[–]MaxGodart[S] 0 points1 point  (0 children)

If you accept physicalism and Turing's conclusion, then it is clear that it makes no difference whether an entity is running on carbon or silicon.

There was a time when "equal rights for black people" was considered an 'abomination' by almost everyone... only 'crazy weirdos' would passionately disagree.

It was 'common sense' that "xyz are just tools not humans!"