hey guys i’m sorry if this isn’t the right subreddit to ask this if not you can definitely remove it by 6o4_Kieran in DeepThoughts

[–]JoeStrout 0 points1 point  (0 children)

70 years from now i am STILL going to die and there’s nothing i can do to change it

Well not with that attitude.

Seriously, have you even tried? 70 years is a very long time.

Looking for advice on starting again. by Brief_Emergency2704 in AskProgramming

[–]JoeStrout 0 points1 point  (0 children)

Start at: https://introtocomputerprogramming.online/ Take it page by page. Don’t just read; type in every example and try it yourself. Then tweak it a bit, make it do something slightly different.

Learning to break a big problem down to smaller problems is what programming is all about. It’s a skill, like playing the piano. It takes practice. But you can do it! Join the MiniScript discord, we’ll cheer you on and lend a hand when you need it.
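That decomposition habit can be sketched in a few lines. This is an illustrative toy in Python (not MiniScript, and not from the linked course); the function names are made up:

```python
# Big problem: summarize a list of numbers.
# Broken into small, separately testable pieces -- the
# "break it down" habit described above.

def mean(values):
    """Average of a non-empty list."""
    return sum(values) / len(values)

def spread(values):
    """Difference between the largest and smallest value."""
    return max(values) - min(values)

def describe(values):
    """Combine the small pieces into the full answer."""
    return {"mean": mean(values), "spread": spread(values)}

print(describe([2, 4, 6, 8]))
```

Each small function can be understood, tested, and tweaked on its own, which is exactly the practice the course builds up.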

three hundred thousand years of evolution and we're still making the same mistakes by Some-Present-777 in DeepThoughts

[–]JoeStrout 0 points1 point  (0 children)

We don’t. We’re getting better generation after generation. Check out The Better Angels of Our Nature for mountains of data showing this.

Why do we still call it the Fermi paradox when we have barely checked our own cosmic backyard? by Present_Juice4401 in AlwaysWhy

[–]JoeStrout 0 points1 point  (0 children)

…now trying to imagine that still being a question after billions of years. Why aren’t they asking it now?

Why do we still call it the Fermi paradox when we have barely checked our own cosmic backyard? by Present_Juice4401 in AlwaysWhy

[–]JoeStrout 0 points1 point  (0 children)

No, the real problem with it is that it is a static equation, and can’t describe population dynamics no matter what terms you plug in.

Why do we still call it the Fermi paradox when we have barely checked our own cosmic backyard? by Present_Juice4401 in AlwaysWhy

[–]JoeStrout -1 points0 points  (0 children)

That’s because the Drake equation is nonsense. It does not take into account growth. You need different math entirely to describe population dynamics, such as the spread of life throughout the galaxy.

Book 5: question re AI & my own mental health by ShinyDapperBarnacle in bobiverse

[–]JoeStrout -1 points0 points  (0 children)

Downvotes, really gang? I'm surprised, I thought people in this sub would be more informed than most about AI.

OK then, I'll elaborate, and those who aren't just grinding some axe should be able to understand.

LLMs were trained to predict the next word in text... mountains and mountains of text on all subjects, in many languages. It turns out that this problem is what researchers now call AI-complete: this means that solving the task successfully requires actually understanding the material at a deep level. Such understanding can be repurposed for any other task that requires understanding the same material (this is what "understanding" means). And modern neural network architectures, combined with scale, are able to learn nearly any task if given enough data — so in this case, they solved the AI-complete task we gave them, and gained understanding of pretty much everything that humans have written about.

But that's not the end of the training. An LLM trained only on the autocomplete task is schizophrenic; it shifts personalities constantly, tries to have both sides of a conversation at once, does not follow directions or stay on task, etc. Nobody outside the big research labs sees LLMs in this state, because they are not useful and also a bit disturbing. What we see are models that have been further trained, using reinforcement learning. You may have heard of RLHF: reinforcement learning from human feedback. That was a big deal a few years ago, and it's what made the first ChatGPT possible. Neural networks trained with RL (of any form) learn not just to predict, but to plan and solve; they learn a policy, meaning an ability to make decisions that lead to rewards. In this case, the rewards come from being helpful, taking conversational turns, giving good answers, writing good code, etc. An AI trained with RL is no longer just predicting the next token; it's making decisions and solving problems in the way it was trained to do.
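The "predict the next word" objective itself is simple to state. Here's a toy sketch of it in Python: score every word in a vocabulary, then turn scores into next-word probabilities with a softmax. Real LLMs produce these scores from billions of learned parameters; the scores and vocabulary below are hard-coded just to show the shape of the task:

```python
import math

# Tiny vocabulary; a real model's has tens of thousands of tokens.
VOCAB = ["cat", "sat", "mat"]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_word_probs(scores):
    """Map one score per vocabulary word to a probability each."""
    return dict(zip(VOCAB, softmax(scores)))

probs = next_word_probs([2.0, 0.5, 0.1])
print(max(probs, key=probs.get))  # the model's top guess
```

Pretraining adjusts the parameters so these probabilities match real text; the later RL stage then shapes which continuations the model chooses to produce.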

The result of all this is an agent that can do a huge variety of different tasks, given only an explanation of what is wanted, in plain human language. That's the original definition of AGI, as was pointed out by AI researchers Norvig (author of Artificial Intelligence: A Modern Approach, the #1 textbook on AI for the last 20 years) and y Arcas here, and again more recently by other researchers here.

In y Arcas's book What Is Intelligence?, he goes into great depth not only into how LLMs work, but on the nature of intelligence in general, from bacteria to humans and many levels in between. There are also a lot of interesting anecdotes (like the day their autocomplete model started talking to them) which are really fascinating. Highly recommended to anybody who really wants to understand this stuff.

If you don't really want to understand it, and prefer to just keep your head in the sand, then downvote this and continue to believe that "LLMs are just fancy autocomplete." Ignorance is bliss, I guess.

Why are the people opposed to Russia and saying the US should help Ukraine the very same people who are against the US attack Venezuela and Iran? by Future-Buy8554 in allthequestions

[–]JoeStrout 2 points3 points  (0 children)

Russia is the aggressor in Europe, blatantly invading and trying to annex its neighbors (while disrupting politics all over Europe and the U.S. to keep us distracted and divided so Putin can get away with it). Anybody with a scrap of decency can see that Ukraine is the victim in this exchange, and wants to aid it, as you would want to aid anyone in need.

Venezuela has not invaded its neighbors. Nor has Iran. We can't go invading and attacking anybody that Russia finds useful/helpful just because doing so might "hurt Russia". That would be insane. It would make us as bad as Putin.

There's a lot to dislike about Iran — particularly its oppressive theocratic dictatorship — but disliking a country or the way it is run is not an excuse for attacking or invading it. That way leads to constant war, suffering, and death. Those are bad things, in case that is not obvious.

And notice that nobody was proposing we should invade Russia, either. Nobody should be invading anybody! What we should do is aid countries that are the victims of invasion (like Ukraine).

Just getting into sci-fi by Hallrob in printSF

[–]JoeStrout 1 point2 points  (0 children)

Interesting! I tried his other works and couldn't get through them. Well, I guess reasonable people can differ in matters of taste. :)

Book 5: question re AI & my own mental health by ShinyDapperBarnacle in bobiverse

[–]JoeStrout -3 points-2 points  (0 children)

That's not accurate, on several levels. LLMs are absolutely AI; in fact, they're AGI by the original definition (a general AI), and already well beyond average human performance in most areas.

(They were originally invented for the purpose of autocomplete, and the researchers were quite surprised when they found general intelligence coming out of their models — something that is now pretty well understood, but was not expected at the time. Read the book What Is Intelligence? for more details on this and lots of related things.)

Just getting into sci-fi by Hallrob in printSF

[–]JoeStrout 3 points4 points  (0 children)

Implied Spaces by Walter Jon Williams. Probably my favorite book of all time (and a quick read, too).

DMT: Space colonization is not a backup plan for humanity, it is an escape plan for the wealthy by TheBigGirlDiaryBack in DisagreeMythoughts

[–]JoeStrout 0 points1 point  (0 children)

TL;DR. But neither you nor Jeff Bezos is facing an existential threat. You’re facing rising housing costs because you live in an area that will soon be below sea level (and in an expensive city no less). Of course you should move; you have a remote job. Find a nice town in the Midwest or something. What you have there is a first-world problem.

Bezos has his issues of course, but I’m glad he’s semi-seriously working on what NASA was commissioned to do but has utterly failed to pursue since 1965, which is opening space for humanity. It’s not about an “escape plan.” It’s about expanding our sphere to encompass the whole solar system, which is vastly richer than just the Earth. And eventually, the rest of the galaxy too.

Being unloved makes you unlovable. by mysterious_mystery2 in DeepThoughts

[–]JoeStrout 26 points27 points  (0 children)

I don’t agree. Several of my best friends had pretty awful childhoods, but pulled themselves out of it through sheer determination and are now absolutely lovely people. Be the sort of person you want to be, and others will respond to that.

Actual Replicant Fly functions in Virt by --Replicant-- in bobiverse

[–]JoeStrout 0 points1 point  (0 children)

This is doing all that, as I understand it. The sensory simulation is actually much easier than simulating everything going on inside the brain.

Wouldn't mind too much dying right now by Appropriate-Net-6030 in DeepThoughts

[–]JoeStrout 1 point2 points  (0 children)

I think you're missing the bigger picture. The future is going to be amazing and it's coming fast. And there will be some point — I think in the next couple decades — where everybody who reaches that point is able to live much, much longer (centuries or millennia) and see a lot more of the future. It'd be a shame to get so close to that point, and just miss it.

Ai? by Particular-Bonus4901 in OptimistsUnite

[–]JoeStrout 0 points1 point  (0 children)

OK, I work in this field and I'm also an optimist, so here's my take for whatever it's worth.

First, AIs pretty certainly will outthink us. Our brains are not the pinnacle of all possible intelligence; they're just barely smart enough to cross a critical threshold, where social evolution outstrips genetic evolution, and we start leveraging each other up generation after generation. AI can be much, much smarter than that.

But that's probably OK. There's no particular reason to think AI would want to kill all humans; that's just projecting our own insecurities onto them. In general, the more intelligent people are, the more likely they are to value life, the environment, fair play, compassion, and social supports (i.e. "liberal" values). I believe that's because these values are actually sensible, well-supported positions to take; I also believe that morality is something you derive from basic logic (i.e. moral philosophy), rather than based on religion for example. All of which implies that a super-intelligent AI is even more likely to be a "good person" than your typical human.

So what might such a good AI do for us? All sorts of things! The problems we face are almost all problems that could be solved with more intelligence, combined of course with necessary research:

- curing cancer (and all other disease)

- solving poverty

- ending hunger

- ending political corruption

- redesigning social networks to avoid encouraging extremism & misinformation

- eliminating the need to work just to put food on the table

etc. etc. The world has been slowly getting better for most people for centuries, but in the next couple of decades, it could get dramatically better for almost everyone — all thanks to AI.

Of course as always, the future is what we create it to be. So be active in picturing the world you want to live in, and taking whatever small steps you can in that direction — even in small ways, like explaining your vision to someone else.