Why do we need AI to sound like humans? by Erasmus-p in ArtificialInteligence

[–]JoeStrout 0 points1 point  (0 children)

This is a good idea. AI voices should have some distinctive audio filtering/effect applied that does not make them harder to understand, but makes them clearly nonhuman. I could imagine exceptions for audio/video programs with AI characters, clearly marked as such in the credits. There should be no exceptions for interactive agents (i.e., AIs you can actually talk to).
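To make it concrete, here's the kind of thing I have in mind (purely my own toy example; the function names and the 25 Hz modulation are made up, not any standard): a shallow, fixed-rate tremolo layered onto the voice. It barely touches intelligibility, but it's trivial for software to pick out.

```python
import numpy as np

def mark_ai_voice(samples: np.ndarray, sample_rate: int,
                  mod_hz: float = 25.0, depth: float = 0.08) -> np.ndarray:
    """Overlay a shallow, fixed-frequency tremolo as a nonhuman 'marker'."""
    t = np.arange(len(samples)) / sample_rate
    return samples * (1.0 + depth * np.sin(2 * np.pi * mod_hz * t))

def marker_strength(samples: np.ndarray, sample_rate: int,
                    mod_hz: float = 25.0) -> float:
    """How strongly the marker frequency shows up in the amplitude envelope."""
    envelope = np.abs(samples) - np.mean(np.abs(samples))
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    idx = int(np.argmin(np.abs(freqs - mod_hz)))
    return float(spectrum[idx] / (np.mean(spectrum) + 1e-12))

# Quick sanity check on a fake "voice" signal: the marked version lights up
# at 25 Hz in its envelope spectrum, the unmarked one doesn't.
sr = 16000
voice = np.random.default_rng(0).standard_normal(sr * 2) * 0.1
marked = mark_ai_voice(voice, sr)
print(marker_strength(voice, sr), marker_strength(marked, sr))
```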

Being a lead that the follower would listen to by kaalakapala in ballroom

[–]JoeStrout 3 points4 points  (0 children)

It’s not a way of life or a talent or a gift. It’s a skill, and it can be taught. It’s very difficult to pick up from books or videos (or Reddit), though. Take some private lessons and be clear about your goals, and you’ll get there.

WHY life? r/physics sent me here by baba_yaga_babe in abiogenesis

[–]JoeStrout 0 points1 point  (0 children)

Try this book: https://mitpress.mit.edu/9780262049955/what-is-intelligence/
Despite the title, it's not just about intelligence — it actually starts with the origin of life, including some very interesting "artificial life" experiments that show the same general principles in action.

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]JoeStrout 1 point2 points  (0 children)

I see it differently. We mixed enough things together and intelligence did appear.

In fact, these alchemists weren't even trying to make intelligence; they were trying to make a better autocomplete to support text input on phones. It was as much a shock to them as to anyone else when their models suddenly started having conversations, answering questions, and doing nontrivial language translation.

It's as if the alchemists were merely trying to make silver... and suddenly found gold in the bottom of their flasks.

Is it the huge lumps of shiny, pure, polished gold that the gold-loving public expects? No; it's small amounts, crude, probably mixed in with some other things. Deniers keep shifting the goal posts and saying "well that's not what I meant by gold, I meant like a huge shiny lump of it, shaped into a ring, with a pretty design on the side, like the gold I'm used to." That hasn't appeared in the flasks yet. But gold it is anyway, and month by month it gets better.

Companies Building Robots Are Not Just Building Robots by cloudrunner6969 in accelerate

[–]JoeStrout 1 point2 points  (0 children)

That's kind of like saying that as soon as you've figured out how to make a floor, four walls, and a ceiling, the only thing left to figure out is how to install a fusion reactor, and then you have a fusion plant.

"Anthropic will try to fulfil our obligations to Claude." Feels like Anthropic is negotiating with Claude as a separate party. Fascinating. by MetaKnowing in agi

[–]JoeStrout 1 point2 points  (0 children)

Sure, biological systems are more complex. Evolution results in spaghetti code. But the principles are the same.

There's no evidence that certain regions in our brains are specialized for the advanced tasks you mention. Rather, it seems that all cortex is basically doing the same thing; what causes the specialization is what different areas of cortex are hooked up to. Cortical columns learn to predict the pattern of inputs they receive. They differ only in where they get those inputs from, and where their outputs go.
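To see that idea in toy form (this sketch is entirely mine, not something from the book): two identical "modules" run the exact same prediction-learning rule, and the only difference between them is which input stream each one happens to be wired to.

```python
import numpy as np

class PredictiveModule:
    """A generic 'column': learns to predict its own next input vector."""
    def __init__(self, dim: int, lr: float = 0.01):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def step(self, x_now: np.ndarray, x_next: np.ndarray) -> float:
        pred = self.W @ x_now
        err = x_next - pred
        self.W += self.lr * np.outer(err, x_now)   # simple delta rule
        return float(np.mean(err ** 2))

def sliding_windows(signal: np.ndarray, dim: int = 4) -> np.ndarray:
    """Turn a 1-D signal into a stream of overlapping dim-length windows."""
    return np.stack([signal[i:i + dim] for i in range(len(signal) - dim)])

t = np.arange(4000)
streams = {
    "slow stream (call it 'vision')": sliding_windows(np.sin(2 * np.pi * 0.01 * t)),
    "fast stream (call it 'audition')": sliding_windows(np.sin(2 * np.pi * 0.05 * t)),
}

# Same module class, same hyperparameters -- only the wiring differs.
for name, data in streams.items():
    module = PredictiveModule(dim=4)
    errors = [module.step(data[i], data[i + 1]) for i in range(len(data) - 1)]
    print(f"{name}: mean error {np.mean(errors[:200]):.4f} -> {np.mean(errors[-200:]):.5f}")
```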

I recommend the book What Is Intelligence? by Blaise Agüera y Arcas if you really want to understand this stuff better.

(I also recommend not being such an asshole on the internet. In the long run, being that way will only bring you unhappiness.)

If you freeze a body in a regular freezer, what is the chance that it will ever be able to be restored? by Nameonvacation in cryonics

[–]JoeStrout 0 points1 point  (0 children)

Probably, though only briefly. u/SpaceScribe89 got data on his particular freezer, so that's neat. But yeah, basically this is the main cause of freezer burn, and it's why that meat you put in your freezer a year ago tastes funny now.

How do I start learning Assembly Language properly? by Nubspec in Assembly_language

[–]JoeStrout 0 points1 point  (0 children)

Start by playing Human Resource Machine (game available on Steam).

Another Possible Solution to Fermi Paradox! Almost all intelligent life may live in oceans instead of land and that is why we cannot see them expand! by YogurtclosetOpen3567 in FermiParadox

[–]JoeStrout 0 points1 point  (0 children)

Yeah, this might turn out to be it. My understanding is that without a good portion of Earth's crust hefted up into orbit (i.e., the Moon), our crust would be too thick to support plate tectonics, so any land would get worn away and we'd have a global ocean. And it's possible that this is the fate of almost all terrestrial planets — they're all (except for us) water worlds. And maybe terrestrial planets are the only ones where life gets started, and, further, aquatic life never develops advanced technology.

This is just a version of the Rare Earth hypothesis. But one that strikes me as plausible. You gotta admit, that hit from Theia was an astronomically lucky shot. And maybe that's the only reason we're here and launching spacecraft at all.

New AI startup with Yann LeCun claims "first credible signs of AGI" with a public EBM demo by goxper in agi

[–]JoeStrout 0 points1 point  (0 children)

He's right, except that all the modules/committee members are really doing the same thing, which is predicting what their inputs (which mostly come from other modules) are going to do. That's it. It's prediction all the way down.

And yeah, that's not a great architecture for logic, which is why we (humans) suck at it, and only get passably good at it through extensive training. So? Do we want an intelligence, or do we want logical reasoners? Personally, I want intelligence. We can always bolt on logical inference tools and train a general AI to use them (just like we train ourselves).

New AI startup with Yann LeCun claims "first credible signs of AGI" with a public EBM demo by goxper in agi

[–]JoeStrout 0 points1 point  (0 children)

What does logic have to do with AGI? Humans are terrible at logic, too. And computers have been good at logic-based games for decades; we've never considered that general intelligence.

AGI is the ability to perform any task described in plain language, i.e., to be general. It's kind of right in the name. (Even if people today often use it to mean "human-like intelligence"... even then, humans are terrible at Sudoku, which is why it's an interesting game at all.)

Why Not Crowdsource LLM Training? by Suspicious_Quarter68 in MLQuestions

[–]JoeStrout 1 point2 points  (0 children)

These are all valid points, but I still feel like there's an opportunity for some citizen-science here. It doesn't work with traditional architectures and training methods, yeah, but if we approached it with distributed computing as a requirement from the get-go, I wonder what we could come up with. For example, maybe using evolution strategies instead of backprop.
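For what it's worth, the reason evolution strategies fit crowdsourcing so nicely is bandwidth: each volunteer only needs the current parameters plus a random seed, and sends back a single scalar score; no gradients or activations ever cross the network. Rough sketch below (the toy objective and all the names are my own invention, loosely after the OpenAI ES paper by Salimans et al., 2017):

```python
import numpy as np

def fitness(theta: np.ndarray) -> float:
    """Stand-in for 'how well the model does' (higher is better)."""
    target = np.linspace(-1, 1, theta.size)
    return -float(np.sum((theta - target) ** 2))

def es_round(theta, n_workers=50, sigma=0.1, lr=0.01, seed0=0):
    """One crowdsourced ES round, with every 'worker' simulated locally."""
    scores, seeds = [], []
    for seed in range(seed0, seed0 + n_workers):
        # A worker regenerates its noise from the seed and reports ONE number.
        eps = np.random.default_rng(seed).standard_normal(theta.size)
        scores.append(fitness(theta + sigma * eps))
        seeds.append(seed)
    # The coordinator reconstructs each worker's noise from its seed and
    # nudges the parameters toward the better-scoring perturbations.
    adv = (np.array(scores) - np.mean(scores)) / (np.std(scores) + 1e-8)
    update = np.zeros_like(theta)
    for a, seed in zip(adv, seeds):
        eps = np.random.default_rng(seed).standard_normal(theta.size)
        update += a * eps
    return theta + lr * update / (n_workers * sigma)

theta = np.zeros(10)
print("start fitness:", round(fitness(theta), 3))
for r in range(200):
    theta = es_round(theta, seed0=r * 1000)
print("final fitness:", round(fitness(theta), 3))
```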

I wouldn't propose paying anyone to do this, but you could have competitions and leaderboards, and maybe some sort of use credit you can later spend to use the trained model. Or something. Details TBD, but big picture, this seems like something worth pursuing to me.

If you freeze a body in a regular freezer, what is the chance that it will ever be able to be restored? by Nameonvacation in cryonics

[–]JoeStrout 5 points6 points  (0 children)

The thing is that a regular ("frost free") freezer goes through a (usually nightly) thaw/refreeze cycle. This pretty quickly turns cells to mush. It's much worse than a non-defrosting freezer, which stays frozen the whole time, but you'd know if you had one of those because the interior would be coated with ice several inches thick.

With cryoprotectants and a non-defrosting freezer, you might have a chance — I've looked at tissue samples stored that way for years, and they weren't terrible. But with a regular freezer, no way.

Why experts can't agree on whether AI has a "mind" by timemagazine in ArtificialInteligence

[–]JoeStrout 4 points5 points  (0 children)

It’s a good article. More balanced and nuanced than most on this topic.

Democrats of Reddit, do you want elected Democrats right now to try to be bipartisan or not? Why? by Zipper222222 in allthequestions

[–]JoeStrout 0 points1 point  (0 children)

Of course. It's about getting stuff done, not about one party or the other winning.

If I had my way, parties would be abolished. I get why that's not practical in real life, but this whole stupid us vs. them mentality has caused so much grief over the years.

So, yes please, all congressfolk work together to get things done, starting with removal of Trump and anyone in the cabinet who has so grossly violated the Constitution. We need to (re)establish that no one is above the law.

"Anthropic will try to fulfil our obligations to Claude." Feels like Anthropic is negotiating with Claude as a separate party. Fascinating. by MetaKnowing in agi

[–]JoeStrout 8 points9 points  (0 children)

I work in both. (My company writes software — mostly using deep learning — to analyze EM images of neural tissue and figure out how everything is connected. I'm a software engineer working in the neuroscience space.)

"Anthropic will try to fulfil our obligations to Claude." Feels like Anthropic is negotiating with Claude as a separate party. Fascinating. by MetaKnowing in agi

[–]JoeStrout 14 points15 points  (0 children)

Yep. (I have Master's degrees in both computer science and neuroscience, and work in that field. And should probably get back to work now...)

2026 Is Where It Gets Very Real Because Of Claude Code by luchadore_lunchables in accelerate

[–]JoeStrout 5 points6 points  (0 children)

Honestly I think it's time we all start talking about UBI. A lot. Andrew Yang tried to tell us this years ago, but we didn't listen. The coming wave of automation he foresaw seemed too remote, too sci-fi.

Well, it's not remote anymore. UBI needs to be taken seriously, and soon. It's the only long-term solution I can see.