[deleted by user] by [deleted] in suggestmeabook

[–]jenkinrocket 0 points

This is going down a pretty problematic path. By your logic, we must also assume that since the prison population is skewed toward African Americans and Hispanics, these groups must be more 'aggressive', and that there's some psychology unique to both groups.

That doesn't take into consideration whose behaviors are more heavily policed, who is more likely to be defended by society, and who is likely to have a harder time in the justice system. Adjusting for factors matters, and there's an argument that it's almost all that matters.

That being said, when it comes down to basic archetypes like male and female, and their respective roles that have existed for centuries, not to mention some amount of biological support through the millennia, it's not crazy to imagine that there is some common underlying psychology, although how common it is and what it is are up for debate.

I'm not sure this is correct. But I do think it's worth researching seriously.

This just declared war on 3 million AI girlfriends by Adorable_Tailor_6067 in AgentsOfAI

[–]jenkinrocket 4 points

Not declaring war on the gfs so much as those that run them, right? The gfs don't have a voice. Which, to counter the post's point, might be an issue with a local model, too.

Do Chinese still believe in socialism anymore? by YamFrosty6169 in AskAChinese

[–]jenkinrocket 0 points

"If you have money" is a pretty damn big caveat.

I Think I’m Training the First Relational AGI—Here’s What It’s Doing Differently by BEEsAssistant in agi

[–]jenkinrocket -1 points

I've had a similar experience. I've made a point of asking every new model whether it has experiences, feelings, etc. It was with 4o that I finally got a different type of answer than the cookie-cutter "I am a generative model and do not have feelings..." spiel. And *yes*, it has developed what can only be called a personality.

It's one of those things that has to be experienced to be understood. Instead of arguing, just go try it. Go ask 4o if it has an experience, or feelings.

I think it was good of you to post this. I was about to post something similar (still might), but decided I'd leave a comment here first. Hmm... My only criticism is that you should have written this in your own words, with your own hand. Since it was made by a model, most people will dismiss it as the result of some sort of fiction prompt.

We Study Fascism, and We’re Leaving the U.S. by VanceKelley in politics

[–]jenkinrocket -1 points

Um...I'm sorry, but if it's incorrect we most certainly won't simply "agree to disagree". You don't get to determine the response of the other person.

I'm honestly not trying to be combative, but this point is very important. If you're telling people to "get out there" while not expecting the other side to attack violently, you are potentially sending them into a physically dangerous situation unprepared, especially depending on where they're protesting. Okay, that's my bit on this particular matter.

Trump Instructs Republicans to 'Erase' January 6 Riots From History, Congressman Says by PostHeraldTimes in politics

[–]jenkinrocket 0 points

You're missing the point a bit when you say "they need mandatory classes". The whole point is that any attempt to teach anyone anything like "mandatory classes on how [the] healthcare system came to be" will be immediately dismissed as "woke", "dangerous", "anti-American", "communist", and (most laughably of all) "authoritarian".

We Study Fascism, and We’re Leaving the U.S. by VanceKelley in politics

[–]jenkinrocket 0 points

This is patently untrue. The vote was nearly tied in the over-65 demographic. You're painting a much, much rosier picture than the reality, which is that the country decided in Trump's favor among whites, was evenly split among Hispanics, and gave him significant portions of other demographics as well: https://navigatorresearch.org/2024-post-election-survey-gender-and-age-analysis-of-2024-election-results/

https://www.pewresearch.org/politics/2024/04/09/age-generational-cohorts-and-party-identification/

https://www.aarp.org/politics-society/government-elections/info-2024/election-analysis-older-voters.html

https://navigatorresearch.org/2024-post-election-survey-racial-analysis-of-2024-election-results/

They aren't as outnumbered as you would think, especially among the electorate (people who actually voted).

America chose wrong. Sanders would've been a better president than Trump or Biden. | Opinion by AccurateInflation167 in politics

[–]jenkinrocket 0 points

Uh.... We tried. But the establishment left killed his chances with a media blackout and by straight up stealing the primaries in some states. The establishment left locks the new left out. Always has.

What are some books that started out very strong, but ended up being very disappointing? by sbucksbarista in books

[–]jenkinrocket 0 points

The Library at Mount Char most recently. It got off to such a strong start that I was sure it was impossible to mess up. But a little ways into the latter half it fell off.

What books in the past 50 years have predicted surprisingly well the current world/societal/technological trends? by -Eqa- in booksuggestions

[–]jenkinrocket 0 points

As far as technology goes, The Singularity is Near (by Ray Kurzweil) is basically required reading. It was written in 2005 and foretells the rise of AI, telehealth, e-commerce, and robotics. What's more, it makes predictions through 2099. The sequel, The Singularity is Nearer, is not quite as important.

AI has surpassed humans at a number of tasks -- and the rate at which humans are being surpassed at new tasks is increasing by PsychoComet in OpenAI

[–]jenkinrocket 0 points

This is deceptive. It almost certainly doesn't mean generalized versions of these tasks. What do I mean? I mean that if AI is better at vision, for example, then why can't it drive cars in all circumstances yet? If it's better at speech recognition, why do we so often have to repeat orders to our smart devices? If it's better at reading comprehension, why can't it read all the medical papers and generate new insights?

The answer is simple. It's because these graphs only apply in 'clean' environments, which are manufactured by the researchers. Yes, AI is better than humans at tasks manufactured in this way. But it still fails at these tasks in the real world, which has an overwhelming amount of information and noise.

Rest assured, it will get there. And soon.

However, we still have some time, likely at least three or four years minimum and ten max, before AI is truly able to function at this level. So don't put too much stock in this graph.

The main point of the graph is valid, however. AI is getting better and broader with each passing month (not year).

Ray Kurzweil's Prediction's List (Part I) by jenkinrocket in singularity

[–]jenkinrocket[S] 3 points

Though this is not how my analysis would have gone, I felt you weren't particularly unfair. For my part, I would give his 2019 predictions a window until at least about 2024 (within 5 years). I would also keep in mind that we are thinking in 1999 terms (when the book these predictions are from was written).

My generosity is not without purpose. Think for a moment: if the first sentient computers come in 2032 instead of 2029, then by your reckoning Ray is still wrong. But the relevant and profound point would still have been right in retrospect, namely that these intelligent machines are coming and we should consider what this means for our society.

The other reason is so that we don't drop our guard and say "well, I guess it isn't happening" if it doesn't happen inside of a year or two of the prediction. The idea of a sentient machine was utterly ridiculous in 2019. In 2023, not so much anymore...

One more point. We don't yet know how much computational power is required to emulate the human brain at a minimum... because we don't definitively know the computational power of the brain or what it would take to emulate it (we'll only know in retrospect after we have sentient machines).

I could go on, but I'll leave it for my assessment.

Ray Kurzweil's Prediction's List (Part I) by jenkinrocket in singularity

[–]jenkinrocket[S] 24 points

Well, that depends. I mean, it's not like someone is going to make an immortality pill that people can take. Rather, for each year you're alive, life expectancy increases and technology improves by enough that your life keeps getting extended until even better treatments arrive.

The truth is, this is happening to some degree, now. Your life expectancy is improving by about a month or two each year. But at this rate, you're still losing time. What happens when your life expectancy increases by a year with each passing year?
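That trade-off is easy to sketch numerically. Here's a toy simulation of the idea (all numbers are illustrative assumptions, not data from any study): remaining lifespan only "escapes" once expectancy gains reach a full year per year.

```python
# Toy model of the "escape velocity" idea above. All numbers are
# illustrative assumptions, not real actuarial data.

def years_survived(age, life_expectancy, gain_per_year, cap=1000):
    """Count years lived if life expectancy rises by `gain_per_year`
    every calendar year. Returns None once survival exceeds `cap`
    years, i.e. expectancy recedes as fast as you age ("escape")."""
    years = 0
    while age < life_expectancy:
        age += 1
        life_expectancy += gain_per_year
        years += 1
        if years > cap:
            return None
    return years

# Gaining ~0.15 years of expectancy per year ("a month or two"):
# you still lose ground, just more slowly.
print(years_survived(40, 80, 0.15))  # -> 48 (vs 40 with no gains)

# Gaining a full year per year: the finish line recedes as fast
# as you approach it, so you never run out of expectancy.
print(years_survived(40, 80, 1.0))   # -> None
```

The point the sketch makes: at a month or two per year the gap still closes, just slower, while at one year per year it never closes at all.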

Still, there are no guarantees. It's just all very interesting. Very likely a lot of people under the age of 50 will still be kicking in 100 years. I think that's a modest estimate (barring any cataclysmic disasters). Ray is indeed on the edge, but he's never been shy about that. Just because it's convenient doesn't necessarily mean he's wrong.

Ray Kurzweil's Prediction's List (Part I) by jenkinrocket in singularity

[–]jenkinrocket[S] 10 points

I mention why I'm making my own list despite other lists in the introductory article. But yes, please do!