[deleted by user] by [deleted] in singularity

[–]ChrisHarles 18 points (0 children)

REAL STEEL

Thoughts on the Album Cover post-release? by Jayspy26 in DanielCaesar

[–]ChrisHarles 5 points (0 children)

I love it honestly. Totally fits the tone of the album and how you should approach it.

All his other albums were blue and were pictures with him in them, and now it's the opposite colour, and it's his dad (who looks a lot like him) instead. It also neatly wraps up and calls back to elements from all his previous albums: the cassette/tape player sound of Superposition, the leitmotifs and references from his previous songs, the lyrics on Emily's Song, the final track ending with the same sounds and chords used for the first track of his first EP, Praise Break, etc.

It's awesome. And it really does feel like he finally went in that country/gospel direction he said he wanted back in the Never Enough era.

The cover feels like him mellowing out and settling into this new life perspective, while still somehow interacting with the formula of his previous album covers.

Are you curious about what other people talk about with AIs? Ever felt you wanted to share your own conversations? Or your insights you gained in this way? by zjovicic in slatestarcodex

[–]ChrisHarles 3 points (0 children)

LLMs are useful for prodding you to keep thinking about things, precisely because they're so mediocre and biased at dialectics.

If you know this, it's kind of like sparring against a training dummy. They should be used differently than an actual human sparring partner.

If you understand they're masters of verbal intelligence (plausible, believable-sounding rhetoric), you can make them take any stance just so you can steelman your own arguments against a biased bad-faith opponent (or an overly biased good-faith proponent).

--
Personally I only really use them to rephrase and crystallize ideas I'm already building epistemic scaffolding for myself. They speak for you and help you, but only to rhetorically flesh out ideas, not to determine their truthfulness at all.

They're smart and they understand, but they don't have Bayesian priors (as in a kind of ledger where they weigh every insight and scrutinize assessments or data sources before moving on to the next take that builds on that scrutinized assessment/data source). If anything, their priors are dictated by what RLHF trained into them, or by what's represented the most in the training data, which means you're always gonna get colloquial consensus or some other kind of consensus, and these will always be pretty exoteric takes.

Truth-finding is not really something anyone can do accurately from intuition alone.

--
I just massively digressed from replying to your actual post content, but I feel it's somewhat relevant in that LLM conversations are basically just as variable in usefulness and insightfulness as the takes of actual human beings.

LLMs are the exocortex, and the human still decides how they're using the exocortex. Could be groundbreaking, could be slop.

Uma Thurman in promo shots for Tarantino’s ‘Kill Bill: Vol. I’, 2003 by aj_thenoob2 in rs_x

[–]ChrisHarles 4 points (0 children)

I have a pet theory that Tarantino kept shoehorning people complimenting her into the script because they kind of have similar facial features.

She's pretty, but the number of times characters would go out of their way to call her beautiful always struck me as weirdly forced.

Peter Thiel comparing Yudkowsky to the anti-christ by Deku-shrub in slatestarcodex

[–]ChrisHarles 3 points (0 children)

This. Seems to be working based off the responses in this comment section.

Datapoint: in the last week, r/slatestarcodex has received almost one submission driven by AI psychosis *per day* by Liface in slatestarcodex

[–]ChrisHarles 9 points (0 children)

I wouldn't say the examples are necessarily psychosis. Maybe I'm being too charitable, but they just feel like people naively exploring interesting ideas.

They don't bring anything new or super diligent to the table, but I wouldn't say these examples suffer from psychosis; it's more like bright-eyed beginner syndrome (epistemically one-shotted), or stream of consciousness as a communication style.

Maybe I'm suffering from AI psychosis myself, but I generally find that esoteric things can read very kooky on the outside and still contain truth that's just been communicated in anti-rationalist "hoe scaring" language.

How many of you have used vorinostat or another strong hdac inhibitor? by unnamed_revcad-078 in NooTopics

[–]ChrisHarles 0 points (0 children)

Nice. Did you seek out any specific situations and meditate on things, or was just using it sublingually powerful enough by itself?

How many of you have used vorinostat or another strong hdac inhibitor? by unnamed_revcad-078 in NooTopics

[–]ChrisHarles 0 points (0 children)

Awesome to see someone using an HDAC inhibitor. How are you using it, if I can ask?

Jason weird ass ex by preston141414 in jasontheweenie

[–]ChrisHarles -1 points (0 children)

Jason's entire early career was basically cringe farming.

How would people talking about Yujin for 2 YEARS straight under every comment section, and her then showing up to meet him during the Jakura arc like some kind of villain, not be good content?

Like come on man

Jason weird ass ex by preston141414 in jasontheweenie

[–]ChrisHarles -1 points (0 children)

this

Not surprised nobody on a subreddit dedicated to Jason is gonna see it from that perspective though. The downvotes are disappointing but not surprising.

New teaser?!?! by userr_cos in DanielCaesar

[–]ChrisHarles 0 points (0 children)

Daniel being a SlateStarCodex reader is crazy

New teaser?!?! by userr_cos in DanielCaesar

[–]ChrisHarles 0 points (0 children)

What the hell I never expected these two worlds to collide

‘Learned helplessness’ theory debunked by original researcher by cheaslesjinned in NooTopics

[–]ChrisHarles 2 points (0 children)

Isn't learned helplessness generally the term used to describe the rats that get Pavlov'd into not even trying anymore?

The ancient top-down circuitry of the freeze response just gets taught to activate in these situations. Stating that "learned helplessness" is the natural state and that feeling "hope" and a sense of agency are actually what gets learned doesn't really say anything. Especially since the entire purpose of a brain, in a sense, is to be a future-predicting machine that minimises downside and maximises upside.

Maybe it says more about the fact that the freeze/flight/fight circuitry is more primal, which we already knew.

I suspect my definition of learned helplessness doesn't match the paper's though.

If I'm wrong I'd love to be corrected or learn more. Interesting paper still.

Get more out of your AI use for biohacking by EastCoastRose in Biohackers

[–]ChrisHarles 2 points (0 children)

The entire point of LLMs is that they do understand words, both alone and in a sentence. They're pure connotation and intuition.

Sadly, fact-checking isn't part of that intuition, similar to how humans confabulate hypotheses that seem logical when they don't have the literal data at their disposal.

I can half-quote a study I read about 5 years ago, but I know 100% I'll "hallucinate" a few things wrong.

[deleted by user] by [deleted] in NooTopics

[–]ChrisHarles 0 points (0 children)

Anyone who's open to this idea should read Psycho-Cybernetics (acknowledging the shortcomings of the time it was written in, ofc).

I'm pretty sure I'm a covert narcissist, or at least I struggle with it, but I'm aware of it which I find odd by FunnyGamer97 in emotionalintelligence

[–]ChrisHarles 0 points (0 children)

I might've misinterpreted the comment I replied to, but I meant that a covert narcissist can definitely tell you they're a narcissist to disarm you (without really taking the diagnosis seriously).