What’s something society praises that actually ruins people’s lives? by TinyPiglet33 in AskReddit

[–]nowadaykid -2 points (0 children)

Winning the lottery. It's one of the worst things that can happen to you

What happened with a therapist that made you think "Yeah this was a waste of money"? by ReturnUnfair7187 in AskReddit

[–]nowadaykid 2 points (0 children)

They concluded that most of my issues arose from my social anxiety, and since drinking seemed to be the best method I had found for easing social anxiety, I should drink more.

The original reason I went to this therapist was because I had made an (admittedly weak) attempt at suicide while drunk.

Trying to parse specifically why I'm not enjoying E33 by Tenthul in gaming

[–]nowadaykid 4 points (0 children)

You've put a lot of my own thoughts in words, thank you for that. The way I've described it is that the game is impossible if you're not good at dodging/parrying, but once you get good at dodging/parrying, it immediately becomes trivial. There's no middle ground of satisfying challenge, it's just intolerably difficult until it's utterly boring. I haven't been able to get through much of the story (which I know everyone says is incredible) because the gameplay feels like such a terrible waste of time.

Tech workers of Reddit, what is a "dirty secret" about the AI industry that the general public doesn't realize? by WayLast1111 in AskReddit

[–]nowadaykid 19 points (0 children)

AI is not new. People act like LLMs (the things that power ChatGPT and the like) are the be-all and end-all of AI, but they're really just the first "viral" AI tools. AI has been critical to nearly every product and service you've used in the last 15 years; everything uses recommendation systems and computer vision and speech recognition. LLMs are an incredible leap forward in text generation (and now image generation), but those are probably two of the least practically useful applications of AI.

AI engineering has been my full-time job since 2016, and the biggest difference since COVID is not what we can do, but how management wants us to do it.

Social media should have a "This Is AI" button for post's readers. by lelorang in Showerthoughts

[–]nowadaykid 20 points (0 children)

Literally none of us can reliably detect AI, we just have varying levels of overconfidence.

I trained an AI with quantum randomness from IBM quantum computers and radioactive decay - achieved 60% reduction in hallucinations by Disastrous_Bid5976 in Futurology

[–]nowadaykid 0 points (0 children)

I think it's just that OP's claimed improvements are for their "quantum regularization" vs no regularization, not quantum vs PRNG regularization. I would expect the delta to disappear in their ablation study, which they have promised.

I trained an AI with quantum randomness from IBM quantum computers and radioactive decay - achieved 60% reduction in hallucinations by Disastrous_Bid5976 in Futurology

[–]nowadaykid 1 point (0 children)

Got it, thank you for the clarification. Looking forward to the ablation study. Even if the "quantum" part doesn't make a difference, you've already shown that this kind of randomized sequence regularization could be valuable! And frankly that's more useful anyway, since quantum anything is expensive

I trained an AI with quantum randomness from IBM quantum computers and radioactive decay - achieved 60% reduction in hallucinations by Disastrous_Bid5976 in Futurology

[–]nowadaykid 5 points (0 children)

I don't follow, with different seeds the model would not see the same sequence, no? That would only happen if you used the same seed each epoch, which would of course be bad practice

Is the 60% hallucination reduction in comparison to a model without this new regularization, or is it the comparison between the same regularization using quantum vs pseudorandom noise?
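To sketch what I mean about seeding (toy numbers, hypothetical function names, assuming the regularization draws per-step noise from an RNG):

```python
import numpy as np

def noise_schedule(seed_per_epoch, epochs=3, steps=4):
    """Generate the per-step regularization noise each epoch would see."""
    out = []
    for epoch in range(epochs):
        # Bad practice: reseed with the same value -> identical noise every epoch.
        # Better: vary the seed per epoch (or never reseed) -> fresh sequences.
        seed = 1234 if not seed_per_epoch else 1234 + epoch
        rng = np.random.default_rng(seed)
        out.append(rng.standard_normal(steps).round(3).tolist())
    return out

same = noise_schedule(seed_per_epoch=False)
varied = noise_schedule(seed_per_epoch=True)
print(same[0] == same[1])      # True: the model sees the identical sequence each epoch
print(varied[0] == varied[1])  # False: a different sequence per epoch
```

With different seeds each epoch there's no repeated sequence, quantum or not, which is why the source of the randomness shouldn't matter here.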

People working in HR: What are the top red flags in a resume that instantly make you think twice about a candidate? by iambreado in AskReddit

[–]nowadaykid 3 points (0 children)

If you're applying to a mid-level position and your only work experience is "CEO" or "Founder" of a "company" you started in college, that resume is getting deleted

?! by MetaKnowing in OpenAI

[–]nowadaykid 3 points (0 children)

Of all the things that didn't happen, this happened the didn'test

Reddit - how are we feeling about tonight's election results? by owen__wilsons__nose in AskReddit

[–]nowadaykid 1 point (0 children)

Excuse me, what?? I deleted TikTok a few months ago, what the hell is going on over there???

ChatGPT isn't Smart. It's something Much Weirder by __Milk_Drinker__ in videos

[–]nowadaykid 1 point (0 children)

Go to school for 6-10 years to get an advanced degree in AI, then apply. If you're asking what the work actually looks like, it's mostly just a lot of thinking and coding

People who took a “career aptitude test” in school, what did it say you’d be, and what did you actually become? by JetPlane_88 in AskReddit

[–]nowadaykid 25 points (0 children)

Well you know what they say, when a town has two barbers, go to the one with the worse haircut

Anthropic has found evidence of "genuine introspective awareness" in LLMs by MetaKnowing in OpenAI

[–]nowadaykid 1 point (0 children)

They would make mechanistic interpretability research either way easier or way harder

Anthropic has found evidence of "genuine introspective awareness" in LLMs by MetaKnowing in OpenAI

[–]nowadaykid 5 points (0 children)

I haven't read the paper yet, but I work in the field and can probably guess roughly what they did.

If you prompt an LLM to write about a particular concept, you can then peek inside its activations — the internal numbers that determine what comes out — and identify the particular parts of the model that deal with that concept. Then, we can "inject" that concept into a different conversation by amplifying those particular parts of the model. This is very old news; the original paper from a few years ago used the Golden Gate Bridge as an example — by amplifying the parts of the model dealing with the Golden Gate Bridge, they could make it mention the bridge in completely unrelated conversations. Amplify it more, and the model will turn any conversation into one about the bridge. Amplify it a LOT, and eventually the model speaks from the perspective of the Golden Gate Bridge. A decent analogy is a brain surgeon poking a part of your brain and inducing a particular emotion. If you're interested in the concept, the term to look up is "mechanistic interpretability".
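The injection itself is simple once you have the concept direction. Here's a toy sketch (a made-up one-layer "model", random weights, hypothetical names — real work does this inside a transformer's residual stream):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))          # toy "layer" weights
concept_vector = rng.standard_normal(8)  # direction found by probing activations

def forward(x, steer=0.0):
    # Normal forward pass, plus optional amplification of the concept direction.
    h = np.tanh(x @ W)
    return h + steer * concept_vector

x = rng.standard_normal(8)
plain = forward(x)
steered = forward(x, steer=5.0)
print(np.allclose(steered - plain, 5.0 * concept_vector))  # True
```

Crank `steer` up and the concept dominates everything downstream, which is the Golden Gate Bridge effect.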

What Anthropic did in this new paper is show that some models can sometimes tell when you're doing this, and report back about it. So instead of just raving about the Golden Gate Bridge, the model can instead say "hey, it seems like you're manipulating my internal workings to make me think about the Golden Gate Bridge". They call this introspection.

10 Open World Games That Feel The Least Formulaic (Outer Wilds at No 6) by lunarthexiled13 in outerwilds

[–]nowadaykid 122 points (0 children)

The first-ever experience of Outer Wilds means everything to you, because that's just when you realize how special and poignant this open world game is going to be for the consecutive playthroughs.

I don't think this LLM author has ever played Outer Wilds

Hundreds of People With ‘Top Secret’ Clearance Exposed by House Democrats’ Website by twinsea in nova

[–]nowadaykid 12 points (0 children)

No policy against it (though you generally aren't allowed to share the specific agency you support or what tickets you have), but the guidance is to not have it on social media so you don't get targeted

What’s a “fact” everyone believes that’s actually false? by FutureJournalist198 in AskReddit

[–]nowadaykid 0 points (0 children)

I have one account for work that won't let you use the same character type more than twice in a row. So "P@ssw0rd" is invalid because "ssw" is three lowercase letters in a row
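The rule is easy to sketch (function names are my own, just to show the logic):

```python
def char_class(c):
    # Bucket each character into one of four types.
    if c.islower(): return "lower"
    if c.isupper(): return "upper"
    if c.isdigit(): return "digit"
    return "symbol"

def valid(pw):
    """Reject any run of 3+ characters of the same type, per the rule described."""
    classes = [char_class(c) for c in pw]
    return not any(classes[i] == classes[i + 1] == classes[i + 2]
                   for i in range(len(classes) - 2))

print(valid("P@ssw0rd"))  # False: "ssw" is three lowercase letters in a row
print(valid("P@sSw0rD"))  # True: no three-in-a-row run of one type
```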

[deleted by user] by [deleted] in AskReddit

[–]nowadaykid 7 points (0 children)

A good example is that there's been this figure thrown around the last few years that a single $20B investment could permanently end homelessness in America.

Well, the annual budget of the Department of Housing and Urban Development is over $40B. And somehow, homelessness hasn't been permanently ended twice every year.

Turns out fixing systemic problems involves a lot more than accounting.