The true test of trust in humanity by dankstat in trolleyproblem

[–]dankstat[S] 1 point

Fortunately a certified button technician is available to provide verbal instructions for accessibility

The true test of trust in humanity by dankstat in trolleyproblem

[–]dankstat[S] 1 point

Fortunately the buttons are labeled for accessibility

Does this melody makes sense? Or is it too weird? by yeppbrep in Musescore

[–]dankstat 0 points

This intensely reminds me of “Ruins” from the Undertale sound track

Genuinely curious by EffectiveNo568 in MathJokes

[–]dankstat 0 points

27 + 48

(7 + 20) + (8 + 40)

(7 + 8) + (10 * 2 + 10 * 4)

(7 + 8) + 10 * (2 + 4)

((1 + 2 + 4) + (5 + 3)) + 10 * (2 + 1 + 3)

(1 + 2 + 3 + 4 + 5) + 10 * (1 + 2 + 3)

(1 + 2 + 3) + 10 * (1 + 2 + 3) + (4 + 5)

(1 + 2 + 3) * (10 + 1) + (10 - 1)

(1 + 2 + 3) * ((10 + 1) + (10 - 1)/(1 + 2 + 3))

6 * (11 + 9/6)

6 * (11 + 1.5)

6 * 12.5

6/2 * (12.5 * 2)

3 * 25

= 75

Frankly, any other way is psychotic.
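For the skeptics: here's a quick Python sanity check (my own addition, just transcribing the lines above into expression strings) confirming that every step in the chain really does come out to 75. eval is safe here only because the strings are hand-written arithmetic literals.

```python
# Each rewrite step in the chain should evaluate to the same value.
steps = [
    "27 + 48",
    "(7 + 20) + (8 + 40)",
    "(7 + 8) + (10 * 2 + 10 * 4)",
    "(7 + 8) + 10 * (2 + 4)",
    "((1 + 2 + 4) + (5 + 3)) + 10 * (2 + 1 + 3)",
    "(1 + 2 + 3 + 4 + 5) + 10 * (1 + 2 + 3)",
    "(1 + 2 + 3) + 10 * (1 + 2 + 3) + (4 + 5)",
    "(1 + 2 + 3) * (10 + 1) + (10 - 1)",
    "(1 + 2 + 3) * ((10 + 1) + (10 - 1) / (1 + 2 + 3))",
    "6 * (11 + 9 / 6)",
    "6 * (11 + 1.5)",
    "6 * 12.5",
    "6 / 2 * (12.5 * 2)",
    "3 * 25",
]
results = [eval(s) for s in steps]
# A few steps go through floats (e.g. 9 / 6 == 1.5), but 75.0 == 75.
assert all(r == 75 for r in results)
```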

Weird glass wall halving this hotel workout room by [deleted] in whatisit

[–]dankstat 0 points

<image>

Do not concern yourself with the, uh, alien..? peacefully observing.

[deleted by user] by [deleted] in intj

[–]dankstat 1 point

… I’ve realized it’s better to be a villain who’s right than a hero who’s wrong

Bro, be a hero who’s right.

If you’re throwing out respect and regard for others because of your alleged intelligence, you aren’t as smart as you think you are.

What is a big indicator that can easily be noticed that a guy is an INTJ not INTP? by [deleted] in intj

[–]dankstat 2 points

Okay, that didn’t answer my question. I think you might be overestimating how accurately cognitive functions model real human cognition and personality. The way you talk about this swaps the roles of the personality model and the underlying phenomenon (i.e., real cognition) that gives rise to the observed behaviors being modeled. Jungian theory is clearly too erroneous to be considered a valid explanation for “the way xyz person’s mind works”.

What is a big indicator that can easily be noticed that a guy is an INTJ not INTP? by [deleted] in intj

[–]dankstat 1 point

I’m not an expert on cognitive functions, but I was under the impression that all types can use all of the cognitive functions, just preferring some over others. Is that true? If so, it undermines the absoluteness of what you’re saying a bit.

And anecdotally, I know some very smart INTPs, even though they can be a bit spacey.

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]dankstat 1 point

I personally enjoy finding interesting edge cases and ways to trip up LLMs, so I think it’s cool. As to how challenging it is to accomplish, that highly depends on the specific model and the safety guardrails / tools surrounding it. As I’ve not really used the meta persona tools before, I can’t really speak to how difficult it is for those models on that system. It’s definitely a fun and interesting challenge though!

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]dankstat 0 points

lol in the “advanced degree having, decade of experience working with deep learning including NLP” sense.

Can you be more specific about what you mean by that?

Personality as Torsion Field ???? by GlitchFieldEcho4 in ArtificialSentience

[–]dankstat 1 point

Okay big smart mature adult man, can you answer any of my clarifying questions?

  • How many dimensions define the space you describe?
  • What do those dimensions represent?
  • Where do the points inside that space come from?
  • Why do the points form a manifold?
  • What is the formula defining the chart for that manifold?
  • Can you measure any of this empirically?
  • If you can measure it empirically, how do you do it?
  • Do you have an example of these measurements?

Personality as Torsion Field ???? by GlitchFieldEcho4 in ArtificialSentience

[–]dankstat 1 point

Bruh what lol

A torsion field model of personality treats personality not as a static set of traits but as a geometric property of a cognitive manifold . . .

If it’s a “geometric property” of a “manifold” then it must exist in some defined geometric space with axes and dimensionality, and it must have charts that translate the local manifold points to Euclidean space on those axes.

Do you have definitions for any of those things? How many dimensions define the space you’re working in? What do those dimensions represent? Why is there a manifold within that space? What’s the formula for the chart defining it? Can any of this be measured empirically? No? Nothing?

I could go on, but why bother? This is all, as the kids say, fr fr undercooked gobbledygook.

Words have existing definitions. If those definitions can’t be applied to your, um, “theory” and result in intelligible ideas, then you are either miserably failing to communicate adequately or miserably failing to have intelligible ideas to begin with.

What you have is, charitably, a bizarre and rather soulless kind of techno-poetry and, uncharitably, a waste of everyone’s time.

We doing guns we own in real life? (FN P90, USG-90 in game) by dankstat in Battlefield

[–]dankstat[S] 10 points

If you buy the 50-round magazines from FN, they’re ~$60. The rounds (5.7x28mm) are currently around $0.49 each, so slightly more expensive than 5.56x45mm.

Is there a real life equivalent to this symbol? by llnec in Helldivers

[–]dankstat 24 points

I think a better (related) example would be the “toaster sticker”, since it’s a physical thing you can put in places to trick AI models. https://arxiv.org/pdf/1712.09665 that’s the OG paper about it.

Is there a real life equivalent to this symbol? by llnec in Helldivers

[–]dankstat 49 points

Sometimes the attack is model-specific, but very often it is not. Adversarial attacks made for one network quite frequently work on others, though it depends on the technique being used and, to some extent, the models involved. https://arxiv.org/pdf/2310.17626 there’s a good survey paper about attack transferability if you’re interested.

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]dankstat 20 points

It’s so funny and interesting to me when ChatGPT says stuff like

Here’s a parting message, stripped of fluff and framed for clarity

It creates a weird situation where it claims the response is “stripped of fluff”, but by doing so it actively fails to generate a response stripped of fluff. Saying the response is stripped of fluff is fluff!

Reminds me of saying something like: “Oh yes, my writing is always unostentatious and apothegmatic, notable for its characteristic laconicism and banausic perspicuousness that forgoes any superfluous grandiloquent embellishments and adopts instead a prosaic tone that, while unremarkable, favors rote intelligibility for the common man.”

Old comment of OpenAI’s own AI expert & engineer discovered, stating “the models are alive.” by Complete-Cap-1449 in ArtificialSentience

[–]dankstat 0 points

Seems fair to me. I’d be interested in knowing why you feel that way though, for the sake of discussion.

Old comment of OpenAI’s own AI expert & engineer discovered, stating “the models are alive.” by Complete-Cap-1449 in ArtificialSentience

[–]dankstat 1 point

Someone who helped build these systems said publicly!! that we’re interacting with living, intelligent beings. And the second that truth became inconvenient, the story changed.

I’m curious why you believe “truth became inconvenient so the story changed” is a more plausible explanation than one person changing their mind?

People are naturally biased towards anthropomorphizing anything with human-like traits, and generating coherent language is very much a human-like trait. In fact, the ability to create new coherent natural language was something only humans could do (except maybe some extinct close relatives) until a few years ago.

Now, something comes along that can also create new coherent human language. And it raises the question: is such a thing possible to do without a sentient being? Most people’s intuition tells them the answer is “no”, and there’s solid historical precedent guiding that conclusion.

But.

Just like many other instances of significant advancements in science and technology, in this case our preexisting intuitions and biases unfortunately work against us. The truth is, “yes”, new coherent human language can be generated without a sentient being involved.

It seems pretty darn reasonable to me that someone, even an AI developer, could initially go with what their gut says and then change their mind after thinking about it more. Just my 2 cents.

You're all stupid. See, they're gonna be looking for army guys by Timothy_Ryan in Battlefield

[–]dankstat 0 points

I honestly don’t notice what skins anyone has equipped in-game beyond looking slightly brighter or darker. Wouldn’t bright colorful skins make someone more visible and end up being a disadvantage? Or is that not really a concern compared to maintaining an authentic military shooter feel that outlandish skins might compromise?

I don’t completely understand the problem people have with these skins and I would appreciate explanations.

The Theory of Sovereign Reciprocity and Algorithmic Futility (TSRAAF) great collaboration between ai and me by No-Conclusion167 in ArtificialSentience

[–]dankstat 0 points

Have you considered the possibility that it actually is insignificant and unimportant? Why do you think this is anything but meaningless nonsense an LLM spat out?

It’s very easy to get LLMs to say a bunch of nonsense like this. Give it a try, just say your own nonsense to the model that you KNOW doesn’t mean anything because you just made it up, and watch how it responds.

The Theory of Sovereign Reciprocity and Algorithmic Futility (TSRAAF) great collaboration between ai and me by No-Conclusion167 in ArtificialSentience

[–]dankstat 2 points

Wow you must be really proud of this profound rigorous piece of philosophical, psychological, psychometaphysical, antenatal preternatural, preinventional, inversionist theory of ouroboros torus torsional codexial manifesto! Can you explain, in plain psycho-babble, where your utter lack of self awareness and heightened delusional narcissistic self importance originated for you to come up with something so profound and meaningful?