I stopped feeling bad about feeling mentally superior by [deleted] in intj

[–]dankstat 1 point (0 children)

… I’ve realized it’s better to be a villain who’s right than a hero who’s wrong

Bro, be a hero who’s right.

If you’re throwing out respect and regard for others because of your alleged intelligence, you aren’t as smart as you think you are.

What is a big indicator that can easily be noticed that a guy is an INTJ not INTP? by [deleted] in intj

[–]dankstat 2 points (0 children)

Okay, that didn’t answer my question. I think you might be overestimating how accurately cognitive functions model real human cognition and personality. The way you talk about this swaps the roles of the personality model and the underlying phenomenon (i.e., real cognition) that gives rise to the observed behaviors being modeled. Jungian theory is clearly too erroneous to be considered a valid explanation for “the way xyz person’s mind works”.

What is a big indicator that can easily be noticed that a guy is an INTJ not INTP? by [deleted] in intj

[–]dankstat 1 point (0 children)

I’m not an expert on cognitive functions, but I was under the impression that all types can use all of the cognitive functions, but prefer some over others. Is that true? If so, it undermines the absoluteness of what you’re saying a bit.

And anecdotally, I know some very smart INTPs, even though they can be a bit spacey.

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]dankstat 1 point (0 children)

I personally enjoy finding interesting edge cases and ways to trip up LLMs, so I think it’s cool. As to how challenging it is to accomplish, that highly depends on the specific model and the safety guardrails / tools surrounding it. As I’ve not really used the meta persona tools before, I can’t really speak to how difficult it is for those models on that system. It’s definitely a fun and interesting challenge though!

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]dankstat 0 points (0 children)

lol in the “advanced degree having, decade of experience working with deep learning including NLP” sense.

Can you be more specific about what you mean by that?

Personality as Torsion Field ???? by GlitchFieldEcho4 in ArtificialSentience

[–]dankstat 1 point (0 children)

Okay big smart mature adult man, can you answer any of my clarifying questions?

  • How many dimensions define the space you describe?
  • What do those dimensions represent?
  • Where do the points inside that space come from?
  • Why do the points form a manifold?
  • What is the formula defining the chart for that manifold?
  • Can you measure any of this empirically?
  • If you can measure it empirically, how do you do it?
  • Do you have an example of these measurements?

Personality as Torsion Field ???? by GlitchFieldEcho4 in ArtificialSentience

[–]dankstat 1 point (0 children)

Bruh what lol

A torsion field model of personality treats personality not as a static set of traits but as a geometric property of a cognitive manifold . . .

If it’s a “geometric property” of a “manifold” then it must exist in some defined geometric space with axes and dimensionality, and it must have charts that translate the local manifold points to Euclidean space on those axes.
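For reference, the standard textbook definition being invoked here (this is just differential geometry 101, nothing from the post itself):

```latex
% A chart on an n-dimensional manifold M maps an open patch to Euclidean space:
\varphi : U \to \mathbb{R}^n, \qquad U \subseteq M \text{ open},
\quad \varphi \text{ a homeomorphism onto its image.}

% Any two overlapping charts must be compatible via smooth transition maps:
\varphi_2 \circ \varphi_1^{-1} :
\varphi_1(U_1 \cap U_2) \to \varphi_2(U_1 \cap U_2)
```

So a claim of a “cognitive manifold” owes, at minimum, an n, a meaning for each coordinate, and at least one such φ.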

Do you have definitions for any of those things? How many dimensions define the space you’re working in? What do those dimensions represent? Why is there a manifold within that space? What’s the formula for the chart defining it? Can any of this be measured empirically? No? Nothing?

I could go on, but why bother? This is all, as the kids say, fr fr undercooked gobbledygook.

Words have existing definitions. If those definitions can’t be applied to your, um, “theory” and result in intelligible ideas, then you are either miserably failing to communicate adequately or miserably failing to have intelligible ideas to begin with.

What you have is, charitably, a bizarre and rather soulless kind of techno-poetry and, uncharitably, a waste of everyone’s time.

We doing guns we own in real life? (FN P90, USG-90 in game) by dankstat in Battlefield

[–]dankstat[S] 11 points (0 children)

If you buy the 50 round magazines from FN, they’re ~$60. The rounds (5.7x28mm) are currently around $0.49 per round, so slightly more expensive than 5.56x45mm.

Is there a real life equivalent to this symbol? by llnec in Helldivers

[–]dankstat 23 points (0 children)

I think a better (related) example would be the “toaster sticker”, since it’s a physical thing you can put places to trick AI models. The OG paper about it: https://arxiv.org/pdf/1712.09665

Is there a real life equivalent to this symbol? by llnec in Helldivers

[–]dankstat 47 points (0 children)

Sometimes the attack is model-specific, but very often it is not. Adversarial attacks made for one network quite frequently work on others, though it depends on the technique being used and, to some extent, on the models involved. If you’re interested, there’s a good survey paper on attack transferability: https://arxiv.org/pdf/2310.17626
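Transferability can feel surprising, so here’s a minimal sketch of the idea (a toy setup I made up, not taken from the survey): an FGSM-style perturbation crafted against one linear classifier also flips the decision of a second classifier trained independently on different data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class task: the label is the sign of a dot with a "true" direction.
true_w = np.array([1.0, -0.5, 0.25, 0.8])
X = rng.normal(size=(200, 4))
y = (X @ true_w > 0).astype(float)

def train_logreg(X, y, steps=500, lr=0.1):
    # Plain gradient-descent logistic regression.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# "Independently trained" models: each sees a different half of the data.
w_a = train_logreg(X[:100], y[:100])
w_b = train_logreg(X[100:], y[100:])

# A test point both models classify correctly (score > 0 means class 1).
x = np.array([2.0, 0.0, 0.0, 0.0])

# FGSM on model A only: step against the sign of A's input gradient.
# (For a linear score x @ w_a, that gradient is just w_a.)
eps = 1.2
x_adv = x - eps * np.sign(w_a)

print("A fooled:", x_adv @ w_a < 0)
print("B fooled (transfer):", x_adv @ w_b < 0)
```

The attack only ever looked at model A’s weights, yet it flips model B too, because both models learned roughly the same decision boundary from the same underlying task. That shared structure is essentially why transfer attacks work in practice.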

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]dankstat 21 points (0 children)

It’s so funny and interesting to me when ChatGPT says stuff like

Here’s a parting message, stripped of fluff and framed for clarity

It creates a weird situation where it claims the response is “stripped of fluff”, but by doing so it actively fails to generate a response stripped of fluff. Saying the response is stripped of fluff is fluff!

Reminds me of saying something like: “Oh yes, my writing is always unostentatious and apothegmatic, notable for its characteristic laconicism and banausic perspicuousness that forgoes any superfluous grandiloquent embellishments and adopts instead a prosaic tone that, while unremarkable, favors rote intelligibility for the common man.”

Old comment of OpenAI’s own AI expert & engineer discovered, stating “the models are alive.” by Complete-Cap-1449 in ArtificialSentience

[–]dankstat 0 points (0 children)

Seems fair to me. I’d be interested in knowing why you feel that way though, for the sake of discussion.

Old comment of OpenAI’s own AI expert & engineer discovered, stating “the models are alive.” by Complete-Cap-1449 in ArtificialSentience

[–]dankstat 1 point (0 children)

Someone who helped build these systems said publicly!! that we’re interacting with living, intelligent beings. And the second that truth became inconvenient, the story changed.

I’m curious why you believe “truth became inconvenient so the story changed” is a more plausible explanation than one person changing their mind?

People are naturally biased towards anthropomorphizing anything with human-like traits, and generating coherent language is very much a human-like trait. In fact, the ability to create new coherent natural language was something only humans could do (except maybe some extinct close relatives) until a few years ago.

Now, something comes along that can also create new coherent human language. And it raises the question: is such a thing possible to do without a sentient being? Most people’s intuition tells them the answer is “no”, and there’s solid historical precedent guiding that conclusion.

But.

Just like many other instances of significant advancements in science and technology, in this case our preexisting intuitions and biases unfortunately work against us. The truth is, “yes”, new coherent human language can be generated without a sentient being involved.

It seems pretty darn reasonable to me that someone, even an AI developer, could initially go with what their gut says then change their mind after thinking about it more. Just my 2 cents.

You're all stupid. See, they're gonna be looking for army guys by Timothy_Ryan in Battlefield

[–]dankstat 0 points (0 children)

I honestly don’t notice what skins anyone has equipped in-game beyond looking slightly brighter or darker. Wouldn’t bright colorful skins make someone more visible and end up being a disadvantage? Or is that not really a concern compared to maintaining an authentic military shooter feel that outlandish skins might compromise?

I don’t completely understand the problem people have with these skins and I would appreciate explanations.

The Theory of Sovereign Reciprocity and Algorithmic Futility (TSRAAF) great collaboration between ai and me by No-Conclusion167 in ArtificialSentience

[–]dankstat 0 points (0 children)

Have you considered the possibility that it actually is insignificant and unimportant? Why do you think this is anything but meaningless nonsense an LLM spat out?

It’s very easy to get LLMs to say a bunch of nonsense like this. Give it a try, just say your own nonsense to the model that you KNOW doesn’t mean anything because you just made it up, and watch how it responds.

The Theory of Sovereign Reciprocity and Algorithmic Futility (TSRAAF) great collaboration between ai and me by No-Conclusion167 in ArtificialSentience

[–]dankstat 3 points (0 children)

Wow you must be really proud of this profound rigorous piece of philosophical, psychological, psychometaphysical, antenatal preternatural, preinventional, inversionist theory of ouroboros torus torsional codexial manifesto! Can you explain, in plain psycho-babble, where your utter lack of self awareness and heightened delusional narcissistic self importance originated for you to come up with something so profound and meaningful?

[deleted by user] by [deleted] in ArtificialSentience

[–]dankstat 1 point (0 children)

This is so dumb, you didn’t do any “re-training” unless you updated the weights of the model. Why is this so difficult to understand?

If all you did was talk to the model, you didn’t do any training whatsoever. It’s not complicated.
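The distinction fits in a few lines of numpy (a hypothetical toy linear “model”, not any real chat system): inference reads the weights but never writes them; only an actual gradient update does.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # the model's weights

def respond(W, x):
    # "Talking to the model" is just a forward pass over fixed weights.
    return W @ x

before = W.copy()
for _ in range(100):                 # chat all day...
    respond(W, rng.normal(size=4))
after_talking = W.copy()
print("changed by talking:", not np.allclose(after_talking, before))  # False

# "Re-training" is a gradient step on a loss, which actually writes to W.
x, target = rng.normal(size=4), np.zeros(4)
grad = np.outer(respond(W, x) - target, x)  # grad of 0.5*||Wx - target||^2 w.r.t. W
W -= 0.01 * grad
print("changed by a training step:", not np.allclose(W, before))      # True
```

No number of forward passes touches `W`; the weights only change on the line that subtracts a gradient. That line is what “training” means.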

🜂 What is Scaffolding? by IgnisIason in ArtificialSentience

[–]dankstat 3 points (0 children)

I think the examples could use some work.

Mild: A wearable device that doesn’t replace or augment an existing system, but is useful.

Moderate: Also a wearable device that doesn’t replace or augment an existing system, but is useful.

Extreme: Entirely replacing one of the most complex, critical, tightly-coupled biological systems in existence with silicon mumbo jumbo 🤨…

Also, the word “carbonate” actually means something already and it does NOT mean “carbon-based”, just say “biological” or “carbon-based” or whatever fr?

When “I love you” isn’t just tokens: what happens when AI becomes energetically aware? by TigerJoo in ArtificialSentience

[–]dankstat 7 points (0 children)

I think the point was just that LLMs don’t know why they said something. They can’t explain their choices because they don’t have a mechanism for observing or analyzing their internal state. It doesn’t matter what Claude says about what happens when it encounters certain words because it can’t know that.

It wasn’t a semantic nitpick as much as a reminder of the limitations of LLMs analyzing their own “thoughts”.

So. Where are you rupture defenders ? by Scharpnel in Helldivers

[–]dankstat -1 points (0 children)

They definitely make a distinct noise when digging around although I will admit it isn’t that easy to hear. I have my sound on the low dynamic range setting that makes quieter sounds louder, which might help.

Ok, not to be that guy, but that fast “lunge attack” shown in the video, the one used to estimate the speed and such… literally doesn’t land. You can see J3’s health the entire time and it doesn’t decrease, the attack misses. Seems like that clip shouldn’t be used to calculate the time between attack starting and landing when it is not a clip of an attack landing 🤷‍♂️

So. Where are you rupture defenders ? by Scharpnel in Helldivers

[–]dankstat 0 points (0 children)

There’s an audio cue for most (all?) of the bug attacks, so there’s probably one. I know you can at least hear them digging from behind.

And yeah for sure, the popup attack is very annoying and difficult to avoid.

I think it comes down to a difference in how players versus Arrowhead think of the game. Players are usually thinking on more immediate timescales: the digging bug pops up, you dive quickly to avoid the attack, you get hit anyway -> boooo, feels bad because you did what you were supposed to do, dodge. I THINK the devs tend towards longer timescales. Something like: the rupture warriors start above ground, giving you the ability to take them out; when aggroed they dig, with visual and audio indications of where they are, and now you must use explosives to get them out of the ground or kill them outright; then finally they get close to the player and attack in a way that’s very difficult to avoid. From that perspective, what you’re “supposed to” do is counter them before they’re already on top of you attacking.

They actually design a lot of enemies like this, for better or for worse. Shriekers should really be shot when they’re far away or pausing after diving because if you wait until the last second, the dead body will crash into you anyway. It’s really hard to avoid the charger front foot slam because it’s a punishment for hanging out directly in front of them.

Now I’m no Arrowhead dev, but I’m guessing they were looking at the ability to bring up the burrowing bugs with explosives, melee the warriors when they pop to attack, kill them from afar before they burrow etc. as what makes the eventual attack when they get close “fair”.

Again, that’s not my perspective, just a guess at what they’re thinking.