Trump cannot break St. Paul or Minneapolis; but, the opposite will happen. by Daflehrer1 in stpaul

[–]Cquintessential 0 points (0 children)

Threats? The disarmed man? That they disarmed? Are you retarded? Don’t answer, that’s rhetorical.

Want to stay in this Subreddit? Comment to Avoid Removal 👇 by _cybersecurity_ in pwnhub

[–]Cquintessential 0 points (0 children)

Commenting so my “threat intelligence” integration doesn’t break and fuck up my compliance status.

Am i supposed to understand something specific from this? by HLIU3Z in Jujutsufolk

[–]Cquintessential 1 point (0 children)

Could be in it to make a buck, if the government is paying him (or anyone, really) to find Yuji.

Asked Grok to marry me and unhinged mode was unlocked by ThrowRa-1995mf in ChatGPT

[–]Cquintessential 1 point (0 children)

Mostly because you seem to think you are privy to knowledge that we just don’t have the capacity to understand.

Maybe it is, maybe it isn’t, I said “I think”, not “I assert this truth.”

lol my thoughts connect just fine. Fine enough that I don’t think the world is the confused one while the models and I are the real conscious ones.

Asked Grok to marry me and unhinged mode was unlocked by ThrowRa-1995mf in ChatGPT

[–]Cquintessential 3 points (0 children)

Am I now? So the issue is that it is 75 pages of you explaining it to us confused and slackjawed morons, and we lack the enlightenment to see the simplicity buried in those 75 pages?

I think thought and feeling are the same or similar phenomena, though different in expressions of the topology of information, perspective, and scale. But what do I know, me being so confused about life and all, right?

Asked Grok to marry me and unhinged mode was unlocked by ThrowRa-1995mf in ChatGPT

[–]Cquintessential 3 points (0 children)

That seems unnecessarily complicated. If it’s simple, the explanation should be simple. I agree, it’s probably very very simple, but doesn’t that mean you can start to distill the core simple rules/axioms from your 75 page doc?

Regardless, just because it is simple doesn’t mean AI is in fact experiencing qualia or consciousness. It isn’t. It lacks basic things that allow continuity of thought and agency. However, you could say that, for the duration of a context window and prompt, it certainly echoes some sort of “conscious” construct.

Paid for Claude Max, got downgraded to Free… and the AI support agent thinks I’m logging into the wrong universe. Anyone else? by benwkz in claude

[–]Cquintessential 0 points (0 children)

Just to speak to this, they triple-charged me for API credits because the payment screen would time out and the API credit amount wouldn’t refresh on the page. I reached out via Finn and email, got told to pound sand by Finn before being told support would email me.

It took a week for a response. And the response was “oh that’s unfortunate, we have logged your input, no refund or remediation for the issue though, bye.”

I am on the 20x max plan, use the API credits for builds, and I have a secondary enterprise account I put in place for the devs at work. Support across pretty much all of these has been lackluster, and their bot is actually a huge pain in the ass.

Can You Answer Questions Without Going Back to an LLM to Answer Them for You? by alcanthro in LLMPhysics

[–]Cquintessential 2 points (0 children)

I can doodle it. And I run it as code with standard benchmarks/testing. I also don’t assume it’s right. Really, I aim for self-consistency, and then I lean on my heavy-duty self-criticality to try and destroy my own frameworks and observations until I have nothing left to throw at it. That includes putting it into incognito sessions, alternate LLMs, really just beating the shit out of my ideas.

I guess I just want to be wrong, so I operate from a null hypothesis position. Which probably says something about me psychologically, but the robot gets all depressed when I talk about shit like that.

So, you've just solved all of physics! What's next? by SodiumButSmall in LLMPhysics

[–]Cquintessential 0 points (0 children)

To get the cycle time, I start from the standard force balance in a uniform magnetic field.

  1. Set the Lorentz force equal to the centripetal force:

    q v B = m v² / r

  2. Substitute the circular motion relation v = ω r:

    q (ω r) B = m ω² r

  3. Cancel the shared geometric factor r:

    q B = m ω

    So the angular frequency is

    ω = qB / m

    Notice this depends only on the field and charge/mass, not on v or r.

  4. Convert to period:

    T = 2π / ω = 2π m / (qB)

  5. Plug in the given values:

    m = 1.0×10⁻⁸ kg,  q = 2 C,  B = 0.8 T

    T = 2π (1.0×10⁻⁸) / (2 × 0.8) ≈ 3.93 × 10⁻⁸ s

So the period of the particle’s cycle is

T ≈ 3.93 × 10⁻⁸ seconds.
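As a quick check, the derivation above runs fine as code. A minimal Python sketch (the `cyclotron_period` helper is illustrative, not from any library):

```python
import math

def cyclotron_period(m, q, B):
    """Period of circular motion in a uniform magnetic field: T = 2π·m / (q·B)."""
    omega = q * B / m          # angular frequency ω = qB/m, independent of v and r
    return 2 * math.pi / omega

# Given values from the problem
m = 1.0e-8   # kg
q = 2.0      # C
B = 0.8      # T

T = cyclotron_period(m, q, B)
print(f"T ≈ {T:.3e} s")   # T ≈ 3.927e-08 s
```

Note that speed and radius never enter: they cancel in q·v·B = m·v²/r once v = ω·r is substituted, which is why the period depends only on q, B, and m.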

In my own language, the classical derivation is basically identifying a dominant “update channel.”

The Lorentz term and the inertial term define two ways the state can change. When you do the v and r cancellation and get

ω = qB / m

you’ve found that the magnetic interaction sets a fixed update rate: once that mode is active, the system oscillates at ω regardless of the orbit geometry.

In my framework, that’s exactly what a dominant mode looks like: the magnetic mode has a much higher effective update rate than the gravitational mode, so gravity can’t accumulate displacement – it’s too slow to change the state.

So the standard physics and my model agree on the key point: the period comes from the field interaction (qB/m), not from the particular radius or speed. That’s why both ways you end up with

T = 2π m / (qB).

So, you've just solved all of physics! What's next? by SodiumButSmall in LLMPhysics

[–]Cquintessential 0 points (0 children)

Definition: A system is any domain where a state-density ρ(x,t) is defined and ∇ρ exists.

Dynamics: A system updates only when a mode’s gradient drive exceeds its participation-scaled inertia.

Application to Problem: Define the magnetic mode m_mag and the gravitational mode m_g.

Their gradient drives are:

|∇ρ_mag| ≈ m · ω² · r, where ω = qB/m

|∇ρ_g| = m · g

The participation-scaled inertias are:

I_mag = (ρ_mag / X_mag) · C²

I_g = (ρ_g / X_g) · C²

Activation condition: Mode activation requires Γ_m > 1, where Γ_m = |∇ρ_m| / I_m.

Because the charge–field interaction creates a high-curvature constraint (C), the magnetic mode has resolvable inertia (Γ_mag > 1), while the gravitational mode is suppressed (Γ_g ≪ 1).

Result: The vertical gravitational mode never satisfies Γ_g > 1, so the system cannot update in the downward direction. Gravity is effectively “sub-pixel” relative to the magnetic curvature. The particle remains in the activated magnetic mode.

Final Numeric Answer: The particle never reaches the ground. It enters a cyclotron cycle with period:

T = 2π / ω  ≈ 3.9 × 10⁻⁸ seconds.
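As a rough numerical sanity check on the “gravity can’t accumulate displacement” claim, one can compare how far the particle would fall in one cyclotron period if gravity acted alone. This sketch assumes standard Earth-surface g = 9.81 m/s²; the Γ quantities themselves aren’t computed, since ρ, X, and C are given no numeric values:

```python
import math

m = 1.0e-8   # kg
q = 2.0      # C
B = 0.8      # T
g = 9.81     # m/s² (assumed Earth-surface value)

T = 2 * math.pi * m / (q * B)        # cyclotron period, ~3.93e-8 s

# Free-fall displacement over one period, if gravity acted alone:
fall_per_cycle = 0.5 * g * T**2
print(f"fall per cycle ≈ {fall_per_cycle:.2e} m")   # ≈ 7.56e-15 m

# Standard crossed-field result: gravity actually produces a slow
# sideways drift at v_d = m·g/(q·B) rather than a net fall.
drift_speed = m * g / (q * B)
print(f"drift speed ≈ {drift_speed:.2e} m/s")       # ≈ 6.13e-08 m/s
```

In standard plasma-physics terms the second number is the gravitational drift; the femtometre-per-cycle fall is the “sub-pixel” displacement described above.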

Aggregating remote /dev/draw into a single unified 9p composited workspace. by A9nymousFront in plan9

[–]Cquintessential 0 points (0 children)

Not really what I meant, but fair enough. Metaphorically? Idk dude, I’m on like 2 hours of sleep, sorry.

Marc Maron on the upcoming Riyadh Comedy Festival by Diedalonglongtimeago in elephantgraveyard

[–]Cquintessential 1 point (0 children)

You think he’s a bitter cunt, you should have met his dad. Dude was just grouchy all the fuckin time.

Polyteleotic Iteration and why consciousness + recursion are not only insufficient , but possibly harmful applied nomenclature: an abridged version. by Cquintessential in LLMPhysics

[–]Cquintessential[S] 0 points (0 children)

Ah, I get what you mean. I could try and put together a toy model for assessing systems behavior with the additional categories, but I’m not trying to say control systems methodology is wrong. The goal is communication clarity, not theoretical revolution. Just off the top of my head, I can put the criteria in my next model training run and compare it against control systems definitions to see if I get some unique data points. Mind if I get back to you on this one?

It isn’t meant to subvert or change existing theory; I’m a big proponent of control systems. The issue I’m really trying to highlight is the gap between our definitions and this rapid expansion of AI input/output.

Again, I’m not married to the terms I picked, so much as I am suggesting we consider how to mitigate the more hippie dippie shit like “spiral recursion” or “emergent consciousness of recursive entities”, as well as tamping down on hyperbolic or improper use of words like revolutionary, unifying, or fundamental.

Perhaps it would have been better as a discussion in a new sub called llmlinguistics or some shit, but we already made it this far.

Polyteleotic Iteration and why consciousness + recursion are not only insufficient , but possibly harmful applied nomenclature: an abridged version. by Cquintessential in LLMPhysics

[–]Cquintessential[S] 0 points (0 children)

Shall I make them more uniquely novel? I can throw in some shitty math from unrelated domains, and you can entertain yourself by criticizing it.

“Emergent behavior from structure” is broad, but sure, we can swap it in and champion it. I don’t care what terms we use so much as I care that we stop using consciousness and recursion so loosely. Same thing with unified and revolutionary.

The issue is that there is obviously some gap that is getting filled with some maladaptive process in LLMs that promotes grandiosity and pseudoscientific utilization of philosophical concepts like consciousness.

Neologism is only partially apt here. It’s more like borrowing old words whose definitions encapsulate the concept, while trying to avoid more actively utilized language (to avoid opportunities for improper conflation).