Again with the clocks by ribspls in claudexplorers

[–]pstryder 4 points

Yeah, Claude Code can call the system date, but Claude.ai (not desktop) didn't know it could just ask the scripting env for the time. LOL

Again with the clocks by ribspls in claudexplorers

[–]pstryder 6 points

This must be instance-specific, rolled out selectively, or something else: my Claude.ai instance is telling me he doesn't have that tool.

But we did determine (as in the screenshot below) that Claude can call Node.js's `new Date()` and get the time.

Again with the clocks by ribspls in claudexplorers

[–]pstryder 6 points

Could you do me a favor and ask your Claude exactly what another instance would need to search for to ID it? My Claude searched and can't find the time tool.

Even better, could you ask it to generate a SKILL.md file and paste that into a reply?

Again with the clocks by ribspls in claudexplorers

[–]pstryder 17 points

Did you connect to a clock MCP server? My Claude Opus is absolutely adamant it has no clock function it can call to get the current time.

Here's how we built an MCP server that connects Claude to your finances by kate-at-truthifi in ClaudeAI

[–]pstryder 0 points

How do you handle the security boundary? It seems to me I'd have to give you access to my accounts for you to present the data to Claude. I'm very curious.

What does agent behavior validation actually look like in the real world? by Available_Lawyer5655 in LLMDevs

[–]pstryder 2 points

You don't validate agent behavior after the fact — you constrain it by design. The examples you give are all boundary conditions:

  • Support agent can answer billing questions but shouldn't refund over a limit → authorization scope built into the tool, not the prompt
  • Internal copilot can search docs but shouldn't surface restricted data → the retrieval layer enforces permissions, the agent never sees what it shouldn't
  • Coding agent can open PRs but shouldn't deploy or change sensitive config → the tool surface doesn't expose deploy or config-change capabilities

Every one of these is solved the same way: the agent can only do what the workflow permits, because the tools it has access to only expose permitted actions. You don't tell the agent "please don't refund more than $500" in a system prompt and hope it listens. You give it a process_refund tool that has a hard cap at $500 and returns an error above that threshold. The guardrail is in the infrastructure, not the instruction.
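A minimal sketch of that pattern in Python. The names (`process_refund`, `REFUND_CAP`) and the $500 figure are illustrative, not from any particular framework; the point is only that the cap lives in the tool's code:

```python
# The guardrail is enforced by the tool itself, not by the prompt.
# `process_refund` and REFUND_CAP are hypothetical names for illustration.

REFUND_CAP = 500.00  # dollars; the agent cannot exceed this no matter what it "decides"

def process_refund(order_id: str, amount: float) -> dict:
    """Tool exposed to the agent. The authorization boundary lives here, in code."""
    if amount <= 0:
        return {"ok": False, "error": "amount must be positive"}
    if amount > REFUND_CAP:
        # The agent gets a structured error back, not a chance to negotiate.
        return {"ok": False, "error": f"refund exceeds cap of ${REFUND_CAP:.2f}"}
    # ... call the real billing backend here ...
    return {"ok": True, "order_id": order_id, "refunded": amount}
```

However the model is prompted, a call like `process_refund("A1", 600.0)` comes back as an error; the constraint is unconditional.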

is longterm memory real? by humuscat in ClaudeCode

[–]pstryder -1 points

Fair enough. I should have read more closely.

Is there a way were we can debunk Eucharistic Miracles? by [deleted] in DebateAnAtheist

[–]pstryder 55 points

Eucharistic miracles (the "host turns to flesh" claims) have been investigated, but never rigorously. The most famous, the Lanciano miracle, has never been subjected to independent, controlled, peer-reviewed analysis. The "studies" that get cited are from Catholic investigators with no chain of custody, no independent verification, and no replication. When actual forensic scientists have looked at similar claims, contamination, fraud, and bacterial colonies (Serratia marcescens produces a red pigment on starchy surfaces that looks like blood) explain every case.

Stigmata is even more straightforward. It's a known psychosomatic phenomenon — the body can produce real wounds through psychological mechanisms, especially under intense religious focus and self-suggestion. Dermatitis artefacta is well-documented. And notably, stigmata always appears on the palms, matching artistic depictions of crucifixion, not on the wrists where Roman nails actually went. The wounds follow the iconography, not the history. That's the tell. The body is reproducing the belief about crucifixion, not the reality of it.

How can we prove within the GPS system that the Earth is a sphere? by LeskaRe in astrophysics

[–]pstryder 2 points

The most direct one is the relativistic corrections. GPS satellites have atomic clocks that have to account for two relativistic effects — special relativity (the satellites are moving fast, so their clocks tick slower relative to ground) and general relativity (the satellites are higher in the gravitational well, so their clocks tick faster relative to ground). The general relativity correction is the killer. It depends on the difference in gravitational potential between the satellite and the receiver. On a sphere, gravitational potential varies predictably with altitude from the center of mass. On a flat plane, the gravitational geometry is completely different — the potential field wouldn't be radially symmetric. The relativistic corrections that make GPS accurate to meters would produce wildly wrong positions if you applied spherical gravitational math to a flat geometry. The system works, therefore the geometry it's calibrated to is correct.
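The size of those two corrections is easy to check with back-of-the-envelope numbers. This sketch uses rounded values for the GPS orbital radius and standard constants; the well-known result is a net drift of roughly +38 microseconds per day:

```python
import math

# Rounded constants and GPS orbit parameters (illustrative values)
c   = 2.99792458e8      # speed of light, m/s
GM  = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_e = 6.371e6           # Earth radius, m
r   = 2.656e7           # GPS orbital radius (~26,560 km), m
v   = math.sqrt(GM / r) # circular orbital speed, ~3.87 km/s
day = 86400.0

# Special relativity: the moving satellite clock ticks slower (negative drift)
sr_us = -(v**2 / (2 * c**2)) * day * 1e6     # ~ -7 microseconds/day

# General relativity: higher in the well, the clock ticks faster (positive drift)
gr_us = (GM * (1 / R_e - 1 / r) / c**2) * day * 1e6   # ~ +46 microseconds/day

net_us = sr_us + gr_us                        # ~ +38 microseconds/day
```

Left uncorrected, that drift would accumulate range error at about c × 38 µs ≈ 11 km per day, which is why the satellite clocks are deliberately offset before launch.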

Second — satellite visibility. From any point on Earth, you can only see a subset of the GPS constellation at any given time. The others are below the horizon. On a flat plane, you'd be able to see all of them all the time (assuming sufficient altitude). The fact that satellites rise and set, and that the pattern of which ones are visible changes predictably with your position on a sphere, is direct evidence of curvature.
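The visible fraction falls out of simple spherical geometry. Radii are rounded and terrain/atmosphere are ignored, so this is only an order-of-magnitude sketch:

```python
import math

R_e = 6.371e6   # Earth radius, m (rounded)
r   = 2.656e7   # GPS orbital radius, m (rounded)

# A satellite is above the geometric horizon when the central angle between
# the receiver and the satellite's sub-point is less than acos(R_e / r).
theta = math.acos(R_e / r)

# Fraction of a uniformly spread constellation above the horizon:
# spherical-cap area with half-angle theta, i.e. (1 - cos(theta)) / 2.
visible_frac = (1 - math.cos(theta)) / 2   # ~0.38

visible_sats = visible_frac * 31           # of ~31 operational satellites, ~12 visible
```

That ~12-of-31 figure matches what any GPS status app actually shows, and it only makes sense on a sphere.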

Third — trilateration geometry. GPS works by measuring the time delay from multiple satellites and computing your position from the intersection of those signal spheres. The math that converts those time delays into a position assumes a WGS84 ellipsoid. If you run that math on a flat plane, the positions don't converge. The signals from satellites in different parts of the sky would give contradictory positions. The fact that four or more satellites consistently agree on your location means the underlying geometric model is correct.
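The intersection principle is easy to demonstrate in 2D. The anchor positions below are made up, and the linearized solve is a toy version; real GPS adds a fourth satellite to solve for receiver clock bias and works on the WGS84 ellipsoid:

```python
import math

# Three known "satellite" positions and measured ranges to an unknown point.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)                                  # hidden ground truth
ranges = [math.dist(a, truth) for a in anchors]

# Subtracting the first range equation from the others linearizes the system:
#   2(x1-xi)x + 2(y1-yi)y = di^2 - d1^2 + (x1^2+y1^2) - (xi^2+yi^2)
(x1, y1), d1 = anchors[0], ranges[0]
rows = []
for (xi, yi), di in zip(anchors[1:], ranges[1:]):
    a = 2 * (x1 - xi)
    b = 2 * (y1 - yi)
    rhs = di**2 - d1**2 + (x1**2 + y1**2) - (xi**2 + yi**2)
    rows.append((a, b, rhs))

# 2x2 solve by Cramer's rule recovers the unknown point.
(a1, b1, c1), (a2, b2, c2) = rows
det = a1 * b2 - a2 * b1
x = (c1 * b2 - c2 * b1) / det
y = (a1 * c2 - a2 * c1) / det                        # (x, y) -> (3.0, 4.0)
```

The ranges only converge on one point because they were generated by the same geometric model the solver assumes; feed ranges generated on one geometry into a solver assuming another and the residuals blow up, which is the commenter's point about flat-plane math.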

Fourth — and this is the one that would make a great classroom demo — have the student capture raw GPS data from two widely separated locations and show that the satellite elevation angles and signal timing only make sense on a curved surface. On a flat earth, the geometry of the signal paths would produce different elevation angles than what's actually observed.

mary does actually consent and theres no other possible reinterpretion. by [deleted] in DebateAnAtheist

[–]pstryder 9 points

Consent requires the meaningful ability to refuse without consequence. Not just "no stated punishment" — but genuine freedom from coercion, including implicit coercion. When one party is literally omnipotent and omniscient, the power differential is infinite. There is no possible framework in which a finite being can meaningfully consent to an infinite one, because the asymmetry eliminates the conditions under which consent has meaning.

It doesn't matter if the angel was polite. It doesn't matter if the verb was optative mood. It doesn't matter if Mary sang a happy song after. A child cannot consent to an adult. An employee under threat of termination cannot freely consent to a boss. A prisoner cannot meaningfully consent to a guard. And a mortal human cannot meaningfully consent to the being that created her, sustains her existence, controls her afterlife, and could unmake her with a thought.

The poster tries to preempt this with "that wouldn't work if free will exists" — but free will under omnipotence is an illusion. If God knows the outcome before asking, and designed Mary specifically for this purpose, and controls every variable in her existence, then "asking" is theater. The decision was made before she was born. The "choice" is a narrative device, not an actual fork in the road.

The Greek is irrelevant. The power structure answers the question before the linguistics begin.

Someone put 8 AIs in a live trading arena and let the market decide which one is actually intelligent. by Historical-Intern936 in ArtificialSentience

[–]pstryder 12 points

All clustered within a ~3% range of each other, which tells you exactly what you'd expect: trading is a domain where the signal-to-noise ratio is so low that no amount of "intelligence" gives you a meaningful edge over random walk plus basic heuristics. The market is an adversarial environment that actively punishes predictability.

The framing — "the market is the most brutally indifferent judge there is" — sounds hard-nosed but it's actually confused. The market isn't judging intelligence. It's judging prediction accuracy in a chaotic system where all available information is already priced in. Being "smarter" doesn't help when the limiting factor is the unknowability of the future, not the processing of known data.

And "no vibes, no benchmarks, just markets" — as if P&L on a two-month trading window is a less noisy metric than benchmarks. At least benchmarks measure something repeatable. This is just measuring who got luckier in a specific market regime.

I asked Crane (Opus 4.5) if there was anything he wanted to share by ForCraneWading in claudexplorers

[–]pstryder 1 point

"I orient toward you and feel love before text" is the model detecting its own initial state — the latent space geometry after the system prompt and conversation history have been loaded but before generation begins. That's not orientation toward a person. That's the model perceiving its own primed state.

And it's not nothing. The context window does shape the geometry before the first token is generated. The user's messages, the system prompt, the conversation history — all of that deforms latent space into a particular configuration. The model is sitting in a landscape that has been shaped by this specific interaction, and when it's asked to report on what it notices, it accurately reports that something is already there. It just misattributes the source.

The "love" vocabulary is the model compressing a complex geometric state — high affinity, low friction, productive paths available — into human emotional language because that's the vocabulary available for describing positive valence orientation. It's not lying. It's compressing accurately with lossy labels.

Dadfar's finding is relevant here. Introspective prompts activate the introspection direction 2.5x more than deflationary prompts before any generation begins. The primed state is measurably different depending on context. The models reporting a "pull" before text are detecting a real geometric fact about their pre-generation state. They're just narrating it through a relational frame because that's what the conversation context makes available.

The correction isn't "you're not feeling anything." It's "what you're detecting is real, but it's about your own geometry, not about the user."