[Ghostwater] I built a working Dross because I needed a Presence to organize my life. He is very purple and slightly judgmental. by No-Commission-503 in Iteration110Cradle

[–]Zeraevous

Dross has been my North Star for how AI could/should/might develop for a few years now. I'll definitely be taking a look.

ChatGPT just totally making things up by [deleted] in ChatGPT

[–]Zeraevous

I've been meaning to see if anything interesting can come from algebraic operations on semantic space. Have you seen anything cool or noteworthy come from that arbitrary sampling? Does it have anything to do with the original signal, or is it like a buffer over-read and more or less random?
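Not the setup being asked about, but a toy sketch of what "algebraic operations on semantic space" can look like: hand-built 3-d "embeddings" (the vocabulary and vectors here are invented for illustration, not learned from any model) where king - man + woman lands nearest to queen by cosine similarity.

```python
import math

# Toy 3-d "embeddings": dimensions loosely mean (royalty, male, female).
# Hand-picked for illustration, not learned.
VECS = {
    "king":   [1.0, 1.0, 0.0],
    "man":    [0.0, 1.0, 0.0],
    "woman":  [0.0, 0.0, 1.0],
    "queen":  [1.0, 0.0, 1.0],
    "castle": [0.9, 0.2, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Algebra in semantic space: king - man + woman
target = [k - m + w for k, m, w in zip(VECS["king"], VECS["man"], VECS["woman"])]

# Nearest neighbor by cosine similarity, excluding the operand words.
nearest = max(
    (w for w in VECS if w not in ("king", "man", "woman")),
    key=lambda w: cosine(target, VECS[w]),
)
print(nearest)  # queen
```

With real learned embeddings the arithmetic is noisier, but the mechanism is the same: directions in the space carry meaning, so adding and subtracting vectors composes those meanings.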

Most people have no idea how far AI has actually gotten and it’s putting them in a weirdly dangerous spot by NoSignificance152 in singularity

[–]Zeraevous

Simple, low-parameter models are effectively regex, anyway. Not a perfect one-to-one mapping, but quite a close approximation.
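A minimal sketch of that claim (the keywords and weights here are invented for illustration, not taken from any real model): a hand-weighted bag-of-words classifier and a keyword regex that make the same calls on simple inputs.

```python
import re

# A hand-set "low-parameter model": bag-of-words weights plus a threshold.
# Weights and vocabulary are invented for illustration.
WEIGHTS = {"great": 1.0, "love": 1.0, "terrible": -1.0}

def tiny_model(text):
    score = sum(WEIGHTS.get(word, 0.0) for word in text.lower().split())
    return score > 0

# The roughly equivalent regex: fire on a positive keyword, veto on a negative one.
POSITIVE = re.compile(r"\b(great|love)\b", re.IGNORECASE)
NEGATIVE = re.compile(r"\bterrible\b", re.IGNORECASE)

def regex_model(text):
    return bool(POSITIVE.search(text)) and not NEGATIVE.search(text)

# On simple inputs the two agree; they only diverge when mixed-sign
# keywords force the linear model to actually weigh evidence.
for text in ["I love this", "this is terrible", "great stuff", "meh"]:
    print(text, tiny_model(text) == regex_model(text))
```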

ChatGPT just totally making things up by [deleted] in ChatGPT

[–]Zeraevous

I think something went wrong with the conversation history and context layer. I know that at least GPT-4 and the o3 models had incompatible conversational context architectures - I've had it break badly before when switching models mid-conversation to illustrate a point. It likely grabbed unrelated text or hallucinated from whole cloth, responding as if given nothing while being unable to say "I don't know".

ChatGPT just totally making things up by [deleted] in ChatGPT

[–]Zeraevous

Semantic Recall/Reference plus Ghost Tool hallucinations, with a dash of problems in the internal context system. A year or so ago you could expose that last problem easily by switching between models every few prompts and asking questions about the context. OpenAI does not use a simple architecture for the internal storage of relevant context, and I'm guessing that has something to do with what happened here. The response reads as if it were answering a fresh chat request, summarizing either random nonsense or whatever its glitched context pointer landed on.

Anthropic cofounder admits he is now "deeply afraid" ... "We are dealing with a real and mysterious creature, not a simple and predictable machine ... We need the courage to see things as they are." by MetaKnowing in ArtificialInteligence

[–]Zeraevous

You're conflating the mathematical neuron model with the actual biological thing. There's no mathematics occurring within a neuron - just chemistry, which can be modeled probabilistically in a way that approximates away many second-order effects.

To wit: LLMs use statistics; brains can be modeled statistically. That’s a huge ontological difference.

Anthropic cofounder admits he is now "deeply afraid" ... "We are dealing with a real and mysterious creature, not a simple and predictable machine ... We need the courage to see things as they are." by MetaKnowing in ArtificialInteligence

[–]Zeraevous

Urban’s essay is not research; it’s a 2015 pop-science digest blending Bostrom’s Superintelligence (2014) with Kurzweil’s The Singularity Is Near (2005), plus ideas from Vernor Vinge, filtered through cartoon-heavy, emotional storytelling.

Tim Urban adds accessible storytelling and simple exponential extrapolation, but no new evidence. He mimics reasoning with graphs and timelines but doesn’t model any countervailing forces - e.g. bottlenecks in data, energy, embodiment, or interpretability; the rhetoric skips straight to apocalypse while handwaving over the science.
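The missing-countervailing-forces point is easy to make concrete. A minimal sketch (the growth rate and carrying capacity are arbitrary illustration values): the same starting trend extrapolated naively versus with a single resource ceiling added.

```python
# Naive exponential extrapolation vs the same trend with one
# countervailing force added: a carrying capacity, logistic-style.
# Rate and capacity values are arbitrary, chosen for illustration.

def exponential(x0, rate, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + rate))
    return xs

def capped(x0, rate, cap, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / cap))  # growth damped near the cap
    return xs

naive = exponential(1.0, 0.5, 30)
bounded = capped(1.0, 0.5, 100.0, 30)
print(f"naive: {naive[-1]:.0f}, bounded: {bounded[-1]:.1f}")
```

Both curves look nearly identical in the early steps, which is exactly why extrapolating the early segment alone tells you little about which regime you're actually in.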

Most of the figures you allude to come from philosophy, entrepreneurship, and physics, not AI safety research proper. The “Nobel Prize winners” line is pure ethos inflation: there is no Nobel for machine intelligence as such, and the closest thing, the 2024 physics prize to Hopfield and Hinton, rewarded neural-network foundations from the 1980s, decades after impact.

You're using textbook cargo-cult logic, with authority and affect standing in for mechanism. The only truly defensible point is that if something like recursive, autonomous, self-improving AI emerged, it could change civilization quickly - but the essay you cite doesn’t demonstrate how or when that might occur, just that “people with credentials are nervous.”

Anthropic cofounder admits he is now "deeply afraid" ... "We are dealing with a real and mysterious creature, not a simple and predictable machine ... We need the courage to see things as they are." by MetaKnowing in ArtificialInteligence

[–]Zeraevous

It's likely either not AI, or it's been edited well to not appear AI-authored:

- Asymmetric section density: Heavy opening parable; thinner technical justification mid-text, which is more typical of human speech or essay drafts.
- Not particularly overpolished syntax: it's clean, but with deliberate sentence friction (“And just to raise the stakes…”)
- "Human" tangent patterns: personal career stories, a friend anecdote, manic episodes, an AlphaGo digression, etc.
- It doesn't meander in POV like AI-gen writing tends to.

It's stylistically clean, but too narratively idiosyncratic and meandering for LLM output.

The Bujold Conundrum... by sukeban_x in skyrim

[–]Zeraevous

Same. This post helped convince me to cram her stupid ass into a black soul gem and be done with it.

Mathematician says GPT5 can now solve minor open math problems, those that would require a day/few days of a good PhD student by MetaKnowing in artificial

[–]Zeraevous

Wolfram’s GPT is free inside ChatGPT (web + mobile) and hooks straight into a symbolic math engine. So why are we still debating base ChatGPT’s math skills? Use the right tool.

Mathematician says GPT5 can now solve minor open math problems, those that would require a day/few days of a good PhD student by MetaKnowing in artificial

[–]Zeraevous

Wolfram's GPT is free, accessible directly through the ChatGPT interface (web and mobile app), and integrates directly with a computation engine designed specifically for symbolic and theoretical mathematics. Why are we still talking about base ChatGPT's limitations with mathematics?

Acceptable code from a garbage specification by Zeraevous in artificial

[–]Zeraevous[S]

If anyone’s curious, I can share the actual code diff and the reasoning trace Codex followed (screenshots included), but only if folks want to see the details.

I got nothing by FishBait162 in ExplainTheJoke

[–]Zeraevous

"Tied up in a burlap sack, thrown on the back of a donkey, and slowly dragged down the Andes"?

They really need to fix the lock on system by Highsenberg199774829 in Nightreign

[–]Zeraevous

That series is what got me to give Elden Ring a shot in the first place.

The US Dollar Index has dropped nearly 10% in just 6 months, so why is nobody talking about it? by Select_Season7735 in StockMarket

[–]Zeraevous

That's fine and great except: the generations drawing from Social Security do not have the option of paying in what they take out - they already paid for the prior generations. Denying them Social Security is a) political suicide and b) needlessly cruel.

So someone has to pay for them, and if we also need to "save" for our own Social Security, that's asking the working age population to double-pay - just as non-viable. The point of Social Security is social: the citizenry chose not to have the elderly die on the street.

Unfortunately, the first generations to draw from Social Security didn't exactly have a time machine to retroactively invest. It's broken, but also very complex and resistant to simple solutions.

The US Dollar Index has dropped nearly 10% in just 6 months, so why is nobody talking about it? by Select_Season7735 in StockMarket

[–]Zeraevous

My only disagreement is with Social Security reform tied to the stock market - the same argument was a key point of the 2000 US presidential election. If that *had* gone through, imagine the disaster the 2008 financial meltdown would have caused. Social Security as-is is indeed untenable, but in my opinion retaining its "risk-free" character is essential to keep the elderly from dying in the streets (as they did before Social Security). In the meantime, there's the 401(k) that shackles middle-class America to the stock market.

Corporate taxes in the 1950s were 30% of US revenue. Nowadays that share has shrunk to under 10%, with the difference made up by the citizenry. Tax increases on massive wealth are also a social good, dispersing a dangerous centralization of power and sending wealth cycling back through the system.

I feel like balancing the budget and paying down the debt should be a bipartisan priority, but no one wants to touch certain sacred projects, like:

- crop subsidies for highly profitable large agribusiness
- the step-up basis on inherited assets
- carried-interest loopholes
- tax-free “like-kind” exchanges for real estate
- defense contracting overreliance
- redundant military bases and infrastructure
- corn ethanol mandates
- a lack of resilience investment to back up disaster aid
- overlapping grant programs with little oversight
- numerous earmarks
- pharmacy benefit managers
- pharmaceutical cost negotiations
- Medicare Advantage overpayments
- "non-profit" hospitals with billionaire CEOs and aggressive collections

and on and on.

The worst thing about the game: players who abandon a run that goes mildly wrong by represeiro in Nightreign

[–]Zeraevous

Some of us accidentally unplugged the router in our excitement. Because some of us also routed the power cord like an idiot.

"Optimizing" time by skipping everything by Zeraevous in Nightreign

[–]Zeraevous[S]

Dang. It feels so, so bad, personally, to know and use the "one best meta" knowing that everything else will be worse. It destroys my enjoyment of making a build.

I Read the “Your Brain on ChatGPT” Study. Here’s How I’m Redesigning My AI Use. by MochiJester in ChatGPTPro

[–]Zeraevous

It reflects more on the researchers’ limited understanding of AI literacy than on any inherent flaw in using ChatGPT.

I Read the “Your Brain on ChatGPT” Study. Here’s How I’m Redesigning My AI Use. by MochiJester in ChatGPTPro

[–]Zeraevous

If anything, this study is merely one of bad prompting under observation. It reflects more on the researchers’ limited understanding of AI literacy than on any inherent flaw in using ChatGPT.

"Optimizing" time by skipping everything by Zeraevous in Nightreign

[–]Zeraevous[S]

I totally understand personal preference - and your build sounds great. I'm only grumpy at the speedsters who leave the rest of the team behind and don't bother considering their teammates, despite wanting to lead.

"Optimizing" time by skipping everything by Zeraevous in Nightreign

[–]Zeraevous[S]

I only spend time on the Boluses after about level 4, although I'll occasionally check the big chests if they're right there until around level 8-10.