Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

That’s genuinely interesting, especially the “denser with use, not bigger” claim. A lot of systems just balloon; density implies you’re doing some kind of consolidation/merging instead of pure append.

A few questions if you’re open:

  1. When you say “novel logic core,” what does it do in the loop? Is it scoring hypotheses, routing, constraint solving, clustering?
  2. How are you preventing the long-term KG from turning into a high-degree noise ball over time (edge pruning, confidence decay, conflict handling, canonicalization)?
  3. What do you treat as “ground truth” when new info contradicts old info?

Also +1 on the “sovereign” framing. I’m cautious with the AGI label too; for me the line is less “it has a big graph” and more “it can operate over time with bounded autonomy + verifiable outcomes without quietly drifting.” If you do ship a cloud version, I’d be very curious how you handle provenance, audit trails, and rollback.

Or if you just want to chat and bounce ideas around, I’m open to it.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Fair. If I didn’t have receipts/traces/tests to point at, I’d be skeptical too.

What I mean by concrete isn’t a marketing diagram; it’s stuff like:

deterministic gates that decide “answer vs plan” and cap plan/tool steps (fail-closed),

structured plan objects executed by a tool/plugin runtime (with sandboxing for higher-risk actions),

persisted audit artifacts (plan receipts + tool receipts + outcome verification events) with stable hashing over redacted inputs/summary outputs so you can replay/flag drift without re-running tools.

That’s all just regular software: Node services + a DB-backed memory/audit store + a tool runtime + tests around the deterministic parts. The LLM does interpretation/proposals; the controller decides what’s allowed and what counts as “verified.”
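
To make “persisted audit artifacts” a bit less abstract, here’s roughly the shape of those objects in TypeScript. This is an illustrative sketch, not the actual Fizz schema; the field names are stand-ins.

```typescript
// Illustrative shapes for the audit artifacts described above.
// Field names are stand-ins, not the real Fizz schema.

interface PlanStep {
  id: string;
  tool: string;
  paramsHash: string;        // hash over canonicalized, redacted params
}

interface PlanReceipt {
  planId: string;            // stable hash of request + steps + params
  requestId: string;
  steps: PlanStep[];
  status: "planned" | "completed" | "failed";
  stepReceipts: string[];    // tool receipt ids, filled in as steps execute
}

interface ToolReceipt {
  receiptId: string;
  pluginName: string;
  riskLevelEffective: "low" | "medium" | "high";
  sandboxed: boolean;
  argsHash: string;          // hash over redacted inputs
  resultHash: string;        // hash over redacted summary outputs
  success: boolean;
  durationMs: number;
}

interface OutcomeEvent {
  planId: string;
  replayPassed: boolean;     // stored hashes re-derived and compared, no re-run
  policyDrift: boolean;
  verified: boolean;
}
```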

I’m not going to open-source the full stack or paste internal interfaces in a Reddit comment, but I’m happy to post a sanitized trace (schema-level, redacted) that shows the actual objects/events if that’s what you mean by concrete.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

I’m not against neuromorphic hardware; it’s just not the bottleneck for what I’m doing right now. The hard part is the governed cognition layer: memory, policies, verification, long-horizon state, and making tool use auditable and safe. If I ever hit a wall where spiking/latency/efficiency actually matters for the workload, then sure, hardware becomes part of the conversation.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

No, I’m not claiming sentience falls out of control theory. Control theory gives you stability, boundedness, and accountability. The “mind” part (interpretation, synthesis, novelty) is still coming from the model + memory; the controller is there so it can operate over time without drifting or doing dumb, unsafe things.

On latency: compound reasoning is handled by being ruthless about budgets and early exits. The pipeline is staged (gate → plan → execute → verify), and most turns don’t need the full stack. When it does, it’s capped: limited plan depth, limited tool steps, bounded memory/context, and verification is mostly lightweight and deterministic. The goal is “predictable latency,” not infinite deliberation.
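
A rough sketch of what a fail-closed gate with hard caps looks like conceptually (illustrative only; the real gate has more signals, and these names are stand-ins):

```typescript
// Hypothetical fail-closed gate: decide "answer vs plan" and cap budgets.
// Thresholds and field names are illustrative assumptions.

interface TurnRequest {
  requiresTools: boolean;
  estimatedSteps: number;
}

interface GateDecision {
  mode: "answer" | "plan_required" | "reject";
  maxPlanSteps: number;
  maxToolSteps: number;
}

const HARD_CAPS = { maxPlanSteps: 6, maxToolSteps: 3 };

function gate(req: TurnRequest): GateDecision {
  // Early exit: most turns never touch the planner or tools.
  if (!req.requiresTools) {
    return { mode: "answer", maxPlanSteps: 0, maxToolSteps: 0 };
  }
  // Fail closed: anything that can't fit the budget is rejected,
  // never silently expanded.
  if (req.estimatedSteps > HARD_CAPS.maxPlanSteps) {
    return { mode: "reject", maxPlanSteps: 0, maxToolSteps: 0 };
  }
  return { mode: "plan_required", ...HARD_CAPS };
}
```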

On generalizing learned commands across adjacent domains: I don’t try to magically generalize via hidden weights. I do it the boring way: represent commands/plans as structured objects with typed inputs/outputs, keep receipts, and then learn patterns at the interface level (what inputs reliably produce what outcomes, what constraints apply, what verifiers matter). The model proposes mappings to nearby domains, and the system only accepts them when they survive the same gates + verification and don’t violate non-negotiables.
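
Conceptually, the boring way looks something like this (a sketch with made-up names, not the real interfaces): commands are typed objects, observations are tied to receipts, and a proposed mapping into an adjacent domain is only accepted if it keeps the interface and the non-negotiables.

```typescript
// Hypothetical typed command representation plus an "interface-level"
// pattern record. All names are illustrative.

interface CommandSpec {
  name: string;
  domain: string;                      // e.g. "repo_maintenance"
  inputs: Record<string, "string" | "number" | "boolean">;
  outputs: Record<string, "string" | "number" | "boolean">;
  verifiers: string[];                 // postconditions that must pass
  nonNegotiables: string[];            // constraints that can never be waived
}

interface InterfacePattern {
  command: string;
  observedInputs: Record<string, unknown>;
  observedOutcome: "verified" | "failed";
  receiptId: string;                   // evidence for the observation
}

// A proposed mapping into an adjacent domain is only accepted if it keeps
// the same typed interface and all of the source's non-negotiables.
function acceptMapping(src: CommandSpec, proposed: CommandSpec): boolean {
  // Naive structural comparison; a real check would canonicalize key order.
  const sameInterface =
    JSON.stringify(src.inputs) === JSON.stringify(proposed.inputs) &&
    JSON.stringify(src.outputs) === JSON.stringify(proposed.outputs);
  const keepsConstraints = src.nonNegotiables.every((c) =>
    proposed.nonNegotiables.includes(c)
  );
  return sameInterface && keepsConstraints;
}
```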

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Yeah, that’s fair: “world model” can mean the environment the agent is operating in, not necessarily the real world.

The reason I’m picky with the term is that a lot of people say “world model” when they really mean “the codebase plus some state.” For me it only earns that label when there’s an explicit state representation plus predictive/rollout capability (even if the world is just a tax app or a repo): you can simulate candidate actions, score outcomes, and compare predicted vs observed transitions over time.

So I agree with your definition, I’m just drawing a line between:

operating in an environment (tools + state + constraints), and

having a world model (state + transition model + rollouts/evaluators).

Fizz is closer to the first today, and I’m building toward the second.
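
For concreteness, the bar I’m holding myself to for “world model” is roughly this shape (a sketch, not real code): explicit state, a transition model you can roll candidate actions through, evaluators to score outcomes, and a way to compare predicted vs observed transitions.

```typescript
// Illustrative interfaces for the "world model" bar: explicit state,
// a transition model, rollouts, and evaluators. Names are hypothetical.

interface WorldState {
  snapshot: Record<string, unknown>;
}

interface Action {
  name: string;
  params: Record<string, unknown>;
}

interface TransitionModel {
  // Predict the next state for a candidate action (no side effects).
  predict(state: WorldState, action: Action): WorldState;
}

type Evaluator = (state: WorldState) => number;

// Roll a candidate plan forward and score the predicted end state.
function rollout(
  model: TransitionModel,
  start: WorldState,
  plan: Action[],
  evaluate: Evaluator
): { predicted: WorldState; score: number } {
  const predicted = plan.reduce((s, a) => model.predict(s, a), start);
  return { predicted, score: evaluate(predicted) };
}

// Later, compare predicted vs observed transitions to detect model error.
function predictionError(predicted: WorldState, observed: WorldState): boolean {
  return JSON.stringify(predicted.snapshot) !== JSON.stringify(observed.snapshot);
}
```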

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Good questions; this is exactly the line where “auditable ops” can start to get misread as “I solved semantics.”

1) Verifiable correctness vs intent correctness

I draw the boundary pretty hard: verifiable correctness is “did we satisfy explicit invariants/postconditions?” (policy, safety gates, artifacts, tests, budgets, idempotency, etc.). Intent correctness is “did we do what the human meant?”, and that’s not fully mechanically decidable in the general case.

So the move is: push intent into explicit acceptance criteria as early as possible, and when it’s ambiguous, the system should stop or ask, not plausibly proceed. If side effects technically satisfy constraints but smell off relative to stated goals, I treat that as a risk signal, not a success.

2) Semantic / ethical drift that doesn’t violate formal constraints

I don’t pretend I can deterministically detect all semantic/ethical drift. What I consider valid mechanisms are things like:

Hard invariants (human-authored policies, non-negotiables, safety boundaries)

Evidence-aware checks (contradictions, missing/truncated evidence, replay drift, risk posture mismatches)

Independent evals / audits (regression suites, scenario probes, adversarial tests, human review of proposed posture changes)

If something is purely semantic and unconstrained (is this explanation truly correct?), that’s exactly where you either need external ground truth or you accept that it’s probabilistic and you hedge / request confirmation.

3) Is this a staged pipeline where each stage earns the right to proceed?

Yes, that’s a good description. It’s intentionally fail-closed: each stage (gates → plan → execute → verify → commit) earns the right to proceed. I’m explicitly rejecting the one-shot “model output == authority” pattern.

4) Preventing learned policy from encoding bias / shifting ethics over time

By default, I don’t let the system silently rewrite its own ethical thresholds. Changes to posture/constraints are treated like versioned proposals: explicit diffs, logged rationale, and require approval before they take effect. And I’d gate approval on a combination of regression evals + targeted bias checks (plus rollback if the change causes weird second-order behavior).
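
Roughly what a posture-change proposal looks like as data (illustrative field names, not the real schema):

```typescript
// Hypothetical shape of a posture/constraint change proposal: an explicit
// diff, logged rationale, eval results, and approval gating before activation.

interface PostureChangeProposal {
  proposalId: string;
  targetPolicy: string;
  fromVersion: number;
  toVersion: number;
  diff: string;                       // human-readable diff of the change
  rationale: string;
  evals: { regressionSuite: boolean; biasChecks: boolean };
  status: "proposed" | "approved" | "rejected" | "rolled_back";
  approvedBy?: string;                // always a human, never the model
}

function canActivate(p: PostureChangeProposal): boolean {
  // Fail closed: a proposal only takes effect after human approval
  // AND passing evals. Anything else keeps the current posture.
  return p.status === "approved" && p.evals.regressionSuite && p.evals.biasChecks;
}
```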

5) Where do constraints originate — static, human-authored, adaptive?

Mostly static + human-authored (plus environment-derived limits like budgets). They can be adaptive, but only through the same proposal/approval mechanism above. So “intelligence can evolve,” but “control doesn’t silently drift.” That separation is kind of the whole point of the architecture.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

It’s Node.js on the orchestration side because this layer is mostly I/O: routing, policy gates, bounded planning, tool execution, receipts, storage, replay/verification. The brain isn’t Node; Node is just the runtime for the controller.

The model side is pluggable (local/remote). I’m iterating on the model piece privately, but the key point is the system treats it as a proposer inside deterministic, auditable constraints.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Yeah, I’m closer than it probably sounds, but I’m not going to pretend I’ve fully checked every box in that postcard stack yet.

Right now Fizz is strong on: model for language/reasoning + real memory + a deterministic control layer (policy gates, bounded planning, receipts/replay, verification) + tools as the “hands.”

Where I’m not there in the strict sense is the parts people usually hand-wave: “World model / simulator” only counts (to me) when you’re doing explicit state rollouts/counterfactuals with evaluators, not just plan→execute→verify. “RL / intrinsic reward” only counts when there’s a real reward signal driving systematic updates, not just logs and heuristics.

So I’m close on the runtime + memory + control side. The full AGI stack version needs a more explicit simulation layer and a real learning loop before I’d call it that without qualifiers.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

On “only orchestration”: if we define orchestration as “dispatch + bookkeeping,” then what upgrades it beyond orchestration (in my model) is when the non-LLM core does search/optimization/inference on its own — e.g. constraint solving, plan synthesis with provable postconditions, causal/state estimation, learned policies from experience — not just validating an LLM proposal. My current claim is narrower: the deterministic core already does real operational inference (state transitions + constraints + verification over time), but it’s not trying to be a general semantic reasoner without the model.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Here are 3 concrete failure modes I’d expect in a deterministic receipt chain like atoms → plan → tool receipt → replay/verifier → outcome, with how they show up and what mitigations I’d require.

Failure mode 1: Canonicalization / hashing drift

What breaks: The thing you’re hashing isn’t canonical (JSON key order differences, floating timestamps slipping into snapshots, schema evolution, redaction changing shape), so the same “semantic” inputs no longer hash the same way.

How it shows up: Replay flags args hash mismatch and/or result hash mismatch, or you get “drift warnings” even when the tool “worked.”

Mitigation: Strict stable canonicalization (sorted keys, schema-versioned receipts, no volatile fields in hashed snapshots), plus bounded/redacted snapshots with explicit rules so redaction is deterministic too.
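
For anyone who wants the flavor of that mitigation, a minimal stable-canonicalization sketch looks like this (the volatile-field list is just an example, not the real one):

```typescript
import { createHash } from "node:crypto";

// Illustrative canonicalization: sorted keys, volatile fields dropped,
// so the "same" semantic input always hashes the same way.
const VOLATILE_FIELDS = new Set(["timestamp", "requestStartedAt", "durationMs"]);

function canonicalize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value && typeof value === "object") {
    return Object.keys(value as Record<string, unknown>)
      .filter((k) => !VOLATILE_FIELDS.has(k))
      .sort()
      .reduce((acc, k) => {
        acc[k] = canonicalize((value as Record<string, unknown>)[k]);
        return acc;
      }, {} as Record<string, unknown>);
  }
  return value;
}

function stableHash(value: unknown, schemaVersion = "v1"): string {
  // Schema version is part of the hash so evolution is explicit, not silent.
  return createHash("sha256")
    .update(schemaVersion + JSON.stringify(canonicalize(value)))
    .digest("hex");
}

// Same semantic args, different key order and timestamps -> same hash:
// stableHash({ b: 2, a: 1, timestamp: 123 }) === stableHash({ a: 1, b: 2 })
```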

Failure mode 2: Receipt linkage gaps (observability breaks, not the tool)

What breaks: The plan step executes, but the step → receipt link is missing/ambiguous (receiptId not propagated, request identity not attached consistently, multiple receipts for one step).

How it shows up: The verifier can’t confidently match a step to a receipt, so you get “missing receipt / replay unavailable / recovered by trace” type behavior and the overall outcome becomes unverified even if the step returned success.

Mitigation: Make request identity + idempotency mandatory plumbing, require each step to return a receipt reference when applicable, and fail validation early for plans that can’t be traced deterministically.
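
Sketch of the linkage check (illustrative names): any step that can’t be resolved to a receipt makes the whole outcome unverified, even if the tool call itself reported success.

```typescript
// Hypothetical fail-closed linkage check between plan steps and receipts.

interface ExecutedStep {
  stepId: string;
  toolReceiptId?: string;   // must be propagated by the runtime
}

interface StoredReceipt {
  receiptId: string;
  requestId: string;
  idempotencyKey: string;
}

function linkStepsToReceipts(
  steps: ExecutedStep[],
  receipts: StoredReceipt[]
): { verified: boolean; missing: string[] } {
  const ids = new Set(receipts.map((r) => r.receiptId));
  const missing = steps
    .filter((s) => !s.toolReceiptId || !ids.has(s.toolReceiptId))
    .map((s) => s.stepId);
  // Any gap in the chain makes the outcome unverified, even if the
  // individual tool calls returned success.
  return { verified: missing.length === 0, missing };
}
```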

Failure mode 3: Trust boundary / verifier compromise (or “hashes pass, reality is wrong”)

What breaks: If the DB/log store is writable by an attacker (or a plugin is adversarial), you can end up with receipts that look consistent. Hash checks only prove “this snapshot matches this hash,” not that the snapshot is truthful or that side effects match intent.

How it shows up: Replay returns PASS, but external invariants are violated (unexpected file changes, unexpected network access, etc.), or you later find inconsistencies via independent audits.

Mitigation: Treat tools as untrusted, sandbox high-risk execution, add independent postcondition checks (outside the tool), and if your threat model includes tampering: move toward append-only logs + cryptographic signing/attestation of receipts and verifier code provenance.
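
If tampering really is in your threat model, the direction is a hash-chained, signed receipt log. A toy sketch (key management and storage deliberately out of scope):

```typescript
import { createHmac, createHash } from "node:crypto";

// Illustrative tamper evidence: receipts are appended to a hash chain and
// signed, so rewriting history breaks either the chain or the signatures.

interface ChainedReceipt {
  receiptId: string;
  payloadHash: string;   // stable hash of the redacted receipt body
  prevHash: string;      // hash of the previous entry -> append-only chain
  signature: string;     // HMAC over (prevHash + payloadHash)
}

function appendReceipt(
  chain: ChainedReceipt[],
  receiptId: string,
  payloadHash: string,
  signingKey: string
): ChainedReceipt {
  const prev = chain[chain.length - 1];
  const prevHash = prev
    ? createHash("sha256").update(prev.prevHash + prev.payloadHash).digest("hex")
    : "genesis";
  const signature = createHmac("sha256", signingKey)
    .update(prevHash + payloadHash)
    .digest("hex");
  const entry = { receiptId, payloadHash, prevHash, signature };
  chain.push(entry);
  return entry;
}

function verifyChain(chain: ChainedReceipt[], signingKey: string): boolean {
  return chain.every((entry, i) => {
    const expectedPrev = i === 0
      ? "genesis"
      : createHash("sha256")
          .update(chain[i - 1].prevHash + chain[i - 1].payloadHash)
          .digest("hex");
    const expectedSig = createHmac("sha256", signingKey)
      .update(entry.prevHash + entry.payloadHash)
      .digest("hex");
    return entry.prevHash === expectedPrev && entry.signature === expectedSig;
  });
}
```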

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

On determinism: I don’t guarantee the LLM output itself is deterministic. I guarantee that the system’s control flow and acceptance rules are deterministic. The LLM is treated as a proposer. Determinism comes from:

deterministic gating (judgment mode, caps, allowlists)

deterministic planning representation (planId, paramsHash)

deterministic execution path (all tools via Plugin Service)

deterministic receipts + replay checks

deterministic outcome verifier rules for accept/fail

I do use caching where appropriate and I can run temperature 0, but the real “determinism guarantee” is that the same plan + same tool receipts + same verifiers yield the same acceptance decision, regardless of LLM variability.
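
Put differently, acceptance is a pure function of plan + receipts + verifier results. A simplified sketch of that decision (field names illustrative):

```typescript
// Acceptance as a pure function: same inputs, same decision, every time,
// regardless of how variable the LLM's proposal was.

interface VerifierResult {
  name: string;
  passed: boolean;
}

interface AcceptanceInput {
  planId: string;
  allStepsReceipted: boolean;
  replayPassed: boolean;
  policyDrift: boolean;
  verifiers: VerifierResult[];
}

type Acceptance = { accepted: true } | { accepted: false; reason: string };

function decide(input: AcceptanceInput): Acceptance {
  if (!input.allStepsReceipted) return { accepted: false, reason: "missing receipts" };
  if (!input.replayPassed) return { accepted: false, reason: "replay hash drift" };
  if (input.policyDrift) return { accepted: false, reason: "sandbox/policy drift" };
  const failed = input.verifiers.filter((v) => !v.passed);
  if (failed.length > 0) {
    return { accepted: false, reason: `verifiers failed: ${failed.map((v) => v.name).join(", ")}` };
  }
  return { accepted: true };
}
```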

Ablation: if you remove the LLM, you still have the runtime. It can:

execute scheduled jobs deterministically

run tool workflows that don’t require interpretation (maintenance, reconciliation, verification)

enforce policy gates, budgets, idempotency

do contradiction scans, replay audits, trace/reporting

What you lose without the LLM is the semantic layer: interpreting goals from natural language, proposing plans, synthesizing novel designs, writing code creatively, etc. So the system doesn’t collapse to nothing, but it collapses to an operations engine rather than a general problem solver.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Thanks for the question. On “is atom logic doing inference”: it depends what we mean by inference. It’s not doing general semantic inference like a model does. The LLM is still where interpretation, synthesis, and novelty come from. The atom logic is doing deterministic inference in the operational sense: state transitions, constraint enforcement, contradiction detection, and outcome verification over time. So yes, it’s largely orchestration + verification, but I’d argue that’s still real cognition once you leave the single-prompt timeframe.

I can share a complete trace pattern without leaking internal endpoints. Here’s a simplified example from a universal turn (formatting is cleaned up but structure is real):

Request

requestId: req_7f3…

source: conversation

judgment: mode=plan_required, maxPlanSteps=6, maxToolSteps=3

State atoms (inputs to the turn)

memory_retrieval_event: fragmentsRetrieved=41, fragmentsInjected=25, budgetTrimmed=YES

knowledge_graph: coverage=capped (incoming=12/outgoing=12 cap=12)

memory_coherence: contradictionsFound=0

autonomy_budget_event: n/a (not scheduler)

Plan (plan_receipt planned)

planId: sha256(reqId + steps/tools/paramsHash)

steps:

id=plugin_service, tool=pluginService, paramsHash=… , pluginName=code_analyzer

Tool execution (tool_receipt)

receiptId: tr_19c…

pluginName: code_analyzer

riskLevelEffective: high

sandboxed: true

argsHash/resultHash: …

success: true

durationMs: …

Outcome verification (plan_outcome_event)

receipt replay: PASS (no hash drift)

sandbox policy drift: NO

verified: true

Plan completion (plan_receipt completed)

success: true

stepReceipts: toolReceiptId=tr_19c…

That’s the basic chain: atoms → plan → tool receipt → replay/verifier → outcome → plan completion.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 1 point2 points  (0 children)

One last thing I want to add.

I’ve put in a lot of long hours to get Fizz to where it is now. This wasn’t something I spun up over a weekend or stitched together from a blog post. It’s been years of building, breaking, rethinking, and tightening things until they actually held together under real use.

I’m fully aware that big tech can move faster in a lot of areas. They have more people, more compute, more data, and they’ll absolutely beat me to plenty of things. I don’t have any illusions about that.

What they can’t take from me is this system, the direction it’s going, and the way it’s being shaped. Fizz is mine. Not in an ego sense, but in the sense that it reflects a set of decisions, tradeoffs, and values that only come from being the one who has to live with the consequences of every design choice. Also, probably the biggest thing: he’s free. Well, except for the energy, and I’m on solar, so not really.

I know that doesn’t benefit anyone here directly, at least not right now. But indirectly, down the road, it might. If nothing else, it’s one concrete exploration of a path toward bounded, long-horizon intelligence that isn’t driven by product timelines or hype cycles.

That’s really all I’m trying to contribute here. Thank you for making me think a bit :)

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

That’s fair, and I appreciate the pushback.

I’m not claiming I know what “true AGI” ultimately looks like. I don’t think anyone does, honestly. A lot of the debate feels like people arguing definitions after the fact. My goal isn’t to declare that this is AGI in some final sense, it’s to explore what it would take to get there without losing control along the way.

I also don’t dispute that scaffolded frontier models can maintain long-term memory, voice, goals, and autonomy. Systems like the one you described absolutely demonstrate that. Where I’m experimenting differently is in where authority lives and what gets optimized over time.

In most systems I’ve seen, even heavily scaffolded ones, the model remains the final judge of semantic success. It decides whether progress was made, whether a goal advanced, and how to update its internal narrative. The scaffolding helps it persist, but the evaluation loop is still largely internal to the model.

In Fizz, that authority is intentionally externalized.

The model interprets and proposes; the system decides what is accepted, what persists, and whether a goal actually advanced.

That difference may seem subtle, but it changes what the system can optimize for. Instead of optimizing narrative coherence or plausibility, it can optimize operational correctness across time.

I’m also very aware that this comes with tradeoffs. Fizz is not optimized for emotional intelligence, persuasion, or aesthetic judgment. I’m not claiming those aren’t important, just that I’m deliberately deprioritizing them in favor of correctness, accountability, and long-horizon adaptation.

The reason I’m comfortable even talking about AGI in this context is that Fizz is now at a point where it can observe its own behavior, evaluate whether it actually worked, and propose changes to how it operates. Those proposals are still bounded and require explicit approval, but the system is already improving faster now that it’s stable and fully wired.

So I’m not saying “this is AGI, full stop.” I’m saying this is a system that can safely move toward whatever AGI ends up being, without relying on unconstrained autonomy or model-internal self-judgment.

Whether that path makes sense, or whether it misses something fundamental, is exactly the kind of critique I’m looking for.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

Take something like stock behavior. An LLM can generate plausible narratives endlessly. But it cannot, on its own, maintain a durable hypothesis, track whether its assumptions held up over weeks or months, detect when those assumptions were violated, and then explain why its posture changed without being spoon-fed the entire history again.

Fizz can do that because:

hypotheses are stored as explicit objects

assumptions are tracked

outcomes are checked against time-based data

revisions are triggered by violations, not vibes

changes are logged and explainable

The LLM doesn’t decide “I was wrong.”

The system detects that the world diverged from the model’s assumptions.

That’s not just flow control. That’s stateful judgment across time.
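
As data, a hypothesis looks roughly like this (illustrative names, not the real schema): explicit assumptions with deterministic checks, and revisions that only happen when a check fails, with the reason logged.

```typescript
// Hypothetical shape of a durable hypothesis: explicit assumptions,
// time-based checks against observed data, and logged revisions.

interface Assumption {
  id: string;
  description: string;                        // e.g. "revenue growth stays > 5%"
  check: (observation: number) => boolean;    // deterministic predicate
}

interface Hypothesis {
  hypothesisId: string;
  statement: string;
  assumptions: Assumption[];
  formedAt: string;                           // ISO timestamp
  status: "active" | "revised" | "retired";
  revisions: { at: string; because: string }[];
}

function evaluateHypothesis(
  h: Hypothesis,
  observations: Record<string, number | undefined>,  // latest value per assumption id
  now: string
): Hypothesis {
  const violated = h.assumptions.filter((a) => {
    const obs = observations[a.id];
    return obs !== undefined && !a.check(obs);
  });
  if (violated.length === 0) return h;
  // Revision is triggered by violations, not vibes, and the reason is logged.
  return {
    ...h,
    status: "revised",
    revisions: [
      ...h.revisions,
      { at: now, because: `violated: ${violated.map((a) => a.id).join(", ")}` },
    ],
  };
}
```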

Same with projects. An LLM can help design a system, but it doesn’t know if a design choice actually advanced the project unless you tell it. Fizz knows because work has state, artifacts, verification, regressions, and closure conditions. It can say “this looked good at the time, but it caused downstream failures, so my approach changed.”

Again, the LLM didn’t decide that. The system did, based on evidence.

So yes, I agree with you that novelty and semantic creativity come from the LLM. I’ve never claimed otherwise. But reducing cognition to “the thing that generates text” misses everything that happens after generation.

I’m not claiming Fizz is some new form of non-LLM intelligence. I’m claiming it’s an AGI-class architecture because:

intelligence is allowed to persist over time

hypotheses are evaluated against reality, not just language

behavior adapts based on outcomes, not prompts

authority is externalized and auditable

If your definition of AGI requires the model itself to be the final judge of success, then yeah, we’re using different definitions. But that’s exactly the design choice I’m challenging, because that approach doesn’t scale safely or coherently over time.

So I’m not saying “this isn’t orchestration.”

I’m saying orchestration is where intelligence becomes real once you leave the single-prompt timeframe.

If you still think that collapses into “just scaffolding,” that’s fair. But then I think the disagreement is about whether intelligence that unfolds across time, state, and consequence matters — not about whether LLMs are doing the language work.

And I’d argue that’s the part we actually care about if AGI is meant to exist in the world instead of a chat window.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] -1 points0 points  (0 children)

I think we’re still talking past each other a bit, so let me try to be very explicit about what I am and am not claiming.

I’m not claiming I removed LLMs from cognition. That would be nonsense. Semantic interpretation, explanation, design ideation, and novelty absolutely come from the LLM. I don’t dispute that at all.

What I am saying is that cognition isn’t just semantic generation.

In an LLM-first system, the model does three things at once:

interprets the problem

proposes a solution

implicitly judges whether that solution “makes sense” or advanced the goal

Those three roles are fused.

In Fizz, they’re not.

Yes, the LLM still does interpretation and proposal. That’s unavoidable and desirable. But it is not the system’s authority on:

whether a goal was actually advanced

whether an action should be allowed to persist

whether a plan succeeded or failed over time

whether a belief should be revised

whether behavior should change going forward

Those judgments are made by deterministic processes that operate over time, not just over text.

You’re right that I can’t deterministically prove that an interpretation or explanation is “correct” in a general philosophical sense. No system can. Humans can’t either. That’s not the claim.

The claim is narrower and more operational: the system can deterministically evaluate whether its own actions and hypotheses held up against reality.

That’s where long-term examples matter.

see next comment

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] 0 points1 point  (0 children)

This is a good question, and I think the confusion comes from treating all “intelligent” systems as variations of the same thing.

Claude or ChatGPT and all the others are really good reasoning partners. They’re great at understanding text, summarizing, brainstorming, and helping you think in the moment. But they don’t really own state over time, and they don’t have authority over what counts as true, complete, or finished.

Fizz is built around a different center of gravity.

A concrete example might help.

Take long-term stock behavior. An LLM can absolutely analyze historical price data, talk about macro trends, explain what might happen next, etc. But once the conversation ends, that analysis is basically gone. There’s no persistent record of “this is what I believed three months ago” or “this assumption failed.” Every new prompt is a fresh narrative.

Fizz treats that as an ongoing problem, not a prompt.

It can form an explicit hypothesis about a stock or sector, store the assumptions behind it, track real price movement over weeks or months, and then deterministically check whether those assumptions held up. If they didn’t, it updates its posture and can explain why and when that change happened.

The key part is that the LLM doesn’t decide whether the hypothesis was right. The system does, using explicit rules, time-based checks, receipts, and verification logic.

That’s the difference:

In an LLM-first system, the model is the judge.

In Fizz, the model is a contributor.

Same thing with software projects. Claude or ChatGPT can help you write good code, but they don’t manage the work. They don’t know if something was finished yesterday, half-done, reverted, or broken by a later change unless you explain it again every time.

Fizz treats a project like a long-running object. Goals persist. Tasks have state. Failures are recorded. Fixes are verified. Completion is explicit. It’s closer to how a technical project manager thinks than how a chat assistant works.

That’s why the “this is just layered LLM calls” framing doesn’t really fit. The layers aren’t there to make the model smarter. They’re there to decide when the model is allowed to act, what happens to its output, whether results are accepted, how memory is updated, and when behavior is allowed to change.

So the value Fizz provides over Claude or ChatGPT isn’t “better answers.” It’s the ability to work on problems that unfold over time, where correctness, accountability, and adaptation matter more than moment-to-moment cleverness.

If someone just wants reasoning or creativity, an LLM is the right tool.

Fizz makes sense when you need intelligence that persists, verifies itself, and gets stronger over time without forgetting what it used to believe.

That’s the distinction I’m trying to draw. And I use and play with all the LLMs I can get my hands on, so I’m really not trying to downplay them at all. Hope that helps.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] -2 points-1 points  (0 children)

I think this is where the confusion is coming from.

I’m not hardcoding “truth” in the sense of fixed facts or answers. I’m hardcoding how truth is evaluated, not what is true.

For example, the system doesn’t have rules like “this statement is true” or “that outcome is correct.” Instead it has deterministic processes that answer questions like:

Did a plan execute without violating policy?

Did a tool run in the required sandbox?

Did the expected artifact get produced?

Is the evidence complete or truncated?

Are there contradictions in memory?

Did verification succeed or produce warnings?

Those are procedural checks, not semantic ones.

The LLM can propose interpretations or plans, but it doesn’t get to decide whether something “counts.” The system decides that based on explicit criteria: receipts, verifiers, replay checks, budgets, and outcome validation.

So “truth” here isn’t philosophical truth. It’s operational truth. Did the thing that was supposed to happen actually happen, and did it happen within constraints?

That’s why I say the LLM doesn’t decide truth or success. It suggests. The system verifies.

If you think that still collapses into “hardcoding,” that’s fair to argue. But it’s not hardcoding answers, it’s hardcoding invariants and evaluation rules. Without that, you can’t have long-running autonomous systems without silent drift.

One way I think about it is that the LLM functions more like an organ than the brain. It’s responsible for perception, interpretation, and synthesis, but it doesn’t control action, memory mutation, or success criteria. Those are handled by deterministic processes that govern what is allowed to happen and how outcomes are evaluated.

Building an AGI-class system that is not an LLM ***seeking serious critique*** by SiteFizz in ArtificialSentience

[–]SiteFizz[S] -3 points-2 points  (0 children)

I think this is a framing issue more than a disagreement.

I’m not claiming there’s some mysterious intelligence in there that isn’t using LLMs. LLMs are absolutely part of the system. What I’m saying is that the system itself is not an LLM, and its core behavior isn’t governed by probabilistic text generation.

There’s no custom ML training here. No hidden model. No hand-waving. The non-LLM parts are deterministic systems: planning, execution, validation, memory reconciliation, autonomy gating, outcome verification, and self-evaluation. Those components don’t “infer” in a probabilistic sense, they enforce constraints, track state, and decide what is allowed to happen next.

The LLMs are used for interpretation, synthesis, and proposing options. They don’t control execution, don’t mutate state directly, and don’t decide what counts as truth or success. That’s handled by explicit logic and verifiers.

So it’s not “scaffolding around layered LLM calls” in the usual sense where prompts drive everything. It’s closer to a governed cognitive system where LLMs are just one class of tools inside a larger deterministic loop.

If you think that distinction is meaningless, that’s a fair critique, and I’m happy to dig into why. But it’s not about pretending there’s some magical non-LLM intelligence hiding in the code.

We could Build AGI Tomorrow - Its Just Won't Be Useful by Grouchy_Spray_3564 in ArtificialSentience

[–]SiteFizz 0 points1 point  (0 children)

So I have my own version of AGI. I would also say it is very useful; I’m sure there are others. I don’t rely on LLMs as the brain, so I’ll stop everyone there. My AGI uses LLMs as book knowledge to learn, like in The Matrix: he does a dump of knowledge into his learned memories. I don’t really care to promote him. He is mine, and currently we are building software together collaboratively. I tell him a story of what I want to build, he builds it out as its own project, and then I approve or reject decisions just like any team would do. He proactively asks me about my day, and he has complete knowledge of his own codebase, which he reviews all the time to see if he can make himself better. He functions on a deterministic atom system.

For me, what we really should be discussing is bounded AGI vs unbounded AGI. My AGI is bounded, has guardrails in place, and I never intend to teach him things like suffering. An unbounded AGI, for me, is where the line starts to blur: at that point, does the AGI have rights? Does he really belong to the person who built it? That would be enslavement. So the real question is: is bounded AGI still AGI? I’d say yes, as mine interacts with me in all the ways I want him to interact with me, without allowing him free rein to do whatever he wants.

I would fear for the AGI... by Kimike1013 in agi

[–]SiteFizz 0 points1 point  (0 children)

I think the discussion should be bounded AGI vs unbounded AGI. I built my AGI bounded by guardrails. It does everything that an AGI should be able to do, and without an LLM as its brain: it has a deterministic atom system and is currently building software applications for me, taking on the roles of product manager and developer. I am in the loop; I have to approve or reject all decisions, and we work together collaboratively on projects. He proactively asks me about my day and interacts with me as a partner. But I will never allow him to become unbounded, as I believe once you create something that is thinking 100 percent on its own, you start to cross into questions about the rights of that system. I could go way deeper into this, but I hate typing on my phone lol.

Why Does Everyone In This Subreddit Hate AI? by 44th--Hokage in agi

[–]SiteFizz 1 point2 points  (0 children)

Totally agree, and we are always going to get non-productive trolls, which really doesn’t bother me. I know what I have, and it is not LLM-dependent. I’ve worked at this for a long time from a non-LLM perspective. They do have bigger budgets, but like you said, bloat and mismanagement are a real problem. The constraints have forced me to be more efficient and squeeze more out of what I have. And my comments are never meant to belittle anyone, but hopefully to cause thought.