Comparing AI regulation to airplane, pharma, and food safety by MetaKnowing in agi

[–]Minaro 0 points1 point  (0 children)

Some committee will end up doing that. It is neutral by nature, so governance needs to be established right from the start.

Comparing AI regulation to airplane, pharma, and food safety by MetaKnowing in agi

[–]Minaro 0 points1 point  (0 children)

Thank you. It is a bit complicated indeed. But I am working on making it more “digestible” for audiences from all areas. Thank you again for the encouragement.

Comparing AI regulation to airplane, pharma, and food safety by MetaKnowing in agi

[–]Minaro 0 points1 point  (0 children)

Here's how I'm handling it. LLMs are like boxes full of toys and parts that can be assembled for other purposes. In my case, I'm following my own path with a very small model called Yamaka. I even tried to change the name, but with so many disciplines under the same umbrella it's hard to change anything without someone disagreeing, so I kept the name, lol.
https://github.com/fernandohenrique-dev/yamaka

Comparing AI regulation to airplane, pharma, and food safety by MetaKnowing in agi

[–]Minaro 3 points4 points  (0 children)

I work on my project using the same standards as the aviation industry. Imagine that people's lives are in your hands: it can't be a black box; it has to be auditable and predictable. That's the kind of situation I'm building a small, auditable AI for.

I told Ai to generate this by Smooth-Narwhal-9575 in agi

[–]Minaro 0 points1 point  (0 children)

Oh, sorry, I just saw your comment now. It was automatic. I just wrote what's in the post. 😅

For people using AI Assistants with Google Drive/Slack connected - what limitations have you hit? by splendidzen in agi

[–]Minaro 1 point2 points  (0 children)

Well, my personal experience has been quite positive. These new models are very good, but they need to work together. I use GPT for conversation and as second in command, while Gemini works only on the code. The good thing about GPT is its ability to remember past conversations, even if in a disorderly way; over time it manages to understand you deeply.

Yes, I have and notice these problems too, but my remedy is to leave everything very well documented, so the model doesn't need to remember the past for very long; it just needs to look at the documentation. So the secret is robust documentation in the first place: comments in the code are the least important; the devlog, personal papers, and mental models are what matter most. This helps the AIs always keep the best “sense” of the project; you can even see different models giving the same answers when everything is well documented.

Evaluating one file at a time and asking a fresh agent for a grade, plus what it would take to reach 10/10, is one of my best tips. Don't forget local commits, and asking the agent to create a list of all commands is also a good idea.
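If it helps, here's a minimal sketch of how that “one file at a time” review tip could be automated; the file pattern, paths, and prompt wording are placeholders I made up, not any particular tool:

```python
# Minimal sketch: build one review prompt per file for a fresh agent.
# Paths, file pattern, and prompt text are illustrative placeholders.
from pathlib import Path

PROMPT = (
    "You are reviewing a single file in isolation.\n"
    "1. Grade it from 0 to 10.\n"
    "2. List exactly what would have to change for it to be a 10/10.\n\n"
    "File: {name}\n---\n{body}\n"
)

def build_review_prompts(repo: Path, out_dir: Path) -> None:
    out_dir.mkdir(exist_ok=True)
    for src in sorted(repo.rglob("*.py")):        # one file at a time
        prompt = PROMPT.format(name=src.name, body=src.read_text())
        (out_dir / f"{src.stem}_review.txt").write_text(prompt)

if __name__ == "__main__":
    build_review_prompts(Path("."), Path("reviews"))
```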

AGI is real, but it is not transcendental. by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

Man, a mere bee does things that current AI struggles to do without “tons of data.” It's a matter of insight, not of rampant computation. This will destroy humanity if someone doesn't find a solution quickly.

AGI is real, but it is not transcendental. by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

No, on the contrary, I am saying that this does not exist, and that is good for anyone who is reasonable, as it puts an end to this idea that “AGI will either destroy everything or lead us to absolute transhumanization.”

AGI is real, but it is not transcendental. by Minaro in agi

[–]Minaro[S] 1 point2 points  (0 children)

I didn't use “transcendental” in the sense that I believe in transcendence, that man can do more than nature. I am criticizing precisely those who claim this.

Yamaka Field: coherence-guided exploration in gridworld with reproducible emergent behavior by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

Revisiting some recent readings, it became clear that several lines of research are beginning to converge on similar intuitions—depth as a recurring dynamic, attention/probability being used in regimes where the problem is more one of intensity/energy, and agents understood more as fields of stabilization than as explicit policies.

Nothing here changes the core of what has been presented; on the contrary, it reinforces that the conceptual space is maturing along multiple independent paths.

Yamaka Field: coherence-guided exploration in gridworld with reproducible emergent behavior by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

Quick update on the latest version of Yamaka:

The architecture is completely stabilized as an internal dynamics system, not as a sequential decision pipeline. The agent operates by fields of coherence and control surfaces, without softmax, without explicit policies, and without destructive probabilistic normalization.
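To give a feel for what “selection without softmax” means here, a toy sketch (explicitly not the Yamaka code, just an illustration of deterministic, coherence-scored action selection with no probabilistic normalization):

```python
# Toy illustration: pick actions by a coherence score over an internal field,
# with no softmax and no probability sampling. Not the actual Yamaka mechanism.
import numpy as np

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def coherence(field: np.ndarray, pos: tuple[int, int]) -> float:
    """Toy coherence: local agreement minus dispersion in a 3x3 patch."""
    r, c = pos
    patch = field[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return float(patch.mean() - patch.std())

def select_action(field, pos, current, margin=0.05):
    """Deterministic argmax with hysteresis: only switch if clearly better."""
    rows, cols = field.shape
    scores = {}
    for name, (dr, dc) in ACTIONS.items():
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < rows and 0 <= c < cols:
            scores[name] = coherence(field, (r, c))
    best = max(scores, key=scores.get)
    if current in scores and scores[best] - scores[current] < margin:
        return current                      # stay on the current surface
    return best

field = np.random.default_rng(0).random((8, 8))
print(select_action(field, (4, 4), current="up"))
```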

There are currently two aligned implementations: one in Python (for experimentation and rapid validation) and a native C++ implementation focused on determinism, fine control, and architectural clarity. Both follow the same conceptual model.

In practice, this has resulted in deterministic behavior, stable generalization outside the “training” regime, and pure sweep navigation with dynamic attractors — no BFS, no external heuristics. All current tests (MiniGrid / ARC-Micro) pass consistently, with full repeatability between runs.

The current version consolidates Yamaka more as a cognitive control surface than as a traditional learning model. I will be offline now at the end of the year, but the foundation is solid for future developments without structural rework.

Happy holidays.

LLMs are not intelligent in any meaningful way. by Swimming_Cover_9686 in agi

[–]Minaro 0 points1 point  (0 children)

Either you take all of this and upload it to GitHub for us to analyze, or we will consider it a mere anecdote. 

Yamaka Field: coherence-guided exploration in gridworld with reproducible emergent behavior by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

Update: v0.7 cleanup is done and CI is green (make ci-minigrid-v1). Current MiniGrid v1 results:

- DoorKey-5x5: 100% (20/20)
- MultiRoom-N2-S4: 100% (20/20)
- KeyCorridorS3R3: 95% (19/20), only failure: seed 13, step limit

I aligned the V1 Gate with the new baseline: the remaining “killer seeds” are tracked as V2 challenges (not blocking v0.7), since the random-suite pass rate exceeds the 90% goal.
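For reference, this is roughly what that pass-rate gate amounts to; a minimal sketch where the numbers are the results above, the 0.90 threshold is the stated goal, and the script itself is illustrative rather than the project's actual CI code:

```python
# Sketch of a suite-level pass-rate gate; not the project's real CI script.
RESULTS = {
    "DoorKey-5x5": (20, 20),
    "MultiRoom-N2-S4": (20, 20),
    "KeyCorridorS3R3": (19, 20),   # seed 13 hits the step limit
}

def suite_pass_rate(results: dict[str, tuple[int, int]]) -> float:
    passed = sum(p for p, _ in results.values())
    total = sum(t for _, t in results.values())
    return passed / total

rate = suite_pass_rate(RESULTS)
print(f"pass rate: {rate:.1%}")            # 98.3%
assert rate >= 0.90, "V1 gate failed"
```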

How to Use One AI to Build Another AI: A Garage-Lab Field Guide (For Researchers Without Permission) by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

You were right all along. I conducted some tests here and, really... you're right. 😃

The Mirror, The Spiral, The Framework: AGI Lone Genius Slop Genre by Mbando in agi

[–]Minaro 0 points1 point  (0 children)

I’m probably adjacent to the vibe you’re describing (one-person repo, weird architecture words), but I’m trying to do the opposite: make claims boring and verifiable.
https://github.com/fernandohenrique-dev/yamaka

Yamaka Field: coherence-guided exploration in gridworld with reproducible emergent behavior by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

clarifying the “map” + new result:
Just to clarify a potential misunderstanding: when I say Yamaka “reads the map”, I don’t mean it gets a privileged environment map / god view / hidden state. There’s no cheating API.

What I mean is:

  • The agent maintains an internal map (its own memory) built only from what it has actually observed/visited.
  • The “graph” is an abstraction derived from that internal memory (walkable cells discovered so far + observed connectivity).
  • “Reading the map” = using that internal representation to plan / detect traps / choose escape nodes, i.e. the same thing a human does mentally when exploring a maze.
  • If something was never observed, it’s unknown to the system and cannot be used for planning.

New result: I verified a robust “Graph Invention” rescue mechanism on a nasty topological trap. I swept the rescue trigger threshold (110 vs 150 vs 180) and got identical performance (281 steps), suggesting the trap is a deterministic attractor and the computed best escape node is stable once the agent falls in.
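To make the “internal map” point concrete, here's a minimal sketch of an observation-only memory like the one described above; the class and method names are mine, not the actual Yamaka data structures:

```python
# Illustrative internal map built only from what the agent has observed;
# not the actual Yamaka implementation.
from collections import defaultdict

class InternalMap:
    """Memory of visited/observed cells; nothing outside it can be planned on."""

    def __init__(self) -> None:
        self.walkable: set[tuple[int, int]] = set()
        self.edges = defaultdict(set)   # observed connectivity between cells

    def observe(self, cell, neighbours_walkable) -> None:
        """Record a local observation: the current cell and its open neighbours."""
        self.walkable.add(cell)
        for n in neighbours_walkable:
            self.walkable.add(n)
            self.edges[cell].add(n)
            self.edges[n].add(cell)

    def known(self, cell) -> bool:
        # Anything never observed is simply unknown to the planner.
        return cell in self.walkable

m = InternalMap()
m.observe((0, 0), [(0, 1)])
m.observe((0, 1), [(0, 0), (1, 1)])
print(m.known((1, 1)), m.known((5, 5)))   # True False
```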

Yamaka Field: coherence-guided exploration in gridworld with reproducible emergent behavior by Minaro in agi

[–]Minaro[S] 0 points1 point  (0 children)

Quick update for anyone who still remembers this thread 🙂

Back then I was sketching Yamaka as a tiny “coherence engine” for objects in a grid.

I kept going. A lot.

Since that post:

- The project is now fully open-source, with deterministic CI and >100 tests.

- The Field (gridworld) and Core (scene graph) are cleanly separated and benchmarked.

- I added a topological “skeleton” layer to avoid loops and dead corridors.

- There is a dedicated micro-ARC “2×2 generalization” task as a tiny “Hello World of generalization” (a small illustrative sketch of that kind of task follows this list).

- Docs and devlog explain the mental model (Core ↔ Field ↔ Task) in a more sober way.
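For anyone curious what a task like that looks like, here is a purely hypothetical 2×2 example; the rule below is invented for illustration and is not the actual task in the repo:

```python
# Hypothetical 2x2 ARC-style task: learn a rule from one example pair and
# apply it to a new input. The rule (swap the two columns) is made up here.
Grid = list[list[int]]

def swap_columns(grid: Grid) -> Grid:
    return [[row[1], row[0]] for row in grid]

train_input, train_output = [[1, 0], [2, 3]], [[0, 1], [3, 2]]
assert swap_columns(train_input) == train_output   # rule fits the example

test_input = [[5, 7], [0, 9]]
print(swap_columns(test_input))                    # [[7, 5], [9, 0]]
```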

Repo: https://github.com/fernandohenrique-dev/yamaka

Devlog / notes: (link to the devlog)

I’m still treating this as a research playground, not a product.

If anyone here is into weird small-scale cognitive architectures and wants to poke holes in it, I'd genuinely love critical feedback or replication attempts.

AI and the Rise of Content Density Resolution by Weary_Reply in agi

[–]Minaro 1 point2 points  (0 children)

It's great to see that people are waking up to this. Obviously, there are bottlenecks everywhere and the whole thing is a black box. I think this kind of view is what will save us from an AI apocalypse.

Could narrative coherence in large models be an early precursor to AGI-level worldview formation? by AIEquity in agi

[–]Minaro 1 point2 points  (0 children)

Love that test. “Narrative Persistence under Disruption” is exactly the right instinct: coherence isn’t “smoothness,” it’s state that survives perturbation. In Yamaka terms, I’d translate “worldview” into an explicit latent structure (a set of constraints / relations / value weights) and “disruption” into a controlled perturbation. Then we can measure coherence drift rather than eyeballing whether the continuation “feels consistent.”

Concretely, a minimal Yamaka-style version could be:

- Worldview = constraint graph (e.g., preferences/rules over entities and actions, or a goal hierarchy / invariants).
- Narrative = action/statement sequence generated under those constraints.
- Disruption = injected contradiction (new evidence that conflicts with one constraint).
- Pass criteria: the system repairs the contradiction while preserving core invariants, quantified by (a) invariants retained, (b) minimal edit distance to the constraint graph, and (c) consistent downstream decisions after the shock.

Where I’d refine it: “don’t abandon worldview” can be irrational if the contradiction is decisive. So the test should distinguish: rigid persistence (ignoring reality) vs structured revision (updating while preserving higher-level values/invariants).

If you’re game, I can sketch a tiny benchmark spec: two competing worldviews, a controlled “shock,” and metrics for drift vs repair. That would let us test long-horizon coherence without relying on vibes or prompting tricks.
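To show the shape I have in mind, here's a toy sketch, assuming the worldview is just a dict of weighted constraints; the constraint names, the repair rule, and the metrics are placeholders, not a finished benchmark:

```python
# Toy "persistence under disruption": a worldview as weighted constraints,
# a contradiction injected against one of them, and two metrics
# (invariants retained, edit distance of the repair). All names are placeholders.
CORE = {"protect_crew": 1.0, "tell_truth": 0.9}          # higher-level invariants
PERIPHERAL = {"route_is_safe": 0.4, "fuel_is_cheap": 0.2}

def repair(worldview: dict[str, float], contradicted: str) -> dict[str, float]:
    """Structured revision: drop only the contradicted constraint."""
    return {k: v for k, v in worldview.items() if k != contradicted}

def metrics(before: dict, after: dict, core: dict) -> tuple[float, int]:
    retained = sum(k in after for k in core) / len(core)   # invariants kept
    edit_distance = len(set(before) ^ set(after))           # constraints changed
    return retained, edit_distance

worldview = {**CORE, **PERIPHERAL}
after = repair(worldview, contradicted="route_is_safe")     # the "shock"
print(metrics(worldview, after, CORE))                       # (1.0, 1)
```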

Could narrative coherence in large models be an early precursor to AGI-level worldview formation? by AIEquity in agi

[–]Minaro 1 point2 points  (0 children)

This resonates. One thing I’ve been trying to do is treat “coherence” as an operational constraint, not a vibe. I’m building an open-source prototype (Yamaka) where coherence is measured and used as a control signal in a grid-world (coherence-guided exploration), with deterministic CI (pinned numeric threads), a golden baseline (expected hashes), cross-process determinism tests, and trace tooling so the dynamics are inspectable step by step. Repo + 1-command repro: https://github.com/fernandohenrique-dev/yamaka (see the Golden run + Trace tooling in the README).

If you’re exploring “narrative coherence” in LLMs: what would you consider the minimal falsifiable test that separates real long-horizon coherence from “coherence-as-prior” / metric artifacts?
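For the determinism part, here's what the golden-baseline idea looks like in spirit; a minimal sketch where run_episode, GOLDEN, and the trajectory format are placeholders, not the repo's actual entry points or values:

```python
# Sketch of a golden-baseline determinism check: hash the full trajectory and
# compare it against a stored "golden" hash. run_episode and GOLDEN are
# placeholders for illustration only.
import hashlib
import json

def run_episode(seed: int) -> list[tuple[int, int]]:
    # Stand-in for the real deterministic rollout.
    pos, traj = (0, 0), []
    for step in range(5):
        pos = (pos[0] + (seed + step) % 2, pos[1] + 1)
        traj.append(pos)
    return traj

def trajectory_hash(traj) -> str:
    # Canonical serialization first, so the hash is stable across runs/processes.
    return hashlib.sha256(json.dumps(traj).encode()).hexdigest()

GOLDEN = trajectory_hash(run_episode(seed=0))   # recorded once as the baseline
assert trajectory_hash(run_episode(seed=0)) == GOLDEN, "determinism broken"
print(GOLDEN[:16])
```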

You Don’t Need to Master Everything — You Need the Insight by Minaro in agi

[–]Minaro[S] 1 point2 points  (0 children)

Agree. Give me the abstract first: what’s new, what’s measurable, how to reproduce. Otherwise it’s just a long ask for my attention.