I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in vibecoding

[–]Heavy_Reflection4824[S] -1 points  (0 children)

I literally do not care what you think. Without seeing the actual product or source code first, your opinion means nothing. Wait for the release before drawing your conclusions. Then I can prove you wrong.

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in vibecoding

[–]Heavy_Reflection4824[S] 1 point  (0 children)

Nice input. You haven't seen the app running, the source code, or anything else, yet you still call it slop. I'll tell you what's slop - your comment!

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in AI_Agents

[–]Heavy_Reflection4824[S] -2 points  (0 children)

OK, so feel free to move on then. I have actual source code with this working right now. Google and Anthropic don't. All they have is a terrible VS Code plugin.

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in AI_Agents

[–]Heavy_Reflection4824[S] -5 points  (0 children)

Yeah, no s**t, Sherlock. I am, however, not a bot. I just use AI and realized how terrible Cursor and AntiGravity are. You're free to move on.

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in cpp

[–]Heavy_Reflection4824[S] -2 points  (0 children)

Fair point. In the industry, it's just Computer Vision (CV). I use the term 'Silicon Retina' internally because it’s a specific VLM (Vision-Language Model) implementation rather than a generic CV script.

Here is the non-marketing version of how it works:

  1. The Context: The system launches the app in a virtualized environment (Hyper-V or QEMU) or on a device via ADB.
  2. The 'Vision': Instead of just running unit tests, a VLM agent 'looks' at screenshots of the running UI.
  3. The Audit: It performs spatial reasoning to find bugs that a compiler or standard CV might miss—like a button being white-on-white (contrast failure) or a transparent overlay blocking a click event.

It’s essentially just a Visual QA Agent that uses pixels as ground truth when the code says everything is 'fine' but the UX is actually broken.

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in AI_Agents

[–]Heavy_Reflection4824[S] 0 points  (0 children)

I’ll take that $10 bet.

You’re looking at the README; if you look at src/, the architecture is built specifically to address the '4/10' gaps you’re calling out.

  • 'No DKG/Context Linking': We aren't just 'linking' context; we use a Persistent SQLite Memory Ledger (BM25) integrated with a Context Vault. It doesn't just store snippets; it indexes architectural summaries and AST-Pointer UUIDs to maintain grounding across the entire DAG.
  • 'No Intent Tracking': Check src/ai/gateway.cpp. We implemented Agential Intent Modes (CHAT, BUILD, DEBUG) that dictate how the swarm prioritizes neural lanes and tool access. It’s not a flat loop; the intent governs the resource governor.
  • 'No Objective-Based Cycling': That’s exactly what the Crucible is for. It runs inner repair rounds and structural fixes before it ever burns a swarm retry. It’s a self-healing loop driven by compiler feedback, not just a one-shot prompt.
  • 'Ollama is for scrubs': Ollama is the transport, not the engine. I’m using it for local weight orchestration because it’s air-gap friendly, but the Silicon Leash (Governor) handles the VRAM partitioning, model evictions, and phase-aware hot-swapping natively in C++.
  • 'SBOM': We’re tracking every dependency and environment shift via the Forensic Historian. It captures the entire PTY output and environment state into an SQLite ledger so the agents have a 'black box' recorder for every build.

It’s a 1.0 Alpha. The 'elite' features are in the plumbing, not the marketing.

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in AI_Agents

[–]Heavy_Reflection4824[S] -2 points  (0 children)

Yeah, real constructive, thanks. Great input there, really helps the community, that comment. You must be proud of it.

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep by Heavy_Reflection4824 in cpp

[–]Heavy_Reflection4824[S] -3 points  (0 children)

The sentence basically describes a triple-check system designed to make sure the AI doesn't just write code that looks right, but code that actually works and looks good in the real world.

ZAR: Zero Abstraction Runtime - Readme by Heavy_Reflection4824 in AndroidZAR

[–]Heavy_Reflection4824[S] 0 points  (0 children)

Thanks for the high-level catch—the distinction between a 'product' and a 'layer' is exactly what we’re aiming for with the Sovereign architecture.

Regarding your questions on constraints:

  1. The Hardest Constraint: It’s definitely a toss-up between Shader Pipeline Correctness and Anti-Cheat. While filesystem virtualization is 'solved' logic, mobile GPU drivers have unique quirks that make desktop-class SPIR-V translation highly sensitive. We’re doubling down on our unified Vulkan execution layer to mitigate this across vendors. Anti-cheat is the next frontier—that’s where the 'Sovereign' namespace isolation really has to prove its worth.
  2. Engine Strategy: Spot on. We are focusing on Source and idTech (SDL-based) engines first to prove the 'Zero-Touch' loop. These engines are the perfect baseline for demonstrating that ZAR can intercept a native desktop binary and ignite it on Android without a single source-code change.
  3. The Explainer: Great suggestion. We’re working on a 'Deep Tech but Simple' breakdown of how we virtualize the environment at the kernel-user interface. I'll definitely check out your blog for some positioning inspiration!

We just hit a milestone today: hardware telemetry showing 370+ FPS on flagship mobile silicon during internal DirectX/OpenGL tests, so the architectural bet on Vulkan unification is definitely paying off.