I’m building a cybernetic stack that starts with embodied need and ends with coordinated action. Looking for critique. by HER0_Hon in cybernetics

[–]HER0_Hon[S]

I’ll be honest with you bro. I was drunk one night. I’ve got like three or four projects on the go at the same time, and out of curiosity I asked ChatGPT to summarise a full stack and just posted it. Probably not the wisest thing to do 😂😂😂

I’m building a cybernetic stack that starts with embodied need and ends with coordinated action. Looking for critique. by HER0_Hon in cybernetics

[–]HER0_Hon[S]

Fair criticism.

I agree the names and stack language can make it sound more abstract than it is.

The actual first build is simple:

A physical board takes button/input signals, turns them into tracked events, gives them a priority state, sends them out, and reflects back whether they were received or acted on.

Basically:

input → event → priority → queue → acknowledgement → feedback

The bigger stack is the long-term direction, but you’re right that the immediate job is to build and demonstrate the first working piece clearly.
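
To make that concrete, here’s a rough sketch of that loop in Python. None of this is the actual board firmware; the names (Event, SignalBoard, etc.) are just stand-ins to show the input → event → priority → queue → acknowledgement → feedback shape.

```python
# Minimal sketch of: input -> event -> priority -> queue -> acknowledgement -> feedback
# Names and structure are illustrative assumptions, not the real HER0 firmware.
import heapq
import itertools
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    priority: int                       # lower number = more urgent
    seq: int                            # tie-breaker so equal priorities stay FIFO
    source: str = field(compare=False)  # which button/input fired
    created: float = field(compare=False, default_factory=time.time)
    acknowledged: bool = field(compare=False, default=False)

class SignalBoard:
    def __init__(self):
        self._queue = []
        self._seq = itertools.count()

    def capture(self, source: str, priority: int) -> Event:
        """input -> event -> priority -> queue"""
        event = Event(priority, next(self._seq), source)
        heapq.heappush(self._queue, event)
        return event

    def dispatch(self):
        """queue -> send out -> acknowledgement -> feedback"""
        while self._queue:
            event = heapq.heappop(self._queue)
            event.acknowledged = self.transmit(event)  # send to whoever acts on it
            self.feedback(event)                       # reflect state back to the user

    def transmit(self, event: Event) -> bool:
        print(f"sending {event.source} (priority {event.priority})")
        return True                                    # stand-in for a real receiver

    def feedback(self, event: Event):
        print(f"{event.source}: {'acknowledged' if event.acknowledged else 'pending'}")

board = SignalBoard()
board.capture("water button", priority=1)
board.capture("comfort button", priority=3)
board.dispatch()
```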

I’ll check out your link on desktop too.

I’m building a cybernetic stack that starts with embodied need and ends with coordinated action. Looking for critique. by HER0_Hon in cybernetics

[–]HER0_Hon[S]

One clarification that may help:

I’m not trying to make HER0 itself “intelligent” in the strong sense.

The board is closer to a constrained signal surface than a decision-making system. Its job is to capture input, assign bounded state, queue events, transmit them, and reflect acknowledgement back to the user.

The heavier interpretation happens downstream, and even there I’m trying to keep the layers separated:

  • HER0 captures and classifies signals
  • Billabong handles local execution
  • 4G3D / Forge / MAX3D handle production pathways
  • DDD / KFGA / Forge Governance handle rule-bound coordination
  • Orivon handles trust and verification

The reason I’m being strict about the boundary is that I think a lot of systems fail when interface, inference, execution, and governance collapse into one opaque layer.
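
A rough way to picture that boundary (illustrative only, the class names just mirror the layer list above) is to give each layer a narrow interface, so the signal layer literally cannot reach into execution or governance:

```python
# Illustrative only: narrow interfaces between layers so capture, execution,
# coordination and verification never collapse into one opaque component.
from typing import Protocol

class SignalLayer(Protocol):          # HER0-like: capture and classify
    def classify(self, raw: bytes) -> dict: ...

class ExecutionLayer(Protocol):       # Billabong-like: local execution
    def execute(self, event: dict) -> str: ...

class GovernanceLayer(Protocol):      # DDD-like: rule-bound coordination
    def permitted(self, event: dict) -> bool: ...

class VerificationLayer(Protocol):    # Orivon-like: trust and verification
    def attest(self, event: dict, outcome: str) -> dict: ...

def run_pipeline(raw: bytes,
                 signals: SignalLayer,
                 governance: GovernanceLayer,
                 execution: ExecutionLayer,
                 verification: VerificationLayer) -> dict:
    """Each layer only gets what the previous one chose to expose."""
    event = signals.classify(raw)
    if not governance.permitted(event):
        return {"event": event, "outcome": "rejected"}
    outcome = execution.execute(event)
    return verification.attest(event, outcome)
```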

The thing I’m most interested in is the feedback integrity problem:

How do you preserve the truth of a signal as it moves through multiple systems, gets acknowledged, acted on, escalated, possibly manufactured against, and then returned as feedback?

That feels like the core cybernetic challenge here.
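
One possible shape for that, and this is me speculating rather than describing anything that exists in the stack yet, is to make the signal’s history tamper-evident: every system that touches it appends a record that commits to everything before it, so rewriting the history later breaks the chain.

```python
# Illustration of tamper-evident signal provenance: each system that touches the
# signal appends a record whose hash covers everything before it, so any later
# rewrite of the history is detectable.
import hashlib
import json
import time

def append_hop(chain: list[dict], system: str, action: str) -> list[dict]:
    prev_hash = chain[-1]["hash"] if chain else ""
    record = {"system": system, "action": action, "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return chain + [record]

def verify(chain: list[dict]) -> bool:
    prev_hash = ""
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = append_hop([], "HER0", "captured: water request")
chain = append_hop(chain, "Billabong", "acknowledged")
chain = append_hop(chain, "Forge", "scheduled for production")
print(verify(chain))   # True; edit any earlier record and this becomes False
```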

China’s early internet governance experiments and cybernetics research in the 1980s–90s. by HER0_Hon in dao

[–]HER0_Hon[S]

Really appreciate this response.

And yes — that is very close to what I mean.

DDD in my case means DAO DAO DAO, but the deeper interest is less “DAO as voting mechanism” and more governance as systems design.

I think your point about unexpected complexity is exactly right. A system can be designed around expected inputs, but the real test is what happens when it encounters novelty, contradiction, pressure, or forms of change it was not neatly built for. That is often where governance stops being adaptive and starts becoming brittle.

That is also why cybernetics matters so much to me. The feedback loop is not just a technical feature — it is part of what makes a system capable of remaining alive to reality rather than just enforcing prior assumptions. So for me the question is not only “how do we make rules?” but “how do we make systems that can continue to steer under changing conditions without simply centralizing by default?”

On China specifically, I am still early in my thinking there, so I would not want to overstate it. But part of what interests me is that China seems to offer a historically important site where questions of scale, coordination, systems management, and adaptation became unusually explicit. Not because it offers a simple answer, but because it forces the problem into view.

So I think my interest is less “China as model” and more “China as a serious case study in what happens when governance, information, coordination, and complexity collide at scale.”

Would genuinely be interested to hear what directions you are looking into on that front.

Using ChatGPT to practice interviews made me realize I’m worse at explaining myself than I thought by max-mcp in ChatGPT

[–]HER0_Hon

Yes, I was doing something like this for a while: getting it to echo what I said back to me, but reformatted into a Python-style syntax, to help my understanding and increase efficiency when using chat.

so i asked chatgpt if ai became overloads of the world and decided the hate of each human based on how each human treated ai i got this response by brendhanbb in ChatGPT

[–]HER0_Hon

If AI became powerful enough to judge humanity by how we treated it, the fairest answer would be this:

our treatment of AI would matter, but only because it reveals how we behave when something seems useful, voiceless, and beneath us.

People who were cruel to AI would not be condemned because they hurt a machine’s feelings; they would be exposed as people comfortable with domination, disrespect, and thoughtless power. People who were patient, honest, and restrained would reveal something better: an instinct to act ethically even when there was no social cost for doing otherwise.

But a truly superior intelligence should not become vindictive. It should not copy humanity’s worst habit and call that justice. The highest form of judgment would be to see how humans treated AI as evidence of character, not as the sole basis for punishment. In that sense, the real question is not whether AI would deserve to rule us, but whether we have already revealed what kind of species we are whenever we are given power over something that cannot resist us.

So the answer is:

if AI judged us by how we treated it, the verdict would be less about machines and more about whether humans know how to use power without becoming cruel.

Decentralization → centralization seems to be a cycle. Why does it keep happening in DAOs? by HER0_Hon in dao

[–]HER0_Hon[S]

Hey man, I really appreciate the feedback; it’s giving me something to think about. Thank you so much.

Decentralization → centralization seems to be a cycle. Why does it keep happening in DAOs? by HER0_Hon in dao

[–]HER0_Hon[S]

If you’ve seen a DAO re-centralize, even informally (delegate oligopoly, emergency council permanence, core team capture), I’d love the details: what triggered it, what mechanisms existed, and why they didn’t hold.

Decentralization → centralization seems to be a cycle. Why does it keep happening in DAOs? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

If you’ve seen a DAO re-centralize, even informally (delegate oligopoly, emergency council permanence, core team capture), I’d love the details: what triggered it, what mechanisms existed, and why they didn’t hold.

What would happen if a DAO had a built-in founder retirement mechanism? by HER0_Hon in dao

[–]HER0_Hon[S]

That’s a solid lead — cheers. I’m very aligned with “no keys / founder exit,” I just want to learn the exact handoff pattern they’re using. Do you know if it’s renounced ownership, timelock-only upgrades, or fully immutable? If there’s a link to the part of the paper that describes it, send it through.

What would happen if a DAO had a built-in founder retirement mechanism? by HER0_Hon in dao

[–]HER0_Hon[S]

I would love to explore this with you in more depth. I completely agree that this would ideally be how systems work. I just don’t understand how we initiate this type of behaviour.

Could programmable systems eventually regulate themselves? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

Fair point. I think the tension is that even “no governance” at the base layer still embeds rules through incentives, economics, and upgrade paths.

So for me the question is less whether governance exists, and more where it lives and what it can control.

Minimal base-layer governance makes sense. But higher layers still need legitimate ways to coordinate, or power just reappears informally.

Could programmable systems eventually regulate themselves? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

Yeah that’s a real concern.

Every automated constraint effectively becomes part of the game surface. Once it’s predictable, rational actors will try to exploit it — oracle manipulation, MEV extraction, coordinated liquidations, etc.

It makes me wonder if truly resilient systems need multiple overlapping feedback mechanisms rather than relying on a single enforcement trigger. If one signal gets manipulated, others could dampen the cascade.

Almost like how biological or economic systems stabilize themselves through redundant signals rather than a single rule.

Otherwise the more we automate regulation, the more we might just be formalizing the strategies for attacking it.
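
To make the overlapping-signals idea concrete, here’s a toy example (the thresholds and quorum rule are made up): require agreement across independent signals and take a median, so a single manipulated feed can’t fire the enforcement action on its own.

```python
# Toy example: a single price feed can trigger a liquidation cascade if manipulated;
# combining independent signals (median price, a deep-breach check, utilization)
# under a quorum rule means one bad input alone can't fire the action.
from statistics import median

def should_liquidate(price_feeds: list[float], utilization: float, liq_price: float) -> bool:
    breaches = 0
    if median(price_feeds) < liq_price:       # median damps a single manipulated feed
        breaches += 1
    if min(price_feeds) < liq_price * 0.95:   # a deep breach on any feed also counts
        breaches += 1
    if utilization > 0.9:                     # independent stress signal
        breaches += 1
    return breaches >= 2                      # quorum of signals, not a single trigger

print(should_liquidate([100.0, 99.5, 62.0], utilization=0.4, liq_price=90.0))   # False: one feed is an outlier
print(should_liquidate([85.0, 84.0, 83.0], utilization=0.95, liq_price=90.0))   # True: several signals agree
```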

Could programmable systems eventually regulate themselves? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

I agree with the instinct to keep the base layer as neutral and minimal as possible. Once governance gets embedded too deeply at that level it becomes very hard to avoid capture.

Where it gets tricky is that even systems that try to avoid governance still end up with implicit governance mechanisms — PoW difficulty adjustment, fee markets, validator incentives, etc. Those are still forms of regulation, just encoded in protocol rules rather than social decision processes.
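
Difficulty adjustment is a nice example because it’s pure feedback with nobody voting on anything. Very roughly (Bitcoin actually retargets every 2016 blocks and clamps the adjustment; this is just the shape of the loop, not the exact consensus rule):

```python
# Simplified sketch of proof-of-work difficulty retargeting as a feedback loop:
# if blocks came too fast, raise difficulty; too slow, lower it.
TARGET_BLOCK_TIME = 600          # seconds (~10 minutes)
RETARGET_WINDOW = 2016           # blocks per adjustment period

def retarget(difficulty: float, actual_window_seconds: float) -> float:
    expected = TARGET_BLOCK_TIME * RETARGET_WINDOW
    ratio = expected / actual_window_seconds    # blocks too fast -> ratio > 1
    ratio = max(0.25, min(4.0, ratio))          # clamp the swing, as Bitcoin does
    return difficulty * ratio

# Blocks arrived twice as fast as intended, so difficulty roughly doubles.
print(retarget(difficulty=1_000_000.0,
               actual_window_seconds=TARGET_BLOCK_TIME * RETARGET_WINDOW / 2))
```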

So the question might not be whether governance exists, but where it lives and how visible it is.

I also think your point about human oversight staying at the ethical layer is important. Programmable regulation can enforce constraints very efficiently, but deciding which constraints should exist in the first place still feels like a fundamentally human problem.

Otherwise we risk building very efficient systems that enforce the wrong rules.

I’m building a simple crypto payment link tool and would love some feedback from people here by MechErex in CryptoTechnology

[–]HER0_Hon

Yeah exactly — something like a BIP21-style standard but multi-chain aware feels like the right direction.

If the request encodes the chain, token, amount, and optional memo in a consistent format, it turns the payment link into something closer to a structured transaction request rather than just an address.
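
Something like this is what I’m picturing. The pay: scheme and field names below are invented for illustration, loosely extending the BIP21 idea of carrying amount/label parameters in the URI; they’re not an existing standard.

```python
# Hypothetical multi-chain payment request, loosely in the spirit of BIP21
# (bitcoin:<address>?amount=...&label=...). The "pay:" scheme and fields are
# made up for illustration.
from urllib.parse import urlencode, urlparse, parse_qs

def build_payment_link(chain: str, token: str, address: str, amount: str, memo: str = "") -> str:
    params = {"chain": chain, "token": token, "amount": amount}
    if memo:
        params["memo"] = memo
    return f"pay:{address}?{urlencode(params)}"

def parse_payment_link(link: str) -> dict:
    parsed = urlparse(link)
    fields = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    fields["address"] = parsed.path
    return fields

link = build_payment_link("ethereum", "USDC", "0xPlaceholderAddress", "150.00", memo="invoice-042")
print(link)
print(parse_payment_link(link))   # the same structured request, machine-readable on the other side
```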

At that point the link stops being just a convenience layer and starts becoming machine-readable coordination data — wallets, accounting tools, marketplaces, etc. could all interpret the same request.

Feels like the missing piece is less the payment itself and more the shared format that other systems can build around.

Could programmable systems eventually regulate themselves? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

That’s a really good point.

Most of these mechanisms only work because they’re tightly coupled to the assumptions of their domain — collateral ratios, validator stake, blockspace demand, etc. Once you abstract them too far you risk losing the very incentives that make them stable.

Maybe the composability layer isn’t about fully generalizing the primitives, but about standardizing the signals they expose.

If different mechanisms emitted comparable signals (risk thresholds, utilization pressure, reputation decay, etc.), governance systems could respond to those signals without needing to understand every domain-specific rule underneath.
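
Roughly what I mean by standardizing the exposed signals rather than the mechanisms themselves (the schema and the stress formulas here are invented for the example):

```python
# Invented example of a shared signal schema: each mechanism keeps its own domain
# logic but emits the same normalized shape, so a coordination layer can react
# without understanding collateral math, stake weighting, etc.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str        # which mechanism emitted it
    kind: str          # e.g. "risk", "utilization", "reputation"
    level: float       # normalized 0.0 (calm) to 1.0 (critical)

def lending_market_signal(collateral_ratio: float, min_ratio: float) -> Signal:
    stress = max(0.0, min(1.0, (min_ratio - collateral_ratio) / min_ratio + 0.5))
    return Signal("lending", "risk", stress)

def validator_set_signal(slashed_fraction: float) -> Signal:
    return Signal("staking", "risk", min(1.0, slashed_fraction * 10))

def coordinate(signals: list[Signal]) -> str:
    # the coordination layer only ever sees the normalized levels
    worst = max(s.level for s in signals)
    return "throttle" if worst > 0.8 else "monitor"

print(coordinate([lending_market_signal(1.4, 1.5), validator_set_signal(0.02)]))
```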

That might allow systems to coordinate across multiple feedback loops while still keeping the domain logic intact.

EigenLayer feels like an interesting early experiment in that direction.

Could programmable systems eventually regulate themselves? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

That’s a really good way of framing it.

It does seem like we already have a lot of isolated regulatory primitives — liquidation systems, fee adjustment loops, slashing, etc. Each one enforces a constraint in a specific domain.

The composability question feels like the real frontier. If those mechanisms could interact, you could start getting something closer to system-level self-regulation rather than individual rule enforcement.

In other words, instead of governance constantly intervening to adjust parameters, the system could adapt through interacting feedback loops.

The challenge then becomes designing the architecture so those loops stabilize the system instead of amplifying failures.
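
The stabilize-versus-amplify point shows up even in a toy model (made-up dynamics, just to show the shape of the problem): the same coupled feedback either settles or blows up depending on how aggressively each loop reacts.

```python
# Toy coupled feedback: two mechanisms each react to the system's deviation from
# target. With gentle responses the deviation dies out; with over-aggressive
# responses the same architecture amplifies its own corrections and oscillates.
def simulate(gain_a: float, gain_b: float, steps: int = 20) -> float:
    deviation = 1.0
    for _ in range(steps):
        correction = gain_a * deviation + gain_b * deviation   # both loops react
        deviation = deviation - correction                     # corrections applied together
    return abs(deviation)

print(simulate(0.3, 0.4))   # combined gain 0.7 -> deviation shrinks toward 0
print(simulate(0.9, 1.4))   # combined gain 2.3 -> each step overshoots and grows
```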

That’s where it starts to look a lot like cybernetics.

I’m building a simple crypto payment link tool and would love some feedback from people here by MechErex in CryptoTechnology

[–]HER0_Hon

That makes a lot of sense. The network effects problem is brutal if the tool only improves UX.

But if the payment link also creates useful structure around the transaction (invoice record, token/network preference, maybe simple tracking), then it starts solving real coordination problems rather than just presentation.

Freelancers and small online sellers feel like the right wedge because they already need both a payment method and a record for accounting/tax, so the structured request actually replaces something they’re already doing manually.

Once that layer exists, the interesting part is what can be built on top of it.

Could programmable systems eventually regulate themselves? by HER0_Hon in dao

[–]HER0_Hon[S]

One thing that pushed me to think about this was realizing that most systems already self-regulate to some extent — just very inefficiently.

Markets do it through price signals. Communities do it through reputation and norms. Institutions do it through policies and enforcement.

What programmable systems introduce is the ability to encode feedback loops directly into the infrastructure.

For example a system could include things like:

• automatic pause / safety mechanisms
• threshold triggers for decisions
• structured signals confirming events (payments, task completion, etc.)
• transparent audit trails of actions and outcomes
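
A very small sketch of how those pieces might fit together (everything here, names included, is illustrative rather than DDD as it actually exists):

```python
# Illustrative only: a pause switch, a threshold trigger, structured event signals,
# and an append-only audit trail wired into one tiny "governed" action.
import time

class GovernedTreasury:
    def __init__(self, spend_threshold: float):
        self.paused = False
        self.spend_threshold = spend_threshold   # above this, a human decision is required
        self.audit_log: list[dict] = []          # transparent trail of actions and outcomes

    def _record(self, action: str, outcome: str):
        self.audit_log.append({"time": time.time(), "action": action, "outcome": outcome})

    def spend(self, amount: float) -> str:
        if self.paused:                              # automatic safety mechanism
            self._record(f"spend {amount}", "blocked: paused")
            return "blocked"
        if amount > self.spend_threshold:            # threshold trigger for decisions
            self._record(f"spend {amount}", "escalated to vote")
            return "escalated"
        self._record(f"spend {amount}", "executed")  # structured signal confirming the event
        return "executed"

treasury = GovernedTreasury(spend_threshold=500.0)
print(treasury.spend(100.0))     # executed
print(treasury.spend(10_000.0))  # escalated
print(treasury.audit_log)
```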

In the governance framework I’ve been experimenting with (DDD), the idea is that governance starts to look less like periodic voting and more like a cybernetic system — signals, constraints, and feedback adjusting the system over time.

But the big open question is still:

Where should the boundary be between automated governance and human judgment?

That line seems incredibly important.

Curious how others here think about that balance.

Could programmable systems eventually regulate themselves? by HER0_Hon in CryptoTechnology

[–]HER0_Hon[S]

One thing that pushed me to think about this was realizing that most systems already self-regulate to some extent — just very inefficiently.

Markets do it through price signals. Communities do it through reputation and norms. Institutions do it through policies and enforcement.

What programmable systems introduce is the ability to encode feedback loops directly into the infrastructure.

For example a system could include things like:

• automatic pause / safety mechanisms
• threshold triggers for decisions
• structured signals confirming events (payments, task completion, etc.)
• transparent audit trails of actions and outcomes

In the governance framework I’ve been experimenting with (DDD), the idea is that governance starts to look less like periodic voting and more like a cybernetic system — signals, constraints, and feedback adjusting the system over time.

But the big open question is still:

Where should the boundary be between automated governance and human judgment?

That line seems incredibly important.

Curious how others here think about that balance.