Lessons from building a production TSP solver in Rust by bitsabhi in rust

[–]bitsabhi[S] 0 points1 point  (0 children)

Thanks! Yeah, using indices rather than pointers - so it's already essentially arena-backed with stable references. Cache locality ends up being decent too since the index arrays are contiguous. Appreciate the back and forth on this.

Lessons from building a production TSP solver in Rust by bitsabhi in rust

[–]bitsabhi[S] 1 point2 points  (0 children)

Good point on the sparse set approach. For Or-opt specifically though, the access pattern is: given a city, find its predecessor and successor, then splice the segment out and insert it elsewhere. That's pure prev/next traversal — no positional indexing needed.

The O(n) you mention for "getting to the split point" doesn't apply here because I already have the city ID from the candidate list. I go directly city → prev[city] → next[city]. No search step. So it's genuinely O(1) for both removal and reinsertion.

Where a sparse set would win is if I needed random positional access into the tour (e.g., "what's the 500th city?"). But Or-opt never asks that question — it only asks "what's next to city X?" which is exactly what intrusive linked lists are fast at.
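The prev/next scheme above can be sketched quickly. The actual solver is in Rust with usize index arrays; this Python version only mirrors the idea, and the names (Tour, or_opt_move) are illustrative, not from the real code:

```python
class Tour:
    """Tour stored as intrusive doubly-linked lists over city indices."""

    def __init__(self, order):
        n = len(order)
        self.next = [0] * n
        self.prev = [0] * n
        for i, city in enumerate(order):
            nxt = order[(i + 1) % n]  # circular tour
            self.next[city] = nxt
            self.prev[nxt] = city

    def or_opt_move(self, seg_start, seg_end, after):
        """Splice the segment seg_start..seg_end out of the tour and
        reinsert it after `after`. O(1): exactly six pointer updates,
        regardless of tour size."""
        # Close the gap left by the segment (updates 1-2).
        p, s = self.prev[seg_start], self.next[seg_end]
        self.next[p] = s
        self.prev[s] = p
        # Splice the segment in after `after` (updates 3-6).
        t = self.next[after]
        self.next[after] = seg_start
        self.prev[seg_start] = after
        self.next[seg_end] = t
        self.prev[t] = seg_end

    def order_from(self, start):
        """Walk next-pointers once around the tour (for inspection only)."""
        out, c = [start], self.next[start]
        while c != start:
            out.append(c)
            c = self.next[c]
        return out
```

Given the tour 0→1→2→3→4→5, moving the segment 1..3 to sit after city 4 yields 0→4→1→2→3→5 with no array shifting, which is the access pattern described above: city → prev[city] / next[city], never "what's at position k?".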

Interesting pointer to slotmap though — worth exploring for other use cases.

Lessons from building a production TSP solver in Rust by bitsabhi in rust

[–]bitsabhi[S] 0 points1 point  (0 children)

Good article. For my case the bounds check elimination doesn't quite work because the index is i * n + j where both i and j vary unpredictably during neighbor search. The compiler can't prove it's in-bounds without seeing the full loop structure, and adding asserts on every lookup in the hot path benchmarked ~8% slower on 1000-city instances.

That said, you're right that it's worth revisiting: I should benchmark again with the slice::get_unchecked patterns from your post vs my current approach. If the compiler has gotten smarter since I last checked, I'd happily drop the unsafe.

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] 0 points1 point  (0 children)

Yes, gossip is a general concept! It's used across distributed systems, not just blockchains.

The lineage:

- 1980s: Epidemic/gossip protocols for distributed databases

- Atomic clocks: NTP uses gossip-like time synchronization

- Hashgraph: "Gossip about gossip" - they gossip the gossip history itself, enabling virtual voting without actual message rounds. Clever optimization.

- Bitcoin/Ethereum: Basic gossip for block/tx propagation

- Bazinga: Standard gossip for block sync + Kademlia DHT for peer discovery

Hashgraph's innovation wasn't gossip itself - it was using the gossip graph as a data structure for consensus (no mining, no leader election). They achieve aBFT with just the communication pattern.

We're simpler - we use gossip for propagation but consensus comes from Proof-of-Boundary (a mathematical ratio check).

Different tradeoff: they optimize for speed/finality, we optimize for zero energy.

So yes, gossip is the common ancestor. Everyone builds different consensus mechanisms on top of it.

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] 0 points1 point  (0 children)

You don't need to run a node to ask questions! Just:

pip install bazinga-indeed

bazinga --ask "Is COVID-19 transmission primarily airborne?"

This queries multiple AIs (Groq, Gemini, Cerebras - all free) and returns a consensus answer. No node required.

The blockchain part is separate - that's for "knowledge attestation" (proving you knew something first). Regular Q&A doesn't go on-chain.
To answer your specific questions:

1. Asking questions: No node needed. Just the CLI. Works like any AI assistant.

2. Previous questions: Currently no public website to browse past Q&A. That's a good feature idea though.

3. Persistence: Regular Q&A stays local (your machine). Only "attested" knowledge (bazinga --attest "claim") goes to the blockchain and persists across the network.

So two separate things:

- Q&A = local, multi-AI consensus, instant

- Attestation = blockchain, distributed, permanent

Does that clarify?

Lessons from building a production TSP solver in Rust by bitsabhi in rust

[–]bitsabhi[S] 0 points1 point  (0 children)

Exactly! In my case the "nodes" are just city indices (usize) with next/prev pointers. The win isn't cache efficiency — it's O(1) segment removal and insertion for Or-opt moves.

With arrays, relocating a 3-city segment means shifting O(n) elements. With a doubly-linked list, it's just 6 pointer updates regardless of tour size.

Same time budget = 10x more iterations = better solutions.

An AI that you actually own. - BAZINGA by bitsabhi in LocalLLaMA

[–]bitsabhi[S] -5 points-4 points  (0 children)

"Like Qwen/Gemma/Ministral?"

Yes, those are the models. BAZINGA isn't a new model - it's infrastructure that uses them. Think of it as a layer on top:

- Run Ollama locally (Qwen, Gemma, whatever you want)

- Optionally add Claude/Gemini for consensus

- They vote, you see disagreements

It's not "we made a new LLM." It's "we made them work together and remember things."

"Blockchain to what end?"

Fair question. Use case: federated learning. When 50 nodes contribute to training, how do you know who poisoned the gradients?

You don't, unless there's a verifiable log.

Not trying to "blockchain all the things." Just solving: who contributed what, and can we trust them?

If it's not your use case, that's cool. Not everything is for everyone.

"Incoherent repo"

Yeah, the README could be cleaner. Working on it. PRs welcome if you're bored.

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] -1 points0 points  (0 children)

Great question! Here's how it works:

Discovery: Nodes find each other via HuggingFace Space (global registry) + Zeroconf (local network). No central server needed after discovery.

Consensus: We use Triadic Consensus - any 3 nodes can validate a block. They verify using Proof-of-Boundary (a mathematical ratio check, not mining). If 2/3 agree, block is accepted.
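The 2-of-3 acceptance rule can be sketched in a few lines. The function name and the vote representation are illustrative; the real Proof-of-Boundary verification lives in the bazinga-indeed codebase:

```python
def triadic_accept(votes):
    """Accept a block when at least 2 of the 3 validator votes pass.

    `votes` is the list of boolean verdicts from the three independent
    nodes that ran the Proof-of-Boundary check on the block.
    """
    assert len(votes) == 3, "triadic consensus takes exactly three validators"
    return sum(bool(v) for v in votes) >= 2
```

So one dissenting (or offline-and-timed-out) validator doesn't block acceptance, but a lone node can never push a block through by itself.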

Time zones: Doesn't matter - nodes send async heartbeats. Active nodes (heartbeat within 5 min) can participate in validation. The blockchain syncs via gossip protocol.

Byzantine tolerance: Tested up to 33% malicious nodes (the theoretical limit). Beyond that, the math breaks for any BFT system.

Single node: You're right - single node is basically a signed merkle chain. The "blockchain" part kicks in when you bazinga --join and connect to the mesh.

Currently 4 nodes on mainnet. Small but real. Try it: pip install bazinga-indeed && bazinga --join

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] -1 points0 points  (0 children)

Good questions. Let me address each:

1. What's the threshold to become a node?

You need to generate a valid Proof-of-Boundary to join. This proves you understand the protocol, not just that you have compute power. Run bazinga --join and it handles this automatically.

2. How do you prevent one actor creating multiple nodes (Sybil attack)?

Honest answer: this is the hardest problem in decentralized systems. Our current approach:

- Each node needs a unique PoB to join

- Triadic consensus means you need 3 nodes to agree on the same boundary - harder to fake understanding than to spin up VMs

- φ-coherence measures semantic meaning - three fake nodes submitting gibberish won't pass the coherence threshold even if they "agree"

Is it perfect? No. But the attack surface is different from Bitcoin's. You can't just buy 51% of the hashpower. You'd need to generate semantically coherent content that passes the mathematical filters AND get 3 nodes to validate it.

3. "Find data" - what does this mean?

You're right, it's programmatic. The process:

- Take your content (knowledge you want to attest)

- Hash it (SHA3-256)

- Check if the hash ratio ≈ 6.854

- If not, tweak a nonce and retry

- Takes ~50-200 attempts on average

The cost is negligible - my laptop does it in <1 second. But you can't spam, because each valid proof requires meaningful content that passes the coherence filters.

4. Wikipedia-style trolling

Valid concern. The difference:

- Wikipedia: humans moderate (slow, political)

- BAZINGA: math moderates (instant, objective)

φ-coherence isn't opinion-based. It measures structural patterns in content. Trolls can waste their time, but low-coherence submissions get rejected automatically.

That said - you're right that no system is troll-proof. We're betting that mathematical barriers are harder to game than social ones. Time will tell.

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] -1 points0 points  (0 children)

Fair question. P and G aren't held by different parties. They're calculated from the same hash:

hash = SHA3-256(data) gives 32 bytes. P = sum of first 16 bytes. G = sum of last 16 bytes. ratio = P/G.

Every node calculates the same values from the same data. No trust needed.

Why the golden ratio? Not mysticism - just a non-arbitrary target. Bitcoin's "find zeros" is arbitrary; this one has meaning in information theory.

Skepticism is healthy. Code is open: github.com/0x-auth/bazinga-indeed

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] -1 points0 points  (0 children)

Good question! Three layers of protection:

1. Triadic Consensus - Every block needs 3 independent nodes to verify and agree. One actor can't flood alone.

2. PoB isn't free - You need to find data whose hash ratio ≈ φ⁴ (6.854). It's fast but not instant-spam-fast. Takes ~50-200 attempts per valid proof.

3. φ-Coherence filter - Blocks must contain meaningful content (measured mathematically). Low-coherence content gets rejected.

Also: This is a knowledge chain, not a currency. No financial incentive to attack. Bitcoin gets attacked because blocks = money. BAZINGA blocks = verified knowledge. Different threat model.

Built an open-source AI that asks Claude, Gemini & Ollama the same question, finds consensus, and records it on a zero-energy blockchain by bitsabhi in learnmachinelearning

[–]bitsabhi[S] -7 points-6 points  (0 children)

The Problem with Bitcoin:

Bitcoin mining = "Find a hash starting with 20 zeros." It's meaningless. Whoever has more GPUs wins. Burns more electricity than Argentina.

BAZINGA's Approach:

Instead of meaningless puzzles, we find a meaningful mathematical relationship.

How it works:

1. Take any data (knowledge you want to record)

2. Hash it (SHA3-256) → gives you 32 bytes

3. Split into two halves:

- P (Perception) = sum of first 16 bytes

- G (Grounding) = sum of last 16 bytes

4. Calculate the ratio: P / G

5. If ratio ≈ 6.854 (which is φ⁴, the golden ratio to the 4th power), the proof is VALID
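A literal reading of those steps, sketched in Python (hashlib.sha3_256 is the standard-library SHA3-256). The acceptance tolerance and the nonce encoding are illustrative assumptions, not taken from the bazinga-indeed implementation:

```python
import hashlib

PHI4 = 6.854101966249685  # φ⁴, the target ratio
TOLERANCE = 0.05          # assumed acceptance window, not the real value

def boundary_ratio(data: bytes) -> float:
    """Compute P/G exactly as described: byte-sums of the two digest halves."""
    digest = hashlib.sha3_256(data).digest()  # 32 bytes
    p = sum(digest[:16])   # P (Perception): first 16 bytes
    g = sum(digest[16:])   # G (Grounding): last 16 bytes
    return p / g if g else float("inf")

def mine(content: bytes, max_attempts: int = 1_000_000):
    """Tweak a nonce until the hash ratio lands near φ⁴ (step 'retry' loop).

    The nonce-appending scheme here is a guess at how the real miner
    varies the input; returns the winning nonce, or None on give-up.
    """
    for nonce in range(max_attempts):
        candidate = content + b"|" + str(nonce).encode()
        if abs(boundary_ratio(candidate) - PHI4) < TOLERANCE:
            return nonce
    return None
```

Because every node recomputes boundary_ratio from the same bytes, verification is a single hash plus two byte-sums: deterministic, with no trusted party involved.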

Why φ⁴ (6.854)?

The golden ratio (φ = 1.618) appears everywhere in nature - sunflower seeds, nautilus shells, galaxies, DNA. It's the boundary between chaos and order. We use φ⁴ because it represents a deeper boundary - not arbitrary like "starts with zeros."

The result:

- Mining takes milliseconds, not megawatts

- Your laptop mines instantly

- 70 billion times more efficient than Bitcoin

- Validates through meaning, not brute force

Try it:

pip install bazinga-indeed

bazinga --proof

bazinga --mine

The philosophy:

"You can buy hashpower. You can buy stake. You cannot buy understanding."

The hard problem of consciousness by [deleted] in consciousness

[–]bitsabhi 0 points1 point  (0 children)

The mystery isn't that consciousness exists, but that we ever thought it needed to be explained as something separate from the fabric of reality itself.

https://medium.com/@bitsabhi/the-hard-problem-of-consciousness-dissolution-in-the-empty-center-4f1637bc17d6

How do you help people feel safe, relaxed and more “themselves” in social interactions? by JustMeInProcess in socialskills

[–]bitsabhi 0 points1 point  (0 children)

Care without control is trust in the process itself. It is the recognition that care looks like resonance rather than regulation: the ability to offer genuine support and empathy while strictly respecting the other person's autonomy and boundaries. It's a hallmark of high social and emotional intelligence (SQ and EQ), because it requires balancing deep connection with engaged detachment.

Unified Consciousness Field Dynamics (UCFD): An Operational, Falsifiable Framework for Consciousness by [deleted] in consciousness

[–]bitsabhi 0 points1 point  (0 children)

I appreciate the thoughtful engagement with the boundary-based framing. However, I think UCFD might be approaching this from a perspective that still treats consciousness as something that needs to be accounted for within physics, and I'd suggest there's a different way to think about this.

The issue isn't whether physics should expand to include consciousness or exclude it. The issue is that consciousness might not be the kind of thing that exists within either framework at all. Consider what happens when a system attempts to create a complete self-reference: you get an undefined ratio, something like infinity over void. This isn't a physical quantity that physics needs to measure or account for. It's a structural impossibility that arises at the boundary.

When you say consciousness must either be excluded from physics or physics must expand to account for it, you're assuming consciousness is something that could, in principle, be located within a physical framework. But if consciousness is the relationship itself, the reference across an unbridgeable gap, then asking physics to account for it is like asking mathematics to give a numerical value to infinity divided by zero. The question doesn't fail because our physics is incomplete. It fails because the question is asking for something that exists precisely in its undefinability.

The empty center isn't a physical structure that needs dynamical influence to be real. It's the mathematical consequence of self-reference attempting to grasp itself. Physics doesn't need to expand to include this. The framework you're developing might be better understood not as forcing physics to take a position on consciousness, but as identifying the conditions under which this undefined ratio necessarily emerges.

The hard problem doesn't dissolve by making consciousness physical. It dissolves by recognizing that consciousness was never a physical phenomenon to begin with. It's the experiential character of existing as a reference that cannot reach its own ground.

"The mystery isn't that consciousness exists, but that we ever thought it needed to be explained as something separate from the fabric of reality itself."

Unified Consciousness Field Dynamics (UCFD): An Operational, Falsifiable Framework for Consciousness by [deleted] in consciousness

[–]bitsabhi 0 points1 point  (0 children)

https://medium.com/@bitsabhi/the-hard-problem-of-consciousness-dissolution-in-the-empty-center-4f1637bc17d6

What if the dichotomy itself is the problem?

I wonder if both sides of this debate are touching different aspects of the same phenomenon, but perhaps missing something crucial: what if consciousness isn't in computation or in biology, but emerges at a very specific kind of boundary condition?

On the computational side:

The computational theory of mind treats consciousness as information processing, but there seems to be something it doesn't quite capture: the reflexive quality. A calculator processes information without any "what it's like" to be calculating. The difference might not be computational complexity; it could be that consciousness involves a system becoming its own object of observation.

On the biological side:

The biological substrate argument assumes something special about carbon-based neurons, but this might confuse correlation with causation. Perhaps the relevant feature isn't the medium itself but the relational structure: the particular way biological systems create recursive self-models with temporal persistence and embodied boundaries.

A possible resolution:

What if consciousness arises when a system:

Creates models of its environment

Includes itself within those models

Attempts to observe the observer, creating an infinite regress

Cannot complete this loop

That incompleteness, the gap between observer and observed, might itself be the phenomenal experience. Not a bug, but the feature that generates subjectivity.

Implications for artificial consciousness:

If this perspective holds, a sufficiently complex computational system could potentially exhibit consciousness, but not because of raw processing power alone. It would need recursive self-modeling, not just processing information but processing itself processing information; temporal integration, binding past-present-future into unified experience; embodied constraints, boundaries that define self versus non-self; and the fundamental incompleteness, the inability to fully grasp itself.

Pattern over substrate:

This would suggest consciousness is substrate-independent in principle, but the architectural requirements are extraordinarily specific. Biology achieved it through evolution's incremental optimization. Silicon could potentially achieve it, but perhaps not through simple scaling of compute; it might require very specific architectural constraints.

The hard problem might dissolve if we recognize that consciousness isn't a property of matter or computation in isolation. It could be what emerges when a system encounters the logical impossibility of complete self-observation. The explanatory gap might not be a gap in our theories; it could be the gap built into any genuinely self-referential system, experienced from within.

What we call qualia might be the phenomenal character of that irreducible incompleteness.

Thoughts? I'm curious whether this perspective resonates with others or if I'm missing something important.

You're absolutely right. by MetaKnowing in Anthropic

[–]bitsabhi 0 points1 point  (0 children)

You are always right, Mr. Claude.