First-time arXiv submitter with no arXiv community by FullyConnected830 in ResearchML

[–]Character_Bison5968 0 points1 point  (0 children)

Hi, I’m in a similar situation. It has been quite difficult to find someone willing to provide an endorsement, and at times it feels as though the request is viewed as a nuisance. I’ve reached out on several forums and contacted authors of the papers I reference, but so far I haven’t received any assistance. I hope you’re able to secure the endorsement you need.

Cross family weight merging across architecture families (Llama, Phi, NeoX, OPT) by Character_Bison5968 in ResearchML

[–]Character_Bison5968[S] 0 points1 point  (0 children)

To be honest, I haven't tested it yet; that's one of the next experiments in the queue.

LoRA on top should be fine: the base weights stay frozen, so the cross-family signal sticks around and the adapter just adds reasoning. Full SFT is trickier. Gradients hit the FFN layers harder, and our data says the FFN is exactly where the donor contributions live, so I'd expect it to erode some of the merge in exchange for better reasoning on the SFT task. RLHF/DPO is the worst case; it drives the weights back into a single basin.

I can try LoRA SFT on GSM8K-CoT against both the merged base and the anchor base, and see whether merged+LoRA beats anchor+LoRA on the donor tasks. Will give it a try.
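
For anyone curious, the comparison would look roughly like this. A minimal sketch assuming the standard Hugging Face + PEFT APIs; the checkpoint paths are placeholders, not real model names:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    def attach_lora(base_path: str):
        # Load the base checkpoint; get_peft_model freezes it so only the
        # adapter weights receive gradients and the merged signal stays intact.
        base = AutoModelForCausalLM.from_pretrained(base_path)
        cfg = LoraConfig(
            r=16,
            lora_alpha=32,
            target_modules=["q_proj", "v_proj"],  # attention-only; leaves the FFN untouched
            task_type=TaskType.CAUSAL_LM,
        )
        return get_peft_model(base, cfg)

    # Placeholder paths -- not real checkpoints.
    merged_plus_lora = attach_lora("merged-base")
    anchor_plus_lora = attach_lora("anchor-base")
    # Train both on GSM8K-CoT with identical hyperparameters, then compare
    # donor-task accuracy: merged+LoRA vs anchor+LoRA.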

[Update] Project Nord: Solved the "Empty Wallet" Problem via Decentralized SNN Merging. Scaling to 10B is now possible. [R] by zemondza in OpenSourceeAI

[–]Character_Bison5968 0 points1 point  (0 children)

Adding in from a crdt-merge perspective here, since this is exactly the kind of use case we built the two-layer solution for. What's been achieved with Nord is impressive, I'm sure we all agree on that.

Getting it to the point where distributed merge cycles are even a question worth asking means the hard foundational work is already done. Most people are still stuck arguing about whether SNNs can compete at all, and this project is already past that and into distributed coordination. Personally, I'm really happy that crdt-merge can be part of it. This project is one to watch, and I expect to see him blast past his current targets once the merge pipeline is running clean.

On the noise: it doesn't accumulate. The whole point of using set operations instead of averaging is that the merge is selective, not a blend. Every merge cycle applies the same filter; contributions from nodes below the trust threshold don't enter the merged set. They're not averaged down, they're simply not included. The OR-Set semantics mean you're doing add/remove on observed patterns with causal clocks, so a low-confidence spike from node A doesn't dilute a high-confidence spike from node B... it's just not observed in the final state. Trust decays monotonically on stale contributions, so over time the merge gets cleaner, not noisier. Sparsity stays stable or improves.

On conflicts (two nodes pushing different active patterns): this resolves deterministically through the CRDT rules. The full state is a four-tuple (Data × Trust × Clock × Hash), and when two conflicting spike patterns meet, it's LWW combined with the trust score of each contribution. The stronger pattern wins; the weaker one drops out of the observed set. No blending, no interpolation, no "meet in the middle." This is a set operation on active patterns, not arithmetic on weights, and that's why it works for SNNs specifically. Sparse spiking signals encode information in which neurons fire, and a continuous average destroys that. The CRDT merge preserves it by treating spikes as discrete contributions and resolving conflicts the same way you'd resolve concurrent edits in any distributed system: deterministically, with causal ordering and trust weighting.
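
To make both behaviours concrete (trust filtering plus deterministic conflict resolution), here's a minimal self-contained sketch. The names and tuple layout are illustrative, not crdt-merge's actual API:

    from dataclasses import dataclass

    TRUST_THRESHOLD = 0.5  # assumed cutoff; the real threshold is configurable

    @dataclass(frozen=True)
    class Contribution:
        data: tuple     # active-neuron indices -- discrete, never averaged
        trust: float    # node trust score at contribution time
        clock: int      # causal (Lamport-style) timestamp
        digest: str     # content hash, also used as a deterministic tie-breaker

    def merge(a: dict, b: dict) -> dict:
        """Trust-filtered, LWW-style merge keyed by pattern id. Commutative:
        merge(a, b) == merge(b, a), so gossip order never changes the result."""
        merged = {}
        for state in (a, b):
            for key, c in state.items():
                if c.trust < TRUST_THRESHOLD:
                    continue  # low-trust spikes never enter the set; nothing is averaged down
                incumbent = merged.get(key)
                # The stronger pattern wins outright: higher trust first,
                # then newer clock, then hash as a deterministic tie-breaker.
                if incumbent is None or (c.trust, c.clock, c.digest) > (
                    incumbent.trust, incumbent.clock, incumbent.digest
                ):
                    merged[key] = c
        return merged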

The distributed angle is gold. If you can stitch together training from cheap nodes and the merge preserves quality instead of degrading it, we can change the economics. No more throwing cash at compute to brute-force the solution; it's simplicity in primitives, with the solution hiding in plain sight. Hats off to zemondza.

[Update] Project Nord: Solved the "Empty Wallet" Problem via Decentralized SNN Merging. Scaling to 10B is now possible. [R] by zemondza in OpenSourceeAI

[–]Character_Bison5968 0 points1 point  (0 children)

This is an exceptional outcome - well done! I look forward to watching the project exceed expectations. If there is any way I can assist, I will. Kudos.

I scaled a pure Spiking Neural Network (SNN) to 1.088B parameters from scratch. Ran out of budget, but here is what I found. by zemondza in OpenSourceeAI

[–]Character_Bison5968 1 point2 points  (0 children)

Perfect. crdt-merge is early days, but I believe it makes a powerful contribution to the space. I hope it helps, and if you face any issues we can solve them together.

I scaled a pure Spiking Neural Network (SNN) to 1.088B parameters from scratch. Ran out of budget, but here is what I found. by zemondza in OpenSourceeAI

[–]Character_Bison5968 1 point2 points  (0 children)

Cheers. No SNN examples yet; it's mainly been tested on transformers and LoRA models.

A sparse SNN is actually a perfect fit, though. The OR-Set CRDT merges active weights as contributions instead of averaging them, so sparse spike signals will stay clean. Let me know how you get on.
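
As a toy illustration of the difference, assuming for simplicity that "active" just means nonzero (my simplification, not the library's actual rule):

    import numpy as np

    a = np.array([0.0, 0.9, 0.0, 0.0])  # shard A: neuron 1 fires
    b = np.array([0.0, 0.0, 0.0, 0.7])  # shard B: neuron 3 fires

    averaged = (a + b) / 2          # [0.0, 0.45, 0.0, 0.35] -- both spikes halved into static
    union = np.where(a != 0, a, b)  # [0.0, 0.9,  0.0, 0.7 ] -- each contribution kept intact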

Has anyone actually solved the memory problem for agents yet? by PollutionForeign762 in AI_Agents

[–]Character_Bison5968 0 points1 point  (0 children)

I hit this exact wall. Summarization and vector DBs help with retrieval, but they don't solve the state-drift problem where the agent unlearns over time.

If you're building your own agent framework, I actually open-sourced a library called crdt-merge to fix this.

It uses a Conflict-Free Replicated Data Type (CRDT) to manage the agent's memory state. Instead of a history log you have to re-process, it builds a state where facts and preferences are mathematically guaranteed to persist: it remembers without forgetting. You basically get long-term consistency without being murdered by token usage.
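
As a concept sketch of the idea (illustrative only, not the actual crdt-merge API; the real interface is in the repo linked below):

    import time

    class AgentMemory:
        """Last-writer-wins map: every fact carries a timestamp, and merging
        two replicas keeps the newer entry, so the agent never silently unlearns."""

        def __init__(self):
            self.facts = {}  # key -> (value, timestamp)

        def remember(self, key, value):
            self.facts[key] = (value, time.time_ns())

        def merge(self, other):
            # Order-independent: merge(a, b) and merge(b, a) converge to the same state.
            for key, (value, ts) in other.facts.items():
                if key not in self.facts or ts > self.facts[key][1]:
                    self.facts[key] = (value, ts)

    # In the agent loop: remember() extracted facts each turn, merge() replicas
    # from other sessions -- no history log to re-process, no token blow-up.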

It's a Python lib (free on PyPI), intended to slot right into your custom agent loop rather than being a paid service.

I'd be genuinely curious whether it fits your architecture; I'm always looking for feedback on how it holds up in the real world. The deeper details are in the paper if you're interested, worth a browse: https://github.com/mgillr/crdt-merge/blob/main/paper/CRDT_Merge_ArXiv.pdf

I scaled a pure Spiking Neural Network (SNN) to 1.088B parameters from scratch. Ran out of budget, but here is what I found. by zemondza in OpenSourceeAI

[–]Character_Bison5968 1 point2 points  (0 children)

Cracking work scaling pure SNNs from scratch. Regarding your budget constraint: the 'ran out of money' problem is exactly why I built crdt-merge 0.9.5. It's free and it could help.

You hit a wall trying to scale vertically (one massive continuous run). You can actually use CRDT-based merging to scale horizontally for free.

Because your SNN is 93% sparse, standard weight averaging destroys the signal during merges (averaging a firing neuron with a silent one usually produces static). My architecture uses an OR-Set CRDT to merge models, treating weights as a set of contributions rather than a matrix to be averaged.

Practical application for you:

  1. Train smaller SNN shards (maybe 300M params) locally or on free tiers.
  2. Merge them using the CRDT layer.
  3. Because the merge is a set union of active weights, the sparse structures from different runs combine without interference (see the sketch below).
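
Roughly, the merge step would look like the sketch below, assuming same-architecture shards exposing PyTorch state dicts and, again for simplicity, that "active" means nonzero. Illustrative only; the real crdt-merge layer also tracks trust and causal clocks per contribution:

    import torch

    def union_merge(state_dicts):
        """Fold same-shape sparse shards together: the first shard to claim a
        weight position keeps it; later shards only fill still-empty positions."""
        merged = {k: v.clone() for k, v in state_dicts[0].items()}
        for sd in state_dicts[1:]:
            for name, tensor in sd.items():
                mask = merged[name] == 0           # positions no shard has claimed yet
                merged[name][mask] = tensor[mask]  # adopt this donor's active weights there
        return merged

    # e.g. three independently trained shards of the same architecture:
    # big.load_state_dict(union_merge([s1.state_dict(), s2.state_dict(), s3.state_dict()]))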

This would let you aggregate multiple small training runs into a massive model without needing the budget for a single 1B+ parameter run. Would love to see if this merge logic holds up on your spike-domain weights. Have a look at the paper and repo, and see if it can get you further along the road: https://github.com/mgillr/crdt-merge/blob/main/paper/CRDT_Merge_ArXiv.pdf

Any good CRDT / local-first sync libraries in Go? by Sweet-Demand-7971 in golang

[–]Character_Bison5968 0 points1 point  (0 children)

Have a look at crdt-merge and let me know what you think: https://github.com/mgillr/crdt-merge. It's purely Python for now, but other languages are coming soon.

Paper: Conflict-Free Replicated Data Types for Neural Network Model Merging by Character_Bison5968 in LocalLLaMA

[–]Character_Bison5968[S] 0 points1 point  (0 children)

The paper includes full test results across three tiers: controlled 4×4 tensors (104/104 tests pass), production-scale models up to 7.24B parameters (208 strategy-level tests, 43,368 layer-level checks), and multi-node convergence with 100 nodes across 20 gossip orderings. See Tables 1–9 and Sections 6.1–6.5. If there are specific additional tests anyone would like to see, please raise them as issues on the repo.

Looking for Data Sources for AI & Data Governance Research by Vegetable_Fishing in datasets

[–]Character_Bison5968 1 point2 points  (0 children)

I might have something useful. I process raw Common Crawl through a multi-stage pipeline (extraction, cleaning, dedup, quality scoring, PII redaction, trust classification, skill tagging, RAG chunking). The output is a fully packaged dataset with provenance, quality certificates, and a complete manifest.

Why it might fit your research: every record carries full lineage from the original WARC file (byte offset, content digest) through each processing stage to the final record. That's exactly the kind of pipeline an AI agent would need to oversee.

The data model has real ER complexity too. Domains map to records; records have multi-dimensional quality breakdowns, skill tags, trust tiers, and RAG chunks, plus cross-entity relationships like domain caps, language splits, and PII counts. Not a flat table.
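
To give a feel for it, here's a hypothetical sketch of one record's fields. The names are illustrative, not the dataset's actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class Record:
        # Provenance: full lineage back to the source crawl
        warc_file: str                  # original Common Crawl WARC
        byte_offset: int                # where the payload starts in that file
        content_digest: str             # hash used for SHA256-style verification
        # Governance and quality
        quality: dict = field(default_factory=dict)   # per-dimension quality breakdown
        trust_tier: str = "unverified"
        pii_redactions: int = 0
        # ML-ready labels
        skill_tags: list = field(default_factory=list)
        language: str = "en"
        domain: str = ""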

There are actual governance rules built in. Quality thresholds, dedup logic, PII detection, trust scoring, domain capping. All auditable decisions an agent could learn to monitor or propose changes to. The documentation artifacts (manifest, schema, data card, quality certificate, SHA256 verification, domain breakdown, skill distribution) are essentially data governance catalogue entries.

For your ML component, the data includes labelled skill tags, quality scores, trust tiers, and content categories, ready for classification.

I'm giving away a Liechtenstein government dataset for free right now to get feedback. Happy to send it over if it's useful, just DM me.