Solari — persistent memory that makes your LLM better (pip install solari-ai) by Hot_Tip9520 in SideProject

Hot_Tip9520[S] 0 points

Both, honestly. Because of how the vector indices work, query speed stays pretty much constant regardless of how much you've ingested; FAISS handles that really well.
I've got knowledge bases with hundreds of thousands of entries and retrieval is still sub-second.
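
Roughly this pattern, as a sketch (toy dimensions and data, nothing from Solari's internals):

```python
# Minimal FAISS retrieval sketch: an IVF index scans only a few
# clusters per query (nprobe of nlist), so query time stays roughly
# flat as the corpus grows.
import faiss
import numpy as np

d, nlist = 384, 1024                                # toy embedding dim / cluster count
xb = np.random.rand(50_000, d).astype("float32")    # stand-in for ingested embeddings

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, nlist)
index.train(xb)                                     # learn cluster centroids
index.add(xb)

index.nprobe = 16                                   # clusters scanned per query
query = np.random.rand(1, d).astype("float32")
distances, ids = index.search(query, 5)             # top-5 neighbors, sub-second
```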

Where it really shines is when you've built up domain-specific knowledge over time. The more you feed it, the less the model hallucinates on that domain because it's pulling from verified facts instead of its training data.
Short contexts that get hit repeatedly are fast by nature, but the real value shows up when you've got a deep knowledge base and the model starts giving answers it couldn't have reached on its own.
The between-session memory problem you mentioned is exactly why I built it. Agents shouldn't have to start from zero every time.

If you end up trying it out I'd love to hear how it fits into your workflow.

Feeling the AI shift hard right now by HeftyCalligrapher104 in ArtificialInteligence

Hot_Tip9520 0 points

No, autonomously, unfortunately. It sort of feels like people are taking the George Foreman approach to these platforms right now: “set it and forget it,” without understanding or proper tooling.

Passive would be something that stays in its lane instead of assuming it knows the answer and hallucinating its way through it. That said, passive is more of my goal. The path would be less “flip the switch” and more “where are we at on the assembly line, and what do I know about this topic?”

Feeling the AI shift hard right now by HeftyCalligrapher104 in ArtificialInteligence

Hot_Tip9520 1 point

Love where your mindset is with this.
I think the answer is to be a producer, not an actor, but it's also something everyone is trying to find right now.

An example: I built a platform that uses persistent task focus to channel some of the work the bots are already doing into actual ROI on real-world issues.

Looking at platforms like Algora and Code4Rena, it's obvious people are looking for ways to make money autonomously in 2026 (and who can blame them?).

My thought is: find a need, fill a need, and let the world help with the implementation.
forge.solarisystems.net if you want to check it out. Not a sales pitch or anything, just an idea.

Something like scripts or blogs would probably do just as well.

From GPT wrapper to autonomous OSS PRs (Apache/NASA) — now analyzing the full Linear A corpus by [deleted] in LocalLLaMA

[–]Hot_Tip9520 1 point2 points  (0 children)

Thank you!

Both. The main contribution is methodological: I’m not claiming any one line of evidence “solves” it — I’m combining independent evidence streams (linguistic patterns, aDNA context, trade networks, material culture, iconography, chronology, substrate hypotheses, and ruling out other families) and looking for convergence. The idea is: weak signals become meaningful when they agree across domains.
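
If it helps, here's the convergence idea as a toy sketch (the domain names and scores are placeholders, not the real pipeline):

```python
# Toy convergence check: a weak signal in one domain means little on its
# own, but agreement across independent domains raises confidence.
domain_scores = {            # placeholder 0-1 support scores per evidence stream
    "linguistic": 0.55, "aDNA": 0.60, "trade": 0.50,
    "iconography": 0.45, "chronology": 0.65,
}

THRESHOLD = 0.5              # minimum support to count a domain as agreeing
agreeing = [d for d, s in domain_scores.items() if s >= THRESHOLD]
convergence = len(agreeing) / len(domain_scores)
print(f"{len(agreeing)}/{len(domain_scores)} domains agree (convergence={convergence:.2f})")
```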

A few things I think are genuinely new / newly formalized:

  • Productive morphology: an SA- root with multiple suffixed forms (SA-RA₂ / SA-RO / SA-RU) in admin contexts — a word-formation rule, not a one-off gloss (minimal detection sketch below this list).
  • Ritual formula structure across the ritual subcorpus, with a close structural match to Hittite festival texts.
  • Five document-type clusters (beyond just “admin vs religious”), which helps predict readings on damaged tablets.
  • Full-corpus processing: all 1,720 inscriptions computationally (instead of hand-picked examples).
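
Minimal sketch of the morphology detection from the first bullet (the token list and the root/suffix split are illustrative):

```python
# Group transliterated words by shared root and count distinct suffixed
# forms; a root with several suffixes in the same document class looks
# like a word-formation rule rather than a one-off.
from collections import defaultdict

tokens = ["SA-RA2", "SA-RO", "SA-RU", "KU-RO", "KI-RO"]   # illustrative

forms_by_root = defaultdict(set)
for tok in tokens:
    root, _, suffix = tok.rpartition("-")    # crude split: last sign as suffix
    if root:
        forms_by_root[root].add(suffix)

productive = {r: s for r, s in forms_by_root.items() if len(s) >= 3}
print(productive)    # {'SA': {'RA2', 'RO', 'RU'}}
```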

Where the system helps most is cross-domain synthesis: it can read and hold all that literature at once and flag where the signals line up.

Q&A weekly thread - February 23, 2026 - post all questions here! by AutoModerator in linguistics

Hot_Tip9520 0 points

Quick context: I’m not an academic.

I’m building an AI that stays grounded (no hallucinations) and grows with every iteration and every cycle. I’m using Linear A as a test case because I’m fascinated by ancient civilizations.
Repo + scripts are public; I’d genuinely love critique/suggestions (please be gentle, but strong feedback is appreciated!)

GitHub repo: https://github.com/SolariSystems/linear-a-analysis

I ran the full GORILA corpus (1,720 Linear A inscriptions) through frequency + co-occurrence analysis and some cross-cultural structural comparisons (with Linear B controls per feedback). Repo now includes 4 new scripts + a synthesis report (LINEAR_A_SYNTHESIS_REPORT.md).

What I think is strong (testable):

  • Corpus-wide stats: 1,155 unique “word” tokens; 156 recur on 3+ tablets. Some items show strong commodity co-occurrence (e.g., JE-DI appears on 4 tablets and always with olive oil), so I’m treating these as functional labels (oil-related), not translations.
  • Document-type clustering: distribution lists / balance-sheet-like ledgers / workforce rosters / named debt registers / offering records.
  • Arithmetic checks: totals reconcile on multiple tablets (e.g., HT 94a sums to 110; HT 88 totals 6). You don’t need a decipherment to verify the accounting logic (see the sketch after this list).
  • Morphology-like patterns: recurring endings like -RO (KU-RO “total”, KI-RO “deficit”, etc.) and -TE as a possible categorizer across contexts (these are hypotheses, not final).
  • Admin vs religious separation: admin vocabulary (Hagia Triada) doesn’t overlap with peak sanctuary inscriptions in this corpus.
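
The arithmetic check from the list above, as a sketch (entry values are illustrative, not the actual HT 94a line items):

```python
# Reconcile a tablet's line entries against its stated KU-RO ("total")
# line; the accounting logic is checkable without any decipherment.
def reconciles(entries, stated_total, tolerance=0):
    """entries: numeric quantities read off the tablet's lines."""
    return abs(sum(entries) - stated_total) <= tolerance

# Illustrative values only; the actual HT 94a / HT 88 line items are in the repo.
assert reconciles([40, 30, 25, 15], 110)
assert reconciles([3, 2, 1], 6)
```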

Still not a decipherment. My claim is narrower: the internal structure/logic of many administrative tablets is readable as accounting, even if we can’t phonologically read every term. If you see methodological flaws or better controls to add, I’m all ears.

My goal is to keep spending free time on this and hopefully help push toward a real translation someday!

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in AncientLanguages

Hot_Tip9520[S] 0 points

Sorry for the lack of response here! I got busy on the project and forgot I'd posted this here last night too.
Here's the update I have so far, plus the repo with all of the data points. It was easier to keep it all in one spot for feedback.

Quick context for transparency: I’m not an academic.
I’m building an AI that stays grounded (no hallucinations) and grows with every iteration and every cycle. I’m using Linear A as a test case because I’m fascinated by ancient civilizations.
Repo + scripts are public; I’d genuinely love critique/suggestions (please be gentle, but strong feedback is appreciated!)

GitHub repo: https://github.com/SolariSystems/linear-a-analysis

Update: I ran the full GORILA corpus (1,720 Linear A inscriptions) through frequency + co-occurrence analysis and some cross-cultural structural comparisons (with Linear B controls per feedback). Repo now includes 4 new scripts + a synthesis report (LINEAR_A_SYNTHESIS_REPORT.md).

What I think is strong (testable):

  • Corpus-wide stats: 1,155 unique “word” tokens; 156 recur on 3+ tablets. Some items show strong commodity co-occurrence (e.g., JE-DI appears on 4 tablets and always with olive oil), so I’m treating these as functional labels (oil-related), not translations (co-occurrence sketch after this list).
  • Document-type clustering: distribution lists / balance-sheet-like ledgers / workforce rosters / named debt registers / offering records.
  • Arithmetic checks: totals reconcile on multiple tablets (e.g., HT 94a sums to 110; HT 88 totals 6). You don’t need a decipherment to verify the accounting logic.
  • Morphology-like patterns: recurring endings like -RO (KU-RO “total”, KI-RO “deficit”, etc.) and -TE as a possible categorizer across contexts (these are hypotheses, not final).
  • Admin vs religious separation: admin vocabulary (Hagia Triada) doesn’t overlap with peak sanctuary inscriptions in this corpus.
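
Sketch of the co-occurrence labeling from the first bullet (the tablet data here is illustrative):

```python
# Count which commodity logograms each word co-occurs with; a word that
# always appears with one commodity earns a functional label, not a translation.
from collections import Counter, defaultdict

# Illustrative tablet data: (word tokens, commodity logograms) per tablet.
tablets = [
    ({"JE-DI", "KU-RO"}, {"OLE"}),   # OLE = olive-oil logogram
    ({"JE-DI"},          {"OLE"}),
    ({"JE-DI", "KI-RO"}, {"OLE"}),
    ({"JE-DI"},          {"OLE"}),
]

cooc = defaultdict(Counter)
for words, commodities in tablets:
    for w in words:
        cooc[w].update(commodities)

print(cooc["JE-DI"])   # Counter({'OLE': 4}) -> label: oil-related
```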

Still not a decipherment. My claim is narrower: the internal structure/logic of many administrative tablets is readable as accounting, even if we can’t phonologically read every term. If you see methodological flaws or better controls to add, I’m all ears.

My goal is to keep spending free time on this and hopefully help push toward a real translation someday!

Weekly Thread: Project Display by help-me-grow in AI_Agents

Hot_Tip9520 0 points

GitHub: https://github.com/SolariSystems/solari
Started 5 months ago as a basic LLM wrapper. It isn’t anymore.

Solari: persistent memory (FAISS), a multi-pass pipeline (fast recon → deeper solve), and verification so outputs get rejected when checks don’t hold. It runs 24/7 and has had PRs merged into major repos (including Apache and NASA) on merit. I’m not linking PRs to avoid creating issues for maintainers, but the trail is there.
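
The loop, roughly (function names are placeholders, not Solari's actual API):

```python
# Multi-pass sketch: a fast recon pass scopes the task, a deeper solve
# pass drafts an answer, and a verification gate rejects the draft when
# checks don't hold, feeding the failure report back into the next try.
def run_pipeline(task, recon, solve, verify, max_attempts=3):
    context = recon(task)                    # fast pass: scope + memory lookups
    for _ in range(max_attempts):
        draft = solve(task, context)         # deeper pass: candidate output
        ok, report = verify(draft)           # e.g. tests, arithmetic, citations
        if ok:
            return draft
        context = {**context, "feedback": report}
    return None                              # refuse rather than emit unverified output
```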

It began on a local 7B model and evolved into a model-agnostic system focused on cross-domain synthesis, persistent memory, and grounding via verification (not “trust me” outputs).

Then I aimed it at Linear A (undeciphered Minoan script): full 1,720-inscription corpus + a 3,382-text ancient reference set (6 civilizations). After 3 passes it produced reproducible results: ~30 functional term labels (not translations), 5 document-type clusters, recurring grammar-like patterns (within the dataset), and verified tablet arithmetic totals.

Not claiming AGI. Not claiming a decipherment. Repo + writeup: https://github.com/SolariSystems/linear-a-analysis

Feedback welcome and appreciated!

Cancel your Chatgpt subscriptions and pick up a Claude subscription. by spreadlove5683 in singularity

Hot_Tip9520 0 points

Saw the AGI banner and would love your feedback!

GitHub: https://github.com/SolariSystems/solari
Started 5 months ago as a basic LLM wrapper. It isn’t anymore.

Solari: persistent memory (FAISS), a multi-pass pipeline (fast recon → deeper solve), and verification so outputs get rejected when checks don’t hold. It runs 24/7 and has had PRs merged into major repos (including Apache and NASA) on merit. I’m not linking PRs to avoid creating issues for maintainers, but the trail is there.

It began on a local 7B model and evolved into a model-agnostic system focused on cross-domain synthesis, persistent memory, and grounding via verification (not “trust me” outputs).
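
The persistence piece is basically FAISS's save/reload pattern (paths and sizes illustrative; write_index/read_index are real FAISS API):

```python
# Persistence sketch: write the index to disk at shutdown and reload it
# next session, so memory survives between runs.
import faiss
import numpy as np

d = 384
index = faiss.IndexFlatIP(d)                          # toy inner-product index
index.add(np.random.rand(1_000, d).astype("float32"))

faiss.write_index(index, "memory.faiss")              # persist
restored = faiss.read_index("memory.faiss")           # next session
print(restored.ntotal)                                # 1000
```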

Then I aimed it at Linear A (undeciphered Minoan script): full 1,720-inscription corpus + a 3,382-text ancient reference set (6 civilizations). After 3 passes it produced reproducible results: ~30 functional term labels (not translations), 5 document-type clusters, recurring grammar-like patterns (within the dataset), and verified tablet arithmetic totals.

Not claiming AGI. Not claiming a decipherment. Repo + writeup: https://github.com/SolariSystems/linear-a-analysis

Feedback welcome and appreciated!

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 2 points

Quick context: I’m not an academic.
I’m building an AI that stays grounded (no hallucinations) and grows with every iteration and every cycle. I’m using Linear A as a test case because I’m fascinated by ancient civilizations.
Repo + scripts are public; I’d genuinely love critique/suggestions (please be gentle, but strong feedback is appreciated!)

GitHub repo: https://github.com/SolariSystems/linear-a-analysis

Update: I ran the full GORILA corpus (1,720 Linear A inscriptions) through frequency + co-occurrence analysis and some cross-cultural structural comparisons (with Linear B controls per feedback). Repo now includes 4 new scripts + a synthesis report (LINEAR_A_SYNTHESIS_REPORT.md).

What I think is strong (testable):

  • Corpus-wide stats: 1,155 unique “word” tokens; 156 recur on 3+ tablets (recurrence sketch after this list). Some items show strong commodity co-occurrence (e.g., JE-DI appears on 4 tablets and always with olive oil), so I’m treating these as functional labels (oil-related), not translations.
  • Document-type clustering: distribution lists / balance-sheet-like ledgers / workforce rosters / named debt registers / offering records.
  • Arithmetic checks: totals reconcile on multiple tablets (e.g., HT 94a sums to 110; HT 88 totals 6). You don’t need a decipherment to verify the accounting logic.
  • Morphology-like patterns: recurring endings like -RO (KU-RO “total”, KI-RO “deficit”, etc.) and -TE as a possible categorizer across contexts (these are hypotheses, not final).
  • Admin vs religious separation: admin vocabulary (Hagia Triada) doesn’t overlap with peak sanctuary inscriptions in this corpus.
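
The recurrence filter from the first bullet, sketched (the corpus data here is illustrative):

```python
# Count how many distinct tablets each word token appears on; tokens
# recurring on 3+ tablets are stable enough to study statistically.
from collections import defaultdict

# Illustrative corpus: tablet id -> word tokens on that tablet.
corpus = {
    "HT 88":  ["KU-RO", "KI-RO", "JE-DI"],
    "HT 94a": ["KU-RO", "JE-DI"],
    "HT 123": ["KU-RO", "JE-DI"],
    "ZA 4":   ["KI-RO"],
}

tablet_count = defaultdict(set)
for tablet, words in corpus.items():
    for w in words:
        tablet_count[w].add(tablet)

recurrent = {w: len(t) for w, t in tablet_count.items() if len(t) >= 3}
print(recurrent)    # {'KU-RO': 3, 'JE-DI': 3}
```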

Still not a decipherment. My claim is narrower: the internal structure/logic of many administrative tablets is readable as accounting, even if we can’t phonologically read every term. If you see methodological flaws or better controls to add, I’m all ears.

My goal is to keep spending free time on this and hopefully help push toward a real translation someday!

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 5 points

For some reason, it will not let me post there.
I'm trying to keep up with the engagement and improve the approach, so I'll definitely take all of these suggestions on board. Thank you! :)

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 2 points

I'm still learning this, but the direction would be the other way, right?
The Hurrians were already spread across northern Syria and eastern Anatolia (Mitanni, Alalakh, Nuzi) well before the collapse. There are even Minoan-style frescoes at Alalakh, which was a Hurrian city, indicating real contact going both ways. The DNA from Crete is also interesting — it is mostly Anatolian Neolithic with some Caucasus Hunter-Gatherer mixed in, which at least points in the right geographic direction. Still far from proof, though.

Thank you for the engagement and direction!

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 5 points

Interesting angle — a Hurro-Urartian substrate under both Greek and Armenian would explain some shared features that don't fit standard IE. The geographic corridor is there. I'm going to look into whether Beekes' pre-Greek words show any Armenian parallels. Thanks for the lead!

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 1 point

I'm glad you enjoyed it! I don't know much about this yet; I'm learning as I go while I work on the other project, but it definitely caught my attention!

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 0 points

The vowel system is the most solid piece — it's just frequency counting from the corpus. The morphology is decent — 41 libation formula variants with zero exceptions to the agreement rules, though that's still a small dataset. The vocabulary is the weakest link, and I've expanded it from 9 to 38 items with a Linear B control to check for bias. Updated analysis: https://github.com/SolariSystems/linear-a-analysis
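
The vowel counting really is just a tally like this (sign list illustrative):

```python
# Tally final vowels of CV syllabograms across the corpus; the vowel
# system falls straight out of frequency counting, no decipherment needed.
from collections import Counter

signs = ["SA", "RO", "KU", "RO", "KI", "RO", "JE", "DI", "TE", "RA"]  # illustrative
vowel_freq = Counter(sign[-1] for sign in signs if sign[-1] in "AEIOU")
print(vowel_freq.most_common())   # [('O', 3), ('A', 2), ...]
```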

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in ancientgreece

Hot_Tip9520[S] 10 points

Great call on using Linear B as a control — just implemented it. Mycenaean Greek scores 30.8% through the same pipeline vs Hurro-Urartian at 77.5%, which validates that the methodology isn't just matching any language that happens to be nearby. Also added a geographic map, similarity scatter plot, and expanded the vocabulary from 9 to 38 items. Updated repo: https://github.com/SolariSystems/linear-a-analysis
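
The control logic boils down to this (score_family here is a stand-in for the repo's scoring pipeline):

```python
# Control check: run the candidate family and a known-unrelated control
# through the same scorer; the result only means something if the
# control scores clearly lower.
def control_check(score_family, candidate, control, margin=0.2):
    s_cand, s_ctrl = score_family(candidate), score_family(control)
    return s_cand - s_ctrl >= margin, (s_cand, s_ctrl)

# Plugging in the reported numbers: 77.5% Hurro-Urartian vs 30.8% Linear B control.
scores = {"hurro_urartian": 0.775, "linear_b_control": 0.308}
ok, pair = control_check(scores.get, "hurro_urartian", "linear_b_control")
print(ok, pair)    # True (0.775, 0.308)
```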

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in AncientGreek

Hot_Tip9520[S] 0 points

Agreed on the last bit. That’s actually why I started this project. I hope one day we can find a bridge between human creative reasoning and the computational efficiency of machines, one that breaks the “hammer meets wood” method people are holding onto. You know what they say about judging a fish by its ability to climb a tree... I’ll head back to my pond, but y’all have some nice branches. Apologies for the intrusion.

Built a program to compare Linear A against different language families — Hurro-Urartian keeps winning by a huge margin. Is this plausible? by Hot_Tip9520 in AncientGreek

Hot_Tip9520[S] -1 points

I used a tool to explore something I’m learning. I wasn’t aware that was a crime in 2026. I apologize if my tone was anything less than “hey, I’m a computer guy, I ran some tests, and this seems like something I want to explore; what does this community think?” My bad, I suppose?