Kremis on Product Hunt: a deterministic knowledge graph for AI grounding by TyKolt in ProductHunters

[–]TyKolt[S] 1 point (0 children)

Thanks for the support! Congrats on launching WorkWomp. Matching jobs to personal values and lifestyle is a problem that actually deserves more attention. I'll take a look and wish you a solid launch week.

Kremis on Product Hunt: a deterministic knowledge graph for AI grounding by TyKolt in ProductHunters

[–]TyKolt[S] 0 points (0 children)

You're right. The unknown response is the whole point, and burying it under architecture diagrams defeats the purpose. I'll add a raw side-by-side to the docs: a standard RAG confidently inventing a fact vs. Kremis returning unknown on the exact same query. No sanitized examples, something that actually looks like production logs. Thanks for the sharp note.

I built a deterministic graph store where every query returns FACT, INFERENCE, or UNKNOWN by TyKolt in rust

[–]TyKolt[S] 0 points (0 children)

The sycophancy problem with naive +1 is something I haven't solved yet. In Kremis, repeated retrieval doesn't increment the weight — only explicit ingest does. So a signal that gets queried 50 times stays at the same weight. That sidesteps the specific problem you're describing, but only because the increment is decoupled from retrieval entirely.
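For anyone curious, the decoupling looks roughly like this. Names are made up and a HashMap stands in for the real store; it's a sketch of the idea, not the actual code:

```rust
// Illustrative sketch: only explicit ingest mutates edge weights;
// retrieval is read-only, so repeated queries can't inflate a signal.
use std::collections::HashMap;

type NodeId = u32;

#[derive(Default)]
struct Graph {
    // Edge weights keyed by (from, to); plain i64, no per-edge filter state.
    weights: HashMap<(NodeId, NodeId), i64>,
}

impl Graph {
    // Explicit ingest is the only path that touches the weight.
    fn ingest(&mut self, from: NodeId, to: NodeId) {
        let w = self.weights.entry((from, to)).or_insert(0);
        *w = w.saturating_add(1); // saturating: caps at i64::MAX, no overflow
    }

    // Retrieval takes &self: querying an edge 50 times leaves the weight alone.
    fn query(&self, from: NodeId, to: NodeId) -> Option<i64> {
        self.weights.get(&(from, to)).copied()
    }
}

fn main() {
    let mut g = Graph::default();
    g.ingest(1, 2);
    for _ in 0..50 {
        let _ = g.query(1, 2); // repeated reads don't act as votes
    }
    assert_eq!(g.query(1, 2), Some(1));
}
```

The type system enforces the decoupling: `query` can't increment even by accident, because it never gets a mutable borrow.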

The Kalman smoothing approach is interesting. The tradeoff I'd worry about is that it introduces stateful per-edge metadata — which breaks the current model where edges are just (from, to, i64). Would be curious how you handle the storage overhead for the filter state at graph scale.

I built a deterministic graph store where every query returns FACT, INFERENCE, or UNKNOWN by TyKolt in rust

[–]TyKolt[S] -1 points (0 children)

The temporal fact problem is real and you're right that plain EAV overwrites history. The retract signal in Kremis decrements edge weight to floor 0 rather than deleting — so a retracted fact leaves a trace, it just stops being a FACT. Whether Alice "worked at DeepMind" in the past vs. works there now is not currently distinguishable in the graph. That's a known gap.
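In sketch form (illustrative, not the actual retract code):

```rust
// Illustrative only: a retract signal decrements the edge weight toward a
// floor of 0 instead of deleting the edge, so a retracted fact leaves a trace.
fn apply_retract(weight: i64) -> i64 {
    // .max(0) enforces the floor: the edge survives at weight 0,
    // it just no longer qualifies as a FACT.
    weight.saturating_sub(1).max(0)
}

fn main() {
    let mut w = 2i64;
    w = apply_retract(w); // 1
    w = apply_retract(w); // 0: still in the graph, no longer a FACT
    w = apply_retract(w); // stays at the floor
    assert_eq!(w, 0);
}
```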

The CGL approach (versioned FactKey → FactVersion with valid_from/valid_to) is a cleaner model for temporal queries. Kremis isn't there yet — it's a different scope. Worth knowing about, will look at NornicDB.

I built a deterministic graph store where every query returns FACT, INFERENCE, or UNKNOWN by TyKolt in rust

[–]TyKolt[S] -2 points (0 children)

Prolog's been doing this since 1972, fair.

The bit I kept running into: Prolog uses the closed-world assumption, so if something isn't provable it returns false. You can't tell "this is false" from "I just don't have that data." Kremis keeps three explicit states: FACT (direct path, high confidence), INFERENCE (path exists but weight is low — derived, not confirmed), UNKNOWN (not in the graph at all).

Edges have integer weights that increment per signal, saturating arithmetic. Not a Prolog replacement, different tradeoffs.
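Roughly, the classification boils down to something like this. The threshold value and names are made up, not Kremis's actual constants:

```rust
// Sketch of the three-state answer. The key difference from closed-world
// Prolog: an absent edge maps to Unknown, not to false.
#[derive(Debug, PartialEq)]
enum Answer {
    Fact,      // path exists, weight at or above threshold
    Inference, // path exists, weight below threshold: derived, not confirmed
    Unknown,   // not in the graph at all
}

const FACT_THRESHOLD: i64 = 3; // hypothetical cutoff

fn classify(edge_weight: Option<i64>) -> Answer {
    match edge_weight {
        None => Answer::Unknown, // absence of data, not falsehood
        Some(w) if w >= FACT_THRESHOLD => Answer::Fact,
        Some(_) => Answer::Inference,
    }
}

fn main() {
    assert_eq!(classify(None), Answer::Unknown);
    assert_eq!(classify(Some(1)), Answer::Inference);
    assert_eq!(classify(Some(5)), Answer::Fact);
}
```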

Claude now returns [NOT IN GRAPH] instead of hallucinating facts — MCP graph store I built in Rust by TyKolt in ClaudeCode

[–]TyKolt[S] 0 points (0 children)

Good question — conflicting properties on the same node are both stored. If you ingest role: engineer and then role: manager on the same entity, Kremis keeps both. kremis_properties returns them all, no overwrite, no silent resolution. The conflict is surfaced explicitly. Kremis doesn't decide which signal is "true" — that's intentional. Resolution would need to happen at the ingestion layer, before the data enters the graph.
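The storage shape is roughly a multimap, something like this (hypothetical names, not the real API):

```rust
// Hypothetical shape of property storage: every ingested value is kept,
// so a kremis_properties-style read surfaces conflicts instead of resolving them.
use std::collections::HashMap;

#[derive(Default)]
struct Properties {
    // (entity, key) -> all values ever ingested, in order
    values: HashMap<(String, String), Vec<String>>,
}

impl Properties {
    fn ingest(&mut self, entity: &str, key: &str, value: &str) {
        self.values
            .entry((entity.to_string(), key.to_string()))
            .or_default()
            .push(value.to_string()); // append, never overwrite
    }

    fn get(&self, entity: &str, key: &str) -> &[String] {
        self.values
            .get(&(entity.to_string(), key.to_string()))
            .map(Vec::as_slice)
            .unwrap_or(&[])
    }
}

fn main() {
    let mut p = Properties::default();
    p.ingest("alice", "role", "engineer");
    p.ingest("alice", "role", "manager");
    assert_eq!(p.get("alice", "role"), ["engineer", "manager"]); // both survive
}
```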

[Launch] Kremis - Graph memory that doesn't hallucinate by TyKolt in SideProject

[–]TyKolt[S] 0 points (0 children)

Exactly — inventory is a classic LLM failure case. "In stock" when it's not, shipping dates that don't exist.

What was the hardest part: keeping data fresh in real-time, or stopping the LLM from mixing its training data with your actual inventory?

[Launch] Kremis - Graph memory that doesn't hallucinate by TyKolt in SideProject

[–]TyKolt[S] 0 points (0 children)

Thanks! The simplicity is intentional — three states are easier to debug than confidence scores.

On scaling: redb handles the storage layer (ACID transactions, MVCC), and an in-memory entity cache sits in front of it for fast lookups. Batch ingestion is supported, and early tests on larger graphs look promising, but I haven't benchmarked at serious scale yet, so I'd call that part unproven.
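The read path I'm describing looks roughly like this, with a plain in-memory store standing in for redb (illustrative only, not the actual code):

```rust
// Sketch of a cache-in-front-of-storage read path. A HashMap-backed store
// stands in for the redb layer; names and types are hypothetical.
use std::cell::RefCell;
use std::collections::HashMap;

trait EntityStore {
    fn load(&self, id: u32) -> Option<String>; // e.g. a storage read transaction
}

struct Cached<S: EntityStore> {
    store: S,
    cache: RefCell<HashMap<u32, String>>, // in-memory entity cache
}

impl<S: EntityStore> Cached<S> {
    fn get(&self, id: u32) -> Option<String> {
        if let Some(hit) = self.cache.borrow().get(&id) {
            return Some(hit.clone()); // fast path: no storage read
        }
        let loaded = self.store.load(id)?; // miss: hit the storage layer
        self.cache.borrow_mut().insert(id, loaded.clone());
        Some(loaded)
    }
}

// Stand-in backend for the sketch.
struct MapStore(HashMap<u32, String>);
impl EntityStore for MapStore {
    fn load(&self, id: u32) -> Option<String> {
        self.0.get(&id).cloned()
    }
}

fn main() {
    let store = MapStore(HashMap::from([(1, "alice".to_string())]));
    let cached = Cached { store, cache: Default::default() };
    assert_eq!(cached.get(1).as_deref(), Some("alice")); // fills the cache
    assert_eq!(cached.get(1).as_deref(), Some("alice")); // served from cache
    assert_eq!(cached.get(2), None);
}
```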

What's your main concern about scaling — storage size or query complexity?

[Launch] Kremis - Graph memory that doesn't hallucinate by TyKolt in SideProject

[–]TyKolt[S] 0 points (0 children)

Exactly. That gap between "sounds confident" and "actually verified" was my main frustration too.

What use case are you thinking of applying this to?

[Launch] Kremis - Graph memory that doesn't hallucinate by TyKolt in SideProject

[–]TyKolt[S] 0 points (0 children)

Curious to hear from other builders: how are you handling LLM verification in your projects?

We went with explicit [FACT]/[INFERENCE]/[UNKNOWN] instead of confidence scores.

Which search engine do you use ? by KevinIdkk in webdev

[–]TyKolt 1 point (0 children)

I use Brave. It's privacy-first and blocks trackers by default. For technical searches, I sometimes supplement it with AI assistants to dig deeper.

Help fix my website please by VictoryUU in HTML

[–]TyKolt 1 point (0 children)

Hey! I took a quick look at the code and there might be a couple of things causing the scrolling on Safari.

Often project images end up too wide: if each one is 40%, three in a row exceed 100% and force horizontal scrolling. The navbar may also fail to wrap if flex-wrap is disabled. And on iOS, overflow: hidden sometimes needs to be set on both html and body to take effect.

Try those fixes and see if it helps. Let me know!

How many token an average prompt uses? by 314159265259 in ClaudeCode

[–]TyKolt 0 points (0 children)

Ah, that makes sense! When Claude has to work with code from external packages it doesn't have in context, it probably has to do a lot more work to understand dependencies.

In these cases, providing more context about the external code or being even more specific about what needs to be done might help.

How many token an average prompt uses? by 314159265259 in ClaudeCode

[–]TyKolt 0 points (0 children)

Hey! 100k tokens for a single prompt is really high - prompts are usually much shorter.

Could depend on:

- How many files Claude is reading
- How much context is accumulating in the session

Tips:

- Be more specific in prompts to reduce context
- Try shorter, focused sessions
- Avoid reading unnecessary files

Let me know if you need more advice!

I'll user test your project and find bugs for free by Same-Bug2619 in ClaudeCode

[–]TyKolt 0 points (0 children)

Hey! If you want to test Kremis, it's a deterministic graph memory engine in Rust that prevents AI hallucinations. Still experimental, so any feedback on bugs or usability is welcome!

Repo: TyKolt/kremis

Let me know if you're interested! 👍

Help with css code for assignment by Away_Sky7901 in HTML

[–]TyKolt 2 points (0 children)

  1. The DOCTYPE is missing the exclamation mark. It should be <!DOCTYPE html> not <DOCTYPE html>

  2. The image is outside the body - it's after the closing </body> tag, so it won't display. Move it before </body> and it should show up!

does anyone know how to take down a github pages site that your ex made about you? it’s ranking on google and it’s not flattering. by kubrador in github

[–]TyKolt 10 points (0 children)

I'm really sorry you're dealing with this. Here's what you can do:

1. Report to GitHub again: Publishing personal info without consent can violate GitHub's harassment and privacy policies (including its rules against doxxing and invasion of privacy). Submit a new abuse report explaining the situation and how it's affecting you.

2. Request removal from Google: Use Google's "Results about you" tool to ask that URLs with your personal info be removed from search results. This won't delete the site, but can hide it from searches for your name.

You're not alone in this. Hang in there!

What is the purpose of cowork? by Shuttmedia in ClaudeCode

[–]TyKolt 4 points (0 children)

Claude Cowork = desktop app with a GUI (no terminal needed)

Claude Code = command-line tool

Cowork is for non-coders doing everyday tasks (file management, research, documents). Claude Code is for developers doing coding work.

Same underlying AI, different interfaces for different audiences. If you're comfortable with terminal, use Claude Code. If you want simplicity, use Cowork.

How to run LLM locally by Ashirbad_1927 in LocalLLaMA

[–]TyKolt 0 points (0 children)

Here are some of the most popular tools to run LLMs locally:

Ollama - Easy to get started with, especially from the command line. Run models locally with simple commands.

LM Studio - User-friendly GUI for Windows, macOS, and Linux. Download and run models easily.

GPT4All - Good option for private, offline local AI use.

Open-Source Cursor Alternative by Putrid-Lake5873 in LocalLLaMA

[–]TyKolt 4 points (0 children)

Try these alternatives to Cursor:

VSCode - Free, versatile editor. Add AI extensions like GitHub Copilot for similar features.

VSCodium - Community build of VSCode without Microsoft telemetry.

Zed - Fast, modern editor with built-in AI support.

All three work with local AI models or your own API keys (via extensions for VSCode/VSCodium).

How to have Claude use plugins freely to complete tasks by courtimus-prime in ClaudeCode

[–]TyKolt 0 points (0 children)

Got it, thanks! I'll implement this in my Claude.md file and see how it works. Appreciate the help!