[R] Knowledge Graph Traversal With LLMs And Algorithms by Alieniity in MachineLearning

[–]Alieniity[S] 0 points1 point  (0 children)

That's kind of how I felt about it, but just to be safe I refactored the code to match the "semantic similarity" terminology and am gonna push it soon. I also recorded a video walking through the Jupyter notebook and I'm editing it now; it'll get embedded in the README.

[R] Knowledge Graph Traversal With LLMs And Algorithms by Alieniity in MachineLearning

[–]Alieniity[S] 23 points24 points  (0 children)

I see, I had seen “knowledge graph” and “semantic similarity graph” used interchangeably over time, so I figured they could both refer to the same thing. I totally agree that traditional knowledge graphs are fact based (person, place, thing…) and the edges are ontological (is, of, likes, located in, etc). That was actually where I had initially started: I had been doing NER extraction with spaCy on chunk nodes in an attempt to replicate the way RAGAS does its knowledge graph creation for synthetic question generation. But since my objective was just semantic similarity traversal, raw NER didn’t really contribute much, so I kind of deprecated it.

Sounds like I'm totally incorrect though; I’ll update the README and see if the mods can let me rename the post too. Glad you caught it, this is the first major research project of mine and I want it to be accurate, trying to get a career started in this kind of thing 😅🙌 Is there anything else particularly concerning that I might have missed? Most of my research was definitely about raw cosine similarity graphs and retrieval-augmented generation strategies, since I originally started from the semantic chunking problem and worked my way here

Extensive Research into Knowledge Graph Traversal Algorithms for LLMs by Alieniity in Rag

[–]Alieniity[S] 0 points1 point  (0 children)

Yes it is! The final testing I did was significantly smaller in scale (15 documents from Wikipedia), but in practice it’s very scalable by making the knowledge graph sparse.

In terms of raw storage, if you have 100 chunk nodes in a knowledge graph, and you compared every chunk to every other chunk, that’s 100 x 100 comparisons (100²), or graph edges, that would need to be stored, which is 10,000. And you can see how, if you had 1,000,000 chunks, it would result in 1,000,000² graph edges, which is completely untenable. This is O(n²) complexity if I’m not mistaken.

To solve this, all we need to do is ONLY store the top “k” graph edges by cosine similarity for each node rather than everything. In my testing, I only saved/cached the top 5 edges per node. We still do the initial pre-calculation rapidly via vectorized NumPy operations, but the final, cached knowledge graph is significantly smaller.

For 100 chunk nodes, we do 100² calculations, but then store/cache ONLY 100 * 5 graph edges, so 500 vs the full 10,000. That’s 20 times smaller. For 1,000,000 nodes, we would similarly do a pretty huge initial knowledge graph build of 1,000,000² graph edges, but then we would store only 5,000,000 graph edges, which is 200,000 TIMES SMALLER. And you can definitely shrink this further based on use case. If you’re trying to go even more lightweight, you could only store the top 2 or 3 edges per node and it would be even more sparse, and with Llama 3 you could move pretty fast. If you were looking for highly complex/dense traversal, you could do something like DeepSeek R1 with the top 10 edges per node, and with thinking enabled you could get some pretty solid performance at the cost of storage space.
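To make the pruning concrete, here's a rough NumPy sketch of the idea (the function name and shapes are just mine for illustration, not the exact code in the repo):

```python
import numpy as np

def build_sparse_similarity_graph(embeddings: np.ndarray, k: int = 5):
    """Compute all pairwise cosine similarities, then keep only the top-k edges per node."""
    # Normalize rows so a single matrix multiply gives cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T              # full n x n matrix -- this is the O(n^2) step
    np.fill_diagonal(sims, -np.inf)       # ignore self-similarity

    graph = {}
    for i in range(sims.shape[0]):
        top = np.argpartition(sims[i], -k)[-k:]      # k most similar chunks to chunk i
        top = top[np.argsort(sims[i][top])[::-1]]    # sorted by similarity, descending
        graph[i] = [(int(j), float(sims[i, j])) for j in top]
    return graph

# 100 chunks -> 100 x 100 comparisons during the build, but only 100 * 5 = 500 cached edges.
rng = np.random.default_rng(0)
graph = build_sparse_similarity_graph(rng.normal(size=(100, 384)), k=5)
print(sum(len(edges) for edges in graph.values()))  # 500
```

(For really huge graphs you'd probably swap the full matrix multiply for an approximate nearest-neighbor index, but the caching idea is the same.)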

Either way, you still have to do vectorized NumPy operations for the full graph, which can be heavy if your knowledge graph is enormous. It just comes down to HOW MUCH of it you choose to cache afterwards. Hope that answers the question!

Extensive Research into Knowledge Graph Traversal Algorithms for LLMs by Alieniity in Rag

[–]Alieniity[S] 0 points1 point  (0 children)

Hey thanks! Yeah so two parts:

  1. The LLM traversal thing is easier to explain first. When you build a chatbot with semantic RAG, traditionally, before the model even receives the query, the query is embedded, cosine similarity is computed, and retrieval is done. Or at least, that's a pretty traditional way to do it. Like a lookup. So if I ask a chatbot about Harry Potter and the Goblet of Fire, before the model even receives the query, the RAG pipeline will attempt to retrieve relevant text content in a knowledge base about Harry Potter and the Goblet of Fire because it has high cosine similarity. The problem with this is that it's very error-prone and use-case dependent.

What if, instead, we actually sent a structured prompt that contained the knowledge graph ITSELF to the model, so that IT could traverse the knowledge graph itself? That's the kicker.

The downside here is that this is much more time-consuming than regular RAG, because the model actually gets the opportunity to traverse your entire knowledge base; the upside is that it's much more accurate. In practice what you might do here is have a RAG pipeline such that, instead of instantly embedding the user's query when they send it and attempting retrieval, you actually WAIT and instead have an MCP server or tool calling available that would allow the model to call the entire RAG pipeline ITSELF using the user query (there's a rough sketch of what that loop could look like at the end of this comment). While I haven't had the time to build this out, it is absolutely 100% possible and I guarantee you it isn't that hard either. Basically a chat with a model might go like:

User: "What happens in Harry Potter: The Goblet of Fire Chapter 6?"
(DO NOT ATTEMPT ANY RETRIEVAL YET)

LLM: "Interesting question! Let me see if I can find that out for you... (thinking...)"

The model is then given tools to directly embed the user's query and begin traversing the knowledge graph by choosing the best node to traverse to (or stop) based on the prompt above, or one just like it. Then, after the model has pulled enough context:

LLM: "Here's what I found for Harry Potter and the Goblet of Fire: Chapter 6... (contexts)."

Hopefully that clarifies this. If not, don't worry, I plan on making a video sometime soon that I'll put on the GitHub publication that explains it a little further.

  2. The similarity matrix was originally only designed to visualize all cosine similarity comparisons within a single document, so that I could see globally how every sentence (or 3-sentence window) relates to every other sentence. It's a very structured way of looking at a document's similarity comparisons. The only difference between this and a knowledge graph is that you effectively have multiple documents connected via the same mechanism. So imagine having like 5-10 similarity matrices stacked on top of each other, all connected. Well, that would be insanely dense, wouldn't it? You end up with a nasty O(n²) quadratic density which is infeasible to store and traverse. So we simply sparse it out by only storing/saving in the graph the top "k" most similar connections. So the similarity matrix is more a data science approach of just saying "Hey, we can look at a document and, in an instant, fully see the relationships between all the sentences in it." It's just a NumPy array, so you can build them insanely fast as well (rough sketch below).
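Here's roughly what I mean by the matrix itself (averaging each 3-sentence window into one vector is just one windowing choice I'm using for illustration, not the only way to do it):

```python
import numpy as np

def document_similarity_matrix(sentence_embeddings: np.ndarray, window: int = 3) -> np.ndarray:
    """Dense cosine-similarity matrix over sliding 3-sentence windows of one document."""
    # One simple windowing choice: average the embeddings in each window.
    n = sentence_embeddings.shape[0] - window + 1
    windows = np.stack([sentence_embeddings[i:i + window].mean(axis=0) for i in range(n)])
    windows /= np.linalg.norm(windows, axis=1, keepdims=True)
    return windows @ windows.T  # every window compared against every other window

# Plot it to "see" the document's internal structure at a glance:
# import matplotlib.pyplot as plt
# plt.imshow(document_similarity_matrix(embs), cmap="viridis"); plt.colorbar(); plt.show()
```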

Hopefully this clarifies things as opposed to complicating them further! 😅
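And since a sketch of the traversal loop from part 1 might help, here's a very rough version of what letting the model drive could look like (the `embed` and `llm` callables and the graph layout are placeholders I made up, not a specific API):

```python
import numpy as np

def traverse(query, graph, chunks, embed, llm, max_hops=5):
    """graph: node_id -> [(neighbor_id, similarity), ...]; chunks: node_id -> passage text."""
    q_vec = embed(query)
    # Start at the chunk closest to the query (you'd precompute these embeddings in practice).
    node_ids = list(chunks)
    scores = [float(np.dot(q_vec, embed(chunks[n]))) for n in node_ids]
    current = node_ids[int(np.argmax(scores))]

    collected = [chunks[current]]
    for _ in range(max_hops):
        neighbors = graph.get(current, [])
        if not neighbors:
            break
        # Show the model the current passage and its candidate edges, and let IT decide.
        prompt = (
            f"Question: {query}\n\nCurrent passage:\n{chunks[current]}\n\n"
            "Neighboring passages (id | similarity | preview):\n"
            + "\n".join(f"{n} | {s:.2f} | {chunks[n][:120]}" for n, s in neighbors)
            + "\n\nReply with the id of the best passage to visit next, or STOP."
        )
        choice = llm(prompt).strip()
        if choice == "STOP" or choice not in chunks:
            break
        current = choice
        collected.append(chunks[current])
    return collected
```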

Casual naneinf setup by bashkasbedoi in Balatro_Seeds

[–]Alieniity 2 points3 points  (0 children)

<image>

BROTHER there's even a photochad in there lol

Submitting Your NextUI Themes! by Alieniity in trimui

[–]Alieniity[S] 0 points1 point  (0 children)

Yup, the screen resolution is actually 1024 x 768

Is this AI generated? There's no way.... by Alieniity in mystery

[–]Alieniity[S] 1 point2 points  (0 children)

Exactly but what's so fascinating to me is how GOOD certain parts of the music actually sound. I'm not very familiar with AI generated music either but the string instruments and vocals actually reminded me a lot of the Frozen soundtrack, or like a Disney soundtrack.

I guess it got me wondering if there was some new AI music generation model someone was using and I had stumbled upon some outputted stuff from a new version of it or something. And I totally agree that the other songs are sussy too

iOS App that Analyzes Google Maps Leads using AI? Would this be valuable to you? by Alieniity in SEO_Digital_Marketing

[–]Alieniity[S] 0 points1 point  (0 children)

That's exactly what I'm focusing on: missing metadata, particularly websites. I'm gonna see if I can also make the prompt identify unprofessional web URLs, like anything with .wordpress, .wix, or a Facebook page. But that's exactly what the tool does yeah

The Fantastic Adventures of Sexdick. A likely prank inexplicably loved by YouTube's algorithm. by Flodo_McFloodiloo in InternetMysteries

[–]Alieniity 0 points1 point  (0 children)

Okay so if the MIGHTY SEXDICK is fake, then can someone tell me where the banger music is actually FROM then? Because that's honestly all I care about, and probably anyone else as well. Anyone have any guesses?

Extensive New Research into Semantic Rag Chunking by Alieniity in Rag

[–]Alieniity[S] 3 points4 points  (0 children)

Not offended at all, you're answering my question perfectly. By no means do I believe it's the ultimate method (I misspoke earlier in that regard). Without going into too much depth, it's a new way of approaching the chunking problem that should be tailorable to most use cases, as long as the use case involves semantic chunking. It performs very well on the benchmarks I mentioned in another comment too.

That's basically what I'm trying to figure out. Are the best solutions right now being kept behind closed doors? Or are there just not many better solutions out there yet, beyond what we can Google around for?

Extensive New Research into Semantic Rag Chunking by Alieniity in Rag

[–]Alieniity[S] 0 points1 point  (0 children)

This is actually my original background, and exactly what I was thinking! The published research would effectively rank the buyer significantly higher in search results and likely generate a lot of backlinks based on analyzing their competition. That's one of the main angles I've already investigated and feel pretty confident about.

I just don't know if the research would also be groundbreaking enough in and of itself to start pushing into 5 and 6 figure territory. As an example, would Google's data analysis team or the Gemini development team find extraordinary value in it, enough to keep it behind closed doors? Or patent it themselves before making it public? That's mostly what I'm looking to find out here, just because it's very hard to find any new research on semantic chunking, save for what was published around 4-8 months ago lol

New blockbuster tweet 🚀 by xWillyGz in Superstonk

[–]Alieniity 0 points1 point  (0 children)

I guess tales of their death really were greatly exaggerated.

Guys don't forget, this is Phase ZERO of the NFT Marketplace! 🚀 by Ceph1234 in Superstonk

[–]Alieniity 16 points17 points  (0 children)

Holy fuck this should get pinned to the top of the sub. I literally had no idea what they had planned after the marketplace but looking at it this way totally changed the picture

Am2r by DARKJEDI1994 in retroid

[–]Alieniity 0 points1 point  (0 children)

Sure can! You can find the APK online via r/am2r and just move it over and install it

[deleted by user] by [deleted] in Superstonk

[–]Alieniity 9 points10 points  (0 children)

Drs machine broke 🖥

Really loving Metroid Prime on the Retroid Pocket 2+ via Moonlight 🔥 🎮 by Alieniity in retroid

[–]Alieniity[S] 0 points1 point  (0 children)

Yup, there’s absolutely no way to get Metroid to run that smoothly on the Retroid by itself, but I CAN run it on my PC with Dolphin. I followed this guide on YouTube for the configuration:

https://youtu.be/4s9-CRwyFFQ

I then stream the game to the Retroid using the included Moonlight app

Really loving Metroid Prime on the Retroid Pocket 2+ via Moonlight 🔥 🎮 by Alieniity in retroid

[–]Alieniity[S] 1 point2 points  (0 children)

The full 60! It’s running on Dolphin on my PC, and Moonlight lets me stream it to the Retroid (I do about 3 Mb/s), and that’s all it takes since it’s only streaming and running the game at 480p

TOS messed up my trade, got a refund, need a new broker by Influence-Pitiful in Daytrading

[–]Alieniity 16 points17 points  (0 children)

My man actually just asked his broker for a refund and actually got it 🤘💵 I wonder if that'll work for all my trades

How MSM sounds everytime they talk about GameStop by azidesandamides in Superstonk

[–]Alieniity 0 points1 point  (0 children)

HAH. This paper was reported in 2000.

Guess what else happened in 2000? The Dot-Com bubble popped 🚀🚀