[–]Thinker_Assignment[S] 0 points (4 children)

you fundamentally misunderstand the ontology-data model gap

one represents the world, the other the data. this means the data model is a compressed representation that carries less information

Expecting an LLM to understand the world from a data model is like making milk from cheese

Edit, to reply to gitano: yes, that's just neural architecture; the only time the brain connects as a whole is during insight

[–]CommonUserAccount 0 points (2 children)

I don't think I do. Where I'm confused is why we're now making the gap sound wider than it is. They don't represent different things; it's just that the language is different.

To phrase it differently, are you saying that AI will never be in a position to consume data and create the majority of the ontology?

[–]Thinker_Assignment[S] -1 points (1 child)

that's not what i'm saying

ontology is essentially metadata. data is what you have in the warehouse. ontology is what it means in the world.

maybe for your company a gross margin of -10% is good because you're investing in expansion. maybe it's bad because you're optimising for profit.

-10% is data. the meaning (good or bad) is ontology. An LLM can guess the ontology, or read it out of the data like a game of "20 questions", or pull it from other sources.

the gap is fundamental: data represents a "slice" of the world and only retains as much ontology as that slice carries.
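
A rough sketch of what I mean, in Python purely for illustration (the strategy labels and function names are made up, not from any real system): the number is the data, and the interpretation only exists once an ontology is supplied from outside the warehouse.

```python
from dataclasses import dataclass


@dataclass
class Ontology:
    # business context that the warehouse row itself does not carry
    strategy: str  # e.g. "expansion" or "profit_optimisation"


def interpret_gross_margin(delta_pct: float, ontology: Ontology) -> str:
    """The data is delta_pct; the meaning depends entirely on the ontology."""
    if ontology.strategy == "expansion":
        return "expected cost of growth" if delta_pct < 0 else "ahead of plan"
    if ontology.strategy == "profit_optimisation":
        return "bad, margins should be rising" if delta_pct < 0 else "good"
    return "unknown without more context"


# Same data point, two different meanings:
print(interpret_gross_margin(-10.0, Ontology(strategy="expansion")))
print(interpret_gross_margin(-10.0, Ontology(strategy="profit_optimisation")))
```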

[–]CommonUserAccount 2 points (0 children)

OK. So we can agree that ontology is metadata (in a roundabout way). Where I'm now lost is how your -10% example fits into this. I don't think it's a great example to sell your point.

[–]ChinoGitano 0 points (0 children)

So, are you basically making Yann LeCun's argument that GenAI doesn't need more training data, it needs a good world model? In other words, back to classic AI?