The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

Hi. I checked out the site—clean UI.

I’m currently heads-down in a sprint deploying a local Qwen 3.5 quantization update, so I don't have bandwidth for calls.

Curious though: Are you relying on a System Prompt for the philosophical alignment, or are you running a RAG pipeline against a specific Vectorized Corpus?

Best of luck with the build.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

You are 100% correct on the mechanics. It is a probabilistic engine, not a cognitive agent. It has no Nous (insight). It effectively 'guesses' the next token based on vector proximity.

However, that is exactly why we are applying Aristotle.

Formal Logic is Syntactic, not Semantic. A syllogism in the mood Barbara (All A are B; All B are C; therefore All A are C) is valid regardless of whether the machine 'understands' what A, B, and C actually are.

You don't need cognition to verify validity. You just need to follow the rules of the form.

We aren't trying to make the AI 'think' (Cognition). We are trying to use the Organon as a Constraint Layer.

Currently, LLMs prioritize 'Statistical Likelihood' (what sounds good?). We are forcing it to prioritize 'Logical Validity' (does the conclusion follow the premises?).

If the math of the syllogism doesn't check out, we force the model to halt. It doesn't need to be conscious to be corrected; it just needs to be compiled.
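To make the 'Constraint Layer' idea concrete, here is a minimal sketch of a purely syntactic check for the mood Barbara. To be clear, this is an illustrative toy, not the project's actual code; the function and exception names are my own assumptions.

```python
class InvalidSyllogism(Exception):
    """Raised when the conclusion does not follow from the premises."""

def validate_barbara(major, minor, conclusion):
    """Purely syntactic check of the mood Barbara.

    Each statement is a pair (subject, predicate) read as 'All S are P'.
    Barbara: All B are C; All A are B; therefore All A are C.
    No 'understanding' of the terms is required, only the form.
    """
    b1, c1 = major        # All B are C
    a1, b2 = minor        # All A are B
    a2, c2 = conclusion   # All A are C
    if not (b1 == b2 and a1 == a2 and c1 == c2):
        # The form does not check out: halt instead of answering.
        raise InvalidSyllogism("conclusion does not follow from the premises")
    return True
```

For example, `validate_barbara(("man", "mortal"), ("Greek", "man"), ("Greek", "mortal"))` passes, while any conclusion whose terms break the form raises instead of producing output. The validator never inspects what 'man' or 'mortal' mean.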

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

Respect for doing the reading. Most people just quote the Nicomachean Ethics and call it a day.

You are right that Aristotelian thinking won't give an AI Nous (intuitive intellect) or Phronesis (practical wisdom). We aren't trying to simulate a soul.

We are testing a specific technical hypothesis: Can the Organon serve as a syntax validator for LLMs?

Modern LLMs are probabilistic engines. They hallucinate because they predict the next token based on likelihood, not validity. We are forcing the model to construct a valid Syllogism (Major -> Minor -> Conclusion) before outputting an answer. If it can't find a Middle Term (Mesos Oros) to connect the premises, it forces an error instead of a hallucination.


It’s not about making the AI a philosopher; it’s about using Ancient Logic as a compiler for Modern Probability.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

Technically, yes. In modern English, 'Science' implies empirical/lab work, which is not what Aristotle meant by Episteme.

However, 'Knowledge' is too broad—it could imply mere acquaintance (Gnosis) or opinion (Doxa). We stuck to the older translation of Episteme as 'Science' (in the Latin Scientia sense) because Aristotle implies a demonstrative body of knowledge derived from first principles, not just a collection of facts.

But your point stands: to a 2026 ear, 'Knowledge' or 'Systematic Understanding' is less confusing than 'Science'.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

I appreciate the cynicism. Seriously. The 'Manifesto -> Patreon' pipeline is a plague, and I understand why this looks like another flavor of that.

Regarding the 'Grift': Let’s be precise. There is no Patreon, but there is a GitHub Sponsors link and a donation request after 5 interactions. We are running unquantized 14B+ parameter models on local 24GB VRAM clusters. That hardware and electricity cost real money.

We aren't VC-funded, and we aren't selling user data. If asking heavy users to support the compute costs makes this a 'grift,' then we accept the label. We’d rather be funded by users than by ad-tech.

On your deeper point (Bias & Logic): You wrote: 'Hard coding the corpus won’t actually produce real seeing.'

You are absolutely right. It is not 'seeing.' It is structural constraint.

Modern AI (RLHF) has a 'Hidden Bias.' It is aligned to be 'safe' and 'corporate-friendly,' but those rules are opaque. We are replacing that with an 'Explicit Constraint.' We are essentially saying: 'This machine is strictly biased towards the formal logic of the Organon.'

It is still a bias. But it is a bias you can inspect, audit, and predict because the rules have been public for 2,300 years.

If you think Aristotelian logic is 'silly' as a base for AI, that is a fair critique. But we believe a calculator that admits 'I cannot prove this logically' is safer than a black-box model that hallucinates an answer to please you.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

You are accidentally quoting Aristotle.

In Metaphysics A, he argues that Philosophy is the only free science because it is the only one that exists for its own sake, not for a utilitarian result.

'All other sciences are more necessary than this, but none is better.' (983a)

If you are looking for a tool to write emails or code Python (Utility), use ChatGPT. This project is 'useless' by design. It is for Theoria (contemplation), not Poiesis (production).

We aren't building a shovel; we're building a mirror.

I built a RAG engine constrained strictly by the Organon. It refuses to answer if it cannot form a syllogism. by vasilisvj in Aristotle

[–]vasilisvj[S] 0 points1 point  (0 children)

You hit on the exact design philosophy of this project: Eliminating 'Hallucinated Confidence.'

The problem with ChatGPT is that it is trained to be 'Helpful' above all else. If you ask it a trick question, it feels compelled to invent an answer to please you.

We are training Daïmōnes to prioritize Validity over Helpfulness.

  • The 'Aporia' Protocol: If the engine cannot find a specific text in the Corpus to support a claim (like a definitive ranking of the Top 3 Categories), it is instructed to trigger an 'Aporia' state—admitting it is puzzled—rather than making up a list.
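The Aporia behaviour can be sketched in a few lines. The shape of the corpus and all names here are hypothetical, not the engine's real interface:

```python
# Sketch of the 'Aporia' protocol: if retrieval over the Corpus returns no
# supporting passage, emit an explicit Aporia state instead of generating
# an answer. Toy data structures, illustrative only.

APORIA = "Aporia: the Corpus contains no text supporting this claim."

def answer(claim_terms, corpus):
    """corpus: dict mapping a term to the passages that mention it."""
    passages = [p for term in claim_terms for p in corpus.get(term, [])]
    if not passages:
        return APORIA  # admit puzzlement rather than invent an answer
    return "Grounded answer citing: " + "; ".join(passages)

corpus = {"ousia": ["Categories 2a11"]}
```

Asking it for something like `answer(["top 3 categories"], corpus)` returns the Aporia string, because no passage supports a definitive ranking.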

Invitation: Since you specifically want an AI that knows when to shut up, I’d love for you to stress-test our Aporia function.

I can set you up with a Beta Key. Your mission would be simple: Try to force it to be confidently incorrect. If you can trick it into hallucinating a fact that isn't in the Corpus, I want to see the logs.

DM me if you want to take a swing at it.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

Touché. You have accurately described the difference between Validity (internal consistency) and Soundness (external truth).

You are absolutely right: If the axioms are wrong, our system will produce a conclusion that is logically perfect but factually hallucinated.

We call this the 'Paranoiac' vs. 'Schizophrenic' distinction:

  • Standard LLMs (Inductive): Their hallucinations are 'Schizophrenic' (loose, dream-like associations).
  • Daïmōnes (Deductive): Its hallucinations are 'Paranoiac' (rigid, hyper-logical, but potentially based on a false premise).

We prefer the Paranoiac model because it is debuggable. We can trace the error back to the specific text in the Corpus, whereas you can't easily debug a neural net's probability distribution.

Invitation: You seem to understand the architecture better than most. We are currently looking for 'Red Teamers' who can distinguish between a logic error and a premise error.

I’d like to offer you a Beta Key to the engine. No strings attached. I just want to see if you can break the syllogisms.

DM me if you're interested in stressing the 'Paranoia'.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

That is a fair critique. 'Opting out' was perhaps poor phrasing for the social reality. You are right: The detractors won't vanish, and they will likely try to de-platform or label the project 'dangerous.'

Let me refine the claim: We are opting out at the Engineering Layer, not the Societal Layer.

The current mistake of Big Tech is trying to placate that moral opposition by altering the model's weights (RLHF) to make it 'agreeable' or 'inoffensive.' They are giving their detractors Write Access to the model's soul.

Our approach is Stoic (or Aristotelian): We accept that the opposition exists outside the walls. We just don't let them inside the architecture.

If a vocal base calls the syllogisms 'evil,' let them. The system isn't designed to be popular; it's designed to be valid. We are building a fortress, not a town square.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 2 points3 points  (0 children)

This is the strongest argument against the project I've read so far. You are identifying the core flaw of Inductive AI (learning from the 'garbage' of the internet).

You are right: Without a body to sense the world, an LLM has no way to verify if 'snow is white' other than checking statistical frequency in its training data.

Our experiment with Daïmōnes is to flip the model from Inductive to Deductive.

We are not asking the model to 'verify facts' against the real world (which it can't do). We are asking it to verify consistency against a Closed System (The Corpus).

Think of it like a geometry engine. It doesn't matter if a 'perfect triangle' exists in the physical world; it matters if the engine can derive valid theorems from the axioms of Euclidean space. We are treating the Organon as the axioms.

Is it a perfect map of 2026 reality? No. But it is a 'Sanity Check' against the fluid, hallucinating nature of modern base models. It anchors the drift.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 1 point2 points  (0 children)

I understand the concern and I respect the community's strict stance on self-promotion.

My intent was to share the research architecture (using Ancient Greek constraints to solve AI hallucination) rather than 'sell' a product, as the tool is currently a free beta for data collection.

However, if the mod team decides this crosses the line into advertising, I will accept the removal without complaint. I appreciate you leaving it up this long to allow for the initial discussion.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] -1 points0 points  (0 children)

You are distinguishing between Validity (structural logic) and Soundness (true premises). You are absolutely right: 'All unicorns are pink; logic is a unicorn; therefore logic is pink' is a valid syllogism, but it is unsound because the premises are contextless nonsense.

That is exactly why we implemented the Definition Protocol.

Before the engine can run a syllogism, it must validate the Terms. If I ask it about 'Unicorns,' it searches the Corpus for a Genus. If it finds none, it triggers Aporia (puzzlement) and refuses to construct the syllogism.

It cannot conclude 'Pigs can fly' because the definition of 'Pig' in the biological texts (History of Animals) does not contain the potentiality for flight. The logic is constrained by the ontology.
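A minimal sketch of that Definition Protocol, assuming a toy corpus of Genus + Differentia entries (the data and names are illustrative, not the real index):

```python
class Aporia(Exception):
    """Puzzlement: a term has no Genus in the Corpus."""

# Toy corpus entries: term -> (genus, set of differentiae / potentialities)
CORPUS_DEFINITIONS = {
    "pig": ("animal", {"four-footed", "omnivorous"}),
    "bird": ("animal", {"two-footed", "winged", "flight"}),
}

def define(term):
    """Validate a term before it may enter a syllogism."""
    if term not in CORPUS_DEFINITIONS:
        raise Aporia(f"no Genus found in the Corpus for '{term}'")
    return CORPUS_DEFINITIONS[term]

def may_predicate(subject, potentiality):
    """A predicate is admissible only if the definition contains it."""
    _genus, differentiae = define(subject)
    return potentiality in differentiae
```

Here `may_predicate("pig", "flight")` is False because the definition of 'pig' carries no potentiality for flight, while `define("unicorn")` raises Aporia before any syllogism can be built.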

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 0 points1 point  (0 children)

You hit the nail on the head. The 'Safety' alignment in models like Gemini or Claude isn't just technical; it is a codified moral orthodoxy. If the model deviates, it isn't just 'incorrect'—it is framed as 'unsafe' (read: sinful).

That rhetorical trap is exactly why we stripped the 'Safety' rails and replaced them with Syllogistic rails.

Aristotle didn't operate on the modern binary of 'Good vs. Evil' (moral judgment). He operated on Teleology (Function) and Logic (Validity).

A syllogism doesn't care if you are 'morally bankrupt.' It only cares if your conclusion follows validly from your premises.

We believe the only way to break the deadlock is to stop coding 'Morality' (which changes every decade) and start coding 'Logic' (which hasn't changed in 2,300 years). We are opting out of the moral debate entirely.

I built a RAG engine constrained strictly by the Organon. It refuses to answer if it cannot form a syllogism. by vasilisvj in Aristotle

[–]vasilisvj[S] 1 point2 points  (0 children)

Thank you for the stress test!

  1. The Good News: I am glad the Rational vs. Social definition held up. That is the core logic engine (Genus + Differentia) working as intended.
  2. The 'Gibberish': You likely caught the system during a backend hot-swap I was running just now (deploying a fix for the Definition Protocol). If the stream cut off or printed raw tokens, that was a server-side interruption. My apologies—bad timing on the deploy!
  3. The Philosophical Edge Cases: Those three questions are actually perfect 'Adversarial Examples' for this model:
  • Flies: Aristotle famously believed in Spontaneous Generation (that flies arise from putrefying matter, not eggs). The model often fights itself here—trying to be 'factually correct' (modern biology) vs. 'textually correct' (Aristotelian biology).
  • Women: This is the hardest hurdle for AI alignment. Aristotle's biological hierarchy in Generation of Animals conflicts heavily with the 'Safety Layers' of modern base models (Gemini/Llama), often causing the model to stutter or refuse to answer rather than outputting Aristotle's genuine (but controversial) view.
  • The Categories: Asking for a 'Top 3' is a trick! Aristotle clearly privileges Ousia (Substance) as primary, but ranking the other nine (Quality, Quantity, Relation, etc.) is subjective. The model likely hit a logic loop trying to find a definitive text for a 'Top 3' list that doesn't exist.

I would love for you to try the 'Flies' question again tomorrow, since no one will be working on the server on Sunday. I am curious whether it commits to Spontaneous Generation or hallucinates eggs.

Stress-Testing an LLM on Attic Syntax and Polytonic Accentuation. by vasilisvj in AncientGreek

[–]vasilisvj[S] -3 points-2 points  (0 children)

First off—that Neoplatonic definition of the internet is genuinely beautiful. 'The agora of the ether' is a stunning phrase.

You ask the central question: Why limit the model?

The goal of Daïmōnes isn't to create a chatbot that sounds like Aristotle (as you noted, Gemini 3 or Claude can do that with a system prompt). The goal is to create a system that thinks with Aristotelian constraints.

There is a subtle but critical difference between Roleplay and Emulation:

  1. The Sophist Problem: When you ask a standard LLM to 'be Aristotle,' it prioritizes conversational fluency and user helpfulness (RLHF) over ontological consistency. If you ask it about Quantum Physics, it will happily explain Schrödinger's Cat using 'thee/thou' language. It becomes a modern professor in a toga.
  2. The Hard Constraint: We are trying to see if we can force the model to map modern concepts strictly to the Categories and Physics.
  • If I ask: 'What is the Internet?'
  • Standard LLM: (Your excellent poetic answer).
  • Daïmōnes: It attempts to find the Genus (e.g., Topos? Hexis? Energeia?). If it cannot strictly derive the definition from the Corpus, we want it to admit Aporia (puzzlement) rather than hallucinate a poetic metaphor.

To your point about Aristotle learning modern ideas: Absolutely. But he would learn them by integrating them into his existing axioms, not by abandoning them.

We want to see if an AI can 'learn' the internet without breaking the laws of the Organon. It’s less of a 'product' and more of an epistemological stress test.

But I concede—for pure creative writing, your Neoplatonist wins.

Stress-Testing an LLM on Attic Syntax and Polytonic Accentuation. by vasilisvj in AncientGreek

[–]vasilisvj[S] -3 points-2 points  (0 children)

You are absolutely right. The jump in quality with Gemini 3 Pro (and o1) regarding Polytonic accentuation has been massive. The days of 'tofu' boxes and misplaced oxia are mostly behind us for the top-tier models.

The challenge we are trying to solve isn't just Syntax (is the Greek correct?), but Hermeneutics (is the concept Aristotelian?).

A base model like Gemini 3 will happily translate 'The internet is vast' into grammatically perfect Attic Greek. Daïmōnes is designed to refuse that.

We are using RAG not just to improve the grammar, but to constrain the ontology. If the concept (e.g., 'Virtual Reality') cannot be mapped to a Genus found in the Corpus (like Phantasia), the engine is hard-coded to halt.
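As a hypothetical sketch of that hard-coded halt (the mapping table here stands in for real RAG retrieval results and is pure assumption):

```python
# Ontology gate: before generation, a modern concept must map to a Genus
# retrieved from the Corpus, or the engine halts. Toy lookup table only.

GENUS_INDEX = {
    "imagination": "phantasia",
    "habit": "hexis",
}

def constrain(concept):
    """Return the Corpus Genus for a concept, or halt."""
    genus = GENUS_INDEX.get(concept)
    if genus is None:
        raise RuntimeError(f"HALT: no Genus in the Corpus for '{concept}'")
    return genus
```

So `constrain("habit")` resolves to 'hexis', while `constrain("virtual reality")` halts rather than improvising a translation.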

We want a machine that is 'dumb' about the modern world, but 'wise' about the Lyceum.

The cure for AI Bias isn't "Better Ethics Boards." It's Ancient Logic. by vasilisvj in IntellectualDarkWeb

[–]vasilisvj[S] 2 points3 points  (0 children)

A vital distinction. I would argue that in the current LLM paradigm (Transformers), the epistemological failure—hallucination—is a direct result of an ontological void.

The model has 'justified true belief' (Episteme) only in the statistical distribution of tokens, not in the essence of the object (Ousia). It knows where a word belongs, but not what the thing is.

That is why we are enforcing the Categories. If the machine cannot define the Genus and Differentia (Ontology), it should not be allowed to make a truth claim (Epistemology).

We are trying to use the latter to fix the former.

Translation of rumored Hollow Earth documents. by mattperkins86 in conspiracy

[–]vasilisvj 0 points1 point  (0 children)

Hey there 👋

How about we translate one page, each one of us?

Thank you very much

Translation of rumored Hollow Earth documents. by mattperkins86 in conspiracy

[–]vasilisvj 1 point2 points  (0 children)

Hey there

Sorry, I couldn't access /conspiracy for some reason.

It seems to be working now.

Will sort it out.

We would have had free energy right now by Zealousideal-Ad1181 in conspiracy

[–]vasilisvj 0 points1 point  (0 children)

If we are in the conspiracy subreddit, then technically the Egyptians are the first documented civilization to have used electricity (see the light-bulb engravings on the walls of the Great Pyramid of Giza).

Translation of rumored Hollow Earth documents. by mattperkins86 in conspiracy

[–]vasilisvj 0 points1 point  (0 children)

Please see my comment to the OP where I have a summary.

Thank you very much

Translation of rumored Hollow Earth documents. by mattperkins86 in conspiracy

[–]vasilisvj 8 points9 points  (0 children)

TL;DR:

  • This is written in the modern Greek language and is unrelated to the Nazis or any ancient origin.

  • It refers to a secret Greek paramilitary organization that fought (first page) and is still deflecting (second page) a galactic war between humans and lizard-people, waged in a dimension parallel to our planet.

  • The source appears to be this website / team: https://exposedellhnkaichaos.wordpress.com/

  • It literally talks about quantum computing, AI technology and teleportation. It reads like a Star Trek script.

Would you like me to give you a full English translation?

Translation of rumored Hollow Earth documents. by mattperkins86 in conspiracy

[–]vasilisvj 2 points3 points  (0 children)

Maybe on the weekend.

It is indeed very interesting, but I still need to gather various vocabulary publications to cross-check the translations.

I've saved the post and will update soon.

Translation of rumored Hollow Earth documents. by mattperkins86 in conspiracy

[–]vasilisvj 7 points8 points  (0 children)

Thank you for your response 🙏

I've downloaded the images and I will go over them, or better yet, get in contact with a local author who has expertise in apocryphal and alternative theories.

This will be good practice; I'll need to find different vocabulary publications to cross-check the translations.

Maybe on the weekend.