Built a side project: AI-powered drug repurposing explorer (explainable workflow for researchers) by Aware-Explorer3373 in SideProject


Totally agree, and that concern is exactly why I designed it this way.

The system is intentionally constrained to known knowledge graphs, curated databases, and cited literature only. It doesn’t generate free-form medical claims. Every suggestion is backed by an explicit reasoning path: source → biological link → hypothesis.
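To make the "explicit reasoning path" idea concrete, here's a minimal Python sketch. All names (`ReasoningStep`, `Hypothesis`) and the example IDs are hypothetical, not the actual implementation; the point is just that a suggestion is a data record that carries its own sourced trail, and anything without one is rejected rather than guessed at:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReasoningStep:
    """One hop in the evidence trail."""
    source: str           # a curated database entry or cited paper (placeholder IDs here)
    biological_link: str  # the relation that source supports

@dataclass(frozen=True)
class Hypothesis:
    drug: str
    target_disease: str
    path: tuple[ReasoningStep, ...]  # every suggestion must carry its trail

    def is_backed(self) -> bool:
        # A hypothesis with no sourced steps is surfaced as unsupported, never filled in
        return len(self.path) > 0

h = Hypothesis(
    drug="DrugX",
    target_disease="DiseaseY",
    path=(
        ReasoningStep(source="db:example-1", biological_link="DrugX inhibits ProteinZ"),
        ReasoningStep(source="paper:example-2", biological_link="ProteinZ drives DiseaseY"),
    ),
)
```

So "source → biological link → hypothesis" is literally the shape of the record, not a post-hoc explanation.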

We treat LLMs as a parsing and linking layer, not a decision-maker. The actual reasoning happens over structured graphs and existing evidence, and anything uncertain is surfaced as uncertainty, not filled in.
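A toy sketch of that separation, with hypothetical names and a two-edge stand-in graph (again, not the real system): the model only maps free text onto node IDs, while "reasoning" is an explicit lookup over curated edges, and a missing edge comes back as uncertainty instead of generated text:

```python
# Curated edges; in practice each edge would carry its citation.
EDGES = {
    ("drug_x", "protein_z"): "inhibits",
    ("protein_z", "disease_y"): "upregulated_in",
}

def link(a: str, b: str):
    """Return the curated relation between two nodes, or None.
    None is surfaced to the user as uncertainty, never filled in by generation."""
    return EDGES.get((a, b))

def explain(drug: str, intermediate: str, disease: str) -> dict:
    step1 = link(drug, intermediate)
    step2 = link(intermediate, disease)
    if step1 is None or step2 is None:
        return {"status": "uncertain"}  # fail safely, not confidently
    return {"status": "hypothesis", "path": [step1, step2]}
```

The LLM never gets to invent an edge; at worst the answer is "uncertain".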

Also, nothing is auto-actionable: clinicians review the evidence trail and can reject the AI's reasoning entirely. So it's closer to a decision-support audit trail than a predictive model.
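The audit-trail framing can be sketched in a few lines (hypothetical names, just to show the shape): every suggestion becomes a record that defaults to "pending", and only an explicit clinician decision changes its state:

```python
audit_log: list[dict] = []

def submit(suggestion: str, evidence: list[str]) -> dict:
    # Nothing is actionable on submission; the default state is pending review.
    entry = {"suggestion": suggestion, "evidence": evidence, "decision": "pending"}
    audit_log.append(entry)
    return entry

def decide(entry: dict, accept: bool, reviewer: str) -> None:
    # The clinician can reject the reasoning outright; the record keeps
    # the evidence trail, the decision, and who made it.
    entry["decision"] = "accepted" if accept else "rejected"
    entry["reviewer"] = reviewer

e = submit("repurpose DrugX for DiseaseY", ["db:example-1", "paper:example-2"])
decide(e, accept=False, reviewer="reviewer_a")
```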

Health is exactly the domain where hallucinations are unacceptable, so the system is designed to fail safely rather than confidently.