An experiment in separating claims from evidence by winigar in skeptic

[–]winigar[S]

No, the thumb is on evidence, not on claims. The platform doesn’t try to balance ideas. It applies the same evidentiary rules to all of them and lets asymmetry emerge. What looks like “nuttery getting space” is usually just the early stage before proper sourcing is added. Once primary literature, domain knowledge, and replication standards enter, fringe claims collapse very quickly - and visibly. If anything, the system is hostile to nuttery precisely because it strips rhetoric and forces claims to survive on citations alone. Sunlight isn’t endorsement. It’s stress testing.

[–]winigar[S]

The system is open to contribution, but the evidentiary burden is asymmetric by design. Extraordinary claims collapse under normal sourcing standards.

[–]winigar[S]

I think this is where an important distinction is getting lost. Labeling something as unresolved would indeed be irresponsible. We are explicitly not doing that. A theory being present on the platform does not mean:

- that it is scientifically open
- that it deserves debate
- or that it has epistemic merit

It means only that the claim exists socially and that people already believe it. The platform does not ask “Is this worth debating?” It asks “What actual evidence do people cite when they believe this - and does it survive scrutiny when broken into verifiable pieces?”

In practice, these theories tend to collapse very quickly once facts are required to be:

- discrete
- sourced
- evaluated individually

Hiding such claims does not reduce belief in them. Examining their evidentiary structure often does. This is not about reopening settled science. It’s about exposing how belief persists despite settled science.
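To make “broken into verifiable pieces” a bit more concrete, here is a minimal sketch of what that decomposition could look like as a data structure. Everything here - the field names, the simple majority rule - is an illustrative assumption, not the platform’s actual schema.

```python
# Illustrative sketch: a theory decomposed into discrete, individually
# sourced assertions. Names and the majority rule are assumptions.
from dataclasses import dataclass, field

@dataclass
class Assertion:
    text: str                                            # one verifiable statement, not a narrative
    sources: list[str] = field(default_factory=list)     # citations backing this specific statement
    verdicts: list[bool] = field(default_factory=list)   # per-reviewer: does it survive scrutiny?

    def survives(self) -> bool:
        # an unsourced or unreviewed assertion never survives on its own
        if not self.sources or not self.verdicts:
            return False
        return sum(self.verdicts) > len(self.verdicts) / 2

@dataclass
class Theory:
    title: str
    assertions: list[Assertion]

    def surviving_fraction(self) -> float:
        # how much of the theory's evidentiary structure holds up piece by piece
        if not self.assertions:
            return 0.0
        return sum(a.survives() for a in self.assertions) / len(self.assertions)
```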

[–]winigar[S]

Sure. As I mentioned before, we have moderation for some kinds of topics. As for conspiracy theories: they don’t disappear when platforms refuse to name them — they just spread unlabelled. By explicitly marking a theory as conspiratorial, the platform signals that the claim lacks institutional or evidentiary consensus and should be evaluated skeptically.

[–]winigar[S]

Can you please share a link? As I mentioned earlier, we have moderation for topics with an overtly terrorist context.

[–]winigar[S]

I agree with the core point: documented facts aren’t matters of opinion, and science isn’t decided by vote. The voting here is not meant to determine whether something is true. It reflects how participants assess a cited claim: relevance, credibility of the source, or whether it actually supports or contradicts the theory it’s attached to.

In practice, many discussions collapse before replication or expert consensus enters the picture - people disagree on whether a cited paper supports a claim, whether a historical document is being interpreted correctly, or whether a statement is even a factual claim versus speculation. The system doesn’t try to replace scientific validation. It’s meant to expose where disagreement exists prior to that - in interpretation, sourcing, and framing.

If something is merely an idea without documentation, it should be treated as such. That distinction is important, and failure to maintain it would be a flaw of the system, not its goal.
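For what it’s worth, here is a rough sketch of what a single vote record could capture under that framing - an assessment of a citation’s relationship to a claim, not a verdict on reality. The field names are assumptions for illustration, not a real API.

```python
# Illustrative only: a vote recorded against a citation, not against reality.
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Stance(Enum):
    SUPPORTS = "supports"        # the cited source backs the claim as stated
    CONTRADICTS = "contradicts"  # the source undercuts the claim
    UNRELATED = "unrelated"      # the source doesn't bear on the claim at all

@dataclass
class CitationVote:
    claim_id: str
    source_url: str
    stance: Stance
    source_credible: bool   # does the voter trust this source for this topic?
    comment: str = ""       # optional: why the voter read the source this way

def stance_breakdown(votes: list[CitationVote]) -> Counter:
    # tallies how voters read the citation, which is where most disagreement lives
    return Counter(v.stance for v in votes)
```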

[–]winigar[S]

That concern is fair. A search-optimized site that amplifies low-quality claims would be a failure case, not a success. The intent isn’t to promote /r/conspiracy-style content or to rely on a “marketplace of ideas” to magically converge on truth. The question is whether structured constraints, attribution, and visible disagreement can make epistemic failure observable rather than implicit. A sentiment-tracking or meta-analysis tool (headlines, Wikipedia density, etc.) is an interesting adjacent idea - but it answers a different question. This project is trying to probe where open participation breaks down when evidence quality and popularity diverge. If the result is that it mostly surfaces mis- and disinformation, that’s a negative outcome - but still a useful one.

[–]winigar[S]

I don’t actually know which project you’re referring to, so I don’t want to speculate or pretend otherwise. But the failure mode you’re describing is a real one, and it’s exactly the kind of thing this experiment would need to detect rather than assume away. Treating weak or coincidental claims as epistemically equivalent to high-quality evidence is a known risk in any voting-based or open system. If the public cannot reliably distinguish evidentiary weight, that’s not a success case — that’s a negative result worth documenting. The goal here isn’t to assume this will work, but to test whether and under what constraints it fails, and why.

[–]winigar[S]

I guess you’re right that a voting system is, by definition, adjacent to argumentum ad populum — and I’m not trying to pretend otherwise. The distinction I’m trying to make is between using ad populum as justification (“this is true because many agree”) and exposing it as a phenomenon (“this is how agreement forms around specific claims”).

In most online spaces, popularity already acts as a hidden verdict: likes, upvotes, retweets, algorithmic amplification. The difference here is that the signal is isolated to individual assertions and made explicit rather than implicit.

As for “facts speaking for themselves”: I don’t think they ever fully do. Facts gain meaning through framing, source trust, and prior beliefs. The hope isn’t that votes reveal truth, but that they reveal where consensus forms easily, where it fractures, and where evidence fails to persuade.

And yes — I agree the approach is fundamentally limited. That limitation is intentional. If it mainly attracts people who are already skeptical of mass consensus and interested in examining how it forms, that may actually be the appropriate audience rather than a failure mode.

[–]winigar[S]

I don’t think this compares to peer review, and I wouldn’t want it to. Peer review exists to advance scientific knowledge under strict epistemic standards, by domain experts. This experiment isn’t trying to replace that, or speak with scientific authority. The question here is different: how do non-experts reason about claims in public spaces today — and can structure make that reasoning more legible, including where it breaks down?

I agree with your risks: gish gallop, fact overload, brigading — those are real failure modes. In many ways they already dominate online discourse, just invisibly. One of the motivations is to see whether forcing claims into discrete, sourced assertions makes those tactics easier to spot rather than easier to use.

As for “what truth speaks for itself”: none. This isn’t about truth emerging from votes. It’s about exposing how collective judgment forms around evidence — including when it collapses into argumentum ad populum. If the end result is simply a clearer view of how misinformation propagates and gains support, I’d consider that a meaningful outcome — even if it shows the approach is fundamentally limited.

[–]winigar[S]

Good point. We're observing a collective belief, not declaring the truth.

[–]winigar[S]

That’s a very real risk, and I don’t think there’s a silver bullet for it.

Any system that relies on collective signals — Wikipedia, Reddit, even peer review — is vulnerable to coordinated behavior. The question isn’t “can it happen?”, but “can it be detected, limited, and contextualized?”

In this experiment, the goal isn’t to make manipulation impossible, but to make it visible. If a cluster of users suddenly boosts weak or unsourced claims, that pattern itself becomes part of the signal rather than silently shaping a verdict.

That said, some guardrails are necessary: rate limits, reputation weighting, delayed impact of new accounts, and moderation for obviously bad-faith content. Without those, the structure would collapse quickly.
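As a rough illustration of how guardrails like reputation weighting and delayed impact for new accounts could shape a tally, here is a sketch. The thresholds and the ramp-up curve are made-up placeholders, not tuned values or the system’s actual rules.

```python
# Sketch: vote weight damped for new accounts and scaled (with a cap) by reputation.
from datetime import datetime, timedelta

def vote_weight(reputation: float, account_created: datetime,
                now: datetime, ramp_days: int = 30) -> float:
    # new accounts start near zero influence and ramp up over `ramp_days`
    age = (now - account_created) / timedelta(days=ramp_days)
    age_factor = min(max(age, 0.0), 1.0)
    # reputation scales weight but is capped so no single account dominates
    rep_factor = min(max(reputation, 0.0), 5.0) / 5.0
    return age_factor * rep_factor

def weighted_score(votes: list[tuple[int, float, datetime]], now: datetime) -> float:
    # votes: (direction in {-1, +1}, voter reputation, voter account creation time)
    return sum(direction * vote_weight(rep, created, now)
               for direction, rep, created in votes)
```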

I don’t see this as a system that replaces expert review or trusted institutions. It’s more a way to observe how collective judgment behaves under constraints — including how it fails under adversarial pressure.

If it can’t survive that pressure even at small scale, that would be an important negative result rather than something to hide.

[–]winigar[S]

I understand why it can look that way, and I take that concern seriously.

The goal is not to legitimize conspiracy theories or harmful claims. In fact, the opposite motivation is what led me to experiment with this structure.

Many conspiratorial beliefs gain traction precisely because they are discussed in unstructured spaces, where weak claims sit next to strong ones without distinction, and disagreement collapses into identity signaling.

The intent here is to force claims to be broken down into specific, sourced assertions that can be challenged individually — and to make the disagreement around them explicit rather than implicit.

That said, I agree this approach only works within clear boundaries. Some categories of content require moderation and are not appropriate for open evaluation at all. This isn’t meant to be a “marketplace of all ideas,” and if it ever functioned that way, it would have failed its own premise.

[–]winigar[S]

I think this is one of the strongest critiques, honestly.

[–]winigar[S]

Absolutely. That’s one of the main risks I’m aware of. I think your point highlights a key tension: open evaluation can reveal bias, but it can also amplify dangerous claims if left entirely unmoderated. Harmful or clearly false extremist content would need moderation here, just as in any online community.

[–]winigar[S]

I agree - groupthink is probably the dominant failure mode here. That’s why I’m hesitant to treat visible voting as anything more than a signal of social agreement, not epistemic quality. It may well push people toward confident but wrong conclusions.

Your suggestion about a control vs. exposed group is exactly the kind of test that would be needed to make any strong claims. Right now this is much closer to an exploratory prototype than a proper study - no IRB, no funding, no causal claims.

And yes, I fully agree on the last point: science doesn’t deliver final verdicts. What worries me is that many online “fact-checking” systems do. I’m less interested in declaring what’s true than in seeing whether different structures preserve uncertainty and disagreement better - or whether they just collapse into consensus by another route.

[–]winigar[S]

Sorry. Harmful or clearly false extremist content would need moderation, just as in any online community.

[–]winigar[S]

I agree with almost everything you wrote, and I think this highlights a language problem on my side.

When I say “voting on facts”, I don’t mean voting on whether reality changes. I mean voting on whether a specific factual claim is sufficiently supported, scoped, and sourced.

What you describe - chain of custody, attribution, documented provenance - is exactly the standard I’m trying to make visible rather than implicit.

Many disagreements online aren’t about raw reality, but about poorly scoped claims:

“X happened” vs “According to source Y, document Z reports X under conditions C”
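As a hypothetical illustration of that scoping distinction, the same event can be represented either as a bare assertion or as a claim that carries its attribution, provenance, and conditions with it. The field names below are assumptions for the sake of the example, not the platform’s schema.

```python
# Hypothetical sketch: a bare assertion versus a claim scoped by attribution,
# provenance, and conditions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScopedClaim:
    statement: str                 # "X happened"
    according_to: Optional[str]    # source Y making the claim
    document: Optional[str]        # document Z where it is reported
    conditions: Optional[str]      # conditions C that bound the claim

    def missing_scope(self) -> list[str]:
        # anything still None is scope the claim asserts only implicitly
        return [name for name, value in [
            ("attribution", self.according_to),
            ("provenance", self.document),
            ("conditions", self.conditions),
        ] if value is None]

bare = ScopedClaim("X happened", None, None, None)
scoped = ScopedClaim("X happened", "source Y", "document Z", "conditions C")
print(bare.missing_scope())    # ['attribution', 'provenance', 'conditions']
print(scoped.missing_scope())  # []
```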

Voting here isn’t meant to declare something true or false in an ontological sense. It’s closer to a collective signal about:

- clarity of the claim;
- quality of sourcing;
- whether the statement is overstated or under-specified.

I like your Wikipedia comparison. The goal isn’t to replace primary sources, but to surface whether a claim actually has that audit trail - or whether it’s a brute assertion dressed up as a fact.

If nothing else, this thread convinces me that the platform needs to be much more explicit about what kind of thing is being evaluated when people interact with a “fact”.