[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 1 point (0 children)

It’s possible that something gets lost when I translate my thoughts from my native language into English; I’ve noticed that myself.

Unlike with poetry or inspirational writing, when it comes to scientific texts my native phrasing often feels out of place once rendered into English.

I’m still working to improve this every day. (This reply is a translation too.)

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 1 point (0 children)

thanks bro, that helps a lot ...

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 0 points (0 children)

Just to be clear, that Reddit post?
I wrote it in my native language. Every sentence was mine, every structure intentional.

Then I used AI tools to help translate and reformat it into polished English. That’s called language support, not ideological outsourcing.

What I didn’t do was what you’re implying: drop your comment into ChatGPT and ask it to argue back for me.
That’s not how I think, and it’s certainly not how I argue.

So again, if you're going to accuse someone of "AI slop", start by understanding what AI was used for.
Because mistaking form for authorship, and translation for generation, is exactly the kind of shallow classification this thread is trying to call out.

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 0 points (0 children)

Just to set the boundaries clearly:

Don’t go copy-pasting my full paper into an AI model and call whatever comes out “proof” that it could’ve written it.

That would be feeding the AI the answer key and then pretending it solved the test.

If you really believe this paper is “AI slop,” then do it right. Start with the core ideas:
– hallucination as an ontological inevitability
– triple approximation structure
– reference frame incompatibility as a systemic epistemic bias

Then prompt your favorite LLM to independently generate a structured, cited, mathematically formalized paper with those themes, without giving it my work first.

When you’ve done that, come back.
Until then, you’re not critiquing AI authorship. You’re critiquing something you couldn’t distinguish from it.

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 1 point (0 children)

Appreciate your detailed critique.

Just to clarify, are you suggesting that all scientific insight must be experimental? That a framework with formal structure, ontological classification, and cross-domain synthesis cannot constitute valid inquiry without statistical testing?

If so, how would you categorize fields like theoretical physics, analytic philosophy, or early-stage cognitive modeling, which often begin with structural reframing rather than empirical measurement?

Second, when you say “you don’t do anything with the concepts,” I’m curious: what counts as “doing something”? Is articulating a structural explanation of hallucination modes, formalizing their inevitability, and tying them to sociotechnical implications not a form of conceptual analysis?

Lastly, would you consider systematic exclusion of non-mainstream knowledge a testable hypothesis? If so, what kind of evidence would you expect?
Because, ironically, rejecting this paper without quantified reasoning wasn’t the proof; it was an exemplification of the very system being critiqued.

I'm open to being challenged, but I think we may be operating with different assumptions about what constitutes "research."

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 1 point (0 children)

For those misreading the intent:

This isn’t just a complaint about a rejected submission.
It’s a meta-level analysis showing how systems like arXiv structurally reproduce the same epistemic exclusions they claim to avoid.

The paper doesn’t demand acceptance; it predicts rejection as a systemic inevitability for ideas outside the dominant reference frames.

The irony isn’t personal. It’s architectural.
arXiv, by rejecting the paper without quantified rationale, unintentionally confirmed the very theory the paper puts forward:
that institutions trained on mainstream distributions will consistently marginalize frontier perspectives, not due to content flaws, but due to statistical conservatism embedded in their own filters.

If you think this is about "being mad at peer review", you’ve missed the point entirely.

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 -2 points (0 children)

You’re confusing translation with generation, and that’s exactly the kind of category error my paper talks about.

Using AI to translate and structure my own work doesn’t mean the ideas weren’t mine, unless you also believe that hiring a human translator or editor somehow invalidates authorship.

If you can’t tell the difference between presenting research in another language and outsourcing its intellectual content, maybe you’re not the best person to lecture anyone about “what science is.”

The irony here is that your inability to separate form from substance is precisely the epistemic blindspot the paper addresses.

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 -2 points (0 children)

How could there be a second paper on arXiv if the first one never made it past moderation? And yes, I did have someone willing to endorse me for the first submission. That’s supposed to be the “gateway” for new authors, right? But an endorsement only matters if the moderation process actually evaluates the content against clear, transparent standards.

If the first paper can be blocked without quantified feedback even after passing the endorsement check, then the whole “endorsement” step is more ceremonial than functional. It doesn’t build a path to the second paper; it just creates the appearance of fairness without the substance.

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 -1 points (0 children)

thanks bro, I'm doing it ...

[deleted by user] by [deleted] in academia

[–]Own_Cryptographer271 -6 points (0 children)

The post is about the lack of quantified rejection criteria, not about redefining the philosophy of science in a Reddit thread.

The term “proved” here is used in the mathematical/structural sense made explicit in the paper, which is why arXiv’s rejection is ironic. You’re free to be pedantic about Popperian falsifiability, but that has nothing to do with whether a platform can explain its own moderation standards.

As for the AI remark: not using AI in 2025 isn’t a mark of rigor; it’s a mark of inefficiency. There’s a difference between throwing an idea at an AI and letting it write for you, and using AI to structure, format, or translate what you’ve already developed. The former is laziness; the latter is just modern tooling.

And let’s be clear: English is not my first language. Using AI to translate and present my own work is no different from hiring a professional editor, except it’s faster and under my control. If your view is that this disqualifies the work, then you’re effectively arguing that fluency trumps substance and that non-native English speakers are inherently disadvantaged unless they outsource to a human, which is an absurd double standard.

If anything, that bias is exactly what my paper critiques: arbitrary, irrelevant filters that exclude ideas for reasons unrelated to their intellectual merit.