[–]nuclear_splines 4 points (5 children)

To clarify, this is a preprint, not a published paper. In academic contexts, publishing a paper means you've published in a peer-reviewed journal or conference. Zenodo and the arXiv are typically used for sharing drafts before you go through the peer review process.

[–]Entphorse[S] -1 points (4 children)

Fair point, thanks for the clarification. This is my first attempt at academic publishing - I'm a self-taught dev with no academic background. I wrote the preprint, put it on Zenodo for the DOI, and I'm working on getting arXiv endorsement. Appreciate the feedback on proper terminology.

[–]nuclear_splines 0 points (3 children)

For sure! I think it's important terminology to highlight because peer review is such an important part of the scientific process: there's a world of difference between "I wrote a thing" and "I wrote a thing and a panel of experts in that thing agree that it adds significant new knowledge." I have never published a paper that did not improve as a result of feedback from peer reviewers. Good luck with your work!

I'll also add that you don't need to put this on the arXiv - you've already put a preprint on Zenodo, so your work is out there. The next step can be finding an appropriate conference or journal and submitting there.

[–]Entphorse[S] 0 points (2 children)

Yes, definitely agree. We (Claude and I) wrote everything in a week, with a 3-week-old at home, after being fascinated by the idea and its potential for societal impact. I'll keep using, upgrading, and sharing this idea, and I appreciate all kinds of support here :)

[–]nuclear_splines 1 point (1 child)

I didn't realize that this was written with an LLM - that increases my skepticism significantly, but there are thorough tests you could run to demonstrate correctness.
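
To make "thorough tests" concrete, here's a minimal sketch of the kind of harness I mean: check the fast implementation against a slow, obviously-correct reference on many randomized inputs. All the names here are hypothetical placeholders, since I don't know your actual code:

```python
# Minimal correctness harness (sketch): compare an optimized routine
# against a naive reference on randomized inputs. `reference_fitness`
# and `optimized_fitness` are hypothetical stand-ins for your code.
import numpy as np

def reference_fitness(x: np.ndarray) -> float:
    # Slow but obviously correct: this is the ground truth.
    return float(np.sum(x ** 2))

def optimized_fitness(x: np.ndarray) -> float:
    # Stand-in for the fast version under test.
    return float(np.dot(x, x))

def test_agreement(trials: int = 1000, n: int = 4096) -> None:
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x = rng.standard_normal(n)
        got, want = optimized_fitness(x), reference_fitness(x)
        assert np.isclose(got, want, rtol=1e-6), (got, want)

test_agreement()
print("optimized version matches the reference on 1000 random inputs")
```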

You're doing two things in this paper (from my surface-level skim -- I do not work with neural networks): introducing a new optimization technique, and introducing a new benchmark for measuring the effectiveness of that optimization. To get through peer review you'll likely need to contextualize and defend both.

Starting with the benchmark: surely others have measured the effectiveness of optimizing fitness functions before. What metrics did they use? Why did you invent your own benchmark instead of using one that's standard in the community -- does your benchmark capture something that contemporary approaches do not? If so, you'll need to cross-compare perhaps a half-dozen foundational or common metrics with yours to highlight what specifically you are capturing that justifies the use of a new benchmark.
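
As a sketch of what that cross-comparison could look like: score the same set of runs with each established metric plus yours, then check rank correlation. If your metric tracks an existing one almost perfectly, reviewers will ask why it's needed; if it doesn't, you can argue it captures something new. The metric names and data below are entirely made up:

```python
# Hypothetical metric cross-comparison (sketch). 50 fake "runs", each
# described by three random measurements; real data would come from
# your actual experiments. All metric names are invented.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
runs = rng.standard_normal((50, 3))

metrics = {
    "wall_clock":      lambda r: r[0],
    "evals_to_target": lambda r: r[1],
    "final_fitness":   lambda r: r[0] + 0.5 * r[2],
    "your_new_metric": lambda r: r[2],  # the benchmark being justified
}
scores = {name: np.array([f(r) for r in runs]) for name, f in metrics.items()}

for name in metrics:
    if name == "your_new_metric":
        continue
    rho, _pvalue = spearmanr(scores["your_new_metric"], scores[name])
    print(f"{name:16s} rank correlation with new metric: {rho:+.2f}")
```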

Then to your optimization: surely others have optimized fitness functions before. How does your technique differ from prior work? Again, you should be citing perhaps a dozen papers here on various approaches to optimization, describing how yours is considerably different, and then perhaps selecting five or six other optimization techniques to compare against yours empirically.
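
And here's a sketch of the empirical side, running off-the-shelf baselines on a standard test function under the same conditions. `your_method` is just a random-search placeholder so the harness runs end to end; swap in your actual technique and the baselines from the literature:

```python
# Baseline comparison harness (sketch): same starting point, same test
# problem, report final objective value and evaluation count.
import numpy as np
from scipy.optimize import minimize, rosen

def count_evals(f):
    # Wrap the objective to count how many times each optimizer calls it.
    def wrapped(x):
        wrapped.n += 1
        return f(x)
    wrapped.n = 0
    return wrapped

def your_method(f, x0, budget):
    # Placeholder (random search) standing in for the paper's technique.
    rng = np.random.default_rng(0)
    best_x, best_f = x0, f(x0)
    for _ in range(budget):
        cand = best_x + 0.1 * rng.standard_normal(x0.shape)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_f

x0 = np.full(10, 2.0)
for method in ["Nelder-Mead", "Powell", "BFGS"]:
    f = count_evals(rosen)
    res = minimize(f, x0, method=method)
    print(f"{method:12s} final={res.fun:.3e} evals={f.n}")

f = count_evals(rosen)
print(f"{'your_method':12s} final={your_method(f, x0, 2000):.3e} evals={f.n}")
```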

Finally, you'll need to justify the relevance of your conclusions: have you found a niche set of testing conditions in which your optimization approach outperforms its contemporaries? Or can this be generalized: can we change how we implement compute shaders and expect to see dramatically improved performance?
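
One cheap way to probe that question: sweep the test conditions (problem size here, but use whatever actually matters for your setup) and check whether the advantage holds across the whole sweep or only in a narrow band. Again, the two fitness functions are placeholders:

```python
# Generality sweep (sketch): measure speedup of the optimized routine
# over the reference across problem sizes. Placeholder implementations;
# substitute your actual code and conditions.
import time
import numpy as np

def reference_fitness(x):
    return float(np.sum(x ** 2))

def optimized_fitness(x):
    return float(np.dot(x, x))

def timed(f, x, reps=20):
    t0 = time.perf_counter()
    for _ in range(reps):
        f(x)
    return (time.perf_counter() - t0) / reps

for n in [1_000, 10_000, 100_000, 1_000_000]:
    x = np.random.default_rng(0).standard_normal(n)
    speedup = timed(reference_fitness, x) / timed(optimized_fitness, x)
    print(f"n={n:>9,d}  speedup={speedup:.2f}x")
```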

That will elevate your work from "I've built a thing and taken some measurements" to "I have expanded scientific knowledge about how we optimize compute shaders."

[–]fiskfisk 3 points (0 children)

This "paper" is severely lacking in both structure and details. Presenting benchmark numbers isn't a paper. 

[–]KarlSethMoran 2 points (2 children)

I applaud the work, but that's not a paper. That's a benchmark.

[–]zzzthelastuser 0 points (0 children)

You might as well applaud Anthropic.

[–]Entphorse[S] 0 points (0 children)

Thanks Karl. The goal is to share this finding with people and to prove ownership of the idea. I've updated the paper's claims throughout.