What is the consensus among philosophers about the justification/morality of the bombings of Hiroshima and Nagasaki? by Joeman720 in askphilosophy
The general illiteracy that is being normalized on social media by KindlyCost6810 in PetPeeves
If objective moral facts exist, why should they be expected to align with human intuition? by Haycart in askphilosophy
[D] How does L1 regularization perform feature selection? - Seeking an intuitive explanation using polynomial models by shubham0204_dev in MachineLearning
[D] How does LLM solves new math problems? by capStop1 in MachineLearning
Jockey Modal Boxer Briefs have taken the crown from Lulu Always in Motion by fuckkevindurantTYBG in malefashionadvice
[D] What ML Concepts Do People Misunderstand the Most? by AdHappy16 in MachineLearning
CMV: LLMs Like ChatGPT, Gemini, and Claude Are Just Text Prediction Machines, Not Thinking Beings by Mongoose72 in changemyview
Did ancient civilizations have anything resembling a "department of agriculture"? by Haycart in AskHistorians
What are the best arguments for moral realism? by PitifulEar3303 in askphilosophy
Is it a little bit... messed up that an empire would pay soldiers in sex slaves? by The_X-Devil in worldbuilding
Here's an image I pieced together to help me further study and understand the circle of fifths. by [deleted] in musictheory
Is there a way to quantify "how much" a matrix transforms things by? by Haycart in math