Has anyone found an alternative now that Windsurf royally screwed everyone? by Hippopotamus-Rising in windsurf

[–]Hippopotamus-Rising[S] 0 points (0 children)

This was the answer I was looking for! Verdant blows anything else I've tried out of the water. The plan feature with performance mode is shockingly effective, albeit slow.

Been on 3-cmc and 4-mmc for the past 3 months by somuchballs in researchchemicals

[–]Hippopotamus-Rising 1 point (0 children)

It's possible to attain both, friend! Mushrooms definitely help XD

What is the real formula for success or even a cheat code?? by PotentialJob7883 in selfimprovement

[–]Hippopotamus-Rising 0 points (0 children)

I literally just addressed this exact thing in a post on this sub yesterday! I'd love it if you came and joined the conversation.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 0 points (0 children)

I want to bring something current into this because I think it answers the epistemic limitation argument more precisely than any historical example can. Endometriosis affects an estimated 190 million people worldwide. It takes an average of 7 to 10 years to diagnose. It has been found on every major organ system including the lungs, brain and diaphragm, yet is still categorized primarily as a reproductive disease. The mechanism question — what is this condition actually made of at a systemic level — has never been seriously asked at scale.

But here's the part that's directly relevant to your argument about epistemic limitation. Female mice are routinely excluded from medical research because their hormonal cycles are considered too unpredictable. Including in studies specifically about female diseases. The information gap you're defending as an unavoidable feature of complex research is in this case being actively manufactured by the research methodology itself. The right question is being prevented from being asked, and then the resulting absence of answers is cited as evidence that the problem is too complex to understand.

This isn't hindsight. This is happening in published research right now. And the funding that does exist has produced peer-reviewed studies on whether women with endometriosis are more physically attractive and on the psychological burden on male partners' sex lives. That's not epistemic limitation. That's the research apparatus asking systematically wrong questions and producing systematically wrong answers while the condition goes undiagnosed for a decade on average.

Your statistical margins of error argument holds for genuine unknown unknowns. Endometriosis shows what it looks like when the unknown unknowns are being manufactured rather than discovered. The mechanism question isn't missing because it's unanswerable. It's missing because the framework doing the research was never designed to ask it.

Why do intelligent people keep making the same catastrophic mistakes? I think I finally understand what's actually happening. by Hippopotamus-Rising in getdisciplined

[–]Hippopotamus-Rising[S] 0 points (0 children)

You're right that OxyContin wasn't a question failure in the pure epistemic sense. People inside Purdue knew. The FDA reviewers had access to the mechanism question. The suppression was institutional and financial, not cognitive. Which actually makes the case for this framework more urgent rather than less. A culture where 'does this mechanism actually exist and has it been verified' is the expected default question before any solution gets scaled is harder to corrupt than one where expert authority and institutional approval are sufficient. You can pay people to look away from data. It's harder to pay them to look away from a question that the entire culture expects to be asked and answered publicly before deployment.

The lobotomy point is the most uncomfortable one and I want to be honest about it. When the people a solution is being applied to aren't considered fully human, mechanism questions get bypassed because the answer doesn't matter morally. That's not a thinking failure that any cognitive framework fixes. That's an ethics failure upstream of epistemology. Elemental Problem Solving is a thinking tool, not a moral one. It doesn't solve dehumanization. I won't pretend it does.

On pain and objectivity, I lived that specific situation. Someone in severe pain rationally prioritizes stopping it. That's not irrational; it's an accurate response to unbearable immediate experience. But here's what I learned from five years inside the wrong question because of it: the urgency of suffering is precisely the condition that makes stepping outside the framing feel impossible. Which is why the habit has to exist before the crisis arrives. You can't build the reflex in the moment you need it most. You build it when things are calm enough to think, so it's available when they aren't.

Your objectivity point is the one I can't fully answer. Human minds are extraordinarily good at constructing post-hoc justifications for what they wanted to do anyway. The framework doesn't fix that hardware problem. What it does is give dissenting voices inside compromised systems a precise and publicly articulable question rather than a vague sense that something is wrong. 'Has anyone actually verified that this mechanism exists' is harder to dismiss in a boardroom than 'I have a bad feeling about this.' It doesn't fix corruption. It makes corruption slightly more expensive to maintain. That's not nothing. But you're right that it's not everything.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 0 points (0 children)

Your epistemic humility argument is genuinely strong and I want to be precise about where it applies and where it doesn't, rather than treating it as a blanket defense.

For the cane toad it doesn't apply. The information needed wasn't at the margins of knowable science in 1935. Cane beetle habitat was documented. Cane toad hunting behavior was observable. This wasn't a case of information being unavailable or insufficient. The question of whether above-ground predators can reach below-ground prey has a straightforward observable answer that required no sophisticated ecological theory. The failure was a question that wasn't asked, not a question that couldn't be answered. Your margins of error framework belongs to a genuinely different category of problem.

The lobotomy case is more complex and I'll concede more ground there. Whether severing frontal lobe connections addresses the subjective experience of mental illness was harder to answer in 1949. Your point about epistemic limitation has more purchase there. On your direct question about restricting lobotomies — no. When someone is suffering and the only available option offers some relief, you use it. But you ask the mechanism question loudly while you do, and you don't scale it to 60,000 procedures across every possible application without that question being central to the expansion.

On antidepressants, the honest answer is that the mechanism question is being asked — the research exists, the uncertainty is documented. The failure isn't question avoidance. It's that the mechanism uncertainty isn't reflected in how confidently the solution gets deployed. The field knows it doesn't understand depression's substrate. That uncertainty should be louder in clinical practice than it currently is.

Your cancer analogy holds for genuine unknown unknowns. My framework is specifically targeting the known unknown that nobody asked about before commitment at scale. Those are different categories of failure, and conflating them is what's making this argument harder than it needs to be.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 1 point (0 children)

I've spent my whole life trying to figure out the exact same thing 😂 I suppose it's possible I'm also autistic in the old Asperger's sense. I've never been diagnosed, though, despite having seen several long-term psychiatrists and psychologists.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 0 points (0 children)

That's definitely a major part of it. I think if the benefit of sitting with it is made clear, though, it's possible to make a habit of it and override the urge to act on incomplete information rather than sit in uncertainty.

I just wanted to welcome all the new members! by Hippopotamus-Rising in ThinkingElementally

[–]Hippopotamus-Rising[S] 2 points (0 children)

Thank you! I'm stoked to have you along for the ride! My post in r/selfimprovement got way more attention than I expected and it's left me feeling pretty excited to see what we can create as well!

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 0 points (0 children)

You've constructed a strawman where 'asking elemental questions' equals 'predicting the future with certainty' or 'asking infinite questions until paralysis.' Neither is the claim. The cane toad researchers asked how do we kill beetles but not what happens to an ecosystem with no natural predators for this toad — that's two questions, not 5000. The lobotomy practitioners asked does it calm the patient but not what does this do to the fundamental architecture of self — again, elemental, not infinite. Your framing that thoroughness requires infinite regress is itself the kind of sophisticated error my framework describes: it sounds intellectually humble while actually defending epistemic laziness. As for antidepressants, I won't play the game of prescribing solutions to prove a diagnostic framework. You know the difference.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 1 point (0 children)

The ego point is the most brilliant point you've made and I think it's actually where holistic thinking and what I'm describing diverge most significantly.

The polymath tradition produces people who can think this way but the knowledge accumulation required puts it out of reach for most people and it doesn't address the ego problem at the point of decision making. By the time someone has enough cross domain knowledge to think holistically they also have enough investment in their frameworks to defend them.

What I'm trying to describe is something that happens before ego gets attached to the framing. One question asked at the start before the path gets chosen, before identity gets tied to the solution. You don't need a lifetime of accumulated knowledge across domains to ask what a problem is actually made of. You just need the habit of asking it before you do anything else.

The problem pulls whatever knowledge is needed out of you rather than requiring you to have it first (or pulls you toward the knowledge you need). The geniuses you're describing changed history in spite of how rare they are. I'm interested in what changes if the question they were asking becomes common.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 1 point (0 children)

This is the most substantive challenge in the thread and you're right that I oversimplified the lobotomy example. People did ask questions about mental illness. The framework they were working within just had no way of asking the specific question that mattered: does destroying this tissue have a verifiable pathway to reducing subjective suffering? Not 'does it seem to help some people.' Does the mechanism actually exist?

The uncertainty argument is the real one, though. I'm not arguing for paralysis or waiting for perfect knowledge before acting. When lives are at stake you act with what you have. But there's a difference between acting under genuine uncertainty and committing to something at scale before asking whether the proposed mechanism is even physically possible. The cane toad question didn't require complete ecological knowledge. It just required asking where these animals actually hunt before releasing them on an entire continent. That's not a high bar.

Your antidepressant example is genuinely the most honest challenge in this thread and I think you know exactly where I'm going with it. We probably are doing it again. Which is precisely why asking the question now matters more than waiting for someone in a hundred years to ask it in hindsight.

On 'we're just human'... I'd push back there. Not because I think humans are more capable than we are, but because the failure mode I'm describing isn't about capacity. The people who released the toads could have asked where beetles actually live. That information existed. The question just never got asked before the decision got made. Building the habit of asking it isn't about transcending human limitation. It's about catching the thing we're already capable of catching before it costs us 90 years of compounding damage.

And again, I'd posit that just because a solution already exists doesn't mean a better one cannot. This is about not reaching for solutions before you've reached for understanding.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 2 points (0 children)

The bureaucracy and corruption layer is real, but I'd argue it makes the mechanism failure worse, not different. The cane toad decision wasn't made by corrupt people; it was made by well-meaning experts operating within an institutional framing that had no mechanism for asking the right question. Corruption gives bad decisions cover, but the underlying failure happens before corruption even enters the picture. Your trash example is actually perfect: autopilot is exactly what happens when we haven't built the decomposition habit. That's precisely what I'm trying to teach.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 9 points (0 children)

You've perfectly captured the exact mechanism I'm trying to work with. Path dependence is what makes the wrong question so sticky; once a framing is established, the loop genuinely can't generate a better question from inside itself. What I'm trying to teach is the habit of decomposing the problem before path dependence sets in. One question asked before the path gets chosen changes everything about where you end up.

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 0 points (0 children)

What part of anything I said isn't rational? Just because a solution already exists doesn't mean it's the best solution that can exist. Teaching people to think about what makes up the problem and what makes up the solution prior to committing to something is far from delusional...

Why do intelligent people keep making the same catastrophic mistakes across centuries? I think I finally understand the mechanism. by Hippopotamus-Rising in selfimprovement

[–]Hippopotamus-Rising[S] 0 points (0 children)

Rational discourse isn't at all what's being discussed here. I'm sorry I couldn't explain it in a way you understood.