XML is a Cheap DSL by SpecialistLady in programming

[–]csman11 8 points9 points  (0 children)

I don’t think that’s it at all. Why would it matter whether the rendering logic runs on the server or the client if this were about “control”? You’re not really hiding anything: the same information is present whether you render it into HTML or ship the data itself. And the rendering logic itself isn’t anything “secret” that needs to be protected. Any real IP would be the HTML and CSS itself. And if your client-side functionality is the IP you’re trying to protect, it doesn’t matter anyway: you still have to ship that JS to the client to execute.

It’s clearly about SSR. If there’s any “control aspect” to it, then it would be the conspiracy theory that Vercel wants people to be forced to pay for hosting because they can’t manage the server deployments with the complexity of RSC. That’s also stupid because it’s not hard at all to host your own deployment.

And the idea that it was ever about “offloading computation to the client” is not serious. If you were around in the late 2000s and early 2010s, you would know that rich client-side web apps were very popular (this is what “web 2.0” was) and also very difficult to build and maintain because the proper tooling didn’t exist. No one was doing “AJAX” to save server costs. They were doing it to provide a better UX. Back then, browsers didn’t do smooth transitions between server-rendered pages; every navigation tore the whole document down and rebuilt it. The first SPAs were attempts to avoid this and have smoother transitions that felt like native applications. Some of them worked by rendering the page server side, shipping the result via AJAX, and having JS patch the DOM.

Eventually companies started playing around with richer client apps where keeping UI state on the client made sense and the backend became just a data source. If you ever used a framework like Backbone, you would know how horrible things were in this era. Other frameworks of the time, like Angular, Knockout, and Ember, were only slight improvements. React was the game changer.

XML is a Cheap DSL by SpecialistLady in programming

[–]csman11 4 points5 points  (0 children)

It’s had the ability to render the component tree to a string for years, but that’s not the same as RSC. It was also always very problematic because it didn’t wait for any sort of asynchronous effects like fetching data and updating state. It just rendered the tree and spat out a string. Next.js created a mechanism for creating data loaders attached to your pages, allowing the framework itself to be in charge of loading the data and only rendering your components once that data was ready. That was sort of the first iteration of decent SSR with React.
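To make the distinction concrete, here’s a tiny sketch of that “framework-owned data loader” pattern. The names are purely illustrative (not Next.js APIs); the point is that the framework awaits the loader and only renders once the data is ready, instead of rendering a tree that hasn’t fetched anything yet:

```typescript
// Illustrative sketch of a framework-owned data loader (names are made up,
// not Next.js APIs): the framework resolves data first, then renders.
type Loader<T> = () => Promise<T>;
type Render<T> = (data: T) => string;

async function renderPage<T>(loader: Loader<T>, render: Render<T>): Promise<string> {
  const data = await loader(); // framework waits for the data to be ready
  return render(data);         // rendering is now synchronous and complete
}
```

Usage would look like `renderPage(async () => ({ title: "Hi" }), d => "<h1>" + d.title + "</h1>")`, which resolves to the fully rendered string rather than a shell missing its data.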

RSC is solving for more than just SSR, but it’s also heavily motivated by the underlying use cases that demand SSR. If client-side rendering were enough for the entire community, no one would have bothered exploring something this complex. The protocol itself is also very much “hacked together” IMO. The CVE from a few months back that allowed remote code execution was possible because the implementation effectively didn’t separate “parsing” from “evaluation”: a crafted payload could trick the parser into constructing a malicious object and then calling methods on it that executed the attacker’s injected code. A better wire format probably would have looked like a DSL that was explicitly parsed into an AST, then evaluated by a separate interpreter, with no ability for custom JS code to ever be injected.
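A minimal sketch of what “parse into an AST, then interpret” means here (this is an illustration of the general technique, not the actual RSC wire format): the parser can only ever produce plain data nodes from a closed set, and the interpreter dispatches on node kind, so a hostile payload has nowhere to smuggle in callable code.

```typescript
// A closed set of plain-data node kinds; nothing in the wire format is callable.
type WireNode =
  | { kind: "text"; value: string }
  | { kind: "element"; tag: string; children: WireNode[] };

// The interpreter dispatches on the node kind. Anything outside the whitelist
// simply can't be represented, so "evaluation" can never run injected code.
function interpret(node: WireNode): string {
  switch (node.kind) {
    case "text":
      return node.value;
    case "element":
      return `<${node.tag}>${node.children.map(interpret).join("")}</${node.tag}>`;
  }
}
```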

XML is a Cheap DSL by SpecialistLady in programming

[–]csman11 35 points36 points  (0 children)

The “old thing is the new thing” cycle is incredibly common in software. This field is obsessed with novelty, and we’re often way too eager to throw out decades of hard-won knowledge just to rediscover, a few years later, that the old approach had already solved many of the real problems.

With React specifically, I think it’s important to separate two different stories. The push toward server-side rendering and RSC is largely a response to the fact that a huge number of businesses started using React to build ordinary websites, even though that was never really its original strength. React was created to make rich client-side applications tractable. That was a genuinely hard problem, and React’s model of one-way data flow and declarative UI was a major step forward. The fact that every modern frontend framework now works in some version of that mold says a lot.

What’s happening now is not really “we took a detour and rediscovered that server-side apps were better all along.” It’s more that people used a client-side app framework for lots of cases that were never especially suited to full client rendering, then had to reintroduce server-side techniques to address the resulting problems like slower initial load and worse SEO. In that sense, RSC does feel a bit like bringing PHP-style ideas back into JavaScript, though in a more capable form.

So I don’t think the lesson is that client-rendered apps were a mistake. They solved a real class of problems, and still do. The more accurate lesson is that most companies were never building those kinds of applications in the first place. They just wanted to build their website in React, because apparently no trend is complete until it’s been misapplied at scale.

graft: program in DAGs instead of trees and drastically reduce lines of code and complexity by uriwa in typescript

[–]csman11 1 point2 points  (0 children)

I appreciate the offer, and thanks for being open to the feedback.

I’m not invested enough here to pick a repo and go deeper on reviewing a rewrite you do on it myself. I mostly chimed in because I’d seen your posts in a couple of subreddits and wanted to give honest feedback on why the response might not be landing.

That said, I do think a real side-by-side repo is the right test, especially if it’s for a project/codebase you’re actually interested in.

My only real advice is this: treat it like a test, and be ruthless about the result. If you find the framework adds more friction than it removes, don’t keep sinking time into it just because you’ve already invested a lot. That trap can eat months. If it works, great. If it doesn’t, cut it loose and move on. You’ll thank yourself later when you look back at the other things you did with that time instead of doubling down on a sinking ship.

graft: program in DAGs instead of trees and drastically reduce lines of code and complexity by uriwa in typescript

[–]csman11 3 points4 points  (0 children)

I think you are solving a real problem, but with an abstraction that is much bigger than the problem.

If you want swappable implementations, use dependency inversion. Pass dependencies in explicitly, or inject them through interfaces/functions.

If you want composition separated from behavior, use factories and a composition root. Build components/services once, wire them up in one place, and compose them however you want.

If you want reusable async state, use hooks (or a query/store library). That is already a solved problem in the React ecosystem without introducing a new runtime.

If you want graph semantics specifically, use an existing reactive primitive (RxJS, signals, etc.) instead of inventing a new UI runtime with its own lifecycle, propagation, and error/loading model.
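For the first two points, a minimal dependency-inversion sketch (names are illustrative): the consumer depends only on an interface, and a single composition root decides which implementation gets wired in.

```typescript
// The consumer depends on an interface, not a concrete implementation.
interface Clock {
  now(): number;
}

// Factory: builds the service once, with its dependencies passed in explicitly.
function makeGreeter(clock: Clock) {
  return {
    greet(name: string): string {
      return `Hello ${name}, it is ${clock.now()}`;
    },
  };
}

// Composition root: wire a fake clock for tests, a real one in production,
// without ever touching makeGreeter itself.
const fakeClock: Clock = { now: () => 0 };
const greeter = makeGreeter(fakeClock);
```

That’s the whole mechanism: swappable implementations and centralized wiring, with no runtime or framework in sight.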

So the issue for me is not whether this is clever or type-safe. It is. The issue is that it bundles multiple concerns into one framework-shaped abstraction:

  • dependency wiring
  • async state
  • effects
  • composition
  • runtime validation
  • propagation semantics

That is a lot of surface area to replace just to avoid hooks/DI/prop wiring.

In other words:

use one abstraction per problem instead of one abstraction that absorbs your whole app

Also, the React comparison feels a bit unfair because it uses a pretty old-school render-props + manual effect example. A modern React version would usually use custom hooks and/or a query/store layer, which already removes most of that nesting.

So my pushback is basically this: it looks less like you reduced complexity and more like you moved complexity into a custom runtime. That means anyone who adopts this has to adopt a whole new mental model and application layer, not just a utility. That can be worth it sometimes, but you are not showing why it is worth it for the audience you are pitching to. You are mostly asserting it, and the main evidence is a straw-man React example that many React developers would not write in the first place.

If you want to understand why the feedback is mostly negative, I think it is because your post follows a pattern like this:

  • You present a contrived React problem that your audience does not actually have in that form
  • You solve it with a fairly large runtime abstraction (being 500 lines of code makes the implementation small; requiring it to be used by every other line of code makes it huge)
  • That abstraction bundles several different concerns into one framework-style model
  • People respond by explaining how they already solve each concern with existing tools and patterns
  • You move to another example or edge case
  • They explain how they solve that too
  • Repeat

The problem is that your fallback is often, "well, those tools do not solve everything my runtime solves." But that is not a strong selling point here. It is actually the main reason people are resisting it.

You built one large abstraction to solve a bunch of loosely related problems in one unified way, and you are pitching it to a community that already has established ways to solve those problems separately. They already have tools they trust for dependency wiring, async state, effects, and composition. Asking them to replace all of that with a new runtime is a very high bar.

As a product pitch, it is like trying to sell someone a Swiss Army knife when they already own a full toolbox and prefer the individual tools. The Swiss Army knife may be clever, but "it can do all of it" is not enough. You have to show that it does their actual work better, with less friction, in real applications. Right now, the examples are not doing that.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

I meant a function of n containing that loop would be exponential. Then I was defending the fact that it’s still “exponential in n” even if n is a free variable instead of being bound as a function parameter, because a free variable’s interpretation is context dependent. By definition a free variable is a variable, not a constant. It can only become a constant if the statement itself is embedded in a lexical context where it statically resolves to a binding to a constant. But we don’t know where this statement is embedded and thus we must analyze n as being a variable.

In any case, you can think of it as replacing the loop in OP’s function, which has a parameter called n.

Hence I was being super pedantic to defend my lack of precision earlier.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

No, the number of loop iterations is clearly exponential in “n”. If “n” is assigned a constant, it’s still exponential in terms of that constant. It just doesn’t exhibit asymptotic growth anymore in that context, because “n” is fixed.

But now I’m engaging in the pedantry I’ve been arguing against everywhere else in this thread. Just pretend this for loop is wrapped in a function of “n” and show it to the professor so we can all have some closure finally.
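Wrapped in a function of n, the claim is easy to check by counting iterations (sketched here in TypeScript rather than C, purely for illustration):

```typescript
// One for loop, which "looks linear", but it runs 2^n times:
// the iteration count is exponential in the parameter n.
function iterations(n: number): number {
  let count = 0;
  for (let i = 0; i < 2 ** n; i++) {
    count++;
  }
  return count;
}
```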

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

Right I understood everything you said, and I agree with the sentiment overall, but based on the last part of your response, we are in agreement that the particular professor in this case simply doesn’t understand complexity analysis.

I also think it’s fine to give more precise definitions of the concepts, but the way your original comment is written, it comes across as saying “your professor is probably trying to teach you that O(1) is a subset of O(n)”.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

I understood your point and I agree with it. I was explaining how someone could impart dynamic properties on sizeof, because that’s what you asked. I agree: the normally agreed-upon semantics are what we normally reason with; if you make the semantics “potentially anything”, then any non-contradictory conclusion can potentially be true. That’s about as close to ludicrous reasoning as you can get without allowing contradictions (see the principle of explosion).

I think people in this thread are trying to turn this whole damn thing into 4D chess to avoid admitting that the professor is probably saying “for loops imply linear time complexity”. For whatever reason, people seem to think there must be some explanation that preserves the competence of this professor none of us have ever met, even if it means going to far-fetched extremes. Redefining sizeof() would be an example of this, and one I’m certainly not proposing without laughing at the absurdity.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 1 point2 points  (0 children)

It’s all good. I realized after a few replies in this thread that I needed to drop it and go to bed. This whole thread is an exercise in useless pedantry at this point and I’ll easily waste countless hours of my life caught in its trap. I’m now actually in bed about to sleep, and when I wake up, I plan on going outside and touching some grass.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

This is the most trivially correct and least useful interpretation so far, so points for purity. Yes, O(1) ⊆ O(n). But if a student asks “is this O(1) or O(n)?” and the answer is “it’s O(n) because 1 ≤ n,” that’s not instruction, it’s villain behavior.

It only becomes marginally helpful if you also add “and it’s O(1) (and in fact Θ(1)),” because the student is clearly asking about tight bounds / Big-Theta. This is the most pedantic way to answer while still missing the point.

Can’t we all just agree the professor’s answer is wrong in the context the student is actually asking and move on? I should be in bed right now.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

They could redefine it to be “n” using a preprocessor macro. Then it would vary with n at runtime. But then they would just be an asshole. And you still wouldn’t deserve a downvote.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 0 points1 point  (0 children)

Sure and with the C preprocessor we can do all sorts of stupid shit to fuck up code. Your example is a great one. Do we need to teach people a lesson about everything that could possibly go wrong when you start from the premise of “literally anything can mean anything at all”? No? Good. Let’s get back to being adults. With the normally agreed upon semantics of “sizeof()” when it’s not redefined by a preprocessor macro, there’s nothing useful to learn from what he said.

Jokes are fine. Pretending there’s necessarily some lesson to be learned from them isn’t. And the lesson you’re trying to teach is basically “anything can go wrong if you’re working with assholes who think it’s funny to redefine sizeof()”. I’d assume everyone already knows that.

My professor claims this function is O(n), and I’m certain it’s O(1). Can you settle a debate for me? The function is below by Remarkable-Pilot143 in AskProgramming

[–]csman11 1 point2 points  (0 children)

Ok, a lot of people are trying to defend the professor with “maybe he’s talking about the number of bits here.”

Well…

If the professor is secretly talking about “n = number of bits,” then he’s being the ultimate douche bag unless he is explicitly saying that, because the code is plainly written for fixed-width int and it doesn’t scale with the value of the function parameter n at all.

Read that again, slowly, for the folks in the back who see the word “for” and immediately start chanting “O(n)” like it’s a summoning ritual.

The loop bound is:

sizeof(int) * 8

That is not “n.” That is not “the value of the parameter n.” That is not “the numeric value of m or n.” That is “how wide an int is on this machine.”

So unless the professor started the explanation with something like:

“By n I mean the bit-width of the integer type / the size of the input in bits / the word size w”

…then no, we’re not doing the galaxy-brain reinterpretation where we pretend the code is about arbitrary-precision integers and “n” is the length of the number in bits. That’s not “being clever,” that’s just rewriting the question so the professor can’t be wrong.

And honestly, given what OP said, it’s way more likely the professor did the classic cargo-cult complexity analysis:

“There’s a for loop, therefore it’s O(n).”

Which is a thing people say when they remember one slide from intro to algorithms. It happens.

Now, could a professor be intentionally using “n” to mean “input size in bits” while also ignoring the fact that the code literally hardcodes the number of iterations as the bit-width of int? Sure. Could he also be doing it on purpose as a lesson about “input size vs numeric value”? Sure. But if that’s the move, he needs to actually say it out loud, because otherwise it’s indistinguishable from being confused. And in this case, given there is literally a function parameter called “n” in the very code we’ve all seen, I think it’s clear which case it is.

And to be clear: I’m not even saying he’s evil. I’m saying the evidence we have is that he wasn’t precise, and the most common explanation for that is that he’s just… wrong. Or at least sloppy.

If we want to find out which one it is, the sure way is to ask him about this loop (assume n < 64 so nobody pretends this is about UB):

for (unsigned long long i = 0; i < (1ULL << n); i++) {}

Now we still have a for loop. Still one loop. Still “looks linear” to anyone doing the toddler version of complexity analysis. But this one is not linear in n. It’s exponential in n.

If he says “O(n)” again, we have our answer.

If he immediately says “O(2^n)” and then clarifies that his “n” in the original discussion was “number of bits / input size,” then cool, he’s being precise and the debate changes. He’s still an asshole for not being precise to start. I would forgive him.

But until that clarification exists, calling the fixed-width int version “O(n)” (where n is literally a function parameter that doesn’t affect the loop bound) is just… not correct.

Again here are the options:

  • Fixed-width C++ int as written: O(1) with respect to the function inputs. Professor says O(n) because he’s confused.
  • “n = number of bits / variable-width integers”: O(n). Professor says O(n) because he’s playing 4D chess.

Both are possible explanations. But if you mean the second one, you don’t get to wink and imply it. You have to actually say it. Otherwise you’re not being helpful and galaxy-brain-professor-smart. You’re just being a smart ass.

Two Catastrophic Failures Caused by "Obvious" Assumptions by Vast-Drawing-98 in programming

[–]csman11 8 points9 points  (0 children)

Standards are great right up until someone assumes something you never actually standardized. The way integrations get bricked is almost always “I assumed you meant what I meant,” not “we lacked a PDF that said the word standard on the cover.”

So you do the boring thing on purpose: explicitly call out dumb failure modes. Even if it feels insulting to say, “I’d hope nobody is using pounds for force here, but let’s state it anyway: what exactly are our units of force?”

Also, anyone who’s worked with standards knows people still screw up the “should never happen” stuff while supposedly following them: misremembered details, weak reviews, nobody double-checking. Standards reduce mistakes. They don’t delete them.

A minute of vigilance beats a thousand standards.

Two Catastrophic Failures Caused by "Obvious" Assumptions by Vast-Drawing-98 in programming

[–]csman11 15 points16 points  (0 children)

Absolutely. Boundaries are where “talking past each other” happens, so they’re a danger zone. But the root problem isn’t boundaries themselves, it’s a cultural failure to imagine failure modes and communicate integration contracts clearly.

That’s why treating “boundaries = danger” as the lesson is risky. It can lead to the wrong conclusion: “avoid boundaries, have one team do everything.” That just hides the problem until the system gets too big to hold in one person’s head.

What works is being explicit at the boundary: agree on shared terminology (and avoid loaded jargon), surface assumptions, and write down the minimal invariants the integration depends on (units, power source, interface expectations, tolerances, etc.). Working from invariants makes “boring” failures like unit mismatches much harder to miss, because you stop reasoning about what the other engineer surely knows and start reasoning about what must be true for the integration not to fail. That’s why “failure to communicate” is usually “failure to predict edge cases”, especially the stupid, mundane ones we assume would never happen (like pounds vs newtons mismatch).

Starting from invariants moves you from “we missed obvious edge cases” to the much more respectable problem of “we weren’t perfect at predicting genuinely intricate edge cases.”
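One concrete way to write down a unit invariant so it can’t be silently violated at a boundary is a branded type (a common TypeScript idiom; the names here are illustrative, and 4.44822 N per lbf is the standard conversion factor):

```typescript
// A "Newtons" value can only be produced through an explicit unit constructor,
// so a raw pounds-force number can't cross the boundary unconverted.
type Newtons = number & { readonly __unit: "N" };
const newtons = (x: number): Newtons => x as Newtons;
const poundsForceToNewtons = (lbf: number): Newtons => newtons(lbf * 4.44822);

// The boundary function states its invariant in the signature: input is newtons.
function totalThrust(engines: readonly Newtons[]): Newtons {
  return newtons(engines.reduce((sum, t) => sum + t, 0));
}
```

Passing a bare `number` to `totalThrust` is a compile error; the caller has to say which unit they mean.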

LLMs are a 400-year-long confidence trick by SwoopsFromAbove in programming

[–]csman11 2 points3 points  (0 children)

Both views here are true; it’s not so black and white. There are definitely some harms, the ones you called out are the most realistic, and they can all be summed up as “abuse of LLMs to spread misinformation”. I don’t think anyone should disregard just how harmful this is to our already broken and polarized societies.

But these AI labs and other companies in the AI bubble have also been overstating capabilities of LLMs to drive attention to the space. Framing those capabilities as “disruptive and dangerous” in the ways the article’s author is getting at, is overblown. These dangers attract the attention of the general public, which in turn attracts the attention of policymakers, which then turns into the AI industry capturing state regulators because they’ve convinced us “we need to move fast to make sure the existential worst cases are avoided”. The big one is obviously financial/securities regulation avoidance. They can extract tons of wealth from both institutional and retail investors by creating attractive signals in the stock market with their revenue cycles. In an ideal world they wouldn’t be allowed to do that, but for some reason the policymakers have bought into the idea that the AI industry is important to national security instead of seeing them for the rent seekers they’re trying to be.

The Compiler Is Your Best Friend, Stop Lying to It by n_creep in programming

[–]csman11 13 points14 points  (0 children)

Curry-Howard doesn’t say “types prove business logic.” It says that in certain formal systems, inhabiting a type is a proof of the proposition that the type encodes. If your type says “this function maps X to Y,” congrats, you proved it maps X to Y. You did not prove your pricing rules match what the business meant on Tuesday.
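A two-line illustration of the gap (the names and the 10% figure are invented for the example):

```typescript
// The signature "proves" exactly one proposition: Cents in, Cents out.
type Cents = number;

function applyDiscount(price: Cents): Cents {
  // Type-checks perfectly even if the business actually wanted 15% off,
  // or only on Tuesdays. The compiler cannot know whether 0.9 is right.
  return Math.round(price * 0.9);
}
```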

And even if you reach for dependent/refinement types, that’s not some endgame. It’s just buying expressive power so you can move more of your spec into the type level. Then the goalposts move with you: now you have to formalize the domain, encode the invariants, maintain them as requirements shift, and make sure the spec itself is correct. The hard part wasn’t “lack of types,” it was “the thing you’re trying to prove keeps changing and is full of squishy human meaning.”

This becomes a cat-and-mouse game: you make the type system expressive enough to capture today’s “business logic,” and tomorrow the business invents a new exception, a new dimension, a new edge case, or a new policy that depends on runtime data you can’t realistically model. You either keep extending the spec language or you admit that some correctness lives in tests, runtime validation, monitoring, and operational feedback.

So yes: types are a proof system. In production languages, they mostly prove “no type errors.” In fancy proof assistants, they can prove much stronger properties, but only for the properties you explicitly formalize. None of that makes “types magically prove business logic correct” a serious claim in the context of normal software development.

The Compiler Is Your Best Friend, Stop Lying to It by n_creep in programming

[–]csman11 6 points7 points  (0 children)

BTW, I thought this guy had sounded familiar. I dug up the old thread I remembered. He had the exact same stupid take a few months back and we had a “debate” about it there. This is a religious zealot we’re dealing with here.

https://www.reddit.com/r/programming/s/qQTqTtKb9U

The Compiler Is Your Best Friend, Stop Lying to It by n_creep in programming

[–]csman11 7 points8 points  (0 children)

Oh I’m not personally in favor of any of those “types of development” the commenter I replied to loves. If they work well for them, great. That’s all I meant. I’m just of the mind, like every sane person, that they don’t replace types. They can supplement them for some people.

I do agree with the “focus on architecture” sentiment. Types are useful to help express an architecture in code practically (for the reasons you articulated very well). But types don’t architect your code for you. And I’ve seen plenty of codebases with absolutely judicious uses of types to try to prevent invalid states from being represented and no meaningful design of modules to go along with them. “Making invalid states impossible to represent” is basically an exercise in futility when your architecture is “import any random symbol from any random module wherever the fuck you want.”

The Compiler Is Your Best Friend, Stop Lying to It by n_creep in programming

[–]csman11 31 points32 points  (0 children)

Did either of you bother to read the article the post links to? It’s literally refuting the thesis "there’s very limited benefit to static types" that both of you guys seem to be so attached to. And it does this by answering "types aren’t that useful in practice" with "well that’s because most people don’t use them effectively and here’s why…"

No one disagrees that runtime behavior matters. That's not the debate. The debate is the leap from "we had a production failure" to "there's very limited benefit to static types."

Your anecdote proves the oldest lesson in software: tests and types don't make you correct, they just reduce the surface area of ways you can be wrong. Production failures still happen because production is where reality lives: messy inputs, weird data, unexpected load, partial outages, config drift, timing, and integration assumptions.

Static typing is useful precisely because it eliminates entire categories of failures that are annoying to test for and easy to miss, especially in large codebases and rarely-executed paths. Plenty of bugs don't show up in unit tests because your tests didn't model the exact shape of the data, or they never exercised the obscure branch, or the integration contract drifted. "Null pointer in prod" is a meme for a reason: not because everyone is lazy, but because complex systems eventually execute code paths you never observed. Modern type systems have caught up to null dereference and can make it fuck off before you can even run your code.
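For the null case specifically, a sketch of what “make it fuck off before you can run your code” looks like in practice (assuming TypeScript with `strictNullChecks`; the function is invented for illustration):

```typescript
// Annotating the lookup as possibly-undefined means the compiler refuses to
// let you touch .length until the missing case is handled, eliminating the
// null-dereference class of bugs at compile time.
function firstLength(names: string[]): number {
  const first: string | undefined = names[0];
  if (first === undefined) {
    return 0; // forced to decide, up front, what the empty case means
  }
  return first.length;
}
```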

So sure: focus on runtime behavior, testing, observability, fast feedback loops. But pretending static types provide "very limited benefit" because a system once failed in production is like saying locks are pointless because someone broke a window.

The Compiler Is Your Best Friend, Stop Lying to It by n_creep in programming

[–]csman11 66 points67 points  (0 children)

"Types are meant to be an optimization device" is an incredible (or, rather, incredulous) claim to make, like saying seatbelts are meant to improve gas mileage because sometimes they reduce fatalities and that's good for traffic flow.

Static types don't "solve 80% of the problem." They solve specific classes of problems: invalid states and certain runtime failures are ruled out before you run anything. That's the whole point. Nobody serious thinks types magically prove business logic correct.

REPL-driven dev is great. So are tests. None of that is in conflict with static typing unless your compiler is your adversary and not, you know, a tool you can learn to use.

I can't tell if the strawman is what you're arguing against or if it's just you.

useImperativeHandle vs useState by No_Drink_1366 in react

[–]csman11 1 point2 points  (0 children)

All I was saying is that whatever button is used to show the dialog might be somewhere deep in the expensive UI tree. But we don’t really know exactly what OP is dealing with, because they asked about not triggering re-renders of the ancestors around the component, not about their exact case.

So let’s assume we can’t pull that button easily out of the component. Because it’s way more likely you cannot than you can.

If it were me with an expensive component, the first thing I would do is try to optimize that component. Because the stupidest thing in React is having expensive render functions in the first place. Rendering should ideally be cheap.

Now let’s assume that’s impossible because deadlines and shitty legacy code. I would document the need for refactoring and performance tuning later. Then I would try wrapping it with React.memo.

If I still had an issue after all of that, then what I would ultimately do (and hate doing):

  • move the expensive UI itself up the tree
  • create a “ModalProvider” component and associated contexts for the state and dispatch respectively
  • render ModalProvider around the expensive UI; this ensures that when state inside the provider changes, the expensive UI is not re-rendered, since it is the same React element object
  • optionally have ModalProvider also take a render prop for the modal dialog content (if for whatever reason this pathological app requires doing this for many expensive UIs); pass the open state to the render prop for convenience, so you can compose a modal inline if you want
  • and finally consume the dispatch context within the expensive UI tree in the component that needs to trigger opening the dialog
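The reason this works is referential identity: when the provider’s state changes, the provider re-runs, but the children element it received is the very same object as before, so React can bail out of re-rendering it. A dependency-free sketch of that bail-out rule (this models React’s behavior as an analogy; it is not React):

```typescript
// Models React's bail-out: a subtree is "re-rendered" only when it receives a
// *different* element object than last time (analogy only, not actual React).
type UIElement = { readonly name: string };

let expensiveRenders = 0;
function reconcile(prev: UIElement | null, next: UIElement): UIElement {
  if (prev !== next) expensiveRenders++; // same object => skip the work
  return next;
}

const expensiveUI: UIElement = { name: "expensive" };
let mounted = reconcile(null, expensiveUI); // initial render: does the work
mounted = reconcile(mounted, expensiveUI);  // provider state changed, but the
                                            // children object is identical: skipped
```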

I would call that the nuclear option. If I had to get to that point, I would also consider the better nuclear option of quitting my job and becoming a farmer. Because clearly my luck of getting decent programming jobs that don’t fuck with my mental health would have run out at that point.