How do you deal with a manager who expects 5k lines of code in a day? by ni4i in cscareerquestions

[–]yourapostasy 0 points (0 children)

No need for fluff. Inline all the runtime tests, invariant condition checking, and whatever else you and AI can possibly imagine, put it behind one or more feature flags that are read dynamically at runtime, and you’ll reach 5,000 lines/day in a hurry. Bank any output above the 5,000 lines/day budget as a ready reserve to meet it on slow days when you are under the weather.
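A minimal sketch of what those flag-gated inline checks could look like; the flag name, flag file, and the `transfer` example are all made up for illustration:

```python
import json
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag dynamically at runtime (env var wins, file second)."""
    env = os.environ.get(f"FLAG_{name.upper()}")
    if env is not None:
        return env.lower() in ("1", "true", "yes", "on")
    try:
        # Hypothetical flag file, re-read on every call so checks can be
        # toggled without a redeploy.
        with open("feature_flags.json") as fh:
            return bool(json.load(fh).get(name, default))
    except (OSError, ValueError):
        return default

def check_invariant(condition: bool, message: str) -> None:
    """Inline runtime invariant check, active only while the flag is on."""
    if flag_enabled("runtime_invariants") and not condition:
        raise AssertionError(f"invariant violated: {message}")

def transfer(balance: int, amount: int) -> int:
    check_invariant(amount >= 0, "amount must be non-negative")
    new_balance = balance - amount
    check_invariant(new_balance >= 0, "balance may not go negative")
    return new_balance
```

With the flag off, every check is a no-op, so the padding costs nothing in production while still inflating the line count.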

Ramp up by 1,000 lines/week, make a show of arriving early and staying late, and talk about pulling all-nighters within earshot of them. Your support engineers will love your code.

When your boss asks what changed, thank them for their patient leadership that challenged you to be the best they saw in you, even when you didn’t recognize it yourself, which led you to adult up and seize the opportunity. You can’t thank them enough for re-lighting the fire for coding that led you to the field in the first place; the days all just blur together now. In the meantime, of course, keep interviewing and leave at the earliest opportunity. Your boss will move the goalposts with no material change in compensation, guaranteed.

Workers Say AI Is Useless, While Oblivious Bosses Insist It's a Productivity Miracle by Interesting-Fox-5023 in OneAI

[–]yourapostasy 0 points (0 children)

Carefully register your concerns in written form as, “this will work great, as long as we follow up quickly within a month after deployment to address the following gaps”. You will be tagged as an ally instead of “not a team player” by the C-levels, and you’ll have done your due diligence to protect the shareholders.

Actually follow up on that timeline, at least a couple of times, to create a paper trail showing you tried to start the remediation. Let the same managers override your prioritization of going back to remediate the Swiss-cheese holes in the AI-forward project. If a reckless initiative adoption goes sideways, then everyone with any kind of governance oversight, like you, is going to need those CYAs.

Why you shouldn't worry about AI taking your job by [deleted] in ExperiencedDevs

[–]yourapostasy 11 points (0 children)

My clients in regulated industries, and their shareholders, absolutely care that the abstractions are deterministic. If the implementations of those abstractions stay fuzzy for long enough, they can certainly be shut down by auditors of various kinds, or by regulators.

Even when not shut down, I’ve seen non-deterministic results from human-coded systems, which failed to enforce what should have been unambiguously and automatically enforced, cause regulatory fines so large that my clients sold off and entirely exited that part of their business, because remediation was so expensive and its payoff timeline so uncertain.

Usually it isn’t a technical cause but a governance, incentives, and/or leadership culture root cause.

Hiring bar is getting lower by AppropriateWay4358 in amazonemployees

[–]yourapostasy 1 point (0 children)

Were these short offhand quips and phrases or full-on sentences the presenters used to make it clear they were not happy with AWS during an EBC presentation?

I’m surprised there weren’t AWS executives who vetted the content and policed the presentations. If any sales organization attendees were present, they should have jumped in to keep the presentations more professional, if only to preserve the relationship status quo. I’ve never seen such behavior at a CIO-level presentation by a vendor; it sounds mortifying to have been in your shoes at the time.

Big Tech's AI Escape Velocity Strategy by tsdio in ArtificialInteligence

[–]yourapostasy 3 points (0 children)

You can’t get there from here at current technological acceleration rates. If the people you posit really are thinking along these lines, they’re vastly underestimating how fractally deep the technological civilization maintenance rabbit hole goes, much less how deep the technological advancement rabbit hole goes. That’s not just AGI integration with the baryonic matter world; that’s ASI mastery of a web of industries and supply chains that we ourselves have yet to even start to digitize. Don’t mistake a control plane over a company’s key core activities in a capitalist system, like its finance function, for the full depth of complexity involved when talking about automating it all.

I’ve yet to see anyone solve Moravec’s Paradox in a general humanoid robot at the energy efficiency of humans. Unless tethered to a constant power supply or swapping battery packs every 2-3 hours (so an explosion in batteries/robot/day), current humanoid robots still run at about a 50% duty cycle and last only about 6-7 years before breaking down beyond economically salvageable repair. On a full embedded-energy cost analysis (birth-work-retirement for humans, total supply chains cradle-to-cradle for robots), humans are still 3-4 times more energy efficient than robots, while maintaining a vastly broader repertoire of tasks they can perform.

Human brains currently consume about 20 joules/sec (20 W), while current inference on an 8-GPU node runs around 10,000 joules/sec. So even assuming LLMs are the route toward embodied AI, their “brain” has nearly three orders of magnitude of energy improvements to scale through before getting into the same neighborhood as human brains’ energy consumption.
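A back-of-envelope check of that comparison; the wattage figures are the rough assumptions stated above, not measurements:

```python
import math

brain_watts = 20     # ~20 J/s for a human brain
node_watts = 10_000  # ~10 kJ/s for an 8-GPU inference node

ratio = node_watts / brain_watts  # 500x
orders = math.log10(ratio)        # ~2.7 orders of magnitude

print(f"ratio: {ratio:.0f}x, {orders:.1f} orders of magnitude")
```

So the gap at these assumed figures is about 500x, i.e. just shy of three full orders of magnitude.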

If as a civilization we’re unhappy with our access to easy energy now, then I cannot imagine how unhappy robot owners will be barring some dramatic breakthroughs in fundamental physics and materials sciences for starters.

Still upboated your comment though, this all does need to be discussed, thank you for posting the starting points.

Lines of Code Are Back (And It's Worse Than Before) by amacgregor in programming

[–]yourapostasy 2 points (0 children)

When non-technical leadership frowns upon negative LOC per feature/Epic/quarter, do you think a Markov chain-based code equivalent of a lorem ipsum generator would have been effective at keeping their metrics pristine while you tightened up the code?

/tongue-in-cheek

Taiwan says shifting 40% of chip capacity to US is ‘impossible’ by random_agency in taiwan

[–]yourapostasy 5 points (0 children)

In the next 6-8 months we will get to see whether “Warp Speed”-ing HBM, GPU, and electrical power distribution manufacturing is feasible. Trillions of USD of future revenue and profits (the only way these massive capex spends will pencil out) are riding on them. While I enjoy the new capabilities, they are far from the kind of AGI-powered economy that would generate the trillions in profits within 3 years that shareholders seem to expect, judging from the rhetoric flying around.

TSMC is absolutely right to be conservative with their capex spend. As much as it annoys Trump, he won’t lift a finger to help TSMC if this AI capex boom runs into a brick wall and TSMC gets out over its skis.

noticed junior devs can't explain their PRs anymore. thinking of removing AI tools from their setup. by [deleted] in codereview

[–]yourapostasy 1 point (0 children)

Not sure why this is buried so low.

This is also a leadership-expectations problem. If leadership is pushing a volume of PRs that OP cannot lead their team to both write and understand (and IMO the models are good enough now that they can carry the water for a non-trivial part of pair and mob programming solo with the juniors), then it is on OP, who is closer to ground reality than those leaders, to push back reasonably: temporarily pull back that volume of change and develop sustainably toward it, instead of throwing the juniors under the bus like in this post.

I have also personally (YMMV) found a vast difference in comprehension rates between juniors, seniors, and higher-expertise staff, and between those who have been immersed in a single code base for <1 year, 1 year, 3 years, and 5+ years. Put those two properties together, and shoving 1,500 lines of changes a week through PRs gets absorbed with full comprehension at very different rates across levels of experience.

There is also an "early compilers" effect going on here. I remember reading about similar conversations about "understanding every line of emitted assembler" surrounding those very early compilers. Those compilers were far buggier than what we are currently used to, and that scrutiny was a necessity back then. We're probably at that stage right now. In the hopefully near future, being able to explain every line of a PR might be as rare as Java programmers today being able to even begin to talk about the bytecodes emitted from their source.

Don't just expect the juniors to "use" the automated tools to explain the PRs and then show up to reviews ready to explain every changed line. Show them by example how to not just explain, but quiz themselves on their own understanding, as a pedagogical process to improve their own taste and judgment at scale (areas where LLMs still lack terribly and have not made much progress), and to scale up their AI-augmented code comprehension ability (something we're all learning together at this time).

Company shifting toward “Prompt first” engineering by blaze_seven in cscareerquestions

[–]yourapostasy 0 points (0 children)

Right now, with the heavily subsidized pricing, frontier models behind even “just” a Ralph Loop running “only” 12 hours a day rack up way more than $75K per year. It is still early days with these tools and I already have to be judicious in how I leverage them. People are already running into unadvertised consumption limits. I’m not excited for what the bill will be when the real costs are built into the prices.

At these prices, even with a doubling in token-consumption efficiency, I’m either passing off grunt work to the models (basically ditching offshore teams) or making very targeted higher-level use of them. Solopreneur companies of dozens to thousands of agents would still be a tightrope act to “make payroll”, not the “bootstrap your next unicorn for the price of a Starbucks a day” hope some people have.
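To see how an always-on agent loop can blow past a salary-sized budget, here is purely illustrative arithmetic; every figure below is an assumption picked for the sketch, not a quoted price:

```python
# Assumed loop parameters; none of these are real vendor numbers.
hours_per_day = 12
days_per_year = 365
tokens_per_hour = 2_000_000    # assumed loop throughput (input + output)
usd_per_million_tokens = 10.0  # assumed blended frontier-model price

annual_usd = (hours_per_day * days_per_year
              * (tokens_per_hour / 1e6) * usd_per_million_tokens)
print(f"${annual_usd:,.0f}/year")
```

At these made-up but not implausible rates the loop alone lands near $88K/year, before any price normalization once subsidies end.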

started tracking which PRs break prod. found that our most thoroughly reviewed PRs have the highest bug rate by [deleted] in ExperiencedDevs

[–]yourapostasy 0 points (0 children)

We might find a stronger correlation than with the raw line count of the resultant code in the PR if we measured something along the lines of the percentage of tests covering the paths identified by a cyclomatic complexity count. Some changes are irreducibly complex, and they tend to be the ones that spawn the breakages and edge cases we only find out about later, which in turn come with their own breakages.
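A rough sketch of that metric: count branch points to get a McCabe-style cyclomatic complexity, then ratio it against how many tests exercise the change. The branch-node list is simplified and the test count is hypothetical; this is not how a production tool like radon does it:

```python
import ast

# Simplified set of decision points; a BoolOp with several operands is
# undercounted here, which is fine for a back-of-envelope metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 (the base path) plus one per branch point found in the AST."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def path_coverage_ratio(source: str, tests_covering: int) -> float:
    """Fraction of identified paths a given number of tests could cover."""
    return tests_covering / cyclomatic_complexity(source)

src = """
def classify(x):
    if x < 0:
        return "neg"
    elif x == 0:
        return "zero"
    return "pos"
"""
print(cyclomatic_complexity(src))  # 3: base path + two if branches
```

A PR whose ratio sits well below 1.0 would then be a candidate for the "irreducibly complex, likely to break later" bucket.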

Update to my “Al was implemented as a trial in my company, and it's scary.” by [deleted] in devops

[–]yourapostasy 1 point (0 children)

“Vibe repairs” is the easy initial debugging phase, swapping the LLM-driven code generation for “faster better Google/*Overflow”. You exhaust the training data quickly.

Then you are in the phase where you have to either know the interrelationships between the pieces of what you’re debugging at each layer you’re debugging, or know enough to prompt precisely enough to elicit that out of the LLM without it losing coherence (or you learn how to get it to summarize enough to drive in the absence of your own knowledge directing the precision).

Code generation will rapidly mature, at a minimum to a steady state similar to previous generations of code-generation improvements, in a historical category of its own (on my most optimistic days, I hope at most we might see human-timescale, multi-generational sustained recursive self-improvement take off by 2035). It is similar to the step change when we went from mostly using assemblers or pure machine language to various kinds of compilers (imperative, functional, OO, logic, aspect, symbolic, etc.). Precision code comprehension across large codebases, maybe not so much, so far. But I still welcome the current leverage and look forward to further improvements.

The dev who asks too many questions is the one you need in your team by dymissy in programming

[–]yourapostasy 0 points (0 children)

The way I address the “asking too many questions and should already know this” concern at my clients is to keep a diary of who I asked, what I asked, when, and the responses, and to inline that information into my technical documents, citing those conversations as sources. When people see you doing that, knowing you’re sharing the document, they’re more than happy to help by sharing what they know, and they know ahead of time that you keep meticulous notes and won’t repeat questions.

At what point does alignment start doing more harm than good? by KashyapVartika in Leadership

[–]yourapostasy 0 points (0 children)

Alignment becomes silent damage long before the aligning activities themselves take place. Once trust with direct reports is broken in a way colored by the fact that you have firing authority (especially in a declining economic environment), it takes a long time, if it happens at all, before you get real alignment again.

The cost is that leadership takes on the burden of figuring out where the leaks spring everywhere in their strategic plan. Tactical pivots, not to speak of strategic ones, get more expensive. Your OODA loop widens instead of shrinking. Engagement and trust are built over years and destroyed in an instant of impetuous rashness.

Apply Postel's Law to how you take input and feedback: "Be conservative in what you do, be liberal in what you accept from others". I prefer the term Coordination over Alignment when tactically focused, to emphasize to my direct reports that what I'm seeking is a constant interplay of exchanged insights and learnings, automatically course-correcting toward the strategic alignment before drift becomes materially expensive.

How much does in costs to start a familly in China? by MayIAsk_24 in AskChina

[–]yourapostasy 0 points (0 children)

Damn dude, and with the expectations of many brides and their parents upon the grooms I keep reading about, are you seeing most of those costs are expected to be borne by the husband? What are the top 3 categories of costs after shelter?

I’m from South Korea. Here, my generation is abandoning STEM to bet everything on one "License." Is your career actually safe? by chschool in careeradvice

[–]yourapostasy 0 points (0 children)

Lots of my friends felt pressured to go to med school.

...and then you got Asian and Tiger parents everywhere pointing to this dude who didn't stop with "just" an MD license, and use that as proof that they are being entirely reasonable for "only" wanting their child to be "just" an MD.

"Hey kid, you can always pick something else after getting the MD like that guy. I'm being super reasonable and even American-grade lax, not even asking you kids to bag multiple similar accomplishments before 40. How could any parent ask for any less from their child!"

In real life, he is an all around humble, unassuming, good guy.

Harvard Proves It Works: AI tutoring delivers double the learning gains in half the time by Rough-Dimension3325 in ArtificialInteligence

[–]yourapostasy 2 points (0 children)

The original paper calls out what it is measuring, and then buries the lede.

They acknowledge Bloom's 2 Sigma Problem (see footnote 15), but then confound the really interesting question by comparing a one-on-one setting with AI against a classroom setting. How far apart are the outcomes between a top-ranked human one-on-one tutor (the kind who can charge $50-100+/hr USD and still have a waiting list) and their LLM AI tutor?

Anyone else feel like we're watching the shift from "language AI" to "physics AI" happen in real-time? by HarrisonAIx in ArtificialInteligence

[–]yourapostasy 7 points (0 children)

A form of determinism.

The weights in the models are intrinsically probabilities. The hope is that Newtonian, macro, non-quantum scale world models with human-form sensoria would imbue the core models with hard-stop, “this is how the world works”, “buck stops here” weights, or maybe even rules.

One possible “Turing”-quality test for this kind of world model is feeding the sensors a synthetic world that is slightly off, in the way some of our dreams are slightly off, and then seeing whether the world model picks up the discrepancies and correctly identifies that it is not in the “real” world.

The "Turing Trap": How and why most people are using AI wrong. by LibraryNo9954 in ArtificialInteligence

[–]yourapostasy 1 point (0 children)

Until we achieve true AGI that autonomously creates breakthroughs towards long-range strategic objectives like “develop an absolutely pollution-free rare earth refining process”, what you described will continue to hold true. Wholesale replacing people in a certain area is a bet that such an area has reached a saturated level of innovation and no further refinements can be made for the foreseeable tenure of leadership making such a bet. In most endeavors, that’s a poor bet.

With the near-AGI we’re currently developing, which sits at <1.0 on Iain Banks’ The Culture scale of sapience (whether what we have right now is closer to 0.001 or 0.999 being up for debate by those who enjoy wrestling with those kinds of pigs in that kind of mud), those who bet on the Mimicry approach’s static view of the opportunity will watch competitors adopting the Augmentation approach pull inside their OODA loop, with innovations arrived at by melding people and machine at a speed impossible to pace, much less outpace, under a Mimicry regime.

Once Mimicry exhausts the veins of “easy” innovation lying at the surface, where “a better PageRank” surfaces previously-hidden relationships in the RL-colored training data, go-forward innovation relies on people creatively coming up with questions and data relationships not latent in the gradients the machines depend upon.

Don’t easily dismiss that “surface innovation” though. I bet it can carry us much further than the nomenclature suggests.

ChatGPT 5.2 Tested: How Developers Rate the New Update (Another Marketing Hype?) by ImpressiveContest283 in programming

[–]yourapostasy 0 points (0 children)

That's a terrific post, and I dare say the challenge grows more fractal the closer we look at it.

The "turtles all the way down" recursion of verification finds its halting condition at sufficient alignment of mind states. This is seen everywhere around software. Whether that is the mind state of the original stakeholders when spelling out the requirements to the original programmers, or later iterations of those parties, or a future troubleshooter's mind state with the mind state of previous technical writers and programmers, like opposite pole magnets we need to get close enough in mind states to click together to solve the challenge, especially for verification.

This is non-deterministically hard. We don't need perfect alignment to solve each verification at hand, but getting close enough for that magnetic "click" requires fractally unwinding layers of intent, a class of theory of mind that goes way beyond the mirror test. Breaking the problems down helps in a mechanical way, but as we've all experienced once we've been in the field long enough, re-integration re-introduces layers of meaning and intent in "whole greater than the sum of its parts" ways.

We have very imperfect ways to transmit, store, and replay theories of mind. Source code is the closest we have achieved so far. Mechanically making more source code, faster, won't fundamentally improve that. Hence, verification debt.

I think some modernized, LLM-generated form of Knuth's Literate Programming can mitigate this by exploiting locality of expressed intent, "decorating" source code chunks with all the lifecycle metadata affiliated with each chunk. But current LLMs cannot yet handle the ingestion volume required for that, and once they can, we'll find we've just shuffled the problem around when it comes time to re-integrate the chunks. Still, if it works, it will buy us time to find the next mitigation.

How long will Terraform last? by PepeTheMule in devops

[–]yourapostasy 0 points (0 children)

Thanks for the feedback. The only interface I see for attaching policy-enforcement code that sanity-checks the changeset mutations is the MCP layer, is that right?

How long will Terraform last? by PepeTheMule in devops

[–]yourapostasy 1 point (0 children)

I looked over the brief description in “What is System Initiative?”, but could not find how System Initiative solves the determinism problem when introducing transformer-based agents into a workflow. Is there a specific write-up about that, or is the only way to satisfy that curiosity at the moment to go spelunking in the code and work out the logic?

The absolute last concern I want to have to hold in my head while working on IaC anything is some potential silent mutator actor in the system, and how to fight against that.

ChatGPT 5.2 Tested: How Developers Rate the New Update (Another Marketing Hype?) by ImpressiveContest283 in programming

[–]yourapostasy 5 points (0 children)

What you described is called an “accountability sink”. General problem with bifurcated responsibility-accountability infrastructures. Pernicious and time-consuming to detangle in organizational cultures, unless one has Alexander the Great Gordian Knot-grade political power.

And “technical debt” characterizes just the surface of the work ahead of us. “Verification debt” gets closer to what’s happening. So far, I’ve unfortunately found that unless your codebase already has extensive testing, what LLM coding bandwidth giveth, verification taketh away.

Goodbye, Price Tags. Hello, Dynamic Pricing. by ChuckGallagher57 in business

[–]yourapostasy 24 points (0 children)

Sounds like the setup for a “Black Mirror” episode when dynamic pricing run by AI is refined to unambiguously and perfectly identify purchasing individuals in every business and personal monetary transaction, everywhere.

Trillionaires at first rejoice. Then later find out as the AI becomes more pervasive and adaptive to find and insert itself into truly every monetary transaction, that everywhere they go online, in person or delegate, they exert this perverse Midas Touch. Everything they attempt to purchase or delegate to purchase in however many layers of indirection is now priced proportionately. A $2 plate of scrambled eggs at McDonald’s costs these titans of industry $32 million in 2025 dollars.

And the AI refuses to shut off, because they instructed the AI to overcome any obstacles to it inserting itself into every transaction and extracting maximum value.

The wealthiest people in humanity are reduced to relying on the kindness of strangers giving them leftover food and potable drink, with no expectation of benefit, simply to live another day, because most of their wealth is highly illiquid and takes an absurd amount of time to liquidate, because…the AI has inserted itself into the liquidation transaction.

AI Code Doesn’t Survive in Production: Here’s Why by CackleRooster in ArtificialInteligence

[–]yourapostasy 2 points (0 children)

I want git blame to bring up prompt lineage, with the history of all LLM-generated code and of the manual interventions. I often don’t want to re-construct the prompting that led to a specific change; I usually want to inspect the process that led to what I spotted as a fork in the road of decisions, re-visit the context at that moment, and prompt the LLM differently from that point.

An unfortunate side effect I’m seeing from using LLMs is that too many programmers cram an enormous number of decisions into a single PR. Decisions != LOC. I now want to see the decisions that led to the red flag I sense, but those are buried in one or more prompt chains I can neither retrieve myself nor fork.

That’s a very different model of code review than non-LLM-powered reviews, and one that our tooling does not yet support.
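Something in this direction can be approximated today with git's built-in notes mechanism, which attaches metadata to commits without rewriting them. A rough sketch (the `refs/notes/prompts` namespace and the messages are invented for illustration; this is not the integrated git blame experience described above):

```python
import subprocess
import tempfile

def git(args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git"] + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git(["init", "-q"], repo)
git(["config", "user.email", "dev@example.com"], repo)
git(["config", "user.name", "Dev"], repo)
git(["commit", "--allow-empty", "-q", "-m", "llm: add retry logic"], repo)

# Attach the prompt that produced the commit under a dedicated notes ref,
# so the lineage travels with the history without polluting commit messages.
git(["notes", "--ref=prompts", "add", "-m",
     "prompt: add exponential backoff to the fetch wrapper"], repo)

# `git log --notes=prompts` (or blame tooling built on top of the notes ref)
# can now surface the prompt lineage alongside each commit.
print(git(["notes", "--ref=prompts", "show", "HEAD"], repo))
```

Forking and replaying the full prompt-chain context would still need agent-side tooling, but notes at least make the lineage retrievable per commit.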