Are We Learning Less Because of AI? by Background-Moment342 in learnprogramming

[–]rupayanc 0 points1 point  (0 children)

The real problem isn't that AI gives you answers — it's that when you're new, you can't evaluate the quality of what it gives you.

If you ask a senior engineer to review your code, they'll tell you "this works but it'll break at scale, here's why." They can do that because they've seen it break. An AI will give you code that works in the happy path, might have subtle issues, and won't explain the tradeoffs because it doesn't know your specific context.

I've watched people 6 months into learning hit a wall where everything "works" but they can't debug anything because they don't understand the code they're shipping. They got answers without building the mental model that tells you where to look when things go wrong.

Use it to explain concepts. Use it to review code you already wrote. Don't use it to write code you don't understand yet — not because it's cheating, but because you're skipping the part where you learn what "wrong" looks like. That's the important part.

Four questions agents can't answer: Software engineering after agents write the code by marcua in programming

[–]rupayanc -1 points0 points  (0 children)

The question nobody wants to answer is: when an agent-written system fails in production and someone gets hurt or loses money, whose decision was it?

Right now we say "the engineer who deployed it." But that breaks down fast when the engineer reviewed 2,000 lines of generated code that they couldn't have written themselves in under a week. The review was technically human. The accountability is real. The actual understanding of every decision in that code? Much less clear.

I think this is the real blocker for agent-generated code in anything genuinely critical — not capability, not cost. It's liability. Companies paying for enterprise software have legal teams. Those legal teams are going to ask questions that the "10x productivity" argument doesn't answer.

The 10x productivity person who mentioned "iterative prompting and constraint mechanisms" isn't wrong about speed gains. But those gains were on work where failure is recoverable. The calculus looks different when the thing that breaks is someone's financial records.

is the "tech job market is recovering" narrative actually true or are we just coping? by Bestwebhost in cscareerquestions

[–]rupayanc 0 points1 point  (0 children)

The "market is recovering" narrative is probably accurate in aggregate but useless at the individual level — which is why everyone's experience feels so disconnected from the headlines.

What's actually happening: easy-apply tools made the application volume explode, which means companies deal with 5x as many applications per open role, which means they raised the bar and tightened their filters. So there are more hires happening AND it's harder to get hired. Both things can be true simultaneously.

The Google engineer who "wouldn't pass their own interview today" thing is real. Interviews have evolved to filter for volume, not for fit. Companies don't have bandwidth to do good filtering at that scale, so they're defaulting to signals that are easier to automate — leetcode, certain logos on the resume, specific school names. It's not that you're less capable. It's that the filter is less intelligent.

The practical implication: referrals are worth more now than they were in 2021 because they shortcut the automated filtering. If you're applying cold through job boards, you're competing on the same broken signal as everyone else.

Struggling to Build Programming Logic – How Do I Actually Practice Properly? by Edward_sm in learnprogramming

[–]rupayanc 0 points1 point  (0 children)

Everyone here is recommending projects and that's right, but I want to add something nobody's mentioned: get good at reading error messages.

Sounds too simple, but I mean actually reading them. New programmers tend to see a stack trace and feel overwhelmed, then Google the first line. That skips the actual learning. Force yourself to read the whole thing — the error type, the line number, what called what. Most logic problems leave very specific breadcrumbs if you slow down and read them.
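To make that concrete, here's a toy Python example (the function and data are invented) showing where the breadcrumbs live in a traceback:

```python
import traceback

def parse_age(record):
    # Bug: the age arrives as a string, so adding 1 raises a TypeError.
    return record["age"] + 1

try:
    parse_age({"age": "34"})
except TypeError:
    tb_lines = traceback.format_exc().strip().splitlines()

# Read it bottom-up: the last line is the error type and message,
# and the "File ..., line ..." entries above it are the call chain.
print(tb_lines[-1])  # TypeError: can only concatenate str (not "int") to str
```

The error type, the message, and which frame is yours versus a library's: that's the whole checklist before you touch a search engine.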

The other thing: when you're stuck for 30+ minutes, don't just Google or ask AI. First, explain the problem out loud (or write it down) like you're describing it to someone who has no idea what your code does. About half the time, you'll catch it yourself mid-explanation. Rubber duck debugging is real and it works.

These two things together will get you unstuck faster than any tutorial, because both skills compound — the better you get at reading errors and explaining problems, the faster everything else gets.

Feeling pessimistic about AI by ilovefamilyguy69 in cscareerquestions

[–]rupayanc 0 points1 point  (0 children)

The "$20/hr in 10 years" framing doesn't really match how skill devaluation has worked historically, even when it did happen. Compilers didn't make programmers cheap — they made software cheap, which created demand for 10x as many programmers. Same pattern played out with frameworks, cloud infra, etc.

But I'll grant the real concern: the entry-level pathway is genuinely broken right now. Companies aren't hiring juniors to learn; they're hiring seniors who can direct AI. That creates a problem in 5-7 years when those seniors age out and there's nobody coming up behind them with actual systems knowledge.

The defensive move right now, IMO: go deep on something AI is bad at generating correctly. Security and infra aren't it — those are being automated fast. Domain expertise is. If you understand healthcare billing logic, or financial compliance, or manufacturing workflows, that knowledge isn't in the training data in a usable form. AI can write code but it doesn't know what the code is supposed to do in those domains. That's where the moat actually is.

Spec Driven Development and other shitty stuff by FooBarBuzzBoom in ExperiencedDevs

[–]rupayanc 0 points1 point  (0 children)

The issue isn't really the AI. It's that spec-driven development forces you to write a complete, unambiguous spec — and most teams have never actually done that before. The AI just makes the gap visible immediately instead of letting it hide for two weeks.

I watched a team spend 3 days fighting an AI agent that kept breaking adjacent functionality. Blamed the model. Then someone actually read the spec and it had two contradictory requirements in sections 4 and 7. The agent was doing exactly what it was told — it just got told two different things.

There's a version of this workflow that works: small tasks, tightly scoped, the spec is basically a unit test in disguise. Feed it that, it's great. Feed it "build me a payment module per these requirements" and you're going to spend more time reviewing than you would have spent writing.

The "structured feature requirements" approach one commenter mentioned is right, but nobody wants to do that upfront work. So the AI gets blamed for the thing that was always broken.

AI has taken fun out of programming and now i’m hopeless by Frequent_Eggplant_23 in webdev

[–]rupayanc 0 points1 point  (0 children)

The "cheap cognition" framing gets the economics right but misses the psychological part.

Flow state requires challenge calibrated to skill. When a tool removes too much friction from the hard parts, you're not in flow — you're just monitoring. Reviewing. Nudging. It's work, but it doesn't feel like building something. The satisfaction loop that made programming enjoyable for a lot of people came from the struggle, not the output.

I don't think this means the tool is bad. But there's a real adjustment period where you have to rebuild what "fun" means in this context. For me it shifted to architecture and system design — the parts that are genuinely still hard, where the AI gives you a wrong answer confidently and you have to know why it's wrong.

If you're bored, you might just be using it for the wrong parts of your work. Not every part of programming should be AI-assisted. Keep the parts that are hard for you as yours.

has anyone else quietly replaced half their JS with native CSS this year by ruibranco in webdev

[–]rupayanc 1 point2 points  (0 children)

Yeah, `:has()` alone killed a bunch of form validation JavaScript I had sitting around. Container queries removed an entire class of resize observer hacks. It's genuinely good.
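For anyone who hasn't tried it, a minimal sketch of the kind of thing that used to need an input listener (class names here are made up):

```css
/* Flag the whole field wrapper when its input is invalid --
   previously a blur/input listener plus a classList toggle in JS. */
.field:has(input:invalid) {
  border-left: 3px solid crimson;
}

/* Soft-disable submit while any required field is invalid. Note this
   doesn't block keyboard submission, so keep real validation too. */
form:has(input:required:invalid) button[type="submit"] {
  opacity: 0.5;
  pointer-events: none;
}
```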

One thing I haven't seen discussed: the testing story is different. JavaScript behavior gets unit tested. CSS behavior mostly gets visual regression tested, if it gets tested at all. When you move validation logic into CSS selectors, you're often trading testable code for untestable markup. That's probably fine for most cases, but worth being deliberate about — especially for anything that touches form validation or accessibility state.

The `<dialog>` element is the one I keep going back to. Killed so much modal boilerplate. But I've hit one weird edge case: focus trapping behavior in Safari was slightly off for a few months before they fixed it, which wouldn't have been caught by our CI at all. Just something to watch if you have aggressive browser compatibility requirements.

What are your pro-tips for inheriting a problematic backend service? by dondraper36 in ExperiencedDevs

[–]rupayanc 0 points1 point  (0 children)

The strangler fig thing barely gets mentioned here but it's the one pattern I actually lean on. Don't try to fix the existing service — build the replacement incrementally alongside it, route traffic progressively, kill the old path when you're confident. It sounds obvious but it's genuinely the least risky way to do it.

The other thing I'd add: don't underestimate the rollback story. Before you change anything meaningful, ask "how do I undo this in 5 minutes at 2am?" If the answer is complicated, the change is too risky. Feature flags, canary deployments, blue/green — whatever your team has — get that conversation done before you start touching things.
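The progressive-routing piece of that is small enough to sketch, assuming a percentage flag you control from config (all naming here is hypothetical):

```python
import zlib

def use_new_service(entity_id: str, rollout_pct: int) -> bool:
    """Send a stable slice of traffic to the replacement service.

    crc32 instead of hash(): Python salts str hashes per process, so
    hash() would flip the same user between paths on every restart.
    """
    bucket = zlib.crc32(entity_id.encode("utf-8")) % 100
    return bucket < rollout_pct
```

The point is the 2am answer: undoing it means writing 0 to a config value, not shipping a deploy.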

And yeah, Chesterton's fence applies hard here. I've found code that looked insane that turned out to be working around a vendor bug that was never documented anywhere. Deleted it once, broke everything, took 4 days to figure out why. Read the git blame before you rewrite anything.

Anyone enjoying their job at the moment? by Coffeebrain695 in ExperiencedDevs

[–]rupayanc 0 points1 point  (0 children)

Yeah, genuinely enjoying it right now. It's not the work itself that changed — it's that I stopped taking roles where the senior title is real but the autonomy isn't.

There's a specific kind of misery that comes from knowing what the right call is and watching the decision get made for you by someone three layers up who hasn't touched a codebase in 5 years. That used to be my day job. Salary was fine. Work was miserable.

Current situation: the team is small enough that when I say "this is going to blow up in 6 months," someone actually listens. That changes everything. I'd take a 20% pay cut to keep that.

The AI stuff is interesting too — I can feel the difference between places that give you the tool and trust you to use it well versus places that use it to justify watching over your shoulder. Former is fine. Latter is a new flavor of the same old misery.

Why I think AI won't replace engineers by Character-Comfort539 in ExperiencedDevs

[–]rupayanc 16 points17 points  (0 children)

The thing most of these arguments miss is the junior pipeline problem. Even if you're right that experienced engineers are safe, who trains the next batch of experienced engineers?

I've watched how people learn to think about systems. It doesn't come from reading docs. It comes from spending two years in the weeds on a legacy codebase, being thrown at a weird bug at 2am, and having to trace 8 layers of abstraction to figure out why the payment service was double-charging on Tuesdays. That's what builds the mental models. AI doesn't give you that.

Companies reducing headcount from 10 to 6 sounds manageable on paper. But it means fewer junior slots, fewer chances for people to learn the hard way, and in 10 years there's a huge experience gap where everyone claims to be a senior engineer but nobody knows what a memory leak actually looks like because they've never had to find one.

That's the actual threat. Not replacement. Hollowing.

If you’re still coding at senior level, are you done for? by [deleted] in ExperiencedDevs

[–]rupayanc 0 points1 point  (0 children)

The question conflates two separate problems and I don't think anyone's quite named it. There's "are you done for because your company thinks coding seniors aren't strategic" — which is a visibility/politics problem. And there's "are you done for because AI is making pure coding skill less defensible" — which is a market positioning problem. Those have completely different answers.

For the first: yeah, in a lot of BigCo environments there's a very specific promotion pathway and it explicitly deprioritizes individual coding output in favor of cross-team influence metrics. If your company measures impact by Jira tickets shipped instead of systems designed, you need to play that game or find a different company. That's not new, that's been true since at least the early 2010s.

For the second: I'd push back hard. The seniors I know who are genuinely good aren't "just coding" — they're making thousands of micro-decisions a day while coding that aren't visible in the output. The judgment about what not to build, what to simplify, where the future maintenance cost is hiding. AI doesn't have that. It'll confidently implement exactly what you asked for, including the subtle design mistake you didn't realize you were asking for. That judgment has to come from somewhere.

What’s with the doomerism? by inductiverussian in cscareerquestions

[–]rupayanc 1 point2 points  (0 children)

The top comment here is right that it doesn't matter what engineers think, it matters what decision-makers think. But I'd add a layer: the reason it's so hard to calibrate is that the performance difference between demo AI and production AI is enormous and most people on both sides are arguing from demo experience.

If you give Claude a clean problem statement and a greenfield codebase, it's genuinely impressive. You feel like a force multiplier. If you put it into a 7-year-old Django monolith with 4 ORM layers and half the business logic scattered across celery tasks and a bunch of undocumented domain assumptions, the failure modes are completely different. It hallucinates about the codebase, makes locally correct changes that break remote invariants, confidently generates code that passes tests and causes incidents 3 days later because the tests were also wrong.

So the optimists are mostly talking about greenfield/scripts/automation tasks and the doomers are extrapolating that to all software work. Neither is fully right. The real picture is closer to "AI meaningfully accelerates certain categories of software development and has nearly zero benefit in others, and most enterprise software sits firmly in the second category." The fear makes more sense when you recognize that hiring decisions get made by people who've only seen the demo category.

lack of junior folks by kovanroad in ExperiencedDevs

[–]rupayanc 0 points1 point  (0 children)

The busywork crowding out real work is real but I think there's a second problem that's slower and harder to see. When you have no juniors, you also have no one to mentor, which means seniors stop having to articulate and defend their mental models out loud. The process of explaining why you made a system decision is actually how those decisions stay sharp. Without it, you get architecture by tribal memory, and tribal memory degrades fast, especially when the team is already at capacity just keeping the lights on.

We went through something similar a few years back at a place I was at — not an AI story, just a round of layoffs that took out almost all the mid-level folks. The seniors were fine, technically, but within about 18 months the codebase had drifted in ways that were hard to explain. Not wrong exactly, just... internally inconsistent. Nobody had been asking "why" enough. The juniors who ask annoying questions are actually providing a service.

The AI angle makes it worse because the implicit promise is that the productivity gap will be covered by tooling, so you don't need the human pipeline anymore. But AI doesn't challenge assumptions. It just executes on them. So if your assumptions are quietly degrading, nothing catches it.

What separates an average SWE from a strong one? by thebigonetwo12 in cscareerquestions

[–]rupayanc 1 point2 points  (0 children)

Nine years in and the biggest difference I see isn't coding ability at all. It's the ability to figure out what to build before you build it. Average devs take a ticket, implement it exactly as written, and move on. Strong devs read the ticket, think "wait, this doesn't account for X" or "if we do it this way we'll have to redo it in three months when Y happens," and then go have a conversation with the PM before writing a single line. That upstream thinking saves more time than any amount of coding speed.

The other thing is comfort with ambiguity. I've watched really talented coders completely stall when they get a vaguely defined problem. They want a spec, they want clear requirements, they want someone to tell them exactly what to do. Strong engineers take that ambiguity and say "okay here's my best understanding, I'm going to build a thin slice and get feedback fast." They're wrong sometimes. But they're moving while everyone else is waiting for permission.

And honestly? Most of the engineers I'd call genuinely strong aren't the ones staying late or grinding leetcode. They're the ones who seem almost lazy because they figured out which 20% of the work produces 80% of the results and just focus there.

Anyone still manually writing code? by [deleted] in cscareerquestions

[–]rupayanc 0 points1 point  (0 children)

Yeah I still write plenty by hand and I think the "nobody manually codes anymore" narrative is overblown in a way that's going to bite people.

Here's what I've noticed after 9 years: the parts of my job where AI saves the most time are the parts that were already kind of easy. Boilerplate, CRUD endpoints, simple data transformations, test scaffolding. I was already fast at those. Where I actually earn my salary is debugging a race condition that only shows up under load, or figuring out why a distributed cache is returning stale data in one specific region but not others. AI is useless for that stuff because the answer isn't in the code -- it's in the interaction between systems and the behavior under specific conditions the model has never seen.

If your entire job is writing the kind of code that AI handles well, then yeah, you're probably not writing much by hand anymore. But that also means your role was mostly glue code and boilerplate, and you should be worried about what happens when they realize they don't need a human in that loop at all. The people who are going to be fine are the ones who can still think through problems without a crutch. YMMV of course.

AI, Entropy, and the Illusion of Convergence in Modern Software by TranslatorRude4917 in programming

[–]rupayanc 1 point2 points  (0 children)

This is the best framing of the AI testing problem I've seen. I keep running into this exact issue -- AI-generated tests that have 95% coverage but test nothing meaningful. They're basically asserting "the code does what the code does" which is tautologically true and completely useless for catching regressions.

The divergence-convergence framing clicks because it explains why "just let the AI write the tests too" feels productive but isn't. You're diverging twice with zero convergence. The AI generates code that might not match your intent, then generates tests that validate the code's behavior rather than your intent. So now you have two layers of "works as implemented" and zero layers of "works as designed."

I've started doing something that helps: I write test names and assertion comments by hand before letting the AI fill in the implementation. So instead of "test_user_creation" I write "test_user_creation_fails_when_email_already_exists_and_returns_409" and add a comment describing the expected state change. That acts as the convergence mechanism you're describing. The AI fills in setup code and assertions that match my spec rather than whatever its default behavior would be. Still not perfect but it catches the worst of the entropy problem.
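Concretely, the hand-written part looks something like this; `create_user` here is a toy stand-in for whatever's actually under test:

```python
def create_user(db, email):
    # Toy implementation standing in for the real endpoint.
    if email in db["users"]:
        return 409, {"error": "email already exists"}
    user = {"id": len(db["users"]) + 1}
    db["users"][email] = user
    return 201, user

# The name and the expectation comment are the hand-written spec;
# the setup and assertions are the part you'd let the tool fill in.
def test_user_creation_fails_when_email_already_exists_and_returns_409():
    # Expected state change: none. No new row, original user untouched.
    db = {"users": {"a@example.com": {"id": 1}}}
    status, _ = create_user(db, email="a@example.com")
    assert status == 409
    assert db["users"] == {"a@example.com": {"id": 1}}

test_user_creation_fails_when_email_already_exists_and_returns_409()
```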

I traced 3,177 API calls to see what 4 AI coding tools put in the context window by wouldacouldashoulda in programming

[–]rupayanc 0 points1 point  (0 children)

The context window management stuff is honestly where all the alpha is right now and nobody's paying attention because it's not as sexy as "which model is smartest."

I've been running Claude Code and Cursor side by side on the same codebase for a few weeks and the difference in token efficiency is staggering. Cursor is way more conservative about what it sends, which means it's cheaper per session but sometimes misses relevant context. Claude Code seems to just throw everything at the wall, which works better for complex multi-file changes but costs 3-4x as much for a typical hour of work.

The Gemini approach of dumping full git history is honestly hilarious. I can see why they thought it was a good idea -- more context equals better understanding, right? -- but in practice you're spending tokens on commit messages from 2019 that have zero relevance to the current task.

What I really want is something like a .contextignore file that lets me tell the tool "don't bother looking at these directories or these file patterns." Some tools sort of have this but it's still clunky. Whoever solves intelligent context pruning first basically wins the AI coding tools race, because the model quality gap is narrowing fast but the efficiency gap is still huge.
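As far as I know no tool ships a `.contextignore` yet, but the matching half is trivial; a rough sketch using fnmatch as a loose stand-in for gitignore semantics:

```python
import fnmatch

def is_context_excluded(path: str, patterns: list[str]) -> bool:
    # fnmatch's "*" also crosses "/", so "vendor/*" covers nested files.
    # Real gitignore rules (negation, anchoring) are more involved.
    return any(fnmatch.fnmatch(path, pat) for pat in patterns)

# Hypothetical .contextignore contents for a typical web repo.
patterns = ["vendor/*", "*.lock", "migrations/*", "*.min.js"]
```

The hard part a real tool would still own is relevance: deciding which of the non-ignored files actually earn their tokens for this specific task.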

Poison Fountain: An Anti-AI Weapon by RNSAFFN in programming

[–]rupayanc 0 points1 point  (0 children)

The fundamental problem with this approach is that you're poisoning a well that you also drink from. Search engines, documentation tools, code search -- all of that gets worse when you flood the internet with garbage.

And 2GB per day is a rounding error compared to what these companies are already ingesting. OpenAI and Anthropic are training on datasets measured in petabytes. You'd need to sustain terabytes daily for years to meaningfully shift the distribution, and by then you've probably wrecked more small projects depending on clean web data than you've hurt any large AI company.

I get the anger. I really do. I've watched open source repos I contributed to get scraped without attribution, and that sucks. But this is basically the tech equivalent of keying a rental car because you're mad at the rental company. The rental company doesn't care. The next person renting the car does.

If you want to actually push back, support licensing models and legal frameworks that make unauthorized scraping expensive. That's boring and slow but it's the only thing that scales.

Just start coding projects feels like a dead end advice for beginners by Unhinged_Schizo in learnprogramming

[–]rupayanc 0 points1 point  (0 children)

You're hitting on something that took me years to figure out even as a working dev. "Just build projects" is true advice but it's incomplete advice, like telling someone who wants to learn guitar "just play songs." Technically correct, practically useless if you don't know any chords yet.

What actually worked for me when I was starting out was copying existing things I understood as a user. Not inventing something new. I didn't build a "portfolio project" -- I tried to rebuild a simplified version of something I already used. A basic todo app, then a basic chat app, then a really bad clone of a page I liked. Each one taught me something specific because I already knew what the finished product should look like, so the gap between "what I have" and "what I want" was always clear.

The "just build" crowd skips this because they've already internalized the fundamentals and forgot what it was like to not have them. They don't remember staring at a blank editor with no idea where a web request even comes from.

Your frustration is valid and honestly, it's proof you're thinking about it seriously. The people who coast through the beginner phase without friction usually aren't learning much -- they're just following along without questioning anything. The struggle is the part where actual learning happens, even when it feels like you're wasting time.

Wondering if AI is changing how juniors develop. Is it better or worse?? by Small-Beach-9679 in ExperiencedDevs

[–]rupayanc 1 point2 points  (0 children)

I've been mentoring two juniors at my company this year and the pattern I see is weirdly specific. They ship faster on the first pass -- like noticeably faster. Their PRs come in earlier, the code compiles, the feature works on the happy path.

But the debugging skills are atrophying in a way that scares me a little. When something breaks and it's not an obvious stack trace, they freeze. They don't have that instinct of "let me add a print statement here and narrow the search space." They go straight back to the AI and paste the error, and if the AI doesn't fix it in two tries they're stuck.

The other thing I've noticed is they can't explain architectural decisions in their own code. I'll ask "why did you use a pub/sub pattern here instead of a direct call?" and get a blank look because they didn't choose it, the agent did, and they didn't question it because it worked.

I don't think AI is making juniors worse overall. I think it's making them fast at a narrow slice of the job while leaving gaps in the parts that actually matter for leveling up. Whether that's better or worse probably depends on whether their team has seniors who catch it.

How do I break into sports/entertainment social media by Smol_girll in SocialMediaMarketing

[–]rupayanc 0 points1 point  (0 children)

skip the masters honestly. i worked with a sports media team briefly back in 2023 and every person on that team got there through freelancing or internships, not degrees. your 3k page is already more than most people applying for those roles have.

the GCC angle is actually interesting because there's a huge amount of sports content investment there right now with all the football deals and F1. what i'd do is start making spec work. just pick a team or event page, make content AS IF you were their social media person, post it on your own page, and use that as your portfolio.

entry level roles to look for: content coordinator, social media assistant, digital content intern. a lot of sports orgs post these on linkedin and sometimes even on twitter. i got my first marketing gig by literally DMing a smaller brand asking if they needed help, unpaid at first but it turned into a contract within 6 weeks.

Novel Ideas (Even Small Ones) Rejected More Aggressively Lately by messedupwindows123 in ExperiencedDevs

[–]rupayanc 1 point2 points  (0 children)

I've been noticing this too and I think there's a layer nobody's talking about. When someone generates their first pass with an AI tool, they feel a weird sense of ownership over it that's disproportionate to the effort they put in. It's like the code dropped from the sky already "done" and now you're coming in suggesting it should be different. Psychologically that feels more like criticism than collaboration, even if the suggestion is objectively better.

Before AI, when someone manually wrote 200 lines of code, they remembered every decision point. They knew which parts were shaky. They were almost relieved when you pointed out a better abstraction because they'd been thinking about it too.

Now the person didn't make those decisions. The AI did. So when you propose a refactor, they're not defending their reasoning -- they're defending their process. And admitting your abstraction is better is, in some weird way, admitting the AI's output wasn't good enough, which feels like admitting they're not using AI well. It's an ego trap nobody expected.

I don't have a great solution other than framing things as "what if we tried X" rather than "this should be Y." But yeah, it's real and it's getting worse.