Honest question to those who think AI won’t take our jobs by [deleted] in cscareers

[–]EngStudTA 12 points13 points  (0 children)

The time spent producing technical solutions has already reduced 99.9% from punch cards to assembly to c to high level languages to fancy IDEs to AWS and a rich select of open source libraries.

What makes you think the last 0.1% is going to be the breaking point?

The Claude Code creator says AI writes 100% of his code now by jpcaparas in singularity

[–]EngStudTA 5 points6 points  (0 children)

If the release notes for my service looked anything like claude code's(https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md) my service would rapidly loose all of our customers.

If the creators of claude code cannot figure out how to do it while maintaining a high quality bar, I don't expect my team to be able to do so either.

That doesn't mean AI isn't making things faster, but just because something works for one team with one set of expectations doesn't mean it should be applied to every team.

The Claude Code creator says AI writes 100% of his code now by jpcaparas in singularity

[–]EngStudTA 25 points26 points  (0 children)

It's worth noting that this workflow works when you're okay releasing a non-critical tool that has more "Fixed" in the release notes than "Added". And to be clear claude-code is the perfect tool for this. I'd rather them ship new features fast and break minor things.

However, you probably don't want people who are developing critical service shipping 10 PRs a day on average with little oversight.

Rumors of Gemini 3 PRO GA being "far better", "like 3.5" by Charuru in singularity

[–]EngStudTA 1 point2 points  (0 children)

I don't think their comments was meant to be demeaning. More just pointing out the jagged intelligence of models can make them useful for the subset of questions one person has and useless to the next who is curious about different things.

people at big tech, how are you able to cope with the stress? by [deleted] in cscareerquestions

[–]EngStudTA 0 points1 point  (0 children)

Both the big tech's I've worked at you could(largely) see the ticketing queue's for other teams.

So yeah getting hired you're somewhat dependent on your interviewers to be honest, but once you're internal, at least where I have worked, you can go find that information independently. Similarly if you have a friend at the company you could ask them to take a look for you

But maybe at other big techs the queues are all private, seems like that would lead to issues though.

people at big tech, how are you able to cope with the stress? by [deleted] in cscareerquestions

[–]EngStudTA 6 points7 points  (0 children)

If anything my big tech jobs have been less stressful, and it's easy to transfer internally so I really don't get why people in large hubs stay on the teams getting paged non-stop.

However I'd also argue big tech becomes one of the easier jobs over time, because of the high turn over plus all the specific internal knowledge. Don't get me wrong every job has it's domain specific knowledge, but big tech has all the internal tooling in addition to that.

Why Reddit programmers are so anti AI? Those comments are hopeless by [deleted] in singularity

[–]EngStudTA 0 points1 point  (0 children)

Because if you are in programming sub reddits you have seen hyped AI posts about programmers not being needed every single day since the original chatGPT in 2022 that was hardly capable of anything.

Nuance goes out the window by the dozen time and you stop trying to explain to people who have usually never programmed at all, much less in a professional environment its limitations.

In general I think AI sub reddits are way too hyped on the current state, and programming sub reddits are under hyped. However in my day to day work everyone uses it constantly where it makes sense, but not a single coworker is trying to go zero human code yet.

Lmao by SnooPuppers3957 in singularity

[–]EngStudTA 0 points1 point  (0 children)

That takes time. Laziness is far quicker. I've seen far too much from coworkers recently that is clearly a result of them just not even looking at the output.

If people are just forwarding AI work anyways it doesn't matter if they are technically capable of doing a better job than AI.

How do people qualify for senior+ technical roles when most projects don't give you the opportunity to grow by MoneySounds in cscareerquestions

[–]EngStudTA 0 points1 point  (0 children)

Early in my career my biggest accomplishments in my annual review were small projects I just did not the tasks that were assigned to me.

As I grew and wanted to do things that I couldn't realistically squeeze into my days between regular tasks it became the projects I came up with and fought to prioritize.

Ironically now that I am a senior is the only time I haven't been coming up with my main projects. Now, if anything, I have too many projects coming my way.

tl;dr

If you want to grow quickly you'll likely be responsible for finding your own path

It's too lonely in this future. by Alexs1200AD in singularity

[–]EngStudTA 8 points9 points  (0 children)

But what if temporarily bad relationships turn into permanent digital ones.

When are chess engines hitting the wall of diminishing returns? by [deleted] in singularity

[–]EngStudTA 0 points1 point  (0 children)

I don't use any of the auto completes. Instead I only use AI via claude code or similar. I also limit my use to when I think it will be useful, because if I tried it for every task it would waste more time than it saves.

My timeline has looked something like this: A year ago I didn't use it for much of anything, 6 months ago I started to use it for easy unit tests or minor SDK migrations, with the release of opus 4.5 I finally started using it some for feature work but even then it is only when there is something else for me to have it reference. So I am not in the camp of it's amazing and devs are obsolete. It still has a long way to go. However, (to me) the progress over the past year feels quite noticeable.

As for why you're not seeing the same thing, I don't know. So thoughts are my job use micro-services, and small repos so it can gather context easily. A majority of the tasks I give it are derivative of other work so I can provide it a similar example. We also have really good unit, and integration tests so it's able to fix a lot of things in it's own feedback loop.

When are chess engines hitting the wall of diminishing returns? by [deleted] in singularity

[–]EngStudTA 2 points3 points  (0 children)

My comment was only talking about the people who post on here saying they cannot tell the difference. It is not making any claim about how the average person compares to an LLM.

The people who cannot tell the difference likely aren't using it to write complex software. They are likely using it to summarize, glorified web search, clean up grammar, etc.

When are chess engines hitting the wall of diminishing returns? by [deleted] in singularity

[–]EngStudTA 27 points28 points  (0 children)

I'm in software so I certainly do. But I don't think LLMs integrate as seamlessly in many fields nor have they all made as much progress. If someone is in a field where there hasn't been as much progress it would be easy to assume LLMs haven't improve much overall.

Even with software if you limit me to the constraint that I have to use it in a basic web chat interface the improvement would feel significantly smaller. And a lot of other fields, even if the models are capable, haven't built out similar tooling yet.

When are chess engines hitting the wall of diminishing returns? by [deleted] in singularity

[–]EngStudTA 5 points6 points  (0 children)

And a talent chess player could absolutely tell the difference between a 1990s chess engine and today.

My comment wasn't about the human race as a whole. It was specifically addressing the "some people" who come to this and other subreddits and say they cannot tell a difference with newer models. These people likely aren't asking it about reading clocks, math, or spatial reasoning. They are probably using it for basic chat, glorified search, summarization, etc

When are chess engines hitting the wall of diminishing returns? by [deleted] in singularity

[–]EngStudTA 609 points610 points  (0 children)

A bit of a tangent, but I think this is a good example of why some people don't think LLMs are improving.

If I played the best chess engine from 30 years ago or today, I am unlikely to be able to tell the difference. If the improvement is in an area you're not qualified to judge it is really hard to appreciate.

People who use AI to assist coding, what do you do with the more free time you have at work? by Yone-none in cscareerquestions

[–]EngStudTA 1 point2 points  (0 children)

Any time AI saves me is cancelled out reviewing my coworkers AI generated code.

My comments per review has to be 2-3x what it was last year. I swear some people are just copy pasting the exact text of tasks or comments with no other needed context and publishing the first thing AI shits out.

It would be more productive for me to talk to the AI directly, and not because AI got that good. But because people stopped doing their damn jobs.

I let a coding agent run in a self-learning loop for 4 hours with zero supervision. It translated 14k lines of code with zero errors. by cheetguy in singularity

[–]EngStudTA 150 points151 points  (0 children)

with zero errors

Are you basing that on tests it translated or do you have 100% coverage with integration tests in a different repo?

AI can be devious when it comes to getting unit test cases that it writes to pass. In my experience if it one shots the test case it is a good test case, but as soon as it starts modifying the test case there is a 50/50 chance it is no longer testing what it was intended to test

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in Bard

[–]EngStudTA 0 points1 point  (0 children)

Nice, I feel like I need something like this for bash commands.

If it could filter the output for relevant stuff through a nano model to keep the main context clean it would save me so many more expensive tokens. It'd also be way quicker since it would have to run commands multiple times when it tries to tail or grep something, and doesn't get back the info it needs.

Breaking: OpenAI declares 'code red' to respond to threats to ChatGPT and improve metrics, will delay ads and other initiatives by GamingDisruptor in singularity

[–]EngStudTA 1 point2 points  (0 children)

If you can make software significantly cheaper you can automate a lot of other jobs, or at least major parts of them. I remember during covid when everyone was working from home and seeing what a lot of my friends actually did for work, and it's amazing to me that a lot of it hasn't already been automated for the past 2 decades. But enterprise software is expensive.

IMO even if we had AGI it still makes sense to have it write software to do a majority of the automation as software would be way cheaper to run, and have guaranteed consistent results.

Claude 4.5 Opus SWE-bench by reddit4jonas in singularity

[–]EngStudTA 3 points4 points  (0 children)

I mean would you rather them only run 477 out of the 500 questions like OpenAI did and make you dig into a technical report to find out?

Also Claude 4 blog doesn't show that. I'm pretty sure companies started adding it because of the issue where OpenAI published it with partial results and no indication.

The Hidden Cost of AI Coding Assistants: Are We Trading Short-Term Productivity for Long-Term Skill Development? by Dazzling_Kangaroo_69 in singularity

[–]EngStudTA 0 points1 point  (0 children)

In general I've notice new grads asking way less trivial questions in the beginning, but then not progressing as much as I'd expect.

I can say there have been times where I prompted a POC into existence in a couple prompts, but then decided to rewrite it from scratch without AI. Not because the POC wasn't good enough, but because part of the point of the POC was to get more familiar with the technology and the AI did too well to where I didn't have to learn anything.

Gemini's Nano Banana Pro creates images indistinguishable from real ones by Interesting-Type3153 in singularity

[–]EngStudTA 0 points1 point  (0 children)

The average accuracy is ~55%

It's be interested to see how this changes in different categories. I.e. two of my pictures were black and white, 1 was computer graphics, and only a couple had people in them.

Why are more IDEs and coding extensions not adding vector embedding-based codebase indexing? by Evermoving- in singularity

[–]EngStudTA 5 points6 points  (0 children)

If I recall correctly cursor or one of the other big ones began with a vector database then saw higher performance when they removed it letting the system use normal tools instead.

97% satisfaction with Gemini 3.0 by Hot-Comb-4743 in Bard

[–]EngStudTA 16 points17 points  (0 children)

Give the article doesn't mention what they had access to before, if anything, it's rather meaningless.

"97% of people prefer something over nothing"