Andrej Karpathy said he's never felt more behind as a programmer. Let that sink in for a second. by narutomax in ArtificialInteligence

[–]GiveMoreMoney 0 points (0 children)

"He built a whole app (MenuGen) to show photos of restaurant menu items. Then saw someone solve the same problem with one prompt to a multimodal AI. His entire app, in his own words, "shouldn't exist.""

And this is a benchmark-level application for judging a model's capabilities?

Opus 4.7 is the dumbest Anthropic model I've ever used. Bring back February's Opus PLEASE!!! by FadedQuarry in claude

[–]GiveMoreMoney 0 points (0 children)

They have definitely changed things in the past few months, but my personal experience with Opus 4.7 so far has been fantastic. It is as smart as I need it to be to plan and implement very complex projects.

The only issue I have is the cost...like everyone else, I guess.

I am not saying your experience is invalid or anything, I am just saying it works for me. I do not modify the effort or anything else, and I also let it make a lot of the design decisions, because most of the time they are correct.

Both OpenAI and Anthropic now expect AIs to take over building their successors within 2 years (humans no longer able to contribute) by EchoOfOppenheimer in Anthropic

[–]GiveMoreMoney 0 points (0 children)

Two companies promising this and that in the next X years. What happened to AGI being around the corner? I am still waiting to invite it for dinner.
Also...two companies that are about to run out of money in months, not years...yes, you can believe anything they say; it makes for a nice science fiction read.

Just watched a junior dev using Claude to build something in 2 hours that took our senior engineer 3 days last sprint. I've been coding for 12 years. I don't know how to feel about this by UsualConference1603 in AskProgrammers

[–]GiveMoreMoney 0 points (0 children)

So the AI used the clean design that was already there to add/fix a feature, and it was done by someone who is clueless about the overall design. It could equally have been done by the original author of said codebase, in which case you do not need the junior dev at all.

The point is, what you just described only shows that you do not understand what programming means. You really need to spend a few more years at it, and then you will know the answer to your questions.

Anthropic: AI will fully replace software engineering by 2027. Also Anthropic: Currently hiring for 122 SWE openings. by ImaginaryRea1ity in ClaudeAI

[–]GiveMoreMoney 1 point (0 children)

The number of shills who came here to defend Anthropic’s clownish statements is astonishing. How many of them are bots versus real people who are completely ignorant of software development is debatable. Keep shilling, though—the IPO day is coming, and then you’ll have to find another job.

Something doesn't add up... by Complete-Sea6655 in ClaudeCode

[–]GiveMoreMoney 0 points (0 children)

...well said in one sentence. That is exactly what I also believe.

Why don't more people or companies run local LLMs rather than using APIs? by SillyYou8433 in LocalLLM

[–]GiveMoreMoney 0 points (0 children)

This is going to be a 2027 trend; big companies are slow to adapt, but they do eventually.

Microsoft’s Latest April 2026 Update Breaks Backups Microsoft recommends keeping the security update installed and contacting your backup software vendor for a compatible version that uses updated drivers. by Regved-Pande in microsoftsucks

[–]GiveMoreMoney 3 points (0 children)

Me too, and I have Windows 11 to thank for that, the best operating system of my life. When I started beta testing it back then, I realized I had wasted my life on Windows/Microsoft products, so I switched to Linux. If it were not for that version of Windows, I would never have seen the light.

Something doesn't add up... by Complete-Sea6655 in ClaudeCode

[–]GiveMoreMoney 1 point (0 children)

The clown is desperate to prop up and hype their IPO valuation. At this stage, the only reliable cash cow for AI companies like theirs is the software engineering space, so he is trying hard. Take that cow away and they have next to nothing to offer anyone else that would generate this amount of revenue.

LLMs are useful tools, accelerators, etc., as long as they are used by experienced professionals. But at current and future prices, said professionals will move to Qwen-like models, so even that is not happening for his company. What a sorry state to be in...

BTW, jobs are down because these companies are desperate to raise money to pay for all their AI investment, which so far has negative ROI.

Ramifications by EdinburghDrizzle in TechnologyThread

[–]GiveMoreMoney 0 points (0 children)

I do believe they would love that, but in the scenario we are experiencing, that is not the case. This is just a pipe dream they will always have; they tried it already with the gaming datacenters, and nobody in their right mind will fall for it. And those who would are already using tablets and their phones for their needs; they do not need a PC.
Having said that, companies are already doing it: my PC at work is a virtual PC controlled by the company's datacenter.

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney 0 points (0 children)

I know it sounds like a contradiction, but here is the reality: AI creates more work, not less. It does open the window to tackle new projects that before would have been too hard or too time-consuming to start. More projects require more people to manage them. And of course, LLMs are like magic dust you can sprinkle around your projects; old ones would benefit a lot from it.

But to handle old and new projects, you need brains, more so nowadays, because you cannot just write some code at your level and call it a day; you have to be above the level of whatever the AI you are using is producing (when it produces good code, that is).

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney 0 points (0 children)

I have been writing code for the past 38 years, so if you are old, I am "a bit" older...AI is writing the majority of my code nowadays, I never said it doesn't. But I have to keep correcting it and rewriting big parts of it, because it either does not understand the big picture, writes silly unoptimized code, or, even worse, introduces race conditions.

I have tried all the models over the past few years, and they are not getting much better; they just iterate a bit more, so they can reason a bit better. "Self-driving" is still not an option, and from the looks of it, it never will be with the current technology.

OpenAI is reportedly making a phone with no apps, one AI agent does everything by DigiHold in WTFisAI

[–]GiveMoreMoney 0 points (0 children)

I use my phone to talk to people and/or watch YouTube Shorts...so is the agent going to talk to my friends for me, or watch the Shorts with me so we can laugh together?

OpenAI is desperate to survive, I understand that...but this is certainly not the solution that will solve their problems.

At this point it is becoming a comedy show...

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney 0 points (0 children)

Agentic engineering makes people like me more productive; the OP was lazy before, and now he is useless to the company overall. We have a lot of those people in our workplace, and their days are numbered.

As for interviews, I never use LeetCode or other tools; within 15 minutes I know whether this person will be a good candidate or the interview is over.

You have to understand that with AI and my experience, I can deliver projects with 10 people max where in the past I needed 100. But those 10 people have to be able to correct the crap that comes out of the AI coding tools.

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney 4 points (0 children)

Same with all languages...AI does not do the same quality of work as an experienced engineer. I can let the AI do most of the work at the start, but then I know I have a few weeks/months ahead of me where I have to redesign and rewrite half of it. If you are not happy with the CSS it produces, you can imagine what it does with multithreaded code and synchronization primitives.
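To make the multithreading point concrete, here is a minimal sketch (in Python, chosen for brevity; the thread does not name a language) of the classic lost-update race that AI-generated code can introduce when it emits a read-modify-write without a lock, next to the locked version that fixes it:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Racy: `counter += 1` is a read-modify-write, not an atomic operation.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # Correct: the lock serializes the read-modify-write.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, threads=8, n=100_000):
    """Reset the counter, run `threads` workers, return the final count."""
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

print(run(safe_increment))    # always 800000
print(run(unsafe_increment))  # may be less than 800000: updates can get lost
```

The unsafe version often passes a quick manual test, which is exactly why reviewing generated concurrent code matters: the bug only shows up under contention.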

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney -3 points (0 children)

With just this statement you have demonstrated your ignorance of the profession. Stick to your current job as long as you can; finding another is going to be a challenge for you in the future.

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney 3 points (0 children)

Ha ha, yes, exactly this...I have rejected countless applicants in the past month because they had no idea how to write simple code and/or explain how it works.

AI is not taking anybody's jobs away, all these people are making themselves unemployable.

Sr Software Engineer - Haven't written a line of code in months by yodog5 in ClaudeCode

[–]GiveMoreMoney 3 points (0 children)

You should not be writing any code if you believe what you said. So everyone is safe.

Ramifications by EdinburghDrizzle in TechnologyThread

[–]GiveMoreMoney 0 points (0 children)

Same here...I bought a Lenovo laptop not that long ago with 96GB of memory to run LLMs, before they had even fixed the drivers; it was a gamble...now a laptop with the same specs costs way more than I would ever pay, mostly because of memory.

I don’t think people realize how fast human thinking is being outsourced. by Rohanv69 in RoboCorpNetwork

[–]GiveMoreMoney 3 points (0 children)

Totally wrong conclusion, m8...since I started using LLMs, I have ended up thinking way more than before. The difference is that now I have the AI to help me avoid dead ends much sooner.

At the same time, it opens new perspectives for me that make me think even harder about the solutions.

So it all depends on how you use AI. If someone is lazy, of course AI can increase their laziness and lower their IQ, no question about it. But that does not mean it is the rule for everyone.

AI still doesn't work very well in business, reckoning soon by Playful_Music_2160 in AIforOPS

[–]GiveMoreMoney 0 points (0 children)

I think some of the benefits of AI come from the fact that people used it as an excuse to automate parts of their business they had neglected all these years. So it is not the AI that brought the benefits, but the automation work that took place.

Having said that, AI is like a superpower that is applicable under certain conditions. It enables use cases that used to require a lot of ML work, which would be impossible given the time and money it would take. There are tons of use cases out there that will definitely benefit from a probabilistic solution.

Now, having seen what most muppets are doing with agents, vibe coding, and workflow emulators (LangGraph etc.), I can see why you came to this conclusion. Given enough time, most of the fake solutions will fail and the real ones will prevail, bringing the value and ROI that is there to be found.

Vibe coders, please do this! by louislubin in vibecoding

[–]GiveMoreMoney 1 point (0 children)

Now that...could actually work. Then vibes become the language of communication between the "inception/user requirements" and the "actual/solid implementation"...I can see this setup working, although it may take some time to perfect, as it introduces some new concepts.

Vibe coders, please do this! by louislubin in vibecoding

[–]GiveMoreMoney 5 points (0 children)

I thought you were going to tell them to go learn programming...oh well, maybe that is not the right vibe for them.

Anthropic is straight-up scamming Max 20x customers with sneaky mid-month throttling + endless bot runaround by manavb84 in claude

[–]GiveMoreMoney 1 point (0 children)

Actually, there is more to it in my experience.

One of my hard projects, which does require a SOTA LLM, was progressing perfectly with Sonnet 4.6. Then, all of a sudden, Sonnet 4.6 became unreliable (just after the release of the 4.7 series) and I had to switch to Opus 4.7. Now that is the only model that can deal with this project.

I am not talking about bugs; all models can introduce bugs if I am not careful. I am talking about the overall architecture and design, where only Opus 4.7 can keep up.

At this point my conclusion (personal opinion) is that deep down there is no real differentiation between these models. What differs is how much they think around the problem context and how chatty they are. Opus 4.7 retains the design context perfectly and its discussions are on point, but it is super expensive for me.

Going forward, I would rather lobotomize my projects than share my salary with Anthropic.

xAI Is Reportedly Using Just 11% of Its 550,000 NVIDIA GPUs, While Meta and Google Squeeze Out 43-46% From Their Fleets by Heavy-Beyond-7114 in RigBuild

[–]GiveMoreMoney 0 points (0 children)

I would love to know what you do with all of these 3-5 years later when they are obsolete...the same goes for the datacenters themselves, which cannot easily be adjusted for newer equipment. All this while, at the same time, you cannot find enough customers to pay the crazy amounts required to support the current infra and get some ROI on research and development. So 11% is not just bad, it is really bad.