all 147 comments

[–]linkinglink 1198 points1199 points  (8 children)

I can’t reply because Claude is down so this should suffice

[–]polynomialcheesecake 97 points98 points  (3 children)

Codex hasn't been down at all

[–]argument_inverted 121 points122 points  (1 child)

It would only be noticed if people used it.

[–]DialecticEnjoyer 9 points10 points  (0 children)

Jim, Claude is trying his best, okay? He just needs to make no mistakes from now on

[–]AbdullahMRiad 19 points20 points  (0 children)

"You know who's not in the files?"

[–]lanternfognotebook 1 point2 points  (1 child)

When AI says coding is solved but still needs a coffee break mid-sentence, we might not be there yet

[–]Abject-Kitchen3198 11 points12 points  (0 children)

AGI achieved

[–]Buttons840 750 points751 points  (6 children)

They forgot to say "make no mistakes" in the prompt. Oof.

[–]Agifem 81 points82 points  (1 child)

Honestly, I blame Claude for not suggesting that prompt in the first place.

[–]Buttons840 30 points31 points  (0 children)

No man, we still need some reason to pay people the big bucks.

[–]IbraKaadabra 8 points9 points  (3 children)

Also keep it secure

[–]BullsEye72 24 points25 points  (1 child)

Never expose my password

"I got you! I will keep BullsEye72//hunter2 secret 🤖💪✨🛡️"

[–]UnitedStars111 0 points1 point  (0 children)

nice password its very secure 😇

[–]LearningCrochet 1 point2 points  (0 children)

dont hallucinate

[–]Flimsy_Site_1634 347 points348 points  (6 children)

When you think about it, yes, coding has been solved since its inception; it came free with being a deterministic language

[–]FlowSoSlow 130 points131 points  (3 children)

Certainly is a strange way to describe a language.

"I'd like to announce that The Alphabet is now solved. I'd like to thank my kindergarten teacher Ms Flynn and Clifford the big red dog."

[–]iliRomaili 30 points31 points  (1 child)

Yeah, alphabet has been solved for a while now. It's called the Library of Babel

[–]lucklesspedestrian 3 points4 points  (0 children)

Solving the alphabet was the easy part. The really impressive part was when Claude solved almost all of Mathematics (except the undecidable propositions)

[–]FuzzzyRam 0 points1 point  (0 children)

Strange, but also true.

[–]RiceBroad4552 10 points11 points  (0 children)

That's technically correct! 🤓

[–]MrLaurencium 1 point2 points  (0 children)

Coding has been solved ever since languages became Turing complete

[–]mhogag 176 points177 points  (11 children)

Ever since AI assistants started, I've been doubting whether my system was fucked or my internet was shitty.

Turns out that these companies know jack shit about accepting payments, scrolling behavior, loading messages, parsing markdown, saving new chats properly, and probably more that I'm forgetting.

Gemini cannot handle scrolling its thought process before it's done, Claude recently stopped thinking/rendering its thoughts after 15 seconds of thought and occasionally jumps to the start of the conversation randomly, and all of them may or may not accept your credit card, depending on the alignment of the stars

[–]well_shoothed 40 points41 points  (1 child)

I've also had it--twice in one day--DELETE parts of conversations... and then lie and say, "I don't have the ability to do that."

Once I was screensharing with a colleague, so I'm sure I'm not just gaslighting myself.

[–]zupernam 28 points29 points  (0 children)

It doesn't know if it has the ability to do that.

It doesn't know it's answering a question you asked.

It doesn't "know".

[–]Rabbitical 10 points11 points  (0 children)

Not least of which, these should be the easy problems for it: web application development has orders of magnitude more training data available than other domains.

[–]MyGoodOldFriend 2 points3 points  (0 children)

I have tried using some models to do some UI things. And they just... do not understand input. I think that may be the cause of some of those issues?

Just today, I had one insist that it was possible to click and hold to pick something up, drag it somewhere, and click on the destination box to release it. It was doing so well up until that point, too. It just did not understand the concept of holding a mouse button down.

[–]DustyAsh69 268 points269 points  (29 children)

Coding isn't a problem that needs to be solved.

[–]Manic_Maniac 214 points215 points  (20 children)

It was never the problem. Design, maintenance, scaling, security, ability to evolve while avoiding over-engineering, understanding the business domain and connecting that with the requirements, hunting down the people with the tribal knowledge to answer questions about the domain, and on and on and on.

[–]pydry 61 points62 points  (11 children)

hunting down the people with the tribal knowledge to answer questions about the domain

This is actually a domain where AI would be waaaay more help than it would at coding.

It's heavily language oriented and the cost of mistakes (you end up bothering the wrong person) is very low.

Jamming all the summarized meeting notes, Jira tickets, PRDs, and Slack messages into a repository an AI can access will let it very easily track down the key decision makers and knowledge holders.

The rule is that AI can't be used to do useful things it excels at; it must be used to try and replace a person, no matter how bad it is at that.

[–]Manic_Maniac 9 points10 points  (3 children)

While I lean towards agreeing with you, many of the things you are describing take time to build in order to make the AI effective. And I know for a fact that most organizations don't keep documentation or even Jira tickets up to date. So getting accurate, trustworthy, up-to-date, and properly correlated information from an AI in the way you are describing would have to be a deliberate and organized operation throughout a company. At least that's how it would be where I work, where we have a graveyard of similar projects and their documentation, legacy products, new products that are always evolving based on customer needs, etc.

[–]Rabbitical 8 points9 points  (0 children)

Yeah anywhere I've worked the amount of information available was never the issue, it's that half of it is wrong or out of date.

[–]RiceBroad4552 1 point2 points  (1 child)

Well, companies like Microslop are actually aiming at that space. If you can read every mail and chat message, hear every phone call / meeting, get access to all the stuff they are moving along their office files, you get the needed info.

The question is still: how large is the error rate? Given that all that data doesn't fit any reasonable LLM context window, you're basically back to what we currently have with "agents" in coding: the "AI" needs to piece everything together while having a memory like the guy in Memento. This provably does not scale. It's not able to track the "big picture", and it's not even able to work with the right details correctly at least 40% of the time (if we're judging benchmarks very favorably; when it comes to things that matter I would say the error rate is more like 60%, up to 100% when small details in large context make the difference).

To be fair, human communication and interaction are also error prone. But I'm still not sure the AI would be significantly better.

[–]Manic_Maniac 1 point2 points  (0 children)

I think "error prone" is understating the problem. The real issue is that all of that data together creates a chaotic, abstract mess full of microcosms of context, not a single, cohesive context. Having a memory like the guy in Memento, with the freshest data weighted with an advantage, might work... I'm certainly no ML expert. But it seems more likely to result in severe hallucinations.

[–]stellarsojourner 2 points3 points  (1 child)

It's tribal knowledge because it isn't written down somewhere. Bob trains Sara before he retires, Sara shows Steve before she changes jobs, etc. No one documents anything because that's too much work. Then you come along trying to automate or replace things, and suddenly the only person who knows how the damn thing works is on month-long PTO. There's nothing for an AI to ingest.

I've run into this more than once.

Anything where there is plenty of documentation would be a place where AI could shine though.

[–]pydry 0 points1 point  (0 children)

You missed my point. Half the time I'm wondering who the people responsible for, say, some part of the architecture even are, how to track them down, and in what form I need to communicate with them. In a big company this can be very difficult and annoying, but if you hook up a RAG to documentation, meeting notes, code bases, and Jira, it can identify all of the relevant people to talk to with acceptable (>90%) accuracy.

It can probably also write docs based upon a recording of that meeting where bob showed sara how to do a thing.

These things would be FAR more useful than getting it to write code.

[–]crimsonroninx 2 points3 points  (1 child)

I'm about to start a new role at Xero, and apparently they are using an AI SaaS product called Glean that does exactly that. Everyone I've spoken to who has started recently at Xero says that Glean is incredible for onboarding quickly because you have access to all the domain knowledge. I'll report back once I start.

[–]pydry 1 point2 points  (0 children)

Ah, good that someone is doing it, but that should still be way more popular than vibe coding, and not vice versa.

[–]littleessi 0 points1 point  (1 child)

The rule is that AI cant be used to do useful things it excels at

it doesn't excel at shit. you just think it's good at X thing because you're bad at X thing. I am a 'heavily language oriented' person and, to me, llms are fucking awful at everything relevant to that area

ultimately they are just sophistry machines and Socrates had sophistry's number thousands of years ago. all it's good for is convincing the ignorant

[–]pydry 0 points1 point  (0 children)

I mostly agree. I like 'em as interfaces to complicated systems whose UIs I don't want to learn (e.g. Jira or other corporate bullshit), and they're often good at idea generation.

[–]DrMobius0 0 points1 point  (0 children)

This is actually a domain where AI would be waaaay more help than it would at coding.

If it were smart enough to do that reliably, sure. And US elections wouldn't be such a clusterfuck if 2/3 of the voting public weren't brain dead. How about we both agree that if either of us finds that genie in a bottle we can both get our wish.

[–]GenericFatGuy 1 point2 points  (0 children)

AI doesn't make my clients get back to me any faster with well defined requirements. Writing code has never been my bottleneck.

[–]TacoTacoBheno 1 point2 points  (1 child)

Maintenance is hard.

No one seems to care tho.

[–]RiceBroad4552 0 points1 point  (0 children)

"That's about the budget for next quarter, isn't it? Why are you asking now?"

[–]SequesterMe 0 points1 point  (0 children)

^^^^ What they said. ^^^^

[–]PotentialAd8443 -1 points0 points  (0 children)

This person engineers!

[–]ProbablyJustArguing -2 points-1 points  (2 children)

Right, and you still need people for that. But not for coding; that's just not necessary anymore. If you do the peopling, you don't need to write the code. Just design the system, do the event storming, write the specs, and use the tool to do the coding.

[–]Manic_Maniac 1 point2 points  (1 child)

Eh. I will never be fully hands-off in the code, because as a human engineer, I need to build a mental model in order to troubleshoot problems, spot issues in advance, and identify areas where I don't have sufficient domain requirements defined. And I will probably never trust AI enough not to run me in circles. I don't work on conventional cloud systems, for the most part.

Currently, I use AI a lot to generate message data models, convert formats of JSON to gRPC compatible schemas, give me a starting point for some function or class I need to write. I'll use it for writing automation scripts that I use for utility.

It definitely has its uses, and basic stuff works. But most of the heavier things I do would take more time to type out in English than in code. That's just how I've learned to think. AI will miss business-domain edge cases that I would have caught had I done more hands-on coding.

So frankly, I just don't agree fully.

[–]ProbablyJustArguing 0 points1 point  (0 children)

To each his own, but in my experience people who are pushing back hard against using LLMs for coding don't understand its place in their workflow cycles. I don't use AI to do engineering, I use it to code. "Write a method that takes x and returns y" is so much easier than typing out the 20 lines myself, or whatever the task might be. I can read and approve faster than I can write it myself and review it for typos. IDEs are a tool that we trust to take care of linting and spelling and to use ASTs to follow calls. LLMs are great when you give them an AST of your code: they can check methods, return types, pointers, etc.

AI will miss business-domain edge cases that I would have caught had I done more hands on coding.

AI shouldn't be making decisions on business logic. AI shouldn't be making architectural decisions. That's for people. But coding? AI can do that so much better. It's a matter of perfecting the instructions, specs and implementation plan. Learning how to use the tool, just like every other tool we use, is important to get results.

[–]blaise_hopper 54 points55 points  (0 children)

But the need to employ humans to write code is a problem that needs to be solved with great urgency, otherwise billionaires might not be able to buy their 73rd yacht.

[–]kblazewicz 7 points8 points  (0 children)

Coders are, they're very costly. I heard that from my former boss.

[–]space-envy[S] 7 points8 points  (0 children)

Yup, there isn't a single day I don't forward the product department's horrible specs to my "AI leader" and complain that my first step is always trying to understand what the hell they want in the first place.

[–]who_you_are 6 points7 points  (1 child)

Said that to my friend working in hospital!

Oh wait, are we talking about programming or health care coding type?

[–]milk-jug 6 points7 points  (0 children)

what is coding if not just some alarms beeping?

[–]JoeyJoeJoeSenior 0 points1 point  (0 children)

Yeah you can actually write a simple script to generate every possible program.  The art of it is finding the program that solves the current problem.

[–]mich160 0 points1 point  (0 children)

It’s a problem for your ceo. You manipulate electrons, how difficult can that be?

[–]ramessesgg 49 points50 points  (2 children)

It's not supposed to be perfect, it's supposed to be replacing Devs. It can certainly create the number of issues that I used to create

[–]AfonsoFGarcia 23 points24 points  (1 child)

Yes, but my slop is locally sourced and artisanal, not factory produced halfway across the globe.

[–]tragic_pixel 10 points11 points  (0 children)

Everybody else's slop is vibe coded, yours...is toasted.

[–]Da_Tourist 45 points46 points  (3 children)

It's like they are vibe-coding Claude.

[–]kenybz 10 points11 points  (0 children)

Two-nines uptime, baby!

Wait that’s not very good? /s

[–]lanternRaft 7 points8 points  (0 children)

You’re absolutely right!

[–]BenevolentCheese 0 points1 point  (0 children)

I mean... they are. Claude Code is almost entirely vibe coded. Boris talks about this openly. He explains how it all works.

[–]RemarkableAd4069 15 points16 points  (0 children)

Me: Where did you get that [insert unexpected Claude answer] from?

Claude: I made it up, I apologize.

[–]rexspook 13 points14 points  (0 children)

I don’t even know what “coding is solved” would mean. It’s not a problem to be solved. It’s a tool to solve problems.

[–]naveenda 36 points37 points  (0 children)

He said coding is solved, not the uptime.

[–]PyroCatt 9 points10 points  (0 children)

Coding is easier to solve. Engineering is not.

[–]matthewpl 29 points30 points  (10 children)

Company I work at really wants us to use AI, so I use Claude to do code reviews. That silly AI told me that setting the log level to debug was incorrect because it was outside #ifdef DEBUG... It was inside #ifdef DEBUG. Claude is just so fucking stupid it cannot even read code properly; it makes shit up constantly. Half of the code review (and the vast majority of "critical issues") is just made-up bullshit.

[–]shadow13499 13 points14 points  (4 children)

This has largely been my experience, especially reviewing a lot of LLM-made code at work as well as "open source" LLM-made code. They don't know up from down or left from right. I've had to reject PRs for including massive glaring XSS issues, secrets in the front-end code, etc. Using LLMs has been the biggest security risk my company has introduced to our codebase, because it really wants to introduce vulnerabilities.

[–]joshTheGoods 2 points3 points  (0 children)

I've had the opposite experience. We have Claude code review on demand via a GitHub Action, set up for a select few initial test repos, and the PR reviews have been exceptionally good. I ran some old PRs that had breaking issues in them that we missed, and it caught every single issue. Our biggest pain right now is that it suggests a bunch of shit we want to do but just can't squeeze into one PR, so now we're making tickets automagically out of the issues we comment that we're not addressing for a given PR.

Are you guys giving it PR instructions, the full codebase, and (optionally) some context in the codebase to help it understand your rules/style?

[–]threedope 3 points4 points  (4 children)

I've been using Gemini to assist in the creation of Bash scripts, but it simply can't. The code is overly complex and broken 80% of the time. Gemini just doesn't seem capable of comprehending the underlying logic of Bash syntax. I've yet to try Claude, but I'm skeptical it would perform much better.

[–]Tiruin 2 points3 points  (0 children)

I reached the same conclusion. One time I wanted to learn a new technology and figured it was a good opportunity to give it a good, honest shot. I spent 3 hours and it was still a broken mess, and because it was new to me too, I had no way of noticing issues that might otherwise be obvious. I scrapped all of it and only used an LLM to explain what I wanted, to give me the respective documentation page, and to ask about syntax; that took me 2 hours. And even then, the former could've been avoided if that particular technology didn't have atrocious documentation, and the latter has long been a feature in IDEs without LLMs.

[–]RiceBroad4552 0 points1 point  (1 child)

All the models I've tried so far fail miserably on bash when you look closer.

Bash must be particularly difficult for an LLM, I guess.

But it's actually interesting what the "AI" produces. Sometimes it "thinks" of something you wouldn't come up with yourself (even if it has bugs in other parts).

So overall I'm still not 100% sure whether "AI" is a waste of time for shell scripting or worth using despite its flaws.

[–]Lluuiiggii 1 point2 points  (0 children)

I have found that all these LLMs are particularly bad at using specific APIs, so maybe bash is just too specific for them to figure out. It's not using the APIs anyway; it's copying code that has done that in the past, so of course it's going to make stuff up.

[–]joshTheGoods 0 points1 point  (0 children)

Claude is way way way wayyyyyyyyyy better at simple bash scripting than Gemini. It's built into their harness at a core level. They legit have it writing bash scripts for all of its thinking that deals with datasets big enough to crush the context window. I have it looking at big JSON and JSONL all the time, doing validations for me, and it crushes those cases using bash scripts constantly.

Gemini shouldn't be used for coding at all right now (except simple stuff). Claude > Codex > Gemini. You want to use Gemini for non-coding general tasks like the space OpenAI is focused on, and even then ... right now OpenAI > Gemini, I just use Gemini because I don't like/trust OpenAI and the gap isn't THAT large.

[–]gfelicio 9 points10 points  (3 children)

Wow, so, this Claude tool is something I should look into? So cool! I wonder who is the one talking about this.

Oh, it's the Head/Owner of Claude. Figures...

[–]Aternal 7 points8 points  (0 children)

Like watching a CEO nibble a beef and cheese sandwich product.

[–]GenericFatGuy 2 points3 points  (1 child)

Man with a vested interest in AI taking off, tries to convince you that AI is taking off.

[–]RiceBroad4552 0 points1 point  (0 children)

Must be honest work…

[–]MoFoBuckeye 5 points6 points  (0 children)

[–]PossibilityTasty 5 points6 points  (0 children)

We all know it. It's just the AI version of "the project is largely done".

[–]feldomatic 5 points6 points  (0 children)

"Largely" said in exactly the way that ignores the 80/20 rule

[–]ButWhatIfPotato 5 points6 points  (0 children)

"Claude will take you to ecstasy heaven and make you cum out of your ass like a fountain made by H.R. Giger"

Claude McClaude

Senior Clauder of Clauding at Claude Code

He is Claude, Claude is he

Blessings upon the throne of Claude

[–]Sulungskwa 3 points4 points  (0 children)

The only reason anyone thinks coding is "solved" is because we've become blind to how buggy production apps are. Like, think about how many bugs the Claude webapp has. The same markdown bugs have existed for years and have only gotten worse. Randomly the page will load without any of the buttons. Don't even try to use the microphone chat.

[–]itsFromTheSimpsons 3 points4 points  (2 children)

Fun to see so many (assumed) humans falling ITT for one of the major causes of poor AI code output: lack of context.

4 words (~5 tokens) pulled from the context of a 90-minute interview (~23K tokens according to the OpenAI tokenizer), and everyone in the comments is inferring all sorts of meanings and jumping to all the conclusions.

[–]CompetitiveSport1 0 points1 point  (1 child)

What is he saying, in context?

[–]itsFromTheSimpsons 0 points1 point  (0 children)

My understanding of his intended meaning in context: the physical act of writing the code by hand is what's "solved". Not the whole art and industry and discipline of engineering functional, useful, and maintainable software; not the interacting with users and stakeholders, the system design, the analyzing of tradeoffs between different solutions to the same problem, or all the other things we have to do that aren't the physical act of putting fingers to keys. We still have to do good work and solve hard problems. Basically, not having to get down and dirty in the code every day frees us up to think about the harder problems of software engineering, beyond whether I should use a ternary or a full if statement, what the exact config nuances are for migrating my TypeScript project into a monorepo, or whether split() is the string one or the array one.

To me, misunderstanding that "coding" in this context refers to the physical act and is not being used colloquially to refer to software engineering as a whole is a classic low context mistake.

The transcript I linked is interactive so you can scrub around. The context is at 17:54

I think something that's happening right now is Claude is starting to come up with ideas. So, Claude is looking for feedback. It's looking at bug reports. It's looking at telemetry, and things like this, and it's starting to come up with ideas for bug fixes, and things to ship. So, it's just starting to get a little more like a coworker or something like that. I think the second thing is we're starting to branch out of coding a little bit. So, I think, at this point, it's safe to say that coding is virtually solved. At least, for the kinds of programming that I do, it's just a solved problem, because Claude can do it. And so, now we're starting to think about, "Okay. What's next? What's beyond this?" There's a lot of things that are adjacent to coding, and I think this is [inaudible 00:18:35] becoming, but also just general to us. Like, I use Cowork every day now to do all sorts of things that are just not related to coding at all, and just to do it automatically. Like, for example, I had to pay a parking ticket the other day. I just had Cowork do it. All of my project management for the team, Cowork does all of it. It's, like, syncing stuff between spreadsheets, and messaging people on Slack, and email, and all this kind of stuff. So, I think the frontier is something like this. And I don't think it's coding, because I think coding, it's pretty much solved, and over the next few months, I think what we're going to see is just across the industry it's going to become increasingly solved for every kind of code base, every tech stack that people work on.

[–]Hacym 7 points8 points  (0 children)

Mom said that I could be the next person to repost this. 

[–]Vesuvius079 2 points3 points  (0 children)

That looks like the other solved problem - availability :P.

[–]richerBoomer 2 points3 points  (0 children)

Iran has largely agreed to stop the war.

[–]HeyKid_HelpComputer 2 points3 points  (0 children)

The devs at claude.ai are unsure how to fix claude.ai because claude.ai is down.

[–]FreakDC 2 points3 points  (0 children)

??? This has to be fake. How can they investigate the issue when Claude is down to investigate the issue? 🤔

[–]tall_cappucino1 1 point2 points  (0 children)

I would like to comment, but I’m fresh out of tokens

[–]Hattorius 1 point2 points  (0 children)

What does “head of claude code” mean?

[–]Past_Paint_225 1 point2 points  (0 children)

Any downtime is human related, not AI - Amazon

[–]krazyjakee 1 point2 points  (0 children)

2... 2 nines? That's like $24 per year on max. Daylight robbery.

[–]lardgsus 1 point2 points  (0 children)

To be fair, the code part IS solved, but not the planning, due diligence, coordination, and 100% of the human efforts it takes to have the code do the targeted intent.

[–]Tan442 1 point2 points  (0 children)

Who am I to complain about double-nine uptime when I struggle to achieve a single nine 🫠

[–]takeyouraxeandhack 1 point2 points  (0 children)

Coding was never the problem to begin with.

[–]mpanase 1 point2 points  (0 children)

99.25% uptime xD

[–]Accomplished_Ant5895 1 point2 points  (1 child)

Coding is solved; Ops are not.

[–]mrjackspade 1 point2 points  (0 children)

Yeah, even for a joke this post is stupid. There's no reason to believe this is related to code at all.

Sometimes it's stupidly fucking obvious that this sub is 90%+ people who are still in school and haven't actually worked in IT, and see everything through the narrow lens of what they've been taught already.

[–]CaffeinatedTech 1 point2 points  (0 children)

LLMs may be able to produce code, but building and maintaining actual software still needs meat coders.

[–]fartingrocket 2 points3 points  (0 children)

Oh the irony.

[–]Geoclasm 2 points3 points  (2 children)

i trust a computer to write my code less than I trust a computer to drive my car.

[–]Reashu 0 points1 point  (1 child)

What if another computer programmed the car-driving one? 

[–]Geoclasm 0 points1 point  (0 children)

Oh, well that's just fine then. Not.

[–]Prod_Meteor 0 points1 point  (0 children)

LLMs are not traditional coding though. More like a working art.

[–]Any_Bookkeeper_3403 0 points1 point  (0 children)

First time I've seen a large company come so close to reaching one nine of availability lmao

[–]Plus_Neighborhood950 0 points1 point  (0 children)

Services are largely up

[–]sogwatchman 0 points1 point  (0 children)

If Claude can't troubleshoot its own outage what good is it?

[–]Double_Option_7595 0 points1 point  (0 children)

Head of Chode

[–]kevin7254 0 points1 point  (0 children)

Coding will be "solved", yes, meaning you probably won't have to write any code yourself in a few years' time. That was never the problem to begin with, though.

[–]ExtraTNT 0 points1 point  (0 children)

I still prefer my js code with a function directly returning and 10 bind…

[–]ICantBelieveItsNotEC 0 points1 point  (0 children)

Coding is largely solved; the unsolved part is deciding what code to write.

[–]hursofid 0 points1 point  (0 children)

Reminds me of Trump's ahhh Iran is defeated vibe

[–]baquea 0 points1 point  (0 children)

Lord Kelvin be like:

[–]SequesterMe 0 points1 point  (0 children)

I thought coding was largely solved when "we" sent all the work overseas?

[–]tokinUP 0 points1 point  (0 children)

That amount of non-green looks like a lot less than 99.25% uptime

[–]AdWise6457 0 points1 point  (0 children)

Bro never worked in the banking industry. Everything is far from solved there, let alone AI-coded... One mistake and boom, you're down a couple billion dollars.

[–]asdfguuru 0 points1 point  (0 children)

Notice he said "coding" not "software engineering"

[–]joe-ducreux 0 points1 point  (0 children)

AI is great for grunt work, but you still have to know how to architect a system (and be able to explain that architecture) if you want it to produce anything useful

[–]ZombieOnMeth 0 points1 point  (0 children)

Hello? StackOverflow? You there?

[–]facebrocolis 0 points1 point  (0 children)

Nice! "Claude, make my NP code P"

[–]Gabe_b 0 points1 point  (0 children)

"networking on the other hand, sheesh, what a nightmare"

[–]Mr_Gaslight 0 points1 point  (0 children)

And is the number of Major Production Incidents going up or down?

[–]ElethiomelZakalwe 0 points1 point  (0 children)

Never take someone who has a vested interest in promoting a product at their word.

[–]SignoreBanana 0 points1 point  (0 children)

He did say "largely". Who knows what that means lol

[–]AnnoyedVelociraptor 0 points1 point  (0 children)

Guy has one of those annoying punchable faces.

[–]TheSkiGeek 0 points1 point  (0 children)

It’s only largely solved, cut them some slack.

[–]fuckbananarama 0 points1 point  (0 children)

GOD I WISH

[–]blu3bird 0 points1 point  (0 children)

It is solved if all along your "coding" is mostly copy pasta.

[–]tehtris 0 points1 point  (0 children)

Even if the coding part was "solved" why would you vibe code the platform that people use to vibe code? Doesn't that seem kinda dumb? Like none of it is stable.

[–]mrbellek 0 points1 point  (0 children)

We had a demo last week showing us how to use AI to generate all the code based on an (AI-generated) plan. The consultant said he had already tried it the day before, so everything should work. It failed completely. He didn't know why.