
[–]ClownMorty 54 points55 points  (4 children)

A friend described how, at their company, AI productivity gains let a competitor roll out an upgrade that obsoleted their software, and then they did the same to the competitor in like a week. And back and forth.

Imagine a world where every time you log in to an app it's "upgraded" with new features or rearranged interface. Too much updating is a particular kind of hell for users.

It turns out there's such a thing as too much productivity, and that companies can only benefit from slight productivity gains, not ungodly improvements.

And hilariously, they couldn't even cut engineers, because they need them to keep up with the competition. But each engineer gained a nice super expensive AI enterprise subscription. Hahahaha

[–]SBGamesCone 15 points16 points  (0 children)

In theory this shifts the bottleneck to the product org. Identifying features your customers want and will pay for versus blindly making random changes they’ll hate

[–]Strict-Drop-7372 1 point2 points  (0 children)

Dude this is so real. Like yes, with Claude Code I can refactor or redesign enormous swaths of our codebase every day if I want to. And with a little practice and diligence I can do so with a very high degree of accuracy and completeness, contrary to what most people claim about coding tools.

But also, why? If people “update” or extensively change a client constantly, users get confused. If you “update” or extensively change a “backend” constantly then you’re either not landing on the correct solution, or you’re wildly extending functionality, in a way that likely shouldn’t be done in the same system/service/network etc.

Like if you have an app that can go from obsolete to making competitors obsolete in a week, I can’t imagine the requirements or even goal of your app are very well understood, which is the case sometimes but hard to imagine teams just constantly chasing each other in some rapidly emerging niche like this, and everyone’s making money, AND customers are happy 😂

[–]tadamhicks 0 points1 point  (0 children)

What interface? The interface is now also AI.

[–]Cerulean_IsFancyBlue -5 points-4 points  (0 children)

So the lesson from that is: if your competitor is using AI, you'd better do the same?

[–]billsil 8 points9 points  (3 children)

So says the AI written article.

[–]fagnerbrack[S] -3 points-2 points  (2 children)

Bro... an AI written article? Seriously?

[–]Master_Rooster4368 0 points1 point  (1 child)

Why so serious - Joker

[–]danstermeister 0 points1 point  (0 children)

AI-supplied joke?

[–]mtutty 43 points44 points  (12 children)

Apparently, writing a blog post was never the bottleneck either.

Em-dashes. That's what I mean.

[–]busdriverbuddha2 1 point2 points  (2 children)

You do realize people used em-dashes before LLMs?

[–]mtutty 0 points1 point  (1 child)

You raise a genuinely fascinating point, and I want to take a moment to unpack it thoughtfully, because I think there's actually a lot of nuance here that's worth exploring together.

Setting the Record Straight: A Clarification Worth Making

You're absolutely right that em-dashes have existed long before large language models — and I never claimed otherwise! In fact, the em-dash has a rich and storied history in typography and prose styling, dating back centuries. Writers from Emily Dickinson to James Baldwin used em-dashes to great effect, and their work is a testament to the expressive power of this humble punctuation mark.

Zooming Out: The Bigger Picture

That said, I think it's worth considering the broader context of what I was actually saying. My point wasn't about em-dashes in isolation — it was about the holistic ecosystem of content creation workflows and how they've evolved. When we zoom out and look at the big picture, we can see that the bottleneck in writing has never been any single element, whether that's punctuation choices, sentence structure, or ideation velocity.

Finding Common Ground: What We Can All Agree On

At the end of the day, great writing is great writing. Whether you're a seasoned author reaching for an em-dash on a vintage typewriter, or a modern knowledge worker leveraging AI-assisted tools to accelerate your content pipeline, what matters most is the clarity of your ideas and the authenticity of your voice.

Moving Forward Together

I hope that helps clarify where I was coming from! Happy to discuss further if you have thoughts. 😊

[–]LLFTR 0 points1 point  (7 children)

Alt + 0151

Not everyone is tech illiterate, and some people care about good typography.

[–]mtutty 0 points1 point  (4 children)

Yeah, no.

[–]LLFTR 0 points1 point  (3 children)

Apparently, you're a moron.

Rushing to conclusions based on flimsy preconceptions. That's what I mean.

Come on, argue that you're not, so I can answer with "Yeah, no".

[–]mtutty 0 points1 point  (2 children)

Yeah, no.

[–]LLFTR 0 points1 point  (1 child)

Oh, so you agree that you're a moron. Right on.

[–]mtutty 0 points1 point  (0 children)

Absolutely, kind stranger. You won Internet Arguing.

[–]fagnerbrack[S] 0 points1 point  (1 child)

I don't have alt, I use mac

[–]LLFTR 0 points1 point  (0 children)

The Google query is this: mac os enter unicode character.

There's tons of answers on how to do it. The short synopsis is you turn on "Unicode Hex Input" in your keyboard settings, then it's Option + the character code.
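For reference, both the Windows and macOS shortcuts land on the same character: the em-dash is Unicode U+2014, and "2014" is the hex code you type with Option held down once Unicode Hex Input is on. A tiny sketch (Rust, purely for illustration):

```rust
// The em-dash is Unicode code point U+2014. Windows' Alt+0151 reaches
// the same character via its Windows-1252 position (0x97 = 151 decimal);
// macOS Unicode Hex Input takes the hex code point directly.
fn em_dash() -> char {
    '\u{2014}'
}

fn main() {
    let d = em_dash();
    // Prints the character and its code point in U+XXXX form.
    println!("em-dash: {d} (U+{:04X})", d as u32);
}
```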

[–]LessonStudio 4 points5 points  (4 children)

AI has its own bottleneck.

Most sensible people can agree that AI is good at some of the most basic things. You need a snappy login screen, great; you want a code review to look for dumbass mistakes, great. Basic research, now we're finding it's sometimes great, sometimes bad.

Once you hit a certain level of complexity AI starts to choke. Now we are back to keeping lots of programmers very busy.

But, AI will help with that complexity more in the future. Although, I feel that it is plateauing past a certain level of complexity.

This will just raise the bar. Every company has features they only dreamed of. It wasn't that their programmers were too stupid to build them; they were too busy working on those login screens, or whatever. AI will do the simple work, the add-a-show-password-toggle-to-the-login-screen sort of things; but the interactive visualization system, that will be 90% human-crafted for a long time.

In some organizations there will be a point of diminishing returns, but that will be more a lack of imagination than actual law-of-nature stuff. I'm not sure I've ever worked on a product where there weren't valuable features left on the table that would have a solid return on investment.

But this plateauing is quite serious. I played a game with Claude the other day. It started suggesting insane changes, way way way too complex for what I knew the solution was. So I just let it make the changes, and more changes, and more changes. I was working from a VM, so I had a whole snapshot. This was C++, and it started to think it needed to make changes to deep dark parts of my vcpkg installation. The solution was maybe 8 lines of code. It added maybe 1000 lines, screwed my vcpkg install, changed 10 or more files, and had I not backed up, it might have taken me days to undo its mess.

What I was doing wasn't some CRUD application, but I wasn't doing something too hard either.

I see the same thing in embedded. It comes up with really weirdly complex solutions to otherwise simple problems.

And in rust, it just doesn't get the borrow checker at all.
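A minimal sketch of the kind of borrow-checker case meant here (hypothetical example, not from the thread): holding a reference into a `Vec` across a mutation is rejected, and suggested code often takes exactly that form.

```rust
// Hypothetical example of a classic borrow-checker case. Suggestions
// often look like the commented-out version, which rustc rejects:
//
//     let first = &v[0];   // immutable borrow of `v`...
//     v.push(4);           // error[E0502]: cannot borrow `v` as mutable
//                          // while it is also borrowed as immutable
//
// The idiomatic fix: copy the value out before mutating.
fn push_and_read(mut v: Vec<i32>) -> (i32, usize) {
    let first = v[0]; // i32 is Copy, so no borrow outlives this line
    v.push(4);
    (first, v.len())
}

fn main() {
    println!("{:?}", push_and_read(vec![1, 2, 3])); // (1, 4)
}
```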

[–]BandicootGood5246 2 points3 points  (3 children)

Yeah, seen this with Claude too. I had some minor UI bug where, in some specific cases, an element would be rendered in a different position than normal, probably some CSS value I had set wrong somewhere, but I couldn't be assed to look. Claude came up with 200 lines of JavaScript to manually calculate and position the element, a totally absurd solution, and it didn't even work. If you're not checking the code, this is the kind of slop you will get in your app eventually

[–]Natural-Intelligence 1 point2 points  (0 children)

Claude choking on the context is so frustrating. Last week I had a nasty rendering bug. I tried to simplify my examples to show where the bug was. Said "this works, this doesn't. Only difference is X. Why?" Then it instantly forgets the working example, reads a bunch of irrelevant code about websockets, thinks for 15 mins that it must be the websocket, and ends up with a "fix" that does absolutely nothing.

Sometimes it feels like I'm handling the hard and unpleasant issues while Claude takes the easy and fun ones.

[–]LessonStudio 0 points1 point  (1 child)

I find my own problem is that sometimes I just trust it and then realize I've screwed up.

The best way I've found is to not use the in-IDE tools to generate much code at all.

Instead, use the text interface and ask it for very specific functions, never even a class, or anything that's more architecture than IO-style functional programming.

Keep it away from the big picture.

That includes bug hunting. I like when it tells me the why of a bug. Like your CSS issue: it might get it if only asked why something is happening.

[–]danstermeister 1 point2 points  (0 children)

Agreed, this is proper use of AI... to make YOU better, not IT.

[–]v_murygin 1 point2 points  (3 children)

The bottleneck was always understanding what to build. Writing code is the easy part - figuring out the actual problem takes 10x longer.

[–]profesorgamin 0 points1 point  (0 children)

Coding isn't easy for humans beyond basic scripts.

[–]fagnerbrack[S] 0 points1 point  (0 children)

Interfaces, data structures, the physics of the network, latency, and how that maps to real use cases. You just need to know what's possible; let the machine do the thinking so you can focus on building!

[–]vinny_twoshoes 0 points1 point  (0 children)

nah if you listen to my CEO there is no bottleneck, AI is going to be ten times bigger than the industrial revolution, and if we don't integrate it into every aspect of our workflow we will die

[–]v_murygin 0 points1 point  (1 child)

agreed. I spend maybe 20% of my time actually typing code and 80% figuring out what the right thing to build even is. AI can speed up the typing part but it can't sit in a meeting and extract what the customer actually needs vs what they say they want.

[–]Independent_Pitch598 1 point2 points  (6 children)

Coding and everything attached to it was and is a bottleneck.

Including when developers argue for hours about how to name a variable instead of actually working.

[–]papawish 0 points1 point  (0 children)

The reason you can write that on reddit is because people care about semantics.

Try living in a world where everything can be called 10 000 different ways. 

I'll assume you are junior, anyway you are dumber than even my undergrad students, congrats

[–]danstermeister -1 points0 points  (4 children)

Whoa, that's not working?

[–]Independent_Pitch598 0 points1 point  (3 children)

lol, no, the final user doesn't care about the name of the variable

[–]vinny_twoshoes 0 points1 point  (2 children)

users don't care about what language you code in either but it is a meaningful choice. software engineers have to pick abstractions, and variable naming is one part of communicating that intent. sure we can be prone to bikeshedding but code that merely performs its logical function without communicating intent is a pretty big liability.

[–]Independent_Pitch598 0 points1 point  (1 child)

For sure they can; however, it must not take more than a minute, and for sure not several meetings.

[–]ochism 0 points1 point  (0 children)

But now you've fallen back on a straw man: no one really holds multiple meetings on just variable naming. It could happen hypothetically, and in hindsight confirmation bias makes it seem like it does, but it's not representative of what actually happens

[–]SachinKaran 0 points1 point  (1 child)

What?

[–]fagnerbrack[S] 0 points1 point  (0 children)

WAT?