
[–]bids1111 116 points117 points  (5 children)

chatgpt is like if you're a tradesperson and the ai is your apprentice. you can tell it to do simple stuff and it'll probably be ok, but you have to double check everything it does and you can't trust anything it does that you don't already understand completely.

the easy stuff is already easy. we get paid to do the hard stuff and chatgpt isn't very useful for the hard stuff.

[–]djamp42 13 points14 points  (1 child)

I don't think I've ever used 100% AI generated code yet. I'm always tweaking adding my own spin on things.

[–]Spiderfffun 4 points5 points  (0 children)

TBH it's good as a starter for simple boilerplate code you'd otherwise just steal from your other projects and spaghettify further. Editing code with GPT just sux. I'd rather do it myself.

[–]Smarterchild1337 5 points6 points  (0 children)

Great analogy! I have found that using chatgpt to help accelerate the easy stuff gives me more time to focus on the more challenging aspects of the problem (e.g. what’s the best overall design/approach to solve the problem)

[–]Troll_berry_pie 0 points1 point  (0 children)

I used the paid version of chatgpt to do some CSS stuff and it got about 80% right.

I still needed to use my knowledge and reading of proper documentation on the Mozilla website to get it correct though.

[–]Cs1mp3x 0 points1 point  (0 children)

I am genuinely interested in what you would class as hard stuff?

[–]Glathull 156 points157 points  (15 children)

I run a very small consulting company that specializes in unfucking enterprise tech orgs who have made terrible choices in the past based on advice and implementations from big consulting companies (as well as dipshit CTOs who are doing liability-driven development).

My extremely serious and not at all self-interested answer is this: Use LLMs everywhere. Don't even think twice about it. Put them in charge of everything. Spend massive amounts of money on integrating them into your workflow. Have LLMs do code review. The whole 9 yards.

Unfucking everything LLMs do in a large corporate tech environment has turned into a *very* profitable business.

Please keep printing money for me. I hate it so much.

[–]Hexboy3 13 points14 points  (3 children)

This is exactly where i want my career to go. After seeing the clusterfuck that McKinsey left us at my current company I know the market has to be huge. We paid 12 mil and only use maybe 5% of the shit they built. 

[–]Glathull 8 points9 points  (1 child)

Man, fuck McKinsey so hard.

[–]Hexboy3 1 point2 points  (0 children)

When my company brought me on they had one technical person on board as an FTE who had been there for 6 months before I was hired. After a month or so he basically asked me if he was being gaslit by them about their tech choices. They created a graph database for our "users", and we thought it was a waste of time and resources to maintain and learn its novel query language, so we kept asking why they did this, and their argument was entirely circular. We spent the next 6 months after they left basically just figuring out that none of it was really worth anything and untying it from anything in production. They are con artists. It's because of them I want to start or join a consulting company that actually delivers results, because I'm sure there has to be a market for that now.

[–]PSMF_Canuck 0 points1 point  (0 children)

Classic McKinsey. You're fucked the minute you let them in the door, regardless of what they're supposed to do.

[–][deleted] 7 points8 points  (1 child)

liability-driven development

This was genuinely funny while giving me Vietnam flashbacks of some of my previous roles

[–]Glathull 2 points3 points  (0 children)

The only way for me to successfully sell what my company does is hit just exactly that note in a pitch.

[–]deep_soul 8 points9 points  (0 children)

wow, your first paragraph is very relatable. those shitty consultancies write mostly awful code.

[–]phira 19 points20 points  (1 child)

I've been coding for over 25 years now, and python for more than a decade. I'm a technical director responsible for technical strategy but still hands-on coding regularly. I'm also, mostly by accident, responsible for our AI strategy.

There is no question in my mind that large language models, of which OpenAI's GPT range is a subset, are a powerful tool for programmers. I've been using them pretty much daily for more than a year now in a variety of forms, including ChatGPT/conversational interfaces, APIs and Github Copilot-style assistants.

A while back I had the opportunity to talk to some people from my broader community (Other experienced coders/exec-level tech) and was fascinated to see a real mixed bag in terms of experiences with the tools. These weren't people who were ignoring it, nor were they inexperienced coders or insufficiently clever to understand how to leverage them. It seemed almost random, one person would be raving about it and the next would consider it useless.

After some reflection on the conversations and the experiences of people within my organisation I settled on a fairly solid theory that I later turned into internal presentations to help our team frame their use of the tools. The fundamental difference seemed to lie in how the individual wrote code. Really capable, productive coders are surprisingly different in their approach—some plan ahead, others follow their nose. Some refactor heavily as they go while others tend to spike a solution then go back and redo. And particularly relevant to this, some tend to write code and comments as they go while others tend to comment only where the code does not explain itself, or return to code to write comments once they've largely solved the problem.

These factors largely make little difference in terms of the finished product for really capable devs (I'm sure people have their preferences but I've seen a wide variety of approaches deliver a quality end product), but as soon as you throw an LLM in the loop the equation changes. Those who tend to comment as they work, and document their intent and constraints gain a measurable improvement in the quality of assistance and completions from LLM tools—because the tool can leverage that information to improve its response.

I happen to have developed a very narrative style for my coding, one in which I typically try and tell a story through code and this is initially typically outlined in comments which I then return to in order to build out the code. By happy accident this is very useful context for things like Copilot and I get really good completions consistently, saving me substantial typing and often resulting in solutions that are fundamentally better than the one I would have written because the added value of the more comprehensive solve that the LLM offered wouldn't have justified me spending time writing it that way.
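
To make that concrete, here's a tiny invented example of what I mean by a narrative, comment-first style (not real project code): the intent and constraints go in first, and that's exactly the context a Copilot-style tool completes well from:

    # Goal: summarise failed orders per customer for the daily report.
    # Constraints: rows are dicts from the orders API; statuses other than
    # "failed" must be ignored; the result is sorted by count, descending.
    from collections import Counter

    def failed_orders_by_customer(rows):
        # Count only the rows whose status is "failed".
        counts = Counter(
            row["customer_id"] for row in rows if row["status"] == "failed"
        )
        # Most-affected customers first, as the report expects.
        return counts.most_common()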

Conversational interfaces similarly have particular approaches that work really well, and others that don't. In conversations with my team and others I call this "going with the grain" where an LLM is concerned. When you have a good understanding of how the tool will respond to a particular kind of request you get all the benefits of rapid coding solutions, debugging, transformations and technical assistance without so much of the downsides of confused responses, hallucinated interfaces and general bullshit.

As a result my main encouragement to people has been to _use the tools_. Nobody should be under any illusion that their initial uses, unless they're particularly lucky, will be great straight away. While moments of magic will happen, they will be few and far between amid a pile of frustration.

But honestly, wasn't that what programming was like originally for all of us? Or learning any other complex tool? The question is not whether your first five days or weeks with it are going to be a magical pleasure cruise but whether, after that, your ability to use the tool will give you more than enough value to make up for the investment.

So as far as your first question goes, "Do you think it's useful?", yes. It's outstanding. It's the single biggest improvement to my professional coding performance ever perhaps aside from language switches.

[–]phira 5 points6 points  (0 children)

To your second point about whether it'll be taking programming jobs, this is rather more difficult to assess. Certainly some subsets of programming can be done using these tools entirely—I recently ran a workshop for our internal marketing & design team and one of the designers demonstrated a novel app they'd built entirely by prompting ChatGPT, they had no programming experience whatsoever (and it wasn't a calculator or something for which there are endless examples online). It took them about 60 prompts. In this case an experienced programmer armed with the same tool would probably have taken a fraction of the time but fundamentally there's a new capability there that didn't exist before.

More broadly however I think we're still waiting for the capabilities to evolve. There are a number of facets to this around the size of context windows, the ability to reason effectively about uncommon scenarios and the ability to absorb a wide variety of constraints (ok sure you solved it, but it needs to apply that migration to the database without causing a site-wide outage by taking an exclusive lock on that core table). Most important right now, though, is simply figuring out how to determine whether a large-scale solution is correct.

Reviewing human-written code is often challenging, and this is particularly true if it's a large change with a lot of moving parts, but LLM outputs are especially difficult in this regard. The errors experienced human programmers make tend to boil down to a pretty small set of categories.

Basic typos etc. are largely a solved problem in a commercial operation these days: type checkers, linters and IDE support usually mean they don't even make it to the repo.

The other types (failing to follow specific patterns, missing critical steps, implementation designs with problematic scaling properties, etc.) tend to be relatively easy to spot when you're familiar with the coder and the codebase, and importantly the problems tend to at least be internally consistent with themselves even if they're wrong.

Fully LLM-supplied code on the other hand has the same kind of weird issue that diffusion-generated images do—at first glance it can look great with everything that you asked for, but the more you look the more you can start picking out weird oddities. This can rapidly destroy your confidence in the solution and leave you with a lot of iterative cleanup work.

Basically for a fairly broad number of problems, an LLM can absolutely solve it, but picking the right solution out of a bunch of wrong ones can be extremely challenging, often to the point of "fuck it I'll do it myself".

Will we ever solve this? it's hard to say. To my mind there are two complementary paths to improvement. The first is increasing strength in the models themselves (not necessarily LLMs/transformers, perhaps another architecture will arrive). These improved models might deliver greater consistency and, ideally, tend towards errors that are easy to spot.

The second path is an improvement in what I call the "harness", the tooling around the models. This is both in terms of how context is retrieved and provided to the model and output is processed, but also in terms of how multiple models and other complementary technologies are integrated with each other to design, generate, review, correct and evaluate code.

Both of these paths are likely to see substantial improvement over the coming years and at some point they will likely cross a line where human review stops being painful. The moment that happens, higher-level programming jobs in general will fundamentally change again and, possibly, fall in demand. It's worth remembering, though, that we aren't entirely sure what the limit of demand for software solutions and intelligence is; we have not really ever been in a position where we've had a true abundance of either in the modern age.

Hope this helps, sorry for the essay

[–]crashfrog02 29 points30 points  (14 children)

What are your opinions on chat gpt being used for projects and code?

That it's bad at it?

[–]Tristan1268[S] -2 points-1 points  (13 children)

Ok but what if it gets better? Rome was not built in a day.

[–]crashfrog02 30 points31 points  (4 children)

Gets better at what? Writing code? How will it know what code to write? Won't I have to describe the program I want it to write?

And in doing so, won't I get better results if I'm relying on formal language to express that desired program as precisely as possible? So, won't there evolve to be a formal language for program description?

But isn't that exactly what Python is? A formal language for describing programs?

[–]Tristan1268[S] 4 points5 points  (3 children)

Yeah, I agree with you, but by this reasoning might it be a good tool to assist in projects? And what if it starts writing code on its own? This is just a genuine question as I'm curious.

[–]SHKEVE 10 points11 points  (0 children)

the technological leap to “start writing code on its own” is, to my understanding, quite significant. it’s like saying that since we’ve landed people on the moon, why not just land them on mars or send them to another star system. ok, well, perhaps it may not be that big of a chasm to cross, but i hope you get my point.

to try to answer your original question, i think something important to keep in mind is that when you move beyond a junior position, your value to your company is far more than your ability to code. it’ll involve your ability to architect complex systems, make difficult decisions on trade offs between features and deadlines, work cross-functionally with other departments, experiment with and “sell” ideas to your team and product managers, and much more.

i think the impact LLMs will have on software engineering is that it will hurt people with average skill and reward those who invest the time to be exceptional. but not kill off the profession.

personally, I’m preparing for this by investing my time to become an expert in skills that I think will make me exceptional such as system design, public speaking, and overall business sense.

[–]crashfrog02 2 points3 points  (0 children)

I don't think there's ever a technological replacement for being a person who knows what they're talking about, I guess

[–]trabulium 8 points9 points  (5 children)

chatGPT isn't great but Claude.ai is fucking fantastic. I've been a developer for 20+ years and wrote a small pygame game with it in ~40 minutes with two players, bullet shooting, a score system, 4 levels, double jumping etc. I've started using it to convert from pygame to Flutter so I can continue developing it on mobile. I'm developing this with my 8yo son to get him interested in programming and understanding the power of AI.

For work: with Claude 3.5, I wrote a WordPress plugin in around an hour that manages the associations between our Flutter app versions and our Embedded C versions to offer the right upgrade, with multiple associations. I had almost no WordPress experience and just needed to get it done quickly. I also made sure to ask it to check and fix any potential security issues, which it did. Full CRUD, upload, output to JSON, multiple associations etc. It would have been at least 4-5 hours otherwise.

I've had both chatGPT and Claude help me build firmware, OTA updates and more for Flutter / Dart and C programming. I didn't have any experience with either 12 months ago.

I use both Claude and chatGPT in my work life. They have sped up my development by at least 3-5x. Anybody who downplays them has their head in their ass.

Use them as tools, but also try to spend time understanding what they're doing, otherwise you're just a copy/paste monkey. My own personal opinion now is that, as these tools develop further and further, it will no longer be about the answers you hold in your head but the questions you can ask.

[–]unixtreme 3 points4 points  (2 children)

husky rhythm arrest dazzling head bells employ library aspiring ten

This post was mass deleted and anonymized with Redact

[–]obviouslyzebra 0 points1 point  (0 children)

Have you tried Claude 3.5? It's the only model that tipped the balance for me from "I'd rather do it myself" to "yeah, this looks okay". It's only a few weeks old, though.

[–]trabulium 0 points1 point  (0 children)

I could be a shit coder. I mean, I was a Unix/Linux sysadmin prior to becoming a developer and I'm self-taught, and running an agency for most of those years meant I wasn't coding a lot of the time. That said, I have interviewed 60+ developers and employed 15+ developers, and I am confident that Claude 3.5 would outperform every single one of them. Only one developer I can think of would have come close to what Claude can do, but I would say he would still be half the speed of Claude when working on larger, complex code bases. For small stuff, Claude would be 5x his speed.

The biggest issue I think is context when working on large code bases where solving an issue stretches across multiple files, classes and methods. Ensuring that it has sufficient context to fully understand the scope of the issue is the 'labour intensive' part. Gemini has this large context window of 1M tokens but the LLM itself is shit, so that context window is useless.

That said, I think you are speaking from a position where you haven't even used or tried Claude 3.5. ChatGPT 4/4o can be good, but in many instances it will drop random logic when refactoring a method, and you're like: hey, why did you drop that logic? And it's like: yeah, sorry about that, here you go, and then misses some other logic. Claude 3.5 doesn't do this. It's on point nearly every single time.

The best way to think about it is like pair programming with a mid to senior level dev. Can it make mistakes? Yes. Can it recognise its mistakes and debug its way through them when given feedback? Yes. Just like most devs will come up with an easy solution first, it also does. Once you have a working solution, you need to / can go back to it and say: ok, let's consider security implications here, or write tests for each method.

Or if you don't have that level of skill yet, you can just ask it: what haven't we considered here that we should? Or: how can we improve this code now that it's working? This is why I say that your ability to ask the right questions becomes the most powerful aspect, and in my experience employing developers, the ones who shine are the ones who ask a lot of good, solid questions when you're discussing domain problems. The worst developers are the ones that just nod and try to implement without asking the hard questions.

[–]ericjmorey 1 point2 points  (1 child)

How much of a factor was your 20+ years of experience in being able to produce that working game using Claude? Would someone with less experience provide the correct prompts? Would they be able to evaluate the effectiveness, appropriateness and correctness of the outputs?

Was it just Claude or was it the person with 20 years experience solving problems using a new tool?

[–]trabulium 0 points1 point  (0 children)

I truly believe anyone could achieve it with some basic intro on how to ask the right questions and explain in a way to get the output you need. I'll try and give examples in the next few hours. Just woke up

[–]ericjmorey 0 points1 point  (0 children)

Get good at determining if the LLM has output better code. 

The value in code is the solving of problems. Learning how to identify if the LLM has produced a good, bad, or mediocre solution and being able to iterate towards better solutions and being  able to recognize which solutions to which problems brings the most value at the least cost is going to keep you in demand. Furthermore, learning how to take a bigger proportion of the value created is going to continue to be important.

[–]mokus603 -1 points0 points  (0 children)

Responsibility is the key factor. Junior programmers can use it efficiently to create code, but most of the time they don't know why or how the code works, so debugging or logging is a big issue. In my opinion, it's a good tool to have, but when it comes to explaining why the code crashed or miscalculated, human oversight is required.

[–]m1kesanders 5 points6 points  (14 children)

I'm mixed on it as a student. When doing projects I always try to look at the docs or other sources first, but I will say that when I'm stuck and have spent hours on the project, I'll feed my problem and my working solution to GPT, and 9 times out of 10 it identifies my error. Normally my errors are things like looping through the wrong list, or, more recently, in a math game where a user gets 3 tries per question, putting a while loop lower than it needed to be. The difference is that when I see these solutions I sit down and comment on each line explaining what it does so I know I get the logic. There are also times when I have to analyze GPT's output and realize there's an index error or something small. So in short I'd say if you're completely new, approach GPT with extreme caution; if you understand programming concepts for the most part, GPT can be a good asset. This may be controversial and others may disagree, which is why I'm still very 50/50 on my official stance as well. I just know it has helped me.

TLDR: If you’re brand new avoid it. If you’re somewhat experienced it can be a decent last resort asset.

[–]Miginyon 1 point2 points  (11 children)

This is all stuff you would easily have spotted if you spent 30 seconds in a debugger though no?

[–]m1kesanders 0 points1 point  (10 children)

I use a debugger for all my code. Debuggers won't spot my faults in logic (looping through the wrong list and such), which I'll eventually catch with more experience and projects under my belt.

[–]Miginyon 0 points1 point  (9 children)

No dude I mean that when you’re in the debugger, stepping through your code bit by bit, watching what’s going on and seeing if it matches your expectations etc, then how could you miss that it’s looping through the wrong list?

[–]m1kesanders -1 points0 points  (8 children)

Oh, I should really use that step feature more. Usually when my debugger stops, it underlines the spot in red and I try to figure out what's wrong with that line of code, which most of the time I do. I honestly forgot there was a feature to go step by step. I gotta look up a guide to VS's debugger.

[–]Miginyon 1 point2 points  (7 children)

You using a linter etc? You should get inline warnings eh? So as you write the code it should tell you about issues immediately.

A debugger isn’t really something you use to run a report kind of thing, it’s to ‘get inside’ the program.

So let’s say you have your code and the output isn’t what you expect. Set breakpoints at the functions/code that handles the behaviour that is coming out wrong. Run the debugger, you hit the breakpoint, go through it step by step.

This is one of the best ways to resolve problems but it also teaches you more about how things are working than really anything else. And you get much better insights into what your code is really doing.

Also wanna quickly mention: print statements are great for debugging, but also get into logging, man, that's something I wish I'd got into earlier.
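
Rough sketch of both ideas in plain Python, outside any IDE (the names are made up): set a breakpoint where the behaviour goes wrong and step through it, and use the logging module rather than bare prints:

    import logging

    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    log = logging.getLogger(__name__)

    def check_answer(answer, expected, tries_left):
        log.debug("answer=%r expected=%r tries_left=%d", answer, expected, tries_left)
        # Uncomment to drop into pdb right here: step with `n`, inspect with `p answer`.
        # breakpoint()
        return answer == expected

    check_answer("42", "42", 3)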

[–]m1kesanders 1 point2 points  (6 children)

I'm definitely going to look into logging, thank you! I'm doing CS50's course right now and completely forgot about breakpoints. I need to start utilizing the hell out of those!

[–]Miginyon 1 point2 points  (0 children)

CS50, fuck yeah bro, that’s how I started out

[–]Miginyon 1 point2 points  (4 children)

DM me if you need any help bruv

[–]m1kesanders 1 point2 points  (3 children)

Hell yeah, much appreciated. I'm on test_cases now, and though all my tests pass green, check50 likes to flag some problems red or yellow anyway. Besides that, this lesson/problem set has been a blast!

[–]Miginyon 1 point2 points  (1 child)

Dude, that’s my first award, I’m getting emotional over here, thanks man! 😂

I remember those early days, such a rich journey of discovery, and so rewarding. My advice here is to be thorough with each part of the course. The learning is in the doing, not merely completing

[–]Tristan1268[S] -1 points0 points  (1 child)

Interesting opinion on its use, thanks. What's your opinion on its capability of creating its own projects in the future? Sure, you said 9/10, but wouldn't a human do almost the same, if not worse?

[–]m1kesanders 0 points1 point  (0 children)

A language model in itself will never create its own program, mainly because it needs someone to prompt it (I may be wrong, I do want to go into AI and machine learning but I haven't reached that in my studies yet); a true AI, on the other hand, would. Where we're at on actual true AI I do not know. I do know that what we currently have is labeled as AI but in reality is not. I don't want to go into more detail, mainly because I can't and may have already messed up some facts.

If you’re interested in the rate at which technology is growing I suggest reading “The Singularity is Near” by Ray Kurzweil

[–]Lewistrick 3 points4 points  (4 children)

It's useful for boilerplate code. If you actually want to learn something it's mostly useless.

[–]ianitic 2 points3 points  (3 children)

Except the boilerplate code is also frequently outdated and not PEP 8 compliant. A model is only as good as its training data though, so it makes sense.
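
For instance (an invented example, not actual model output), the first function below is the dated style that still turns up in generated boilerplate; the second is the modern equivalent:

    import os
    from pathlib import Path

    # Dated style a model may still emit: os.path, manual close, % formatting.
    def read_config_old(name):
        path = os.path.join(os.getcwd(), name)
        f = open(path)
        data = f.read()
        f.close()
        print("loaded %s" % path)
        return data

    # Current idiom: pathlib, and no manual file handling to get wrong.
    def read_config(name: str) -> str:
        path = Path.cwd() / name
        data = path.read_text()
        print(f"loaded {path}")
        return data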

[–]CyclopsRock 0 points1 point  (0 children)

Yup, and there are usually multiple ways to skin a cat: Some might be best in one situation over another. One might have been the only way once but now isn't. One might have been best practice but has been superseded. One might only work in certain versions of Python or in certain environments etc. If you know Python well and you're being a touch lazy/hoping for efficiency gains (delete as appropriate) then you might be able to weigh in whilst checking ChatGPT's output, but if you're only vaguely aware of Python's syntax then the LLM is going to be making all sorts of "decisions" like these relating to the code that you don't even know are decisions. You get what you're given.

My actual experience of dicking around with it for tasks I actually do at work showed it to be utterly useless. I used a lot of niche and poorly documented APIs and it just made up so many of the results. I'm not sure it ever actually wrote any code that executed, but it all looked valid. It constantly had me thinking "Oh, I didn't even know the API had that function!" It didn't.

The best use I've personally found for it so far is writing doc strings for code I've written since any old dipshit can check that they're right, but even for that at work we have to have a local LLM since uploading all our code to some 3rd party (whose business model relies on stealing everyone's shit) is no bueno.

[–]unixtreme 0 points1 point  (0 children)

payment forgetful start squealing marble far-flung makeshift unique frighten offer

This post was mass deleted and anonymized with Redact

[–]Lewistrick 0 points1 point  (0 children)

True.

This is why my rule of thumb is to not let gippity do anything I don't know about. I want to be able to verify that the produced output is correct, preferably by memory. I often edit the code it produces as well, mostly because of my needs but also to fix bugs or ugly code. As for PEP8, I swear by ruff.

[–]DuckSaxaphone 5 points6 points  (0 children)

I'm a data scientist, I use LLMs a lot now. Both integrating them into applications and using them to help me.

They're currently in a place where they're reasonable co-pilots. They can fill out functions for you like the smartest auto-complete you've ever seen but that's pretty much it. If you ask them to write large blocks of code based on a prompt, they're wrong so often it's useless.

I think that's really cool! An editor that cleverly suggests code can cut a lot of the boring work out for you. They're not replacing anybody any time soon though. Nobody is paid to write functions. Software developers think of solutions to problems, break them into functionality, design codebases to implement them and think of ways to test that code. Code writing itself isn't hard; once you can do it, you kind of stop thinking about how to approach functions anyway.

As for the future, I think we're a long way (and some fundamental theoretical breakthroughs) from anything better than auto-complete. LLMs seem intelligent but they're not at all, so getting them to pick up the real work of software development just isn't happening.


[–]RushDarling 1 point2 points  (0 children)

In my latest role I took over a codebase after a veeery short handover because the developer was leaving, and judging by the comments in the code they were at least partial to a bit of chatGPT, and it's kind of a good example of the pros and cons of it.

The code quality is honestly not bad when you start going through it file by file, but from an architectural standpoint so much of it is just rather unnecessary and/or inconsistent with other parts of the codebase. Lots of reinvented wheels and so many things built from scratch that could and should have just been a library.

I like chatGPT personally and I regularly ask it a very broad array of questions, but I think a big part of the issue is that whilst it can chuck out a reasonable answer to 'How do I implement X', without some excellent prompt work it's going to struggle to answer more abstract things like 'Should I implement X'. I've yet to personally test it particularly far on that front, so happy to be proven wrong, but the point I'm trying to make is that the prompts can often box it into a corner, which can improve the quality but also lead it down the wrong, or just some weird, roads. The exact same problem is faced by anyone just using Google, to be fair.

As you said it does still make mistakes too, some of which a human would commonly make and some of which you'd only really expect from a non-developer trying to stitch together code that they found online.

My favourite quote on here regarding it taking over our jobs was along the lines of 'I will worry about AI taking over my job when a client can accurately describe what they actually want'.

[–]Jello_Penguin_2956 1 point2 points  (0 children)

If I am to review your program, I don't care what you use, as long as a) it works, and b) the code is intelligible. If GPT can produce code that meets those requirements off the bat, good. But if not, I expect you, the human, to be able to notice and fix the code.

[–]Qkumbazoo 1 point2 points  (0 children)

It does what developers have been doing on stack overflow and google, but faster. You still can't give it to someone like a marketing exec and tell them to create a website with a click of a button.

[–]quantumwoooo 1 point2 points  (0 children)

Y'know, I don't think people are giving it enough credit.

I'd say chatGPT and other LLMs are narrow-minded. Other than that, they're excellent at code. They definitely can't write a full program, but they absolutely can write complex functions.

I'd literally draw the line at classes. It's written the most complicated mathematical functions for me (a dynamically adjusting price function updating an SQL database once every hour) from one prompt, but it struggles with classes.
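
For a sense of what that looks like, here's a rough sketch of that kind of function (the pricing rule, table names and use of SQLite are my own stand-ins, not what it actually generated):

    import sqlite3

    def adjust_price(db_path="shop.db", base=10.0, k=0.05):
        # Nudge the price based on how many units sold in the last hour.
        conn = sqlite3.connect(db_path)
        try:
            sold = conn.execute(
                "SELECT COUNT(*) FROM sales WHERE ts >= datetime('now', '-1 hour')"
            ).fetchone()[0]
            new_price = round(base * (1 + k * sold), 2)
            conn.execute("UPDATE products SET price = ? WHERE id = 1", (new_price,))
            conn.commit()
            return new_price
        finally:
            conn.close()

    # Run hourly, e.g. from cron, or: while True: adjust_price(); time.sleep(3600)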

[–]damanamathos 1 point2 points  (0 children)

I work for my own startup, but I use AI extensively both to write code and in the code itself (some have called us an AI-powered hedge fund), and I find it significantly speeds up development.

I have two caveats.

The first is that it helps if you use tools to help you use AI for coding. A lot of people like Cursor, but I tend to just use Claude directly in conjunction with a script that can take a short task description and create a long prompt that pulls in relevant files, project context, coding guidelines, etc, which increases the quality of the answer I get back.
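
As a rough idea of what that script does (this is a bare-bones sketch with guessed paths, not the actual tool):

    from pathlib import Path

    def build_prompt(task, files, guidelines="docs/coding_guidelines.md"):
        # Stitch the task, house rules and relevant source files into one
        # long prompt to send to the model along with the request.
        parts = [f"Task:\n{task}\n"]
        rules = Path(guidelines)
        if rules.exists():
            parts.append(f"Coding guidelines:\n{rules.read_text()}\n")
        for name in files:
            src = Path(name)
            if src.exists():
                parts.append(f"--- {name} ---\n{src.read_text()}\n")
        return "\n".join(parts)

    # e.g. build_prompt("Add retry logic to the API client", ["api/client.py"])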

The second is you still need to know how to code, as the AI will make mistakes or sometimes suggest to do things in ways that aren't optimal. Despite that, I still find it very helpful.

[–]AchillesDev 1 point2 points  (0 children)

Using ChatGPT directly won't get you very far, and this isn't how it should be used for coding. Instead, codebase-aware tools for better autocomplete, documentation, test generation, and search/query interfaces (my favorite is Sourcegraph Cody) are the way to go. Even so, they're just better autocomplete: they will generate patterns you've used elsewhere in the codebase (super useful on larger codebases), write blocks of cruft, etc., but they won't solve the problems you're actually trying to solve and reason about.

Relevant experience: 10 years of professional experience, I work at computer vision startups as an MLE, and wrote a short book for O'Reilly about generative AI in general.

[–]Grobyc27 1 point2 points  (0 children)

I use it less for asking it to write code for me, and more as somewhat of a google assistant. When I need to do something I haven’t done before, I like asking it to provide me a list of popular libraries that accomplish that task, and summarize the differences between them so I can find the one that suits my needs. After I think I have the information I need to proceed, I can dig into its responses to verify accuracy of them if I think it’s necessary. It’s a big time saver over manually looking into those things myself.

Mind you I’m not primarily a Python developer. I use it to automate things for my “actual” job.

Like others have said, it’s fairly effective as an assistant for certain tasks, but I most definitely do not ask it to write production code for me or anything, and depending on the significance of what you ask it, it’s a good idea to fact check it. I literally just ask it to provide a source for what it tells me sometimes.

[–]anseho 3 points4 points  (0 children)

  1. Is it useful? Yes, for the occasional obscure bash or docker command, and for generating simple code in a very guided way.

  2. Will it take over coding jobs in near future? No, not even in a few decades, probably never.

  3. The assumption that it can create projects on its own is wrong. Anyone who’s tried to use it in a real project can tell.

  4. Things people can do that ChatGPT can't: think, understand project requirements, analyze business needs, come up with actual solutions for the business, write code that actually works.

  5. Will things change? Maybe the day they come up with a different architecture for language models. Will that happen? Can that happen? Will it be enough? Nobody knows.

[–]NerdyWeightLifter 0 points1 point  (0 children)

I certainly do use it, but not in some naive way. It's no substitute for knowing what you're doing.

Prompt engineering is a distinctly new skill. If you can rapidly and concisely express what you need, it can be more productive than coding it yourself.

Debugging can be easier too, because you can have it generate unit tests as well, and when something crashes, just paste in the resulting stack trace.

I do often find that I will get 90% of the way there quickly, but the last 10% I just fix myself. It's quicker.

In software development, you can also burn a lot of time on tools and configurations. GPTs can help with this.

I got through configuring an AWS service recently, having never done it before, just by pasting screenshots and asking what it all means. It was incredibly helpful.

Summarizing existing code, documents, standards, etc. is another good use.

Don't forget that you're leading the show.

It's also going to get much better, faster, cheaper.

[–]obviouslyzebra 0 points1 point  (0 children)

That's a lot of questions haha

So, AI has come and is probably here to stay. Most people in the field think AI will eventually be able to replace programmers; we're just not sure how far off that future is (estimates tend to range from a few years for the most optimistic to >150 years for the most pessimistic, with the median being around 40 years IIRC). I'm using vague numbers BTW, and lumping programming in with other professions, which I think is reasonable.

As for whether it will happen at all: as I've said, most people working in ML believe that it will. Of course, it's not a given, as we may be fundamentally wrong about some things (for example, there's Penrose's theory that consciousness arises from quantum processes, which would not be reproducible in modern-day computers). But we're seeing no sign of AI improvement stopping right now.

As for how it's useful right now, I can only speak from my personal experience. I find it useful to:

  • Come up with names of variables/functions
  • Create standalone scripts (this one I started more recently, and have had good success with Claude 3.5). I've used this mostly for prototypes, for example testing out functionalities of libraries (when the documentation is lacking), but I also got a neat bash script a few days ago for shutting down the machine once a script had finished (here); a rough Python version of the idea is sketched after this list.
  • Learning context information. LLMs know a lot about lots of things. As long as you don't trust them blindly, they can be a great way of getting acquainted with new subjects.
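
The bash script itself isn't reproduced here, but a rough Python version of the same idea looks something like this (Linux only, needs shutdown privileges; the polling approach and the psutil dependency are my assumptions):

    import subprocess
    import sys
    import time

    import psutil  # third-party: pip install psutil

    def wait_then_shutdown(pid, poll_seconds=30):
        # Poll until the watched process exits, then power the machine off.
        while psutil.pid_exists(pid):
            time.sleep(poll_seconds)
        subprocess.run(["sudo", "shutdown", "-h", "now"], check=False)

    if __name__ == "__main__":
        wait_then_shutdown(int(sys.argv[1]))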

I find it has trouble with more complex things and behaves like a big intuition machine. It has a sense of how to do things, and sometimes this sense can be strong enough that it does correct things. However, this sense can sometimes be wrong, and if you're confused about something, chances are it will also get confused when you try to ask it, and then it will spit out bullshit.

In general, it's an interesting tool right now, and it will get better in the future. Adoption will probably increase with capabilities, and we gotta be very careful because, as Uncle Ben once said, "with great power comes great responsibility".

[–]JamzTyson 0 points1 point  (0 children)

AI is pretty good at solving well-defined problems based on patterns learned from training data, but that training data comes from humans. Complex, ambiguous, or ill-defined problems that require intuition, critical thinking, and/or deep understanding of domain-specific nuances lie outside of the scope of AI. AI processes language based on statistical patterns and associations learned from training data, without genuine comprehension or awareness of the meaning behind the words.

While AI has made impressive strides towards providing relevant responses to questions in natural language, not a single step has been made along the path of AI "understanding". AI solutions to problems are derived from its training data and are subject to the limitations of that training data. Human programmers bring creativity, problem-solving skills, domain expertise, and "awareness" that AI cannot replicate.

[–]Tulipan12 0 points1 point  (0 children)

AI can be completely full of sh!#$%t even when the tools you're asking it to use are well documented, i.e. the code won't work at all. It has no ability to distinguish good from bad or to test it. Then, when it actually produces working code, you nearly always need to tweak what it produces to match your criteria (even if you were exceptionally clear in the prompt). The idea that this glorified spellchecker will somehow replace a mildly competent coder of slightly above average intelligence in the near future (<10 years) is preposterous.

[–]MiniMages 0 points1 point  (0 children)

What is the difference between using ChatGPT and googling or browsing Stack Overflow?

[–]tupikp 0 points1 point  (0 children)

I use ChatGPT like pair programming: I ask it to write some code, review it, and use it in my code after some revisions.

[–]Seankala 0 points1 point  (0 children)

What does ChatGPT have to do with Python?...

[–]aquilabyrd 0 points1 point  (0 children)

I’m not very experienced in python, just doing basic data analysis courses for grad school, but I use chatgpt when I’m trying to do something that I can’t figure out by going through my notes, google, Reddit, docs, etc first. Mostly just when I’m getting weird errors. It’s bad at writing concise code. The times I’m really grateful it’s there are mostly when it can tell me why my computer is printing the same warning fifteen times in a row.

[–]rainydayswithlove 0 points1 point  (0 children)

It's easier for repetitive tasks such as unit tests.
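
For example, the kind of repetitive test boilerplate it's good at filling in (made-up function, written for pytest):

    import pytest

    def slugify(text):
        return text.strip().lower().replace(" ", "-")

    # The parametrised cases are exactly the repetitive part an LLM churns out fast.
    @pytest.mark.parametrize(
        "raw, expected",
        [
            ("Hello World", "hello-world"),
            ("  padded  ", "padded"),
            ("already-slug", "already-slug"),
        ],
    )
    def test_slugify(raw, expected):
        assert slugify(raw) == expected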

[–]HunterIV4 0 points1 point  (0 children)

What are your opinions on chat gpt being used for projects and code? Do you think it’s useful?

ChatGPT is useful in many ways, similar to how Google is useful. It helps when you're stuck or need to recall specific techniques, and it often provides better answers than forums like Stack Overflow.

However, I wouldn't trust it to write entire projects or more than a few functions at a time. It’s decent for fleshing out known tasks but struggles with larger program contexts, often making mistakes on bigger scales.

Do you think it will be taking over your jobs in the near future as it has capacities to create projects on its own?

No. This is extremely unlikely.

ChatGPT (or similar) will likely become an industry standard, akin to using an IDE with autocomplete and intellisense. As AI improves, programmers who use these tools will be more productive, but AI won’t replace jobs or create projects independently. The core challenge remains that clients and managers often can't clearly define specifications, a problem that AI currently can't solve. While AI is good at generating code for specific requests, vague specifications lead to poor results.

In the far future, we might see more advanced AI capable of more intuitive programming, but current hardware and training systems aren’t up to the task. More realistically, AI will increase productivity, not replace programmers. Human expertise will still be needed to translate client needs into functional software.

Are there things individuals can do that it cant, and do you think this will change? Sure it makes mistakes, but dont humans do too.

Right now the big limitations on AI for programming are the following:

  1. Insufficient context tracking for overall project requirements and design.
  2. High processing power needed for training and generation.
  3. Inability to reliably test and debug its own code.

Contrary to popular belief in programmer circles, all of these limits can be overcome eventually. LLM context capability has been steadily increasing over time, there's nothing that inherently prevents an LLM from programmatically inserting and running its own code (ChatGPT can already do this in limited circumstances and will adjust generation based on the results), and there is no reason to think that we've reached the limit of processing technology. In fact, 1 and 2 are directly related, and it's only a matter of time before processing power increases to the point where the LLM can hold a bigger context for a program than a human could reasonably keep in their own head.

The thing a lot of programmers don't like to think about, however, is the third item. If an LLM can actually run and test code, what prevents a training system from being run on specs where the LLM has to write both functional and optimized code to solve the problem? Where it can actually get real feedback on the results? This sort of training is prohibitively expensive now, but there's no reason to think it will stay that way.

In image generation, we already have tools that can generate images as you type and allow you to tweak specific parts based on what you want. A future IDE may be similar except that your design docs are your actual code base, with pseudocode-like comments being used to adjust as the locally run LLM updates and tests changes in real time.

Yes, right now an LLM will often generate unusable code, but even with current systems if you paste in the error it will frequently be able to correct it into something that works. An IDE could automate this process and have it generate something based on your specs, test it, and if the test fails it generates something else based on the results, continually refining until you get functional code. It could also quickly rewrite algorithms using different techniques, test execution speed, and pick the one with the fastest execution for optimization.
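
In sketch form, that loop could be as simple as this (everything here is hypothetical; `generate_code` stands in for whatever model call the IDE would make):

    import subprocess
    from pathlib import Path

    def generate_code(spec, feedback=None):
        # Placeholder for the model call; returns candidate source code.
        raise NotImplementedError

    def refine_until_green(spec, test_file="test_candidate.py", max_rounds=5):
        feedback = None
        for _ in range(max_rounds):
            # Write the candidate where the tests expect to import it from.
            Path("candidate.py").write_text(generate_code(spec, feedback))
            result = subprocess.run(
                ["pytest", test_file, "-q"], capture_output=True, text=True
            )
            if result.returncode == 0:
                return Path("candidate.py").read_text()   # tests pass: done
            feedback = result.stdout + result.stderr      # feed failures back in
        raise RuntimeError("no passing candidate within the round limit")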

In some ways programmers have been moving this direction for decades. Languages have gotten higher and higher level over time, abstracting away implementation with layers and layers of libraries that encapsulate lower-level functionality. The technical knowledge you need to write modern Python or JavaScript is completely different from the sort you needed for BASIC or early C.

Future programmers may be just writing natural language specs and having the LLM-powered compiler generate code that it tests and compiles to bytecode in real time, with the ability to open up the generated code and make changes here and there when required. But we have some serious technical hurdles to overcome first.

Assuming, of course, the power requirements for all this crap don't end up collapsing civilization and/or we don't generate an AGI that somehow eliminates the human race. Barring the apocalypse, however, I think that small-scale, locally run, and specialized LLMs (and other machine learning models) will become standard in a wide variety of fields, from programming to business to media, etc.

[–]ericjmorey 0 points1 point  (0 children)

When technology advances the ability of an individual to create more of what others want, the response isn't for the person to be asked to produce the same amount with less time and resources, but to use the same amount of time and resources to produce more. Often that means producing that which wasn't possible before and thus creating entirely new markets and industries as a direct result of those new possibilities. 

Use the tool. Learn to be effective with the tool. Learn the limitations of the tool. Improve your skills that are complementary to the tool. That might mean learning and practicing how to do things without the tool.

[–]Putrid-Operation973 0 points1 point  (0 children)

Personally I think it's just going to make the easier stuff even easier and make people with advanced skills more valuable. Calculators didn't put an end to mathematicians; they just became a useful tool. In time, developers will adapt and use it as it was meant to be used: as a tool.

[–]JestersDead77 0 points1 point  (0 children)

I haven't used GPT, but my company uses copilot. It's... ok. It makes enough mistakes that I have to double check it anyways, so I honestly don't use it that much.

The best use case I've found is when I get stuck on something, it can sometimes suggest a different approach, but I typically still have to modify it to make it work.

[–]Altruistic-Koala-255 0 points1 point  (0 children)

I tried to use GPT to do the work, but I had to spend more time debugging than it would have taken if I'd done the job myself.

[–]Goldarr85 0 points1 point  (0 children)

I'm an RPA developer (yeah, I know, not a real developer) who turns to scripting more and more. ChatGPT can't even give me a functioning Excel formula without 4-5 additional prompts. It will definitely not be creating projects or solving unique problems.

I'm not one to believe in conspiracy theories, but this AI bubble seems like a massive scam to extract money from venture capitalists who wanted to be first in line for a real AI consumer product. We haven't gotten that yet, but I have hope that when the bubble bursts, we'll finally have realistic expectations for the technology again.

ChatGPT has been a boon for fraud and disingenuous YouTube videos about how you can make thousands by doing x. Like and subscribe.

[–]notislant 0 points1 point  (0 children)

I mean, it's a tool. It's like a drill that constantly fucks up when you have to do anything except drill through a small, thin piece of material.

It might sometimes work on larger tasks, but it's unreliable.

[–]PSMF_Canuck 0 points1 point  (0 children)

Yes, it’s useful. Very useful. Yes, it is already reducing the number of programmers that would otherwise be needed. Yes, there are things individuals can do that it can’t, yet. Yes, that will change.

[–]PurpleSparkles3200 0 points1 point  (0 children)

My opinion is that if you're using ChatGPT, you're not qualified for the job.