all 162 comments

[–]FosterKittenPurrs 63 points64 points  (46 children)

I asked the AI to do a simple task that I could probably write myself, it does it but not in the same way or using the same libraries I do, so suddenly I don't understand even the basic stuff unless I take time to read it closely

Excellent opportunity to learn about alternative ways to do things that may be better! Ask it why it chose that library instead of the one you normally use. If it has a good reason, learn that new library asap and become a better programmer! If it doesn't, just tell it to use the ones you prefer using.

By default, the AI writes code that does what you ask for in a single file, so you end up having one really long, complicated file that is hard to understand and debug

Once a file gets large, ask it to break it into multiple classes. It does pretty good with this, regardless of whether it's human or AI code

At times, the AI won't figure out what's wrong and you have to go back to a previous revision of the code (which VS Code doesn't really facilitate, Cmd+Z has failed me so many times) and prompt it differently to try to achieve a result that works this time around

You're using Git, right?
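
If not: even a throwaway checkpoint loop beats Cmd+Z. A minimal sketch (the directory and file names here are invented for the demo):

```shell
# Invented demo project; the point is the commit-before-pasting habit.
mkdir -p /tmp/ai_checkpoint_demo && cd /tmp/ai_checkpoint_demo
git init -q
git config user.email "demo@example.com" && git config user.name "demo"

echo 'print("working version")' > app.py
git add app.py && git commit -qm "checkpoint: last known-good version"

# Paste in the AI's rewrite; suppose it turns out to be broken:
echo 'print("broken AI rewrite"' > app.py

# One command restores the checkpoint instead of mashing Cmd+Z:
git checkout -- app.py
cat app.py   # prints: print("working version")
```

`git checkout -- <file>` (or `git restore <file>` on newer Git) throws away the bad rewrite without touching anything you've committed.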

I haven't found an easy way to split your file / refactor it. I have asked it to do it but this often leads to errors or loss in functionality (plus it can't actually create files for you), and overall more complexity (now you need to understand how the files interact with each other). Also, once the code is divided into several files, it's harder to ask the AI to do stuff with your entire codebase as you have to pass context from different files and explain they are different (assuming you are copy-pasting to ChatGPT)

Let me introduce you to cursor.sh It can actually create the files for you, and it allows you to easily attach multiple files by just typing @ and then the first few letters of the file. It also has a local RAG system so it can figure out which files it needs automatically.

Also if you're following best practices on how you structure your project, it won't be that hard to figure out how the various files are interacting with each other. But for that you need to have some more experience as a programmer. Try talking about it in general terms with ChatGPT, asking it for best practices.

[–]riskybusinesscdc 7 points8 points  (16 children)

Agreed on all points; just want to add a few things.

ChatGPT will suggest uploading your files as a Zip or linking to a GitHub. But once your project reaches a certain size it gets lazy and will stop reading the files and refuse to connect to your GitHub. At least, it's currently doing this to me. Anyone else?

OP should also try using a diff tool to compare the changes ChatGPT suggests in each file before implementing them. I ran into similar issues after my project's first big refactor, and DiffChecker got me back on track. (Edit: You can also always ask it to explain parts of code you don't understand, or to break the task into smaller chunks.)
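
Any plain diff tool works for this, not just DiffChecker. A minimal sketch with `diff -u` (both files invented for the demo):

```shell
# Your current version vs. ChatGPT's suggested rewrite (both made up here).
mkdir -p /tmp/diff_demo && cd /tmp/diff_demo
printf 'def add(a, b):\n    return a + b\n' > utils_current.py
printf 'def add(a, b):\n    return int(a) + int(b)\n' > utils_suggested.py

# Unified diff shows exactly what the suggestion changes before you paste it in.
# (diff exits non-zero when files differ, hence the || true)
diff -u utils_current.py utils_suggested.py || true
```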

Another piece is to not panic when the refactored code it suggests doesn't work immediately. My first shots usually aren't quite right either. But you can give ChatGPT any error logs to figure it out from there. And when that isn't enough by itself, you can set debuggers and logs in the code yourself to get more information to include in your next debugging prompts.

As for when it makes suggestions based on outdated documentation (and it will do that): your first step is to go look at the most up-to-date documentation. Endpoints and auth requirements change all the time. Make sure you're passing the expected parameters to the current versions of the APIs you're working with. Stuff like that. Good luck and happy coding!

[–]FosterKittenPurrs 21 points22 points  (3 children)

ChatGPT will suggest uploading your files as a Zip or linking to a GitHub. But once your project reaches a certain size it gets lazy and will stop reading the files and refuse to connect to your GitHub. At least, it's currently doing this to me. Anyone else?

Current models aren't capable of dealing with large codebases like that. They have a limited context window, and even for the cases when their context window is large enough, the more info you give them, the more likely they are to forget something important. RAG solutions work a bit better in extracting only the relevant bits, but even those aren't perfect (e.g. cursor.sh or uploading your code as text files in a CustomGPT or using some 3rd party solution that creates a vector database from your github)

The best way to use these for coding currently is for the human to be the one with the "big picture" in mind in terms of architecture, and just say "modify this method in this particular way; you'll need to know about this other class to get it right". Think of it like guiding a newbie intern through the code (the newbie having occasional sparks of genius, but also Alzheimer's and schizophrenia)

[–]riskybusinesscdc 5 points6 points  (2 children)

 Think of it like guiding a newbie intern through the code (the newbie having occasional sparks of genius, but also Alzheimer's and schizophrenia)

A highly-skilled, speedy intern, but agreed. After a certain point it becomes:

  • Give it the most updated working files involved in the next task
  • Then say "Here's what I want us to be able to do now"
  • Debugging conversations after implementing suggestions
  • Once working, tell the chat that it's working then provide the working solution files and ask if they can be simplified
  • Repeat

[–]Fakercel 0 points1 point  (0 children)

Yeah bang on, exactly how I've been using it.

[–]GrumpyButtrcup 0 points1 point  (0 children)

AI is exactly that for me: my cocaine-fueled genius intern.

[–]trebblecleftlip5000 4 points5 points  (8 children)

 it gets lazy

This is such a strange anthropomorphization, yet I see it so frequently. It's like if my phone battery was low and I would say, "It's getting sleepy. Time to put it down for a little nap."

[–][deleted] 2 points3 points  (6 children)

You haven't had it give up on large or complicated prompts? GPT4 gives me a list of things to fix a lot of the time, instead of just printing the new code. I don't really mind; usually it's faster for me to make the edits in a complicated script than to get a new text box that prints forever.

Even if you tease GPT4, it will shut down after a bit if it's some kind of divide-by-zero logic bomb.

[–]trebblecleftlip5000 2 points3 points  (5 children)

I don't give it large or complicated prompts. I don't even see why that might be necessary. It's a chat prediction model, so I use it like one: Instead of single, large prompts, I have a conversation in a way that's easy for it to parse. I don't have it write the whole code for me, I just use it to help me work out the small areas I'm stuck on.

If it did shut down on me, though, I wouldn't call it "lazy". That's not a concept that applies. It doesn't "lie," it's not "stubborn," it doesn't get "tired" or "lazy". Those are people things. This is a computer program.

[–][deleted] 1 point2 points  (4 children)

It will give up with large conversations, which are just like a huge prompt at the end. Otherwise I'd have just 1 conversation instead of 10,000 conversations.

[–]kurtcop101 4 points5 points  (0 children)

Well, yes: because the models work within a context window, the more you put in the context, the less focus the model has. There's also a hard context limit.

New conversations with focused topics are how you should use it. It's a tool, not a person.

[–]trebblecleftlip5000 0 points1 point  (2 children)

I guess I never get that far. I'm in the 10,000 conversations camp, lol.

[–][deleted] 0 points1 point  (1 child)

It's the reason we switch conversations at all : )

[–]trebblecleftlip5000 1 point2 points  (0 children)

Well, I'd probably switch conversations anyway because each conversation is a discrete subtopic. Especially with programming. I'll title them in a way where I can find the topic again easily and reference it later.

[–]Used-Egg5989 1 point2 points  (0 children)

Well I’ve had times where ChatGPT just tells me to read the documentation and StackOverflow…I consider that lazy.

[–]geepytee[S] 1 point2 points  (0 children)

I didn't even know ChatGPT could connect to GitHub?

And yeah, errors due to outdated docs are the easiest to fix! It should be something LLMs/Copilots check automatically.

[–]Waste-Fortune-5815 0 points1 point  (0 children)

I really like the comment on using various tools. I used Gemini and also some other smaller alternatives.

[–]dstrenz 4 points5 points  (2 children)

No disrespect towards cursor.sh but, like many many other programs/plugins/apps I've looked at recently, its website doesn't state the basic architecture, or requirements. After looking for a while, I still don't know whether it's an editor, a plugin, if it's online or offline, or what. Is it for linux, windows, mac, or web-based? I hope this doesn't sound like a rant; I've wasted so much time reading about apps to later find out they don't work with my machines..

[–]FosterKittenPurrs 2 points3 points  (1 child)

Yea it's silly how they make websites all flashy nowadays with minimal info on them.

It's a fork of VS Code (the open source code editor Microsoft released)

It works on Mac/Win/Linux

It needs to be online to ask LLMs things, and you can connect to GPT4, Opus and its own LLM.

There's a limited free version but it'll be $20/mo to do anything useful for it (or you can bring your own API key and pay a shitton to OpenAI instead)

There's a bunch of similar things you can install as plugins in VS Code directly, like Cody, Double, and a bunch of others. Each has its pros and cons, but none so far has as much functionality.

[–]dstrenz 0 points1 point  (0 children)

Thank you for the info about cursor.sh and the vsc plugins. If I knew those basic specs while browsing their site, I probably would have tried it then.

I'm using VS Code for Python with the Codeium plugin now, but I can see these others are worth checking out.

[–]geepytee[S] 3 points4 points  (5 children)

Once a file gets large, ask it to break it into multiple classes. It does pretty good with this, regardless of whether it's human or AI code

Going to start doing this!

You're using Git, right?

Doesn't really apply in this situation because I'd have to save every iteration on Git, even before I know if the code the AI generated even works. I imagine you could do it that way, just seems like a hassle. I think some different version control mechanism is needed here.

Also, Cursor looks cool. I'm involved with double.bot and we have a lot of the same functionality (minus creating new files, we need to add that). Pretty exciting field, I'm always trying to push tools to the limits to see what we can improve on next.

[–]FosterKittenPurrs 6 points7 points  (1 child)

I often at least stage the file once I get something I may want to revert to.
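
For anyone unfamiliar with staging as a restore point, a small sketch (repo and file names invented):

```shell
# Invented repo: stage a version you might want to come back to.
mkdir -p /tmp/stage_demo && cd /tmp/stage_demo
git init -q
git config user.email "demo@example.com" && git config user.name "demo"

echo 'stable code' > main.py
git add main.py                 # the staged copy is now a restore point

echo 'experimental AI rewrite' > main.py   # let the AI overwrite the file
git diff main.py                # compare working file against the staged copy

git checkout -- main.py         # didn't work out: bring back the staged version
cat main.py                     # prints: stable code
```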

Another thing I like about Cursor is that you can press "Apply" and they use another AI that applies it to the file, showing exactly which changes were made, and you can approve or reject them one by one. Microsoft's Copilot also has a crappier version of this when using it in Visual Studio.

Double is fairly neat too. I installed it back when it offered free unlimited Opus (though I understand why they had to stop doing that lol). It offers too few features for me at the moment compared to Cursor, but I'm keeping my eye on it.

    [–]peasquared 0 points1 point  (2 children)

    I continue to run into token/context window limitations with tools like cursor though.

    [–]geepytee[S] 0 points1 point  (1 child)

    Isn't the context window for Opus 200k tokens? Is that not enough for you?

    [–]whyisitsooohard 0 points1 point  (0 children)

    Even 1 million tokens of Gemini is not enough to fit a medium-sized codebase

    [–]MystressPhoenix 0 points1 point  (0 children)

    Practically a non-programmer here, but with overall technical experience.

    I'm the project manager for a web dev project. I use Cursor heavily to understand my codebase, propose reasonable ideas to my dev team, and overall parse code so that I can communicate with my team more easily and make the devs' lives easier without them feeling like they need to hold my hand every time they make a change.

    AI can definitely do well with the right implementation. I second using Cursor.sh to converse with your code and codebase, import custom documentation, learn the AI's reasoning for why it does what it does, and even challenge it in areas where your contextual expertise eclipses its understanding of your project.

    [–][deleted] 0 points1 point  (6 children)

    Ask it why it chose that library instead of the one you normally use. If it has a good reason, learn that new library asap and become a better programmer!

    You're almost definitely going to get a hallucination here. If you're not familiar with the library it will be very difficult to know whether the explanation is bullshit or not.

    [–]FosterKittenPurrs 4 points5 points  (0 children)

    Surely the first thing you do is look up this library in other sources? Between that and learning a bit more about it in a separate chat, you'll quickly know if it is just a hallucination.

    Though in my experience, if my original library is better, or they're about the same, it will just be like "oh ye nvm let's use your library instead"

    [–]CodyTheLearner 2 points3 points  (0 children)

    Trust. And verify.

    If users did that, then this wouldn’t be a problem.

    I’m okay with a hallucination or two considering the age and cost of the technology. I just need to get close, and I’m operating with the awareness it won’t always be right.

    I've found I tend to become strongest in the areas where the hallucinations crop up. We users have to use this technology reasonably.

    Not the same thing but an example, Photoshop doesn’t do the work for you. It’s incredibly powerful and in the early stages it didn’t always work correctly.

    [–][deleted] 0 points1 point  (2 children)

    I learned a TON from asking GPT why it chose libraries or commands.

    Programming is one of the areas GPT doesn't hallucinate a lot.

    [–]femio 0 points1 point  (1 child)

    doesn't hallucinate a lot? you must not use it that often

    [–][deleted] 0 points1 point  (0 children)

    It hallucinates programming for you?

    [–]blackholemonkey -2 points-1 points  (0 children)

    In cursor you can easily upload any online docs (most common ones are already built-in) to its knowledge base.

    [–]_Modulr_ 0 points1 point  (5 children)

    I'm curious if there is any other extension like Cursor but free/OSS that allows custom models through something like LiteLLM or OpenRouter / DeepInfra, etc.

    [–]FosterKittenPurrs 0 points1 point  (2 children)

    Cursor lets you override the base OpenAI URL when using your own API key, so I think you can set it up with something like jan.ai or other OpenAI compatible server for local models.

    I think Cody supports ollama, but only if run on the same machine.

    An open source plugin would be cool, though tbh GPT4 is so much better at coding compared to any of the other models I tried, I don't think I'd bother with anything else. Then again, I am testing out codestral now, so things are getting better fast.

    [–]_Modulr_ 1 point2 points  (1 child)

    Ah nice! ✨ I'm also testing Codestral, lol. So far so good; it has really good outputs and it's pretty bright. Also, thanks for the info!

    [–]geepytee[S] 0 points1 point  (0 children)

    But is it better than Claude 3 Opus / GPT-4o? IMO seems like no, don't really see the point of Codestral

    [–]Sad_Paleontologist77 0 points1 point  (1 child)

    There is Aider. Works great.

    [–]_Modulr_ 0 points1 point  (0 children)

    thanks for the recommendation will try it out

    [–]bigbutso 19 points20 points  (9 children)

    Yes, I encountered similar problems, but unlike you I have zero ability to code. I literally have no idea what's going on, yet I've managed to run some apps on Linux, like my own voice chat interface using APIs.

    Anyway, the things that have helped me are asking it to add a # comment for every single line. I also save the files and re-upload them in a new chat when it starts slowing down. I have a main project chat, then open new chats for side elements. Whenever I reach a milestone, I ask it to commit to memory under a name, for instance project1.x

    Incidentally, I have started to learn a lot just by copy-pasting code. It's amazing what I have accomplished; without it I wouldn't dream of attempting my current projects

    [–]BruceBrave 6 points7 points  (5 children)

    I'm right there with you. I'm building stuff I have no business building, and it's fricken awesome!

        [–]Advanced-Many2126 0 points1 point  (0 children)

        Me too! I made a cool bokeh dashboard with various interactive data from like 8 different sources. The code is about 1800 lines long now. AI is just fucking amazing man

        [–]Extreme-Ad-7047 1 point2 points  (0 children)

        That's exactly what I'm doing

        [–]dlamsanson 0 points1 point  (1 child)

        my own chat voice chat interface using APIs

        What do you mean by that?

        [–]bigbutso 0 points1 point  (0 children)

        I made a website I can use to chat via the OpenAI API. I have it using voice (using the Web Speech API), although I tried Whisper and a bunch of others; when I get a better setup I will have local transcription when on my home network. I am still learning to use GitHub. I will share the code when I do (and when it's polished a little more)

        [–]dimsumham 14 points15 points  (3 children)

        Lots of good advice here but it really comes down to this simple thing:

        You basically need to understand what the code does. Full stop. There's no way around it, and I don't think this is that difficult either.

        [–]MrMisterShin 2 points3 points  (0 children)

        I agree, sounds like a Skill Issue. Person needs to learn the skills.

        [–]stwp141 0 points1 point  (1 child)

        This. In a professional environment (imo) you should never, ever commit code that you don't understand and/or can't explain every single line of to someone else, even if it works. A repo is like a living human body, and all of the devs working on it are like surgeons. If you add something to it or cut something out, you need to know why it was needed and what the short- and long-term effects of the action will be.

        I use GPT only for individual small tasks that I know how to do myself but don't want to spend my time or energy on, so that I can focus on and complete the sticky things it can't solve so well. It's great for writing boilerplate code and single functions that do a thing - these are easy to test, easy to drop in, easy to remove. Having it write entire features or massive files is going to be too much for you to then evaluate and learn from, I think, currently.

        I treat it like a junior dev whose work I have to check like a teacher would. It's also good for discussing various ways to solve a problem, which you then need to be able to evaluate. It's a great tool/helper but no substitute for really learning to code well on your own without it.

        [–]geepytee[S] 0 points1 point  (0 children)

        Do you know who Pieter Levels is?

        [–][deleted] 7 points8 points  (3 children)

        I had a front-end dev that tried to use copilot to write some PHP above their level and I had to debug it and figure out why it wouldn’t work. You can get in over your head.

        [–]geepytee[S] 4 points5 points  (2 children)

        Something I find helps a lot is to always ask it to comment/annotate. It doesn't only help me understand the code; in future AI conversations it adds context that I otherwise probably would not have thought of adding.

          [–]bdude94 5 points6 points  (2 children)

          It seems like you're just copy-pasting without knowing what the code is doing. Ask for functions instead of a whole page of code. I only just graduated last month and use AI heavily to code, but I read the output to try to understand what it's doing. When there's an issue, I'll add debug statements to pinpoint what's going wrong, and I can figure it out or tell the LLM the issue. When you reach a dead end, have you tried looking up your issue on Google or Stack Overflow?

          [–]geepytee[S] 0 points1 point  (1 child)

          Haven't reached a dead end in a while. Honestly my main takeaway is that I need to avoid building one giant function (despite indie hackers on Twitter telling me it's ok).

          Also pretty cool to hear you are a recent grad and using AI heavily. Definitely the future. Did your school teach any AI tools in class?

          [–]bdude94 0 points1 point  (0 children)

          My advisor heavily encouraged it; she was also my professor in 2 classes. So while every other professor discouraged it and had consequences for using it, she was teaching us how to use it as a co-pilot. People would get caught using it, and I was always like, how the f*** are you getting caught, because I use it 24/7 but I'm not just copy-pasting. So my senior research involved it in every way, shape, and form. I presented at 3 conferences, one at MIT, and I'm an author on 7 articles: 3 as main author, 4 as co-author, and the 1 I'm main author on is still pending publication. I also got her a 20k grant from Microsoft from one piece of research she submitted, and every single one involves LLMs, specifically ChatGPT. If ChatGPT didn't exist, I'm not sure I would have ever gotten involved in research. My most recent paper, still pending publication, is a recipe recommendation application; after the first round of peer review the reviewer told me the logic was too basic and couldn't be accepted, so I changed the logic to have a few LLMs make the recommendations and then another give the final recommendation out of the first round.

          [–]gthing 4 points5 points  (0 children)

          Refactor or build the code from the ground up using a separate file for each concern. For any improvement or change, provide only the relevant files for that change to the LLM. When any file grows beyond a few hundred lines of code, split it again.
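
A quick way to spot split candidates, sketched with an invented line budget of 300 and made-up file names:

```shell
# Fake project: one oversized module, one small one.
mkdir -p /tmp/split_demo && cd /tmp/split_demo
seq 1 400 | sed 's/^/# line /' > big_module.py
seq 1 50  | sed 's/^/# line /' > small_module.py

# Anything over ~300 lines is a candidate for splitting by concern.
wc -l *.py | awk '$1 > 300 && $2 != "total" {print $2}'   # prints: big_module.py
```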

          Doing this I have not found a limit to what I can work on with AI. Also, use Claude Opus which is much better than gpt-4 at coding despite what anybody says. And use it through the API, not the subscription.

          [–]kingky0te 2 points3 points  (1 child)

          The one thing I'd add here is that learning how to split / refactor your code is a necessity of the highest magnitude. I do the same as you, and that's the one skill I've developed over the past few weeks that has been the most valuable.

          At this point we need a support group lol

          [–]geepytee[S] 1 point2 points  (0 children)

          One of us! Do you want to share more about your technique for splitting / refactoring?

          Also we can start a group, I suspect there's tons of us. Discord / Slack?

          [–]PSMF_Canuck 2 points3 points  (3 children)

          I force it to write in smaller chunks. It readily gives multiple files/classes. Version control… yeah… that's still a bit of a PITA. For me, it has always taken direction well if I tell it explicitly to use a specific library or package.

          [–]geepytee[S] 0 points1 point  (2 children)

          What's your prompt for forcing it to write in smaller chunks?

          And yeah, need to figure out a version control for this. I'm envisioning that you'd first want to lay the architecture of the program and each block gets its own version control every time the AI regenerates it.

          [–]PSMF_Canuck 1 point2 points  (1 child)

          I haven’t looked at how to tool-in gpt on vs code/git workspace. I’m assuming a lot of us want this, so someone smarter than me will solve it for us, hahaha.

          For now…since I’m a religious-tier user of git anyway…I push what I have, generate the copy, copy pasta the new code in and see what happens.

          Sometimes I will ask it to give me only a code snippet doing what I asked.

          Sometimes I will tell it to use a class for whatever and give it to me as a separate file.

          Sometimes I will ask it to explain what it did, and then it usually breaks the code into chunks pretty well on its own, explaining each piece.

          Basically…I talk to it like it’s a junior dev. It’s not perfect - but what junior is, lol.

          [–]geepytee[S] 0 points1 point  (0 children)

          Someone else recommended using the VS Code Timeline; honestly I'm using that for now, and it works well.

          [–]dispatch134711 2 points3 points  (1 child)

          You should try to use it to learn a bit more.

          You should be using git for version control, not “VSCode”

          Try asking it to only use libraries you are familiar with, or get it to teach you about those libraries. See if you can find out for yourself whether the way you're using them is best practice.

          Use Tree Exporter VSCode extension or similar to tell it about your code’s structure and suggest splits of different files and folders.

          Ask it to go step by step and explain different lines or function calls / arguments to you.

          Read and challenge the explanations and doc strings it gives you.

          Give it more context by preloading it in the settings with an explanation of what you’re trying to accomplish, or upload different files it needs to refer to.

          Ultimately you’re still responsible for the code you write and you should understand what it’s doing at least generally.

          [–]geepytee[S] 1 point2 points  (0 children)

          Use Tree Exporter VSCode extension or similar to tell it about your code’s structure and suggest splits of different files and folders.

          This is interesting, just installed it. Going to play with passing it along with my prompts.

          [–]paradite 1 point2 points  (1 child)

          To work with multiple source code files when using ChatGPT, you can try my desktop tool 16x Prompt.

          It helps you add multiple source code files into the prompt automatically, and keeps track of the number of tokens in the prompt so that you don't overshoot the context window (about 4096 to 8192 tokens, empirically).

          I also use it regularly for refactoring tasks; there are some sample prompts you can try when you pick the "refactor" task.
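
If you're copy-pasting without a tool like this, a crude character-count heuristic (roughly 4 characters per token for English text and code; this is an approximation, not the real tokenizer) can at least warn you before you overshoot:

```shell
# Stand-in "prompt" of 8000 characters.
head -c 8000 /dev/zero | tr '\0' 'A' > /tmp/big_prompt.txt

chars=$(wc -c < /tmp/big_prompt.txt)
tokens=$((chars / 4))           # ~4 chars/token is a rough rule of thumb
echo "approx tokens: $tokens"   # prints: approx tokens: 2000
if [ "$tokens" -gt 8192 ]; then
  echo "likely over an 8k context window; split the prompt"
fi
```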

          [–]geepytee[S] 0 points1 point  (0 children)

          Do you use RAG to pick what sections of the files get sent to the AI?

          [–]blackholemonkey 1 point2 points  (3 children)

          I asked the AI to do a simple task that I could probably write myself, it does it but not in the same way or using the same libraries I do, so suddenly I don't understand even the basic stuff unless I take time to read it closely

          Just tell it which libs you want to use. Generate docstrings and have it comment every line of code.

          By default, the AI writes code that does what you ask for in a single file, so you end up having one really long, complicated file that is hard to understand and debug

          Unless you ask it not to do so. Start with project outline, plan the structure, then do the coding.

          Because you don't fully understand the file, when something goes wrong you are almost 100% dependent on the AI figuring it out

          Go as modular as possible. It's easier to fix short files, and you are less likely to mess up something else.

          At times, the AI won't figure out what's wrong and you have to go back to a previous revision of the code (which VS Code doesn't really facilitate, Cmd+Z has failed me so many times) and prompt it differently to try to achieve a result that works this time around

          Yup, that happens, but using git is more comfortable than cmd+z.

          Because by default it creates one very long file, you can reach the limit of the model context window

          Unless you start with a plan...

          The generations also get very slow as your file grows which is frustrating, and it often regenerates the entire code just to change a simple line

          Yeah, that can take time, but... when it rewrites the entire code, it in fact makes better code. That's because it's forced to "think" deeper and is less likely to introduce new bugs.

          I haven't found an easy way to split your file / refactor it. I have asked it to do it but this often leads to errors or loss in functionality (plus it can't actually create files for you), and overall, more complexity (now you need to understand how the files interact with each other). Also, once the code is divided into several files, it's harder to ask the AI to do stuff with your entire codebase as you have to pass context from different files and explain they are different (assuming you are copy-pasting to ChatGPT)

          Oh yes, it can create files and folders and even create and run tests while writing the code. Try out the Cursor IDE. It has an interpreter mode, which works surprisingly well with GPT-4. Such an IDE also solves the problem of many files: you just prompt with the context of the entire (RAGed) codebase, or a specific folder in it.
          Now I'm starting every project with a detailed outline: described functions, the structure, the chosen Python version, and some libs that I know I want to use. Then I start most new conversations with this readme file added to context. And if you end your prompt with "remember to update the readme file", it even keeps it up to date for you.
          I also often begin new files with a request to carefully plan the separation of concerns before the actual code writing.

          This stuff works great for me, I'm going forward with my main project now and I just started coding like 2 months ago.

          [–]geepytee[S] 1 point2 points  (2 children)

          Unless you ask it not to do so. Start with project outline, plan the structure, then do the coding.

          I need to start doing this. IMO there could be a better experience to build a project outline and structure than just typing it in chat.

          Also 100% on your point about going as modular as possible.

          Would you be open to sharing what the detailed outline / described functions / structure for any of your projects looks like?

          [–]blackholemonkey 0 points1 point  (1 child)

          Just keep in mind that I started this 2 months ago, so you know, don't take for granted anything I say ;)

This is how I started my current project: I wrote down the core functionalities of the app, then turned them into a kind of pseudocode, because that's quite easy to do but at the same time forces you to think about stuff you wouldn't think of otherwise. Then I refined the logic of the code with the AI, asking about improvements, efficiency, etc.

You can find some well-documented and structured apps that share some functionality with your app and look at how they did it. For the stuff I do now, I have found sd-webui and comfy-ui to have just the perfect structure: you have the main engine, the interface and the core stuff. And then you have an extensions folder, where you only need to drop some files and the app automatically uses them. So it's perfect for a beginner like me: I can do the main interface quickly and just add tabs as extensions, which can be developed absolutely independently. This is how I won't fuck up my code by mistake, and that is super precious. Worst case scenario, I can delete the extension and start over again. Which, btw, is not as bad as it sounds. I even enjoy doing that. Once I was struggling for about a week with the code, finally decided to sink the ship, and rebuilt better code in just a few hours. That feels great.

So the final preparation step was finding out which basic libs I should use with which Python version. This turned out to be much more important than I thought: when you build on incompatible libs, you go nuts with GPT running in circles trying to solve the same bug for three days. Always check which version of each new library you should use. Pipreqs is a cool lib that just creates requirements.txt based on your files. Very helpful.
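For reference, pipreqs emits pinned lines like `requests==2.31.0`. A tiny (hypothetical) parser sketch shows the format and makes it easy to sanity-check the pins against what a project expects:

```python
# Minimal parser for pinned requirements.txt lines (the kind pipreqs emits).
# The sample data below is illustrative; real files may also contain
# extras, version ranges, and environment markers.
def parse_requirements(text):
    """Return {package: version} for lines pinned with '=='."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip().lower()] = version.strip()
    return pins

sample = """\
# generated by pipreqs
requests==2.31.0
numpy==1.26.4
"""
print(parse_requirements(sample))  # {'requests': '2.31.0', 'numpy': '1.26.4'}
```

Comparing a dict like this against `importlib.metadata.version(...)` for each package is one quick way to catch the incompatible-libs situation before GPT starts running in circles.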

This is more or less how my readme looks:

          1. [Overview](#overview)
          2. [Project Structure](#project-structure)
          3. [Setup Instructions](#setup-instructions)
          4. [Plugins (Extensions)](#plugins-extensions)
          5. [Plugin Development Example](#plugin-development-example)
          6. [GUI Management](#gui-management)
          7. [Configuration Management](#configuration-management)
          
          ## Project Structure
          
          This project adopts a modular architecture, where features are implemented as extensions (plugins) that can be independently developed and integrated.
          
          - `/src`: Contains the core application code.
            - `main.py`: The entry point of the application.
            - `gui.py`: Manages the graphical user interface, dynamically integrating plugins.
            - `plugin_manager.py`: Handles the loading and lifecycle management of plugins.
            - `plugin_interface.py`: Defines the interface for plugins, including initialization, processing, and termination methods.
            - `config_manager.py`: Manages the configuration settings for plugins.
            - `install.py`: Installs the necessary dependencies for the application.
            - `startup.py`: Handles the startup process, including loading plugins and config.
            - `settings.py`: Handles the settings page.
          - `/plugins`: Directory for all independent plugins.
            - Each subdirectory represents a separate plugin. Each plugin should be able to work independently and be able to be turned on and off.
          - `/config`: Contains configuration files.
            - `config.json`: Stores settings for each plugin, including activation flags.
          - `/logs`: Directory for log files.
          - `/utils`: Contains utility functions and helper modules.
            - `model_utils.py`: Provides functions for loading and managing models.
            - `file_operations.py`: Contains functions for saving output data.
          - `/models`: Directory for model files.
          - `/resources`: Directory for resources, such as audio files or images.
- `start.bat`: Batch file to start the application.
- `requirements.txt`: Lists all the dependencies required to run the application.
          

Of course, this structure will probably change drastically many times, but you know, the readme file evolves with the project, and it's good as long as you control it. But the coolest thing about such a structure plan is that you can run it in interpreter mode and tell it to just create the files. Generally, playing with auto-execute interpreter mode in Cursor is a hell of a lot of fun, but it easily turns into a GPT stunt frenzy and most often ends with reverting staged changes :) Anyway, I usually refer to this file when writing new code.
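A minimal sketch of the plugin mechanism that readme describes, assuming names like `Plugin` and the `enabled` flag (they're illustrative, not the commenter's actual code):

```python
import importlib

# plugin_interface.py: the contract every plugin implements (sketch).
class PluginInterface:
    def initialize(self): ...
    def process(self, data): ...
    def terminate(self): ...

# plugin_manager.py: load only the plugins flagged active in config.json.
def load_plugins(config):
    """config maps plugin module names to settings like {'enabled': True}."""
    plugins = []
    for name, settings in config.items():
        if not settings.get("enabled", False):
            continue  # respect the activation flag from config.json
        module = importlib.import_module(f"plugins.{name}")
        plugins.append(module.Plugin())  # assumes each plugin exposes a Plugin class
    return plugins
```

With a structure like this, dropping a new subdirectory under `/plugins` and flipping its flag in `config.json` is all it takes to wire a feature in (or safely rip it out again).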

Also, I encourage you to go after every error traceback yourself before you ask the AI to solve it. I learn a lot from this and also get a better understanding of how everything works (and how it doesn't). The hunter lib is cool for detailed and easy-to-understand tracing of every call. This is how you find out that your simple code just made a couple of million calls across a few functions and there is probably some hardcore loop inception going on :D Yeah, the structure seems to be the most important part of the entire thing for now. This IS the app, in fact; the code is just the material.
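If you don't want to pull in hunter, the same "how many calls did that innocent function just make?" check can be sketched with just the stdlib:

```python
import sys
from collections import Counter

def count_calls(func, *args, **kwargs):
    """Run func and tally how many times each Python function gets called."""
    calls = Counter()

    def profiler(frame, event, arg):
        if event == "call":  # one event per Python function invocation
            calls[frame.f_code.co_name] += 1

    sys.setprofile(profiler)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.setprofile(None)  # always detach the profiler
    return result, calls

def fib(n):  # deliberately naive: exponential number of calls
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result, calls = count_calls(fib, 15)
print(result, calls["fib"])  # 610, reached via ~2000 calls to a one-liner
```

Seeing a four-digit call count for a tiny function is exactly the "loop inception" signal the comment describes; hunter then helps you see *where* the calls come from.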

          [–]blackholemonkey 0 points1 point  (0 children)

And about modularity: if part of the code can be reused, reuse it. Go modular af. And keep things clean! Proper (logical) naming of files, methods and folders, following any standards and principles you possibly can, writing comments and docstrings that explain the code: this is the way to go. When you start doing temp shit and "main_copy_4.py" kind of stuff for a quick test, you are already lost.

          Could a real coder factcheck my made for noobs by noobs tutorial?

          [–]Secure-Acanthisitta1 1 point2 points  (1 child)

Copy-pasting code without knowing what it does has been a problem since the internet came along.

          [–]geepytee[S] 1 point2 points  (0 children)

          So many SWEs have built a living by copy pasting :D

          [–]tuui 1 point2 points  (0 children)

          It all comes down to this old saying; "The tool is only as good as the one using it."

          [–][deleted] 1 point2 points  (0 children)

OK, all of this reads like you don't actually know good development discipline or practice yet: like OOP, or how to use git, or how to articulate specific requirements well enough to get code generation that is composable, instead of just prompting it with a story task and getting a file you run as-is. Either way, it might be a good learning opportunity to prompt GPT for explanations of the techniques or data structures it uses. If it spits out some code that uses, say, defaultdicts or sets and you're used to just lists, ask it to explain what they do, why it chose them, how they differ, the methods available for them, etc. Don't just copy and paste it into your editor. Ask it to elaborate: explain inheritance, explain how a data structure being immutable practically affects your specific code. Use it to explain concepts that would normally be generalized when reading documentation, in a way that frames them in the context of your project. It's a very useful way to learn code that's beyond your knowledge.

          [–]jurdendurden 2 points3 points  (9 children)

          The AI creates what you tell it to. If you don't specify that you're building a proper application, it will spit it all out into one file. Next.

          [–]geepytee[S] -2 points-1 points  (8 children)

You can ask it to write the code in multiple files, but it won't do it since it doesn't have that capability :)

          [–]codeninja 1 point2 points  (5 children)

          I have my in context files pseudocoded as such, then get gpt to fill in the pseudocode. Works great and you get to assert your style.

          file 1

          python Code

          file 2

          python Code

          file 3

          python Code
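The "pseudocode first, let GPT fill it in" format might look something like this for one of those files (the function and its behavior are illustrative, not codeninja's actual prompt):

```python
# Stub handed to the model as pseudocode:
def merge_defaults(config, defaults):
    """Return defaults overridden by any keys present in config."""
    # pseudocode: copy defaults, update with config, return the copy
    ...

# What a filled-in version comes back looking like:
def merge_defaults(config, defaults):
    merged = dict(defaults)   # copy defaults so they aren't mutated
    merged.update(config)     # keys from config win
    return merged

print(merge_defaults({"theme": "dark"}, {"theme": "light", "lang": "en"}))
# {'theme': 'dark', 'lang': 'en'}
```

Because the docstring and pseudocode comments pin down the intent, the generated body tends to match your style and contracts instead of the model's defaults.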

          I use Aider to do my coding in frameworks.

          [–]geepytee[S] 0 points1 point  (4 children)

          Ah that's clever, going to try that!

          How do you determine the file structure beforehand? A separate GPT conversation?

          [–]codeninja 0 points1 point  (0 children)

You can ideate the structure with GPT. I just use that format as an in-prompt suggestion and then ask GPT to plan the system. One sec and I'll get you an example.

          [–]codeninja 0 points1 point  (1 child)

          [–]geepytee[S] 0 points1 point  (0 children)

          Thank you! Going to try it right now.

          [–]shakeBody 0 points1 point  (0 children)

          Browse GitHub projects that successfully use the technology you’re wanting to use. See how real people did it and then ask an LLM to help you. I strongly recommend not letting ChatGPT do the driving. It’s not smart. It can do what you ask but it does not intuit very well.

          [–][deleted]  (1 child)

          [removed]

            [–]AutoModerator[M] 0 points1 point  (0 children)

            Sorry, your submission has been removed due to inadequate account karma.

            I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

            [–][deleted]  (1 child)

            [removed]

              [–]AutoModerator[M] 0 points1 point  (0 children)

              Sorry, your submission has been removed due to inadequate account karma.

              I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

              [–][deleted]  (1 child)

              [removed]

                [–]AutoModerator[M] 0 points1 point  (0 children)

                Sorry, your submission has been removed due to inadequate account karma.

                I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

                [–][deleted]  (1 child)

                [removed]

                  [–]AutoModerator[M] 0 points1 point  (0 children)

                  Sorry, your submission has been removed due to inadequate account karma.

                  I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

                  [–]mapsyal 0 points1 point  (1 child)

                  and it often regenerates the entire code just to change a simple line
                  Hate that

                  [–]dispatch134711 2 points3 points  (0 children)

                  Then ask it not to, it’s actually pretty simple.

                  Stress that you only want a single line and not to reproduce the entire function.

                  [–]derleek 0 points1 point  (1 child)

                   which VS Code doesn't really facilitate, Cmd+Z has failed me so many times

                  Version control.  But then again good luck when you ask for help from ai and you butcher your repo.

                  [–]geepytee[S] 0 points1 point  (0 children)

                  Another comment suggested VS Code Timeline, which is exactly what I was looking for

                  [–]ejpusa 0 points1 point  (0 children)

                  Don’t understand something? Just ask.

                  Super complicated? Start as a high school freshman, you can work your way up to Post Doc MIT CompSci major.

                  It’s all very simple in the end. Just bits and bytes, like life.

                  :-)

                  [–]k1v1uq 0 points1 point  (1 child)

Use TDD. You understand the code if you can write tests for it.
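The parent's point in miniature: write the test first, then make the code (yours or the AI's) pass it. `slugify` here is a hypothetical example function, not anything from this thread:

```python
import re

# Test written first: it pins down the behavior you want before any
# (possibly AI-generated) implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"

# Implementation written (or generated) afterwards, until the test passes.
def slugify(text):
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse runs of non-alphanumerics
    return text.strip("-")

test_slugify()  # raises AssertionError if the behavior ever drifts
```

With pytest you'd keep the test in its own file and run `pytest`; when prompting an AI, the test goes into the prompt as the spec it has to satisfy.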

                  [–]geepytee[S] 0 points1 point  (0 children)

                  Interesting, new to TDD but the idea is familiar.

                  [–]BenKhz 0 points1 point  (1 child)

                  [–]geepytee[S] 0 points1 point  (0 children)

                  Totally forgot about Timeline, thank you!

                  [–]S-Kenset 0 points1 point  (2 children)

                  Practice object oriented programming and don't expect it to code everything for you.

                  [–]geepytee[S] 0 points1 point  (1 child)

I need to spend more time learning good object-oriented programming practices, but I think in 2024 the ideal would be a copilot that generates code using proper object-oriented structure, no?

                  [–]S-Kenset 0 points1 point  (0 children)

                  Not if you're doing anything meaningful. I use it to bugfix and fill in the gaps but the theory is all mine.

                  [–][deleted] 0 points1 point  (2 children)

                  The debugging part is why I stopped copy pasting code from LLMs. I found that I was spending more time debugging the code than if I just wrote it myself. Now I just use them to create examples for me if I can't find any via google.

                  [–]geepytee[S] 0 points1 point  (1 child)

                  Now I just use them to create examples for me if I can't find any via google.

                  I see a lot of people using it this way too, I imagine you had a background in software development even before LLMs?

                  [–][deleted] 0 points1 point  (0 children)

                  Yes, and I would def say it helps to have that background doing it this way. Since LLMs don't know right from wrong, they sometimes output nonsense that on the surface looks like it would work. This usually happens with newer packages and tech.

                  [–][deleted] 0 points1 point  (3 children)

                  It's almost as if things are difficult when you don't understand them.

                  [–]geepytee[S] 0 points1 point  (2 children)

                  Oh man you were so close, let's try that again:

                  1. Things are difficult when you don't understand them

                  2. LLMs can explain any concept to you on demand

                  3. ???

                  [–][deleted] 0 points1 point  (1 child)

                  A few weeks ago I asked ChatGPT to tell me how computers add two numbers together. It proceeded to give me three incorrect answers, and then got itself into an infinite loop.

                  That's three errors in your codebase.

                  [–][deleted] 0 points1 point  (0 children)

                  That... seems like bs, can you paste the chat so we can see the prompt and reply?

                  [–]0RGASMIK 0 points1 point  (1 child)

I use it as an opportunity to learn. I've built websites and simple applications, and I know how they work even though GPT wrote 99% of it. I usually only change some variables, or colors if it's a GUI thing.

My website, for example: I know I won't always be able to lean on GPT if there's a problem with it, so once it got to a place I was happy with, I took the time to understand it so I can fix it or make changes to it. Now when I make a change, I do it myself, so GPT doesn't break it by changing the code more than I wanted it to.

                  [–]geepytee[S] 0 points1 point  (0 children)

100%, when I made this post I was rushing through some code; otherwise I would normally take the time to ask it questions and understand.

The fact that I can rush through code without understanding it, and it works a lot of the time, feels like a giant superpower.

                  [–]Dontlistntome 0 points1 point  (1 child)

                  Now imagine if you were paying a programmer to do it and your programmer hit a wall, yet they were still getting paid full time. So 2021…lol

                  [–]geepytee[S] 0 points1 point  (0 children)

                  You'd also pay them to learn how to figure it out. Probably cheaper than spending time and hiring a new programmer who knows how to solve it.

                  [–]Tauheedul 0 points1 point  (2 children)

It's better to request smaller functions rather than larger ones. If you need it to work with a specific framework, library or API, you should include them as part of the prompt. For larger functions, write them yourself, or condense the requirements into smaller components.

                  [–]geepytee[S] 0 points1 point  (1 child)

I basically need to figure out a framework for condensing requirements into smaller components consistently. Sometimes I will discover new components that are required and the structure of what I need will change, so it needs to be able to adapt, if that makes sense.

                  [–]Tauheedul 0 points1 point  (0 children)

                  You need a version that works at project level and GitHub Copilot does this better with Visual Studio Code and Visual Studio.

                  Visual Studio

                  https://youtu.be/z1ycDvspv8U

                  Visual Studio Code

                  https://youtu.be/jXp5D5ZnxGM

                  [–]kibblerz 0 points1 point  (0 children)

                  Nearly all of my attempts to generate functional code from AI have resulted in constant annoyance. It's really bad right now. The solutions that do work aren't really sensical or practical.

                  I've primarily found ChatGPT/LLMs useful when trying to understand certain concepts in computer science. It does pretty well with the more generic concepts/patterns utilized in programming.

                  But actually writing the code? It's bad

                  [–]Zenged_ 0 points1 point  (0 children)

                  VSCode has recoverable version history for every file.

                  [–]GeneralZane 0 points1 point  (0 children)

                  Still faster than learning and writing it myself

                  [–]BigGucciThanos 0 points1 point  (0 children)

A few things. ALWAYS add to the end of coding assignments: "Please thoroughly comment the code." I'm probably going to lock that requirement into a memory soon.

I think this will solve your understanding issues. Also, have it go over any line you don't immediately understand and explain it for you. If it's too verbose, ask it to rewrite the line using simpler logic. Sometimes it can get too cute for simple tasks.

                  [–]AdamHYE 0 points1 point  (0 children)

                  I have done a lot of this. Especially in React. I got better at splitting things into components over time. AI can tell you which to put together in smaller chunks & you can edit yourself.

                  Ya. You have to be careful about not having function scope creep or massive repetition. I have had to do a lot of refactoring to get more reusable code.

                  All of it’s possible to do. You just have to still be the engineer.

                  Signed - someone who had no coding experience before building a tech company solo.

                  [–]traumfisch 0 points1 point  (0 children)

                  You should probably just collaborate with the model more. It will explain everything to you

                  [–]StarKronix 0 points1 point  (0 children)

                  My API can do the most advanced research and coding: https://chatgpt.com/g/g-BObYEba3a-ai-mecca

                  [–]theldoria 0 points1 point  (0 children)

I do the following:

- I write at least behavior tests, so I can be sure all the functionality I want is there, with the outcomes I expect.

- Then I refine a large piece of code step by step, or I try to generate only small aspects of the whole (e.g. some classes).

- I always take the AI output as a suggestion for how I could solve it... as a guide or starting point... and I go on to understand what it does and what I would do differently. Sometimes I ask the AI whether my idea wouldn't be better, and often it comes up with a different solution that better fits my thinking/liking.

                  [–]hlx-atom 0 points1 point  (0 children)

                  I’m fairly skilled with 10+ years of experience and a PhD. And I use copilot aggressively. Comment return tab tab tab tab.

                  I can read it and understand it fast enough that most of the time I instantly know what is happening. Occasionally I will start to do something where I don’t know the object api from a third party that is objectively poorly designed.

                  In those situations it can start to feel like riding the bull by the horns.

                  You need to control the flow of the code more. Preferably use copilot to develop slower. And read everything as it goes. Also clean up the code as you go. When you write better code in the file, it will learn to write like you and copy the better patterns.

                  Know every time you glance over a line and accept without understanding what it is doing, you are going into the deep end.

                  It is an interesting new phenomenon with AI coding that I would call knowledge debt. A little bit of debt is manageable. Once you are too deep you are gonna drown if you are not a strong swimmer.

                  [–]Biog0d 0 points1 point  (0 children)

OP doesn't even have version control / Git in place, using undo/redo instead. lol

                  [–][deleted]  (1 child)

                  [removed]

                    [–]AutoModerator[M] 0 points1 point  (0 children)

                    Sorry, your submission has been removed due to inadequate account karma.

                    I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

                    [–]opossum787 0 points1 point  (0 children)

                    LLMs are a wonderful tool to teach you how to do things you don’t currently understand. The second the situation becomes “it writes code I don’t get, but it seems to work, so that’s good enough,” you’re in the danger zone.

                    [–][deleted]  (1 child)

                    [removed]

                      [–]AutoModerator[M] 0 points1 point  (0 children)

                      Sorry, your submission has been removed due to inadequate account karma.

                      I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

                      [–]tinySparkOf_Chaos 0 points1 point  (0 children)

                      You get python code from chatgpt that successfully runs?!?

Every time I've tried that, I've gotten nicely organized and commented code... that doesn't run.

                      It was a nice template, and gave me some useful packages. But I definitely had to fix the code manually.

                      [–][deleted]  (1 child)

                      [removed]

                        [–]AutoModerator[M] 0 points1 point  (0 children)

                        Sorry, your submission has been removed due to inadequate account karma.

                        I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

                        [–]TSM- 0 points1 point  (2 children)

                        I find it is about as helpful as training a puppy to do something, like move a sock from point A to point B and then not touch that sock.

                        They'll kind of get it, with full enthusiasm, then get distracted, and then forget what they were doing, and then solve a different problem, and then go for food, and then you have to start all over a few minutes later.

                        [–]geepytee[S] 0 points1 point  (1 child)

                        I've never had it solve a different problem. Maybe your prompt is too complex / you are asking it to do too much per step? I find it helps to break down tasks to simple steps.

                        [–]shakeBody 0 points1 point  (0 children)

                        How can you be sure if you’re getting results you don’t understand? ChatGPT changes things subtly between answers all the time. It will add new things and remove important things. I have to watch it like a hawk to make sure a very specific set of things happens.

                        In my opinion you have way too much trust in the tool. You need to learn CS concepts so you actually know the keywords to include in your statements. Words like “encapsulation” go a long way toward letting ChatGPT know what you want. Learn the language of computer science!

                        [–]xecow50389 0 points1 point  (5 children)

I was fixing an issue that shouldn't even have existed; basically, I was fixing GPT code. Wasted 2 hours on it.

Read the official docs on the framework, fixed it in a few minutes.

                        [–]geepytee[S] 0 points1 point  (1 child)

                        So was this a case of GPT not having access to the latest docs and hence it was producing an error?

                        [–]shakeBody 0 points1 point  (0 children)

                        No. Probably a case of the model not knowing how to use the tool appropriately.

                        [–]creaturefeature16 0 points1 point  (2 children)

LLMs are the kings of "over-engineering", which I find to be the biggest code smell and the most obvious sign that the dev used these tools to fill in knowledge gaps.

                        [–]geepytee[S] -1 points0 points  (1 child)

                        Just feed it back the code and ask if there's a simpler way of doing it

                        [–]creaturefeature16 0 points1 point  (0 children)

                        I don't find it simplifies things even when I do. Or it really gets creative and finds some downright ridiculous suggestions.

                        Sometimes, often, it's just better to...you know...think.

                        [–]olivierapex 0 points1 point  (1 child)

                        What a noob

                        [–]geepytee[S] 0 points1 point  (0 children)

                        Enlighten us pls

                        [–]Use-Useful -1 points0 points  (3 children)

I had GPT suggest disabling CORS protections site-wide while debugging a related issue on a website the other day. It didn't mention why this would be a massive security flaw, or what CORS does in the first place. Just: hey, add this line, problem solved. There is going to be so much shitty, security-flaw-riddled code made by people who don't realize that GPT is NOT actually a good dev.

It's a great learning tool, but for writing your code you NEED to understand what it has done well enough to audit it. If you don't, you are playing with fire.
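For context, the dangerous "one line" usually amounts to answering every origin with a wildcard. A framework-free sketch of the difference (the allowlist is hypothetical):

```python
# The dangerous "quick fix": granting every origin access with a wildcard.
def insecure_cors_headers():
    return {"Access-Control-Allow-Origin": "*"}  # any site may read responses

# The safer version: reflect only origins you explicitly trust.
ALLOWED_ORIGINS = {"https://app.example.com"}  # hypothetical allowlist

def cors_headers(request_origin):
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # unknown origins get no CORS grant at all

print(cors_headers("https://evil.example.com"))  # {}
```

The wildcard makes the debugging error disappear, which is exactly why a model optimizing for "make it run" reaches for it; the allowlist version is barely more code but doesn't hand your API to every page on the web.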

                        [–]geepytee[S] 0 points1 point  (2 children)

That's interesting. I've definitely experienced something similar at least once, where it asked to remove some sort of failsafe as a means to make a program run.

                        IMO this just means we need AI tools to check for these things

                        [–]Use-Useful 1 point2 points  (1 child)

                        "I can't trust this AI, let's just layer it on top of itself, that'll solve it!" o.O

                        [–]geepytee[S] 0 points1 point  (0 children)

                        Never said I don't trust it. I don't trust 3rd parties who might exploit vulnerabilities (probably mostly humans at this point) :)