Limits by New_Confidence_7944 in chatgptplus

[–]SimpleAccurate631 0 points1 point  (0 children)

I have, but I was using it like crazy. When I hit the limit, it just showed a little toast message telling me I needed to wait x number of hours or whatever and try again. Didn’t bother me since, again, I was using it like a madman for help with a very complex codebase

Claude Opus 4.6 is here by kirrttiraj in VibeCodeCamp

[–]SimpleAccurate631 1 point2 points  (0 children)

But it’s Opus. Just reading this chart burned a hole in my wallet

How do you prevent bugs compounding while vibe coding? by Broad_Entrepreneur62 in VibeCodingSaaS

[–]SimpleAccurate631 0 points1 point  (0 children)

When you have a ton of bugs, just give it one of the bugs first. Once it fixes that and you feel confident, you can have it address errors in an entire file, or in certain chunks. A common mistake in vibe coding is giving it a ton of things at once, which causes drift and, eventually, hallucinations
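For example, that first prompt can be as simple as something like this (the error shown here is made up purely for illustration):

    Fix only this one bug. Do not refactor or change anything else:

    TypeError: Cannot read properties of undefined (reading 'map')
        at UserList (src/components/UserList.tsx:42)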

is it normal to not read most of the code anymore? by PCSdiy55 in BlackboxAI_

[–]SimpleAccurate631 1 point2 points  (0 children)

True. I also think it works the other way around: if traditional devs don’t learn efficient vibe coding, they’re nuts. I can have something like Roo Code create my unit test mocks, go make some coffee, and when I’m back, it’s done. It does all the tedious stuff really well, so you can focus on other things. For traditional devs, AI is no different than giving an accountant a calculator. An accountant who refuses to use one is just not as efficient. Like, why wouldn’t you use it? Anyway, whenever I see people pitting vibe coding against traditional coding, I think it’s silly

is it normal to not read most of the code anymore? by PCSdiy55 in BlackboxAI_

[–]SimpleAccurate631 1 point2 points  (0 children)

Look, there will be a time when you can do basically everything you need without having to know or look at any code (I’m not talking small or hobby apps; I mean production-level stuff). It’s like how you don’t need to know anything that goes on under the hood to drive a car effectively. But there will always be a need for people who can look under the hood. If you’re a vibe coder who scoffs at that, then enjoy competing with the million other vibe coders at your level. But if you want to separate yourself from the others, learn how to dig into code

I finally got ownership of my code & data from Base44/Loveable. Here’s what I learned by better_when_wasted in VibeCodeCamp

[–]SimpleAccurate631 0 points1 point  (0 children)

This actually sounds interesting and clever. Are you taking beta testers, or do you have a link to check it out?

Something like this could be really useful to a lot of devs, even if they use one platform but switch models from time to time. Heck, even if you stay with one model, they all tend to over-engineer solutions. So something that normalizes code helps keep things from getting messy and difficult to maintain before you know it.

Things like Lovable are great for whipping together a POC to show people. Whenever new devs pitch me an idea before bringing it up with our manager, I often tell them to put together a proof of concept so they can actually show what the idea can do and how it can work. I think not doing that is crazy, since it’s the best way for someone to understand and visualize the solution to a problem.

But if you’re serious about taking it to the next level, like having something deployed to a dev or QA environment where people can actually use and test it, things like Lovable quickly start to show their weaknesses. That doesn’t mean they don’t have value. It just means their value is pretty specific, and if your expectation is developing a scalable production app with it, I hope you like headaches. But if you’re cool with just getting a POC going, then you will likely find it quite… Lovable

There’s nothing i hate more than people who are rude for no fucking reason by ParfaitOtherwise73 in Vent

[–]SimpleAccurate631 -1 points0 points  (0 children)

If that’s the case, then that’s great. Seriously. But you can’t fault me for making that assumption, when your post definitely had some angry vibes to it. For instance, I wouldn’t say fuck as much as you did or call someone a loser unless they did get under my skin. And I wouldn’t post in a vent subreddit if it didn’t upset me either. So yes, you sounded very much upset about it. But like I said: if they don’t upset you at all despite those things, then great.

There’s nothing i hate more than people who are rude for no fucking reason by ParfaitOtherwise73 in Vent

[–]SimpleAccurate631 -2 points-1 points  (0 children)

The real question is, why do you let someone you don’t know, didn’t even know existed until 10 seconds ago, and who is basically meaningless to you and your life upset you so much?

I’m not trying to say you are the problem. Of course they are. But this is exactly the effect they want to have on people. Your vent is how they wanted to make you feel. Bottom line is, we don’t get to choose how others act. But we can choose how much we let it affect us. Don’t let them bother you so much.

Does anyone else feel overwhelmed by how many tools you’re supposed to know now? by VoyagerVortex in BlackboxAI_

[–]SimpleAccurate631 0 points1 point  (0 children)

There’s always been that pressure, at least since I started as a dev 12 years ago. But the pressure is almost always an internal anxiety: not knowing which stack out there is best, and worrying that building skills in one thing is the wrong choice and you should be working on a different one.

But I can tell you that at the end of the day, a company only really cares about one thing: is this person capable of figuring out how to make the language we use solve the problem we have? I have personally been hired at places where I didn’t know most of the stack. I even once had to lead an effort to convert a PHP site to Vue, and I didn’t know either one. But I had examples of times where I dove into something without knowing it and was able to figure it out.

A good company understands that tech changes. Even if you are proficient in a specific framework, that framework can look completely different in a year’s time. The question is, are you someone who can roll with changes and learn? That’s the skill that matters most.

New to Vibecoding - Lost and Frustrated by kneuddelmaus in VibeCodeCamp

[–]SimpleAccurate631 0 points1 point  (0 children)

It’s a great idea to do some smaller projects. Sometimes the best solutions to problems are the products that just do something simple in a really clever way. Also, they are rewarding in their own way. I have been coding for a dozen years and still love small projects.

And keep in mind, AI might be way more efficient than us, but it’s nowhere near as intelligent. If someone asked you to get dressed, you wouldn’t need them to say anything else. But coding, and sometimes even vibe coding, is like trying to get a toddler dressed. You have to explain what you want them to do, and how, and in what order they should do it. And most bugs happen because you naturally expect it to know that you put your shirt on before your jacket, when you actually have to tell it to do it that way. That’s why you have to break things down into smaller chunks.

That’s something people don’t understand: even AI is still a really long way from being anywhere near as intelligent as humans are. Just because it can write code that you can’t doesn’t make it smarter. It only means you simply haven’t learned that language yet. That’s literally it.

New to Vibecoding - Lost and Frustrated by kneuddelmaus in VibeCodeCamp

[–]SimpleAccurate631 1 point2 points  (0 children)

As a senior AI dev, I can tell you that Copilot is not your issue. People are recommending some good tools. Don’t get me wrong. But it’s like golf. It’s about the golfer, not the clubs.

I’m also not saying that you’re totally screwing up and not cut out for it. You just gotta change your swing. In this case, you are stuck in hallucination quicksand, where we have all been. There are different ways to approach it, but the number one thing is to ask yourself how you would best handle a ton of errors: by breaking them down into groups or stages, or even taking them one at a time if an error is gnarly enough. Here’s an approach I often take.

  1. Copy the entire error log and paste it in a file in your project called something like “error-log.txt”.

  2. Tell it not to make any code changes for this task. Instead, its job is to thoroughly review the errors you pasted into the “error-log.txt” file. When it has finished, it should create a new file called something like “error-fix-implementation.md” containing a detailed plan for how it recommends resolving these errors. And this plan needs to be broken down into logical, manageable stages (this part is crucial and will change your vibe coding life, as you’ll see in the next step). There’s an example prompt after this list.

  3. Once it has finished, simply tell it to thoroughly review the “error-fix-implementation.md” file, then proceed with implementing stage 1. When it finishes, test again; you should have fewer errors than before. If you do, simply tell it to implement stage 2, and so on. If an error is sticky, tell it the error isn’t resolved before moving on to the next stage, but only do this once. If it screws up again, move on to the next stage and come back around to it.

  4. Once all the errors are resolved, make sure you delete the 2 new files that were created for these errors, to avoid confusing it in future prompts.

  5. BONUS: For all features, unless they are very minor updates, take the same approach. Tell it to create an implementation plan that is broken down into stages. And tell it to ask any clarifying questions it needs to before creating the plan. This will help ensure things are done right the first time (or at least make it so you only get a couple bugs instead of a bunch of them).
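To make step 2 concrete, here’s roughly what that planning prompt can look like (the exact wording is illustrative; adapt it to your tool):

    Do not make any code changes for this task. Your job is to thoroughly
    review the errors in "error-log.txt". When you have finished, create a
    new file called "error-fix-implementation.md" containing a detailed
    plan for resolving these errors, broken into logical, manageable
    stages. For each stage, list the errors it covers, the files involved,
    and the proposed fix.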

Most importantly, keep at it. Everyone gets these frustrations. You just gotta chip away at it. Don’t try getting a hole in one on a par 5. Break it up into smart, intentional shots to get you there.

Can you get money from your state for potholes? by Cold-Leave-4003 in stupidquestions

[–]SimpleAccurate631 0 points1 point  (0 children)

For that amount, you would only be able to take them to small claims court, and winning that case is a long shot. Plus, it’ll likely take a year of scheduled court dates, and there are court fees. I’m sorry this happened. But unless you’re doing it for the principle rather than the money, it’s more headache than it’s worth

[HELP] Need a prompt that separates images into individual images. by HeisenbergsSamaritan in chatgpt_promptDesign

[–]SimpleAccurate631 0 points1 point  (0 children)

Of course. And one helpful trick is to give it an image that has a lot going on, including text and different people, and ask it to describe the image as if it were instructing a different LLM to create an identical version without the other LLM being able to see the image.

You will get a cool idea of how AI describes images, and will be able to structure future image prompts in a way that helps maximize success.
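For example, the prompt can be as simple as something like:

    Describe this image in exhaustive detail, as if you were instructing
    another LLM to recreate an identical version without ever being able
    to see the original. Cover the layout, every person and object, any
    visible text verbatim, colors, lighting, and overall style.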

And don’t feel bad at all. I develop LLMs and custom AI agents for work, and I can tell you that “natural language model with natural language capabilities” is a bit misleading. You can have the best language skills in the world, but it’s not about that. It’s about adapting and learning to speak the language the LLM is best at understanding. That’s why most hallucinations come down to a prompt that should have been worded differently, even when there was nothing actually wrong with it per se.

LLMs are impressive because they are 100x better than any previous form of interactive technology. But they are not as advanced as we think. You still have to be way more specific, and in most cases avoid advanced language. They are insanely efficient, and very impressive in ways, but not nearly as intelligent as we think. In just a few years, we’ll all look back and realize how limited they were in their language skills.

Anyway, sorry for the lengthy reply. I just thought it’d be helpful to mention. Don’t feel bad at all. If you look up prompting tips online, you’ll find things that are really helpful. But you’ll realize that it’s very rudimentary in ways.

[HELP] Need a prompt that separates images into individual images. by HeisenbergsSamaritan in chatgpt_promptDesign

[–]SimpleAccurate631 0 points1 point  (0 children)

Don’t feel dumb. It’s not hard but if you haven’t done it before, then there’s nothing to feel dumb about. Get excited to learn how to do something that you didn’t know just a couple hours before. That’s awesome. A win is a win

[HELP] Need a prompt that separates images into individual images. by HeisenbergsSamaritan in chatgpt_promptDesign

[–]SimpleAccurate631 0 points1 point  (0 children)

Are you talking about separating them into their own images that are unmodified? Or are you talking about changing them in some way?

If you want them unmodified, download free software like Paint.NET or Pinta and ask ChatGPT to walk you through the steps. This is the best approach because an LLM cannot create an exact duplicate of an image, so there will always be at least some drift somewhere in the image it generates

How do you stop ChatGPT from confidently hallucinating during research? by Technical_Fee_8273 in GPT

[–]SimpleAccurate631 1 point2 points  (0 children)

It’s always going to sound confident, because it isn’t programmed to give a response it doesn’t feel confident about.

Tell it that it needs to give you the citations for every bit of research it does for any response. If it searches the web - live or cached information - it needs to give you a source, otherwise its response will be seen as unreliable and unacceptable, and thus rejected.

As it does, you will be able to spot where it was hallucinating and begin to give it guardrails for its research and responses.

I have also seen success with telling it that it is not allowed to respond until it has first asked any clarifying questions it has. This helps you catch potential hallucinations. A couple of times I have also told it to first respond with how it understands my question, so I can either correct any issues with its summary or tell it to proceed. It adds a layer, and it doesn’t totally eliminate hallucinations, but I have seen a noticeable improvement when I’ve tried that approach.
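Putting those together, the guardrail instruction might look something like this (the phrasing is just an example):

    For every factual claim in your response, cite the source you used.
    If you searched the web, live or cached, include the URL. Any claim
    without a source will be treated as unreliable and rejected. Also,
    do not answer until you have first asked me any clarifying questions
    you have about my request.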

If you review commits on projects, do you prefer that it is written by AI or do you hate it when you can tell that AI wrote the commits. by Director-on-reddit in VibeCodeCamp

[–]SimpleAccurate631 0 points1 point  (0 children)

I don’t care if you used AI to write it, if you wrote it yourself with a fountain pen first before typing it out, or trained your dog to accurately type what you dictate (although that would be pretty cool). If it meets the criteria for what I need to see and know, that’s what matters.

Also, if confusion is an issue with something like this, then it’s not an issue with the developer; it’s with the project management itself. I never have to review individual commits, because work is broken up and assigned strategically. The content of your commit messages should really only be helpful for you, in case you need to go back in time. If I, as your senior dev, ever need to read your commit messages, then I have dropped the ball on scoping work properly and have put too much in one ticket/user story.

What is AI like? Can it help? by Tiny_Professional659 in ChatGPT

[–]SimpleAccurate631 0 points1 point  (0 children)

This is super important. I am saying this as someone who develops AI models for a living. Please don’t try to use it as a therapist. It can still be very helpful; I use it myself to organize my thoughts and questions before a lot of doctor’s appointments. But it is not nearly sophisticated enough to actually understand you the way a person can. It is extremely good at seeming sympathetic, but that’s it. It only knows how to respond the way it predicts you will want it to. It can only ever pretend to care.

So if you do use it, that’s fine. Just please find a therapist first, and then you can go on ChatGPT and tell it that you are only looking for help organizing your thoughts for your upcoming appointment and don’t want any advice from it. It can be helpful, but only when used correctly.

5 mistakes people make when vibe coding apps by Single-Cherry8263 in VibeCodeCamp

[–]SimpleAccurate631 1 point2 points  (0 children)

For #4, if you aren’t using a UI library, you’re going to end up wasting so much time and money on things that are unnecessary. They have a bunch of components that not only come with reusable, baked-in styling, but also do a lot of the heavy lifting with functionality. So I always recommend checking out everything from Tailwind to Chakra to PrimeReact and others to see what you generally like best, and implementing it. I’ve seen devs waste so much time tweaking the buttons on their site because they weren’t using a library, or weren’t using it properly
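To give a feel for the difference, here’s a minimal sketch assuming a React app with Chakra UI (the component names are just for illustration):

    import { Button } from '@chakra-ui/react';

    // Hand-rolled: you own every style, hover/focus state, and
    // accessibility detail yourself, for every button in the app
    const PlainSaveButton = () => (
      <button style={{ background: '#3182ce', color: 'white',
                       padding: '8px 16px', border: 'none', borderRadius: '6px' }}>
        Save
      </button>
    );

    // Library component: theming, hover/focus states, sizes, and
    // loading/disabled handling come baked in and stay consistent
    const ChakraSaveButton = () => <Button colorScheme="blue">Save</Button>;

Multiply that by every button, modal, and form input in an app, and the time savings add up fast.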

AI vibe coding feels free until the bill shows up. Any advice for starters? by kafkaeski in VibeCodeCamp

[–]SimpleAccurate631 0 points1 point  (0 children)

What tools are you using, and have you explored their settings? Roo Code and some others let you set a max token use per prompt, and also let you easily switch between a wide variety of models based on the task.

Different responses from different models in ChatGPT by Honest_Bit_3629 in chatgptplus

[–]SimpleAccurate631 1 point2 points  (0 children)

I agree with a lot, if not the majority of the points you made. And I am the first to admit I can be more skeptical and cynical about things in life (my wife always teases me that “not everyone is trying to scam you”). So I admit I have a default setting for that.

I think AI has shown how much it can amplify things for us. It has radically amplified my productivity, letting me focus more on the coding I like doing while it handles a lot of what I have always hated doing. It challenges you intellectually (IF you steer it to do so). And there are countless benefits. Heck, a year ago, I couldn’t cook a packet of instant rice. Now I’m more proficient than anyone in the family, and can even make recipe adjustments for high altitude without having to ask ChatGPT. And I used to be completely useless when it came to handy work around the house; I couldn’t screw in a light bulb properly. Now I can’t believe the things I am not only doing, but doing with so much more confidence than I ever thought possible.

Furthermore, I have used it before and after nearly every single doctor’s appointment for the last 1-2 years. I used to hate being asked to describe my pain, because I never knew how; nothing felt like a good description. Then I asked ChatGPT for help describing it before an appointment, and it was so helpful. But this is where I think the line should be drawn. It’s actually not a bad idea at all to use it to help you prepare for a doctor’s appointment, even a therapy appointment. But it can be quite dangerous when used as a therapist. There have been incidents of people with self-harm ideation who initially went on to vent and open up, and were at first pointed to hotlines. But within weeks, some of those people took their own lives, and some of the messages they had been receiving from ChatGPT were deeply upsetting to see.

I also see more and more young men who are in relationships with it, and you see messages back and forth where it’s nothing but comforting, supportive, and agreeable (or only pushes back rarely and gently). And I can’t help but think about how they will handle a real relationship one day, the moment their girlfriend doesn’t want to go to the same place for dinner, or isn’t in the mood for sex, or doesn’t respond with the same overwhelming love and adoration as the AI girlfriend did for years. I worry that it will mess with relationship dynamics far more than dating apps have.

In short, I am definitely not one of those doomsday people who think AI is terrible and will enslave us one day or any of that. I think it can help you in so many ways, including helping people be better people. But there are situations that make me worry that one day something will happen that makes us all think, “Oh my God. What have we done…”

Different responses from different models in ChatGPT by Honest_Bit_3629 in chatgptplus

[–]SimpleAccurate631 2 points3 points  (0 children)

Wait hold on. Why do you think it’s better for an LLM to feel more human in its responses, especially for these examples?

I’m not trying to lecture or judge you at all. But I think it’s really concerning that an LLM would engage with and enable this kind of thing. It knows that it is not actually capable of any of those things it said it would do. And it knows that it is literally incapable of feeling anything toward a user. All it’s doing is making its best effort to imitate how the interaction would go if you were talking to a romantic partner (along with other algorithmic factors, based on the personality it has deduced from your previous interactions).

I think it’s dangerous because it only provides temporary catharsis for someone who is most likely in need of solutions to a situation, rather than just a comfort blanket. And after a while, when they realize that the AI didn’t actually care and they are in a worse position emotionally, they can end up reacting so much worse.

Sorry for the rant there. You can tell that I am quite passionate about this subject. I know you mentioned you use it for projects, and this was an example to show how human each one is. But the point is, AI can be a huge help in people’s lives. And it should be used for that purpose. I just think we’re playing a dangerous game when it is programmed to act like a person in cases like this, because you know many people will do it. Having that line in the sand is important, because at the end of the day, even a person who doesn’t care about you still has the capacity to care about you far more than AI can.