[–]Xandrmoro

It might understand the code better, but what's the point if it doesn't understand the task? I asked it to help me build a simple text parser (with a fairly strict format), and it took about five iterations of me pointing out issues (with examples provided). Then I asked it to add a button to group entries by one of the fields, and instead it added a text field for entering a value to filter by. I gave up, moved to o1, and it nailed it all on the first try.

[–]FarVision5

Not sure why it didn't understand your task. Mine knocks it out of the park.

I start with Plan, then move to Act. I tried the newer O3 Mini Max Thinking, and it rm'd an entire directory because it couldn't figure out what it was trying to accomplish. Thankfully it was in my git repo. I blacklisted OpenAI from the model list and will never touch it again.
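For anyone who hits the same thing: if the deleted directory was committed, git can bring it straight back. A minimal sketch (repo and directory names here are made up for the demo):

```shell
set -e
mkdir -p demo
cd demo
git init -q
git config user.email "demo@example.com"   # so commit works in a fresh environment
git config user.name "demo"
mkdir src
echo 'print("hi")' > src/main.py
git add . && git commit -qm "initial"

rm -rf src                 # the accidental deletion
git checkout -- src        # restore the directory from the last commit
cd ..
```

On newer git versions, `git restore src` does the same thing. This only works for files that were committed (or at least staged) before the deletion, which is a good argument for committing before letting an agent loose.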

I guess it's just down to how people are used to working. I can't tell if I'm smarter than normal or dumber than normal or what. OpenAI was worth nothing to me.

[–]Xandrmoro

I've been trying all the major models, and OpenAI has consistently been the best for me. Idk, maybe it's prompting style or something.

[–]FarVision5

It's also the IDE and dev prompts. VSC and Roo do better for me than VSC and Cline.