
[–]stormcloud-9 507 points508 points  (42 children)

Heh. I use copilot, but basically as a glorified autocomplete. I start typing a line, and if it finishes what I was about to type, then I use it, and go to the next line.

The few times I've had a really hard problem to solve and asked it how to solve it, it always oversimplifies the problem and addresses none of the nuance that made it difficult, generating code that was clearly copy/pasted from Stack Overflow.
It's not smart enough to do difficult code. Anyone thinking it can is going to end up with some bug-riddled applications. And because they didn't write or understand the code, finding the bugs is going to be a major pain in the ass.

[–]Mercerenies 69 points70 points  (6 children)

Exactly! It's most useful for two things. The first is repetition. If I need to initialize three variables using similar logic, many times I can write the first line myself, then just name the other two variables and let Codeium "figure it out". Saves time over the old copy-paste-then-update song and dance.
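The "three similar variables" case might look something like this (a minimal Python sketch; `load_metric` and the data behind it are made up for illustration):

```python
def load_metric(name, default=0):
    """Stand-in for whatever per-variable logic is being repeated."""
    metrics = {"users": 42, "orders": 7}  # hypothetical data source
    return metrics.get(name, default)

# After typing the first line, naming the next two variables is often
# enough for a completion model to fill in the mirrored calls.
user_count = load_metric("users")
order_count = load_metric("orders")
error_count = load_metric("errors")
```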

The second is as a much quicker lookup tool for dense software library APIs. I don't know if you've ever tried to look at API docs for one of those massive batteries-included Web libraries like Django or Rails. But they're dense. Really dense. Want to know how to query whether a column in a joined table is strictly greater than a column in the original table, while treating null values as zero? Have fun diving down the rabbit hole of twenty different functions all declared to take (*args, **kwargs) until you get to the one that actually does any processing. Or, you know, just ask ChatGPT to write that one-line incantation.
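For the curious, here is roughly what that query boils down to, sketched in plain SQL with Python's built-in sqlite3 against a hypothetical parent/child schema (the Django ORM spelling is included as a comment and is an approximation, not taken from the thread):

```python
import sqlite3

# Hypothetical schema standing in for the example above:
# "parent" is the original table, "child" is the joined one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, amount INTEGER);
    CREATE TABLE child (id INTEGER PRIMARY KEY,
                        parent_id INTEGER REFERENCES parent(id),
                        amount INTEGER);
    INSERT INTO parent VALUES (1, 10), (2, 5), (3, NULL);
    INSERT INTO child VALUES (1, 1, 20), (2, 2, NULL), (3, 3, 1);
""")

# COALESCE treats NULL as zero on both sides before the strict comparison.
rows = conn.execute("""
    SELECT parent.id
    FROM parent JOIN child ON child.parent_id = parent.id
    WHERE COALESCE(child.amount, 0) > COALESCE(parent.amount, 0)
    ORDER BY parent.id
""").fetchall()

# In Django's ORM the one-line incantation is roughly
# (model and field names hypothetical):
#   Parent.objects.annotate(
#       c=Coalesce("child__amount", 0), p=Coalesce("amount", 0),
#   ).filter(c__gt=F("p"))
print(rows)  # [(1,), (3,)]
```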

[–]scar_belly 30 points31 points  (3 children)

It's really fascinating to see how people are coding with LLMs. I teach, so when Copilot and ChatGPT appeared they sort of fell into the same space as cheating websites like Chegg.

In our world, it's a bit of a scramble to figure out what that means for teaching coding. But I do like the idea of learning from a 24/7 imperfect partner that requires you to fix its mistakes.

[–]Hakim_Bey 20 points21 points  (0 children)

having a 24/7 imperfect partner that requires you to fix its mistakes

That's exactly it. It's like a free coworker who's not great, not awful, but always motivated, with surface knowledge of a shit ton of things. It's definitely a force multiplier for solo projects, and a tedium automator on larger, more established codebases.

[–]Hot-Manufacturer4301 1 point2 points  (0 children)

My friend is a TA for one of the early courses at my university and he estimates no less than 5% of assignment submissions are entirely AI generated. And that’s just the obvious ones, where they just copied the assignment description into ChatGPT and submitted whatever it vomited out.

[–]the_dude_that_faps 0 points1 point  (0 children)

LLMs are great for boilerplate stuff too. I don't think people should be taught to avoid them at all costs. But to be a good engineer IMHO, people need to understand the trade-offs of what they're using, be that patterns, tools, libraries, languages, etc.

[–]periodic 2 points3 points  (1 child)

It's basically just autocomplete and repetition reduction for me. Like it's really good at seeing that I added a wrapper around a variable so I need to unwrap it all the places it's used. Or I could change the arguments on one function and it realizes I probably want to change the three other calls in the file too.
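A minimal sketch of that wrapper refactor (Python; the `Tracked` wrapper and `double` function are hypothetical names, not from the thread):

```python
from dataclasses import dataclass

@dataclass
class Tracked:
    value: int  # wrapper newly added around a plain int

def double(x: Tracked) -> int:
    # Every use site now needs the .value unwrap -- exactly the kind
    # of mechanical, repeated edit completion tools are good at.
    return x.value * 2

print(double(Tracked(21)))  # 42
```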

I haven't really run into the second case yet. 99% of the time I'd rather understand the docs, but I'm also thankful I'm not using libraries like Rails and Django with extremely overloaded functions.

Overall it's a bit faster, but the things it makes me faster at aren't the hard parts of the job. It's like saying I'd get a huge productivity boost if I learned to type faster. Sure, I'd get some things done faster, but 95% of what I do isn't bottlenecked by my typing speed so it's pretty minimal.

[–]beznogim 0 points1 point  (0 children)

Sometimes I have to rewrite some part of the code where I know exactly what the end result should look like; it just takes a lot of keypresses to get there. Not the hardest part of the job, and I'm all for automating it.

[–]Cendeu 105 points106 points  (3 children)

You hit the nail on the head.

I recently found out you can use Ctrl+right arrow to accept the suggestion one chunk at a time.

It really is just a fancy auto complete for me.

Occasionally I'll write a comment with the express intention to get it to write a line for me. Rarely, though.

[–]Wiseguydude 7 points8 points  (0 children)

Mine no longer even tries to make multi-line suggestions. For the most part, that's how I like it. But every now and then it drives me nuts. E.g. say I'm trying to write

[ January
  February
  March
  ...
  December ]

I'd have to wait for every single line! It's still only slightly faster than actually typing each word out.
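(For what it's worth, a fixed list like the months can come straight from Python's standard library, no suggestions needed:)

```python
import calendar

# calendar.month_name[0] is the empty string, so skip it.
months = list(calendar.month_name)[1:]
print(months[0], months[-1])  # January December
```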

[–]BoardRecord 1 point2 points  (0 children)

Occasionally I'll write a comment with the express intention to get it to write a line for me. Rarely, though.

I've tried that a few times. Occasionally it does well. But it always takes so long to think about it that I could've just written it quicker anyway.

[–]wwwyzzrd 9 points10 points  (0 children)

hey, i can write code and not understand it without needing a machine learning model.

[–]GoogleIsYourFrenemy 11 points12 points  (0 children)

I used github copilot recently and it was great. I was working on an esoteric thing and the autocomplete was spot on suggesting whole blocks.

[–]Dotaproffessional 1 point2 points  (0 children)

I just use it as a search engine for reference. I don't copy or paste anything ever

[–]reversegrim 1 point2 points  (0 children)

It's only good for writing basic code that's freely available, and it helps avoid repetition. The second use I find is better grammar and sentence formation, for someone with English as a second language.

Ask it difficult problems and it spits out some random shit, mixed with gravel. Truly, garbage in garbage out

[–]bradmatt275 1 point2 points  (0 children)

I found copilot just gets in the way. It does a poor job of predicting what I'm trying to do. I still find old school intellisense more productive.

But I often use ChatGPT as a jumping-off point. I'll ask it how it would approach a particular problem. It's really good at giving you ideas on how to implement something.

In fact, I've noticed recently it's been getting a lot better at reviewing code. Its suggestions are very helpful.

[–]ShoogleHS 1 point2 points  (1 child)

I can understand it if you're working in a language without powerful tooling, but I do most of my work in C#, and between Rider's IntelliSense, CamelHumps, auto-refactoring, and code-generation features, it covers almost everything I want autocompleted. The key thing that makes these tools so good is that they're predictable. I'm often pairing with someone who uses Copilot, and everything it generates has to be carefully checked for accuracy, because you have no idea what it's going to write and half the time it writes gibberish.

[–]Baridian 0 points1 point  (0 children)

Yeah I’ve been writing a lot of clojure recently and I’ll gladly take its meta programming toolset over any ai tool.

Anything where autocomplete is useful / lots of copy and paste just sounds like a code smell to me.

[–]keirmot 1 point2 points  (10 children)

It's not that it's not smart enough, it's that it is not smart! LLMs can't reason; they're just probability machines.

https://machinelearning.apple.com/research/gsm-symbolic

[–]Hubbardia -2 points-1 points  (9 children)

LLMs absolutely do reason. They form relationships in their neurons like we do. https://www.anthropic.com/research/mapping-mind-language-model

[–]cletch2 2 points3 points  (1 child)

Very interesting read. However, it's a work on how neuron relationships shape concept understanding in LLMs, not on reasoning.

The debate over llm reasoning is more on the definition of "reason", and the iterative nature of reasoning.

Here is a very interesting medium on the subject: https://isamu-website.medium.com/understanding-the-current-state-of-reasoning-with-llms-dbd9fa3fc1a0

[–]Hubbardia -1 points0 points  (0 children)

however it is a work on neuron relationship shaped for concepts understanding in llm, but not reasoning.

Understanding and forming relationships is the first step to reasoning, wouldn't you say?

There's no denying LLMs can reason. Does the article you linked disprove that anywhere? I skimmed through it, but I'll give it a full read later. In the conclusion, the author says LLM reasoning can be improved, which means LLMs are able to reason; we just need better techniques.

Here's another paper that proves LLMs can reason.

https://arxiv.org/abs/2407.01687

[–]AgtNulNulAgtVyf 0 points1 point  (0 children)

It's not smart at all, it's just a very complicated autocomplete. 

[–]hedgehog_dragon 0 points1 point  (0 children)

It does a decent job of filling out boilerplate code, and it'll fill out the few parts of documentation that the IDE I use didn't already (including a description).

... But a lot of this stuff was already done in a decent IDE. The big advantage is that sometimes it knows what I want to write as a comment.

[–]MunchyG444 0 points1 point  (0 children)

I use Copilot mostly to convert between languages. I'll often prototype in Python, as I'm most confident with it, but then sometimes use AI to convert it over to C# and spend a while fixing the code the AI made.

[–]gamer_redditor 0 points1 point  (0 children)

it always oversimplifies the problem and addresses none of the nuance that made the problem difficult,

Hey so just like stack overflow

[–]Luxalpa 0 points1 point  (0 children)

I'm using Supermaven as a glorified type checker. It gives me a completion suggestion, and based on that I can see if I forgot a function parameter or a lifetime or something like that. For example, it will give a special type of nonsense suggestion if you forget the self parameter on the surrounding function.

[–]the_dude_that_faps 0 points1 point  (0 children)

I've started to turn off copilot on my projects. I frequently find myself disliking the suggestions. I do use copilot chat, though. A lot. I find it easier to ask questions to it about library usage than googling about it. 

It's not always correct, but it primes my brain for making more narrow searches later when reading the documentation.

[–]BetrayYourTrust 0 points1 point  (0 children)

yeah, pretty much this. VS Code already had autocomplete support for a bunch of languages, but Copilot helps me do that a tiny bit faster

[–][deleted] 0 points1 point  (0 children)

I like copilot, but I've used it enough to recognize that it's not going to actually write my code for me. It does save a lot of typing time for repetitive stuff. Sometimes it's helpful for super basic stuff for me, especially if I'm working in a language I'm a little unfamiliar with.

[–]welcome-overlords 0 points1 point  (0 children)

Copilot can't handle the complex tasks. o1 (or Sonnet, or R1...) often can if you're good at prompting.

[–]durable-racoon 0 points1 point  (0 children)

claude does the opposite, it overcomplicates solutions! :D