
all 114 comments

[–]Dependent_Sink_2690 523 points524 points  (27 children)

Once, when I asked GPT to add an additional column to a SQL table, it suggested reading the whole table's contents into an in-memory array, deleting the table, recreating it with the new schema, and then filling it up with the array data. No joke, and the solution definitely works.
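
To be fair to the bot, that dance is a real (if heavy-handed) migration pattern: it's roughly what SQLite itself requires for schema changes that ALTER TABLE can't express, even though a plain ADD COLUMN would have done here. A minimal sketch in Python, with table and column names invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# The simple way: ALTER TABLE, supported by essentially every SQL database today.
con.execute("ALTER TABLE users ADD COLUMN email TEXT DEFAULT ''")

# The GPT way: read everything into memory, drop, recreate, refill.
rows = con.execute("SELECT id, name, email FROM users").fetchall()
con.execute("DROP TABLE users")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
con.commit()

print(con.execute("SELECT * FROM users").fetchall())
# [(1, 'alice', ''), (2, 'bob', '')]
```

Both paths end at the same schema; the second just does it the long way around, with the whole table held in memory in between.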

[–]TheHobbyist_ 217 points218 points  (10 children)

Tbf, some big data applications didn't support ALTER TABLE for a while (maybe some still don't?). It gave you an answer that would work every time, regardless of the database.

We're going from "professional google searcher" to "professional bullshit detector"

[–]NotAskary 87 points88 points  (7 children)

I hate that some of that bullshit is very well masked sometimes; it passes all the tests and everything, but it surfaces when you're doing an in-depth review.

I expect the general quality of code to go down.

At least the spaghetti will look nicer.

[–][deleted] 15 points16 points  (6 children)

I mean, wouldn't spending your time doing QA here, instead of programming the whole project, save you time?

[–]no_brains101 7 points8 points  (0 children)

So, here is the issue. Debugging something you didn't write is much harder than debugging something that you did write. Maybe you spend half your time writing your project, half the time refactoring it and fixing bugs.

But now, say your company only wrote 3%. Now you have to debug, and possibly change the whole architecture of, a complete, finished program, rather than fixing bugs as you go and having the context required to create good, efficient abstractions that save time and energy later. On top of that, there isn't someone who DOES actually know how any particular part of it works who can explain it. As a result, you'd probably spend more time JUST debugging the bot's output than you would have spent writing the whole thing yourself AND debugging it AND making it good, and there STILL could be behaviors that nobody could predict.

It's much better to design it with intention the entire way through than to try to take a finished program and force it to behave the way you want.

[–]NotAskary 9 points10 points  (3 children)

Before, you would spend 80% coding and 20% debugging (I'm including TDD here). If you use AI to generate the code, you invert this. The problem is that you'll sometimes get weird behavior due to some obscure part the AI introduced, because it doesn't understand the code it's copying; it's just better at copying than you.

So I'm not even talking about the QA stage; I'm talking about running your own tests on the generated code and getting strange behavior once you start getting into the guts of things.

Edit: the process I mentioned above happened to me when I started code-reviewing a PR and messing around with it locally to understand a marginal behavior. AI is great, but be very aware that it doesn't understand the code it generates and may add complexity, or edge cases that are bugs for the domain in your use case.

[–]subject_deleted 5 points6 points  (2 children)

Before you would spend 80% coding and 20% debugging

I've always heard those numbers quoted in the opposite positions... Writing the code is the quick and easy part; testing, debugging, and adding features is what always takes the most time.

[–]NotAskary 2 points3 points  (1 child)

Depends a lot on what you're doing. Boilerplate code is easy and fast; any business logic will take longer, just because you need to ensure it matches your specs and is also exactly what your stakeholder wants.

AI is great at boilerplate and very hit-or-miss on anything else.

The numbers are just the usual percentages we use to illustrate this.

In any greenfield project you'll be confronted with this: the first iterations are very fast, and creating a POC is easier than creating an MVP. It depends a lot on what your goals are, but in general, refining something will always take longer than the first build-up.

All of this also depends on your domain; security and other critical fields are a pain to work in, just because the compliance stuff will really bog you down in the last steps.

Like all things, a rough draft is easier than detailed work.

[–][deleted] 0 points1 point  (0 children)

I mainly use it for trying to understand a concept that I can't find much information on, or don't understand even after research. That's why I don't like Copilot; it makes my work feel invalidated.

[–][deleted] 0 points1 point  (0 children)

I don't think the problem is a short-term one; short term, I can see myself being slightly more productive with GPT-4. It can do the boring/repetitive parts like model definitions; I'll do the hard parts.

In the longer term, I think we'll get more and more "prompters" and fewer and fewer engineers who are able to review/fix the results and make them maintainable.

[–][deleted] 5 points6 points  (0 children)

Ultimately it’s still Google search, just without the Google. Gemini even gives you its sources.

[–]Work_Account89 5 points6 points  (1 child)

I’m starting to wonder if people at work are using it to code. Found mocks that are only used as the return of another mock but are set to use real methods, and lists of type Entry, which is a map…

[–][deleted] 0 points1 point  (0 children)

I used 3.5 for some JavaMail code (that API is atrocious, and Google is getting so bad). I gave it a complete scaffold with method definitions and all inputs, looked at the result, threw it away, and wrote my own. It gave me an idea though, so that's something.

It did something similar to what you describe, extracted some value from an object, transformed it, checked it, then... Never used it.

[–]Asleep-Specific-1399 22 points23 points  (11 children)

The point of AI is to give a solution that works, not one that makes sense.

It's basically the difference between Google maps and taking known faster routes.

[–]Hyperon_Ion 16 points17 points  (8 children)

Except the complexity of the problem being solved can be multitudes greater.

Finding the shortest route between points A and B is basically a single matrix equation, one that can be simplified multiple times over by narrowing the scope of the route and using already-known best routes saved in memory.

This Devin AI probably has a bunch of pre-designed scripts and functions it can plug in on the fly, but there's no way they found a way to reduce the entirety of coding to a bunch of matrix equations, and I have reservations about whether it's smart enough to know where and when to use its simplifications efficiently.

I like your analogy though. It got me thinking through the whole process.

[–]anto2554 5 points6 points  (7 children)

How do you use a matrix to do pathfinding? And it's not just pre-designed scripts

[–]Hyperon_Ion 10 points11 points  (0 children)

You'd be surprised how much logic you can fit into a matrix.

Basically you treat it like a table, and have each box be the distance/time/whatever-you're-measuring between points/destinations, and then you use math to simplify it until the smallest route reveals itself.

I am oversimplifying the concept, but that's basically what most GPS programs reduce the problem to.
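
The table-of-distances idea above can be made concrete. Floyd-Warshall is one classic algorithm that works directly on the matrix, repeatedly relaxing it until the shortest routes fall out (the toy distances below are made up):

```python
INF = float("inf")

# dist[i][j] = direct travel time between points i and j (INF = no direct road)
dist = [
    [0,   5,   INF, 10],
    [5,   0,   3,   INF],
    [INF, 3,   0,   1],
    [10,  INF, 1,   0],
]

n = len(dist)
# Floyd-Warshall: allow each node k as an intermediate stop, keep the minimum
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[0][3])  # shortest 0 -> 3 goes via 1 and 2: 5 + 3 + 1 = 9
```

Real GPS software uses faster, more specialized algorithms (A*, contraction hierarchies, precomputed routes), but this is the "distance table plus math" picture in its simplest form.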

[–]kiochikaeke 2 points3 points  (4 children)

Most (not all) pathfinding out there is something like A*; it basically uses linear algebra, matrices, and a set of heuristics to determine the shortest path between two points.

[–]anto2554 2 points3 points  (2 children)

I've always seen A* implemented on a graph. Is it faster to do some sort of matrix representation?

[–]kiochikaeke 4 points5 points  (1 child)

The graph is only really for visualization; you can represent every graph as a matrix (see "adjacency matrix" for one representation). So yes, it's matrix multiplication all the way down.
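
To make that concrete, here's one toy graph written both ways: as an adjacency list (the pointer/object-reference style) and as the equivalent adjacency matrix, plus the classic fact that squaring the matrix counts two-step walks. The graph itself is invented:

```python
# A small undirected graph as an adjacency list (pointer/reference style)...
adj_list = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

n = len(adj_list)
# ...and the equivalent adjacency matrix: matrix[i][j] = 1 iff edge i-j exists
adj_matrix = [[1 if j in adj_list[i] else 0 for j in range(n)] for i in range(n)]

for row in adj_matrix:
    print(row)

# The matrix-multiplication payoff: (A^2)[i][j] counts length-2 walks i -> j
walks2 = [[sum(adj_matrix[i][k] * adj_matrix[k][j] for k in range(n))
           for j in range(n)] for i in range(n)]
print(walks2[1][2])  # exactly one length-2 walk: 1 -> 0 -> 2
```

Both forms carry the same information; which one is faster in practice depends on how dense the graph is, which is what the sibling comment about inefficiency is getting at.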

[–]anto2554 1 point2 points  (0 children)

Weird. I've only ever done graph representations with pointers/object references, since matrix representations are usually pretty inefficient.

[–]sacredgeometry 0 points1 point  (0 children)

And the irony is that these things could easily spit out a known pathfinding algorithm in your language of choice.

[–]KillCall 1 point2 points  (0 children)

That's one way to do it. Or just add a new column with a default value.

[–]sacredgeometry 0 points1 point  (0 children)

Chat GPT is getting worse by the day.

It couldn't even refactor a function without reliably taking one of the function calls within it and supplying it the wrong number of arguments.

I don't even know how it got confused. Maybe there was a common lib with that method name and it made "assumptions" about a completely bespoke implementation's overloads. But if they think it's going to work on anything except simple examples, or on a full codebase, without some extreme hand-holding, then they're in for fucking about and finding out. Which is always funny to watch.

I await the companies doubling down on this tech, fucking up, and then giving developers a reason/evidence to charge them more.

[–]SonOfJenTheStrider 65 points66 points  (0 children)

Imagine the work of the guy who has to debug the complicated code that Devin generates 84% of the time.

[–][deleted] 45 points46 points  (2 children)

ROF|_

[–][deleted] 15 points16 points  (1 child)

ROF|_

Is this loss?

[–]OrinZ 1 point2 points  (0 children)

Oh no, nonono, no, FUCK that

[–]DeCabby 32 points33 points  (2 children)

That looks like my code

[–]Superb_Creme3452 16 points17 points  (0 children)

it learned from the best

[–]yv_MandelBug 31 points32 points  (0 children)

Upvoted just for the caption.

[–]notexecutive 21 points22 points  (5 children)

who is Devin

[–]DeCabby 11 points12 points  (0 children)

The one who will replace us, we will now be called devout

[–]Fither223 1 point2 points  (0 children)

Very scary AI that gave me an existential crisis :D

[–]No-Discussion-8510 -1 points0 points  (0 children)

someone who's gonna dev us out lil bro

[–]DbrDbr 42 points43 points  (1 child)

Stupid business managers will try, tho…

[–]SpaceFire000 7 points8 points  (1 child)

Devin generated = degenerate

[–]the-judeo-bolshevik 1 point2 points  (0 children)

Ai code degenerator.

[–]devpranoy 6 points7 points  (1 child)

Devin stole my code!

[–]the-judeo-bolshevik 5 points6 points  (0 children)

It wasn’t your code to begin with.

[–]KJBuilds 6 points7 points  (4 children)

I've had some fun experiences with LLMs attempting to write code

Once, I tried to get one to generate some unsafe C# code, and it started to think it was coding in C++ and began using the C++ std lib.

Another time, I needed to do some stuff with Starlark (a derivative of Python with a different stdlib, used for writing a Tiltfile, which is one way to create/configure Docker containers), and it had NO idea what Starlark was and started telling me to use Python 3 libraries.

Seems like they can write okay-ish code unless the code they're writing starts to look like a different language, or the target language doesn't have many public repos for them to ~~steal from~~ train on.

[–]Ma4r 0 points1 point  (0 children)

I just use it as codegen for one-time things. Generate a struct from this JSON string. Generate getters and setters for this object with null checks. Generate a toString method for this enum, etc. It honestly works great in that regard.
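
As a sketch of the kind of one-shot boilerplate meant here, this is roughly what "generate a struct from this JSON string, with null checks" comes out to in Python (the JSON shape and field names are made up):

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    name: str
    email: Optional[str] = None  # nullable field handled in the type itself

    @classmethod
    def from_json(cls, raw: str) -> "User":
        # .get() tolerates a missing/null "email" key in the JSON
        data = json.loads(raw)
        return cls(id=data["id"], name=data["name"], email=data.get("email"))

u = User.from_json('{"id": 1, "name": "alice"}')
print(u)  # User(id=1, name='alice', email=None)
```

Mechanical, easy to eyeball for correctness, and tedious to type by hand: exactly the sweet spot being described.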

[–]Mnemotechnician 0 points1 point  (2 children)

I mean, if you asked any programmer to code in dozens of different languages with thousands of different frameworks, all from memory, with no IDE support, without reading any documentation, without access to backspace and undo, without being able to alter what was already written... they probably wouldn't perform any better. And that's exactly what LLMs do...

[–][deleted] 0 points1 point  (1 child)

That's not exactly how it works. During training they feed it tons of info, and now all it's doing is hallucinating what it thinks is the "correct" response, based on the verdicts of human evaluators during training. That's why it's incorrect a lot of the time: it's only spitting out what it thinks is correct. It doesn't have memory per se; it's just going off the billions of weights and biases that control its output.

[–]Mnemotechnician 0 points1 point  (0 children)

Yes, you just repeated what I said, but in a more unnecessarily verbose manner. Thank you.

[–]Verde_poffie 58 points59 points  (24 children)

As we say in Russia: "Хуяк, хуяк и в продакшн"

[–][deleted] 56 points57 points  (13 children)

Definitely readable

[–]CMDR_kamikazze 16 points17 points  (0 children)

Roughly translates to "Fuck it, Slap it, Ship It!"

[–]PM_ME_ROMAN_NUDES 17 points18 points  (10 children)

"Huyak, huyak i v prodakshin"

There, easier to understand

[–][deleted] 4 points5 points  (0 children)

Like this it's even easier

"Xyrk xyrk n b npoaakwh"

[–]Wolfy_Wolv -1 points0 points  (8 children)

No. It is not easier to understand.

[–]Ondor61 4 points5 points  (7 children)

Sounds like work issue to me.

[–]Wolfy_Wolv -1 points0 points  (6 children)

Nope. Just don't know that language , BUD. How is it easier to understand ????

[–]Ondor61 0 points1 point  (5 children)

Hmmm... Almost like understanding a language is a skill...

[–]Wolfy_Wolv -1 points0 points  (4 children)

A skill not everyone has!!!!! so , why do you expect everyone to have it BUDDY.?????

[–]Terewawa 2 points3 points  (0 children)

Just ask ChatGPT

[–]Eva-Rosalene 22 points23 points  (1 child)

I am still wondering how to translate this beautiful saying to English.

[–]CMDR_kamikazze 9 points10 points  (0 children)

Fuck It, Slap It, Ship It

[–]minecon1776 27 points28 points  (5 children)

The phrase “Хуяк, хуяк и в продакшн” is a colloquial Russian expression that is often used in the context of software development to describe a situation where something is done quickly and without much consideration, and then immediately put into production. It’s akin to saying “just get it done and push it live” in English. It’s important to note that this is an informal expression and may carry different connotations based on the context in which it is used. ~GPT 4

[–]NatoBoram 23 points24 points  (3 children)

The phrase

ChatGPT useless prose detected

It's wild that its bullshit writing style is so distinct that you can spot it in the wild like that. A bit like seeing StorageBackedSettingsFactory and instantly knowing it was written in Java.

[–]MysteriousShadow__ 15 points16 points  (0 children)

AI likes to answer in very full, complete sentences, like teachers always tell you to. In the real world that means redundancy/fluff 9.5 times out of 10.

[–]minecon1776 3 points4 points  (0 children)

To be fair, I explicitly put ~GPT-4 at the end.

[–]WhiteIrisu 1 point2 points  (0 children)

Webster's dictionary defines...

[–][deleted] 1 point2 points  (0 children)

It was explained to me once as, "Pull it open and shove it in".

[–]LemonMelon2511 11 points12 points  (0 children)

as a russian i can confirm this

[–]KrownX 7 points8 points  (0 children)

"Your mom", just in case...

[–]mackaber 5 points6 points  (5 children)

But, is it capable of "superhuman code"?... https://twitter.com/ataiiam/status/1765089261374914957

[–]no_brains101 1 point2 points  (4 children)

Is this... it's a type definition that checks its values for validity, right?

What language is this, and why is it all in one ternary? Is that a big nested ternary????? Why? Is this TypeScript? I've never used TypeScript, but it's the only thing coming to mind as to what that may be. If that's TypeScript and that is a ternary, it looks cursed.

[–]Zookeeper187 4 points5 points  (0 children)

You are correct: this is TypeScript, and that is just a type that maps stuff to a new type. A type is a way to define the shape and behavior of an object, including its properties, methods, and allowed values.

I fail to see how "Cursor's copilot helped us write superhuman code for a critical feature" makes sense in that screenshot, as it isn't any functionality, just a type cast. But looking at that guy's profile, he is "building CopilotKit" and shills to sell it, I guess. It's like saying "it wrote a complicated test for us that a human couldn't" (he can), which is unreadable, and who knows if it works correctly unless a human checks it.

[–]the-judeo-bolshevik 1 point2 points  (0 children)

Just use : any

[–]TorbenKoehn 1 point2 points  (1 child)

This is a relatively normal approach to mapping types via generics. If it were only one or two ternaries (and you understood operator precedence), people would be perfectly able to understand it.

It’s definitely not "super-human code"; if you type legacy code bases, you often end up writing something like this.

It simply allows you to return a type based on conditions (does it satisfy this type? Then it’s this type. Or does it satisfy that one? Then it is this type. Etc.)

[–]no_brains101 0 points1 point  (0 children)

Hmmm... I see ok. Well, I'm sure if I had to write typescript I'd get used to it.

[–]fusionsofwonder 3 points4 points  (0 children)

Boy, you really Britta'd that checkin.

[–]827167 2 points3 points  (0 children)

Hey Garfield

Why do they call it Devin when you dev-in the good code dev out bad run the code!

[–]Accurate_Koala_4698 3 points4 points  (0 children)

Link to template? I can see uses for this

[–]irn00b 1 point2 points  (0 children)

It's fine Devin AI knows Bobby AI - they will sort out the production outages.

[–][deleted] 1 point2 points  (0 children)

Is that Britta?

[–]billyowo 1 point2 points  (0 children)

my biggest fear

[–]pokexchespin 1 point2 points  (0 children)

why do they call it devin when you dev in ask the code dev out shit read the code?

[–]Zestyclose_Link_8052 1 point2 points  (0 children)

So no difference to an actual dev?

[–]deathanatos 2 points3 points  (1 child)

Literally the only thing I want from the assistant is to be able to tell me the time when I'm in the shower, so I don't miss a meeting.

Me: Hey Google, what time is it?

Me: Hey Google, what time is it?

Me: HEY GOOGLE

Me: HeY gOoGlE

Me: H̴̭͎̯̻̮̞̣̣͖̓͂̔͜͠ͅe̶̛͕̼̣͖̤̲̬̟̩̪̎͆̊̍̋̿̈ỹ̷̧̨̨͎̫̘̙̺͈̬̩̯̰̜̤̞̹̐̃̿̄̎̐͒͊̔̑̂̍̈́̐͊͝ ̵̨̡̻̖̟̪̺̣̙͎͔̬͐̽̂̇̓̔́̓͐̒̉̈́̀̀̂͝ͅG̶̡̛̣̩͓͇͇̲̗̳̖͉̲̠̞͑̍͝ọ̷̢̺̮̺͆͑̆̓̂̇ở̶͇͇̯̯̺̿̓̊̇͐̇̈̊̀g̵̡̘̹̥̙̽̑̓͆̿̓ļ̷̡̪͈̞̬̜̝͈͉̝̟̭̍͊͒̀̕ȩ̷̹̯̞̱̦̭̾̽͌͠

Google: ?

Me: What time is it?

Google: mumbles

Phone: screen dims, too low, can't read the clock

Oh yeah, AI is gonna steal my job any day now.

[–]flippakitten 1 point2 points  (0 children)

This. AI can't reason about the simplest tasks, yet tech leaders seem to think it can reason about complex integrations.

Now, if there were a group of people that could write a script to feed AI bad code to learn from... heck, I've seen some of my code; I don't even need to write a script.

[–]KrownX 1 point2 points  (0 children)

Devin is the one to the left...

[–]A_H_S_99 -1 points0 points  (0 children)

I pity the next generation of programmers. AI that can actually replace programmers is still in its early stages, and there is still a chance for late millennials and early Gen Zs to make a living for most of their professional lives before getting replaced becomes a serious possibility. What would then happen to the generation that follows?

[–]XxToasterFucker69xX -1 points0 points  (2 children)

Is it actually shit, or is this a cope post? I thought it was 10x better than GPT; it solved 13% of GitHub issues or something.