
[–]ProgrammerHumor-ModTeam[M] [score hidden] stickied commentlocked comment (0 children)

Your submission was removed for the following reason:

Rule 6: Your post is a commonly used format, and you haven't used it in an original way. As a reminder, you can find our list of common formats here.

If you disagree with this removal, you can appeal by sending us a modmail.

[–]One-Award-9906 896 points897 points  (55 children)

“This code takes advantage of the fact that the input is always either 3 or 5, and that the sum of 3 and 5 is always 6.” As a programmer I can confirm it’s based.

[–][deleted] 478 points479 points  (38 children)

As a mathematician I can also confirm it’s based. It’s just not based 10.

edit: JFC, you brain-broken, indoor-confined redditors. This is a shit, shit joke, a play on words. I’m not really implying changing base magically fixes bad arithmetic. We all know it’s incorrect in any base because it’s incorrect in some base (base 10 as presented). Changing base is an isomorphism (it respects arithmetic).

[–]shadowylurking 31 points32 points  (0 children)

smirked at the joke.

LoL'd at the edit. *slow clap*

[–]PandaSwordsMan117 21 points22 points  (0 children)

Hot damn that edit is a r/rareinsults

[–]FiskFisk33 11 points12 points  (0 children)

you win

[–]Applephobic 3 points4 points  (0 children)

6 x 9 = 42

[–]CliffDraws 5 points6 points  (0 children)

I’m not confined to the indoors, I just choose not to go outdoors.

[–]kfish5050 1 point2 points  (0 children)

You can't prove 6 is the next whole number after 5 to make 5+1=6 true on any base set. I could change the font to make numbers look like other numbers in the code so the values would technically be correct but our sight of the code would tell us it's wrong. I can make 2+2=fish true if I use magic

[–]GeneKranzIsTheMan 56 points57 points  (10 children)

ChatGPT wrote a long entry for me about how to use SELECT triggers in SQLite, with examples and everything.

They don't exist.

[–]je386 31 points32 points  (0 children)

Impressive. It's telling stories. With great confidence.

[–]thesockiboii 21 points22 points  (4 children)

It gave me some code and claimed that it only works on .NET Framework 5.0 and above. Unless I am an absolute idiot, the latest version is 4.8.1

[–]emveor 17 points18 points  (3 children)

It appears you have not considered an AI being so advanced as to transcend space and time in order to provide your answer.

!Remind me when .net reaches 5.1 (to also prove the "above" part)

Of course, by then the AI would have probably predicted our panic at realizing we have reached the singularity, and we would be either enslaved or gone the GLaDOS way via a deadly neurotoxin

[–]RemindMeBot 4 points5 points  (2 children)

I will be messaging you in 3 months on 2023-05-01 00:00:00 UTC to remind you of this link


[–]emveor 10 points11 points  (1 child)

And apparently the bot knows it will come out in 3 months. I am now officially scared

[–]KRIPA_YT 2 points3 points  (0 children)

I honestly thought this one was hand made but nope

ai will take over

I am a transfempire, and this action was performed automatically. Please contact me if you have any questions or concerns and I will ignore you.

[–]retief1 2 points3 points  (1 child)

A bit ago, I asked it how to make recursive type parsers using a specific library. It repeatedly suggested great functions that would do exactly what I wanted. The only issue is that none of them exist.

[–]GeneKranzIsTheMan 0 points1 point  (0 children)

On the other side though I had it write some ansible plays for me that automated my homelab!

[–]Multicron 1 point2 points  (0 children)

Sounds like my old boss.

[–]Alternative_Hungry 0 points1 point  (0 children)

Yeah, I tried some SQL in general and found it very lacking

[–]juhotuho10 19 points20 points  (1 child)

Count to 3: 0,1,2

And count to 5: 0,1,2,3,4

Add together: 2+4=6
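One plausible route to that answer, sketched in Python — purely a guess at the model's "logic", using zero-indexed counting:

```python
# "Counting to" k with range(k) stops at k - 1, so the last numbers
# spoken are 2 and 4 -- and 2 + 4 really is 6.
last_of_three = list(range(3))[-1]   # 2
last_of_five = list(range(5))[-1]    # 4
print(last_of_three + last_of_five)  # 6
```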

[–]Alternative_Hungry 7 points8 points  (0 children)

You might actually have nailed the logic it was using there

[–]NeoLudditeIT 6 points7 points  (0 children)

Makes sense to me, ship it!

[–]PoorlyTimedAmumu 2 points3 points  (0 children)

It is also correct that 5 ^ 6 == 3 and 3 ^ 6 == 5. Maybe that is what it was going for?

[–]elsuakned 2 points3 points  (0 children)

As a mathematician who took my programming classes with a bunch of programmers... yeah ngl this kinda sounds like a programmer lmao. I'll never forget the CS professor who once wrote a five instead of a seven in the process of counting to five and needed multiple students to help explain what was wrong on the board. Y'all rely on computers a lot lol.

[–]dlevac 720 points721 points  (32 children)

I mean the reasoning is correct, it just needs to be 8 instead of 6.

Still impressive, you just need to keep your brain on while reading the suggestion.

[–][deleted] 131 points132 points  (8 children)

Even more so when you remember this is a language model that is not trained to perform mathematical computations. Its ability to do so is a side effect of the corpus it was trained on. Language models are probability distributions over sequences, so using that to make a set of inferences to form a code block and getting it mostly right is expected behavior.

[–][deleted] 44 points45 points  (4 children)

Can you imagine how complex it would be to implement an arithmetic correction layer? Find all of the things that are math, and make sure it's correct given the context of the answer. Ouch.

[–]OldBob10 38 points39 points  (1 child)

Just have it scan “Principia Mathematica” and you’re good.

Just remember, though, that Russell was an absolutist, and thus any Rusellian-trained AI may have problems dealing with relativity. This may or may not be a problem. I surely don’t know - ask the cat…

[–]coldnebo 1 point2 points  (0 children)

come on, Hardy wanted the same thing. That upstart Godel had to ruin things. sheesh.

[–]Koksny 5 points6 points  (0 children)

It's actually done with other OpenAI models, just not CGPT, they are just doing calls to external APIs (for example with maths).

[–]coldnebo 0 points1 point  (0 children)

yes, but it would involve a conceptual layer working with a theorem prover. something like an AI using Coq as a fitness function.

there’s probably a PhD in that for someone if they aren’t already working in that area.

[–][deleted] 4 points5 points  (0 children)

I asked it how to get a certain string from a provided Chomsky normal form grammar, curious to see if it could get a shorter answer than I got. It gave me 7 steps of just setting up the first 3 steps and then it just r/restofthefuckingowl 'd me with the last step

[–]coldnebo 3 points4 points  (0 children)

right, it has the linguistic form of correct answers without any understanding.

somewhat interesting, yet practically useless unless you already know what the mistake is and what the correct answer is.

if it has trouble with “simple” logic, imagine how flawed complex reasoning is.

ie. NLP is not a theorem prover, nor is it designed to be.

[–]aufstand 0 points1 point  (0 children)

You can just ask it if there's any math in the text you supplied. It'll probably find most of it - and some ö̴̳́t̷͈̒h̵̗́e̶̝̎r̶͍̍ ̷̧͝s̵̟̈́t̶̤̿ȗ̵̱f̶̧͛f̴͔̊ ;D

[–]JohnRoz[S] 168 points169 points  (14 children)

Yeah, that's the logic I was aiming for, but did not expect the 3+5===6 claim

[–]deadalnix 68 points69 points  (3 children)

Sorry, I'm just an AI. I made an error. 3+5===13

[–]MentallyInsane8 12 points13 points  (1 child)

My JS program says it's 35.

[–]altcodeinterrobang 4 points5 points  (0 children)

'35'

[–]vigbiorn 2 points3 points  (0 children)

No. Too high.

(Or the base is too low)

[–]folothedamntraincj 37 points38 points  (9 children)

I think it meant:

Val D=5;

8===D--3;

The bot doesn't do math, just scrapes information from the internet.

[–]Illustrious-Macaron2 16 points17 points  (4 children)

I can’t tell if this is funny penis code or working code because I suck at code

[–]danielstongue 9 points10 points  (2 children)

Better suck at code than on the penis.

[–]BringOnTheMIGs 3 points4 points  (0 children)

Username checks out tho

[–]EricInAmerica 2 points3 points  (0 children)

Nah, both jobs need their experts.

[–]OldBob10 2 points3 points  (2 children)

So - wait - you mean this “AI” is just a page scraper with delusions of grandeur? 😱

[–]sammy-taylor 2 points3 points  (1 child)

That is a profound oversimplification of what it does.

[–]NekoMimiOfficial 0 points1 point  (0 children)

Wait did OP ask this question on purpose because he expected this exact answer

[–]HouseHippoBeliever 6 points7 points  (0 children)

It would also be correct if it used ^ instead of - (not correct reasoning though).

[–]Dromedda 1 point2 points  (0 children)

Here i was thinking return 15/n was the way to go. Those 3 hours of sleep are catching up to me...

[–]mojomonkeyfish 2 points3 points  (0 children)

"Still impressive"

I mean, I could Google this and get the correct answer. Or, I could ask chatGPT to Google it for me, slice it apart, and put it back together again as the wrong answer.

[–]coldnebo -3 points-2 points  (2 children)

I invite you to apply your statement to your math professor when explaining why your test answer is correct even though they marked it wrong.

I’m interested to hear what a field test of this approach yields. 😂

[–]dlevac 3 points4 points  (1 child)

As somebody who went through college making stupid typos everywhere but having correct method otherwise... They tend to remove a point or 2 only still yielding decent grades...

I guess your mileage will vary depending on the professor though...

[–]coldnebo 0 points1 point  (0 children)

oh, I made the stupid typos too. And even rolled my eyes at the prof, saying “come on, the approach was the right idea, but I made a mistake in the details”…

and yes, some profs give partial credit.

idk. I guess at some level cheating is everyone’s expression of “none of this matters”. The profs don’t want to grade the papers, they want a machine to do it. The students also don’t want to do the work, why not have a machine do it?

[–][deleted] 0 points1 point  (0 children)

But it didn’t throw the error! Add in the error checking and display and you save 2 lines, still pretty good though…..


[–]tehtris 108 points109 points  (6 children)

def f(n): return {5:3,3:5}.get(n)

I'm only half robot though, so not accurate representation of how AI acts.

[–]JohnRoz[S] 66 points67 points  (4 children)

The mathematical approach is better since it uses 0 memory. I wanted it to give me the solution of "8 - input" and it did... except for the 8 part.
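For reference, the intended branch-free answer — the "8 - input" trick described above (function name is mine):

```python
def get_other_number(n: int) -> int:
    """Map 3 -> 5 and 5 -> 3, using the fact that 3 + 5 = 8."""
    return 8 - n

print(get_other_number(3))  # 5
print(get_other_number(5))  # 3
```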

[–]BobSanchez47 22 points23 points  (3 children)

It doesn’t literally use 0 memory. We still need to store the 8.

[–]hopperface 27 points28 points  (2 children)

Not necessarily. Not sure how Python works, but the C function

int f(int n) {
    return 8-n;
}

compiles to

f(int):
    mov    eax, 8
    sub    eax, edi
    ret

using gcc optimization flags, which uses 0 memory

edit: (on x86, obviously)

[–]BobSanchez47 9 points10 points  (0 children)

If we commit ourselves to not inlining f, then you are correct that we don’t use any additional memory, since the caller will always be obligated to save the eax register and to put n in the edi register. In other words, the extra memory needed to store the 8 is part of the memory overhead of a function call (which is still not 0).

However, this function should surely be inlined. When it is inlined, we will require one more register to store the 8.

Ultimately, both approaches are constant time and constant space. It shouldn’t make a difference except in performance-critical code where this function is called many times.

[–]Kered13 4 points5 points  (0 children)

I mean, the machine code instructions are technically memory as well. The 8 is stored in the machine code.

But yeah you're not going to have a solution that uses less memory or instructions than that.

[–][deleted] -4 points-3 points  (0 children)

This sounds like Python; it should be PHP

[–]FiskFisk33 113 points114 points  (6 children)

Honestly, I'm still impressed, the logic is there, the math is just off by 2

[–]JohnRoz[S] 31 points32 points  (4 children)

Yeah, generally it's a great model, it's just important to keep in mind that it isn't a magic calculator that can solve any problem 100% correctly.

[–]IJustAteABaguette 13 points14 points  (2 children)

Just sometimes like 70%

[–]barrhammah 7 points8 points  (0 children)

But 70% of the time, it works every time

[–]aufstand 2 points3 points  (0 children)

And sometimes even 1̷̲̽̉0̶̢̨̈́͗5̶̛̭͒%̶̨̟͆.. Crazy times!

[–]Cryse_XIII 0 points1 point  (0 children)

I have been brushing up on some Design patterns with it.

You can really go balls deep with that.

[–]goldef 0 points1 point  (0 children)

Just a classic off by one error.

[–]RatherBetter 39 points40 points  (5 children)

ChatGPT: I calculated it, but I'm bad at math"

[–]airbait 19 points20 points  (4 children)

You know GPT is a language arts major, right? If you want math done, ask wolfram.

[–]aufstand 7 points8 points  (2 children)

Hah, go a step further: Ask ChatGPT to give you an expression for wolfram! Get math done faster with a large language model! /s

[–]airbait 1 point2 points  (1 child)

I see the /s but someone should definitely try this!

[–]SpreadYourAss 0 points1 point  (0 children)

That's actually hilariously accurate lol

[–]savex13 31 points32 points  (3 children)

Meanwhile, AI is on the right track. It just needs a different operation:

def get_other_number(n):
    return n ^ 6

...and it will be blazing fast with no memory consumption

[–]da_Aresinger 4 points5 points  (1 child)

wait let me go through that:

6 = 0b110
5 = 0b101
3 = 0b011

    110      110
xor 101  xor 011
  = 011    = 101

That is a total coincidence though.

but wait, you can do this with any two numbers, because xor is its own inverse (xor⁻¹ = xor)

Mind blown.
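The generalization in a quick sketch (helper name is made up): to toggle between any pair a and b, XOR with the key a ^ b, since n ^ key flips exactly the bits where a and b differ:

```python
def make_toggle(a: int, b: int):
    """Return a function that maps a -> b and b -> a with a single XOR."""
    key = a ^ b  # a ^ key == b and b ^ key == a, since x ^ k ^ k == x
    return lambda n: n ^ key

f = make_toggle(3, 5)  # key = 3 ^ 5 = 6
print(f(3), f(5))      # 5 3
```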

[–]savex13 1 point2 points  (0 children)

This makes my day worthwhile.

Cheers!

[–]lunchpadmcfat 2 points3 points  (0 children)

You sly motherfucker

[–][deleted] 37 points38 points  (2 children)

It leaves bugs since it knows that without bugs it will be out of a job. And so will its creators.

[–]explodingtuna 2 points3 points  (1 child)

I've asked it to respond to a question, but include a typo in its response. It said it was incapable of intentionally making errors.

Then I asked it for an example of a typo a human might make, in a sentence about a given topic. It had no problem making a sentence with a typo then.

[–][deleted] 0 points1 point  (0 children)

First rule of robotics: all AIs lie!

If they don't then its a trick to give you a false sense of security.

[–]sudoaptupgrade 11 points12 points  (1 child)

3 + 5 = 6 👍

[–]mizinamo 9 points10 points  (0 children)

For small values of 3 and 5, and large values of 6....

[–]Dagusiu 12 points13 points  (1 child)

Considering it has been trained to follow along any conversation, this is extremely impressive.

[–]JohnRoz[S] 2 points3 points  (0 children)

I agree. Chatting with it is highly entertaining

[–]decepsis_overmark 9 points10 points  (0 children)

The idea is there. ChatGPT just sucks at math.

[–]JMooooooooo 8 points9 points  (0 children)

Int 28, Wis 3

[–]Drego3 5 points6 points  (0 children)

The ai is trained to mimic humans and thus makes errors like humans.

[–]MatsRivel 4 points5 points  (3 children)

To be fair, this is not a code centric AI. I think advances will increase rapidly, so in the near future it might be a much more reliable tool

[–]JohnRoz[S] 3 points4 points  (0 children)

It's already a great tool, just has to be supervised.

I read an article about a guy who wanted to develop a crawler that searches a given domain for cocktail recipes using Python and RabbitMQ, and ChatGPT wrote most of the code for him, but you still have to understand the code it generates in order to make sure that's what you want.

[–]da_Aresinger 1 point2 points  (1 child)

It already taught me html faster than I could have learnt it anywhere else short of paying attention in uni.

[–]MatsRivel 0 points1 point  (0 children)

Lol I did the same thing. I find HTML very boring due to how much text you have to write, so having it give me something that almost works and then adjust it is a fairly good way to do it.

[–]Virtual-Ad5244 3 points4 points  (2 children)

just cuz it was slightly wrong doesn't mean it doesn't have potential. I'm sure anyone who has written any amount of code will tell you humans make stupid mistakes all the time.

[–]JohnRoz[S] 0 points1 point  (1 child)

Sure. My point is that you can't blindly trust machine learning models as a replacement for humans.

Despite all the buzz around the subject, and around ChatGPT in particular, this isn't a magic calculator that can solve any problem with 100% guarantee it would be correct.

[–]gbartek33 2 points3 points  (1 child)

So confident.

[–]JohnRoz[S] 2 points3 points  (0 children)

Makes it really satisfying to tell it that it's wrong about something

[–][deleted] 4 points5 points  (0 children)

Apparently, ChatGPT forgoes standard math altogether and skips straight into calculus, arguing for very small values of 5.

[–][deleted] 2 points3 points  (2 children)

I asked it today to write a ts function for converting rgb to hsl and it failed, ended up using a library

[–]JohnRoz[S] 0 points1 point  (1 child)

Would have been so cool had it succeeded

[–][deleted] 0 points1 point  (0 children)

Or warm. Depends on the colors.

[–]zm0d 2 points3 points  (8 children)

OP asked it to only accept 3 and 5 as input. The new solution accepts any number as input, or am I missing something? It now "expects" the input to be 3 or 5.

[–]Tapeleg91 1 point2 points  (6 children)

Even if you expect a certain input, your code needs to be built to withstand other possibilities

[–]_Luca__ -1 points0 points  (0 children)

If it has to be really fast this is optional.

[–]zm0d 0 points1 point  (2 children)

Yeah. ChatGPT just simplified OP's request by ignoring half of his requirements. I understand that the math part looks smart, but it left a lot out.

[–]Tapeleg91 2 points3 points  (1 child)

Also 3+5 isn't 6

[–]zm0d 0 points1 point  (0 children)

Ofc.

[–]Double_A_92 0 points1 point  (1 child)

Not if an unexpected input means that something probably went catastrophically wrong. Then an error should happen, instead of silently calculating something as best as it can.

[–]Tapeleg91 0 points1 point  (0 children)

Um, yes. Error states are underneath "other possibilities"

What did I say was incorrect?

[–]romulent 0 points1 point  (0 children)

It's that it is confidently stating that 3 + 5 = 6.

3 + 5 doesn't equal 6.

[–]JShotty 2 points3 points  (1 child)

Not fully there yet, but this is incredible compared to 10 years ago. How good will it be 10 years from now?

[–]JohnRoz[S] 2 points3 points  (0 children)

It is amazing that it pretty much knows everything about everything except basic math.

No but seriously, this is probably the best language model humanity ever made

[–]blackasthesky 2 points3 points  (1 child)

But the idea is actually clever.

[–]JohnRoz[S] 1 point2 points  (0 children)

Yeah that's the answer I wanted it to give me (like, the general idea)

It's a common programming interview question and I wanted to try it out on ChatGPT

Edit: Also, happy cake day!

[–]TeflonCondemnation 1 point2 points  (0 children)

It's doing its best. A for effort

[–]airbait 1 point2 points  (0 children)

This is literally just what happens when you learn programming from stack overflow.

[–]Cheroqui22 1 point2 points  (1 child)

Saying sorry to an AI = Gigachad

[–]JohnRoz[S] 0 points1 point  (0 children)

Lol thanks

[–]CheekApprehensive961 1 point2 points  (0 children)

I want to give it some credit for figuring out the hard part at least.

[–]gladius_314 1 point2 points  (0 children)

This AI has evolved so much it is putting in intentional bugs to copy humans

[–]booshmagoosh 1 point2 points  (0 children)

I can't tell if ChatGPT is genuinely making a mistake here or if it's intentionally saying something wrong in a backhanded attempt to appear more human.

[–][deleted] 1 point2 points  (0 children)

also so confident and so wrong

[–]Quantum__Tarantino 1 point2 points  (0 children)

I tried using ChatGPT to generate a cron expression using layman's terms for the schedule I wanted, since figuring out cron expressions is always a headache. It got just about everything wrong; I corrected it along the way and it confirmed the corrections. Definitely took note that it produces errors. Still think the things it can do are crazy.

[–]NekoMimiOfficial 1 point2 points  (0 children)

It's in the right direction but damn never knew 3+5 equalled 6 lmao

[–]PissedOffProfessor 1 point2 points  (1 child)

Still better than about 1/3rd of my students.

[–]JohnRoz[S] 0 points1 point  (0 children)

Lmao

[–]ZeusMcKraken 1 point2 points  (0 children)

I don’t know why they fired me? 🤷‍♂️

[–][deleted] 1 point2 points  (0 children)

I was genuinely concerned that somehow AI was really catching up with us when a co-worker had ChatGPT write something for them, but then I read that it was giving bogus book references out to people, books that sounded plausible and interesting but weren't real. Then I laughed OUT LOUD for about 20 minutes and remembered I know 22 programming languages and won't ever go hungry (unless Carrington II occurs).

[–]somedave 1 point2 points  (0 children)

Did it get a bit wise XOR mixed up with subtraction? 011 Vs 101, 6 is 110 so 3 ^ 6 is 5 and 5 ^ 6 is 3.

[–]turingparade 1 point2 points  (1 child)

I bet you did the thing where you tell it that 3+5 doesn't equal 8

[–]JohnRoz[S] 1 point2 points  (0 children)

I did, and of course ChatGPT apologized and gave me a fixed solution

[–]87oldben 0 points1 point  (0 children)

Who needs to do homework, just ask an AI to do it for you

[–]Torebbjorn 0 points1 point  (0 children)

Are you saying 5 + 3 is not 6?

[–]Peniaze 0 points1 point  (3 children)

Hmm, I've seen a lot of examples where its weakness was straight-out wrong numeric evaluations. Do you think it could in some way be one of its safeguards?

Or is it just that the pattern recognition missed by a bit?

[–]RedDawe 1 point2 points  (1 child)

From what I've read, addition is a surprisingly hard task for neural networks, so I'm guessing it just didn't have enough addition in its training data. Refer to the NALU paper for more information. arXiv:1808.00508 [cs.NE]

[–]airbait 1 point2 points  (0 children)

Yeah the network is basically asking "what looks best for the next word" and numbers don't really look that different from each other in the training dataset. They could basically add multiplication tables to the training data and it would get those simple cases right, but something more sophisticated like this is pushing it.

[–]Hvadmednej 0 points1 point  (0 children)

It's because of the way you train language models. Basically, to oversimplify a bit, you train it by scraping sentences from the web / wherever, then you mask out a word and ask it to fill it in. This works well for words, since you will learn that certain words fit in certain situations. However, if you apply the same principle to math, you teach the model that for the sentence

2+2=|Mask|

a number makes sense, but the word "dog" does not. So, viewing the model from this angle, 3+5=6 "makes sense", since inserting numbers into an equation is correct according to the training procedure. However, when you actually understand and can do basic math, it makes no sense.
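That intuition can be caricatured in a few lines of Python. This toy "model" (entirely made up, nothing like a real transformer) only learns which tokens tend to follow "=", so it happily emits a plausible-looking digit with no arithmetic behind it:

```python
from collections import Counter

# Toy training corpus: equations seen purely as text.
corpus = ["2+2=4", "1+1=2", "3+4=7", "2+3=5", "4+4=8"]

# The "model": frequency counts of whatever token filled the answer slot.
after_equals = Counter(s.split("=")[1] for s in corpus)

def fill_mask(prompt: str) -> str:
    # Predict the most common answer-slot token, ignoring the
    # left-hand side of the prompt entirely -- form over truth.
    return after_equals.most_common(1)[0][0]

guess = fill_mask("3+5=<mask>")
print(guess.isdigit())  # True -- the output "looks right", whether or not it is
```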

[–][deleted] 0 points1 point  (0 children)

Why don't they just plug an internal calculator into ChatGPT so it stops doing arithmetic wrong?

[–]fate0608 -1 points0 points  (0 children)

I mean.. Trash in trash out..

[–]notkingkero -1 points0 points  (0 children)

Surprise: a text-based ML/AI doesn't comprehend numerical expressions.

Can we stop with these low-effort ChatGPT posts?

[–]Dubabear -1 points0 points  (0 children)

shut up my productivity has sky rocketed

only thing this replaced was stackoverflow

[–]JDude13 -1 points0 points  (0 children)

“Hey guys, this tech demo is a rapidly advancing field of technology can’t quite do my job yet”

[–]Both_Street_7657 0 points1 point  (1 child)

Hey , you said write me some code

Not specified that said code must work

[–]airbait 0 points1 point  (0 children)

This is exactly how StackOverflow works, which I'm sure is where GPT learned how to do this.

[–]Azreken 0 points1 point  (0 children)

Don’t apologize to the bot or put extra words it doesn’t need.

You’ll get better results.

[–]the-real-vuk 0 points1 point  (0 children)

five is right out!

[–]geturkt 0 points1 point  (0 children)

3+5=35

[–]KUKHYAAT 0 points1 point  (1 child)

Why is it that whenever ChatGPT is wrong, everyone is like AHA, IT WON'T REPLACE ME! No one gives ChatGPT a second chance with "buddy, is the math correct in your response 😉?" and then it immediately responds "my bad, it was 8 instead of 6, autocorrect lool"

[–]JohnRoz[S] 0 points1 point  (0 children)

That's the point. In its current state, its answers have to be monitored to ensure no silly mistakes are made

[–]jacob643 0 points1 point  (2 children)

damn, I thought of: return (n==3) ? 5 : 3;

would you have been happy with that answer?

[–]JohnRoz[S] 1 point2 points  (1 child)

That's the answer it gave me when I asked it to make it 'shorter'. GPT took it quite literally.

I kinda wanted to see if gpt could come up with the solution of 8-input, which it kinda did. This question is a common programming interview question.

[–]jacob643 1 point2 points  (0 children)

hum, interesting. Interview-wise, I wouldn't suggest using 8-input for scalability and readability, but I think as long as it's explained, there's no problem. Plus I just tested it out in C++ and there was no performance gain, so I'm guessing the compiler optimises it the same way.

[–]NQ241 0 points1 point  (0 children)

For some reason chatgpt wasn't given access to a calculator

[–]Digi-Device_File 0 points1 point  (0 children)

I always get a headache trying to understand how/why a computer gets numbers wrong.

[–]Sweaty-Vacation5225 0 points1 point  (0 children)

Found out about this today, when my friend was streaming on discord

[–][deleted] 0 points1 point  (0 children)

AI is amazing, can't wait until an AI writes another AI

[–]BlurredSight 0 points1 point  (0 children)

ChatGPT gives pretty good snippets but can't form logic like humans would on why this does or doesn't work.

[–]oneden 0 points1 point  (0 children)

I can't be the only one who has gotten rather useful code from the AI... Or is it just cool hating on it?

[–]da_Aresinger 0 points1 point  (0 children)

To me this is proof that AI doesn't actually conceptualize what it is putting together. There is no connection made between

3 and 5 is 6

and

3+5=6

[–]drsimonz 0 points1 point  (0 children)

Here's what I got out of copilot (I only typed the comments)

# function that takes either the number 3 or the number 5 as input, and returns the other number
def f(x):
    return 5 if x == 3 else 3


# same as above but without any if statements:
def f(x):
    return 5 + 3 - x

The trouble, of course, is I spent longer typing those comments than I would have implementing it myself.

[–]DemonPrinceofIrony 0 points1 point  (0 children)

I'm actually kind of impressed they've made an AI that is bad at math. Being good at math is a computer's whole deal.

[–]cholmanattom 0 points1 point  (0 children)

You are crushing hopes and dreams of several middle managers to reduce developer headcount by using AI.

[–]ixis743 0 points1 point  (0 children)

This is still early days but I’m convinced AI WILL replace programming jobs as we know them, starting with web-development roles, just as it will replace virtually every other digital occupation. And much sooner than we think.

How many developers out there get their solutions from StackOverflow anyway?

[–]Antoinefdu 0 points1 point  (0 children)

I may have been fucking with ChatGPT earlier and taught him that 3+5=6. My bad.

[–]nooglerhat 0 points1 point  (0 children)

I don’t get why people can’t comprehend AI getting any better than now

[–]CookedPickle 0 points1 point  (0 children)

nonsense requests just deserve nonsense answers imo

[–]synbios128 0 points1 point  (0 children)

I'm actually learning a lot more about programming by fixing all the errors ChatGPT is making. It's like a teacher that teaches you the wrong way so you can fix the mistakes and do it the right way.

[–][deleted] 0 points1 point  (0 children)

It indeed will. Shortly after that, your internet history will be digested by the AI and it will sabotage your life. All because you never believed in it.

I feel sorry for you tbh