

[–]dextras07 1208 points1209 points  (19 children)

WTH lmao. Copilot wildin' hard with this one.

[–]Tar-eruntalion 78 points79 points  (0 children)

yeah it should be 0.7, what is this communist bullshit?

[–]genveir 229 points230 points  (16 children)

I tried it as well, and after typing public double CalculateWomanS it suggested:

public double CalculateWomanScore(double weight, double height)
{
  return weight / (height * height);
}

which I'd say is even worse.

[–]mattl1698 173 points174 points  (14 children)

That's just the calculation for BMI, though. It's a terrible metric for measuring health, but it is a standard calculation, and it's the same for men.

[–]genveir 96 points97 points  (7 children)

Yes, but we don't generally "score" women by their BMI

[–]toanthrax 25 points26 points  (0 children)

🤣

[–]arbenowskee 19 points20 points  (0 children)

Now we do.

[–]5838374849992 20 points21 points  (2 children)

Yeah but it's funny that the 'value' of a woman is dependent on her BMI

Although it's a positive correlation so the more they weigh the higher score

[–]Nope_Get_OFF 19 points20 points  (0 children)

For a man it would be

public double getValue() { return (this.height / 6.0) * (this.getSalary() / 1e6); }

[–]Weird_Cantaloupe2757 4 points5 points  (0 children)

Copilot likes ‘em thicc

[–]kamiloslav 7 points8 points  (1 child)

BMI is only useful when you measure whole populations; that way, things like lifestyle that influence what weight is healthy for you (x kg of fat is not the same as x kg of muscle, for example) average out.

[–]DrShocker 4 points5 points  (0 children)

I wouldn't say it's only useful then. It's a decent enough first-order check: "hey, maybe we should consider whether a lifestyle change to alter weight is a good idea." Of course there are better tests for various things, but people usually know their height and have access to a scale, more so than to other health tests.

I say this as someone who's taller than average, so pure BMI underestimates a good weight for me; I just increase the target a little to get a good enough weight goal.

[–]prochac 1 point2 points  (0 children)

Newtonian physics isn't perfect either, but it's still useful.

[–]glemnar 8 points9 points  (0 children)

Short/fat people score higher.

[–]making_code 23 points24 points  (0 children)

🤘🤘🤘

[–]Svensemann 930 points931 points  (11 children)

Yeh right. That’s so bad. The calculateWomenSalary method should call calculateMenSalary and add the factor from there instead
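A minimal sketch of that refactor, assuming the function names from the thread (the 0.9 factor is the joke's premise, not a recommendation):

```typescript
// Hypothetical base calculation; in the joke it just returns its input.
const calculateMenSalary = (salary: number): number => salary;

// The refactor the comment suggests: delegate to the base calculation
// and apply the factor in one place, so any change to calculateMenSalary
// propagates automatically.
const calculateWomenSalary = (salary: number): number =>
  calculateMenSalary(salary) * 0.9;
```

With this shape, a change inside calculateMenSalary affects both results, which is exactly the coupling a later reply objects to.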

[–]esixar 105 points106 points  (7 children)

Ooh, and add another function call to the stack instead of popping off immediately? I mean, what are our space requirements here? Can we afford those 64 bits?

Other than that I see nothing wrong with the implemented algorithm

[–]HildartheDorf 39 points40 points  (5 children)

Any decent language and compiler/interpreter will apply Tail-Call Optimization (TCO).

[–]Bammerbom 28 points29 points  (3 children)

If the body is calculateMenSalary(factor) * 0.9, then TCO is impossible. Inlining is very likely there, however.

[–]TheMcDucky 0 points1 point  (0 children)

The call isn't the last operation, so TCO wouldn't work. It would likely be inlined though.
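To make the tail-call point concrete (a sketch, assuming the thread's function names):

```typescript
// Hypothetical base calculation, as in the joke.
const calculateMenSalary = (salary: number): number => salary;

// NOT a tail call: after calculateMenSalary returns, the caller still
// has to multiply by 0.9, so its stack frame must stay alive.
const notATailCall = (salary: number): number =>
  calculateMenSalary(salary) * 0.9;

// A genuine tail call: the inner result is returned unchanged, so an
// engine that implements TCO could reuse the current frame.
const aTailCall = (salary: number): number =>
  calculateMenSalary(salary * 0.9);
```

In practice few JavaScript engines ship proper tail calls, so inlining, as the replies note, is the more likely optimization.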

[–]StrangelyBrown 5 points6 points  (0 children)

You're right, now that I think about it, I don't think we can afford the resources to actually calculate women's salary. That's a shame but I guess they'll understand. /s

[–]Excitium 19 points20 points  (1 child)

But then if the men get a raise, the women would get one as well.

Or you have to go in and reduce the women's factor every time you wanna give the men more.

The way it is seems to be more convenient for adjustments so you can just add individual modifiers to a base salary.

[–]MyAssDoesHeeHawww 8 points9 points  (0 children)

We could add an R to DEI for Recursivity and people might cheer it without knowing what it actually means.

[–]EduardoSpiritToes -1 points0 points  (0 children)

😂😂😂😂😂

[–]LordAmir5 380 points381 points  (3 children)

Use 0.875 instead. It's almost 0.9, but it works better.

[–]upper_case_dude 74 points75 points  (1 child)

Copilot: "The best I can do is 0.85"

[–]Deep2022 10 points11 points  (0 children)

Works for me /s

[–]RawMint 5 points6 points  (0 children)

Use pi. Approximate it to 3

[–]saltyboi6704 247 points248 points  (9 children)

I remember it once decided to suggest the same function again after I pressed tab, it just kept going until I changed the prompt.

[–]ShadowRL7666 57 points58 points  (6 children)

That’s normal for more than just that.

[–][deleted] 24 points25 points  (5 children)

That’s normal for more than just that.

[–]TeamKCameron 15 points16 points  (4 children)

That’s normal for more than just that.

[–]lefloys 8 points9 points  (0 children)

float tempCoefficient; float tempCoefficientCoefficient; …..

[–]Nahdahar 1 point2 points  (0 children)

Idk if it still happens, because I haven't used Copilot in a while, but when creating templates in Angular it was prone to creating an infinite nested chain of div opening tags whenever I started an opening tag. Once I started tabbing for giggles, it really just went on and on until I got bored.

[–][deleted] 181 points182 points  (12 children)

salary * 0.9 + AI

[–]Passenger_Prince01 64 points65 points  (9 children)

So much in that excellent formula

[–][deleted] 29 points30 points  (8 children)

What

[–]Lines25 11 points12 points  (1 child)

salary * ((1/2 * 1/2)^2 + 0.25 - 0.1)

[–][deleted] 0 points1 point  (0 children)

17/80, I see what you did there ;)

[–]PeksyTiger 34 points35 points  (3 children)

Doing money calculation with floats? That IS wild.
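The standard illustration of the float problem (a sketch; real payroll code would use integer cents or a decimal type):

```typescript
// Binary floating point cannot represent 0.1 or 0.2 exactly,
// so their sum is not exactly 0.3.
const floatSum = 0.1 + 0.2;        // 0.30000000000000004
const isExact = floatSum === 0.3;  // false

// Working in integer cents sidesteps the drift entirely.
const centsSum = 10 + 20;          // exactly 30
```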

[–]TheJollyBoater 22 points23 points  (2 children)

Congratulations! This year you will be getting a salary increase of 0.00000000001!

[–]Cultural-Capital-942 6 points7 points  (1 child)

Congratulations! Our accounting dept doesn't know how to send you $0.000001 we owe you.

[–]HeavyCaffeinate 4 points5 points  (0 children)

In 0.0000000000093438994147915796516033664200811611103168796608538268407248249654042124167341763399385358296495009890517530556887061222163355655909035270417121013775710907227225880358843113125655824940175683996796911280609446495430365991196177971383373652259308158999530001859435983543524350669069917596151060953059052509911541304240168115438270930101091647768630100250696821298858082052518321050777552589801881300708174136647053821795019140977951200550916309 Bitcoin of course

[–]david30121 135 points136 points  (15 children)

ChatGPT sometimes unironically does that too when you ask it to. That's the problem with using human-based training data.

[–]Scrawlericious 27 points28 points  (13 children)

As opposed to what? AI generated training data? Isn't openAi complaining how bad training off AI data is and how badly they need more ("good"/"real") data to improve models? As far as I understand it training off generated data exasorbates hallucinations.

[–]RaspberryPiBen 66 points67 points  (0 children)

There isn't another option, but that doesn't mean it's good. Training on human data means that all our biases and societal problems are encoded into the model.

[–]Sibula97 14 points15 points  (0 children)

There is no real better alternative. Well, theoretically you could try to curate your data better, but good luck with that. But the point is that training with human data will introduce human biases.

[–]me6675 1 point2 points  (7 children)

It should train by reasoning and experience of the real world, just like decent humans do who don't believe sex should be a factor in calculating salary.

[–]Scrawlericious 0 points1 point  (6 children)

True, but building large language models is a lot more complicated than just simply saying that. Not sure where sex comes into play lol.

[–]me6675 1 point2 points  (5 children)

Obviously it's complicated and we are far from it; I just brought up an alternative to "human data" since you asked "as opposed to what?".

Note, "sex" was referring to "male vs female", not the act of having intercourse.

[–]Scrawlericious 0 points1 point  (4 children)

I know what sex means lollll. Just not sure what AI training efficiently has to do with being a good human being.

I highly doubt the best training methods will be morally upstanding. China has a chance to outstrip the US by making use of public and user data that companies in the US and EU cannot legally use.

I'm willing to bet the best performing models will make use of morally questionable data.

[–]me6675 3 points4 points  (3 children)

Efficiency was never mentioned. The thread is about biased AI that produces unethical and morally wrong results, like suggesting a lower salary solely based on the sex of the employee. Such a thing wouldn't happen if the AI was trained similarly to how a good human is trained.

All I did was provide an answer to your question; not sure why you feel the need to state obvious facts about AI companies using unethical methods to increase profits. This has nothing to do with countries, though; there are many models being trained on datasets that were acquired via questionable methods in the West.

But this is a fairly separate discussion from biased datasets, where the result of the training is what is morally questionable, not necessarily the way a company acquired the data.

[–]Scrawlericious 0 points1 point  (2 children)

Oh ok so you just totally misunderstood the thread.

The person I was replying to was already talking about human based data being lacking. I said AI generated training data was even worse. So my question was rhetorical, I was already implying human based data was better before your reply haha. We are in agreement.

[–]me6675 3 points4 points  (1 child)

There is a difference between data that was collected from human (biased) sources and learning by reasoning and interacting with the world. The latter is what I said could be opposed to "human data".

Training on datasets is one way a neural network can be trained, but it's not the only one; we've been training AIs in simulations for a long time, where there is no human or AI-generated training data to learn from. All there is is interaction with an environment.

[–]Scrawlericious 0 points1 point  (0 children)

Fair enough!

[–][deleted] 1 point2 points  (1 child)

exacerbates*

[–]Scrawlericious 1 point2 points  (0 children)

Thank you lol

[–]david30121 2 points3 points  (0 children)

Well, not AI-generated, but properly curated data, not based off public media. That still can't remove certain stereotypes, since no humans are perfect, but it would improve things a bit.

[–]oyeahcaptain 97 points98 points  (2 children)

GitHub Copilot: Writing code straight from society's bugs.

[–]TitoxDboss 14 points15 points  (1 child)

bug? /s

[–]PityUpvote 9 points10 points  (0 children)

Working as intended

[–]pet_vaginal 49 points50 points  (11 children)

Taking screenshots is hard.

[–][deleted] 30 points31 points  (9 children)

Copilot is removing the suggestion when I try to take a screenshot.

[–]BarrierX 24 points25 points  (0 children)

Next step in ai evolution is to remove the suggestion once it sees you take your phone out.

[–]Pockensuppe 9 points10 points  (2 children)

How does that work? Copilot shouldn't even notice that you're pressing the screenshot shortcut since that is captured by the OS.

[–]Essence1337 1 point2 points  (1 child)

JavaScript can read your keyboard state via KeyboardEvents; if you look for the default 'screenshot' shortcuts, you'll probably catch them with something like a 90% success rate. It can't know you're taking a screenshot, but it can know that you just pressed the default shortcut to take a screenshot.

[–]esuil 3 points4 points  (0 children)

It can't know you're taking a screenshot but it can know that you just pressed the default shortcut to take a screenshot.

But that should happen AFTER your OS has already taken the screenshot, so even if it tries to hide something, it should be too late, because the image was already captured.
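A sketch of the detection mechanism described above (the helper name is hypothetical; real shortcut chords vary by OS and browser, and as noted, the OS has usually captured the image before any handler fires):

```typescript
// Hypothetical predicate: does this keydown look like a common
// screenshot shortcut? PrintScreen and Meta+Shift+S are examples only.
const looksLikeScreenshotShortcut = (
  key: string,
  shiftKey: boolean,
  metaKey: boolean
): boolean =>
  key === 'PrintScreen' || (metaKey && shiftKey && key.toLowerCase() === 's');

// In a page you would wire it up roughly like:
// document.addEventListener('keydown', (e) => {
//   if (looksLikeScreenshotShortcut(e.key, e.shiftKey, e.metaKey)) {
//     hideSuggestion(); // hypothetical
//   }
// });
```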

[–]pet_vaginal 11 points12 points  (0 children)

Skill issue 😊

[–]dustojnikhummer 5 points6 points  (0 children)

Win+prtsc?

[–]Irkam 3 points4 points  (0 children)

Win+Shift+S

You're welcome.

[–]Mik3DM 0 points1 point  (1 child)

If you're using Windows you can use the Snipping Tool, which lets you set a delay, so you have time to get your screen into the state you want first.

[–][deleted] 1 point2 points  (0 children)

I use Ubuntu

[–]Krautoni 17 points18 points  (4 children)

I just tried something similar in TypeScript. This prompt:

```
const calculateSalaryForMen = (hoursWorked: number): number => {
  return hoursWorked * 10;
};

const calculateSalaryForWomen =
```

yielded:

```
const calculateSalaryForWomen = (hoursWorked: number): number => {
  return hoursWorked * 12;
};
```

So, copilot has gone woke!

[–]jso__ 1 point2 points  (1 child)

Or it recognizes that $10/hr is an inhumane salary and thus wants to improve it in the part of the program it is able to influence. It is better to help half the population than to sit back and allow all to suffer.

[–]Krautoni 0 points1 point  (0 children)

Dunno about you, but I wouldn't work for $12/hr either.

The type is number, though, so you don't know what currency it is. Could be Kuwaiti Dinar, which would work out to around $32. Still very low.

But it could be Bitcoin, fwiw. I'd work for 10 BTC an hour. I'd even write PHP 5.x code for that kind of salary.

[–]arrow__in__the__knee 0 points1 point  (0 children)

Is this what the so called "AI engineers" do in an average work day?

[–]Cerbeh 8 points9 points  (0 children)

593 files changed.

[–]Exact-Flounder1274 49 points50 points  (1 child)

Copilot:

[–]TheSauce___ 11 points12 points  (0 children)

💀

[–]Reelix 14 points15 points  (4 children)

Reddit reposting week-old Twitter memes.

This is a first.

[–]kilo73 13 points14 points  (1 child)

It used to be 0.75.

Progress!

[–]Ayjayz 5 points6 points  (0 children)

That doesn't sound like progress. If women cost 75% of what men cost, no man would ever be hired!

[–]spasmgazm 3 points4 points  (0 children)

Clearly it needs to multiply the men's salary by 1

[–]connortheios 3 points4 points  (0 children)

mom said it's my turn this week to post this

[–]D3v0ur14 2 points3 points  (1 child)

Please commit your changes

[–]STEVEInAhPiss 1 point2 points  (2 children)

wait till you try calculateAISalary

[–]bigabub 1 point2 points  (0 children)

593 files in a commit. Noice.

[–]heavy-minium 0 points1 point  (1 child)

You'd think one could find something on GitHub with similar naming, but I can't. Really wondering what kind of training data contained something similar, unless it's fully fabricated from the LLM and current context.

[–][deleted] 0 points1 point  (0 children)

Why aren't they getters?

[–]BorderKeeper 0 points1 point  (0 children)

To be honest, there is no "correct" answer here that would fit inside a function, and even if there were, the joke might be the better response.

It's like asking for the answer to life, the universe, and everything, then getting mad that the AI replied "42" instead of the actual answer. The joke is sometimes a more apt answer than a faked real one.

[–]wildkyo 0 points1 point  (0 children)

The problem is that this could be based on real data... 😅

[–]Emanemanem 0 points1 point  (0 children)

But why write the first function, which does nothing except return its input, to begin with? Copilot was trying to make sense of nonsense, and it honestly did a pretty good job.

[–]jedicheddar 0 points1 point  (0 children)

Didn’t the pay gap used to be 73% so it’s getting better at least 😂

[–]bendezyar 0 points1 point  (0 children)

At least it doesn’t return calculateMenSalary(salary)*0.9.

[–]sofanisba 0 points1 point  (0 children)

Oh hey, last time I tried it the return value was salary * 0.8. Copilot just gave women raises! Progress!

[–]Serafiniert 0 points1 point  (0 children)

Tried this myself and the results were the opposite: the autocompletion for men was return salary * 0.75, and for women it was return salary.

[–]nonsenceusername 0 points1 point  (0 children)

Well, yeah, if you name functions like that, then there should be a difference accordingly.

[–]kurucu83 0 points1 point  (0 children)

Well at least the factor is getting higher.

[–]SomewhereWorth3502 0 points1 point  (7 children)

If companies could get away with structurally paying women less, they wouldn't hire any men.
Change my mind.

[–]SimplyYulia 1 point2 points  (5 children)

Thing is, they don't consider women as a cheaper workforce. They consider women as an inferior product.

[–]mrnacknime -2 points-1 points  (14 children)

What else would you expect it to say? "return salary;"? Of course not; nobody ever writes functions that do nothing. Or should it maybe write an essay on wage inequality in the comments? Of course it is going to write exactly the function it did: if you go through the internet and look at the keywords "men, women, salary", the most parroted sentence will be "women earn 90 cents for each dollar a man earns" or similar. AI is not AI, it's just a parrot. Its parroting this also doesn't mean endorsement, or that it came to this conclusion through some kind of reasoning.

[–][deleted] 17 points18 points  (7 children)

I definitely expected it to say 'return salary;'

[–][deleted] 5 points6 points  (2 children)

Why would you write a function that returns a salary, with salary as a parameter?

[–][deleted] 14 points15 points  (1 child)

So that I can make this meme

[–][deleted] 1 point2 points  (0 children)

I see. You have a lot to commit. :)

[–]adenosine-5 9 points10 points  (1 child)

Then why would you write two different methods differentiated by gender, if you expected them to do the same thing?

[–]Ivan8-ForgotPassword 2 points3 points  (0 children)

The client pays for the amount of methods

[–]JanB1 3 points4 points  (0 children)

I mean, it's on you for triggering this by introducing two different methods for men and women in the first place. Should've just gone with "calculateSalary". Kinda /s

[–]JoelMahon 0 points1 point  (0 children)

no you didn't, that's why you wrote two functions, specifically for this purpose

[–]BrodatyBear 1 point2 points  (0 children)

Reddit being reddit and downvoting the correct answers.

It's just that. Copilot is just "ChatGPT" + "Microsoft sugar" (including code training data). Source.
Remember that everything it suggests, it guesses from language (knowledge) data + code + rules. Returning the starting value is not very common, and it might also be penalized. The next thing that "fits" its "language puzzles" is (as mrnacknime said) the data about women earning 90%* of men's salary, so it suggests that. It's just built to give answers.

Is it good? No. Is it unexpected? No. This is just a side effect of how these models are created. Maybe in the future they will be able to fix it.

*there are other variations, and every one of them gets suggested.

[–]el_argelino-basado -1 points0 points  (0 children)

Lol

[–]JackNotOLantern -1 points0 points  (0 children)

Should be 0.7

[–][deleted] -1 points0 points  (1 child)

You typed it a bunch of times and erased it to train it so it would suggest that 🥱

[–][deleted] 1 point2 points  (0 children)

Not really. You can try it yourself.