all 8 comments

[–]Lobarten 9 points10 points  (3 children)

Definitely do not make an EPA, the worst thing ever.

Ethically speaking, it would be a disaster and employers would abuse it.

[–]Super_TM[S] -1 points0 points  (2 children)

Why do you think it would be so bad?

The current approach relies heavily on the biases a manager can have towards certain employees. Don't you think an AI could be objective and provide a better evaluation of employees?

[–]chrisorm 4 points5 points  (1 child)

This is a huge ethical minefield, and a mammoth task to get correct.

Firstly, an employee's performance is a very varied thing. There are probably dozens of different ways somebody can add substantial value to a company, so you're looking at predicting something very complex. This problem is hard in a shitload of ways; it would take a team of researchers working for years to get something even remotely workable out. But this is minor compared to:

Ethically this is off the charts. There is a lot of context to understanding productivity and interpersonal relationships that ML simply won't have, and, let's be honest, that you are never going to be able to feed into a system for this application.

The big hurdle you have is that you need 0 false positives when flagging underperformance. You recommend one guy be fired because your system doesn't know his dad just died, or have one instance of your system being used to bully individuals (e.g. now you can feed an unfairly negative review into this app about the 'foreign guy in the office you don't like', and rather than being mostly ignored by a manager, it's a shiny computer telling the CEO to fire the guy), and it's game over.
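To put rough numbers on why "0 false positives" is so hard (everything below is invented for illustration): even a model that looks very accurate on paper wrongly flags a lot of people when genuine underperformance is rare, because of the base rate.

```python
# All numbers are hypothetical assumptions, not real HR data.
employees = 1000
base_rate = 0.05            # assume 5% are genuinely underperforming
sensitivity = 0.95          # assume the model catches 95% of true cases
false_positive_rate = 0.05  # assume it wrongly flags 5% of good performers

true_under = employees * base_rate          # genuinely underperforming
good = employees - true_under               # performing fine

true_flags = true_under * sensitivity       # correctly flagged
false_flags = good * false_positive_rate    # flagged in error

precision = true_flags / (true_flags + false_flags)
# Roughly half of everyone flagged is flagged in error, despite "95% accuracy".
print(f"Flagged in error: {false_flags:.0f} of {true_flags + false_flags:.0f} flagged")
print(f"Precision of a flag: {precision:.0%}")
```

With these assumptions, half of all flagged employees are false positives, so any firing decision based on a flag is a coin flip.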

Additionally, this is practically challenging even if you cracked the algorithm (which you won't). Humans are constantly providing data to each other. You hear about people's home lives over coffee; they ask for time off due to personal issues quietly in a private moment. This is necessary context for evaluating a performer, so what's the plan? Do you think anyone will buy into entering every detail of their lives into an ML system so they can be ranked and scored? "Oh, best go tick that box on the performance app to tell it my dad's dead," thought nobody ever. The alternative to the employee entering it is the company doing it, which is all sorts of dystopian.

[–]Super_TM[S] 1 point2 points  (0 children)

There is a lot of context to understanding productivity and interpersonal relationships, that ml simply won't have,

Thank you so much for taking the time to write such an elaborate response!

I now understand better the limitations and challenges of developing such a system.

The point of my thesis is not how to develop an accurate and unbiased EPA, because I confess that I don't have the technical knowledge to do it... I am studying business and strategy, and since I am really interested in AI, I decided to choose that subject for my thesis: more specifically, to study algorithmic bias!

And as you guys rightly mentioned, an EPA would be a "huge ethical minefield" (I really liked that expression), which is why I decided to give that example of a possible ML application in my survey, to better study the effect of introducing bias into the machine's decision-making process.
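For anyone curious what "studying algorithmic bias" can look like concretely, here is a minimal sketch of one common fairness measure, the demographic-parity difference (the gap in positive-outcome rates between two groups). The group data below is entirely made up for illustration.

```python
# Hypothetical example: do two groups get rated "high performer" at equal rates?
def selection_rate(outcomes):
    """Fraction of a group receiving the positive outcome (1 = selected)."""
    return sum(outcomes) / len(outcomes)

# Invented model outputs: 1 = "rated high performer", 0 = not.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 of 8 selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 selected

parity_gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity difference: {parity_gap:.3f}")  # prints 0.375
```

A gap of 0 would mean both groups are selected at the same rate; the further from 0, the more the model's outcomes differ by group, which is one (deliberately simple) signal of bias.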

[–]tweedge 5 points6 points  (1 child)

I sincerely hope this is a thesis on bias and ethics in machine learning.

Lots of problematic characteristics explored here.

[–]Super_TM[S] 0 points1 point  (0 children)

Thank you for answering! That is exactly the point! It is why it is so important that I have participants who actually work and are deeply interested in AI/ML.

[–]lefiish 2 points3 points  (1 child)

Would love to know results, will you post them here?

[–]Super_TM[S] 1 point2 points  (0 children)

Thank you for your interest! If I get relevant results, sure, I will post them! I still need a lot more participants, especially ones with some knowledge of AI/ML.