Artificial intelligence as an authority – should AI influence decisions in workplaces?

Imagine being fired from your job at the suggestion of artificial intelligence. The same system decides on your promotion and determines whether you are competent enough for a given position. How does that make you feel? These alarming scenarios are getting ever closer to reality. Algorithms already support the decision-making of teams that manage people. While it is not always entirely clear how AI arrives at its suggestions [1], its biased recommendations influence those who decide whom to hire and whom to dismiss. But, as in Milgram’s experiment, do people submit to authority in this situation? Can an algorithm be an authority for us?

One of the few articles that deals with this topic is “Fired by an algorithm? Exploration of conformism with biased intelligent decision support systems in the context of workplace discipline”, written by Marcin Bartosiak, Ph.D., assistant professor at the University of Pavia in Italy, and Artur Modliński, Ph.D., assistant professor at the University of Lodz. Their study examined whether, and how quickly, people are willing to accept the decisions of an AI system that assesses employees, even when those decisions are unfair and harmful to the people being evaluated [2].

The course of the study

Participants were presented with five scenarios in which a biased algorithm issued harsh recommendations for disciplinary action against employees who had violated the labour code. Some participants were asked to assess the situations without any AI suggestions, while the rest were told that the recommendations shown to them came from an algorithm trained on over a thousand real cases of disciplinary proceedings from various workplaces. The researchers then measured what decisions the participants made and how quickly they made them.

How artificial intelligence influences our decisions about colleagues

The experiment can be compared to a real situation in which we learn that a co-worker has been acting against the employer’s interests for some time and must now face disciplinary dismissal. With our extensive experience, we know what punishment for the colleague would be fair. Before we make a decision, however, we receive a suggestion from an AI system that has already analyzed thousands of similar cases and has been trained to support the HR team. The algorithm’s recommendation seems disproportionately harsh relative to the employee’s misconduct. Will our own judgment of the case also become more severe under the influence of that suggestion? It turns out that it will. Participants in the study not only made decisions faster after receiving the algorithm’s recommendations but also followed them uncritically when choosing how to resolve a given situation. In short, the study found that employees follow AI suggestions even when those suggestions may harm co-workers and are harsher than the actions they would have taken without the AI’s influence.

The well-being of employees comes first

The evidence that, in every case, people accepted the algorithm’s recommendations without much reflection shows that this topic deserves close attention and that workplaces need to be prepared for cooperation with artificial intelligence. Algorithms should support employees and employers, not create problems with potentially serious social consequences. People who are not hired, or who are fired, because of a biased AI may feel treated unfairly or begin to question their own worth. If no employees are able to oppose the algorithm, companies expose themselves to financial and reputational losses. Moreover, setting aside one’s own experience and relying solely on algorithmic suggestions can mean hiring poorly qualified candidates or firing essential employees. As the researchers emphasize, we need, today more than ever, a discussion on the ethical consequences of AI-based solutions in people management. It is important that the adoption of any such solution protects the well-being of employees [3].

Bibliography:

  1. Ananny, M. and Crawford, K. (2018), “Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability”, New Media & Society, SAGE Publications, London, Vol. 20 No. 3, pp. 973-989.
  2. Bartosiak, M. and Modliński, A., “Fired by an algorithm? Exploration of conformism with biased intelligent decision support systems in the context of workplace discipline”, https://www.emerald.com/insight/content/doi/10.1108/CDI-06-2022-0170/full/html (accessed 12.12.2022).
  3. Nazareno, L. and Schiff, D.S. (2021), “The impact of automation and artificial intelligence on worker well-being”, Technology in Society, Vol. 67, 101679.
Eryka Klimowska
Editor
Bio:

A law student at the University of Warsaw, passionate about business, science, and combining the two to solve real problems effectively and at scale. Since childhood she has taken part in competitions in both the sciences and the humanities, which is why she does not like to describe herself as either a “humanist” or a “scientific mind”. She developed her interests as the president of a student business organization in Warsaw and as a member of the Scientific Group of Medical and Pharmaceutical Law.
