Problems of bias and discrimination of human capital in artificial intelligence systems
https://doi.org/10.26425/1816-4277-2024-3-176-185
Abstract
The article examines how the understanding of the causes of bias and discrimination against human capital has evolved, from traditional forms to the era of digitalisation and artificial intelligence (hereinafter referred to as AI). The authors continue their case studies on the ethics of AI and the advantages and risks of its widespread adoption and use. The main purpose of the work is to examine how the use of AI and algorithmic decision-making aggravates the problem of bias and discrimination against human capital that inevitably accompanies the emergence of AI. The authors give a brief retrospective overview, identify new forms of discrimination generated by AI and submit them for open discussion, offering their own view on how to neutralise the risks that AI technologies pose to certain groups of workers. The article considers manifestations of bias and discrimination in society in general and in human resource management in particular, and identifies possible threats of discrimination arising from the spread of AI, as well as the consequences of these threats.
About the Authors
E. V. Kashtanova
Russian Federation
Ekaterina V. Kashtanova - Cand. Sci. (Econ.), Assoc. Prof. at the Personnel Management Department, Moscow
A. S. Lobacheva
Russian Federation
Anastasia S. Lobacheva - Cand. Sci. (Econ.), Assoc. Prof. at the Personnel Management Department, Moscow
For citations:
Kashtanova E.V., Lobacheva A.S. Problems of bias and discrimination of human capital in artificial intelligence systems. Vestnik Universiteta. 2024;(3):176-185. (In Russ.) https://doi.org/10.26425/1816-4277-2024-3-176-185