Discriminatory algorithms and constitutional jurisdiction: the legal and social risks of the impact of biases on widely accessible artificial intelligence platforms

Authors

Mônia Clarissa Hennig Leal; Lucas Moreschi Paulo

DOI:

https://doi.org/10.18759/rdgf.v24i3.2311

Abstract

Through the bibliographic method, the study's immediate objective is to propose control parameters that ensure artificial intelligence platforms are fair and non-discriminatory. The paper addresses the problem of algorithmic discrimination, which is increasingly present in society. It gives a brief account of the modus operandi of discrimination on artificial intelligence platforms, explaining how artificial intelligence learns from historical data and how biased data can distort algorithmic outcomes. The article covers the importance of the internet in everyday life, an explanation of how discriminatory algorithms work and the legal and social risks they create, and a proposal to ensure the quality of the outputs these platforms are required to produce. The central research problem is how to prevent algorithmic discrimination on artificial intelligence platforms, so that these systems do not reproduce the biases and inequalities present in society. In summary, the legal and social risks of discriminatory algorithms are highlighted, and both jurisdictional and preventive solutions to the dynamics of algorithmic discrimination are proposed.

Keywords: Algorithmic discrimination; Artificial intelligence; Control parameters.
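The mechanism described in the abstract, a model inheriting discrimination from the historical data on which it is trained, can be made concrete with a short sketch. The example below is purely illustrative and not drawn from the article: the synthetic "historical decisions", the group and skill variables, the 0.3 bias term, and the use of scikit-learn's LogisticRegression are all assumptions made for demonstration.

```python
# Illustrative sketch only (not from the article): a classifier trained on
# biased historical decisions reproduces that bias in its own predictions.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def historical_decision():
    group = random.randint(0, 1)   # hypothetical protected attribute
    skill = random.random()        # legitimate qualification signal
    # Past decision-makers favored group 1: identical skill, easier approval.
    approved = int(skill + 0.3 * group > 0.6)
    return [group, skill], approved

records = [historical_decision() for _ in range(5000)]
X = [features for features, _ in records]
y = [label for _, label in records]

# The model never sees the decision rule, only its biased outcomes.
model = LogisticRegression().fit(X, y)

# Two applicants with identical skill, differing only in group membership:
for group in (0, 1):
    p_approve = model.predict_proba([[group, 0.5]])[0][1]
    print(f"group={group}, skill=0.50 -> P(approved) = {p_approve:.2f}")
```

Under these assumptions, the printed approval probabilities diverge sharply between the two groups even though skill is identical: the model has learned the historical disparity as if it were a legitimate pattern. This is precisely the dynamic that the control parameters proposed in the article aim to detect and prevent.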

Author Biographies

Mônia Clarissa Hennig Leal, Universidade de Santa Cruz do Sul - UNISC

Lucas Moreschi Paulo, Universidade de Santa Cruz do Sul - UNISC

References

BUITEN, Miriam C. Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation, v. 10, n. 1, p. 41-59, 2019.

CODED BIAS. Direção: Shalini Kantayya. Produção: Shalini Kantayya. Reino Unido: Netflix, 2020.

COZMAN, Fabio G.; NERI, Hugo. O que, afinal, é Inteligência Artificial? In: COZMAN, Fabio G.; PLONSKI, Guilherme Ary; NERI, Hugo. Inteligência Artificial: Avanços e Tendências. São Paulo: Instituto de Estudos Avançados, 2021. p. 19-27.

CRESTANTE, Dérique S. Discriminação algorítmica: a aplicabilidade dos standards protetivos fixados pela Corte Interamericana de Direitos Humanos e pelo Supremo Tribunal Federal em relação ao direito de igualdade e não discriminação a partir das noções de Ius Constitutionale Commune Latino-Americano e dever de proteção estatal. Dissertação (Mestrado em Direito) – Faculdade de Direito, Universidade de Santa Cruz do Sul, 2022. 180 p.

EUBANKS, Virginia. Automating inequality: how high-tech tools profile, police, and punish the poor. Nova Iorque: St. Martin’s Press, 2017.

FISS, Owen M. Groups and the Equal Protection Clause. Philosophy and Public Affairs, v. 5, n. 2, 1976.

HEIKKILÄ, Melissa. The viral AI avatar app Lensa undressed me—without my consent. MIT Technology Review. Artificial Intelligence. dez. 2022. Disponível em: <https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/>. Acesso em: 23 mar. 2023.

HEIKKILÄ, Melissa. These new tools let you see for yourself how biased AI image models are. MIT Technology Review. Artificial Intelligence. mar. 2023. Disponível em: <https://www.technologyreview.com/2023/03/22/1070167/these-news-tool-let-you-see-for-yourself-how-biased-ai-image-models-are/>. Acesso em: 23 mar. 2023.

JACOBS, Francis G. Judicial Dialogue and the Cross-Fertilization of Legal Systems: The European Court of Justice. Texas International Law Journal, v. 38, 2003.

LEAL, Mônia Clarissa Hennig; MAAS, Rosana Helena. “Dever de proteção estatal”, “proibição de proteção insuficiente” e controle jurisdicional de Políticas Públicas. Rio de Janeiro: Lumen Juris, 2020.

LUCCIONI, Alexandra Sasha; AKIKI, Christopher; MITCHELL, Margaret; JERNITE, Yacine. Stable bias: analyzing societal representations in diffusion models. arXiv. Computer Science. Cornell University, mar. 2023.

NOBLE, Safiya Umoja. Algorithms of oppression: how search engines reinforce racism. Nova Iorque: New York University Press, 2018.

O’NEIL, Cathy. Algoritmos de destruição em massa: como o big data aumenta a desigualdade e ameaça a democracia. Trad. Rafael Abraham. Santo André: Editora Rua do Sabão, 2020.

PASQUALE, Frank. The black box society: the secret algorithms that control money and information. Cambridge e Londres: Harvard University Press, 2015.

SAGÜÉS, María Sofía. Discriminación estructural, inclusión y litigio estratégico. In: FERRER MAC-GREGOR, Eduardo; MORALES ANTONIAZZI, Mariela; FLORES PANTOJA, Rogelio. Inclusión, Ius Commune y justiciabilidad de los DESCA en la jurisprudencia interamericana. El caso Lagos del Campo y los nuevos desafíos. Colección Constitución y Derechos. Querétaro, México: Instituto de Estudios Constitucionales del Estado de Querétaro, 2018. p. 129-178.

Published

2023-12-04

How to Cite

Hennig Leal, M. C., & Moreschi Paulo, L. (2023). Discriminatory algorithms and constitutional jurisdiction: the legal and social risks of the impact of biases on widely accessible artificial intelligence platforms. Revista De Direitos E Garantias Fundamentais, 24(3), 165–187. https://doi.org/10.18759/rdgf.v24i3.2311