TY - GEN
T1 - Fairness of AI in Predicting the Risk of Recidivism
T2 - 18th International Conference on Availability, Reliability and Security, ARES 2023
AU - Farayola, Michael Mayowa
AU - Tal, Irina
AU - Bendechache, Malika
AU - Saber, Takfarinas
AU - Connolly, Regina
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/8/29
Y1 - 2023/8/29
N2 - Artificial Intelligence (AI) is applied across almost every public sector because of its positive impacts. However, AI's ethical aspects and trustworthiness are a significant source of concern among AI stakeholders, given AI's adverse effects on users when systems lack cautionary measures. In the criminal justice system, AI is used to predict the risk of recidivism. However, AI's negative impact manifests as bias and disproportionately high incarceration rates for certain groups of defendants assessed for recidivism risk. This paper focuses on fairness as a requirement of a previously proposed trustworthy AI framework, to ascertain the appropriate application of AI systems in predicting recidivism. It aims to raise awareness of the fairness of AI models and to stimulate further research on, and deployment of, fair and trustworthy AI models for predicting recidivism in the criminal justice system. Fairness has been a significant concern for criminal justice stakeholders and has received considerable attention, with more theoretical and practical studies than other trustworthy AI requirements. Hence, this paper reviews the state of the art in fairness, outlines valuable findings, and proposes future directions toward fair AI systems for predicting recidivism risk. In addition, it maps existing technical works in the literature to a fairness pipeline corresponding to the AI development phases in the criminal justice system.
AB - Artificial Intelligence (AI) is applied across almost every public sector because of its positive impacts. However, AI's ethical aspects and trustworthiness are a significant source of concern among AI stakeholders, given AI's adverse effects on users when systems lack cautionary measures. In the criminal justice system, AI is used to predict the risk of recidivism. However, AI's negative impact manifests as bias and disproportionately high incarceration rates for certain groups of defendants assessed for recidivism risk. This paper focuses on fairness as a requirement of a previously proposed trustworthy AI framework, to ascertain the appropriate application of AI systems in predicting recidivism. It aims to raise awareness of the fairness of AI models and to stimulate further research on, and deployment of, fair and trustworthy AI models for predicting recidivism in the criminal justice system. Fairness has been a significant concern for criminal justice stakeholders and has received considerable attention, with more theoretical and practical studies than other trustworthy AI requirements. Hence, this paper reviews the state of the art in fairness, outlines valuable findings, and proposes future directions toward fair AI systems for predicting recidivism risk. In addition, it maps existing technical works in the literature to a fairness pipeline corresponding to the AI development phases in the criminal justice system.
KW - Criminal Justice System
KW - Fairness
KW - Recidivism
KW - Trust
KW - Trustworthy Artificial Intelligence
UR - https://www.scopus.com/pages/publications/85168764014
U2 - 10.1145/3600160.3605033
DO - 10.1145/3600160.3605033
M3 - Conference Publication
AN - SCOPUS:85168764014
T3 - ACM International Conference Proceeding Series
BT - ARES 2023 - 18th International Conference on Availability, Reliability and Security, Proceedings
PB - Association for Computing Machinery
Y2 - 29 August 2023 through 1 September 2023
ER -