When something goes wrong: Who is responsible for errors in ML decision-making?
Abstract
Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling, have important ethical implications. In particular, these characteristics lead to a lack of insight and a lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is important to have a principled way of assigning responsibility for such errors. The integral characteristics of ML, however, pose serious difficulties in defining responsibility and regulating ML decision-making. First, we elaborate on these characteristics and their epistemic and ethical implications. We then analyze possible general strategies for resolving the assignment of moral responsibility and show that, due to the specific way in which ML functions, each potential solution is problematic, whether we assign responsibility to humans, to machines, or via hybrid models. Then, we shift focus to an alternative approach that bypasses moral responsibility and attempts to define legal liability independently, through solutions such as informed consent and the no-fault compensation system. Both of these solutions prove unsatisfactory because they leave too much room for potential abuses of ML decision-making. We conclude that both ethical and legal solutions are fraught with serious difficulties. These difficulties prompt us to re-weigh the costs and benefits of using ML for high-stakes decisions.
Keywords:
Machine learning / Algorithmic decision-making / Opacity / Responsibility / Liability / Hybrid responsibility / Machine responsibility
Source:
AI & SOCIETY, 2023
Publisher:
- Springer
Institution/group
Filozofija / Philosophy
URL: http://reff.f.bg.ac.rs/handle/123456789/5887
Berber, A., & Srećković, S. (2023). When something goes wrong: Who is responsible for errors in ML decision-making? AI & SOCIETY. Springer. https://doi.org/10.1007/s00146-023-01640-1