REFF - Faculty of Philosophy Repository
University of Belgrade - Faculty of Philosophy

Artificial morality: Making of the artificial moral agents

2019
2710.pdf (154.9 KB)
Authors
Kušić, Marija
Nurkić, Petar
Article (Published version)
Abstract
Artificial Morality is a new, emerging interdisciplinary field centred on the idea of creating artificial moral agents (AMAs) by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The demand for moral machines arises from changes in everyday practice, where artificial systems are increasingly used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. Creating such agents raises new challenges. There are philosophical questions about a machine's potential to be an agent, or a moral agent, in the first place. Then comes the problem of the social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have led to suggestions that additional psychological (emotional and cognitive) competence is needed in otherwise cold moral machines. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as the top-down, bottom-up and hybrid approaches aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.

Keywords:
moral psychology / machine learning / hybrid model / Artificial morality / artificial moral agents
Source:
Belgrade Philosophical Annual, 2019, 32, 27-49
Publisher:
  • Univerzitet u Beogradu - Filozofski fakultet - Institut za filozofiju, Beograd
Funding / projects:
  • Dynamic Systems in Nature and Society: Philosophical and Empirical Aspects (RS-179041)

DOI: 10.5937/BPA1932027K

ISSN: 0353-3891

URI
http://reff.f.bg.ac.rs/handle/123456789/2713
Collections
  • Radovi istraživača / Researcher's publications - Odeljenje za filozofiju
  • Radovi istraživača / Researcher's publications - Odeljenje za psihologiju
Institution/Community
Filozofija / Philosophy

Related items

Showing items related by title, author, creator and subject.

  • Šta pokazuje Kantov "kompas"? / What does Kant's 'compass' show? 

    Cekić, Nenad (Srpsko filozofsko društvo, Beograd, 2020)
  • Biološke osnove morala: egoizam, altruizam i samoobmanjivanje / The biological basis of morality: egoism, altruism and self-deception 

    Živanović, Igor (Univerzitet u Beogradu, Filozofski fakultet, 2016)
  • Opasnosti od moralnog poboljšanja ljudi / The perils of moral enhancement 

    Dobrijević, Aleksandar (Univerzitet u Beogradu - Institut za filozofiju i društvenu teoriju, Beograd, 2012)

DSpace software copyright © 2002-2015  DuraSpace
About REFF | Send Feedback
