Keyword search (4,163 papers available)

"decision-making" Keyword-tagged Publications:

#  Title. Authors. PubMed ID. Dept.

1. Cell Fate Dynamics Reconstruction Identifies TPT1 and PTPRZ1 Feedback Loops as Master Regulators of Differentiation in Pediatric Glioblastoma-Immune Cell Networks. Abicumaran Uthamacumaran. PMID: 39420135. Dept: PSYCHOLOGY
2. Education in Laparoscopic Cholecystectomy: Design and Feasibility Study of the LapBot Safe Chole Mobile Game. Noroozi M; St John A; Masino C; Laplante S; Hunter J; Brudno M; Madani A; Kersten-Oertel M. PMID: 39052314. Dept: ENCS
3. Computational neuroscience across the lifespan: Promises and pitfalls. van den Bos W; Bruckner R; Nassar MR; Mata R; Eppinger B. PMID: 29066078. Dept: PSYCHOLOGY
4. Who Should Decide How Machines Make Morally Laden Decisions? Dominic Martin. PMID: 27905083. Dept: JMSB
5. No food left behind: foraging route choices among free-ranging Japanese macaques (Macaca fuscata) in a multi-destination array at the Awajishima Monkey Center, Japan. Joyce MM; Teichroeb JA; Kaigaishi Y; Stewart BM; Yamada K; Turner SE. PMID: 37278740. Dept: CONCORDIA
6. Dissecting cell fate dynamics in pediatric glioblastoma through the lens of complex systems and cellular cybernetics. Abicumaran Uthamacumaran. PMID: 35678918. Dept: PHYSICS
7. Neural evidence for age-related deficits in the representation of state spaces. Ruel A; Bolenz F; Li SC; Fischer A; Eppinger B. PMID: 35510942. Dept: PERFORM
8. Resource-rational approach to meta-control problems across the lifespan. Ruel A; Devine S; Eppinger B. PMID: 33590729. Dept: PERFORM
9. Developmental Changes in Learning: Computational Mechanisms and Social Influences. Bolenz F; Reiter AMF; Eppinger B. PMID: 29250006. Dept: PERFORM


Title: Who Should Decide How Machines Make Morally Laden Decisions?
Authors: Dominic Martin
Link: https://pubmed.ncbi.nlm.nih.gov/27905083/
DOI: 10.1007/s11948-016-9833-7
Publication: Science and Engineering Ethics
Keywords: Artificial intelligence; Collective decision-making; Economic efficiency; Ethics; Market freedom; Moral agency; Public policies; Regulation; Self-driving car
PMID: 27905083
Category:
Date Added: 2016-12-02
Dept Affiliation: JMSB
1 John Molson School of Business, Concordia University, Montréal, Canada. dominic.martin@concordia.ca.
2 Department of Philosophy, McGill University, Montréal, Canada. dominic.martin@concordia.ca.

Description:

Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, we may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.

BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026