Abstract:
A default assumption within moral philosophy is that for an entity (E) to be morally responsible for an action or outcome (x), E must be a moral agent. Call this the ‘Agential’ view of moral responsibility. The Agential view lies at the core of our intuitions about ‘responsibility gaps’: the phenomenon that artificial technologies (e.g., self-driving cars, 'killer' robots) seem to create situations in which, when something goes wrong, no one may justifiably be held responsible. In this talk, I provide two arguments for rejecting the Agential view. First, ‘agency’ is a rich notion: agents are thought to possess mental states, some degree of causal efficacy, a capacity to be guided by reasons and sentiments, freedom, interests, and so on. By requiring morally responsible entities to be agents, the Agential view therefore places a stronger requirement on the concept of ‘moral responsibility’ than is needed. Second, responsibility can have different ‘faces’ (e.g., accountability, answerability, attributability), and it is implausible that a single notion of agency can be the locus of all these distinct types of responsibility.
Short Bio:
Nikhil Mahant works on topics in the philosophy of language, mind, and artificial intelligence. He is currently a Marie Skłodowska-Curie postdoctoral fellow at Uppsala University, Sweden. His project, titled ‘Do AI generated outputs have content?’, focuses on philosophical questions concerning the linguistic and mental capacities of AI systems. Earlier, he worked at the Central European University (CEU), Vienna, and St. Stephen's College, Delhi. He was educated at the Indian Institute of Technology, Delhi (B.Tech., Civil), the University of Delhi (MA, Philosophy), and CEU, where he obtained his PhD in Dec. 2022.