CITI ceased operations in 2014 to co-launch NOVA LINCS. This site has not been updated since 2013.
Counterfactuals in Logic Programming with Applications to Morality (NOVA-LINCS Seminar)
{ Wed, 19 Nov 2014, 14h00 }

By: Luís Moniz Pereira; Ari Saptawijaya

Luis Moniz Pereira


Luís Moniz Pereira, born 1947 in Lisbon, is Professor Emeritus of Computer Science and Director of CENTRIA, the AI centre at Universidade Nova de Lisboa (1993-2008). Doctor honoris causa by T.U. Dresden (2006) and elected ECCAI Fellow (2001), he launched the Erasmus Mundus European MSc in Computational Logic at UNL (2004-2008), and belongs to the Board of Trustees and Scientific Advisory Board of IMDEA – Madrid Advanced Studies Institute (Software).

He was founding president of the Portuguese AI association, and founding member of the editorial boards of: J. Logic Programming, J. Automated Reasoning, New Generation Computing, Theory and Practice of Logic Programming, J. Universal Computer Science, J. Applied Logic, Electronic Transactions on AI, Computational Logic Newsletter, and Intl. J. Reasoning-Based Intelligent Systems (Advisory Editor); he is presently Associate Editor for Artificial Intelligence of the ACM Computing Surveys. His research centres on Knowledge Representation and Reasoning, Logic Programming, and Cognitive Sciences.

*** Joint CENTRIA/CITI/NOVA LINCS seminar ***

Counterfactuals are conjectures about alternatives to events that did not occur in the past: thoughts about what would have happened, had an alternative event occurred. Herein we show how counterfactual reasoning is modeled using Logic Programming (LP), in particular by benefiting from LP abduction and updating. The approach is inspired by Pearl's causal model of counterfactuals, where causal direction and conditional reasoning are captured by the inferential arrows of rules in logic programs. In this approach, LP abduction hypothesizes background conditions from given evidence or observations, whereas LP updating helps frame those background conditions as a counterfactual's context. Moreover, LP updating enacts causal interventions in the program, enforcing minimal adjustments to the model through defeasible LP rules. We apply counterfactuals to computational morality by resorting to this LP-based approach, and show their potential for specifying and querying morality issues, viz., for examining viewpoints on the moral permissibility of actions through classic moral examples from the literature. The results of this application have been validated on a prototype implementing the approach on top of an integrated LP abduction and updating system with tabling.
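To make the three-step scheme concrete, the following is a minimal, hedged sketch of Pearl-style counterfactual evaluation in Python. It is not the authors' LP system: the toy causal model, the `evaluate` function, and the variable names (`lightning`, `dry`, `fire`) are illustrative assumptions. It only mirrors the pipeline described above: abduce background conditions that explain an observation, impose a causal intervention that overrides a rule, and re-derive the consequences under the abduced background.

```python
def evaluate(model, exogenous, interventions=None):
    """Derive endogenous values from exogenous background conditions,
    applying causal interventions (which override rules) if given."""
    interventions = interventions or {}
    # Interventions also override exogenous facts ("surgery" on the model).
    values = {**exogenous, **interventions}
    for var, rule in model:              # rules listed in causal order
        if var in interventions:         # intervened variable: rule defeated
            values[var] = interventions[var]
        else:
            values[var] = rule(values)   # otherwise, apply the causal rule
    return values

# Toy causal program: fire holds if lightning strikes and the woods are
# dry; "dry" plays the role of an abducible background condition.
model = [
    ("fire", lambda v: v["lightning"] and v["dry"]),
]

# Step 1 (abduction): from the observation fire=True and the evidence
# lightning=True, hypothesize the background condition dry=True.
background = {"lightning": True, "dry": True}
assert evaluate(model, background)["fire"]  # observation is explained

# Steps 2-3 (intervention + prediction): keep the abduced background
# fixed, intervene to make lightning false, and re-derive.
counterfactual = evaluate(model, background, {"lightning": False})
print(counterfactual["fire"])  # False: had lightning not struck,
                               # the fire would not have occurred
```

The design choice mirrors the abstract: the background conditions are frozen as the counterfactual's context (the LP-updating role), while the intervention defeats the rule for the intervened variable rather than merely adding a contradictory fact, which keeps the adjustment to the model minimal.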

Download Paper
