Added ICTAI2023 publication
Still missing the DOI and HAL link (and possibly a link to IEEE, depending on the conference proceedings).
rchaput committed Nov 6, 2023
1 parent 1b14e48 commit 66c2908
Showing 3 changed files with 41 additions and 0 deletions.
2 binary files not shown (the paper and slides PDFs).
41 changes: 41 additions & 0 deletions content/publication/ictai2023/index.md
@@ -0,0 +1,41 @@
+++
title = "Learning to identify and settle dilemmas through contextual user preferences"
date = 2023-11-06
authors = ["Rémy Chaput", "Laetitia Matignon", "Mathieu Guillermin"]
profile = false

publication_types = ["1"]
publication = "*The 35th IEEE International Conference on Tools with Artificial Intelligence (ICTAI)*"
publication_short = "ICTAI2023"

abstract = """
Artificial Intelligence systems have a significant impact on human lives.
Machine Ethics tries to align these systems with human values, by integrating
"ethical considerations". However, most approaches consider a single objective,
and thus cannot accommodate different, contextual human preferences.
Multi-Objective Reinforcement Learning algorithms account for various preferences,
but they are often neither intelligible nor contextual (e.g., weighted preferences).
Our novel approach identifies dilemmas, presents them to users, and learns to
settle them, based on intelligible and contextualized preferences over actions.
We intend to maximize understandability and opportunities for user-system
co-construction by showing dilemmas and triggering interactions, thus empowering
users. The block-based architecture enables leveraging simple mechanisms that
can be updated and improved. Validation on a Smart Grid use-case shows that our
algorithm finds actions for various trade-offs, and quickly learns to settle
dilemmas, reducing the cognitive load on users.
"""

summary = """
This paper presents a novel Multi-Objective Reinforcement Learning approach to
settle dilemmas by putting humans in the loop.
"""

tags = ["Machine Ethics", "Multi-Objective Reinforcement Learning", "Moral Dilemmas",
"Human Preferences"]
featured = false

url_pdf = "Chaput_Learning_identify_settle_dilemmas_paper.pdf"

url_slides = "Chaput_Learning_identify_settle_dilemmas_slides.pdf"

+++
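
As the commit message notes, the DOI and HAL link are still missing. A minimal sketch of how they could later be added to the same front matter, assuming the Wowchemy publication schema's `doi` and `links` fields (the values below are placeholders, not real identifiers):

doi = ""  # placeholder: fill in once the DOI is available
links = [
  { name = "HAL", url = "" }  # placeholder for the future HAL deposit link
]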
