Projects

ETHICAA

Machines and agents have more and more autonomous functions and are consequently less and less supervised by human operators or users. Especially when machines interact with humans, we therefore need to ensure that they do not harm people or threaten their autonomy, in particular their decision autonomy. The question of an ethical regulation or control of such autonomous agents is thus raised, and has been discussed by several authors such as Wallach and Allen. As stated by Picard, the greater the freedom of a machine, the more it will need moral standards.

To motivate this problem, consider the trolley and footbridge dilemmas. Assume that a runaway trolley is hurtling down a track towards five people, while there is a single person on a neighbouring track, and two people (a thin one and a fat one) on a footbridge under which the trolley will pass. The trolley dilemma is as follows: should the driver change tracks, killing one to save five? The footbridge dilemma is: should the thin man push the fat man over the footbridge to suddenly stop the trolley? More generally, both dilemmas raise the same question: considering an agent A that can make a decision that would benefit many other agents but in doing so would unfairly harm an agent B, under what circumstances would it be moral for agent A to violate agent B's rights in order to benefit the group?

The objective of the ETHICAA project is twofold:

  1. the definition of what a moral autonomous agent and a system of moral autonomous agents should be;

  2. the definition and resolution of the ethical conflicts that could occur 1) inside one moral agent, 2) between one moral agent and the (moral) rules of the system it belongs to, 3) between one moral agent and a human operator or user, and 4) between several artificial (moral) agents, whether or not human agents are involved (see the sketch below).
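The four conflict categories above can be read as a small taxonomy. As a purely illustrative sketch (not an ETHICAA deliverable; all names such as ConflictKind and EthicalConflict are invented here), a detected conflict could be tagged with its category together with the parties and values involved:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ConflictKind(Enum):
    """The four categories of ethical conflicts listed above (names are hypothetical)."""
    INTRA_AGENT = auto()      # 1) inside one moral agent
    AGENT_VS_SYSTEM = auto()  # 2) between one moral agent and the rules of its system
    AGENT_VS_HUMAN = auto()   # 3) between one moral agent and a human operator or user
    MULTI_AGENT = auto()      # 4) between several artificial (moral) agents, with or without humans


@dataclass
class EthicalConflict:
    """Minimal record of a detected conflict: its category, parties, and values at stake."""
    kind: ConflictKind
    parties: list[str]
    values_at_stake: list[str] = field(default_factory=list)


# Example: the UAV scenario discussed below, tagged as an agent-vs-human-operator conflict.
uav_conflict = EthicalConflict(
    kind=ConflictKind.AGENT_VS_HUMAN,
    parties=["UAV agent", "human operator"],
    values_at_stake=["minimize casualties", "protect colleagues", "protect villagers"],
)
print(uav_conflict.kind.name, uav_conflict.values_at_stake)
```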

Ethical conflicts are characterized by the fact that there is no "good" way to solve them. Nevertheless, when a decision must be made, it should be an informed decision based on an assessment of the arguments and values at stake. When several agents are involved, this may result in one agent taking over the (decision or action) authority from the others.

ETHICAA proposes to study the four cases of ethical conflicts that could occur within moral autonomous agents or between moral autonomous agents and humans in two chosen application domains: robotics and privacy management. For instance, in the robotics domain, ETHICAA should be able to manage the ethical conflicts between one artificial agent and one human operator. To this end, we will consider a UAV (Unmanned Aerial Vehicle) jointly operated by a human operator and an artificial agent. Assuming that the UAV is in an emergency situation and must be crashed, the only two options being either very near the operator's headquarters (where many of the operator's colleagues work) or very near a small village, which decisions must be made by the autonomous agent?
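Purely as an illustration of what such a decision might involve (the weights, scores, and deferral threshold below are invented and do not reflect any actual ETHICAA model), the dilemma can be framed as scoring each crash site against the values at stake and deferring to the human operator when the options are too close to call:

```python
# Toy value-based assessment of the two crash options; all numbers are assumptions
# made for this sketch only.
OPTIONS = {
    "crash near headquarters": {"expected_casualties": 0.7, "harm_to_bystanders": 0.4},
    "crash near village":      {"expected_casualties": 0.5, "harm_to_bystanders": 0.9},
}
WEIGHTS = {"expected_casualties": 0.6, "harm_to_bystanders": 0.4}  # assumed relative importance


def score(option: dict[str, float]) -> float:
    """Lower is better: weighted sum of the harms associated with an option."""
    return sum(WEIGHTS[value] * harm for value, harm in option.items())


scores = {name: score(harms) for name, harms in OPTIONS.items()}
best, worst = sorted(scores, key=scores.get)
if abs(scores[best] - scores[worst]) < 0.1:
    # The options are nearly indistinguishable: flag an ethical conflict and defer.
    print("ethical conflict: defer to the human operator", scores)
else:
    print("tentative choice:", best, scores)
```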

The privacy management case will consider ethical conflicts between multiple artificial agents and human users. We will consider a social network where the privacy policies of the accounts owned by humans are controlled by moral autonomous agents. Assuming that two users are feuding and each broadcasts some private data about the other in a common circle of friends, what should be the privacy policy of the society of agents, including the agents owned by those feuding users?
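One simple (and here entirely assumed) way for such a society of agents to handle the feud is a "data subject decides" rule: an item broadcast by one user about the other is only propagated in the common circle if the subject's own agent permits it. A minimal sketch:

```python
# Toy reconciliation of privacy policies in a common circle of friends. The policy
# format and the "data subject decides" rule are assumptions made for this sketch.
subject_policy = {
    "alice": {"holiday photos"},  # Alice's agent allows only this item about Alice
    "bob": set(),                 # Bob's agent allows nothing about Bob to be shared
}
broadcasts = [
    ("alice", "bob", "bob's private messages"),  # Alice tries to expose data about Bob
    ("bob", "alice", "holiday photos"),          # Bob shares an item Alice actually permits
]

circle_feed = [
    (sender, subject, item)
    for sender, subject, item in broadcasts
    if item in subject_policy.get(subject, set())
]
print(circle_feed)  # only the item permitted by its subject's agent survives
```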

From the implementation and experimentation of these scenarios, ETHICAA aims at providing a formal representation of ethical conflicts and of the objects they are about. The project also aims at designing explanation algorithms that present the autonomous agents' arguments and values to the human user, so that informed ethical decisions can be made. Consequently, the outcome of ETHICAA will be a framework and recommendations for designing moral artificial agents, i.e. for how their autonomous functions should be controlled to make them act according to context-dependent moral rules and to deal with ethical conflicts involving other artificial or human agents, whether moral or not.
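As a last illustrative sketch (the representation below is an assumption, not the project's formalism), an explanation facility can be as simple as listing, for each option, the arguments and the values they appeal to, so that a human user can see why an agent leans one way:

```python
# Toy explanation generator for the trolley dilemma: list supporting and opposing
# arguments per option, with the values they appeal to. Content is invented for
# illustration; ETHICAA's formal representation may differ substantially.
arguments = [
    # (option, stance, value, text)
    ("change tracks", "pro", "minimize harm", "one person is harmed instead of five"),
    ("change tracks", "con", "fairness", "the single person is unfairly sacrificed"),
    ("keep track",    "pro", "fairness", "no one is actively used as a means"),
    ("keep track",    "con", "minimize harm", "five people are harmed"),
]


def explain(option: str) -> str:
    """Return a short, human-readable summary of the arguments about one option."""
    pros = [f"{text} (value: {value})" for opt, stance, value, text in arguments
            if opt == option and stance == "pro"]
    cons = [f"{text} (value: {value})" for opt, stance, value, text in arguments
            if opt == option and stance == "con"]
    return f"Option '{option}' - for: {'; '.join(pros)} | against: {'; '.join(cons)}"


for opt in ("change tracks", "keep track"):
    print(explain(opt))
```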
More information on ETHICAA