Projects

Web intelligence

It is generally agreed that trust is a key concept in today's information technologies, and not only in contexts where security is the focus. Beyond system reliability, trust underpins a system's usability by both human and artificial agents. Numerous works in sociology, psychology, philosophy and cognitive science on the one hand, and in computer science on the other, show that trust is a complex notion with multiple facets. While the concept is now used in many applications, there is still no consensus on a clear-cut, unified definition.
In this project we propose to start from Castelfranchi and colleagues' theory of social trust, which is certainly one of the best-established theories in the disciplines mentioned above. We will confront its analysis with the specific needs of security in order to extract the required key elements, and complete it with notions that are required in implementations but a priori absent from the theory (such as trust dynamics, the link with the notion of topic, …). We will also formalize the resulting theory in logic, and implement the properties thus laid bare within agent platforms. The latter step will be carried out at two levels, viz. the individual level and the collective level.
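As an illustration of what such a logical formalization might look like, the sketch below renders trust as a composite attitude in the spirit of Castelfranchi and Falcone's analysis: agent i trusts agent j to do action α with respect to goal φ when i has φ as a goal and believes that j is capable of α, intends to do α, and that j's doing α brings about φ. The modal operators and their exact reading are assumptions made for the sake of the example, not the project's final formalization.

% A minimal sketch in multimodal logic; the operators Goal, Bel,
% Capable, Intend and After, and their semantics, are illustrative
% assumptions rather than the project's settled language.
\[
  \mathit{Trust}(i, j, \alpha, \varphi) \;\stackrel{\text{def}}{=}\;
    \mathit{Goal}_i\,\varphi \;\wedge\;
    \mathit{Bel}_i\bigl( \mathit{Capable}_j(\alpha)
      \;\wedge\; \mathit{Intend}_j(\alpha)
      \;\wedge\; \mathit{After}_{j:\alpha}\,\varphi \bigr)
\]

On such a reading, distrust and trust dynamics could then be expressed as variations over the same ingredients, which is the kind of property the implementation within agent platforms would operate on.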

Project leader: Andreas Herzig

Collaborators involved: Olivier Boissier

Duration: 2006–2010

Partners: Institut de Recherche en Informatique de Toulouse (IRIT), Ecole des Mines de Saint-Etienne – Département Informatique et systèmes intelligents, Institute of Cognitive Sciences and Technologies (ISTC)