Trust & Security

Security Applications of Trust


David DeAngelis and K. Suzanne Barber

The Laboratory for Intelligent Processes and Systems

The University of Texas at Austin

{dave, barber}


We propose applying trust-based techniques from the artificial intelligence community to security applications [1]. Most systems designed to enforce security are built to prevent one of a few fundamental security threats. These traditional security mechanisms are effective against the most common threats, but they have several key weaknesses. The primary weakness of most security systems is a lack of online monitoring of a user’s behavior, where a user, or agent, may be a person or another application. Trust-based security mechanisms offer continual online monitoring of system processes and can be tuned to recognize suspicious behavior. We view security not as a product but as a process. Trust is not a suitable replacement for existing security mechanisms in many cases, but its strengths directly complement those of conventional security.


Degree of trustworthiness serves as a decision criterion for governing the actions of a mobile agent. A trust model can be built by comparing the number of times a partner agent, or host, behaves innocently versus maliciously. For example, trust has been assessed by counting the positive and negative “experiences” an agent undergoes with the host. Agents can build models that identify malicious entities based not only on direct interaction with those entities but also on information from other agents. Models incorporating both sources are often more robust and easier to build, because information can still be relayed to other agents even while the attacked agent is being killed. In recommendation-based reputation building, the trusting agent forms a trust model of the host by asking other agents in the system about their interactions with the host. This method allows the trusting agent to form a reputation of the trustee without being exposed to the risk of direct cooperative interaction.
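The experience-counting and recommendation-based schemes above can be sketched as follows. This is a minimal illustration, not an implementation from the cited work; the class and function names, the Laplace-smoothed ratio, and simple averaging of recommendations are all assumptions made for clarity.

```python
class TrustModel:
    """Counts positive and negative experiences with a host
    (illustrative count-based trust, not the paper's exact model)."""

    def __init__(self):
        self.positive = 0
        self.negative = 0

    def record(self, outcome_good: bool) -> None:
        if outcome_good:
            self.positive += 1
        else:
            self.negative += 1

    def trust(self) -> float:
        # Estimated probability of an innocent interaction,
        # Laplace-smoothed so an unknown host starts at 0.5.
        return (self.positive + 1) / (self.positive + self.negative + 2)


def reputation(recommendations):
    """Combine trust estimates reported by other agents
    (recommendation-based reputation) by simple averaging."""
    if not recommendations:
        return 0.5  # no information: neutral prior
    return sum(recommendations) / len(recommendations)


direct = TrustModel()
for good in [True, True, False, True]:
    direct.record(good)
print(direct.trust())                # direct-experience trust
print(reputation([0.9, 0.4, 0.7]))   # reputation gathered from other agents
```

The recommendation path lets an agent estimate a host's trustworthiness before ever interacting with it, which is exactly the risk-avoidance benefit described above.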


Some properties of trust can be used to maximize the security of a system. One such property is the continuous nature of trust. A trust-based behavior monitoring system for authenticated users is far more scalable than a human operator alone: such an intrusion detection system can filter all user activity and report only suspicious behavior to the operator. Graceful degradation is another property of trust modeling that can be harnessed to provide security. A fully distributed security system, such as one where security agents reside on each node in a network and alert the others to suspicious behavior, can continue detecting intrusions even as parts of the network fail.
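The filtering idea can be illustrated with a small sketch: a monitor watches a stream of user actions and escalates to a human operator only when the recent rate of suspicious actions crosses a threshold. The class name, the sliding-window design, and the specific parameters are illustrative assumptions, not details from the paper.

```python
from collections import deque


class BehaviorMonitor:
    """Escalates a user to a human operator only when too many of
    their recent actions look suspicious (illustrative sketch)."""

    def __init__(self, window: int = 10, threshold: float = 0.3):
        self.recent = deque(maxlen=window)  # sliding window of outcomes
        self.threshold = threshold          # suspicious-rate cutoff

    def observe(self, suspicious: bool) -> bool:
        """Record one action; return True if the operator should be alerted."""
        self.recent.append(suspicious)
        rate = sum(self.recent) / len(self.recent)
        return rate > self.threshold


monitor = BehaviorMonitor(window=5, threshold=0.4)
events = [False, False, True, True, True]
flags = [monitor.observe(e) for e in events]
print(flags)  # only the later events trigger an alert
```

The operator sees only the escalated events, which is the filtering benefit described above; isolated anomalies stay below the threshold while a sustained pattern does not.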


In a multi-agent system, a goal-driven agent (human or software) is motivated to trust others when its resources are too limited to permit goal achievement in isolation. These goals may have multiple requirements (quality, completion timeliness, costs) influencing the reward received from goal achievement. To maximize the reward it receives from achieving a goal, an agent must consider the trustworthiness of potential partners along multiple dimensions, accounting for multiple goal requirements (which determine rewards) and multiple partner constraints (which estimate partner behavior). This research endows agents with the ability to assess how much they should trust each facet of a potential partner’s behavior, namely the partner’s ability to deliver quality, on-time solutions within cost, in the context of multiple goal requirements.
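One way to picture multi-dimensional partner selection is as a weighted combination of per-dimension trust values, with the weights supplied by the goal's requirements. The partner data, the weights, and the linear weighted-sum scoring below are illustrative assumptions, not the scoring function from [2].

```python
def expected_reward(trust: dict, requirements: dict) -> float:
    """Score a partner by weighting its per-dimension trust values
    by how much the goal rewards each dimension (illustrative)."""
    return sum(requirements[d] * trust[d] for d in requirements)


# Hypothetical per-dimension trust in two candidate partners.
partners = {
    "A": {"quality": 0.9, "timeliness": 0.5, "cost": 0.6},
    "B": {"quality": 0.6, "timeliness": 0.9, "cost": 0.8},
}

# A deadline-critical goal weights timeliness heavily.
goal = {"quality": 0.2, "timeliness": 0.6, "cost": 0.2}

best = max(partners, key=lambda p: expected_reward(partners[p], goal))
print(best)  # the timeliness-strong partner wins for this goal
```

A single-dimensional strategy that looked only at quality would pick A; weighting all three requirements picks B, which is the kind of difference the multi-dimensional approach is meant to capture.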


The concept of multi-dimensional trust is a flexible tool that can be used in any situation exhibiting goal-driven behavior where the goal has multiple components. The experimental results presented in [2] show higher goal achievement when using the complete partner selection strategy rather than a single-dimensional strategy in cases with multi-dimensional goal rewards. Another benefit is that multi-dimensional trust models allow the uninterrupted pursuit of rapidly changing goals: if a new goal introduced to an agent system can be decomposed into the same basic elements as a previous goal, no new trust models need to be learned.
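The reuse property can be sketched briefly: if trust is learned per dimension, a new goal only supplies new weights over those same dimensions, and the learned estimates carry over unchanged. The dictionaries and the weighted-sum scoring below are hypothetical illustrations.

```python
# Per-dimension trust in a partner, learned from earlier interactions
# (hypothetical values for illustration).
learned = {"quality": 0.8, "timeliness": 0.7, "cost": 0.9}


def score(goal_weights: dict, trust: dict = learned) -> float:
    """A new goal contributes only new requirement weights; the
    per-dimension trust estimates are reused without relearning."""
    return sum(goal_weights[d] * trust[d] for d in goal_weights)


old_goal = {"quality": 0.5, "timeliness": 0.5}
new_goal = {"timeliness": 0.3, "cost": 0.7}  # same elements, new mix

print(round(score(old_goal), 2), round(score(new_goal), 2))
```

Because `new_goal` decomposes into dimensions the agent has already modeled, it can be pursued immediately, with no interruption for fresh trust learning.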


The value of trust modeling has been shown for intrusion detection scenarios [3]. A fully connected system of agents can share interaction experience to identify and avoid a malicious intruder, and degrading the connectivity of the network has little impact on the algorithm’s effectiveness. Combining these trust-based techniques with conventional security methods can yield systems with unprecedented security.



[1] DeAngelis, D.; and Barber K.S. 2006. Security Applications of Trust in Multi-Agent Systems. Masters Thesis, The University of Texas at Austin.

[2] Gujral, N.; DeAngelis, D.; Fullam, K.; and Barber, K.S. 2006. Modeling Multi-Dimensional Trust. In The Workshop on Trust in Agent Societies at The Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, 35-41.

[3] DeAngelis, D.; Fullam, K.; and Barber, K.S. 2005. Effects of Communication Disruption in Mobile Agent Trust Assessments for Distributed Security. In The Workshop on Trust in Agent Societies at The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, 27-37.