Formalizing Evidence and Trust for User Authorization
Department of Computer Sciences Phone: 765-494-6013 Email: bb@cs.purdue.edu
Leszek Lilien Department of Computer Sciences Phone: 765-496-2718 Email: llilien@cs.purdue.edu
This research is developing a formal model of trust, authorization, fraud, and privacy. The model incorporates comprehensive aspects of trust from both social life and computer science applications. By considering the associated contexts, it automates the evaluation of trust under uncertain evidence and dynamic interactions. Trust is being integrated with authorization and authentication mechanisms for use in open computing environments so that applications can use these models.
This research presents an authorization framework based on uncertain evidence and dynamic trust. A prototype called TERA (Trust-Enhanced Role Assignment) has been built for experimental studies. The TERA prototype evaluates the trust of a user from her behaviors. It decides whether a user is authorized for an operation based on the policies, the evidence, and the degree of trust. The reliability of the evidence is based on the trust of the evidence provider. A user's trust value is dynamically updated when additional data on behaviors is available. The trust information is managed by a reputation server.
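To make the idea of dynamically updating a user's trust value concrete, here is a minimal sketch (an illustration, not the project's actual algorithm) that blends each new behavior outcome into a running trust score. The function name, the `[0, 1]` scoring, and the mixing weight are all assumptions for this example:

```python
def update_trust(current_trust, outcome, weight=0.2):
    """Blend a new behavior outcome into the running trust value.

    `weight` controls how quickly trust reacts to new evidence:
    a higher weight makes recent behaviors count more.
    """
    return (1 - weight) * current_trust + weight * outcome

trust = 0.5  # neutral starting point for a new user
for outcome in [1.0, 1.0, 0.0, 1.0]:  # observed behaviors (1 = good, 0 = bad)
    trust = update_trust(trust, outcome)
```

This exponential-moving-average style of update is one common choice because a single bad interaction lowers trust noticeably, while rebuilding trust requires a sustained run of good behavior.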
Four user behavior patterns have been identified. These patterns have been integrated into the TERA prototype to simulate users with different levels of trustworthiness, and they serve as benchmarks for evaluating trust/reputation systems. Two algorithms have been developed to determine a user's trust value from a sequence of her interactions and past behaviors. A classification algorithm has been designed to build user-role profiles. Experiments have been conducted on discovering the intention behind a sequence of behaviors. The research on fraud formalization is being combined with known system vulnerabilities to devise anomaly detectors, state transition analysis, and risk analysis.
The mischievous-behavior detection mechanisms have been adopted to improve security and privacy in distributed systems. An intruder identification mechanism has been developed to detect entities that provide false or misleading information. A quorum-based method has been adopted to detect coordinated attacks. The trust relations among the members of a peer-to-peer system are applied to protect the privacy of peers. The trustworthiness of a peer is assessed based on its behaviors and other peers' recommendations.
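Assessing a peer from its own behavior plus other peers' recommendations can be sketched as a weighted aggregation, with each recommendation discounted by the recommender's own trustworthiness. This is a hypothetical illustration; the function name and the 70/30 split between direct and recommended trust are assumptions, not the project's parameters:

```python
def aggregate_trust(direct_trust, recommendations, direct_weight=0.7):
    """Combine direct (behavior-based) trust with peer recommendations.

    recommendations: list of (recommender_trust, recommended_value) pairs,
    all values in [0, 1]. Each recommendation is weighted by how much we
    trust the recommender, so untrusted peers have little influence.
    """
    if not recommendations:
        return direct_trust
    total = sum(rt for rt, _ in recommendations)
    if total == 0:
        return direct_trust
    rec = sum(rt * val for rt, val in recommendations) / total
    return direct_weight * direct_trust + (1 - direct_weight) * rec

# Direct trust 0.8; one weak recommender says 0.4, one strong one says 0.7.
alice = aggregate_trust(0.8, [(0.5, 0.4), (1.0, 0.7)])  # -> 0.74
```

Discounting by recommender trust is one simple defense against dishonest recommendations, since a peer already identified as untrustworthy contributes almost nothing to the aggregate.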
Future research will enhance the efficient use of machine learning and incentive-based distribution of work.
This research has applications in improving healthcare delivery. The objective is achieved by enhancing trust in timely data exchange among patients, physicians, and nurses. A framework enabling distributed and pervasive data access in healthcare has been proposed, with a focus on trust, integration, privacy, and usability.
The Ph.D. students are learning about the formalization of difficult concepts such as trust, evidence, and fraud. The design of experiments to quantify these concepts and evaluate them in terms of malicious behavior and interactions is unique in computer science. The research has taught students to apply formal methods from philosophy, statistics, and machine reasoning to practical problems in database processing. Two minority Ph.D. students have been trained in database security research practice and experiments. This research has also helped upgrade course material in database classes, going beyond reliability, integrity, and security toward the notion of trust as a measure.
The formalization and detection of fraud have been studied based on sequences of transactions. Several types of fraud have been identified, and the costs associated with them are being studied experimentally. Three deceiving intentions have been identified based on behavior patterns, and a deceiving-intention predictor has been developed to detect unauthorized access and fraud.
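One deceiving intention commonly discussed in trust research is "prepare then cheat": behaving well long enough to accumulate trust, then misbehaving. The following toy detector flags that pattern in a behavior sequence; it is only an illustration of the idea, not the project's predictor, and the window size and thresholds are invented for the example:

```python
def looks_deceptive(history, window=5, threshold=0.8):
    """Flag a 'prepare-then-cheat' pattern in a behavior sequence.

    history: list of outcomes, oldest first (1 = honest, 0 = dishonest).
    Returns True when a long run of mostly-honest behavior is followed
    by mostly-dishonest behavior in the most recent `window` interactions.
    """
    if len(history) <= window:
        return False  # too little history to judge
    earlier, recent = history[:-window], history[-window:]
    built_trust = sum(earlier) / len(earlier) >= threshold
    now_cheating = sum(recent) / len(recent) <= 1 - threshold
    return built_trust and now_cheating
```

A consistently honest user is never flagged, while a user whose recent behavior sharply contradicts an earlier honest record is.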
Experiments are being conducted to study the detection capability of the deceiving-intention predictor when multiple collaborators misbehave intentionally in a coordinated manner. In the current implementation, the authenticity and integrity of the interaction histories and evidence are protected using cryptographic techniques.
The tradeoff between privacy and trust has been investigated. The objective is to build a given level of trust with the least loss of privacy. The research involves estimating the loss of privacy and the gain in trust from disclosing a set of evidence that contains private information. In the probability method, privacy is measured as the difference between entropies. Bayesian networks and kernel density estimation are being adopted to compute the conditional probabilities for entropy evaluation. In the lattice method, privacy loss is measured as the least upper bound of the privacy levels of the candidate evidence.
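The entropy-difference measure in the probability method can be shown with a small numeric sketch. The concrete distributions below are illustrative assumptions: an attribute initially uniform over four possible values is narrowed to two values by a disclosed piece of evidence, so one bit of privacy is lost:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before disclosure: the private attribute is uniform over 4 values.
before = entropy([0.25] * 4)   # 2.0 bits of uncertainty
# After disclosure: the evidence narrows it to 2 equally likely values.
after = entropy([0.5, 0.5])    # 1.0 bit of uncertainty
privacy_loss = before - after  # 1.0 bit of privacy lost
```

In the full method the post-disclosure distribution is a conditional one, which is why Bayesian networks and kernel density estimation are needed to estimate it before the entropies can be compared.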
Current research efforts grant privileges to a user based on
her properties, which are demonstrated by digital credentials (evidence).
Holding credentials does not certify that the user will not carry out harmful
actions. Authorization based on evidence as well as trust makes the access
control adaptable to users' misbehavior. Existing computational trust
management models can be broadly categorized into authorization-based and
reputation-based trust management. Our research effort integrates them into one
framework. Evidence testifies to certain properties of an entity, or subject. A
computational evidence theory, such as Bayesian networks, Dempster-Shafer theory,
or subjective logic, deals with the evaluation and combination of evidence. In
our research, Dempster-Shafer theory is used to integrate reputation, and
subjective logic is adopted to evaluate recommendations.
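Dempster-Shafer theory combines independent bodies of evidence with Dempster's rule: mass assigned to conflicting (empty-intersection) hypotheses is discarded and the remainder renormalized. The sketch below applies the rule to a frame {trust, distrust}, where mass on the whole frame represents uncertainty; the particular mass values are made-up examples:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Products of masses whose hypotheses conflict (empty intersection)
    are dropped, and the surviving masses are renormalized.
    """
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
U = T | D  # mass on the whole frame = uncertainty
m1 = {T: 0.6, U: 0.4}           # one source: fairly trusting, somewhat unsure
m2 = {T: 0.5, D: 0.2, U: 0.3}   # another source: mixed opinion
m = combine(m1, m2)             # combined belief, e.g. m[T] ~ 0.77
```

Because both sources place most of their mass on trust, the combined mass on trust exceeds either input, while the small conflicting mass (0.6 x 0.2) is normalized away.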
The website contains information about the project participants,
including collaborators and graduate students. Links to three news websites
that publicize this research effort are provided. The research papers,
presentations, and proposals can be downloaded. The TERA prototype software and
its demonstration are available for public access.
The demonstration of the TERA prototype consists of four
video clips, including an overview of the software, an introduction to its
components, and two authorization examples. The demonstration illustrates (a)
how authorization based on evidence and trust cooperates with role-based access
control, (b) how a user's behaviors affect her trust value, (c) how the
four trust production algorithms realize different heuristics and subjectivity,
and (d) how trust values are propagated among TERM servers.