
Expertise Modeling, Identification, and Motivation

Suratna Budalakoti, David DeAngelis and K. Suzanne Barber
The Laboratory for Intelligent Processes and Systems
The University of Texas at Austin
{sbudalakoti, dave, barber}

Untapped capabilities permeate large-scale networks. Search engines specialize in identifying existing static documents on a network that are relevant to a given query. We propose a method of connecting users and resources that leverages both the static and dynamic (live) capabilities of a network of human users. No single user has complete knowledge across many different domains; on a large network, however, it is likely that somebody has expertise in nearly every question domain. We intend to develop a framework that facilitates the flow of information from those who have it to those who need it. We propose implementing this framework to answer questions that benefit from uniquely human insight. Such questions are often recommendation-based and built on qualitative tradeoffs best suited to human judgment. For such a system to succeed, it must be able to identify and access experts in any given area and motivate them to participate. Our research aims to find algorithmic approaches that fulfill these goals. Prior research in our laboratory developed a working demonstration of a system capable of identifying domain experts [1].

The described high-level problem can be decomposed into several sub-problems. The first is the identification of expertise. Expertise is defined as the ability of a user to answer a given question to the satisfaction of the questioner. Given a question, how can the pool of potential answerers be indexed and searched to predict who is capable of providing the best answer? The notion of a "best answer" is not straightforward. Because the intent behind this system, and the key to its motivation mechanism, is to help questioners, answers are evaluated by the questioner. This lets us sidestep issues of answer scope and depth: a simple URL pointing to an existing, thorough discussion will often be more useful and expedient than an answer written from scratch. This flexibility allows our system to leverage both static (documented, searchable answers) and dynamic (live response) resources.

Estimating which user is most likely to give the best answer is a challenging problem, and requires a complex model of human expertise along several dimensions: a) expertise dimensions: the various distinct areas of human knowledge, and the expert's ability in each of these areas; b) compatibility: the likelihood that the answerer's personality and approach to answering questions match those of the questioner; and c) willingness: the probability that the answerer will invest the time required to answer the question. We use an approach combining graph-based clustering [2], hyperlink-based web modeling [3], and collaborative filtering [4] to identify the most suitable expert for a problem. Additionally, we are investigating signaling theory, a branch of game theory that examines communications between individuals as signals broadcasting their 'quality'.
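To illustrate the hyperlink-based modeling component [3], the sketch below applies a HITS-style iteration to a bipartite questioner-answerer graph. This is an illustrative simplification, not the system's actual implementation: an edge points from a questioner to each user who answered them, and users who attract answers from many active questioners accumulate authority, a rough proxy for expertise.

```python
# Illustrative sketch (assumed, not the authors' implementation):
# HITS-style authority scoring on a questioner-answerer graph.

def hits(edges, iterations=20):
    """edges: list of (questioner, answerer) pairs."""
    nodes = {n for edge in edges for n in edge}
    hub = {n: 1.0 for n in nodes}    # quality as a questioner
    auth = {n: 1.0 for n in nodes}   # quality as an answerer
    for _ in range(iterations):
        # Authority: sum of hub scores of questioners pointing in.
        auth = {n: sum(hub[q] for q, a in edges if a == n) for n in nodes}
        # Hub: sum of authority scores of answerers pointed to.
        hub = {n: sum(auth[a] for q, a in edges if q == n) for n in nodes}
        # Normalize so the scores stay bounded across iterations.
        na = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {n: v / na for n, v in auth.items()}
        hub = {n: v / nh for n, v in hub.items()}
    return auth, hub

# Hypothetical interaction data: carol answered two questioners, dan one.
edges = [("alice", "carol"), ("bob", "carol"), ("alice", "dan")]
auth, hub = hits(edges)
# carol receives answers from both questioners, so her authority
# score exceeds dan's.
```

In a real deployment the edges would carry questioner-satisfaction weights rather than being unweighted, but the fixed-point structure is the same.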

Multi-dimensional models of expertise are crucial to the problem: an expert on one subject is not necessarily an expert on another, even when the two areas are closely related. While question-answer systems already exist on the web, most current implementations have little or no modeling of user expertise, so an expert is expected to wade through many questions before finding one that is suitable. Very specialized questions may never be viewed by the few qualified experts, and the result is that only simple, generic questions usually get answered. By automating the process of finding an expert for a question, we remove the time an expert must invest, thereby reducing the cost of participation and improving overall productivity.
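A minimal sketch of what multi-dimensional matching could look like, assuming each user carries a vector of per-topic expertise scores and each question carries a topic profile (all names and topic sets here are hypothetical):

```python
# Hypothetical sketch: route a question to the user whose per-topic
# expertise vector best aligns with the question's topic profile.
import math

def cosine(u, v):
    """Cosine similarity between two sparse dict-of-topic vectors."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def route(question_topics, experts):
    """Return the expert whose profile best matches the question."""
    return max(experts, key=lambda name: cosine(question_topics, experts[name]))

experts = {
    "alice": {"databases": 0.9, "sql": 0.8},       # database specialist
    "bob": {"networking": 0.9, "databases": 0.2},  # networking specialist
}
question = {"sql": 1.0, "databases": 0.5}
# The question is database-flavored, so it routes to alice.
```

The point of the multi-dimensional representation is exactly the failure mode described above: a single scalar "reputation" score would let bob's networking expertise leak into database questions.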

Another avenue of research we are investigating is expert response motivation: the problem of motivating experts to participate. We attempt this in three ways: a) reducing the cost of participation by reducing the time required to find the right question; b) making participation attractive to experts by identifying the more valuable experts; and c) providing incentives for others to answer the original expert's questions more promptly and satisfactorily. We believe a virtuous cycle can be established in which, over time, people with high expertise become more likely to participate in return for high-quality responses from other highly expert users.
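One way the incentive cycle above could be made concrete (an illustrative sketch, not the mechanism the system actually uses) is a credit economy: answering earns credit in proportion to questioner satisfaction, and accumulated credit boosts the priority of the answerer's own future questions.

```python
# Illustrative credit-economy sketch (assumed, not the authors' design):
# well-rated answers earn credit, and credit raises the priority of the
# answerer's own questions, rewarding continued participation.

class Community:
    def __init__(self):
        self.credit = {}

    def record_answer(self, answerer, questioner_rating):
        """Award credit proportional to questioner satisfaction (0-1)."""
        self.credit[answerer] = self.credit.get(answerer, 0.0) + questioner_rating

    def priority(self, questioner):
        """Questions from active, well-rated answerers rank higher."""
        return 1.0 + self.credit.get(questioner, 0.0)

c = Community()
c.record_answer("erin", 0.9)  # erin gives a well-rated answer
c.record_answer("erin", 0.7)  # ...and another
# erin's next question now outranks one from a non-participant.
```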

[1] Godil, H. Finding Experts by Modeling Domain Expertise. Master's Thesis, The University of Texas at Austin, 2006.
[2] Dhillon, I. S. Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco, CA, August 2001.
[3] Kleinberg, J. Authoritative sources in a hyperlinked environment. In Proceedings of the Ninth ACM-SIAM Symposium on Discrete Algorithms, 1998.
[4] Goldberg, D., Nichols, D., Oki, B. M., and Terry, D. Using collaborative filtering to weave an information tapestry. Communications of the ACM, 35(12):61-70, December 1992.
