Expertise finding is the use of tools for finding and assessing individual expertise. In the recruitment industry, expertise finding is the problem of searching for employable candidates with a certain required skill set. In other words, it is the challenge of linking humans to expertise areas, and as such is a sub-problem of expertise retrieval (the other sub-problem being expertise profiling).[1]
It can be argued that human expertise[2] is more valuable than capital, means of production, or intellectual property.[citation needed] In contrast to expertise, other aspects of capitalism are now relatively generic: access to capital is global, as is access to means of production in many areas of manufacturing, and intellectual property can be similarly licensed. Expertise finding is also a key aspect of institutional memory, since without its experts an institution is effectively decapitated. Yet finding and "licensing" expertise, the key to the effective use of these other resources, remains much harder, starting with the very first step: finding expertise that one can trust.
Until very recently, finding expertise required a mix of individual, social and collaborative practices, a haphazard process at best. Mostly, it involved contacting individuals one trusts and asking them for referrals, while hoping that one's judgment about those individuals is justified and that their answers are thoughtful.
In the last fifteen years, a class of knowledge management software termed "expertise locating systems" has emerged to facilitate and improve the quality of expertise finding. These systems range from social networking platforms to knowledge bases. Some, like those in the social networking realm, rely on users to connect with each other, thus using social filtering to act as "recommender systems".
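As a toy illustration of social filtering (all names and endorsement data below are hypothetical, and the approach is a minimal sketch rather than any particular product's algorithm), experts on a topic can be ranked by how many of a searcher's own trusted contacts endorse them:

```python
from collections import Counter

# Hypothetical social graph: person -> set of trusted contacts
contacts = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
}

# Hypothetical endorsements: person -> topic -> experts they vouch for
endorsements = {
    "bob": {"statistics": {"erin"}},
    "carol": {"statistics": {"erin", "frank"}},
    "dave": {"statistics": {"frank"}},
}

def recommend_experts(searcher: str, topic: str) -> list[tuple[str, int]]:
    """Rank experts on a topic by how many of the searcher's contacts endorse them."""
    votes = Counter()
    for contact in contacts.get(searcher, set()):
        for expert in endorsements.get(contact, {}).get(topic, set()):
            votes[expert] += 1
    return votes.most_common()

print(recommend_experts("alice", "statistics"))
# [('erin', 2), ('frank', 1)] -- erin is endorsed by two of alice's contacts
```

Because recommendations flow only from the searcher's own contacts, the ranking is personalized: a different searcher with different contacts would see a different ordering.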
At the other end of the spectrum are specialized knowledge bases that rely on experts to populate a dedicated database with their self-determined areas of expertise and contributions, without relying on user recommendations. Hybrids that combine expert-populated content with user recommendations also exist, and are arguably more valuable for doing so.
Still other expertise knowledge bases rely strictly on external manifestations of expertise, herein termed "gated objects", e.g., citation impact of scientific papers, or data-mining approaches in which many of an expert's work products are collated. Such systems are more likely to be free of user-introduced biases (e.g., ResearchScorecard), though the use of computational methods can introduce other biases.
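Citation impact is a concrete example of such a gated signal. The following is a minimal sketch of computing an author's h-index (the largest h such that the author has h papers with at least h citations each), assuming per-paper citation counts have already been collated from a bibliographic source; the counts shown are hypothetical:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i      # the i-th ranked paper still has at least i citations
        else:
            break
    return h

# Hypothetical per-paper citation counts for one author
print(h_index([25, 8, 5, 3, 3, 1]))
# 3: at least 3 papers have >= 3 citations, but fewer than 4 have >= 4
```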
There are also hybrid approaches that combine user-generated data (e.g., member profiles), community-based signals (e.g., recommendations and skill endorsements), and personalized signals (e.g., the social connection between the searcher and the results).
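A minimal sketch of such a hybrid ranking, assuming each candidate comes with three precomputed signals already normalized to the range 0 to 1 (the weights and field names are illustrative assumptions, not taken from any particular system):

```python
def hybrid_score(profile_match: float, endorsements: float, social_proximity: float,
                 w_profile: float = 0.5, w_community: float = 0.3,
                 w_personal: float = 0.2) -> float:
    """Weighted combination of user-generated, community, and personalized signals."""
    return (w_profile * profile_match
            + w_community * endorsements
            + w_personal * social_proximity)

# Hypothetical candidates with normalized signals
candidates = {
    "erin":  {"profile_match": 0.9, "endorsements": 0.4, "social_proximity": 0.1},
    "frank": {"profile_match": 0.6, "endorsements": 0.8, "social_proximity": 0.7},
}
ranked = sorted(candidates, key=lambda name: hybrid_score(**candidates[name]),
                reverse=True)
print(ranked)  # ['frank', 'erin'] under these illustrative weights
```

How the weights are chosen is itself a design decision: shifting weight toward the personalized signal makes results more dependent on who is searching.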
Examples of the systems outlined above are listed in Table 1.
Table 1: A classification of expertise location systems
| Type | Application domain | Data source | Examples |
|---|---|---|---|
| Social networking | Professional networking | User-generated and community-generated | |
| Scientific literature | Identifying publications with strongest research impact | Third-party generated | |
| Scientific literature | Expertise search | Software | |
| Knowledge base | Private expertise database | User-generated | |
| Knowledge base | Publicly accessible expertise database | User-generated | |
| Knowledge base | Private expertise database | Third-party generated | |
| Knowledge base | Publicly accessible expertise database | Third-party generated | |
| Blog search engines | | Third-party generated | |
A number of interesting problems follow from the use of expertise finding systems. In particular, means of classifying and ranking expertise (and therefore experts) become essential as soon as a query returns more than a handful of experts, which in turn raises social problems associated with such systems.
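As a toy illustration of the ranking problem (the scoring scheme is illustrative and not drawn from any cited system), experts can be ordered by how well their self-described expertise profiles, here hypothetical, match the query terms:

```python
# Hypothetical self-described expertise profiles
profiles = {
    "erin": "bayesian statistics causal inference clinical trials",
    "frank": "statistics machine learning natural language processing",
}

def rank_by_query(query: str) -> list[tuple[str, float]]:
    """Score each expert by the fraction of query terms found in their profile."""
    terms = set(query.lower().split())
    scores = {
        name: len(terms & set(text.split())) / len(terms)
        for name, text in profiles.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_by_query("bayesian statistics"))
# [('erin', 1.0), ('frank', 0.5)]
```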
Many types of data sources have been used to infer expertise. They can be broadly categorized based on whether they measure "raw" contributions provided by the expert, or whether some sort of filter is applied to these contributions.
A variety of unfiltered data sources, that is, raw contributions taken as-is from the expert, have been used to assess expertise.
Filtered data sources, that is, contributions that require approval by third parties (grant committees, referees, patent offices, etc.), are particularly valuable for measuring expertise because they minimize biases that follow from popularity or other social factors.
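The distinction can be illustrated with a toy scoring function that weights gated contributions more heavily than raw ones; the contribution types and weights below are assumptions for illustration only:

```python
# Illustrative weights: filtered (third-party-approved) contributions count
# more than unfiltered (self-published) ones.
WEIGHTS = {
    "peer_reviewed_paper": 3.0,   # filtered: accepted by referees
    "granted_patent": 3.0,        # filtered: approved by a patent office
    "blog_post": 1.0,             # unfiltered: self-published
    "mailing_list_answer": 0.5,   # unfiltered: raw contribution
}

def expertise_score(contributions: dict[str, int]) -> float:
    """Sum contribution counts, weighted by how strongly each type is gated."""
    return sum(WEIGHTS.get(kind, 0.0) * count
               for kind, count in contributions.items())

print(expertise_score({"peer_reviewed_paper": 4, "blog_post": 10}))  # 22.0
```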
In academia, a related problem is collaborator discovery, where the goal is to suggest suitable collaborators to a researcher. Collaborator discovery can be distinguished from expertise finding in that expertise finding is an asymmetric problem (an employer looking for an employee), whereas collaborator discovery aims to establish more symmetric relationships (collaborations). Also, while in expertise finding the task can often be clearly characterized, this is rarely the case in academic research, where future goals are fuzzier.[4]
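A minimal sketch of collaborator discovery, assuming each researcher is represented by a set of research topics (all names and topics are hypothetical): candidates are scored by Jaccard overlap, a symmetric measure that mirrors the symmetric nature of collaboration:

```python
# Hypothetical researcher-to-topics mapping
topics = {
    "grace": {"nlp", "information retrieval", "expert search"},
    "heidi": {"nlp", "machine translation"},
    "ivan":  {"databases", "information retrieval"},
}

def jaccard(a: set[str], b: set[str]) -> float:
    """Symmetric similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b)

def suggest_collaborators(researcher: str) -> list[tuple[str, float]]:
    """Rank other researchers by topic overlap with the given researcher."""
    me = topics[researcher]
    scores = [(other, jaccard(me, them))
              for other, them in topics.items() if other != researcher]
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

print(suggest_collaborators("grace"))
# [('heidi', 0.25), ('ivan', 0.25)] -- each shares one topic with grace
```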