A BK-tree (short for Burkhard-Keller tree) is a metric tree suggested by Walter Austin Burkhard and Robert M. Keller, specifically adapted to discrete metric spaces. For simplicity, given a way to measure the distance between any two elements of a set, a BK-tree consists of a single root node and several subtrees, each connected to the root as a child. All nodes in a given subtree are at the same distance from the root node, and the weight of the edge connecting that subtree to the root equals this distance, as shown in the picture. Each subtree of a BK-tree is itself a BK-tree.
BK-trees can be used for approximate string matching in a dictionary. The problem is formulated as follows: given a pattern string and a text string, find, among all substrings of the text, the one with the smallest edit distance to the pattern.
A straightforward way to use a BK-tree here is to insert the pattern string and all substrings of the text into the tree, and then find the subtree containing the nodes at the smallest distance from the root. This leads to a high time complexity, since a text of length n has O(n^2) substrings and each insertion performs several edit-distance computations. However, this approach is exhaustive: it finds every substring with the smallest edit distance to the pattern.
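Before introducing the tree itself, the problem statement can be made concrete with a brute-force reference (a sketch, not the BK-tree method; the function names `levenshtein` and `closest_substring` are illustrative, not from the original text):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def closest_substring(text: str, pattern: str) -> tuple[str, int]:
    """Brute force: try every substring of `text` and keep the closest one.

    This is the exhaustive baseline that the BK-tree approach described
    above tries to organize more cleverly; it already shows the quadratic
    number of substrings to examine.
    """
    best, best_d = "", levenshtein("", pattern)
    for i in range(len(text)):
        for j in range(i + 1, len(text) + 1):
            d = levenshtein(text[i:j], pattern)
            if d < best_d:
                best, best_d = text[i:j], d
    return best, best_d
```

For example, `closest_substring("abcdef", "bcd")` finds the exact match `"bcd"` at distance 0.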
This picture depicts the BK-tree for the set of words {"book", "books", "cake", "boo", "boon", "cook", "cape", "cart"} obtained by using the Levenshtein distance.
The BK-tree is built so that each node holds a single word, and an arc of weight k connects a node to the child whose subtree contains exactly the stored words at Levenshtein distance k from the parent's word; in particular, no two arcs leaving the same node have the same weight.
The insertion primitive is used to populate a BK-tree according to a discrete metric.
Input: a BK-tree and a new element to insert.
Output: the BK-tree with the new element added (the tree is unchanged if the element is already present).
Algorithm: if the tree is empty, the new element becomes the root. Otherwise, compute the distance k between the new element and the current node's element; if the current node has no outgoing arc of weight k, attach the new element there as a child along an arc of weight k; otherwise, follow the arc of weight k and repeat.
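The insertion steps above can be sketched as follows (a minimal illustration, assuming Python; the `BKTree` class, the tuple-based node layout, and the `levenshtein` helper are this sketch's own choices, not part of the original description):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


class BKTree:
    """A node is a pair (word, children), where children maps an arc
    weight k to the child node at distance k from this node's word."""

    def __init__(self, distance=levenshtein):
        self.distance = distance
        self.root = None

    def insert(self, word: str) -> None:
        if self.root is None:            # empty tree: word becomes the root
            self.root = (word, {})
            return
        node = self.root
        while True:
            k = self.distance(word, node[0])
            if k == 0:                   # word already present: nothing to do
                return
            child = node[1].get(k)
            if child is None:            # no arc of weight k: attach here
                node[1][k] = (word, {})
                return
            node = child                 # otherwise follow the weight-k arc


# Building the tree from the article's eight words, in the listed order,
# reproduces the figure: "books" hangs off the root at weight 1 and
# "cake" at weight 4.
tree = BKTree()
for w in ["book", "books", "cake", "boo", "boon", "cook", "cape", "cart"]:
    tree.insert(w)
```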
Given a searched element, the lookup primitive traverses the BK-tree to find the stored element closest to it. The key idea is to restrict the exploration of the tree to nodes that can still improve the best candidate found so far, by taking advantage of the BK-tree organization and of the triangle inequality (cut-off criterion).
Input: a non-empty BK-tree and a searched element.
Output: the stored element closest to the searched element, together with its distance.
Algorithm: maintain the best candidate found so far and a set of nodes to explore, initially containing only the root. Repeatedly pop a node from the set, compute the distance d from its element to the searched element, and update the best candidate if d improves on the current best distance. Then, for each outgoing arc of weight k, insert the child node into the set only if |k - d| is smaller than the current best distance; by the triangle inequality, no element in that child's subtree can otherwise be closer than the current best candidate. When the set is empty, return the best candidate.
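The lookup can be sketched alongside a minimal insertion routine (an illustration in Python under the same assumptions as before: the tuple-based node layout and the helper names `insert`, `lookup`, and `levenshtein` are this sketch's own, not from the original text):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def insert(root, word, dist=levenshtein):
    """Insert `word` into the tree and return the (possibly new) root."""
    if root is None:
        return (word, {})
    node = root
    while True:
        k = dist(word, node[0])
        if k == 0:                       # already present
            return root
        if k not in node[1]:             # no weight-k arc: attach here
            node[1][k] = (word, {})
            return root
        node = node[1][k]


def lookup(root, query, dist=levenshtein):
    """Return (closest stored word, its distance) for a non-empty tree."""
    best, best_d = None, float("inf")
    todo = [root]                        # set of nodes still worth exploring
    while todo:
        word, children = todo.pop()
        d = dist(word, query)
        if d < best_d:                   # new best candidate
            best, best_d = word, d
        for k, child in children.items():
            # Cut-off criterion: by the triangle inequality, the subtree
            # under an arc of weight k can only hold a word closer than
            # best_d if |k - d| < best_d; otherwise prune it.
            if abs(k - d) < best_d:
                todo.append(child)
    return best, best_d


# Build the article's example tree and query it with "cool".
root = None
for w in ["book", "books", "cake", "boo", "boon", "cook", "cape", "cart"]:
    root = insert(root, w)
```

Querying this tree with `lookup(root, "cool")` yields `("cook", 1)`, matching the worked example.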
Consider the example eight-node BK-tree shown above and the searched element "cool". The set of nodes to explore is initialized to contain the root of the tree, which is subsequently popped as the first node, holding "book". The distance from "book" to "cool" is 2, and this becomes the best (i.e. smallest) distance found so far. Next, each outgoing arc from the root is considered in turn: the arc from "book" to "books" has weight 1, and since |1 - 2| = 1 is less than the best distance 2, the node containing "books" is inserted into the set for further processing. The next arc, from "book" to "cake", has weight 4, and since |4 - 2| = 2 is not less than 2, the node containing "cake" is not inserted into the set. Therefore, the subtree rooted at "cake" is pruned from the search, as the word closest to "cool" cannot appear in that subtree. To see why this pruning is correct, note that by construction every word in the subtree rooted at "cake" is at distance exactly 4 from "book". A candidate word in that subtree at distance less than 2 from "cool" would violate the triangle inequality: the distance from "cool" to "book" (which is 2) plus the distance from "cool" to the candidate (which would be less than 2) would sum to less than 4, falling short of the distance from "book" to the candidate (which is exactly 4). Therefore, it is safe to disregard the entire subtree rooted at "cake".
Next, the node containing "books" is popped from the set, and the distance from "cool" to "books" is computed as 3. Since 3 does not improve on 2, the best distance remains 2, and the single outgoing arc from the node containing "books" is considered: it has weight 2 and leads to "boo", and since |2 - 3| = 1 is less than 2, the node containing "boo" is inserted into the set. Next, the node containing "boo" is popped, and the distance from "cool" to "boo" is 2, which again does not improve upon the best distance. Each outgoing arc from "boo" is now considered: the arc from "boo" to "boon" has weight 1, and since |1 - 2| = 1 is less than 2, "boon" is added to the set; similarly, the arc from "boo" to "cook" has weight 2, and since |2 - 2| = 0 is less than 2, "cook" is also added.
Finally, the two remaining nodes in the set are considered in arbitrary order: suppose the node containing "cook" is popped first, improving the best distance to 1 (the distance from "cool" to "cook"), and the node containing "boon" is popped last; "boon" is at distance 2 from "cool" and therefore does not improve the best result. Finally, "cook" is returned as the answer, with a best distance of 1.
The efficiency of BK-trees depends strongly on the structure of the tree and the distribution of distances between stored elements.
In the average case, both insertion and lookup on a tree of n elements take O(log n) time (measured in distance computations), assuming the tree remains relatively balanced and the distance metric distributes elements evenly.
In the worst case, when the data or the distance function causes the tree to become highly unbalanced (for example, when many elements are at similar distances), both insertion and lookup can degrade to O(n).
In practical applications, the actual performance depends on the choice of distance metric (e.g., the Levenshtein distance) and on the allowed search radius during approximate matching.