In computing, a **threaded binary tree** is a binary tree variant that facilitates traversal in a particular order (often the same order already defined for the tree).


An entire binary search tree can be easily traversed in order of the main key, but given only a pointer to a node, finding the node which comes next may be slow or impossible. For example, a leaf node by definition has no children, so given only a pointer to a leaf node no other node can be reached, including the desired "next" node. A threaded tree adds extra information in some or all nodes, so that the "next" node can be found quickly. It can also be traversed without recursion and without the extra storage (proportional to the tree's depth) that recursion requires.

A threaded binary tree is defined as follows:

"A binary tree is *threaded* by making all right child pointers that would normally be null point to the in-order successor of the node (if it exists), and all left child pointers that would normally be null point to the in-order predecessor of the node."^{ [1] }

This definition assumes the traversal order is the same as in-order traversal of the tree. However, pointers can instead (or in addition) be added to tree nodes, rather than replacing null links. Linked lists thus defined are also commonly called "threads", and can be used to enable traversal in any order(s) desired. For example, a tree whose nodes represent information about people might be sorted by name, but have extra threads allowing quick traversal in order of birth date, weight, or any other known characteristic.

Trees, including (but not limited to) binary search trees, can be used to store items in a particular order, such as the value of some property stored in each node, often called a key. One useful operation on such a tree is *traversal*: visiting all the items in order of the key.

A simple recursive traversal algorithm that visits each node of a binary search tree is the following. Assume t is a pointer to a node, or nil. "Visiting" t can mean performing any action on the node t or its contents.

Algorithm traverse(t):

- Input: a pointer t to a node (or nil)
- If *t* = nil, return.
- Else:
  - traverse(left-child(t))
  - Visit t
  - traverse(right-child(t))
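The pseudocode above maps directly onto Python; the `Node` class here is a hypothetical minimal container, not part of the original text:

```python
class Node:
    """Minimal binary-tree node (hypothetical helper for illustration)."""
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def traverse(t, visit):
    """Recursive in-order traversal: left subtree, node, right subtree."""
    if t is None:
        return
    traverse(t.left, visit)
    visit(t)
    traverse(t.right, visit)

# The tree used later in this article, whose in-order traversal is D B A E C:
#         A
#        / \
#       B   C
#      /   /
#     D   E
root = Node('A', Node('B', Node('D')), Node('C', Node('E')))
out = []
traverse(root, lambda n: out.append(n.key))
print(out)  # → ['D', 'B', 'A', 'E', 'C']
```

The `visit` callback keeps the traversal generic: printing, collecting, or any other per-node action can be plugged in without changing the algorithm.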

One problem with this algorithm is that, because of its recursion, it uses stack space proportional to the height of a tree. If the tree is fairly balanced, this amounts to *O*(log *n*) space for a tree containing n elements. In the worst case, when the tree takes the form of a chain, the height of the tree is n so the algorithm takes *O*(*n*) space. A second problem is that all traversals must begin at the root when nodes have pointers only to their children. It is common to have a pointer to a particular node, but that is not sufficient to get back to the rest of the tree unless extra information is added, such as thread pointers.

In this approach, it may not be possible to tell whether the left and/or right pointers in a given node actually point to children, or are a consequence of threading. If the distinction is necessary, adding a single bit to each node is enough to record it.
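One common way to record that bit is a boolean tag per pointer; this is a sketch with my own field names, not a representation prescribed by the text:

```python
class ThreadedNode:
    """Node for a double-threaded binary tree (illustrative field names).

    `left` / `right` hold either a real child or a thread; the boolean
    tag bits record which interpretation applies."""
    def __init__(self, key):
        self.key = key
        self.left = None             # left child, or thread to in-order predecessor
        self.right = None            # right child, or thread to in-order successor
        self.left_is_thread = True   # True until a real left child is attached
        self.right_is_thread = True  # True until a real right child is attached
```

A null pointer with its tag bit set simply means "thread not yet filled in"; attaching a real child clears the corresponding bit.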

In a 1968 textbook, Donald Knuth asked whether a non-recursive algorithm for in-order traversal exists, that uses no stack and leaves the tree unmodified.^{ [2] } One of the solutions to this problem is tree threading, presented by J. H. Morris in 1979.^{ [3] }^{ [4] } In the 1969 follow-up edition,^{ [5] } Knuth attributed the threaded tree representation to Perlis and Thornton (1960).^{ [6] }

Another way to achieve similar goals is to include a pointer in every node, to that node's parent node. Given that, the "next" node can always be reached. "right" pointers are still null whenever there are no right children. To find the "next" node from a node whose right pointer is null, walk up through "parent" pointers until reaching a node whose right pointer is not null, and is not the child you just came up from. That node is the "next" node, and after it come its descendants on the right.
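The parent-pointer walk just described can be sketched as follows; `PNode` and `successor` are hypothetical helper names for illustration:

```python
class PNode:
    """Binary-tree node with a parent pointer (hypothetical helper)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right, self.parent = key, left, right, None
        for child in (left, right):
            if child is not None:
                child.parent = self

def successor(node):
    """In-order successor found via parent pointers, as described above."""
    if node.right is not None:
        # The next node is the leftmost node of the right subtree.
        node = node.right
        while node.left is not None:
            node = node.left
        return node
    # Walk up until we arrive at a parent from its left child;
    # that parent is the next node in order.
    while node.parent is not None and node.parent.right is node:
        node = node.parent
    return node.parent  # None if `node` was the last node in order

# In-order D B A E C: the successor of B is A, and the successor of E is C.
d = PNode('D'); b = PNode('B', d); e = PNode('E')
c = PNode('C', e); a = PNode('A', b, c)
print(successor(b).key, successor(e).key)  # → A C
```

Note the cost: finding the successor this way can take time proportional to the tree's height, whereas following a thread is a single pointer dereference.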

It is also possible to discover the parent of a node from a threaded binary tree, without explicit use of parent pointers or a stack, although it is slower. To see this, consider a node *k* with right child *r*. Then the left pointer of *r* must be either a child or a thread back to *k*. In the case that *r* has a left child, that left child must in turn have either a left child of its own or a thread back to *k*, and so on for all successive left children. So by following the chain of left pointers from *r*, we will eventually find a thread pointing back to *k*. The situation is symmetric when *q* is the left child of *p*: we can follow *q*'s right children to a thread pointing ahead to *p*.

- Single threaded: each node is threaded towards **either** the in-order predecessor **or** successor (left **or** right).
- Double threaded: each node is threaded towards **both** the in-order predecessor **and** successor (left **and** right).

In Python:

```python
def parent(node):
    # `is_thread(p)` is assumed to report whether p's outgoing pointer is a
    # thread rather than a real child link (e.g. by checking its tag bit).
    if node is node.tree.root:
        return None
    else:
        x = node
        y = node
        while True:
            if is_thread(y):
                p = y.right
                if p is None or p.left is not node:
                    p = x
                    while not is_thread(p.left):
                        p = p.left
                    p = p.left
                return p
            elif is_thread(x):
                p = x.left
                if p is None or p.right is not node:
                    p = y
                    while not is_thread(p.right):
                        p = p.right
                    p = p.right
                return p
            x = x.left
            y = y.right
```

Threads are references to the predecessors and successors of a node according to an in-order traversal.

The in-order traversal of the threaded tree is `A, B, C, D, E, F, G, H, I`; the predecessor of `E` is `D`, and the successor of `E` is `F`.

Let's make a threaded binary tree out of a normal binary tree.

The in-order traversal for the tree above is D B A E C. So, in the corresponding threaded binary tree, each null pointer is replaced by a thread to the node's in-order neighbour.

In an m-way threaded binary tree with *n* nodes, there are *n·m* − (*n* − 1) void links: the tree has *n·m* pointer fields in total, of which *n* − 1 hold real child links. For example, a binary tree (*m* = 2) with 5 nodes has 2·5 − 4 = 6 void links available for threading.

As this is a non-recursive method for traversal, it has to be an iterative procedure: all the steps for the traversal of a node are placed inside a loop so that the same steps apply to every node in the tree.

We will consider the in-order traversal again. Here, for every node, we first visit the left sub-tree (if it exists, and if and only if we haven't visited it earlier); then we visit the node itself (i.e. print its value, in our case); and then the right sub-tree (if it exists). If there is no right sub-tree, we check for the threaded link and make the threaded node the current node in consideration. Follow the example given below.

- Step 1: For the current node, check whether it has a left child that is not in the visited list. If it has, go to step 2; otherwise go to step 3.
- Step 2: Put that left child in the list of visited nodes and make it the current node in consideration. Go to step 6.
- Step 3: Print the node. If the node has a right child, go to step 4; otherwise go to step 5.
- Step 4: Make the right child the current node. Go to step 6.
- Step 5: If there is a thread node, make it the current node.
- Step 6: If all nodes have been printed, then END; otherwise go to step 1.

| Step | Action | Visited list | Output |
|---|---|---|---|
| 1 | 'A' has a left child, 'B', which has not been visited. So we put 'B' in our list of visited nodes, and 'B' becomes the current node in consideration. | B | |
| 2 | 'B' also has a left child, 'D', which is not in our list of visited nodes. So we put 'D' in that list and make it the current node in consideration. | B D | |
| 3 | 'D' has no left child, so we print 'D'. Then we check for its right child. 'D' has no right child, and thus we check its thread link. It has a thread going to node 'B', so we make 'B' the current node in consideration. | B D | D |
| 4 | 'B' certainly has a left child, but it is already in our list of visited nodes. So we print 'B'. Then we check for its right child, but it doesn't exist, so we make its threaded node (i.e. 'A') the current node in consideration. | B D | D B |
| 5 | 'A' has a left child, 'B', but it is already in the list of visited nodes. So we print 'A'. Then we check for its right child. 'A' has a right child, 'C', which is not in our list of visited nodes, so we add it to that list and make it the current node in consideration. | B D C | D B A |
| 6 | 'C' has 'E' as its left child, and it is not in our list of visited nodes. So we add it to that list and make it the current node in consideration. | B D C E | D B A |
| 7 | Finally, 'E' and then 'C' are printed in the same manner, completing the traversal. | B D C E | D B A E C |
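With tag bits on the pointers, the "visited list" bookkeeping in the steps above becomes unnecessary: the usual iterative traversal just follows child pointers and threads. A sketch, using a hypothetical right-threaded representation of the same example tree:

```python
class Node:
    """Right-threaded binary-tree node (illustrative representation)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.right_is_thread = False

def leftmost(t):
    """Descend to the leftmost (first in-order) node of subtree t."""
    while t.left is not None:
        t = t.left
    return t

def threaded_inorder(root):
    """Iterative in-order traversal of a right-threaded tree:
    no stack, no recursion, O(1) extra space."""
    out = []
    node = leftmost(root)
    while node is not None:
        out.append(node.key)
        if node.right_is_thread:
            node = node.right            # follow thread to in-order successor
        elif node.right is not None:
            node = leftmost(node.right)  # descend into the right subtree
        else:
            node = None                  # last node in order
    return out

# Thread the D B A E C example by hand: D→B, B→A and E→C are threads.
d = Node('D'); b = Node('B', d); e = Node('E'); c = Node('C', e)
a = Node('A', b, c)
d.right, d.right_is_thread = b, True
b.right, b.right_is_thread = a, True
e.right, e.right_is_thread = c, True
print(threaded_inorder(a))  # → ['D', 'B', 'A', 'E', 'C']
```

Each pointer is followed at most twice over the whole traversal (once going down, once via a thread), so the traversal is linear in the number of nodes.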


1. ↑ Van Wyk, Christopher J. *Data Structures and C Programs*. Addison-Wesley, 1988, p. 175. ISBN 978-0-201-16116-8.
2. ↑ Knuth, D.E. (1968). *Fundamental Algorithms*. The Art of Computer Programming. Vol. 1 (1st ed.). Reading, MA: Addison-Wesley.
3. ↑ Morris, Joseph H. (1979). "Traversing binary trees simply and cheaply". *Information Processing Letters*. 9 (5). doi:10.1016/0020-0190(79)90068-1.
4. ↑ Mateti, Prabhaker; Manghirmalani, Ravi (1988). "Morris' tree traversal algorithm reconsidered". *Science of Computer Programming*. 11: 29–43. doi:10.1016/0167-6423(88)90063-9.
5. ↑ Knuth, D.E. (1969). *Fundamental Algorithms*. The Art of Computer Programming. Vol. 1 (2nd ed.). Addison-Wesley. Here: Sect. 2.3.1, "Traversing Binary Trees".
6. ↑ Perlis, Alan Jay; Thornton, C. (April 1960). "Symbol manipulation by threaded lists". *Communications of the ACM*. 3 (4): 195–204. doi:10.1145/367177.367202.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
