The Magnetic Tower of Hanoi (MToH) puzzle is a variation of the classical Tower of Hanoi puzzle (ToH), where each disk has two distinct sides, for example colored "red" and "blue". The rules of the MToH puzzle are the same as the rules of the original puzzle, with the added constraints that each disk is flipped as it is moved, and that two disks may not be placed one on another if their touching sides have the same color. Each disk has a North and South pole, with like poles repelling one another and opposite poles attracting one another. Magnets inside each disk physically prevent illegal moves. [1]
One of the striking features of the classical ToH puzzle is its relation to base 2: the minimum number of total moves required to solve the puzzle is $2^n - 1$ (where n is the number of disks), while the minimum number of moves made by disk k is $2^{k-1}$ (disks are numbered bottom up, so that k = 1 is the largest disk and k = n the smallest). It will be shown below that just as the original ToH puzzle is related to base 2, so the MToH is related to base 3, though in a more complex manner. [2]
Puzzles mathematically equivalent to certain variations of the MToH have been known for some time. For example, a puzzle equivalent to one of the colored variations of the MToH appears in Concrete Mathematics. [3] In that puzzle, moves are allowed only between certain posts, which is equivalent to assigning permanent colors to the posts (if two posts are assigned the same permanent color, then direct moves between them are not allowed).
The free (non-colored) MToH first appeared publicly on the internet [4] around 2000 (though under the name "Domino Hanoi"), as part of a detailed review by the mathematician Fred Lunnon of the different variations of the original Tower of Hanoi puzzle.
The MToH was independently invented in the summer of 1984 by the physicist Uri Levy, who also coined the name and introduced the analogy to magnetism. [5] Levy later published a series of papers [1] [6] [7] dealing with the mathematical aspects of the MToH.
The MToH puzzle consists of three posts labeled source (S), destination (D), and intermediate (I), and a stack of n different sized disks with each side of a disk having a different color, either Red or Blue. At the beginning of the puzzle the disks are stacked on the S post in order of decreasing size (i.e. the largest disk is at the bottom), and such that all disks have their Red side facing upwards. The objective of the puzzle (in its "basic" version) is to move the entire stack, one disk at a time, to the D post, maintaining the order from largest to smallest disk, but with the Blue sides facing upwards.
The rules governing the movement of the disks are as follows:

1. Only one disk may be moved at a time, and only the uppermost disk of a stack may be moved.
2. A disk may be placed only on an empty post or on top of a larger disk.
3. Every disk is flipped as it is moved, so that the side that was facing up faces down on its new post.
4. A disk may not be placed on another disk if their touching sides have the same color (the magnets inside the disks physically prevent such moves).
In order to illustrate the rules of the MToH, and also to show the route to a more general solution, it is useful to work through examples for n = 2 and n = 3. For the case of n = 2, four steps are required, as shown in the accompanying figure, compared to three steps in the n = 2 case of the original ToH. The extra step is required because after the second step the small disk cannot be moved directly from the I post to the D post, as this would mean that its Blue side would be facing downwards. Thus, an extra step is required to flip the small disk, so that it can then be placed on the D post with its Blue side facing upwards.
For the n = 3 case, the solution proceeds in stages. In the first stage, the two smaller disks (disks 2 and 3) are moved from the S post to the I post in four moves.
This first stage is similar to the n = 2 puzzle described above (with the D and I posts interchanged), and likewise takes four moves. However, it is not identical to the n = 2 puzzle, due to the presence of the large disk on the S post, which "colors" it Red. This means that a disk can only be placed on this post with its Red side facing upwards. Once disks 2 and 3 are on the I post, disk 1 is moved from the S post to the D post (the fifth move).
At this stage one might be tempted to again make use of the n = 2 solution, and try to move disks 2 and 3 from I to D in four moves. However, here again care is needed. Due to the presence of disk 1 on D, the D post is now "colored" Blue, i.e. another disk can be placed on it only if it has its Blue side facing up. Furthermore, in the n = 2 puzzle the disks have their Red sides facing upwards in the starting position, whereas here they have their Blue sides facing upwards. Thus, this intermediate configuration is not equivalent to the n = 2 MToH. Instead, the two disks are transferred from the I post to the D post in six further moves, making use of the S post, which is now empty and therefore neutral again.
Thus, the solution requires 11 steps altogether. As just shown, it is natural to try to use the n = 2 solution to solve parts of the n = 3 puzzle in a recursive manner, as is typically done for the classical ToH puzzle. However, in contrast to the classical ToH, here the n = 2 solution cannot be blindly applied, due to the coloring of the posts and disks. This illustrates that, to achieve a more general solution for the n-disk MToH puzzle, it is necessary to consider variants of the puzzle where the posts are pre-colored (either Blue or Red). By considering these variants it is possible to develop full recursive relations for the MToH puzzle, and thus find a general solution.
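The move counts quoted above for small n can be verified by an exhaustive breadth-first search over the puzzle's state space. The following Python sketch is purely illustrative (the state encoding and function names are not taken from the cited papers); it reproduces the four-move solution for n = 2 and the 11-move solution for n = 3 of the free puzzle.

```python
from collections import deque

RED, BLUE = "R", "B"

def flip(colour):
    return BLUE if colour == RED else RED

def neighbours(state):
    """Yield every state reachable by one legal move of the free (NNN) MToH.

    A state is a triple of posts; each post is a tuple of (size, colour facing up)
    pairs listed from bottom to top, where size n is the largest disk.
    """
    for src in range(3):
        if not state[src]:
            continue
        size, colour = state[src][-1]
        moved = (size, flip(colour))            # the disk is flipped as it is moved
        for dst in range(3):
            if dst == src:
                continue
            if state[dst]:
                top_size, top_colour = state[dst][-1]
                # The disk must be smaller than the one it lands on, and the touching
                # faces must differ in colour, i.e. both disks end up showing the
                # same colour upwards.
                if size >= top_size or top_colour != moved[1]:
                    continue
            new = list(state)
            new[src] = state[src][:-1]
            new[dst] = state[dst] + (moved,)
            yield tuple(new)

def min_moves(n):
    """Length of the shortest solution of the free n-disk MToH (breadth-first search)."""
    start = (tuple((k, RED) for k in range(n, 0, -1)), (), ())
    goal = ((), (), tuple((k, BLUE) for k in range(n, 0, -1)))
    seen, queue = {start: 0}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return seen[state]
        for nxt in neighbours(state):
            if nxt not in seen:
                seen[nxt] = seen[state] + 1
                queue.append(nxt)

print(min_moves(2))  # 4
print(min_moves(3))  # 11
```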
The above description of the MToH puzzle assumes that while the disks themselves are colored, the posts are neutral. This means that an empty post may accept a disk with either its Red side or Blue side facing up. This basic version of the MToH is called the "free" MToH.
Other variations of the MToH puzzle are possible whereby the posts themselves are colored, as shown in the accompanying figure. If a post is pre-colored Red/Blue, then only disks with their Red/Blue side facing upwards may be placed on this pre-colored post. The different variations of the MToH may be named using a three-letter label "SID", where S, I, D refer to the colors of the Source, Intermediate and Destination posts respectively, and may each take the value R (Red), B (Blue), or N (Neutral, i.e. no color). Thus, the "NNN" puzzle refers to the free MToH, while the "RBB" puzzle refers to the variation where the S post is pre-colored Red, while the I and D posts are pre-colored Blue.
While the variations of the MToH are puzzles in their own right, they also play a key role in solving the free MToH. As seen above, intermediate states of the free MToH can be considered as colored variations, since a post with a disk already on it assumes the color of that disk's upward-facing side (meaning that only a disk with the same color facing upwards can be placed on the post).
For example, in the free n = 3 MToH puzzle described above, after 5 moves an intermediate state is reached where the large disk is on the D post. From this point on, the D post is considered to be colored Blue, and the puzzle becomes equivalent to the "NNB" puzzle. If a solution is known for the n = 2 "NNB" puzzle, it could immediately be applied to complete the n = 3 free puzzle.
Not all of the different colored variations are distinct puzzles, since symmetry means that some pre-colored variations are identical to others. For example, solving the RBB puzzle backwards is the same as solving the RRB puzzle in the regular forward direction (note that the Blue and Red colors have to be swapped, to preserve the convention that at the start of the puzzle all disks are on the source post with their Red sides facing up). Thus, the RBB and RRB puzzles form a time reversal symmetry pair. This means that they require the same optimal number of moves, even though each puzzle requires a distinct algorithm to solve it. In fact, it will be shown below that puzzles forming a time reversal symmetry pair are interdependent, in the sense that the solving algorithm of one makes use of the solving algorithm of the other.
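As a small illustration of this time-reversal relationship, the sketch below converts a solution of one puzzle into a solution of its partner by reversing the move list and exchanging the roles of the source and destination posts (the Red/Blue relabelling required by the symmetry does not affect the move list itself). The function name is illustrative, and the four-move RBB input sequence is the one produced by the recursive algorithm given later in this article.

```python
def reverse_solution(moves):
    """Convert a solution of a puzzle into a solution of its time-reversed partner.

    Each move is a (from_post, to_post) pair.  Reversing the order of the moves and
    the direction of each move replays the solution backwards; relabelling S <-> D
    then restates it as a forward solution of the partner puzzle.
    """
    relabel = {"S": "D", "D": "S", "I": "I"}
    return [(relabel[dst], relabel[src]) for src, dst in reversed(moves)]

# A four-move solution of RBB for n = 2 (see the recursive algorithm below):
rbb_2 = [("S", "I"), ("S", "D"), ("I", "S"), ("S", "D")]
print(reverse_solution(rbb_2))
# [('S', 'D'), ('D', 'I'), ('S', 'D'), ('I', 'D')] -- a solution of RRB for n = 2
```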
As with the classical ToH puzzle, one of the simplest and most instructive methods for solving the MToH is to use recursive algorithms. Such algorithms are presented below for selected variations of the puzzle, and the optimality of the solutions is proved. Using these algorithms, recurrence relations, and subsequently closed-form expressions, can be derived for the total number of moves required to solve the puzzle and for the number of moves made by each disk.
The recurrence relations can also be presented and analyzed using a Markov-type matrix analysis, which is discussed below.
It is instructive to first consider the time reversal symmetry pair of puzzles RBB and RRB. As it turns out, these two puzzles are the simplest to solve, in that their recursive algorithms depend only on each other, and not on other variations of the puzzle.
In contrast, the semi-colored variations (where one or two posts are neutral) and the fully free variation are governed by more complex recursion relations. The RBB(n) and RRB(n) puzzles can be solved using the following pair of optimal algorithms:
For RBB(n):

1. Move the n − 1 smallest disks from the S post to the I post, using the RBB(n − 1) algorithm with the roles of the D and I posts interchanged.
2. Move disk 1 from the S post to the D post.
3. Move the n − 1 smallest disks from the I post back to the S post, using the BBR(n − 1) algorithm (i.e. the RRB(n − 1) algorithm with the colors swapped).
4. Move the n − 1 smallest disks from the S post to the D post, using the RBB(n − 1) algorithm.
For RRB(n):

1. Move the n − 1 smallest disks from the S post to the D post, using the RRB(n − 1) algorithm.
2. Move the n − 1 smallest disks from the D post to the I post, using the BRR(n − 1) algorithm (i.e. the RBB(n − 1) algorithm with the colors swapped).
3. Move disk 1 from the S post to the D post.
4. Move the n − 1 smallest disks from the I post to the D post, using the RRB(n − 1) algorithm.
The optimality of this pair of algorithms is proved via induction, as follows (this proof also forms a detailed explanation of the algorithms):
For n = 1 it is obvious that the algorithms are optimal, since there is only a single move. Next, it is assumed that the algorithms are optimal for n − 1, and using this assumption, it is shown that they are optimal for n.
Beginning with the RBB(n) algorithm, it is clear that before disk 1 can be placed on the D post, it must first be on the S post (which is the only post colored Red), and the rest of the disks must be on the I post. Thus, the solution must pass through this intermediate state, and, by assumption, the optimal way to achieve this intermediate state is to use the RBB(n − 1) algorithm with the D and I posts interchanged.
Next, disk 1 has to be moved from S to D, since it must be moved at least once.
Next, it is shown that from this state the final solution can only be reached via an intermediate state where all n − 1 disks are on the S post. For disk 2 to be placed on the D post, it must first be on the S post (the only Red post), while the other n − 2 disks must be on the I post. However, before disk 3 can be placed on the I post, it must first be on the S post on top of disk 2. This reasoning can proceed through all the disks, each of which must first be on the S post before passing to the I post, thus showing that the solution must pass through an intermediate state where all n − 1 disks are on the S post.
To achieve this intermediate state, it is necessary to use an optimal algorithm which transfers n − 1 disks from the Blue I post to the Red S post via the Blue D post, i.e. the optimal BBR(n − 1) algorithm, which is equivalent to the RRB(n − 1) algorithm (the colors are just swapped).
Finally, it is necessary to transfer the n − 1 smallest disks from the S to the D post via the I post. This of course is just the RBB(n − 1) algorithm.
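The decomposition used in these algorithms and in the proof above translates directly into code. The following Python sketch (function names and post labels are illustrative) generates the move sequences of the mutually recursive RBB(n) and RRB(n) algorithms, and checks that their lengths agree with the closed-form count derived in the next section.

```python
def rbb(n, src="S", aux="I", dst="D"):
    """Moves solving RBB(n): src is the Red post, aux and dst are the Blue posts."""
    if n == 0:
        return []
    return (rbb(n - 1, src, dst, aux)      # 1. stack the n-1 smaller disks on the I post
            + [(src, dst)]                 # 2. move disk 1 to the D post
            + rrb(n - 1, aux, dst, src)    # 3. bring the n-1 disks back to the S post (colour-swapped RRB)
            + rbb(n - 1, src, aux, dst))   # 4. move them on top of disk 1

def rrb(n, src="S", aux="I", dst="D"):
    """Moves solving RRB(n): src and aux are the Red posts, dst is the Blue post."""
    if n == 0:
        return []
    return (rrb(n - 1, src, aux, dst)      # 1. park the n-1 smaller disks on the D post
            + rbb(n - 1, dst, src, aux)    # 2. shift them to the I post (colour-swapped RBB)
            + [(src, dst)]                 # 3. move disk 1 to the D post
            + rrb(n - 1, aux, src, dst))   # 4. move the n-1 disks on top of disk 1

for n in range(1, 7):
    assert len(rbb(n)) == len(rrb(n)) == (3 ** n - 1) // 2

print(rbb(2))  # [('S', 'I'), ('S', 'D'), ('I', 'S'), ('S', 'D')]
```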
Similar reasoning can be applied to show that the RRB(n) algorithm above is optimal. Solving algorithms can also be written, and their optimality proved, for the other time-reversal pairs of puzzle variations (such as RBN / NRB and RNN / NNB, listed in the table below).
These algorithms are generally more complex, and make use of the fully colored RBB and RRB algorithms described above. Full details of these algorithms and proofs of their optimality can be found in [7].
To conclude this section, we note that the solving algorithm of the fully free NNN puzzle, together with a proof of its optimality, can also be found in [7].
Once the solving algorithms are found, they can be used to derive recurrence relations for the total number of moves made during the execution of the algorithm, as well as for the number of moves made by each disk.
Denoting the total number of moves made by the optimal algorithms of the RBB and RRB puzzles by $N_{RBB}(n)$ and $N_{RRB}(n)$ respectively, then referring to the solving algorithms listed above, it is easy to show that the following recurrence relation holds:

$$N_{RBB}(n) = 2N_{RBB}(n-1) + N_{RRB}(n-1) + 1 = 3N_{RBB}(n-1) + 1,$$

where use has been made of the fact that the RBB and RRB puzzles form a time reversal symmetry pair, and thus $N_{RRB}(n) = N_{RBB}(n)$.
It is also possible to write a recursion relation for the number of moves made by disk k, which we denote by $P_{RBB}(k)$ and $P_{RRB}(k)$ for the RBB and RRB algorithms respectively (note that these quantities are independent of the total number of disks n in the puzzle). Again working through the algorithms, and using the equality $P_{RRB}(k) = P_{RBB}(k)$, it is simple to show that

$$P_{RBB}(k) = 3P_{RBB}(k-1), \qquad P_{RBB}(1) = 1.$$
From these recurrence relations it is straightforward to derive closed-form expressions for $N_{RBB}(n)$ and $P_{RBB}(k)$, which are given by

$$N_{RBB}(n) = N_{RRB}(n) = \frac{3^{n}-1}{2}, \qquad P_{RBB}(k) = P_{RRB}(k) = 3^{k-1}.$$
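These relations are easy to check numerically; the short sketch below (with illustrative names) iterates the recurrence and compares it with the closed forms.

```python
def total_moves(n):
    """Total moves of the optimal RBB/RRB algorithm, via the recurrence N(n) = 3 N(n-1) + 1."""
    N = 0
    for _ in range(n):
        N = 3 * N + 1
    return N

for n in range(1, 12):
    assert total_moves(n) == (3 ** n - 1) // 2                   # closed form for the total
    assert total_moves(n) - total_moves(n - 1) == 3 ** (n - 1)   # disk k = n makes 3**(n-1) moves
```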
These closed-form expressions scale as $3^n$, in contrast to the classical ToH puzzle, which scales as $2^n$. In fact, as shown in [7], all variations of the MToH puzzle satisfy the asymptotic relations

$$N(n) \approx s \cdot 3^{n}, \qquad P(k) \approx p \cdot 3^{k-1} \qquad (n, k \gg 1),$$
with the factors s, p given by the following table:
| Puzzle variation | s | p |
| --- | --- | --- |
| RRB / RBB | 1/2 | 1 |
| RBN / NRB | 5/11 | 10/11 |
| RNB | 4/11 | 8/11 |
| RNN / NNB | 7/22 | 14/22 |
| NNN | 10/33 | 20/33 |
Finally, while the integer sequences generated by the expressions for $N_{RBB}(n)$ and $P_{RBB}(k)$ are well known and listed in the Online Encyclopedia of Integer Sequences (OEIS), [8] the integer sequences generated by the other puzzle variations are far less trivial, and were not found in the OEIS prior to the analysis of the MToH. These new sequences, 15 in number, are now all listed there.
A powerful method of analyzing the MToH puzzle (and other similar puzzles), suggested by Fred Lunnon and presented in his review of Tower of Hanoi puzzle variations, [4] is a matrix method.
In this method no effort is made to separate the various puzzle variations into independent groups whose solving algorithms depend only on each other. Instead, the solving algorithms are written in the most direct way, so that the algorithms of all the puzzle variations are interdependent. Once this has been done, the total number of moves $N_{SID}(n)$ of each puzzle variation (with S, I, D each being R, B, or N) can be expressed recursively in terms of the move counts of the other variations.
Note that in contrast to the other variations and the general rule, MToH variations NNR and NBR end with the red sides of the disks facing upwards. This is a natural consequence of the destination post being colored red.
If we now define a vector $\vec{N}(n)$ whose components are the total move counts $N_{SID}(n)$ of the different puzzle variations, then the recursion relations can be written in the matrix form

$$\vec{N}(n) = M\,\vec{N}(n-1),$$

where $M$ is the Markov matrix whose entries are read off from the interdependent recursion relations. The equation for $\vec{N}(n)$ can then be written as

$$\vec{N}(n) = M^{\,n-1}\,\vec{N}(1).$$
The characteristic polynomial of $M$ has eight roots $\lambda_1, \ldots, \lambda_8$, the largest of which is $\lambda_1 = 3$, with eight corresponding eigenvectors $\vec{v}_1, \ldots, \vec{v}_8$ such that $M\vec{v}_i = \lambda_i\vec{v}_i$.
We can now express $\vec{N}(1)$ using the eight eigenvectors,

$$\vec{N}(1) = \sum_{i=1}^{8} c_i\,\vec{v}_i,$$

so that

$$\vec{N}(n) = M^{\,n-1}\,\vec{N}(1) = \sum_{i=1}^{8} c_i\,\lambda_i^{\,n-1}\,\vec{v}_i.$$
Now, since $|\lambda_i| < 3$ for all $i \neq 1$, it is clear that the $\lambda_1 = 3$ term dominates the sum for large $n$. Thus, as before, we obtain the following asymptotic relation for all puzzle variations:

$$N_{SID}(n) \approx s \cdot 3^{n} \qquad (n \gg 1),$$
with the factor s given by the following table:
| Puzzle variation | s |
| --- | --- |
| NNN | 20/66 |
| RNN | 21/66 |
| NNR | 39/66 |
| NBR | 42/66 |
| RNB | 24/66 |
| RBN | 30/66 |
| RRB | 33/66 |
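The way the dominant eigenvalue controls the asymptotics can be illustrated numerically. The matrix in the sketch below is a small toy example and is not the actual MToH recursion matrix (whose entries are not reproduced here); it merely shows how iterating a recursion of the form x(n) = M x(n−1) singles out the leading eigenvalue and eigenvector, just as the MToH move counts single out the eigenvalue 3 and the component-dependent factors s.

```python
import numpy as np

# A toy recursion matrix (NOT the MToH matrix), chosen so its leading eigenvalue is 3.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)
lead = np.argmax(np.abs(eigenvalues))
lam = eigenvalues[lead].real            # dominant eigenvalue (3.0 here)
v = eigenvectors[:, lead].real          # corresponding eigenvector

# Iterating x(n) = M x(n-1) from a generic starting vector: after many steps the
# result is proportional to lam**n times the leading eigenvector v, so each
# component grows like lam**n with a component-dependent prefactor -- the
# analogue of N_SID(n) ~ s * 3**n for the MToH variations.
x = np.array([1.0, 1.0, 1.0])
for _ in range(30):
    x = M @ x

print(lam)             # ~3.0
print(x / lam ** 30)   # approaches a multiple of the leading eigenvector v
```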