A bitboard is a specialized bit array data structure commonly used in computer systems that play board games, where each bit corresponds to a game board space or piece. This allows parallel bitwise operations to set or query the game state, or determine moves or plays in the game.
Bits in the same bitboard relate to each other by the rules of the game, often forming a game position when taken together. Other bitboards are commonly used as masks to transform or answer queries about positions. Bitboards are applicable to any game whose progress is represented by the state of, or presence of pieces on, discrete spaces of a gameboard, by mapping the space states to bits in the data structure. Bitboards are a more efficient alternative board representation to the traditional mailbox representation, where each piece or space on the board is an array element.
Bitboards are especially effective when the associated bits of various related states on the board fit into a single word or double word of the CPU architecture, so that single bitwise operators like AND and OR can be used to build or query game states.
Among the computer game implementations that use bitboards are chess, checkers, Othello and word games. The scheme was first employed in checkers programs in the 1950s, and since the mid-1970s has been the de facto standard for game board representation in computer automatons.
A bitboard, a specialized bit field, is a format that packs multiple related boolean variables into the same machine word, typically representing a position on a board game, or state of a game. Each bit represents a space; when the bit is set to one, a property of that space is true. Bitboards allow the computer to answer some questions about game state with one bitwise operation. For example, if a chess program wants to know whether the white player has any pawns in the center of the board (the center four squares), it can simply compare a bitboard for the player's pawns with one for the center of the board using a bitwise AND operation. If there are no center pawns, the result will be all zero bits (i.e. equal to zero). Multiple bitboards may represent different properties of spaces over the board, and special or temporary bitboards (like temporary variables) may represent local properties or hold intermediate collated results.
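For instance, a minimal sketch in C, assuming 64-bit bitboards with the conventional a1 = bit 0, h8 = bit 63 mapping described below and a hypothetical constant for the four center squares:

    #include <stdint.h>

    /* Hypothetical constant: the four center squares d4, e4, d5, e5
       (bits 27, 28, 35, 36 under the a1 = bit 0 convention). */
    #define CENTER_SQUARES 0x0000001818000000ULL

    /* Returns nonzero if any white pawn stands on a center square. */
    int white_has_center_pawn(uint64_t white_pawns)
    {
        return (white_pawns & CENTER_SQUARES) != 0;
    }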
The efficacy of bitboards is augmented by two other properties of the implementation. First, bitboards are fast to update incrementally, such as flipping the bits at the source and destination squares in the piece-location bitboard when a piece is moved. Second, bitmaps representing static properties, like the set of spaces attacked by each piece type on every square of the board, can be pre-collated and stored in a table, so that a question like "what are the legal moves of a knight on space e4?" can be answered by a single memory fetch.
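A brief sketch in C of both ideas, assuming a hypothetical precomputed knight_attacks table and the conventional a1 = bit 0 square numbering:

    #include <stdint.h>

    extern uint64_t knight_attacks[64];   /* hypothetical precomputed table, one entry per square */

    /* Incrementally update a piece bitboard for a move of that piece:
       clear the from-square bit and set the to-square bit with one XOR. */
    uint64_t move_piece(uint64_t piece_bb, int from, int to)
    {
        return piece_bb ^ ((1ULL << from) | (1ULL << to));
    }

    /* "What squares does a knight on e4 attack?" becomes one memory fetch. */
    uint64_t knight_moves_e4(void)
    {
        return knight_attacks[28];   /* e4 = rank index 3, file index 4 -> 3*8 + 4 = 28 */
    }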
Bitboard implementations take advantage of the presence of full-word (32-bit or 64-bit) bitwise logical operations like AND, OR, NOT and others on modern CPU architectures in order to be efficient. Bitboards may not be effective on earlier 8- and 16-bit minicomputer and microprocessor architectures.
Because the contents of massive tables must be compressed and encoded, and because transcription or encoding errors are likely, bitboard programs are tedious for software developers to write and debug. Auxiliary generative methods not part of the application are usually required to build the tables.
Bitboard representations use parallel bitwise operations that are available on nearly all CPUs, complete in one cycle, and are fully pipelined and cached. Nearly all CPUs have AND, OR, NOR, and XOR. Furthermore, modern CPUs have instruction pipelines that queue instructions for execution. A processor with multiple execution units can perform more than one instruction per cycle if more than one instruction is available in the pipeline. Normal instruction sequences with branches may cause the pipeline to empty if a branch is mispredicted. Many bitboard operations require fewer conditionals and therefore increase pipelining and make effective use of multiple execution units on many CPUs.
CPUs are designed around a native word width and can carry out bitwise operations on operands of that width in one cycle. So, on a CPU that is 64 bits wide or more, 64-bit operations can occur in one instruction. There may be support for higher or lower width instructions. Many 32-bit CPUs have some 64-bit instructions, and those may take more than one cycle or otherwise be handicapped compared to their 32-bit instructions.
If the bitboard is larger than the width of the instruction set, multiple instructions will be required to perform a full-width operation on it. So a program using 64-bit bitboards would run faster on a 64-bit processor than on a 32-bit processor.
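As an illustration of why width matters, a 64-bit AND on a 32-bit machine decomposes into two 32-bit ANDs; a sketch using a hypothetical two-word bitboard type:

    #include <stdint.h>

    /* Sketch only: on a 32-bit machine a compiler performs this split
       implicitly for uint64_t operands, roughly doubling the work. */
    typedef struct { uint32_t lo, hi; } Bitboard32;

    Bitboard32 bb_and(Bitboard32 a, Bitboard32 b)
    {
        Bitboard32 r = { a.lo & b.lo, a.hi & b.hi };
        return r;
    }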
Bitboard representations have much longer code, both source and object code. Long bit-twiddling sequences are technically tricky to write and debug. The bitboards themselves are sparse, sometimes containing only a single one bit in 64, so bitboard implementations are memory-intensive. Both these issues may increase cache misses or cause cache thrashing.
If the processor does not have hardware instructions for 'first one' (or 'count leading zeros') and 'count ones' (or 'count zeros'), the implementation will be significantly handicapped, as these operations are extremely inefficient to code as assembly language loops.
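Where such hardware instructions exist, they are usually reached through compiler intrinsics rather than hand-written loops; a sketch assuming the GCC/Clang builtins:

    #include <stdint.h>

    /* Index (0-63) of the least significant one bit; undefined if bb == 0. */
    int first_one(uint64_t bb)
    {
        return __builtin_ctzll(bb);        /* count trailing zeros */
    }

    /* Number of one bits in the bitboard (population count). */
    int count_ones(uint64_t bb)
    {
        return __builtin_popcountll(bb);
    }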
Bitboards require more memory than piece-list board data structures, but are more execution efficient because many loop-and-compare operations are reduced to a single (or a small number of) bitwise operation(s). For example, in the mailbox representation, determining whether a piece attacks a space requires generating and looping through the legal moves of the piece and comparing each destination with the space. With bitboards, the legal moves of the piece are stored in a bitmap, and that map is ANDed with the bitmap for the space. A non-zero result means that the piece attacks the space.
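A sketch of the bitboard side of this comparison, assuming the piece's attack set has already been collated or looked up as a 64-bit map:

    #include <stdint.h>

    /* Does the piece whose attack set is 'piece_attacks' attack 'square'?
       With a mailbox this is a loop over generated moves; here it is one AND
       against the square's single-bit mask. */
    int attacks_square(uint64_t piece_attacks, int square)
    {
        return (piece_attacks & (1ULL << square)) != 0;
    }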
For some games, writing a bitboard engine requires a fair amount of source code including data tables that will be longer than the compact mailbox/enumeration implementation. For mobile devices (such as cell phones) with a limited number of registers or processor instruction cache, this can cause a problem. For full-sized computers, it may cause cache misses between level-one and level-two cache. This is only a potential problem, not a major drawback, as most machines will have enough instruction cache for this not to be an issue.
Some kinds of bitboards are derived from others by an elaborate process of cross-correlation, such as the attack maps in chess. Reforming all these maps at each change of game state (such as a move) can be prohibitively expensive, so derived bitmaps are incrementally updated, a process which requires intricate and precise code. This is much faster to execute, because only bitmaps associated with changed spaces, not all bitmaps over the board, need to change. Without incremental update, bitmapped representation may not be more efficient than the older mailbox representation where update is intrinsically local and incremental.
Some kinds of bitmaps that do not depend on board configurations can be precomputed and retrieved by table lookup rather than collated after a move or state change of the board, such as the spaces attacked by a knight or king on each of the 64 squares of a chessboard, which would otherwise require enumeration.
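A sketch of how such a table might be precomputed once at program start, assuming the conventional a1 = bit 0 mapping; the hypothetical knight_attacks array is then consulted by a plain lookup during play:

    #include <stdint.h>

    static uint64_t knight_attacks[64];

    /* Precompute the knight attack set for each of the 64 squares. */
    static void init_knight_attacks(void)
    {
        static const int df[8] = {  1,  2,  2,  1, -1, -2, -2, -1 };
        static const int dr[8] = {  2,  1, -1, -2, -2, -1,  1,  2 };

        for (int sq = 0; sq < 64; sq++) {
            int file = sq & 7, rank = sq >> 3;
            uint64_t attacks = 0;
            for (int i = 0; i < 8; i++) {
                int f = file + df[i], r = rank + dr[i];
                if (f >= 0 && f < 8 && r >= 0 && r < 8)   /* stay on the board */
                    attacks |= 1ULL << (r * 8 + f);
            }
            knight_attacks[sq] = attacks;
        }
    }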
The obvious and simplest representation of the configuration of pieces on a chessboard is a list (array) of pieces in a conveniently searchable order (such as smallest to largest in value) that maps each piece to its location on the board. Analogously, collating the spaces attacked by each piece requires a serial enumeration of such spaces for the piece. This scheme is called mailbox addressing. Separate lists are maintained for white and black pieces, and often for white and black pawns. The lists are updated each move, which requires a linear search (or two if a piece was captured) through the piece list. The advantage of mailbox is simple code; the disadvantage is that linear lookups are slow. Faster, but more elaborate, data structures that map pieces to locations are called bitboards.
In bitboard representations, each bit of a 64-bit word (or double word on 32-bit architectures) is associated with a square of the chessboard. Any mapping of bits to squares can be used, but by broad convention, bits are associated with squares from left to right and bottom to top, so that bit 0 represents square a1, bit 7 is square h1, bit 56 is square a8 and bit 63 is square h8.
Many different configurations of the board are usually represented by their own bitboards, including the locations of the kings, all white pawns, all black pawns, as well as bitboards for each of the other piece types or combinations of pieces like all white pieces. Two attack bitboards are also universal: one bitboard per square for all pieces attacking the square, and the inverse bitboard of all squares attacked by a piece, for each square containing a piece. Bitboards can also be constants, like one representing the first rank, which would have one bits in positions 0 through 7. Other local or transitional bitboards, like "all spaces adjacent to the king attacked by opposing pieces", may be collated as necessary or convenient. [1]
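A few such bitboards sketched in C; the square_index helper, the RANK_1 and FILE_A constants, and the all_white function are illustrative names, not part of any particular program:

    #include <stdint.h>

    /* Conventional mapping: bit 0 = a1, bit 7 = h1, bit 56 = a8, bit 63 = h8. */
    static inline int square_index(int file, int rank)   /* file, rank in 0..7 */
    {
        return rank * 8 + file;
    }

    /* Constant bitboards: the first rank occupies bits 0-7; the a-file
       occupies bit 0 of every rank. */
    #define RANK_1  0x00000000000000FFULL
    #define FILE_A  0x0101010101010101ULL

    /* Typical combined bitboard: all white pieces as the union of per-type boards. */
    uint64_t all_white(uint64_t wp, uint64_t wn, uint64_t wb,
                       uint64_t wr, uint64_t wq, uint64_t wk)
    {
        return wp | wn | wb | wr | wq | wk;
    }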
An example of the use of bitboards is determining whether a piece is en prise: bitboards for "all friendly pieces guarding the space" and "all opposing pieces attacking the space" allow the attackers and defenders to be matched up to readily determine whether a target piece on the space is en prise.
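A deliberately simplified sketch: a target is at least loosely en prise if it is attacked and not defended; a real engine would also weigh piece values and the order of exchanges (static exchange evaluation), so this is only the coarsest form of the test:

    #include <stdint.h>

    /* 'attackers' and 'defenders' are hypothetical bitboards of all opposing
       pieces attacking, and all friendly pieces guarding, the target square. */
    int simple_en_prise(uint64_t attackers, uint64_t defenders)
    {
        return attackers != 0 && defenders == 0;
    }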
One of the drawbacks of standard bitboards is collating the attack vectors of the sliding pieces (rook, bishop, queen), because they have an indefinite number of attack spaces depending on other occupied spaces. This requires several lengthy sequences of masks, shifts and complements per piece.
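One such sequence, sketched in C for a single direction: rook attacks toward the north are flooded one rank at a time and stop at the first occupied square (which is still included, as a possible capture); the seven other directions need analogous routines, the east/west ones with additional file masks to prevent wraparound:

    #include <stdint.h>

    /* Attacks of one or more rooks toward the north (up the board). */
    uint64_t north_attacks(uint64_t rooks, uint64_t occupied)
    {
        uint64_t attacks = 0;
        uint64_t ray = rooks << 8;             /* advance one rank */
        while (ray) {
            attacks |= ray;
            ray = (ray & ~occupied) << 8;      /* rays that hit a blocker stop here */
        }
        return attacks;
    }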
In acknowledgement of the code size and computing complexity of generating bitboards for the attack vectors of sliding pieces, alternate bitboard data structures have been devised to collate them. The bitboard representations of knights, kings, pawns and other board configurations are unaffected by the use of auxiliary bitboards for the sliding pieces.
Rotated bitboards are complementary bitboard data structures that enable tabularizing of sliding piece attack vectors, one for file attack vectors of rooks, and one each for the diagonal and anti-diagonal attack vectors of bishops (rank attacks of rooks can be indexed from standard bitboards). With these bitboards, a single table lookup replaces lengthy sequences of bitwise operations.
These bitboards rotate the board occupancy configuration by 90 degrees, 45 degrees, and/or 315 degrees. A standard bitboard will have one byte per rank of the chess board. With this bitboard it's easy to determine rook attacks across a rank, using a table indexed by the occupied square and the occupied positions in the rank (because rook attacks stop at the first occupied square). By rotating the bitboard 90 degrees, rook attacks up and down a file can be examined the same way. Adding bitboards rotated 45 degrees and 315 degrees (-45 degrees) produces bitboards in which the diagonals are easy to examine. The queen can be examined by combining rook and bishop attacks. Actually rotating a bitboard is an inelegant transformation that can take dozens of instructions. [2] [3]
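A sketch of the rank-attack half of this scheme, which needs no rotation, assuming a hypothetical precomputed table indexed by the rook's file and the 8-bit occupancy of its rank; file attacks apply the same idea to the occupancy bitboard rotated 90 degrees:

    #include <stdint.h>

    /* Hypothetical precomputed table: attacks along a rank for
       (file of the rook, 8-bit occupancy pattern of that rank). */
    extern uint8_t rank_attack_table[8][256];

    uint64_t rook_rank_attacks(int sq, uint64_t occupied)
    {
        int rank = sq >> 3, file = sq & 7;
        unsigned occ = (unsigned)((occupied >> (rank * 8)) & 0xFF);   /* the rank's occupancy byte */
        return (uint64_t)rank_attack_table[file][occ] << (rank * 8); /* place attacks back on that rank */
    }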
The rank and file attack vectors of rooks and the diagonal and anti-diagonal attack vectors of bishops can be separately masked and used as indices into a hash table of precomputed attack vectors depending on occupancy, 8 bits each for rooks and 2-8 bits each for bishops. The full attack vector of a piece is obtained as the union of the two unidirectional vectors indexed from the hash table. The number of entries in the hash table is modest, on the order of 8*2^8 or 2K bytes, but two hash function computations and two lookups per piece are required; [4] see the hashing scheme employed. [5]
Magic bitboards are an extrapolation of the time-space tradeoff of direct hashing lookup of attack vectors. These use a transmutation of the full attack vector as an index into the hash table. Magic is something of a misnomer; it simply refers to the generation and use of a perfect hash function, in conjunction with tricks to reduce the potential size of the hash table that would have to be stored in memory, which is 8*2^64 bytes, or about 147 exabytes. [nb 1] The first observation is that the outer squares, that is the first and eighth ranks together with the 'a' and 'h' files, are irrelevant to the occupancy of the attack vector: the piece attacks those squares or not (depending on other blocking pieces) regardless of occupancy, so these can be eliminated from consideration, leaving just 6x6 or 36 squares (~bits in the corresponding hash function). As with other schemes which require a perfect hash function, an exhaustive process of enumeration, partly algorithmic and partly trial and error, is necessary to generate the hash function. But an intractable issue remains: these are very active tables, and their size (fewer than a million entries in most cases) is huge relative to the lower-level cache sizes of modern chip architectures, resulting in cache flooding. So magic bitboards in many applications provide no performance gain over more modest hashing schemes or rotated bitboards. [6] [7]
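A sketch of the lookup itself in C, assuming hypothetical precomputed per-square masks, magic multipliers, shifts and attack tables; the difficult part, finding magic numbers that hash without harmful collisions, is the enumeration and trial-and-error search described above:

    #include <stdint.h>

    typedef struct {
        uint64_t  mask;     /* relevant occupancy squares for this rook square */
        uint64_t  magic;    /* perfect-hash multiplier found by search */
        int       shift;    /* 64 minus the number of relevant occupancy bits */
        uint64_t *attacks;  /* attack sets indexed by the hashed occupancy */
    } MagicEntry;

    extern MagicEntry rook_magic[64];   /* hypothetical precomputed tables */

    uint64_t rook_attacks(int sq, uint64_t occupied)
    {
        const MagicEntry *m = &rook_magic[sq];
        uint64_t index = ((occupied & m->mask) * m->magic) >> m->shift;
        return m->attacks[index];
    }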
The bitboard method for representing a board game appears to have been invented in the mid-1950s by Arthur Samuel, and was used in his checkers program. [8] For the more complicated game of chess, it appears the method was independently rediscovered later by the Kaissa team in the Soviet Union in the late 1960s, [9] and again by the authors of the U.S. Northwestern University program "Chess" in the early 1970s. The 64-bit word length of 1970s supercomputers like Amdahl and Cray machines facilitated the development of bitboard representations that conveniently mapped the 64 squares of the chessboard to the bits of a word.
Rotated bitboards for collating the moves of sliding pieces were invented by Professor Robert Hyatt, author of the Cray Blitz and Crafty chess engines, sometime in the mid-1990s, and shared with the Dark Thought programming team. They were later implemented in Crafty and Dark Thought, but the first published description was not until 1997.
A decade later, direct lookup methods using masked ranks, files and diagonals to index a table of attack vectors depending on the occupancy states of bits under the mask were introduced. One such scheme, utilizing a perfect hash function to eliminate hash collisions, was termed "magic bitboards". Nonetheless, the large size and high access rates of such tables caused memory occupancy and cache contention issues, and they were not necessarily more effective than the rotated bitboard approach. Today, game programs remain divided, with the best scheme being application dependent.
Many other games besides chess benefit from bitboards.