Grid method multiplication

The grid method (also known as the box method) of multiplication is an introductory approach to multi-digit multiplication calculations that involve numbers larger than ten. Because it is often taught in mathematics education at the level of primary school or elementary school, this algorithm is sometimes called the grammar school method. [1]

Compared to traditional long multiplication, the grid method differs in clearly breaking the multiplication and addition into two steps, and in being less dependent on place value.

Whilst less efficient than the traditional method, grid multiplication is considered to be more reliable, in that children are less likely to make mistakes. Most pupils will go on to learn the traditional method, once they are comfortable with the grid method; but knowledge of the grid method remains a useful "fall back", in the event of confusion. It is also argued that since anyone doing a lot of multiplication would nowadays use a pocket calculator, efficiency for its own sake is less important; equally, since this means that most children will use the multiplication algorithm less often, it is useful for them to become familiar with a more explicit (and hence more memorable) method.

Use of the grid method has been standard in mathematics education in primary schools in England and Wales since the introduction of a National Numeracy Strategy with its "numeracy hour" in the 1990s. It can also be found included in various curricula elsewhere. Essentially the same calculation approach, but not with the explicit grid arrangement, is also known as the partial products algorithm or partial products method.

Calculations

Introductory motivation

The grid method can be introduced by thinking about how to add up the number of points in a regular array, for example the number of squares of chocolate in a chocolate bar. As the size of the calculation becomes larger, it becomes easier to start counting in tens; and to represent the calculation as a box which can be sub-divided, rather than drawing a multitude of dots. [2] [3]

At the simplest level, pupils might be asked to apply the method to a calculation like 3 × 17. Breaking up ("partitioning") the 17 as (10 + 7), this unfamiliar multiplication can be worked out as the sum of two simple multiplications:

  ×  |  10 |  7
  3  |  30 | 21

so 3 × 17 = 30 + 21 = 51.

This is the "grid" or "boxes" structure which gives the multiplication method its name.

Faced with a slightly larger multiplication, such as 34 × 13, pupils may initially be encouraged to also break this into tens. So, expanding 34 as 10 + 10 + 10 + 4 and 13 as 10 + 3, the product 34 × 13 might be represented:

  ×  |  10 |  10 |  10 |  4
 10  | 100 | 100 | 100 | 40
  3  |  30 |  30 |  30 | 12

Totalling the contents of each row, it is apparent that the final result of the calculation is (100 + 100 + 100 + 40) + (30 + 30 + 30 + 12) = 340 + 102 = 442.

Standard blocks

Once pupils have become comfortable with the idea of splitting the whole product into contributions from separate boxes, it is a natural step to group the tens together, so that the calculation 34 × 13 becomes

  ×  |  30 |  4
 10  | 300 | 40
  3  |  90 | 12

giving the addition

    300
     40
     90
  +  12
  —————
    442

so 34 × 13 = 442.

This is the most usual form for a grid calculation. In countries such as the UK where teaching of the grid method is usual, pupils may spend a considerable period of time regularly setting out calculations like the above, until the method is entirely comfortable and familiar.

Larger numbers

The grid method extends straightforwardly to calculations involving larger numbers.

For example, to calculate 345 × 28, the student could construct the grid with six easy multiplications

  ×  |  300 |  40 |   5
 20  | 6000 | 800 | 100
  8  | 2400 | 320 |  40

to find the answer 6900 + 2760 = 9660.

However, by this stage (at least in standard current UK teaching practice) pupils may be starting to be encouraged to set out such a calculation using the traditional long multiplication form without having to draw up a grid.

Traditional long multiplication can be related to a grid multiplication in which only one of the numbers is broken into tens and units parts to be multiplied separately:

  ×  |  345
 20  | 6900
  8  | 2760

The traditional method is ultimately faster and much more compact, but it requires two significantly more difficult multiplications which pupils may at first struggle with [citation needed]. Compared to the grid method, traditional long multiplication may also be more abstract [citation needed] and less manifestly clear [citation needed], so some pupils find it harder to remember what is to be done at each stage and why [citation needed]. Pupils may therefore be encouraged for quite a period to use the simpler grid method alongside the more efficient traditional long multiplication method, as a check and a fall-back.

Other applications

Fractions

While not normally taught as a standard method for multiplying fractions, the grid method can readily be applied to simple cases where it is easier to find a product by breaking it down.

For example, the calculation 2 1/2 × 1 1/2 can be set out using the grid method

   ×  |  2 | 1/2
   1  |  2 | 1/2
 1/2  |  1 | 1/4

to find that the resulting product is 2 + 1/2 + 1 + 1/4 = 3 3/4.

Algebra

The grid method can also be used to illustrate the multiplying out of a product of binomials, such as (a + 3)(b + 2), a standard topic in elementary algebra (although one not usually met until secondary school):

  ×  |  a  |  3
  b  | ab  | 3b
  2  | 2a  |  6

Thus (a + 3)(b + 2) = ab + 3b + 2a + 6.

Computing

32-bit CPUs usually lack an instruction to multiply two 64-bit integers. However, most CPUs support a "multiply with overflow" instruction, which takes two 32-bit operands, multiplies them, and places the low 32 bits of the 64-bit product in one register and the high 32 bits (the overflow) in another. Examples include the umull instruction added in the ARMv4T instruction set, and the pmuludq instruction added in SSE2, which operates on the lower 32 bits of each 64-bit lane of a SIMD register.

On platforms that support these instructions, a slightly modified version of the grid method is used. The differences are:

  1. Instead of operating on multiples of 10, the method operates on 32-bit integers.
  2. Instead of the higher parts being multiplied by ten, they are multiplied by 2^32 (0x100000000). This is usually done either by shifting to the left by 32 bits or by placing the value into a register that represents the higher 32 bits of the result.
  3. Any values that lie above the 64th bit are truncated. This means that the product of the two highest parts is not required, because it would be shifted entirely out of the 64-bit range. It also means that only a 32-bit multiply is required for the two cross products, since only their low 32 bits survive the shift.
  ×  |  b  |  a
  d  |  -  | ad
  c  | bc  | ac

This would be the routine in C:

#include <stdint.h>

uint64_t multiply(uint64_t ab, uint64_t cd)
{
    /* These shifts and masks are usually implicit, as 64-bit integers
     * are often passed as 2 32-bit registers. */
    uint32_t b = ab >> 32, a = ab & 0xFFFFFFFF;
    uint32_t d = cd >> 32, c = cd & 0xFFFFFFFF;

    /* multiply with overflow */
    uint64_t ac = (uint64_t)a * (uint64_t)c;
    uint32_t high = ac >> 32; /* overflow */
    uint32_t low = ac & 0xFFFFFFFF;

    /* 32-bit multiply and add to the high bits */
    high += a * d; /* add ad */
    high += b * c; /* add bc */

    /* multiply by 0x100000000 (via left shift) and add to the low bits
     * with a binary or */
    return ((uint64_t)high << 32) | low;
}

This would be the routine in ARM assembly:

multiply:
    @ a = r0
    @ b = r1
    @ c = r2
    @ d = r3
    push  {r4, lr}          @ back up r4 and lr to the stack
    umull r12, lr, r2, r0   @ multiply r2 and r0, store the result in r12 and the overflow in lr
    mla   r4, r2, r1, lr    @ multiply r2 and r1, add lr, and store in r4
    mla   r1, r3, r0, r4    @ multiply r3 and r0, add r4, and store in r1
    @ The value is shifted left implicitly because the
    @ high bits of a 64-bit integer are returned in r1.
    mov   r0, r12           @ set the low bits of the return value to r12 (ac)
    pop   {r4, lr}          @ restore r4 and lr from the stack
    bx    lr                @ return the low and high bits in r0 and r1 respectively

Mathematics

Mathematically, the ability to break up a multiplication in this way is known as the distributive law, which can be expressed in algebra as the property that a(b+c) = ab + ac. The grid method uses the distributive property twice to expand the product, once for the horizontal factor, and once for the vertical factor.
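The two applications of the distributive law in a grid calculation such as (a + 3)(b + 2) can be written out step by step:

```latex
\begin{align*}
(a+3)(b+2) &= a(b+2) + 3(b+2)      && \text{distribute over the first factor} \\
           &= (ab + 2a) + (3b + 6) && \text{distribute over the second factor} \\
           &= ab + 2a + 3b + 6
\end{align*}
```

Each of the four terms in the final line corresponds to one cell of the grid.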

Historically the grid calculation (tweaked slightly) was the basis of a method called lattice multiplication, which was the standard method of multiple-digit multiplication developed in medieval Arabic and Hindu mathematics. Lattice multiplication was introduced into Europe by Fibonacci at the start of the thirteenth century along with Arabic numerals themselves; although, like the numerals also, the ways he suggested to calculate with them were initially slow to catch on. Napier's bones were a calculating aid introduced by the Scot John Napier in 1617 to assist lattice-method calculations.


References