Counting is the process of determining the number of elements of a finite set of objects; that is, determining the size of a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related term enumeration refers to uniquely identifying the elements of a finite (combinatorial) set or infinite set by assigning a number to each element.
Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...).
There is archaeological evidence suggesting that humans have been counting for at least 50,000 years.[1] Counting was primarily used by ancient cultures to keep track of social and economic data such as the number of group members, prey animals, property, or debts (that is, accountancy). Notched bones found in the Border Caves in South Africa may suggest that the concept of counting was known to humans as far back as 44,000 BCE.[2] The development of counting led to the development of mathematical notation, numeral systems, and writing.
Verbal counting involves speaking sequential numbers aloud or mentally to track progress. Generally such counting is done with base 10 numbers: "1, 2, 3, 4", etc. Verbal counting is often used for objects that are currently present rather than for counting things over time, since following an interruption counting must resume from where it was left off, a number that has to be recorded or remembered.
Counting a small set of objects, especially over time, can be accomplished efficiently with tally marks: making a mark for each counted item and then counting all of the marks when done tallying. Tallying is base-1 (unary) counting.
Finger counting is convenient and common for small numbers. Children count on fingers to facilitate tallying and for performing simple mathematical operations. Older finger counting methods used the four fingers and the three bones in each finger (phalanges) to count to twelve.[3] Other hand-gesture systems are also in use, for example the Chinese system by which one can count to 10 using only gestures of one hand. With finger binary it is possible to keep a finger count up to 1023 = 2¹⁰ − 1.
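A minimal Python sketch of the idea (the function name finger_count and the choice of which finger is the least significant digit are illustrative assumptions, not part of any standard):

```python
# Finger binary: each of the ten fingers stands for one binary digit,
# so ten fingers can encode any count from 0 through 2**10 - 1 = 1023.
def finger_count(fingers):
    """Interpret a list of 10 raised/lowered fingers (1/0) as a binary count.

    fingers[0] is treated as the least significant digit; which finger
    plays which role is a convention, not part of the method itself.
    """
    return sum(bit << i for i, bit in enumerate(fingers))

# All ten fingers raised gives the maximum count:
assert finger_count([1] * 10) == 1023
```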
Various devices can also be used to facilitate counting, such as tally counters and abacuses.
Inclusive and exclusive counting are two different methods of counting. In exclusive counting, unit intervals are counted at the end of each interval. In inclusive counting, unit intervals are counted beginning with the start of the first interval and ending with the end of the last interval. For the same set, inclusive counting therefore always yields a count greater by one than exclusive counting. The introduction of the number zero to the number line largely resolved this difficulty; however, inclusive counting is still useful for some things.
Refer also to the fencepost error, which is a type of off-by-one error.
Modern mathematical English usage has introduced another difficulty, however. Because exclusive counting is generally tacitly assumed, the term "inclusive" is generally used in reference to a set which is actually counted exclusively. For example: how many numbers are included in the set that ranges from 3 to 8, inclusive? The set is counted exclusively once its range has been made certain by the word "inclusive". The answer is 6, that is, 8 − 3 + 1, where the +1 range adjustment makes the adjusted exclusive count numerically equivalent to an inclusive count, even though the range of the inclusive count does not include the unit interval at the number eight. So it is necessary to discern the difference in usage between the terms "inclusive counting" and "inclusive" or "inclusively", and to recognize that it is not uncommon for the former term to be loosely used for the latter process.
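A short Python sketch makes the off-by-one adjustment concrete (variable names are illustrative):

```python
# The interval from 3 to 8 "inclusive" contains 8 - 3 + 1 = 6 numbers.
# Python's range() is half-open (it excludes the end), so the +1
# adjustment has to be made explicitly.
start, end = 3, 8
exclusive = end - start        # 5: counts the unit intervals between endpoints
inclusive = end - start + 1    # 6: counts the numbers themselves
assert inclusive == len(range(start, end + 1))  # range(3, 9) -> 3,4,5,6,7,8
```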
Inclusive counting is usually encountered when dealing with time in Roman calendars and the Romance languages.[4] In the ancient Roman calendar, the nones (meaning "nine") is 8 days before the ides; more generally, dates are specified as inclusively counted days up to the next named day.[4]
In the Christian liturgical calendar, Quinquagesima (meaning 50) is 49 days before Easter Sunday. When counting "inclusively", the Sunday (the start day) will be day 1 and therefore the following Sunday will be the eighth day. Similarly, the French phrase for "fortnight" is quinzaine (15 [days]), and similar words are present in Greek (δεκαπενθήμερο, dekapenthímero), Spanish (quincena) and Portuguese (quinzena).
In contrast, the English word "fortnight" itself derives from "a fourteen-night", as the archaic "sennight" does from "a seven-night"; the English words are not examples of inclusive counting. In exclusive counting languages such as English, when counting eight days "from Sunday", Monday will be day 1, Tuesday day 2, and the following Monday will be the eighth day.[citation needed] For many years it was a standard practice in English law for the phrase "from a date" to mean "beginning on the day after that date": this practice is now deprecated because of the high risk of misunderstanding.[5]
Similar counting is involved in East Asian age reckoning, in which newborns are considered to be 1 at birth.
Musical terminology also uses inclusive counting of intervals between notes of the standard scale: going up one note is a second interval, going up two notes is a third interval, etc., and going up seven notes is an octave.
Learning to count is an important educational/developmental milestone in most cultures of the world. Learning to count is a child's very first step into mathematics, and constitutes the most fundamental idea of that discipline. However, some cultures in Amazonia and the Australian Outback do not count,[6][7] and their languages do not have number words.
Many children at just 2 years of age have some skill in reciting the count list (that is, saying "one, two, three, ..."). They can also answer questions of ordinality for small numbers, for example, "What comes after three?". They can even be skilled at pointing to each object in a set and reciting the words one after another. This leads many parents and educators to the conclusion that the child knows how to use counting to determine the size of a set.[8] Research suggests that it takes about a year after learning these skills for a child to understand what they mean and why the procedures are performed.[9][10] In the meantime, children learn how to name cardinalities that they can subitize.
In mathematics, the essence of counting a set and finding a result n is that it establishes a one-to-one correspondence (or bijection) of the subject set with the subset of positive integers {1, 2, ..., n}. A fundamental fact, which can be proved by mathematical induction, is that no bijection can exist between {1, 2, ..., n} and {1, 2, ..., m} unless n = m; this fact (together with the fact that two bijections can be composed to give another bijection) ensures that counting the same set in different ways can never result in different numbers (unless an error is made). This is the fundamental mathematical theorem that gives counting its purpose: however you count a (finite) set, the answer is the same. In a broader context, the theorem is an example of a theorem in the mathematical field of (finite) combinatorics; hence (finite) combinatorics is sometimes referred to as "the mathematics of counting."
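A minimal Python sketch of counting as constructing a bijection (the helper count and the sample set are illustrative only):

```python
# Counting a finite set means pairing its elements with {1, ..., n}.
# Whatever order the elements are visited in, the resulting n is the same.
def count(elements):
    """Build an explicit bijection {1, ..., n} -> set and return n with it."""
    bijection = {i: x for i, x in enumerate(elements, start=1)}
    return len(bijection), bijection

items = {"apple", "banana", "cherry"}
n_forward, _ = count(sorted(items))
n_backward, _ = count(sorted(items, reverse=True))
assert n_forward == n_backward == 3  # different counting orders, same answer
```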
Many sets that arise in mathematics do not allow a bijection to be established with {1, 2, ..., n} for any natural number n; these are called infinite sets, while those sets for which such a bijection does exist (for some n) are called finite sets. Infinite sets cannot be counted in the usual sense; for one thing, the mathematical theorems which underlie this usual sense for finite sets are false for infinite sets. Furthermore, different definitions of the concepts in terms of which these theorems are stated, while equivalent for finite sets, are inequivalent in the context of infinite sets.
The notion of counting may be extended to them in the sense of establishing (the existence of) a bijection with some well-understood set. For instance, if a set can be brought into bijection with the set of all natural numbers, then it is called "countably infinite." This kind of counting differs in a fundamental way from counting of finite sets, in that adding new elements to a set does not necessarily increase its size, because the possibility of a bijection with the original set is not excluded. For instance, the set of all integers (including negative numbers) can be brought into bijection with the set of natural numbers, and even seemingly much larger sets like that of all finite sequences of rational numbers are still (only) countably infinite. Nevertheless, there are sets, such as the set of real numbers, that can be shown to be "too large" to admit a bijection with the natural numbers, and these sets are called "uncountable." Sets for which there exists a bijection between them are said to have the same cardinality, and in the most general sense counting a set can be taken to mean determining its cardinality. Beyond the cardinalities given by each of the natural numbers, there is an infinite hierarchy of infinite cardinalities, although only very few such cardinalities occur in ordinary mathematics (that is, outside set theory that explicitly studies possible cardinalities).
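As a concrete sketch, the familiar back-and-forth enumeration 0, 1, −1, 2, −2, ... can be written as a Python function (nat_to_int is an illustrative name):

```python
# A bijection from the natural numbers {0, 1, 2, ...} onto all integers,
# witnessing that the integers are countably infinite.
def nat_to_int(n):
    """Send each natural number to a distinct integer; every integer is hit."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

assert [nat_to_int(n) for n in range(7)] == [0, 1, -1, 2, -2, 3, -3]
```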
Counting, mostly of finite sets, has various applications in mathematics. One important principle is that if two sets X and Y have the same finite number of elements, and a function f: X → Y is known to be injective, then it is also surjective, and vice versa. A related fact is known as the pigeonhole principle, which states that if two sets X and Y have finite numbers of elements n and m with n > m, then any map f: X → Y is not injective (so there exist two distinct elements of X that f sends to the same element of Y); this follows from the former principle, since if f were injective, then so would be its restriction to a strict subset S of X with m elements, which restriction would then be surjective, contradicting the fact that for x in X outside S, f(x) cannot be in the image of the restriction. Similar counting arguments can prove the existence of certain objects without explicitly providing an example. In the case of infinite sets this can even apply in situations where it is impossible to give an example.[citation needed]
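A small Python sketch of the pigeonhole principle in action (find_collision is an illustrative helper, not a library function):

```python
# Pigeonhole principle: any map from a set of n elements into a set of
# m < n elements must send two distinct elements to the same target.
def find_collision(f, domain):
    """Return two distinct inputs with equal outputs; the principle
    guarantees such a pair exists whenever |domain| > |image set|."""
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen:
            return seen[y], x
        seen[y] = x
    return None

# 5 "pigeons" mapped into 4 "holes": a collision must occur.
assert find_collision(lambda x: x % 4, range(5)) == (0, 4)
```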
The domain of enumerative combinatorics deals with computing the number of elements of finite sets without actually counting them one by one; direct counting is usually impossible because infinite families of finite sets are considered at once, such as the sets of permutations of {1, 2, ..., n} for every natural number n.
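For instance, the number of permutations of {1, 2, ..., n} is n!; a Python sketch can compute this by formula and, for a small n, check it against direct enumeration:

```python
# Enumerative combinatorics computes |S_n| = n! by formula rather than by
# listing; direct listing quickly becomes infeasible as n grows.
import math
from itertools import permutations

n = 6
by_formula = math.factorial(n)                       # 720, instant for any n
by_listing = sum(1 for _ in permutations(range(n)))  # actually enumerates
assert by_formula == by_listing == 720
```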
In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of sets, each containing at least one element, it is possible to construct a new set by choosing one element from each set, even if the collection is infinite. Formally, it states that for every indexed family (Sᵢ)ᵢ∈I of nonempty sets, there exists an indexed family (xᵢ)ᵢ∈I of elements such that xᵢ ∈ Sᵢ for every i ∈ I. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem.
A bijection, bijective function, or one-to-one correspondence between two mathematical sets is a function such that each element of the second set is the image of exactly one element of the first set. Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set.
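As an informal finite illustration, a Python dict can be tested for being a bijection between its keys and its values (is_bijection is an illustrative helper):

```python
# A Python dict pairs each key with one value; it is a bijection between
# its key set and its value set exactly when no two keys share a value.
def is_bijection(mapping):
    """Check that each value is the image of exactly one key."""
    return len(set(mapping.values())) == len(mapping)

assert is_bijection({"a": 1, "b": 2, "c": 3})
assert not is_bijection({"a": 1, "b": 1})  # two keys hit the same value
```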
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and as an end to obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
In mathematics, a set is countable if either it is finite or it can be put in one-to-one correspondence with the set of natural numbers. Equivalently, a set is countable if there exists an injective function from it into the natural numbers; this means that each element of the set may be associated with a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish because of an infinite number of elements.
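For instance, the classical Cantor pairing function injects pairs of natural numbers into the natural numbers, showing that the set of such pairs is countable; a Python sketch (pair is an illustrative name for the standard function):

```python
# The Cantor pairing function is a bijection from N x N to N, so in
# particular it is injective: distinct pairs get distinct codes.
def pair(a, b):
    return (a + b) * (a + b + 1) // 2 + b

# On a 50 x 50 grid of pairs, all codes come out distinct (injectivity):
codes = {pair(a, b) for a in range(50) for b in range(50)}
assert len(codes) == 50 * 50
```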
In mathematics, a cardinal number, or cardinal for short, is what is commonly called the number of elements of a set. In the case of a finite set, its cardinal number, or cardinality, is therefore a natural number. For dealing with the case of infinite sets, the infinite cardinal numbers have been introduced, which are often denoted with the Hebrew letter ℵ (aleph) marked with a subscript indicating their rank among the infinite cardinals.
In mathematics, cardinality describes a relationship between sets which compares their relative size. For example, the sets {1, 2, 3} and {2, 4, 6} are the same size as they each contain 3 elements. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two notions often used when referring to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers. The cardinality of a set may also be called its size, when no confusion with other notions of size is possible.
Discrete mathematics is the study of mathematical structures that can be considered "discrete" rather than "continuous". Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics".
In mathematics, particularly set theory, a finite set is a set that has a finite number of elements. Informally, a finite set is a set which one could in principle count and finish counting. For example, {2, 4, 6, 8, 10} is a finite set with five elements.
In mathematics, the natural numbers are the numbers 0, 1, 2, 3, and so on, possibly excluding 0. Some start counting with 0, defining the natural numbers as the non-negative integers 0, 1, 2, 3, ..., while others start with 1, defining them as the positive integers 1, 2, 3, .... Some authors acknowledge both definitions whenever convenient. Sometimes, the whole numbers are the natural numbers plus zero; in other cases, the whole numbers refer to all of the integers, including negative integers. The counting numbers are another term for the natural numbers, particularly in primary school education; the term is similarly ambiguous, although counting numbers typically start at 1.
In mathematics, the power set (or powerset) of a set S is the set of all subsets of S, including the empty set and S itself. In axiomatic set theory (as developed, for example, in the ZFC axioms), the existence of the power set of any set is postulated by the axiom of power set. The powerset of S is variously denoted as P(S), 𝒫(S), ℘(S), or 2^S. Any subset of P(S) is called a family of sets over S.
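A short Python sketch of the 2^n count, building the power set with itertools (power_set is an illustrative helper):

```python
# Each element of S is either in or out of a subset, independently, so a
# set with n elements has exactly 2**n subsets.
from itertools import chain, combinations

def power_set(s):
    """Return every subset of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

subsets = power_set({1, 2, 3})
assert len(subsets) == 2 ** 3  # 8 subsets, including set() and {1, 2, 3}
```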
In mathematics, a set is a collection of different things; these things are called elements or members of the set and are typically mathematical objects of any kind: numbers, symbols, points in space, lines, other geometrical shapes, variables, or even other sets. A set may have a finite number of elements or be an infinite set. There is a unique set with no elements, called the empty set; a set with a single element is a singleton.
In mathematics, a well-order on a set S is a total ordering on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the ordering is then called a well-ordered set. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering.
An enumeration is a complete, ordered listing of all the items in a collection. The term is commonly used in mathematics and computer science to refer to a listing of all of the elements of a set. The precise requirements for an enumeration depend on the discipline of study and the context of a given problem.
In mathematics, two sets or classes A and B are equinumerous if there exists a one-to-one correspondence (or bijection) between them, that is, if there exists a function f from A to B such that for every element y of B, there is exactly one element x of A with f(x) = y. Equinumerous sets are said to have the same cardinality (number of elements). The study of cardinality is often called equinumerosity (equalness-of-number). The terms equipollence (equalness-of-strength) and equipotence (equalness-of-power) are sometimes used instead.
In mathematics, in the areas of order theory and combinatorics, Dilworth's theorem states that, in any finite partially ordered set, the maximum size of an antichain of incomparable elements equals the minimum number of chains needed to cover all elements. This number is called the width of the partial order. The theorem is named for the mathematician Robert P. Dilworth, who published it in 1950.
In combinatorics, bijective proof is a proof technique for proving that two sets have equally many elements, or that the sets in two combinatorial classes have equal size, by finding a bijective function that maps one set one-to-one onto the other. This technique can be useful as a way of finding a formula for the number of elements of certain sets, by corresponding them with other sets that are easier to count. Additionally, the nature of the bijection itself often provides powerful insights into each or both of the sets.
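A minimal Python sketch of one classical bijective proof, that complementation pairs each k-element subset of an n-set with an (n − k)-element subset, so the two families are equally large (names are illustrative):

```python
# Bijective proof sketch: taking complements is a bijection between
# k-subsets and (n - k)-subsets, hence C(n, k) = C(n, n - k).
from itertools import combinations

n, k = 6, 2
universe = set(range(n))
k_subsets = [set(c) for c in combinations(universe, k)]
complements = [universe - s for s in k_subsets]  # the bijection
assert len(set(map(frozenset, complements))) == len(k_subsets)      # injective
assert len(k_subsets) == len(list(combinations(universe, n - k)))   # equal counts
```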
In mathematics, a set A is Dedekind-infinite if some proper subset B of A is equinumerous to A. Explicitly, this means that there exists a bijective function from A onto some proper subset B of A. A set is Dedekind-finite if it is not Dedekind-infinite. Proposed by Dedekind in 1888, Dedekind-infiniteness was the first definition of "infinite" that did not rely on the definition of the natural numbers.
In mathematics, in the branch of combinatorics, a graded poset is a partially ordered set (poset) P equipped with a rank function ρ from P to the set N of all natural numbers. ρ must satisfy the following two properties: it is compatible with the order, so that ρ(x) < ρ(y) whenever x < y, and it is consistent with the covering relation, so that ρ(y) = ρ(x) + 1 whenever y covers x.
This article contains a discussion of paradoxes of set theory. As with most mathematical paradoxes, they generally reveal surprising and counter-intuitive mathematical results, rather than actual logical contradictions within modern axiomatic set theory.
Combinatorics on words is a fairly new field of mathematics, branching from combinatorics, which focuses on the study of words and formal languages. The subject looks at letters or symbols, and the sequences they form. Combinatorics on words affects various areas of mathematical study, including algebra and computer science. There has been a wide range of contributions to the field. Some of the first work was on square-free words by Axel Thue in the early 1900s. He and colleagues observed patterns within words and tried to explain them. As time went on, combinatorics on words became useful in the study of algorithms and coding. It has led to developments in abstract algebra and to the answering of open questions.