Integer (computer science)


In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies, so the set of integer sizes available differs between different types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.


Value and representation

The value of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well). [1]

An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit digit group separators. [2]

The internal representation of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value.

The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The width, precision, or bitness [3] of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example, an unsigned type typically represents the non-negative values 0 through 2^n − 1. Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code, or as printed character codes such as ASCII.

There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones' complement.
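A minimal C sketch of these properties, assuming an 8-bit two's-complement type (which int8_t guarantees): the same bit pattern is read as either a signed or an unsigned value, and addition behaves identically under both readings.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int8_t x = -5;
        /* The same bits read as unsigned: -5 is stored as 256 - 5 = 251,
           i.e. the bit pattern 11111011. */
        uint8_t bits = (uint8_t)x;
        printf("%d is stored as 0x%02X (%u unsigned)\n",
               x, (unsigned)bits, (unsigned)bits);
        /* Addition is the same operation for both interpretations:
           251 + 5 wraps to 0, just as -5 + 5 == 0. */
        printf("%u\n", (unsigned)(uint8_t)(bits + 5));
        return 0;
    }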

Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one programming language may be a different size in a different language, on a different processor, or in an execution context of different bitness; see § Words.

Some older computer architectures used decimal representations of integers, stored in binary-coded decimal (BCD) or other format. These values generally require data sizes of 4 bits per decimal digit (sometimes called a nibble), usually with additional bits for a sign. Many modern CPUs provide limited support for decimal integers as an extended datatype, providing instructions for converting such values to and from binary values. Depending on the architecture, decimal integers may have fixed sizes (e.g., 7 decimal digits plus a sign fit into a 32-bit word), or may be variable-length (up to some maximum digit size), typically occupying two digits per byte (octet).
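As an illustration of the packed layout described above (not any particular architecture's conversion instructions), a small C sketch that packs a binary value into packed BCD, one decimal digit per nibble and two digits per byte; the function name is illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a binary value into packed BCD: one decimal digit per 4-bit
       nibble, two digits per byte. Handles values up to 8 digits. */
    static uint32_t to_packed_bcd(uint32_t n) {
        uint32_t bcd = 0;
        for (int shift = 0; n != 0; shift += 4) {
            bcd |= (n % 10) << shift;  /* place one decimal digit */
            n /= 10;
        }
        return bcd;
    }

    int main(void) {
        printf("0x%X\n", to_packed_bcd(1234)); /* prints 0x1234 */
        return 0;
    }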

Common integral data types

Ranges below assume two's complement for signed types; the parenthesized figure after each range is the approximate number of decimal digits it covers.

4 bits (nibble, semioctet)
  Signed: from −8 to 7, i.e. −(2^3) to 2^3 − 1 (≈0.9 decimal digits)
  Unsigned: from 0 to 15, which equals 2^4 − 1 (≈1.2 decimal digits)
  Uses: binary-coded decimal, single decimal digit representation

8 bits (byte, octet, i8, u8)
  Signed: from −128 to 127, i.e. −(2^7) to 2^7 − 1 (≈2.11 decimal digits)
  Unsigned: from 0 to 255, which equals 2^8 − 1 (≈2.41 decimal digits)
  Uses: ASCII characters, code units in the UTF-8 character encoding
  Signed types: C/C++ int8_t, signed char [b]; C# sbyte; Pascal and Delphi Shortint; Java byte; SQL tinyint; FORTRAN integer(1); D byte; Rust i8
  Unsigned types: C/C++ uint8_t, unsigned char [b]; C# byte; Pascal and Delphi Byte; SQL unsigned tinyint [a]; D ubyte; Rust u8

16 bits (halfword, word, short, i16, u16)
  Signed: from −32,768 to 32,767, i.e. −(2^15) to 2^15 − 1 (≈4.52 decimal digits)
  Unsigned: from 0 to 65,535, which equals 2^16 − 1 (≈4.82 decimal digits)
  Uses: UCS-2 characters, code units in the UTF-16 character encoding
  Signed types: C/C++ int16_t, short [b], int [b]; C# short; Pascal and Delphi Smallint; Java short; SQL smallint; FORTRAN integer(2); D short; Rust i16
  Unsigned types: C/C++ uint16_t, unsigned [b], unsigned int [b]; C# ushort; Pascal and Delphi Word; Java char [c]; SQL unsigned smallint [a]; D ushort; Rust u16

32 bits (word, long, doubleword, longword, int, i32, u32)
  Signed: from −2,147,483,648 to 2,147,483,647, i.e. −(2^31) to 2^31 − 1 (≈9.33 decimal digits)
  Unsigned: from 0 to 4,294,967,295, which equals 2^32 − 1 (≈9.63 decimal digits)
  Uses: UTF-32 characters, true color with alpha, FourCC, pointers in 32-bit computing
  Signed types: C/C++ int32_t, int [b], long [b]; C# int; Pascal and Delphi LongInt, Integer [d]; Java int; SQL int; FORTRAN integer(4); D int; Rust i32
  Unsigned types: C/C++ uint32_t, unsigned [b], unsigned int [b], unsigned long [b]; C# uint; Pascal and Delphi LongWord, DWord, Cardinal [d]; SQL unsigned int [a]; D uint; Rust u32

64 bits (word, doubleword, longword, long, long long, quad, quadword, qword, int64, i64, u64)
  Signed: from −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, i.e. −(2^63) to 2^63 − 1 (≈18.96 decimal digits)
  Unsigned: from 0 to 18,446,744,073,709,551,615, which equals 2^64 − 1 (≈19.27 decimal digits)
  Uses: time (milliseconds since the Unix epoch), pointers in 64-bit computing
  Signed types: C/C++ int64_t, long [b], long long [b]; C# long; Pascal and Delphi Int64; Java long; SQL bigint; FORTRAN integer(8); D long; Rust i64
  Unsigned types: C/C++ uint64_t, unsigned long long [b]; C# ulong; Pascal and Delphi UInt64, QWord; SQL unsigned bigint [a]; D ulong; Rust u64

128 bits (octaword, double quadword, i128, u128)
  Signed: from −170,141,183,460,469,231,731,687,303,715,884,105,728 to 170,141,183,460,469,231,731,687,303,715,884,105,727, i.e. −(2^127) to 2^127 − 1 (≈38.23 decimal digits)
  Unsigned: from 0 to 340,282,366,920,938,463,463,374,607,431,768,211,455, which equals 2^128 − 1 (≈38.53 decimal digits)
  Uses: complex scientific calculations, IPv6 addresses, GUIDs
  Signed types: C only as a non-standard compiler-specific extension; FORTRAN integer(16); D cent [e]; Rust i128
  Unsigned types: D ucent [e]; Rust u128

n bits (n-bit integer, general case)
  Signed: from −(2^(n−1)) to 2^(n−1) − 1 (≈(n − 1) log10 2 decimal digits)
  Unsigned: from 0 to 2^n − 1 (≈n log10 2 decimal digits)
  Types: C23 _BitInt(n), signed _BitInt(n), unsigned _BitInt(n); Ada range -2**(n-1)..2**(n-1)-1, range 0..2**n-1, mod 2**n; standard libraries' or third-party arbitrary-precision libraries' BigDecimal or Decimal classes in many languages such as Python, C++, etc.

Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths.

The table above lists integral type widths that are supported in hardware by common processors. High-level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (that can represent only the integers in a specified range).

Some languages, such as Lisp, Smalltalk, REXX, Haskell, Python, and Raku, support arbitrary precision integers (also known as infinite precision integers or bignums). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. [6] These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they, too, can only represent a finite subset of the mathematical integers. These schemes support very large numbers; for example, one kilobyte of memory could be used to store numbers up to 2466 decimal digits long.
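The core idea behind such libraries can be sketched in C: chain fixed-width hardware words ("limbs") and propagate carries by hand. The limb count and function name below are illustrative, not taken from any particular library.

    #include <stdint.h>
    #include <stdio.h>

    enum { LIMBS = 4 };  /* 4 x 32 bits = one 128-bit number */

    /* Add two little-endian arrays of 32-bit limbs, propagating the
       carry through a double-width accumulator. */
    void big_add(const uint32_t a[LIMBS], const uint32_t b[LIMBS],
                 uint32_t sum[LIMBS]) {
        uint64_t carry = 0;
        for (int i = 0; i < LIMBS; i++) {
            carry += (uint64_t)a[i] + b[i];
            sum[i] = (uint32_t)carry;  /* keep the low 32 bits */
            carry >>= 32;              /* carry into the next limb */
        }
    }

    int main(void) {
        uint32_t a[LIMBS] = { 0xFFFFFFFF, 0, 0, 0 };  /* 2^32 - 1 */
        uint32_t b[LIMBS] = { 1, 0, 0, 0 };
        uint32_t s[LIMBS];
        big_add(a, b, s);
        printf("%08X %08X\n", s[1], s[0]);  /* 00000001 00000000 = 2^32 */
        return 0;
    }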

A Boolean or Flag type is a type that can represent only two values: 0 and 1, usually identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access.
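A short C illustration of this trade-off: a bool occupies a whole addressable object (typically one byte), while a bit-field can pack several one-bit flags into a single storage unit.

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        /* bool holds only 0 or 1, but is addressable, so it typically
           occupies a full byte. */
        bool flag = true;
        printf("sizeof(bool) = %zu byte(s), flag = %d\n",
               sizeof(bool), flag);

        /* A bit-field packs several one-bit flags into one unit. */
        struct {
            unsigned ready : 1;
            unsigned error : 1;
        } status = { 1, 0 };
        printf("ready=%u error=%u\n", status.ready, status.error);
        return 0;
    }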

A four-bit quantity is known as a nibble (when eating, being smaller than a bite) or nybble (being a pun on the form of the word byte). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal.
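In C, for example, the two nibbles of a byte are extracted with a shift and a mask, and each prints as a single hexadecimal digit:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t byte = 0xB7;
        uint8_t high = byte >> 4;    /* upper nibble: 0xB */
        uint8_t low  = byte & 0x0F;  /* lower nibble: 0x7 */
        printf("%X %X\n", (unsigned)high, (unsigned)low); /* B 7 */
        return 0;
    }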

Bytes and octets

The term byte initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machine'), or that could only address 16- or 32-bit quantities ('word-addressed machine'). The term byte was usually not used at all in connection with bit- and word-addressed machines.

The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate.

In modern usage byte almost invariably means eight bits, since all other sizes have fallen into disuse; thus byte has come to be synonymous with octet.

Words

The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 40-, 48-, 60-, and 64-bit. Since it is architectural, the size of a word is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from word, such as longword, doubleword, quadword, and halfword, also vary with the CPU and OS. [7]

Practically all new desktop processors are capable of using 64-bit words, though embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers.

One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 2^15 − 1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers. This issue is resolved by C99 in stdint.h in the form of intptr_t.
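A minimal C sketch of the portable style just described, using the exact-width and pointer-width types that C99's stdint.h provides (PRId32 comes from inttypes.h):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* int32_t is exactly 32 bits on every platform, unlike int. */
        int32_t counter = 100000;  /* would overflow a 16-bit int */
        printf("%" PRId32 "\n", counter);

        /* intptr_t is wide enough to round-trip a pointer, unlike a
           plain int on 64-bit platforms. */
        int x = 42;
        intptr_t p = (intptr_t)&x;
        int *back = (int *)p;
        printf("%d\n", *back);  /* prints 42 */
        return 0;
    }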

The bitness of a program may refer to the word size (or bitness) of the processor on which it runs, or it may refer to the width of a memory address or pointer, which can differ between execution modes or contexts. For example, 64-bit versions of Microsoft Windows support existing 32-bit binaries, and programs compiled for Linux's x32 ABI run in 64-bit mode yet use 32-bit memory addresses. [8]

Standard integer

The standard integer size is platform-dependent.

In C, it is denoted by int and required to be at least 16 bits. Windows and Unix systems have 32-bit ints on both 32-bit and 64-bit architectures.
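Since the width of int is platform-dependent, a C program can query it directly; on typical Windows and Unix systems this small check prints 4 bytes:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* The standard guarantees only that int has at least 16 bits;
           INT_MIN and INT_MAX report the actual range. */
        printf("int: %zu bytes, %d to %d\n",
               sizeof(int), INT_MIN, INT_MAX);
        return 0;
    }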

Short integer

A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine.

In C, it is denoted by short. It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. [9] [10] A conforming program can assume that it can safely store values between −(2^15 − 1) [11] and 2^15 − 1, [12] but it may not assume that the range is not larger. In Java, a short is always a 16-bit integer. In the Windows API, the datatype SHORT is defined as a 16-bit signed integer on all machines. [7]
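The actual range of short on a given implementation is reported by the limits.h macros; a quick C check:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* The standard guarantees only [-32767, 32767]; an
           implementation may provide more. */
        printf("short: %zu bytes, %d to %d\n",
               sizeof(short), SHRT_MIN, SHRT_MAX);
        return 0;
    }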

Common short integer sizes
Programming language | Data type name | Signedness | Size in bytes | Minimum value | Maximum value
C and C++ | short | signed | 2 | −32,767 [f] | +32,767
C and C++ | unsigned short | unsigned | 2 | 0 | 65,535
C# | short | signed | 2 | −32,768 | +32,767
C# | ushort | unsigned | 2 | 0 | 65,535
Java | short | signed | 2 | −32,768 | +32,767
SQL | smallint | signed | 2 | −32,768 | +32,767

Long integer

A long integer can represent a whole number whose range is greater than or equal to that of a standard integer on the same machine.

In C, it is denoted by long. It is required to be at least 32 bits, and may or may not be larger than a standard integer. A conforming program can assume that it can safely store values between −(2^31 − 1) [11] and 2^31 − 1, [12] but it may not assume that the range is not larger.
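As with short, the implementation's actual range of long is available from limits.h; a small C check:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* long is at least 32 bits; LP64 Unix systems typically print
           8 bytes here, while 64-bit Windows (LLP64) prints 4. */
        printf("long: %zu bytes, %ld to %ld\n",
               sizeof(long), LONG_MIN, LONG_MAX);
        return 0;
    }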

Common long integer sizes
Programming language | Approval type | Platforms | Data type name | Storage in bytes | Signed range | Unsigned range
C | ISO/ANSI C99 international standard | Unix, 16/32-bit systems; [7] Windows, 16/32/64-bit systems [7] | long | 4 (minimum requirement 4) | −2,147,483,647 to +2,147,483,647 | 0 to 4,294,967,295 (minimum requirement)
C | ISO/ANSI C99 international standard | Unix, 64-bit systems [7] [10] | long | 8 (minimum requirement 4) | −9,223,372,036,854,775,807 to +9,223,372,036,854,775,807 | 0 to 18,446,744,073,709,551,615
C++ | ISO/ANSI international standard | Unix, Windows, 16/32-bit systems | long | 4 [13] (minimum requirement 4) | −2,147,483,648 to +2,147,483,647 | 0 to 4,294,967,295 (minimum requirement)
C++/CLI | International standard ECMA-372 | Unix, Windows, 16/32-bit systems | long | 4 [14] (minimum requirement 4) | −2,147,483,648 to +2,147,483,647 | 0 to 4,294,967,295 (minimum requirement)
VB | Company standard | Windows | Long | 4 [15] | −2,147,483,648 to +2,147,483,647 | —
VBA | Company standard | Windows, Mac OS X | Long | 4 [16] | −2,147,483,648 to +2,147,483,647 | —
SQL Server | Company standard | Windows | BigInt | 8 | −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 | 0 to 18,446,744,073,709,551,615
C#/VB.NET | ECMA international standard | Microsoft .NET | long or Int64 | 8 | −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 | 0 to 18,446,744,073,709,551,615
Java | International/company standard | Java platform | long | 8 | −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 | —
Pascal | ? | Windows, UNIX | int64 | 8 | −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 | 0 to 18,446,744,073,709,551,615 (Qword type)

Long long

In the C99 version of the C programming language and the C++11 version of C++, a long long type is supported that has double the minimum capacity of the standard long. This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the long long type did not exist in C++03. For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, −(2^63 − 1) [11] to 2^63 − 1 for signed and 0 to 2^64 − 1 for unsigned, [12] must be fulfilled; however, extending this range is permitted. [17] [18] This can be an issue when exchanging code and data between platforms, or doing direct hardware access. Thus, there are several sets of headers providing platform-independent exact-width types. The C standard library provides stdint.h; this was introduced in C99 and C++11.
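A short C99 sketch of long long alongside the exact-width types from stdint.h (the PRI format macros come from inttypes.h, which includes stdint.h):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        /* long long is guaranteed at least 64 bits in C99/C++11. */
        long long big = 9223372036854775807LL;  /* 2^63 - 1 */
        printf("%lld\n", big);

        /* Exact-width types pin the representation down for data
           exchange between platforms. */
        uint64_t u = UINT64_MAX;  /* 2^64 - 1 */
        printf("%" PRIu64 "\n", u);
        return 0;
    }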

Syntax

Literals for integers can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value. However, most programming languages disallow use of commas or spaces for digit grouping. Examples of integer literals are 42, 10000, and −233.

There are several alternate methods for writing integer literals in many programming languages: common notations include hexadecimal with a 0x prefix, octal with a leading zero or 0o prefix, binary with a 0b prefix, and digit group separators such as underscores; a sketch of some of these notations in C follows.
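    #include <stdio.h>

    int main(void) {
        /* The same value, 255, in the notations C accepts; binary
           literals (0b...) and the ' digit separator are C23
           additions not shown here. */
        int dec = 255;   /* decimal */
        int hex = 0xFF;  /* hexadecimal: 0x prefix */
        int oct = 0377;  /* octal: leading zero */
        printf("%d %d %d\n", dec, hex, oct);  /* 255 255 255 */
        return 0;
    }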


Notes

  a. Not all SQL dialects have unsigned datatypes. [4] [5]
  b. The sizes of char, short, int, long and long long in C/C++ are dependent upon the implementation of the language.
  c. Java does not directly support arithmetic on char types. The results must be cast back into char from an int.
  d. The sizes of Delphi's Integer and Cardinal are not guaranteed, varying from platform to platform; they are usually defined as LongInt and LongWord respectively.
  e. Reserved for future use. Not implemented yet.
  f. The ISO C standard allows implementations to reserve the value with sign bit 1 and all other bits 0 (for sign-magnitude and two's complement representation) or with all bits 1 (for ones' complement) for use as a "trap" value, used to indicate (for example) an overflow. [11]

Related Research Articles

Binary-coded decimal: System of digitally encoding numbers

In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications.

Floating-point arithmetic: Computer approximation for real numbers

In computing, floating-point arithmetic (FP) is arithmetic that represents subsets of real numbers using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base. Numbers of this form are called floating-point numbers. For example, 12.345 is a floating-point number in base ten with five digits of precision: 12.345 = 12345 × 10^−3.

In mathematics and computing, the hexadecimal numeral system is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9, and "A"–"F" to represent values from ten to fifteen.

Nibble: Four-bit unit of digital storage

In computing, a nibble (occasionally nybble, nyble, or nybl to match the spelling of byte) is a four-bit aggregation, or half an octet. It is also known as half-byte or tetrade. In a networking or telecommunication context, the nibble is often called a semi-octet, quadbit, or quartet. A nibble has sixteen (2^4) possible values. A nibble can be represented by a single hexadecimal digit (0–F) and called a hex digit.

Octal is a numeral system with eight as the base.

String (computer science): Sequence of characters, data type

In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed. A string is generally considered as a data type and is often implemented as an array data structure of bytes that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence data types and structures.

A computer number format is the internal representation of numeric values in digital device hardware and software, such as in programmable computers and calculators. Numerical values are stored as groupings of bits, such as bytes and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer; the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.

Double-precision floating-point format is a floating-point number format, usually occupying 64 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.

In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands.

In computer science, primitive data types are a set of basic data types from which all other data types are constructed. Specifically it often refers to the limited set of data representations in use by a particular processor, which all compiled programs must use. Most processors support a similar set of primitive data types, although the specific representations vary. More generally, "primitive data types" may refer to the standard data types built into a programming language. Data types which are not primitive are referred to as derived or composite.

Power of two: Two raised to an integer power

A power of two is a number of the form 2^n where n is an integer, that is, the result of exponentiation with number two as the base and integer n as the exponent. If n is negative, 2^n is called a negative power of two or an inverse power of two.

In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents. More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional amount of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted to the more complicated and computationally demanding floating-point representation.

C syntax: Set of rules defining correctly structured programs

The syntax of the C programming language is the set of rules governing writing of software in C. It is designed to allow for programs that are extremely terse, have a close relationship with the resulting object code, and yet provide relatively high-level data abstraction. C was the first widely successful high-level language for portable operating-system development.

In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word is an important characteristic of any specific processor design or computer architecture.

Integer overflow: Computer arithmetic error

In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside of the range that can be represented with a given number of digits – either higher than the maximum or lower than the minimum representable value.

In the C programming language, data types constitute the semantics and characteristics of storage of data elements. They are expressed in the language syntax in the form of declarations for memory locations or variables. Data types also determine the types of operations or methods of processing of data elements.

A bit field is a data structure that consists of one or more adjacent bits which have been allocated for specific purposes, so that any single bit or group of bits within the structure can be set or inspected. A bit field is most commonly used to represent integral types of known, fixed bit-width, such as single-bit Booleans.

In computing, bit numbering is the convention used to identify the bit positions in a binary number.

This article compares a large number of programming languages by tabulating their data types, their expression, statement, and declaration syntax, and some common operating-system interfaces.

In computer science, an integer literal is a kind of literal for an integer whose value is directly represented in source code. For example, in the assignment statement x = 1, the string 1 is an integer literal indicating the value 1, while in the statement x = 0x10 the string 0x10 is an integer literal indicating the value 16, which is represented by 10 in hexadecimal.

References

  1. Cheever, Eric. "Representation of numbers". Swarthmore College. Retrieved 2011-09-11.
  2. Konda, Madhusudhan (2011-09-02). "A look at Java 7's new features - O'Reilly Radar". Radar.oreilly.com. Retrieved 2013-10-15.
  3. Barr, Adam (2018-10-23). The Problem with Software: Why Smart Engineers Write Bad Code. MIT Press. ISBN 978-0-262-34821-8.
  4. "Sybase Adaptive Server Enterprise 15.5: Exact Numeric Datatypes".
  5. "MySQL 5.6 Numeric Datatypes".
  6. "BigInteger (Java Platform SE 6)". Oracle. Retrieved 2011-09-11.
  7. Fog, Agner (2010-02-16). "Calling conventions for different C++ compilers and operating systems: Chapter 3, Data Representation" (PDF). Retrieved 2010-08-30.
  8. Leemhuis, Thorsten (2011-09-13). "Kernel Log: x32 ABI gets around 64-bit drawbacks". www.h-online.com. Archived from the original on 28 October 2011. Retrieved 2011-11-01.
  9. Giguere, Eric (1987-12-18). "The ANSI Standard: A Summary for the C Programmer". Retrieved 2010-09-04.
  10. Meyers, Randy (2000-12-01). "The New C: Integers in C99, Part 1". drdobbs.com. Retrieved 2010-09-04.
  11. "ISO/IEC 9899:201x" (PDF). open-std.org. Section 6.2.6.2, paragraph 2. Retrieved 2016-06-20.
  12. "ISO/IEC 9899:201x" (PDF). open-std.org. Section 5.2.4.2.1. Retrieved 2016-06-20.
  13. "Fundamental types in C++". cppreference.com. Retrieved 2010-12-05.
  14. "Chapter 8.6.2 on page 12" (PDF). ecma-international.org.
  15. VB 6.0 help file.
  16. "The Integer, Long, and Byte Data Types (VBA)". microsoft.com. Retrieved 2006-12-19.
  17. Giguere, Eric (1987-12-18). "The ANSI Standard: A Summary for the C Programmer". Retrieved 2010-09-04.
  18. "American National Standard Programming Language C specifies the syntax and semantics of programs written in the C programming language". Archived from the original on 2010-08-22. Retrieved 2010-09-04.
  19. ECMAScript 6th Edition draft: https://people.mozilla.org/~jorendorff/es6-draft.html#sec-literals-numeric-literals Archived 2013-12-16 at the Wayback Machine.