Character (computing)

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. [1]

Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return or tab, as well as instructions to printers or other devices that display or otherwise process text.

Characters are typically combined into strings.

Historically, the term character was also used to denote a specific number of contiguous bits. While a character is most commonly assumed to be 8 bits (one byte) today, other sizes have been used: 6-bit character codes were once popular (such codes generally used only upper case; six bits could also have covered lower case, but not together with enough digits and punctuation), [2] [3] and the 5-bit Baudot code was used even earlier. The term has also been applied to 4 bits, [4] giving only 16 possible values, which was never intended to, and cannot, represent the full English alphabet. See also Universal Character Set characters, which cannot all be represented in 8 bits, although each of them can be encoded as one or more 8-bit code units in UTF-8.

Encoding

Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of digits. Two examples of common encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
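
As a concrete illustration, the following C sketch prints the integer that ASCII assigns to the letter 'A' and the two bytes that UTF-8 uses for U+00E9 ('é'); the choice of language and of example characters is purely illustrative, and the byte values shown are the standard ASCII and UTF-8 encodings.

    #include <stdio.h>

    int main(void) {
        /* On an ASCII-based system the character 'A' is stored as the integer 65. */
        char c = 'A';
        printf("'%c' is encoded as %d\n", c, c);

        /* Under UTF-8, the non-ASCII character U+00E9 ('e with acute accent')
           occupies two bytes, 0xC3 0xA9. */
        const unsigned char e_acute[] = { 0xC3, 0xA9 };
        printf("U+00E9 as UTF-8 bytes: %02X %02X\n", e_acute[0], e_acute[1]);
        return 0;
    }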

Terminology

Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode [5] and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organisation, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
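
A rough sketch of this distinction in C follows; the code points cited (U+05D0 HEBREW LETTER ALEF, U+2135 ALEF SYMBOL, and U+6C34 for the water logogram) are standard Unicode assignments, while the program itself is only an illustration.

    #include <stdio.h>

    int main(void) {
        /* Two distinct characters that may render with the same shape: */
        unsigned int hebrew_alef = 0x05D0; /* U+05D0 HEBREW LETTER ALEF, ordinary Hebrew text */
        unsigned int alef_symbol = 0x2135; /* U+2135 ALEF SYMBOL, mathematical use */

        /* One character that may render differently by locale but has a single code point: */
        unsigned int water = 0x6C34;       /* U+6C34, the logogram for "water" */

        printf("alef (letter): U+%04X\n", hebrew_alef);
        printf("alef (symbol): U+%04X\n", alef_symbol);
        printf("water:         U+%04X\n", water);
        return 0;
    }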

The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

Combining character

Combining characters are also addressed by Unicode. For instance, Unicode allocates a code point to each of the letter 'i', the combining diaeresis, and the precomposed letter 'ï'.

This makes it possible to encode the middle character of the word 'naïve' either as the single character 'ï' or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); the combined form is also rendered as 'ï'.

These are considered canonically equivalent by the Unicode standard.
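
A small sketch of the two forms, assuming UTF-8 as the storage encoding; the byte values shown are the standard UTF-8 encodings of U+00EF and of U+0069 followed by U+0308.

    #include <stdio.h>
    #include <string.h>

    /* Print each byte of a UTF-8 encoded string in hexadecimal. */
    static void dump(const char *label, const char *s) {
        printf("%s", label);
        for (size_t i = 0; i < strlen(s); i++)
            printf(" %02X", (unsigned char)s[i]);
        printf("\n");
    }

    int main(void) {
        const char *precomposed = "\xC3\xAF";     /* U+00EF LATIN SMALL LETTER I WITH DIAERESIS */
        const char *decomposed  = "\x69\xCC\x88"; /* U+0069 + U+0308 COMBINING DIAERESIS */

        dump("precomposed:", precomposed);
        dump("decomposed: ", decomposed);
        /* The byte sequences differ, yet Unicode treats the two forms as
           canonically equivalent; comparing them byte-for-byte would require
           normalization first. */
        return 0;
    }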

char

A char in the C programming language is a data type with a size of exactly one byte, [6] which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked with the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard requires it to be 8 bits. [7] Newer C standards require char to be able to hold UTF-8 code units, [6] which implies a minimum size of 8 bits.
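
For example, a minimal check of these properties; the values noted in the comments assume a typical platform with 8-bit chars.

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* CHAR_BIT is the number of bits in a char; sizeof(char) is 1 by definition. */
        printf("bits in a char: %d\n", CHAR_BIT);       /* 8 on virtually all current systems */
        printf("sizeof(char):   %zu\n", sizeof(char));  /* always 1 */
        return 0;
    }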

A Unicode code point may require as many as 21 bits. [8] This will not fit in a char on most systems, so more than one char is used for some code points, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".
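
The following sketch of a UTF-8 encoder illustrates how a single code point maps to 1 to 4 char-sized code units; utf8_encode is a hypothetical helper written for this illustration, not a standard library function.

    #include <stdint.h>
    #include <stdio.h>

    /* Encode one Unicode code point (up to 21 bits) into 1 to 4 UTF-8 bytes.
       Returns the number of bytes written, or 0 if the value is not a valid
       Unicode scalar value. */
    static int utf8_encode(uint32_t cp, unsigned char out[4]) {
        if (cp <= 0x7F) {                        /* 1 byte: 0xxxxxxx */
            out[0] = (unsigned char)cp;
            return 1;
        } else if (cp <= 0x7FF) {                /* 2 bytes: 110xxxxx 10xxxxxx */
            out[0] = 0xC0 | (cp >> 6);
            out[1] = 0x80 | (cp & 0x3F);
            return 2;
        } else if (cp <= 0xFFFF) {               /* 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx */
            if (cp >= 0xD800 && cp <= 0xDFFF) return 0;  /* surrogates are not characters */
            out[0] = 0xE0 | (cp >> 12);
            out[1] = 0x80 | ((cp >> 6) & 0x3F);
            out[2] = 0x80 | (cp & 0x3F);
            return 3;
        } else if (cp <= 0x10FFFF) {             /* 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx */
            out[0] = 0xF0 | (cp >> 18);
            out[1] = 0x80 | ((cp >> 12) & 0x3F);
            out[2] = 0x80 | ((cp >> 6) & 0x3F);
            out[3] = 0x80 | (cp & 0x3F);
            return 4;
        }
        return 0;
    }

    int main(void) {
        unsigned char buf[4];
        uint32_t samples[] = { 0x41, 0xE9, 0x6C34, 0x1F600 }; /* 'A', 'é', '水', and an emoji */
        for (int i = 0; i < 4; i++) {
            int n = utf8_encode(samples[i], buf);
            printf("U+%04X -> %d byte(s):", (unsigned)samples[i], n);
            for (int j = 0; j < n; j++) printf(" %02X", buf[j]);
            printf("\n");
        }
        return 0;
    }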

The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as reporting the "length" of a string as its count of bytes rather than of characters). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempting to use "byte" when referring to char data. [9] [10] However, it still contains errors such as defining an array of char as a character array (rather than a byte array). [11]
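
As an illustration of the byte/character distinction, the sketch below compares strlen, which counts char units (bytes), with a hypothetical utf8_codepoints helper that counts code points in a UTF-8 string; the helper is written for this article and assumes valid UTF-8 input.

    #include <stdio.h>
    #include <string.h>

    /* Count code points in a UTF-8 string by skipping continuation bytes
       (those of the form 10xxxxxx). A sketch only: it assumes valid UTF-8
       and does not attempt to group combining characters. */
    static size_t utf8_codepoints(const char *s) {
        size_t n = 0;
        for (; *s; s++)
            if (((unsigned char)*s & 0xC0) != 0x80)
                n++;
        return n;
    }

    int main(void) {
        const char *word = "na\xC3\xAFve";   /* "naïve" encoded in UTF-8 */
        printf("strlen:      %zu bytes\n", strlen(word));    /* 6 */
        printf("code points: %zu\n", utf8_codepoints(word)); /* 5 */
        return 0;
    }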

Unicode can also be stored in strings made up of code units that are larger than char. These are called "wide characters". The original C type was called wchar_t. Because some platforms define wchar_t as 16 bits and others as 32 bits, recent versions of the standard have added char16_t and char32_t. Even then, the objects being stored might not be characters; for instance, the variable-length encoding UTF-16 is often stored in arrays of char16_t.
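
A brief sketch (requiring C11 and <uchar.h>) of why an array of char16_t holds UTF-16 code units rather than characters: U+00E9 fits in one unit, while U+1F600 lies outside the Basic Multilingual Plane and needs a surrogate pair.

    #include <stdio.h>
    #include <uchar.h>

    int main(void) {
        /* U+00E9 fits in a single 16-bit code unit... */
        char16_t e_acute[] = u"\u00E9";
        /* ...but U+1F600 is stored as a surrogate pair, i.e. two char16_t
           code units, in UTF-16. */
        char16_t emoji[] = u"\U0001F600";

        printf("U+00E9  uses %zu code unit(s)\n",
               sizeof e_acute / sizeof e_acute[0] - 1);   /* 1 */
        printf("U+1F600 uses %zu code unit(s): %04X %04X\n",
               sizeof emoji / sizeof emoji[0] - 1,        /* 2 */
               (unsigned)emoji[0], (unsigned)emoji[1]);   /* D83D DE00 */
        return 0;
    }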

Other languages also have a char type. Some, such as C++, use 8 bits like C. Others, such as Java, use 16 bits for char in order to represent UTF-16 values.

See also

Notes

  1. See also the [:word:] regular expression character class.

Related Research Articles

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as The Internet Protocol (1981) refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the endianness. The first bit is number 0, making the eighth bit number 7.

Character encoding – Using numbers to represent text characters

In computing, data storage, and data transmission, character encoding is used to represent a repertoire of characters by some kind of encoding system that assigns a number to each character for digital representation. Depending on the abstraction level and context, corresponding code points and the resulting code space may be regarded as bit patterns, octets, natural numbers, electrical pulses, etc. A character encoding is used in computation, data storage, and transmission of textual data. "Character set", "character map", "codeset" and "code page" are related, but not identical, terms.

Unicode – Character encoding standard

Unicode is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard is maintained by the Unicode Consortium, and as of March 2020, it has a total of 143,859 characters, with Unicode 13.0 covering 154 modern and historic scripts, as well as multiple symbol sets and emoji. The character repertoire of the Unicode Standard is synchronized with ISO/IEC 10646, each being code-for-code identical with the other.

Web pages authored using Hypertext Markup Language (HTML) may contain multilingual text represented with the Unicode universal character set. Key to the relationship between Unicode and HTML is the relationship between the "document character set", which defines the set of characters that may be present in an HTML document and assigns numbers to them, and the "external character encoding", or "charset", used to encode a given document as a sequence of bytes.

UTF-8 is a variable-width character encoding used for electronic communication. Defined by the Unicode Standard, the name is derived from Unicode Transformation Format – 8-bit.

UTF-16 – Variable-width encoding of Unicode, using one or two 16-bit code units

UTF-16 (16-bit Unicode Transformation Format) is a character encoding capable of encoding all 1,112,064 non-surrogate code points of Unicode (in fact this number of code points is dictated by the design of UTF-16). The encoding is variable-length, as code points are encoded with one or two 16-bit code units. UTF-16 arose from an earlier fixed-width 16-bit encoding known as UCS-2 (for 2-byte Universal Character Set) once it became clear that more than 2^16 (65,536) code points were needed.

The byte order mark (BOM) is a particular usage of the special Unicode character U+FEFF BYTE ORDER MARK, whose appearance as a magic number at the start of a text stream can signal several things to a program reading the text, such as the byte order (endianness) of the stream and the fact that the stream is Unicode-encoded.

UTF-32 (32-bit Unicode Transformation Format) is a fixed-length encoding for Unicode code points that uses exactly 32 bits (four bytes) per code point (a number of leading bits must be zero, since there are far fewer than 2^32 Unicode code points, which in fact need only 21 bits). UTF-32 is thus a fixed-length encoding, in contrast to all other Unicode transformation formats, which are variable-length encodings. Each 32-bit value in UTF-32 represents one Unicode code point and is exactly equal to that code point's numerical value.

A text file is a kind of computer file that is structured as a sequence of lines of electronic text. A text file exists stored as data within a computer file system. In operating systems such as CP/M and MS-DOS, where the operating system does not keep track of the file size in bytes, the end of a text file is denoted by placing one or more special characters, known as an end-of-file marker, as padding after the last line. On modern operating systems such as Microsoft Windows and Unix-like systems, text files do not contain any special EOF character, because the file systems on those operating systems keep track of the file size in bytes. Most text files need end-of-line delimiters, which are handled in a few different ways depending on the operating system. Some operating systems with record-oriented file systems may not use newline delimiters and will primarily store text files with lines separated as fixed or variable length records.

GB 18030 – Unicode character encoding mostly used for Simplified Chinese

GB 18030 is a Chinese government standard, described as Information Technology — Chinese coded character set and defines the required language and character support necessary for software in China. GB18030 is the registered Internet name for the official character set of the People's Republic of China (PRC) superseding GB2312. As a Unicode Transformation Format, GB18030 supports both simplified and traditional Chinese characters. It is also compatible with legacy encodings including GB2312, CP936, and GBK 1.0.

ISO/IEC 2022, Information technology—Character code structure and extension techniques, is an ISO standard specifying, among other things, a technique for including multiple character sets in a single character encoding system.

A wide character is a computer character datatype that generally has a size greater than the traditional 8-bit character. The increased datatype size allows for the use of larger coded character sets.

UTF-EBCDIC is a character encoding capable of encoding all 1,112,064 valid character code points in Unicode using one to five one-byte (8-bit) code units. It is meant to be EBCDIC-friendly, so that legacy EBCDIC applications on mainframes may process the characters without much difficulty. Its advantages for existing EBCDIC-based systems are similar to UTF-8's advantages for existing ASCII-based systems. Details on UTF-EBCDIC are defined in Unicode Technical Report #16.

In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor. The number of bits in a word is an important characteristic of any specific processor design or computer architecture.

In character encoding terminology, a code point or code position is any of the numerical values that make up the codespace. Many code points represent single characters but they can also have other meanings, such as for formatting.

A CCSID (coded character set identifier) is a 16-bit number that represents a particular encoding of a specific code page. For example, Unicode is a code page that has several encoding forms, like UTF-8, UTF-16 and UTF-32, but which may or may not actually be accompanied by a CCSID number to indicate that this encoding is being used.

A six-bit character code is a character encoding designed for use on computers with word lengths that are a multiple of six. Six bits can encode only 64 distinct characters, so these codes generally include only the upper-case letters, the numerals, some punctuation characters, and sometimes control characters. Such codes, with an additional parity bit, were a natural fit for storing data on 7-track magnetic tape.

Universal Character Set characters

The Unicode Consortium (UC) and the International Organization for Standardization (ISO) collaborate on the Universal Character Set (UCS). The UCS is an international standard to map characters used in natural language, mathematics, music, and other domains to machine-readable values. By creating this mapping, the UCS enables computer software vendors to interoperate and transmit UCS-encoded text strings from one to another. Because it is a universal map, it can be used to represent multiple languages at the same time. This avoids the confusion of using multiple legacy character encodings, which can result in the same sequence of codes having multiple meanings and thus being improperly decoded if the wrong one is chosen.

The Universal Coded Character Set is a standard set of characters defined by the International Standard ISO/IEC 10646, Information technology — Universal Coded Character Set (UCS), which is the basis of many character encodings, improving as characters from previously unrepresented writing systems are added.

The C programming language has a set of functions implementing operations on strings in its standard library. Various operations, such as copying, concatenation, tokenization and searching are supported. For character strings, the standard library uses the convention that strings are null-terminated: a string of n characters is represented as an array of n + 1 elements, the last of which is a "NUL" character.
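
For instance, a minimal illustration of the null-terminated convention:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* The 5-character string "hello" occupies 6 array elements;
           the trailing '\0' (NUL) marks the end of the string. */
        char s[] = "hello";
        printf("characters: %zu, array elements: %zu\n", strlen(s), sizeof s);
        return 0;
    }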

References

  1. "Definition of CHARACTER". www.merriam-webster.com. Retrieved 2018-04-01.
  2. Dreyfus, Phillippe (1958). "System design of the Gamma 60". Managing Requirements Knowledge, International Workshop on, Los Angeles. New York. pp. 130–133. doi:10.1109/AFIPS.1958.32. […] Internal data code is used: Quantitative (numerical) data are coded in a 4-bit decimal code; qualitative (alpha-numerical) data are coded in a 6-bit alphanumerical code. The internal instruction code means that the instructions are coded in straight binary code.
    As to the internal information length, the information quantum is called a "catena," and it is composed of 24 bits representing either 6 decimal digits, or 4 alphanumerical characters. This quantum must contain a multiple of 4 and 6 bits to represent a whole number of decimal or alphanumeric characters. Twenty-four bits was found to be a good compromise between the minimum 12 bits, which would lead to a too-low transfer flow from a parallel readout core memory, and 36 bits or more, which was judged as too large an information quantum. The catena is to be considered as the equivalent of a character in variable word length machines, but it cannot be called so, as it may contain several characters. It is transferred in series to and from the main memory.
    Not wanting to call a "quantum" a word, or a set of characters a letter, (a word is a word, and a quantum is something else), a new word was made, and it was called a "catena." It is an English word and exists in Webster's although it does not in French. Webster's definition of the word catena is, "a connected series;" therefore, a 24-bit information item. The word catena will be used hereafter.
    The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. […]
  3. Blaauw, Gerrit Anne; Brooks Jr., Frederick Phillips; Buchholz, Werner (1962), "4: Natural Data Units" (PDF), in Buchholz, Werner (ed.), Planning a Computer System – Project Stretch, McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA., pp. 39–40, LCCN   61-10466, archived (PDF) from the original on 2017-04-03, retrieved 2017-04-03, […] Terms used here to describe the structure imposed by the machine design, in addition to bit , are listed below.
    Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit.)
    A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.)
    Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. […]
  4. "Terms And Abbreviations". MCS-4 Assembly Language Programming Manual - The INTELLEC 4 Microcomputer System Programming Manual (PDF) (Preliminary ed.). Santa Clara, California, USA: Intel Corporation. December 1973. pp. v, 2-6. MCS-030-1273-1. Archived (PDF) from the original on 2020-03-01. Retrieved 2020-03-02. […] Bit - The smallest unit of information which can be represented. (A bit may be in one of two states I 0 or 1). […] Byte - A group of 8 contiguous bits occupying a single memory location. […] Character - A group of 4 contiguous bits of data. […] (NB. This Intel 4004 manual uses the term character referring to 4-bit rather than 8-bit data entities. Intel switched to use the more common term nibble for 4-bit entities in their documentation for the succeeding processor 4040 in 1974 already.)
  5. Davis, Mark (2008-05-05). "Moving to Unicode 5.1". Google Blog. Retrieved 2008-09-28.
  6. 1 2 "§1.7 The C++ memory model / §5.3.3 Sizeof". ISO/IEC 14882:2011.
  7. "<limits.h>". pubs.opengroup.org. Retrieved 2018-04-01.
  8. "Glossary of Unicode Terms Code Point" . Retrieved 2019-05-14.
  9. "POSIX definition of Character".
  10. "POSIX strlen reference".
  11. "POSIX definition of Character Array".
  12. Goyvaerts, Jan. "Regexp Tutorial - Character Classes or Character Sets". www.regular-expressions.info. Retrieved 2018-04-01.