
ASCII chart from a pre-1972 printer manual
MIME / IANA: us-ascii
Alias(es): ISO-IR-006, [1] ANSI_X3.4-1968, ANSI_X3.4-1986, ISO_646.irv:1991, ISO646-US, us, IBM367, cp367 [2]
Language(s): English (made for, does not support all loanwords), Rotokas, Interlingua and Ido (and X-SAMPA)
Classification: ISO/IEC 646 series
Preceded by: ITA 2, FIELDATA
Succeeded by: ISO/IEC 8859, ISO/IEC 10646 (Unicode)

ASCII (/ˈæski/ ASS-kee), [3] :6 abbreviated from American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. Because of technical limitations of computer systems at the time it was invented, ASCII has just 128 code points, of which only 95 are printable characters, which severely limited its scope. All modern computer systems instead use Unicode, which has millions of code points, but the first 128 of these are the same as the ASCII set.


The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding. [2]

ASCII is one of the IEEE milestones.


ASCII was developed from telegraph code. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services.[ when? ] Work on the ASCII standard began in May 1961, with the first meeting of the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963, [4] [5] underwent a major revision during 1967, [6] [7] and experienced its most recent update during 1986. [8] Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists and added features for devices other than teleprinters. [ citation needed ]

The use of ASCII format for Network Interchange was described in 1969. [9] That document was formally elevated to an Internet Standard in 2015. [10]

Originally based on the (modern) English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart above. [11] Ninety-five of the encoded characters are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with Teletype machines; most of these are now obsolete, [12] although a few are still commonly used, such as the carriage return, line feed, and tab codes.

For example, lowercase i would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105.

Despite being an American standard, ASCII does not have a code point for the cent (¢). It also does not support English terms with diacritical marks such as résumé and jalapeño, or proper nouns with diacritical marks such as Beyoncé.


ASCII (1963). Control Pictures of equivalent controls are shown where they exist, or a grey dot otherwise.

The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI), [3] :211 and ultimately became the American National Standards Institute (ANSI).

With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, [5] [13] leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. [3] :66,245 There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. [3] :435 The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks [lower-alpha 1] [14] 6 and 7, [15] and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard. [16] The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. [17] Locating the lowercase letters in sticks [lower-alpha 1] [14] 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.

The X3 committee made other changes, including other new characters (the brace and vertical bar characters), [18] renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed). [3] :247–248 ASCII was subsequently updated as USAS X3.4-1967, [6] [19] then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986. [8] [20]

Revisions of the ASCII standard:

ASA X3.4-1963
USAS X3.4-1967
USAS X3.4-1968
ANSI X3.4-1977
ANSI X3.4-1986

In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first), [3] :249–253 [26] and how it should be recorded on perforated tape. They proposed a 9-track standard for magnetic tape, and attempted to deal with some punched card formats.

Design considerations

Bit width

The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1924, [27] [28] FIELDATA (1956[ citation needed ]), and early EBCDIC (1963), more than 64 codes were required for ASCII.

ITA2 was in turn based on the 5-bit telegraph code that Émile Baudot invented in 1870 and patented in 1874. [28]

The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code. [3] :215 §13.6,236 §4

The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. [3] :217 §c,236 §5 Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0. [29]

Internal organization

The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks [lower-alpha 1] [14] (32 positions) were reserved for control characters. [3] :220,236 §8,9 The "space" character had to come before graphics to make sorting easier, so it became position 20hex; [3] :237 §10 for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, [3] :228,237 §14 as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard. [3] :238 §18 The digits 0–9 are prefixed with 011, but the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward.

Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. [30] Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of 23456789- were "#$%_&'(). Early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but the 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick, [lower-alpha 1] [14] positions 1–5, corresponding to the digits 1–5 in the adjacent stick. [lower-alpha 1] [14] The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters.

Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that has become the de facto standard on computers following the IBM PC (1981), especially the Model M (1984), and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to the No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.

Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with the simple line characters \ | (in addition to the common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A. [3] :243

The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns. [3] :243–245

Character order

ASCII-code order is also called ASCIIbetical order. [31] Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:

All uppercase letters come before lowercase letters; for example, "Z" precedes "a"
Digits and many punctuation marks come before letters

An intermediate order converts uppercase letters to lowercase before comparing ASCII values.

Character groups

Control characters

ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters: codes originally intended not to represent printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape.

For example, character 10 represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. [32] Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.

The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example with the meaning of "delete".

Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually fell into disuse.

When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning but control-Q is replaced by a second control-S to resume output.

The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE respectively. [33]

Delete vs backspace

The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send a BS (backspace). Instead, there was a key marked RUB OUT that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. [34] Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus the DEL code was assigned to erase the previous character. [35] [36] Because of this, DEC video terminals (by default) sent the DEL code for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS code for the backspace key.

The Unix terminal driver could use only one code to erase the previous character; it could be set to BS or DEL, but not both. This resulted in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS code allowed control+H to be used for other purposes, such as the "help" prefix command in GNU Emacs. [37]


Many more of the control codes have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this interpretation has been co-opted and eventually changed.

In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence, typically in the form of a so-called "ANSI escape code" (or, more properly, a "Control Sequence Introducer") from ECMA-48 (1972) and its successors, beginning with ESC followed by a "[" (left-bracket) character. In contrast, an ESC sent from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.

End of line

The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while the typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line.

DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along, the convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system.

Until the introduction of PC DOS in 1981, IBM had no influence in this because their 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output on which the concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, [38] and Windows in turn inherited it from MS-DOS.

Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. [39] :357 Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as a line terminator; however, since Apple has now replaced these obsolete operating systems with the Unix-based macOS operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.

Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT. [40] The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. [41] [42] This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention. [43] [44]

End of file/stream

The PDP-6 monitor, [35] and its PDP-10 successor TOPS-10, [36] used control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file. [45] For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text code (ETX), also known as control-C, was inappropriate for a variety of reasons, while using Z as the control code to end a file is analogous to its position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX code to interrupt and halt a program via an input data stream, usually from a keyboard.

In C library and Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".

Control code chart

Binary   | Oct | Dec | Hex | Abbreviation (1963 → 1967) | Caret notation [lower-alpha 3] | C escape sequence [lower-alpha 4] | Name (1967)
000 0000 | 000 | 0   | 00  | NULL → NUL     | ^@ | \0 | Null
000 0001 | 001 | 1   | 01  | SOM → SOH      | ^A |    | Start of Heading
000 0010 | 002 | 2   | 02  | EOA → STX      | ^B |    | Start of Text
000 0011 | 003 | 3   | 03  | EOM → ETX      | ^C |    | End of Text
000 0100 | 004 | 4   | 04  | EOT            | ^D |    | End of Transmission
000 0101 | 005 | 5   | 05  | WRU → ENQ      | ^E |    | Enquiry
000 0110 | 006 | 6   | 06  | RU → ACK       | ^F |    | Acknowledgement
000 0111 | 007 | 7   | 07  | BELL → BEL     | ^G | \a | Bell
000 1000 | 010 | 8   | 08  | FE0 → BS       | ^H | \b | Backspace [lower-alpha 5] [lower-alpha 6]
000 1001 | 011 | 9   | 09  | HT/SK → HT     | ^I | \t | Horizontal Tab [lower-alpha 7]
000 1010 | 012 | 10  | 0A  | LF             | ^J | \n | Line Feed
000 1011 | 013 | 11  | 0B  | VTAB → VT      | ^K | \v | Vertical Tab
000 1100 | 014 | 12  | 0C  | FF             | ^L | \f | Form Feed
000 1101 | 015 | 13  | 0D  | CR             | ^M | \r | Carriage Return [lower-alpha 8]
000 1110 | 016 | 14  | 0E  | SO             | ^N |    | Shift Out
000 1111 | 017 | 15  | 0F  | SI             | ^O |    | Shift In
001 0000 | 020 | 16  | 10  | DC0 → DLE      | ^P |    | Data Link Escape
001 0001 | 021 | 17  | 11  | DC1            | ^Q |    | Device Control 1 (often XON)
001 0010 | 022 | 18  | 12  | DC2            | ^R |    | Device Control 2
001 0011 | 023 | 19  | 13  | DC3            | ^S |    | Device Control 3 (often XOFF)
001 0100 | 024 | 20  | 14  | DC4            | ^T |    | Device Control 4
001 0101 | 025 | 21  | 15  | ERR → NAK      | ^U |    | Negative Acknowledgement
001 0110 | 026 | 22  | 16  | SYNC → SYN     | ^V |    | Synchronous Idle
001 0111 | 027 | 23  | 17  | LEM → ETB      | ^W |    | End of Transmission Block
001 1000 | 030 | 24  | 18  | S0 → CAN       | ^X |    | Cancel
001 1001 | 031 | 25  | 19  | S1 → EM        | ^Y |    | End of Medium
001 1010 | 032 | 26  | 1A  | S2 → SS → SUB  | ^Z |    | Substitute
001 1011 | 033 | 27  | 1B  | S3 → ESC       | ^[ | \e [lower-alpha 9] | Escape [lower-alpha 10]
001 1100 | 034 | 28  | 1C  | S4 → FS        | ^\ |    | File Separator
001 1101 | 035 | 29  | 1D  | S5 → GS        | ^] |    | Group Separator
001 1110 | 036 | 30  | 1E  | S6 → RS        | ^^ [lower-alpha 11] | | Record Separator
001 1111 | 037 | 31  | 1F  | S7 → US        | ^_ |    | Unit Separator
111 1111 | 177 | 127 | 7F  | DEL            | ^? |    | Delete [lower-alpha 12] [lower-alpha 6]

Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.

Printable characters

Codes 20hex to 7Ehex, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total. [lower-alpha 13]

Code 20hex, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character) [3] :223 [46] it is listed in the table below instead of in the previous section.

Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart. Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex). [5] [47]

Binary   | Oct | Dec | Hex | Glyph
010 0000 | 040 | 32  | 20  | space
010 0001 | 041 | 33  | 21  | !
010 0010 | 042 | 34  | 22  | "
010 0011 | 043 | 35  | 23  | #
010 0100 | 044 | 36  | 24  | $
010 0101 | 045 | 37  | 25  | %
010 0110 | 046 | 38  | 26  | &
010 0111 | 047 | 39  | 27  | '
010 1000 | 050 | 40  | 28  | (
010 1001 | 051 | 41  | 29  | )
010 1010 | 052 | 42  | 2A  | *
010 1011 | 053 | 43  | 2B  | +
010 1100 | 054 | 44  | 2C  | ,
010 1101 | 055 | 45  | 2D  | -
010 1110 | 056 | 46  | 2E  | .
010 1111 | 057 | 47  | 2F  | /
011 0000 | 060 | 48  | 30  | 0
011 0001 | 061 | 49  | 31  | 1
011 0010 | 062 | 50  | 32  | 2
011 0011 | 063 | 51  | 33  | 3
011 0100 | 064 | 52  | 34  | 4
011 0101 | 065 | 53  | 35  | 5
011 0110 | 066 | 54  | 36  | 6
011 0111 | 067 | 55  | 37  | 7
011 1000 | 070 | 56  | 38  | 8
011 1001 | 071 | 57  | 39  | 9
011 1010 | 072 | 58  | 3A  | :
011 1011 | 073 | 59  | 3B  | ;
011 1100 | 074 | 60  | 3C  | <
011 1101 | 075 | 61  | 3D  | =
011 1110 | 076 | 62  | 3E  | >
011 1111 | 077 | 63  | 3F  | ?
100 0000 | 100 | 64  | 40  | @ (1965 draft: `)
100 0001 | 101 | 65  | 41  | A
100 0010 | 102 | 66  | 42  | B
100 0011 | 103 | 67  | 43  | C
100 0100 | 104 | 68  | 44  | D
100 0101 | 105 | 69  | 45  | E
100 0110 | 106 | 70  | 46  | F
100 0111 | 107 | 71  | 47  | G
100 1000 | 110 | 72  | 48  | H
100 1001 | 111 | 73  | 49  | I
100 1010 | 112 | 74  | 4A  | J
100 1011 | 113 | 75  | 4B  | K
100 1100 | 114 | 76  | 4C  | L
100 1101 | 115 | 77  | 4D  | M
100 1110 | 116 | 78  | 4E  | N
100 1111 | 117 | 79  | 4F  | O
101 0000 | 120 | 80  | 50  | P
101 0001 | 121 | 81  | 51  | Q
101 0010 | 122 | 82  | 52  | R
101 0011 | 123 | 83  | 53  | S
101 0100 | 124 | 84  | 54  | T
101 0101 | 125 | 85  | 55  | U
101 0110 | 126 | 86  | 56  | V
101 0111 | 127 | 87  | 57  | W
101 1000 | 130 | 88  | 58  | X
101 1001 | 131 | 89  | 59  | Y
101 1010 | 132 | 90  | 5A  | Z
101 1011 | 133 | 91  | 5B  | [
101 1100 | 134 | 92  | 5C  | \ (1965 draft: ~)
101 1101 | 135 | 93  | 5D  | ]
101 1110 | 136 | 94  | 5E  | ^ (1963: ↑)
101 1111 | 137 | 95  | 5F  | _ (1963: ←)
110 0000 | 140 | 96  | 60  | ` (1965 draft: @)
110 0001 | 141 | 97  | 61  | a
110 0010 | 142 | 98  | 62  | b
110 0011 | 143 | 99  | 63  | c
110 0100 | 144 | 100 | 64  | d
110 0101 | 145 | 101 | 65  | e
110 0110 | 146 | 102 | 66  | f
110 0111 | 147 | 103 | 67  | g
110 1000 | 150 | 104 | 68  | h
110 1001 | 151 | 105 | 69  | i
110 1010 | 152 | 106 | 6A  | j
110 1011 | 153 | 107 | 6B  | k
110 1100 | 154 | 108 | 6C  | l
110 1101 | 155 | 109 | 6D  | m
110 1110 | 156 | 110 | 6E  | n
110 1111 | 157 | 111 | 6F  | o
111 0000 | 160 | 112 | 70  | p
111 0001 | 161 | 113 | 71  | q
111 0010 | 162 | 114 | 72  | r
111 0011 | 163 | 115 | 73  | s
111 0100 | 164 | 116 | 74  | t
111 0101 | 165 | 117 | 75  | u
111 0110 | 166 | 118 | 76  | v
111 0111 | 167 | 119 | 77  | w
111 1000 | 170 | 120 | 78  | x
111 1001 | 171 | 121 | 79  | y
111 1010 | 172 | 122 | 7A  | z
111 1011 | 173 | 123 | 7B  | {
111 1100 | 174 | 124 | 7C  | | (1963: ACK; 1965 draft: ¬)
111 1101 | 175 | 125 | 7D  | }
111 1110 | 176 | 126 | 7E  | ~ (1963: ESC; 1965 draft: |)

Character set

ASCII (1977/1986)
2x  SP  ! " # $ % & ' ( ) * + , - . /
3x 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
4x @ A B C D E F G H I J K L M N O
5x P Q R S T U V W X Y Z [ \ ] ^ _
6x ` a b c d e f g h i j k l m n o
7x p q r s t u v w x y z { | } ~ DEL
In the original chart, highlighting marks characters that changed from the 1963 version, or from both the 1963 version and the 1965 draft.


ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. [4] His British colleague Hugh McGregor Ross helped to popularize this work; according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe". [48] Because of his extensive work on ASCII, Bemer has been called "the father of ASCII". [49]

On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating: [50] [51] [52]

I have also approved recommendations of the Secretary of Commerce [ Luther H. Hodges ] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations. All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.

ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII. [53] [54] [55]

Variants and derivations

As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.

7-bit codes

From early in its development, [56] ASCII was intended to be just one of several national variants of an international character code standard.

Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£); e.g. with code page 1104. Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters.

Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).

These national variants would share most characters in common, but assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967 [57] caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.

ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and, therefore, which character a code represented, and in general, text-processing systems could cope with only one variant anyway.

Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and, thus, read, something such as

ä aÄiÜ = 'Ön'; ü

instead of

{ a[i] = '\n'; }

C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".

In Japan and Korea, still as of the 2020s, a variation of ASCII is used, in which the backslash (5C hex) is rendered as ¥ (a Yen sign, in Japan) or ₩ (a Won sign, in Korea). This means that, for example, the file path C:\Users\Smith is shown as C:¥Users¥Smith (in Japan) or C:₩Users₩Smith (in Korea).

8-bit codes

Eventually, as 8-, 16-, and 32-bit (and later 64-bit) computers began to replace 12-, 18-, and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters.

Such encodings include ISCII (India) and VISCII (Vietnam). Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard.
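The "true extension" property is easy to verify with any modern codec library; a short sketch in Python, using codecs that ship with its standard library:

```python
# Extended-ASCII encodings such as ISO-8859-1 ("latin-1") and IBM code
# page 437 keep the first 128 code points identical to 7-bit ASCII...
ascii_bytes = bytes(range(128))
assert (ascii_bytes.decode("ascii")
        == ascii_bytes.decode("latin-1")
        == ascii_bytes.decode("cp437"))

# ...but assign different characters to the upper half (0x80-0xFF):
print(b"\xe4".decode("latin-1"))  # -> ä
print(b"\xe4".decode("cp437"))    # -> Σ
```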

Most early home computer systems developed their own 8-bit character sets containing line-drawing and game glyphs, and often filled in some or all of the control characters from 0 to 31 with more graphics. Kaypro CP/M computers used the "upper" 128 characters for the Greek alphabet.

The PETSCII code that Commodore International used for its 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963 instead of the more common ASCII-1967, which is found, for example, on the ZX Spectrum. Atari 8-bit computers and Galaksija computers also used ASCII variants.

The IBM PC defined code page 437, which replaced the control characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. The Macintosh defined Mac OS Roman, and PostScript also defined a character set; both contained international letters and typographic punctuation marks instead of graphics, more like modern character sets.

The ISO/IEC 8859 standard (derived from the DEC-MCS) finally provided a standard that most systems copied (at least as accurately as they copied ASCII, but with many substitutions). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encodings until 2008 when UTF-8 became more common. [54]
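The mislabeling of Windows-1252 as ISO-8859-1 matters only in the 0x80–9F range, which is the only place the two encodings differ; a small Python sketch:

```python
# ISO-8859-1 assigns the C1 control codes to 0x80-0x9F, while
# Windows-1252 places printable characters there, e.g. curly quotation
# marks at 0x93 and 0x94. Everywhere else the two encodings agree.
data = b"\x93quoted\x94"
print(data.decode("cp1252"))                       # -> “quoted”
assert data.decode("cp1252") == "\u201cquoted\u201d"
assert data.decode("latin-1") == "\x93quoted\x94"  # C1 controls, not quotes

# Outside 0x80-0x9F the two encodings decode identically:
rest = bytes(i for i in range(256) if not 0x80 <= i <= 0x9F)
assert rest.decode("cp1252") == rest.decode("latin-1")
```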

ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system. [58]


Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16-, or 32-bit binary formats, called UTF-8, UTF-16, and UTF-32, respectively).

ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged. [59]
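The backward-compatibility property can be demonstrated directly; a minimal Python sketch:

```python
# A pure-ASCII string encodes to identical bytes under ASCII and UTF-8,
# so a legacy ASCII file is already a valid UTF-8 file.
text = "Hello, world"
assert text.encode("ascii") == text.encode("utf-8")

# Non-ASCII characters become multi-byte sequences whose bytes all have
# the high bit set, so they can never be mistaken for 7-bit ASCII data.
encoded = "é".encode("utf-8")
assert encoded == b"\xc3\xa9"
assert all(byte >= 0x80 for byte in encoded)
```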

See also


  1. The 128 characters of the 7-bit ASCII character set are divided into eight 16-character groups called sticks 0–7, associated with the three most-significant bits. [14] Depending on the horizontal or vertical representation of the character map, sticks correspond with either table rows or columns.
  2. The Unicode characters from the "Control Pictures" area U+2400 to U+2421 are reserved for representing control characters when it is necessary to print or display them rather than have them perform their intended function. Some browsers may not display these properly.
  3. Caret notation is often used to represent control characters on a terminal. On most text terminals, holding down the Ctrl key while typing the second character will type the control character. Sometimes the shift key is not needed, for instance ^@ may be typable with just Ctrl and 2.
  4. Character escape sequences in the C programming language and many other languages influenced by it, such as Java and Perl (though not all implementations necessarily support all escape sequences).
  5. The Backspace character can also be entered by pressing the ← Backspace key on some systems.
  6. The ambiguity of Backspace is due to early terminals designed assuming the main use of the keyboard would be to manually punch paper tape while not connected to a computer. To delete the previous character, one had to back up the paper tape punch, which for mechanical and simplicity reasons was a button on the punch itself and not the keyboard, then type the rubout character. They therefore placed a key producing rubout at the location used on typewriters for backspace. When systems used these terminals and provided command-line editing, they had to use the "rubout" code to perform a backspace, and often did not interpret the backspace character (they might echo "^H" for backspace). Other terminals not designed for paper tape made the key at this location produce Backspace, and systems designed for these used that character to back up. Since the delete code often produced a backspace effect, this also forced terminal manufacturers to make any Delete key produce something other than the Delete character.
  7. The Tab character can also be entered by pressing the Tab ↹ key on most systems.
  8. The Carriage Return character can also be entered by pressing the ↵ Enter or Return key on most systems.
  9. The \e escape sequence is not part of ISO C and many other language specifications. However, it is understood by several compilers, including GCC.
  10. The Escape character can also be entered by pressing the Esc key on some systems.
  11. ^^ means Ctrl+^ (pressing the "Ctrl" and caret keys).
  12. The Delete character can sometimes be entered by pressing the ← Backspace key on some systems.
  13. Printed out, the characters are:
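The caret notation mentioned in the notes above reflects simple bit arithmetic: on most terminals, holding Ctrl clears all but the low five bits of the pressed key's code. A sketch of that convention (the `ctrl` helper is illustrative, not part of any standard API):

```python
def ctrl(char: str) -> int:
    """Control code produced by Ctrl+<char> under the usual convention."""
    return ord(char) & 0x1F  # keep only the low five bits

assert ctrl("H") == 8    # ^H = Backspace
assert ctrl("I") == 9    # ^I = Tab
assert ctrl("[") == 27   # ^[ = Escape
assert ctrl("@") == 0    # ^@ = NUL
assert ctrl("^") == 30   # ^^ as in note 11
```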

Related Research Articles

Character encoding: Using numbers to represent text characters

Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map".

In computing and telecommunication, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly printing, printable, or graphic characters, except perhaps for the "space" character.

Extended Binary Coded Decimal Interchange Code is an eight-bit character encoding used mainly on IBM mainframe and IBM midrange computer operating systems. It descended from the code used with punched cards and the corresponding six-bit binary-coded decimal code used with most of IBM's computer peripherals of the late 1950s and early 1960s. It is supported by various non-IBM platforms, such as Fujitsu-Siemens' BS2000/OSD, OS-IV, MSP, and MSP-EX, the SDS Sigma series, Unisys VS/9, Unisys MCP and ICL VME.

UTF-8 is a variable-length character encoding used for electronic communication. Defined by the Unicode Standard, the name is derived from Unicode Transformation Format – 8-bit.

The byte order mark (BOM) is a particular usage of the special Unicode character U+FEFF BYTE ORDER MARK, whose appearance as a magic number at the start of a text stream can signal several things to a program reading the text:

ISO/IEC 646 is the name of a set of ISO/IEC standards, described as Information technology — ISO 7-bit coded character set for information interchange and developed in cooperation with ASCII at least since 1964. Since its first edition in 1967 it has specified a 7-bit character code from which several national standards are derived.

A text file is a kind of computer file that is structured as a sequence of lines of electronic text. A text file exists stored as data within a computer file system. In operating systems such as CP/M and MS-DOS, where the operating system does not keep track of the file size in bytes, the end of a text file is denoted by placing one or more special characters, known as an end-of-file marker, as padding after the last line in a text file. On modern operating systems such as Microsoft Windows and Unix-like systems, text files do not contain any special EOF character, because file systems on those operating systems keep track of the file size in bytes. Most text files need to have end-of-line delimiters, which are done in a few different ways depending on operating system. Some operating systems with record-orientated file systems may not use new line delimiters and will primarily store text files with lines separated as fixed or variable length records.

Newline: Special characters in computing signifying the end of a line of text

Newline is a control character or sequence of control characters in character encoding specifications such as ASCII, EBCDIC, Unicode, etc. This character, or a sequence of characters, is used to signify the end of a line of text and the start of a new one.

The null character is a control character with the value zero. It is present in many character sets, including those defined by the Baudot and ITA2 codes, ISO/IEC 646, the C0 control code, the Universal Coded Character Set, and EBCDIC. It is available in nearly all mainstream programming languages. It is often abbreviated as NUL. In 8-bit codes, it is known as a null byte.

In computing, JIS encoding refers to several Japanese Industrial Standards for encoding the Japanese language. Strictly speaking, the term means either:

ISO/IEC 2022, Information technology — Character code structure and extension techniques, is an ISO/IEC standard in the field of character encoding. Originating in 1971, it was most recently revised in 1994.

VISCII is an unofficially-defined modified ASCII character encoding for using the Vietnamese language with computers. It should not be confused with the similarly-named officially registered VSCII encoding. VISCII keeps the 95 printable characters of ASCII unmodified, but it replaces 6 of the 33 control characters with printable characters. It adds 128 precomposed characters. Unicode and the Windows-1258 code page are now used for virtually all Vietnamese computer data, but legacy VSCII and VISCII files may need conversion.

A wide character is a computer character datatype that generally has a size greater than the traditional 8-bit character. The increased datatype size allows for the use of larger coded character sets.

A variable-width encoding is a type of character encoding scheme in which codes of differing lengths are used to encode a character set for representation, usually in a computer. Most common variable-width encodings are multibyte encodings, which use varying numbers of bytes (octets) to encode different characters. (Some authors, notably in Microsoft documentation, use the term multibyte character set, which is a misnomer, because representation size is an attribute of the encoding, not of the character set.)

The C0 and C1 control code or control character sets define control codes for use in text by computer systems that use ASCII and derivatives of ASCII. The codes represent additional information about the text, such as the position of a cursor, an instruction to start a new line, or a message that the text has been received.

Windows code pages are sets of characters or code pages used in Microsoft Windows from the 1980s and 1990s. Windows code pages were gradually superseded when Unicode was implemented in Windows, although they are still supported both within Windows and on other platforms, and still apply when Alt code shortcuts are used.

The delete control character is the last character in the ASCII repertoire, with the code 127. In display it is supposed to do nothing; it was designed so that an incorrect character on punched paper tape could be erased by punching out all seven holes. It is denoted as ^? in caret notation and is U+007F in Unicode.

Extended ASCII: Nickname for 8-bit ASCII-derived character sets

Extended ASCII means an eight-bit character encoding that includes the seven-bit ASCII characters, plus additional characters. Using the term "extended ASCII" is sometimes criticized, because it can be mistakenly interpreted to mean that the ASCII standard has been updated to include more characters, or that the term unambiguously identifies a single encoding, neither of which is the case.

ISO 2047 is a standard for the graphical representation of control characters, for debugging purposes such as in the character generator of a computer terminal; it also establishes a two-letter abbreviation for each control character. It started out as ANSI X3.32-1973 (American National Standard – Graphic Representation of the Control Characters of American National Standard Code for Information Interchange) in 1973 and became an ISO standard in 1975. RFC 1345 "Character Mnemonics & Character Sets" cites the ISO 2047 two-letter abbreviations for the control characters. Corresponding standards are ECMA-17 in Europe, GB/T 3911-1983 in China, and KS X 1010 in Korea. In Japan it was enacted as JIS X 0209:1976, "Graphical representation of control characters for information interchange", which was withdrawn on January 20, 2010.

Caret is the name used familiarly for the character ^, provided on most QWERTY keyboards by typing ⇧ Shift+6. The symbol has a variety of uses in programming and mathematics. The name "caret" arose from its visual similarity to the original proofreader's caret, a mark used in proofreading to indicate where a punctuation mark, word, or phrase should be inserted into a document. The formal ASCII standard (X3.4-1977) calls it a "circumflex".


  1. ANSI (1975-12-01). ISO-IR-6: ASCII Graphic character set (PDF). ITSCJ/IPSJ. Archived from the original (PDF) on 2022-03-10.
  2. "Character Sets". Internet Assigned Numbers Authority (IANA). 2007-05-14. Retrieved 2019-08-25.
  3. Mackenzie, Charles E. (1980). Coded Character Sets, History and Development (PDF). The Systems Programming Series (1 ed.). Addison-Wesley Publishing Company, Inc. pp. 6, 66, 211, 215, 217, 220, 223, 228, 236–238, 243–245, 247–253, 423, 425–428, 435–439. ISBN 978-0-201-14460-4. LCCN 77-90165. Archived (PDF) from the original on May 26, 2016. Retrieved August 25, 2019.
  4. Brandel, Mary (1999-07-06). "1963: The Debut of ASCII". CNN. Archived from the original on 2013-06-17. Retrieved 2008-04-14.
  5. "American Standard Code for Information Interchange, ASA X3.4-1963". American Standards Association (ASA). 1963-06-17. Retrieved 2020-06-06.
  6. USA Standard Code for Information Interchange, USAS X3.4-1967 (Technical report). United States of America Standards Institute (USASI). 1967-07-07.
  7. Jennings, Thomas Daniel (2016-04-20) [1999]. "An annotated history of some character codes or ASCII: American Standard Code for Information Infiltration". Sensitive Research (SR-IX). Retrieved 2020-03-08.
  8. American National Standard for Information Systems — Coded Character Sets — 7-Bit American National Standard Code for Information Interchange (7-Bit ASCII), ANSI X3.4-1986 (Technical report). American National Standards Institute (ANSI). 1986-03-26.
  9. Vint Cerf (1969-10-16). ASCII format for Network Interchange. IETF. doi: 10.17487/RFC0020 . RFC 20.
  10. Barry Leiba (2015-01-12). "Correct classification of RFC 20 (ASCII format) to Internet Standard". IETF.
  11. Shirley, R. (August 2007). Internet Security Glossary, Version 2. doi: 10.17487/RFC4949 . RFC 4949 . Retrieved 2016-06-13.
  12. Maini, Anil Kumar (2007). Digital Electronics: Principles, Devices and Applications. John Wiley and Sons. p. 28. ISBN   978-0-470-03214-5. In addition, it defines codes for 33 nonprinting, mostly obsolete control characters that affect how the text is processed.
  13. Bukstein, Ed (July 1964). "Binary Computer Codes and ASCII". Electronics World . 72 (1): 28–29. Archived from the original on 2016-03-03. Retrieved 2016-05-22.
  14. Bemer, Robert William (1980). "Chapter 1: Inside ASCII" (PDF). General Purpose Software. Best of Interface Age. Vol. 2. Portland, OR, USA: dilithium Press. pp. 1–50. ISBN 978-0-918398-37-6. LCCN 79-67462. Archived from the original on 2016-08-27. Retrieved 2016-08-27, from:
  15. Brief Report: Meeting of CCITT Working Party on the New Telegraph Alphabet, May 13–15, 1963.
  16. Report of ISO/TC/97/SC 2 – Meeting of October 29–31, 1963.
  17. Report on Task Group X3.2.4, June 11, 1963, Pentagon Building, Washington, DC.
  18. Report of Meeting No. 8, Task Group X3.2.4, December 17 and 18, 1963
  19. Winter, Dik T. (2010) [2003]. "US and International standards: ASCII". Archived from the original on 2010-01-16.
  20. Salste, Tuomas (January 2016). "7-bit character sets: Revisions of ASCII". Aivosto Oy. urn:nbn:fi-fe201201011004. Archived from the original on 2016-06-13. Retrieved 2016-06-13.
  21. "Information". Scientific American (special edition). 215 (3). September 1966. JSTOR   e24931041.
  22. Korpela, Jukka K. (2014-03-14) [2006-06-07]. Unicode Explained – Internationalize Documents, Programs, and Web Sites (2nd release of 1st ed.). O'Reilly Media, Inc. p. 118. ISBN   978-0-596-10121-3.
  23. ANSI INCITS 4-1986 (R2007): American National Standard for Information Systems – Coded Character Sets – 7-Bit American National Standard Code for Information Interchange (7-Bit ASCII), 2007 [1986]
  24. "INCITS 4-1986[R2012]: Information Systems - Coded Character Sets - 7-Bit American National Standard Code for Information Interchange (7-Bit ASCII)". 2012-06-15. Archived from the original on 2020-02-28. Retrieved 2020-02-28.
  25. "INCITS 4-1986[R2017]: Information Systems - Coded Character Sets - 7-Bit American National Standard Code for Information Interchange (7-Bit ASCII)". 2017-11-02 [2017-06-09]. Archived from the original on 2020-02-28. Retrieved 2020-02-28.
  26. Bit Sequencing of the American National Standard Code for Information Interchange in Serial-by-Bit Data Transmission, American National Standards Institute (ANSI), 1966, X3.15-1966
  27. "BruXy: Radio Teletype communication". 2005-10-10. Archived from the original on 2016-04-12. Retrieved 2016-05-09. The transmitted code use International Telegraph Alphabet No. 2 (ITA-2) which was introduced by CCITT in 1924.
  28. Smith, Gil (2001). "Teletype Communication Codes" (PDF). Archived (PDF) from the original on 2008-08-20. Retrieved 2008-07-11.
  29. Sawyer, Stanley A.; Krantz, Steven George (1995). A TeX Primer for Scientists. CRC Press. p. 13. ISBN   978-0-8493-7159-2. Archived from the original on 2016-12-22. Retrieved 2016-10-29.
  30. Savard, John J. G. "Computer Keyboards". Archived from the original on 2014-09-24. Retrieved 2014-08-24.
  31. "ASCIIbetical definition". PC Magazine . Archived from the original on 2013-03-09. Retrieved 2008-04-14.
  32. Resnick, P. (April 2001). Resnick, P (ed.). Internet Message Format. doi: 10.17487/RFC2822 . RFC 2822 . Retrieved 2016-06-13. (NB. NO-WS-CTL.)
  33. McConnell, Robert; Haynes, James; Warren, Richard. "Understanding ASCII Codes". Archived from the original on 2014-02-27. Retrieved 2014-05-11.
  34. Barry Margolin (2014-05-29). "Re: editor and word processor history (was: Re: RTF for emacs)". help-gnu-emacs (Mailing list). Archived from the original on 2014-07-14. Retrieved 2014-07-11.
  35. "PDP-6 Multiprogramming System Manual" (PDF). Digital Equipment Corporation (DEC). 1965. p. 43. Archived (PDF) from the original on 2014-07-14. Retrieved 2014-07-10.
  36. "PDP-10 Reference Handbook, Book 3, Communicating with the Monitor" (PDF). Digital Equipment Corporation (DEC). 1969. p. 5-5. Archived (PDF) from the original on 2011-11-15. Retrieved 2014-07-10.
  37. "Help - GNU Emacs Manual". Archived from the original on 2018-07-11. Retrieved 2018-07-11.
  38. Tim Paterson (2007-08-08). "Is DOS a Rip-Off of CP/M?". DosMan Drivel. Archived from the original on 2018-04-20. Retrieved 2018-04-19.
  39. Ossanna, J. F.; Saltzer, J. H. (November 17–19, 1970). "Technical and human engineering problems in connecting terminals to a time-sharing system" (PDF). Proceedings of the November 17–19, 1970, Fall Joint Computer Conference (FJCC). AFIPS Press. pp. 355–362. Archived (PDF) from the original on 2012-08-19. Retrieved 2013-01-29. Using a "new-line" function (combined carriage-return and line-feed) is simpler for both man and machine than requiring both functions for starting a new line; the American National Standard X3.4-1968 permits the line-feed code to carry the new-line meaning.
  40. O'Sullivan, T. (1971-05-19). TELNET Protocol. Internet Engineering Task Force (IETF). pp. 4–5. doi: 10.17487/RFC0158 . RFC 158 . Retrieved 2013-01-28.
  41. Neigus, Nancy J. (1973-08-12). File Transfer Protocol. Internet Engineering Task Force (IETF). doi: 10.17487/RFC0542 . RFC 542 . Retrieved 2013-01-28.
  42. Postel, Jon (June 1980). File Transfer Protocol. Internet Engineering Task Force (IETF). doi: 10.17487/RFC0765 . RFC 765 . Retrieved 2013-01-28.
  43. "EOL translation plan for Mercurial". Mercurial. Archived from the original on 2016-06-16. Retrieved 2017-06-24.
  44. Bernstein, Daniel J. "Bare LFs in SMTP". Archived from the original on 2011-10-29. Retrieved 2013-01-28.
  45. CP/M 1.4 Interface Guide (PDF). Digital Research. 1978. p. 10. Archived (PDF) from the original on 2019-05-29. Retrieved 2017-10-07.
  46. Cerf, Vinton Gray (1969-10-16). ASCII format for Network Interchange. Network Working Group. doi: 10.17487/RFC0020 . RFC 20 . Retrieved 2016-06-13. (NB. Almost identical wording to USAS X3.4-1968 except for the intro.)
  47. Haynes, Jim (2015-01-13). "First-Hand: Chad is Our Most Important Product: An Engineer's Memory of Teletype Corporation". Engineering and Technology History Wiki (ETHW). Archived from the original on 2016-10-31. Retrieved 2016-10-31. There was the change from 1961 ASCII to 1968 ASCII. Some computer languages used characters in 1961 ASCII such as up arrow and left arrow. These characters disappeared from 1968 ASCII. We worked with Fred Mocking, who by now was in Sales at Teletype, on a type cylinder that would compromise the changing characters so that the meanings of 1961 ASCII were not totally lost. The underscore character was made rather wedge-shaped so it could also serve as a left arrow.
  48. Bemer, Robert William. "Bemer meets Europe (Computer Standards) – Computer History Vignettes". Archived from the original on 2013-10-17. Retrieved 2008-04-14. (NB. Bemer was employed at IBM at that time.)
  49. "Robert William Bemer: Biography". 2013-03-09. Archived from the original on 2016-06-16.
  50. Johnson, Lyndon Baines (1968-03-11). "Memorandum Approving the Adoption by the Federal Government of a Standard Code for Information Interchange". The American Presidency Project. Archived from the original on 2007-09-14. Retrieved 2008-04-14.
  51. Richard S. Shuford (1996-12-20). "Re: Early history of ASCII?". Newsgroup:  alt.folklore.computers. Usenet:
  52. Folts, Harold C.; Karp, Harry, eds. (1982-02-01). Compilation of Data Communications Standards (2nd revised ed.). McGraw-Hill Inc. ISBN   978-0-07-021457-6.
  53. Dubost, Karl (2008-05-06). "UTF-8 Growth on the Web". W3C Blog. World Wide Web Consortium. Archived from the original on 2016-06-16. Retrieved 2010-08-15.
  54. Davis, Mark (2008-05-05). "Moving to Unicode 5.1". Official Google Blog. Archived from the original on 2016-06-16. Retrieved 2010-08-15.
  55. Davis, Mark (2010-01-28). "Unicode nearing 50% of the web". Official Google Blog. Archived from the original on 2016-06-16. Retrieved 2010-08-15.
  56. "Specific Criteria", attachment to memo from R. W. Reach, "X3-2 Meeting – September 14 and 15", September 18, 1961
  57. Maréchal, R. (1967-12-22), ISO/TC 97 – Computers and Information Processing: Acceptance of Draft ISO Recommendation No. 1052
  58. The Unicode Consortium (2006-10-27). "Chapter 13: Special Areas and Format Characters" (PDF). In Allen, Julie D. (ed.). The Unicode standard, Version 5.0. Upper Saddle River, New Jersey, US: Addison-Wesley Professional. p. 314. ISBN   978-0-321-48091-0. Archived (PDF) from the original on 2022-10-09. Retrieved 2015-03-13.
  59. "utf-8(7) – Linux manual page". 2014-02-26. Archived from the original on 2014-04-22. Retrieved 2014-04-21.

Further reading