Specials (Unicode block)

Specials
Range: U+FFF0..U+FFFF (16 code points)
Plane: BMP
Scripts: Common
Assigned: 5 code points
Unused: 9 reserved code points, 2 non-characters
Unicode version history:
1.0.0 (1991): 1 (+1)
2.1 (1998): 2 (+1)
3.0 (1999): 5 (+3)
Code chart
Note: [1] [2]

Specials is a short Unicode block of characters allocated at the very end of the Basic Multilingual Plane, at U+FFF0..U+FFFF. Of these 16 code points, five have been assigned since Unicode 3.0:

U+FFF9 INTERLINEAR ANNOTATION ANCHOR, marks the start of annotated text
U+FFFA INTERLINEAR ANNOTATION SEPARATOR, marks the start of the annotating character(s)
U+FFFB INTERLINEAR ANNOTATION TERMINATOR, marks the end of the annotation block
U+FFFC OBJECT REPLACEMENT CHARACTER, a placeholder in the text for another unspecified object
U+FFFD REPLACEMENT CHARACTER, used to replace an unknown, unrecognized, or unrepresentable character


U+FFFE and U+FFFF are not unassigned in the usual sense; they are guaranteed never to be Unicode characters at all. They can be used to guess a text's encoding scheme, since any text containing them is by definition not a correctly encoded Unicode text. Unicode's U+FEFF BYTE ORDER MARK character can be inserted at the beginning of a Unicode text to signal its endianness: a program reading such a text and encountering 0xFFFE would then know that it should switch the byte order for all the following characters.
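To make the byte-order check concrete, here is a minimal Python sketch (the function name guess_utf16_byte_order is illustrative, not a standard API): it inspects the first two bytes of a UTF-16 stream and reports what the byte order mark implies.

    def guess_utf16_byte_order(data: bytes) -> str:
        # U+FEFF stored little-endian appears as the bytes FF FE, and stored
        # big-endian as FE FF. A reader that instead decodes the first code
        # unit as the noncharacter U+FFFE knows it has the byte order wrong,
        # because U+FFFE can never be a real character.
        if data[:2] == b"\xff\xfe":
            return "little-endian"
        if data[:2] == b"\xfe\xff":
            return "big-endian"
        return "no byte order mark found"

    # Python's "utf-16" codec prepends a BOM (little-endian on most builds).
    print(guess_utf16_byte_order("für".encode("utf-16")))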

Its block name in Unicode 1.0 was Special. [3]

Replacement character

The replacement character � (often displayed as a black rhombus with a white question mark) is a symbol found in the Unicode standard at code point U+FFFD in the Specials block. It is used to indicate problems when a system is unable to decode a stream of data into a correct symbol. [4] It is usually seen when the data is invalid and does not match any character:

Consider a text file containing the German word für (meaning 'for') encoded in ISO 8859-1 (0x66 0xFC 0x72). This file is now opened with a text editor that assumes the input is UTF-8. The first and third bytes are valid UTF-8 encodings of ASCII, but the second byte (0xFC) is not valid in UTF-8. A text editor could replace this byte with the replacement character to produce a valid string of Unicode code points for display, so the user sees "f�r".
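This behaviour can be reproduced with Python's built-in codecs, which substitute U+FFFD when asked to replace undecodable bytes (a minimal sketch of the scenario above):

    # The German word "für" encoded as ISO 8859-1 gives the bytes 0x66 0xFC 0x72.
    data = "für".encode("latin-1")
    # Decoding those bytes as UTF-8 with the "replace" error handler yields
    # 'f\ufffdr', which is displayed as "f�r".
    print(data.decode("utf-8", errors="replace"))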

A poorly implemented text editor might save the replacement character when writing the file back out; the data in the file will then become 0x66 0xEF 0xBF 0xBD 0x72. If the file is then re-opened using ISO 8859-1, it displays "fï¿½r" (this is called mojibake). Since the replacement is the same for all errors, it is impossible to recover the original character. A better (but harder to implement) design is to preserve the original bytes, including the error, and only convert to the replacement character when displaying the text. This allows the text editor to save the original byte sequence while still showing the error indicator to the user.
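The two approaches can be sketched in Python; the 'surrogateescape' error handler is one concrete way to preserve the original bytes, and other systems use different mechanisms:

    bad = b"f\xfcr"   # ISO 8859-1 bytes, not valid UTF-8

    # Lossy approach: replace the bad byte, then save the text back as UTF-8.
    # The original 0xFC byte is gone and cannot be recovered.
    lossy = bad.decode("utf-8", errors="replace").encode("utf-8")
    print(lossy)   # b'f\xef\xbf\xbdr'

    # Byte-preserving approach: carry the invalid byte through as a lone
    # surrogate code point, so re-encoding restores the original bytes exactly,
    # while a display layer can still show an error indicator for it.
    kept = bad.decode("utf-8", errors="surrogateescape")
    print(kept.encode("utf-8", errors="surrogateescape") == bad)   # True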

At one time the replacement character was often used when there was no glyph available in a font for that character. However, most modern text rendering systems instead use a font's .notdef glyph, which in most cases is an empty box (or a "?" or "X" in a box [5] ), sometimes called "tofu". There is no Unicode code point for this symbol.

Thus the replacement character is now only seen for encoding errors, such as invalid UTF-8. Some software attempts to hide this by translating the bytes of invalid UTF-8 to matching characters in Windows-1252 (since that is the most likely source of these errors), so that the replacement character is never seen.
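A rough illustration of that fallback in Python (simplified to a whole-string fallback rather than the per-byte translation some software performs; the function name is hypothetical):

    def decode_with_cp1252_fallback(data: bytes) -> str:
        # Try strict UTF-8 first; if that fails, reinterpret the bytes as
        # Windows-1252 so that no replacement character is ever shown.
        try:
            return data.decode("utf-8")
        except UnicodeDecodeError:
            return data.decode("cp1252", errors="replace")

    print(decode_with_cp1252_fallback(b"f\xfcr"))   # "für" rather than "f�r"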

Unicode chart

Specials [1] [2] [3]
Official Unicode Consortium code chart (PDF)
U+FFF9: IAA (INTERLINEAR ANNOTATION ANCHOR)
U+FFFA: IAS (INTERLINEAR ANNOTATION SEPARATOR)
U+FFFB: IAT (INTERLINEAR ANNOTATION TERMINATOR)
U+FFFC: OBJ (OBJECT REPLACEMENT CHARACTER)
U+FFFD: � (REPLACEMENT CHARACTER)
Notes
1. ^ As of Unicode version 15.0
2. ^ Grey areas indicate non-assigned code points
3. ^ Black areas indicate noncharacters (code points that are guaranteed never to be assigned as encoded characters in the Unicode Standard)

History

The following Unicode-related documents record the purpose and process of defining specific characters in the Specials block:

Version | Final code points [lower-alpha 1] | Count | UTC ID | L2 ID | WG2 ID | Document
1.0.0 U+FFFD 1 (to be determined)
U+FFFE..FFFF 2 (to be determined)
L2/01-295R Moore, Lisa (2001-11-06), "Motion 88-M2", Minutes from the UTC/L2 meeting #88
L2/01-355 N2369 (html, doc) Davis, Mark (2001-09-26), Request to allow FFFF, FFFE in UTF-8 in the text of ISO/IEC 10646
L2/02-154 N2403 Umamaheswaran, V. S. (2002-04-22), "9.3 Allowing FFFF and FFFE in UTF-8", Draft minutes of WG 2 meeting 41, Hotel Phoenix, Singapore, 2001-10-15/19
2.1 U+FFFC 1 UTC/1995-056 Sargent, Murray (1995-12-06), Recommendation to encode a WCH_EMBEDDING character
UTC/1996-002 Aliprand, Joan; Hart, Edwin; Greenfield, Steve (1996-03-05), "Embedded Objects", UTC #67 Minutes
N1365 Sargent, Murray (1996-03-18), Proposal Summary – Object Replacement Character
N1353 Umamaheswaran, V. S.; Ksar, Mike (1996-06-25), "8.14", Draft minutes of WG2 Copenhagen Meeting # 30
L2/97-288 N1603 Umamaheswaran, V. S. (1997-10-24), "7.3", Unconfirmed Meeting Minutes, WG 2 Meeting # 33, Heraklion, Crete, Greece, 20 June – 4 July 1997
L2/98-004R N1681 Text of ISO 10646 – AMD 18 for PDAM registration and FPDAM ballot, 1997-12-22
L2/98-070 Aliprand, Joan; Winkler, Arnold, "Additional comments regarding 2.1", Minutes of the joint UTC and L2 meeting from the meeting in Cupertino, February 25-27, 1998
L2/98-318 N1894 Revised text of 10646-1/FPDAM 18, AMENDMENT 18: Symbols and Others, 1998-10-22
3.0 U+FFF9..FFFB 3 L2/97-255R Aliprand, Joan (1997-12-03), "3.D Proposal for In-Line Notation (ruby)", Approved Minutes – UTC #73 & L2 #170 joint meeting, Palo Alto, CA – August 4-5, 1997
L2/98-055 Freytag, Asmus (1998-02-22), Support for Implementing Inline and Interlinear Annotations
L2/98-070 Aliprand, Joan; Winkler, Arnold, "3.C.5. Support for implementing inline and interlinear annotations", Minutes of the joint UTC and L2 meeting from the meeting in Cupertino, February 25-27, 1998
L2/98-099 N1727 Freytag, Asmus (1998-03-18), Support for Implementing Interlinear Annotations as used in East Asian Typography
L2/98-158 Aliprand, Joan; Winkler, Arnold (1998-05-26), "Inline and Interlinear Annotations", Draft Minutes – UTC #76 & NCITS Subgroup L2 #173 joint meeting, Tredyffrin, Pennsylvania, April 20-22, 1998
L2/98-286 N1703 Umamaheswaran, V. S.; Ksar, Mike (1998-07-02), "8.14", Unconfirmed Meeting Minutes, WG 2 Meeting #34, Redmond, WA, USA; 1998-03-16--20
L2/98-270 Hiura, Hideki; Kobayashi, Tatsuo (1998-07-29), Suggestion to the inline and interlinear annotation proposal
L2/98-281R (pdf, html) Aliprand, Joan (1998-07-31), "In-Line and Interlinear Annotation (III.C.1.c)", Unconfirmed Minutes – UTC #77 & NCITS Subgroup L2 #174 joint meeting, Redmond, WA, July 29-31, 1998
L2/98-363 N1861 Sato, T. K. (1998-09-01), Ruby markers
L2/98-372 N1884R2 (pdf, doc) Whistler, Ken; et al. (1998-09-22), Additional Characters for the UCS
L2/98-416 N1882.zip Support for Implementing Interlinear Annotations, 1998-09-23
L2/98-329 N1920 Combined PDAM registration and consideration ballot on WD for ISO/IEC 10646-1/Amd. 30, AMENDMENT 30: Additional Latin and other characters, 1998-10-28
L2/98-421R Suignard, Michel; Hiura, Hideki (1998-12-04), Notes concerning the PDAM 30 interlinear annotation characters
L2/99-010 N1903 (pdf, html, doc) Umamaheswaran, V. S. (1998-12-30), "8.2.15", Minutes of WG 2 meeting 35, London, U.K.; 1998-09-21/25
L2/98-419 (pdf, doc) Aliprand, Joan (1999-02-05), "Interlinear Annotation Characters", Approved Minutes – UTC #78 & NCITS Subgroup L2 #175 joint meeting, San Jose, CA, December 1-4, 1998
UTC/1999-021 Duerst, Martin; Bosak, Jon (1999-06-08), W3C XML CG statement on annotation characters
L2/99-176R Moore, Lisa (1999-11-04), "W3C Liaison Statement on Annotation Characters", Minutes from the joint UTC/L2 meeting in Seattle, June 8-10, 1999
L2/01-301 Whistler, Ken (2001-08-01), "E. Indicated as "strongly discouraged" for plain text interchange", Analysis of Character Deprecation in the Unicode Standard
  1. Proposed code points and character names may differ from final code points and names


Related Research Articles

Character encoding – Using numbers to represent text characters

Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map".

While Hypertext Markup Language (HTML) has been in use since 1991, HTML 4.0 from December 1997 was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display.

Unicode – Character encoding standard

Unicode, formally The Unicode Standard, is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard, which is maintained by the Unicode Consortium, defines 149,186 characters as of the current version (15.0), covering 161 modern and historic scripts, as well as symbols, 3,664 emoji, and non-visual control and formatting codes.

Web pages authored using HyperText Markup Language (HTML) may contain multilingual text represented with the Unicode universal character set. Key to the relationship between Unicode and HTML is the relationship between the "document character set", which defines the set of characters that may be present in an HTML document and assigns numbers to them, and the "external character encoding", or "charset", used to encode a given document as a sequence of bytes.

UTF-8 is a variable-length character encoding standard used for electronic communication. Defined by the Unicode Standard, the name is derived from Unicode Transformation Format – 8-bit.

UTF-16 – Variable-width encoding of Unicode, using one or two 16-bit code units

UTF-16 (16-bit Unicode Transformation Format) is a character encoding capable of encoding all 1,112,064 valid code points of Unicode (in fact this number of code points is dictated by the design of UTF-16). The encoding is variable-length, as code points are encoded with one or two 16-bit code units. UTF-16 arose from an earlier obsolete fixed-width 16-bit encoding, now known as UCS-2 (for 2-byte Universal Character Set), once it became clear that more than 2^16 (65,536) code points were needed.

The byte order mark (BOM) is a particular usage of the special Unicode character U+FEFF BYTE ORDER MARK, whose appearance as a magic number at the start of a text stream can signal several things to a program reading the text, such as the byte order (endianness) of the stream and the fact that the stream's encoding is Unicode.

UTF-32 (32-bit Unicode Transformation Format) is a fixed-length encoding used to encode Unicode code points that uses exactly 32 bits (four bytes) per code point (but a number of leading bits must be zero, as there are far fewer than 2^32 Unicode code points, which actually need only 21 bits). UTF-32 is a fixed-length encoding, in contrast to all other Unicode transformation formats, which are variable-length encodings. Each 32-bit value in UTF-32 represents one Unicode code point and is exactly equal to that code point's numerical value.

Mojibake – Garbled text as a result of incorrect character encodings

Mojibake is the garbled text that is the result of text being decoded using an unintended character encoding. The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system.

UTF-7 is an obsolete variable-length character encoding for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet E-mail messages that was more efficient than the combination of UTF-8 with quoted-printable.

GB 18030 – Unicode character encoding mostly used for Simplified Chinese

GB 18030 is a Chinese government standard, described as Information Technology — Chinese coded character set, that defines the required language and character support necessary for software in China. GB18030 is the registered Internet name for the official character set of the People's Republic of China (PRC), superseding GB2312. As a Unicode Transformation Format, GB18030 supports both simplified and traditional Chinese characters. It is also compatible with legacy encodings including GB2312, CP936, and GBK 1.0.

A numeric character reference (NCR) is a common markup construct used in SGML and SGML-derived markup languages such as HTML and XML. It consists of a short sequence of characters that, in turn, represents a single character. Since WebSgml, XML and HTML 4, the code points of the Universal Character Set (UCS) of Unicode are used. NCRs are typically used in order to represent characters that are not directly encodable in a particular document. When the document is interpreted by a markup-aware reader, each NCR is treated as if it were the character it represents.
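As a small illustration (a sketch using Python's standard html module, which resolves numeric character references to characters), the replacement character U+FFFD can be written as either a hexadecimal or a decimal NCR:

    import html
    # Both forms resolve to the single character U+FFFD.
    print(html.unescape("f&#xFFFD;r"))   # 'f\ufffdr'
    print(html.unescape("f&#65533;r"))   # same string, via a decimal NCR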

Binary Ordered Compression for Unicode (BOCU) is a MIME compatible Unicode compression scheme. BOCU-1 combines the wide applicability of UTF-8 with the compactness of Standard Compression Scheme for Unicode (SCSU). This Unicode encoding is designed to be useful for compressing short strings, and maintains code point order. BOCU-1 is specified in a Unicode Technical Note.

The Compatibility Encoding Scheme for UTF-16: 8-Bit (CESU-8) is a variant of UTF-8 that is described in Unicode Technical Report #26. A Unicode code point from the Basic Multilingual Plane (BMP), i.e. a code point in the range U+0000 to U+FFFF, is encoded in the same way as in UTF-8. A Unicode supplementary character, i.e. a code point in the range U+10000 to U+10FFFF, is first represented as a surrogate pair, like in UTF-16, and then each surrogate code point is encoded in UTF-8. Therefore, CESU-8 needs six bytes for each Unicode supplementary character while UTF-8 needs only four. Though not specified in the technical report, unpaired surrogates are also encoded as 3 bytes each, and CESU-8 is exactly the same as applying an older UCS-2 to UTF-8 converter to UTF-16 data.
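The six-byte expansion can be illustrated with a short Python sketch (an assumed helper for a single supplementary code point, not a full CESU-8 codec):

    def cesu8_encode_supplementary(cp: int) -> bytes:
        # Split the supplementary code point into a UTF-16 surrogate pair,
        # then encode each surrogate as three UTF-8-style bytes.
        assert 0x10000 <= cp <= 0x10FFFF
        v = cp - 0x10000
        high = 0xD800 + (v >> 10)
        low = 0xDC00 + (v & 0x3FF)
        def three_bytes(u: int) -> bytes:
            return bytes([0xE0 | (u >> 12),
                          0x80 | ((u >> 6) & 0x3F),
                          0x80 | (u & 0x3F)])
        return three_bytes(high) + three_bytes(low)

    print(cesu8_encode_supplementary(0x10000).hex())   # 'eda080edb080', six bytes
    print(len(chr(0x10000).encode("utf-8")))           # 4 bytes in standard UTF-8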

In Unicode, a Private Use Area (PUA) is a range of code points that, by definition, will not be assigned characters by the Unicode Consortium. Three private use areas are defined: one in the Basic Multilingual Plane, and one each in, and nearly covering, planes 15 and 16. The code points in these areas cannot be considered as standardized characters in Unicode itself. They are intentionally left undefined so that third parties may define their own characters without conflicting with Unicode Consortium assignments. Under the Unicode Stability Policy, the Private Use Areas will remain allocated for that purpose in all future Unicode versions.

This article compares Unicode encodings. Two situations are considered: 8-bit-clean environments, and environments that forbid use of byte values that have the high bit set. Originally such prohibitions were to allow for links that used only seven data bits, but they remain in some standards and so some standard-conforming software must generate messages that comply with the restrictions. Standard Compression Scheme for Unicode and Binary Ordered Compression for Unicode are excluded from the comparison tables because it is difficult to simply quantify their size.

Universal Character Set characters – Complete list of the characters available on most computers

The Unicode Consortium and the ISO/IEC JTC 1/SC 2/WG 2 jointly collaborate on the list of the characters in the Universal Coded Character Set. The Universal Coded Character Set, most commonly called the Universal Character Set, is an international standard to map characters, discrete symbols used in natural language, mathematics, music, and other domains, to unique machine-readable data values. By creating this mapping, the UCS enables computer software vendors to interoperate, and transmit—interchange—UCS-encoded text strings from one to another. Because it is a universal map, it can be used to represent multiple languages at the same time. This avoids the confusion of using multiple legacy character encodings, which can result in the same sequence of codes having multiple interpretations depending on the character encoding in use, resulting in mojibake if the wrong one is chosen.

Many Unicode characters are used to control the interpretation or display of text, but these characters themselves have no visual or spatial representation. For example, the null character is used in C-programming application environments to indicate the end of a string of characters. In this way, these programs only require a single starting memory address for a string, since the string ends once the program reads the null character.

The Universal Coded Character Set is a standard set of characters defined by the international standard ISO/IEC 10646, Information technology — Universal Coded Character Set (UCS), which is the basis of many character encodings, improving as characters from previously unrepresented writing systems are added.

This article describes and classifies the Unicode characters that may validly appear in XML.

References

  1. "Unicode character database". The Unicode Standard. Archived from the original on 2017-09-25. Retrieved 2016-07-09.
  2. "Enumerated Versions of The Unicode Standard". The Unicode Standard. Archived from the original on 2016-06-29. Retrieved 2016-07-09.
  3. "3.8: Block-by-Block Charts" (PDF). The Unicode Standard. version 1.0. Unicode Consortium. Archived (PDF) from the original on 2021-02-11. Retrieved 2020-09-30.
  4. Wichary, Marcin. "When Fonts Fall". Figma. Archived from the original on 13 June 2021. Retrieved 6 June 2021.
  5. "Recommendations for OpenType Fonts (OpenType 1.7) - Typography". docs.microsoft.com. Archived from the original on 19 October 2020. Retrieved 18 October 2020.