Unicode has a certain amount of duplication of characters. These are pairs of single Unicode code points that are canonically equivalent. The reason for this is compatibility with legacy systems.
Unless two characters are canonically equivalent, they are not "duplicates" in the narrow sense. There is, however, room for disagreement on whether two Unicode characters really encode the same grapheme in cases such as U+00B5 µ MICRO SIGN versus U+03BC μ GREEK SMALL LETTER MU.
This should be clearly distinguished from Unicode characters that are rendered as identical glyphs or near-identical glyphs (homoglyphs), either because they are historically cognate (such as Greek Η vs. Latin H) or because of coincidental similarity (such as Greek Ρ vs. Latin P, or Greek Η vs. Cyrillic Н, or the following homoglyph septuplet: the astronomical symbol for "Sun" ☉, the "circled dot operator" ⊙, the Gothic letter 𐍈, the IPA symbol for a bilabial click ʘ, the Osage letter 𐓃, the Tifinagh letter ⵙ, and the archaic Cyrillic letter Ꙩ).
Unicode aims at encoding graphemes, not individual "meanings" ("semantics") of graphemes, and not glyphs. It is a matter of case-by-case judgement whether such characters should receive separate encoding when used in technical contexts, e.g. Greek letters used as mathematical symbols: thus, the choice to have a "micro sign" µ separate from Greek μ, but not a "mega sign" separate from Latin M, was a pragmatic decision by the Unicode Consortium for historical reasons (namely, compatibility with Latin-1, which included a micro sign). Technically, µ and μ are not duplicate characters in that the consortium viewed these symbols as distinct characters (while it regarded M for "mega" and Latin M as one and the same character).
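The micro sign's status can be checked with any Unicode normalization library; a brief sketch using Python's standard unicodedata module (one possible tool among many) shows that the micro sign is a compatibility variant, not a canonical duplicate, of Greek mu:

```python
import unicodedata

micro = "\u00b5"  # µ MICRO SIGN, inherited from Latin-1
mu = "\u03bc"     # μ GREEK SMALL LETTER MU

# Canonical normalization (NFC) treats the two as distinct characters...
print(unicodedata.normalize("NFC", micro) == mu)   # False
# ...while compatibility normalization (NFKC) folds the micro sign into mu.
print(unicodedata.normalize("NFKC", micro) == mu)  # True
```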
Note that merely having different "meanings" is not sufficient grounds to split a grapheme into several characters: thus, the acute accent may represent word accent in Welsh or Swedish, it may express vowel quality in French, and it may express vowel length in Hungarian, Icelandic or Irish. Since all these languages are written in the same script, namely the Latin script, the acute accent in its various meanings is considered one and the same combining diacritic character, U+0301 COMBINING ACUTE ACCENT, and so the accented letter é is the same character in French and Hungarian.

There is a separate "combining diacritic acute tone mark" at U+0341 COMBINING ACUTE TONE MARK for the romanization of tone languages; one important difference from the acute accent is that in a language like French, the acute accent replaces the dot over the lowercase i, whereas in a language like Vietnamese, the acute tone mark is added above the dot. Diacritic signs for alphabets considered independent may be encoded separately, such as the acute ("tonos") for the Greek alphabet at U+0384 ΄ GREEK TONOS and for the Armenian alphabet at U+055B ՛ ARMENIAN EMPHASIS MARK. Some Cyrillic-based alphabets (such as Russian) also use the acute accent, but there is no "Cyrillic acute" encoded separately; U+0301 should be used for Cyrillic as well as Latin (see Cyrillic characters in Unicode).

The point that the same grapheme can have many "meanings" is even more obvious considering e.g. the letter U, which has entirely different phonemic referents in the various languages that use it in their orthographies (English /juː/, /ʊ/, /ʌ/, etc., French /y/, German /uː/, /u/, etc., not to mention various uses of U as a symbol).
In traditional Chinese character encodings, characters usually took either a single byte (known as halfwidth) or two bytes (known as fullwidth). Characters that took a single byte were generally displayed at half the width of those that took two bytes. Some characters, such as the Latin alphabet, were available in both halfwidth and fullwidth versions. As the halfwidth versions were more commonly used, they were generally the ones mapped to the standard code points for those characters. Therefore, a separate section was needed for the fullwidth forms to preserve the distinction.
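The relationship between the halfwidth/fullwidth blocks and the ordinary code points is recorded in Unicode as a compatibility decomposition, which a quick check with Python's unicodedata module (used here purely for illustration) makes visible:

```python
import unicodedata

# Fullwidth Ａ (U+FF21) is a compatibility variant of the ordinary A (U+0041):
print(unicodedata.normalize("NFKC", "\uff21"))  # 'A'
# Conversely, halfwidth katakana ｶ (U+FF76) folds to ordinary カ (U+30AB):
print(unicodedata.normalize("NFKC", "\uff76") == "\u30ab")  # True
```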
In some cases, specific graphemes have acquired a specialized symbolic or technical meaning separate from their original function. A prominent example is the Greek letter π, which is widely recognized as the symbol for the ratio of a circle's circumference to its diameter even by people not literate in Greek.
Several variants of the entire Greek and Latin alphabets, intended specifically for use as mathematical symbols, are encoded in the Mathematical Alphanumeric Symbols range. This range disambiguates characters that would usually be considered font variants but are encoded separately because of the widespread use of font variants (e.g. L vs. "script L" ℒ vs. "blackletter L" 𝔏 vs. "boldface blackletter L" 𝕷) as distinct mathematical symbols. It is intended for use only in mathematical or technical notation, not in non-technical text.[1]
Many Greek letters are used as technical symbols. All of the Greek letters are encoded in the Greek section of Unicode, but many are encoded a second time under the name of the technical symbol they represent. The "micro sign" (U+00B5 µ MICRO SIGN) is obviously inherited from ISO 8859-1, but the origin of the others is less clear.
Other Greek glyph variants encoded as separate characters include the lunate sigma Ϲ ϲ contrasting with Σ σ; final sigma ς (strictly speaking a contextual glyph variant) contrasting with σ; and the qoppa numeral symbol Ϟ ϟ contrasting with the archaic Ϙ ϙ.
Greek letters assigned separate "symbol" codepoints include the Letterlike Symbols ϐ, ϵ, ϑ, ϖ, ϱ, ϒ, and ϕ (contrasting with β, ε, θ, π, ρ, Υ, φ); the Ohm symbol Ω (contrasting with Ω); and the mathematical operators for the product ∏ and sum ∑ (contrasting with Π and Σ).
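A subtlety worth noting: the ohm sign is canonically equivalent to capital omega, while the letterlike Greek symbols are only compatibility variants. A short sketch with Python's unicodedata module (an illustration, not part of any standard text) shows the difference:

```python
import unicodedata

# The ohm sign (U+2126) is canonically equivalent to capital omega (U+03A9),
# so even canonical normalization (NFC) replaces it:
print(unicodedata.normalize("NFC", "\u2126") == "\u03a9")  # True

# The letterlike symbols, by contrast, only fold under compatibility (NFKC):
theta_symbol = "\u03d1"  # ϑ GREEK THETA SYMBOL
print(unicodedata.normalize("NFC", theta_symbol) == theta_symbol)   # True
print(unicodedata.normalize("NFKC", theta_symbol) == "\u03b8")      # True (θ)
```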
Unicode has a number of characters specifically designated as Roman numerals, as part of the Number Forms range from U+2160 to U+2188. For example, Roman numeral 1988 (MCMLXXXVIII) could alternatively be written as ⅯⅭⅯⅬⅩⅩⅩⅧ. This range includes both uppercase and lowercase numerals, as well as pre-combined glyphs for numbers up to 12 (Ⅻ for XII), mainly intended for clock faces.
The pre-combined glyphs should only be used to represent the individual numbers where the use of individual glyphs is not wanted, and not to replace compounded numbers. For example, one can combine Ⅹ with Ⅰ to produce Roman numeral 11 (ⅩⅠ), so U+216A (Ⅺ) is compatibility-equivalent to ⅩⅠ. Such characters are also referred to as composite compatibility characters or decomposable compatibility characters. They would not normally have been included in the Unicode standard except for compatibility with other existing encodings (see Unicode compatibility characters). The goal was to accommodate simple translation from existing encodings into Unicode; this makes translation in the opposite direction complicated, because multiple Unicode characters may map to a single character in another encoding. Without the compatibility concerns, the only characters necessary would be Ⅰ, Ⅴ, Ⅹ, Ⅼ, Ⅽ, Ⅾ, Ⅿ, ⅰ, ⅴ, ⅹ, ⅼ, ⅽ, ⅾ, ⅿ, ↀ, ↁ, ↂ, ↇ, ↈ, Ↄ, and ↄ; all other Roman numerals can be composed from these characters.
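The equivalence between the pre-combined and compounded spellings can be observed with Python's unicodedata module (used here only as an illustration); under compatibility normalization, both forms reduce to the same ordinary Latin letters:

```python
import unicodedata

eleven_precombined = "\u216a"     # Ⅺ ROMAN NUMERAL ELEVEN
eleven_compounded = "\u2169\u2160"  # Ⅹ followed by Ⅰ

# Both spellings fold to the Latin letters X and I under NFKC:
print(unicodedata.normalize("NFKC", eleven_precombined))  # 'XI'
print(unicodedata.normalize("NFKC", eleven_compounded))   # 'XI'
```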
Unicode has encoded compatibility characters for contextual Arabic letter forms, with the contextual forms (isolated, final, initial, and medial) encoded as separate code points. For example, U+0647 ه ARABIC LETTER HEH has its contextual forms encoded at U+FEE9 (isolated), U+FEEA (final), U+FEEB (initial), and U+FEEC (medial).
The contextual-form characters are not recommended for general use. There are also compatibility Arabic ligatures encoded, such as U+FDF2 ﷲ ARABIC LIGATURE ALLAH ISOLATED FORM and U+FDFD ﷽ ARABIC LIGATURE BISMILLAH AR-RAHMAN AR-RAHEEM.
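Each presentation form carries its positional variant as a compatibility decomposition, so compatibility normalization folds it back to the ordinary letter; a minimal check with Python's unicodedata module (purely illustrative):

```python
import unicodedata

heh = "\u0647"  # ه ARABIC LETTER HEH
# The four presentation forms of HEH: isolated, final, initial, medial.
for form in ("\ufee9", "\ufeea", "\ufeeb", "\ufeec"):
    # NFKC maps each positional variant back to the base letter.
    print(unicodedata.normalize("NFKC", form) == heh)  # True (four times)
```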
Hebrew presentation forms include ligatures, several precomposed characters, and wide variants of Hebrew letters. The aleph-lamed ligature is encoded as a separate character at U+FB4F ﭏ HEBREW LIGATURE ALEF LAMED. Wide variants of several letters are encoded in the Alphabetic Presentation Forms block at U+FB21–U+FB28.
These characters are variants of ordinary Hebrew letters encoded for the justification of texts written in Hebrew, such as the Torah. Unicode also encodes a stylistic variant of U+05E2 ע HEBREW LETTER AYIN at U+FB20 ﬠ HEBREW LETTER ALTERNATIVE AYIN.
A diacritic is a glyph added to a letter or to a basic glyph. The term derives from the Ancient Greek διακριτικός (diakritikós, "distinguishing"), from διακρίνω (diakrínō, "to distinguish"). The word diacritic is a noun, though it is sometimes used in an attributive sense, whereas diacritical is only an adjective. Some diacritics, such as the acute ⟨ó⟩, grave ⟨ò⟩, and circumflex ⟨ô⟩, are often called accents. Diacritics may appear above or below a letter or in some other position, such as within the letter or between two letters.
The Coptic alphabet is the script used for writing the Coptic language, the most recent development of Egyptian. The repertoire of glyphs is based on the uncial Greek alphabet, augmented by letters borrowed from the Egyptian Demotic. It was the first alphabetic script used for the Egyptian language. There are several Coptic alphabets, as the script varies greatly among the various dialects and eras of the Coptic language.
Greek numerals, also known as Ionic, Ionian, Milesian, or Alexandrian numerals, are a system of writing numbers using the letters of the Greek alphabet. In modern Greece, they are still used for ordinal numbers and in contexts similar to those in which Roman numerals are still used in the Western world. For ordinary cardinal numbers, however, modern Greece uses Arabic numerals.
In the polytonic orthography of Ancient Greek, the rough breathing character is a diacritical mark used to indicate the presence of an /h/ sound before a vowel, diphthong, or after rho. It remained in the polytonic orthography even after the Hellenistic period, when the sound disappeared from the Greek language. In the monotonic orthography of Modern Greek, in use since 1982, it is not used at all.
Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán).
In writing and typography, a ligature occurs where two or more graphemes or letters are joined to form a single glyph. Examples are the characters ⟨æ⟩ and ⟨œ⟩ used in English and French, in which the letters ⟨a⟩ and ⟨e⟩ are joined for the first ligature and the letters ⟨o⟩ and ⟨e⟩ are joined for the second ligature. For stylistic and legibility reasons, ⟨f⟩ and ⟨i⟩ are often merged to create ⟨fi⟩; the same is true of ⟨s⟩ and ⟨t⟩ to create ⟨st⟩. The common ampersand, ⟨&⟩, developed from a ligature in which the handwritten Latin letters ⟨e⟩ and ⟨t⟩ were combined.
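In Unicode, ligatures such as ⟨fi⟩ are typically compatibility characters; a brief illustration with Python's unicodedata module (one possible way to observe this) shows that compatibility normalization splits the ligature back into its constituent letters, while canonical normalization leaves it intact:

```python
import unicodedata

fi = "\ufb01"  # ﬁ LATIN SMALL LIGATURE FI

# NFKC splits the ligature into the two ordinary letters:
print(unicodedata.normalize("NFKC", fi))         # 'fi'
# Canonical NFC does not touch it:
print(unicodedata.normalize("NFC", fi) == fi)    # True
```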
In digital typography, combining characters are characters that are intended to modify other characters. The most common combining characters in the Latin script are the combining diacritical marks.
A precomposed character is a Unicode entity that can also be defined as a sequence of one or more other characters. A precomposed character may typically represent a letter with a diacritical mark, such as é. Technically, é (U+00E9) is a character that can be decomposed into an equivalent string of the base letter e (U+0065) and combining acute accent (U+0301). Similarly, ligatures are precompositions of their constituent letters or graphemes.
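The decomposition of é can be inspected directly with Python's unicodedata module (used here as one convenient illustration); canonical decomposition (NFD) yields the base letter plus the combining accent, and canonical composition (NFC) reassembles them:

```python
import unicodedata

e_acute = "\u00e9"  # é, the precomposed character
decomposed = unicodedata.normalize("NFD", e_acute)

# NFD yields the base letter e (U+0065) plus COMBINING ACUTE ACCENT (U+0301):
print([hex(ord(c)) for c in decomposed])  # ['0x65', '0x301']
# NFC recombines the pair into the single precomposed character:
print(unicodedata.normalize("NFC", decomposed) == e_acute)  # True
```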
The Greek alphabet has been used to write the Greek language since the late 9th or early 8th century BC. It was derived from the earlier Phoenician alphabet, and is the earliest known alphabetic script to have developed distinct letters for consonants as well as vowels. In Archaic and early Classical times, the Greek alphabet existed in many local variants, but, by the end of the 4th century BC, the Ionic-based Euclidean alphabet, with 24 letters, ordered from alpha to omega, had become standard throughout the Greek-speaking world and is the version that is still used for Greek writing today.
In graphemics and typography, the term allograph denotes a glyph that is a design variant of a grapheme, such as a letter, a number, an ideograph, a punctuation mark, or another typographic symbol. An obvious example in English is the distinction between uppercase and lowercase letters. Allographs can vary greatly without affecting the underlying identity of the grapheme: even if the word "cat" is rendered as "cAt", it remains recognizable as the sequence of the three graphemes ⟨c⟩, ⟨a⟩, ⟨t⟩.
Diacritical marks of two dots ⟨¨⟩, placed side by side over or under a letter, are used in several languages for several different purposes. The most familiar to English speakers are the diaeresis and the umlaut, though there are numerous others. For example, in Albanian, ë represents a schwa. Such diacritics are also sometimes used for stylistic reasons.
Unicode supports several phonetic scripts and notation systems through its existing scripts and the addition of extra blocks with phonetic characters. These phonetic characters are derived from an existing script, usually Latin, Greek or Cyrillic. Apart from the International Phonetic Alphabet (IPA), extensions to the IPA and obsolete and nonstandard IPA symbols, these blocks also contain characters from the Uralic Phonetic Alphabet and the Americanist Phonetic Alphabet.
Unicode equivalence is the specification by the Unicode character encoding standard that some sequences of code points represent essentially the same character. This feature was introduced in the standard to allow compatibility with pre-existing standard character sets, which often included similar or identical characters.
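A common idiom for comparing strings under canonical equivalence is to normalize both sides before comparison. The helper below is a sketch, not an official API; the function name is illustrative, and the mechanism is Python's standard unicodedata module:

```python
import unicodedata

def canonically_equal(a: str, b: str) -> bool:
    """Compare two strings under Unicode canonical equivalence by
    normalizing both to NFC first (a common idiom, not an official API)."""
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

# "café" spelled with precomposed é vs. with e + combining acute accent:
print("caf\u00e9" == "cafe\u0301")              # False: raw code points differ
print(canonically_equal("caf\u00e9", "cafe\u0301"))  # True
```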
The Unicode Consortium and the ISO/IEC JTC 1/SC 2/WG 2 jointly collaborate on the list of the characters in the Universal Coded Character Set. The Universal Coded Character Set, most commonly called the Universal Character Set, is an international standard to map characters, discrete symbols used in natural language, mathematics, music, and other domains, to unique machine-readable data values. By creating this mapping, the UCS enables computer software vendors to interoperate, and transmit—interchange—UCS-encoded text strings from one to another. Because it is a universal map, it can be used to represent multiple languages at the same time. This avoids the confusion of using multiple legacy character encodings, which can result in the same sequence of codes having multiple interpretations depending on the character encoding in use, resulting in mojibake if the wrong one is chosen.
In Unicode and the UCS, a compatibility character is a character that is encoded solely to maintain round-trip convertibility with other, often older, standards. As the Unicode Glossary says:
A character that would not have been encoded except for compatibility and round-trip convertibility with other standards
A numeral is a character that denotes a number. The decimal digits 0–9 are used widely in writing systems throughout the world; however, the graphemes representing them differ widely. Therefore, Unicode includes 22 different sets of graphemes for the decimal digits, as well as various decimal points, thousands separators, negative signs, etc. Unicode also includes several non-decimal numeral systems, such as Aegean numerals, Roman numerals, counting rod numerals, Mayan numerals, cuneiform numerals and ancient Greek numerals. There is also a large number of typographical variations of the Western Arabic numerals, provided for specialized mathematical use and for compatibility with earlier character sets, such as ² or ②, and composite characters such as ½.
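These typographical variants are likewise compatibility characters; a short sketch with Python's unicodedata module (illustrative only) shows how compatibility normalization relates them to the plain digits:

```python
import unicodedata

# Superscript ² (U+00B2) and circled ② (U+2461) both fold to the plain digit:
print(unicodedata.normalize("NFKC", "\u00b2"))  # '2'
print(unicodedata.normalize("NFKC", "\u2461"))  # '2'
# The composite ½ (U+00BD) decomposes to 1 + FRACTION SLASH (U+2044) + 2:
print(unicodedata.normalize("NFKC", "\u00bd") == "1\u20442")  # True
```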
A character is a semiotic sign, symbol, grapheme, or glyph – typically a letter, a numerical digit, an ideogram, a hieroglyph, a punctuation mark or another typographic mark.
Many scripts in Unicode, such as Arabic, have special orthographic rules that require certain combinations of letterforms to be combined into special ligature forms. In English, the common ampersand (&) developed from a ligature in which the handwritten Latin letters e and t were combined. The rules governing ligature formation in Arabic can be quite complex, requiring special script-shaping technologies such as the Arabic Calligraphic Engine by Thomas Milo's DecoType.
A typographic approximation is a replacement of an element of the writing system with another glyph or glyphs. The replacement may be a nearly homographic character, a digraph, or a character string. An approximation is different from a typographical error in that an approximation is intentional and aims to preserve the visual appearance of the original. The concept of approximation also applies to the World Wide Web and other forms of textual information available via digital media, though usually at the level of characters, not glyphs.