Unified English Braille Code (UEBC, formerly UBC, now usually simply UEB) is an English-language braille code standard, developed to encompass, in a uniform fashion, the wide variety of literary and technical material in use in the English-speaking world today.
Standard 6-dot braille provides only 63 distinct characters (not including the space character), so over the years a number of distinct rule-sets have been developed to represent literary text, mathematics, scientific material, computer software, the @ symbol used in email addresses, and other varieties of written material. Different countries also used differing encodings at various times: during the 1800s, American Braille competed with English Braille and New York Point in the War of the Dots. As a result of the expanding need to represent technical symbolism, and of divergence across countries during the past 100 years, braille users who wished to read or write a large range of material needed to learn different sets of rules depending on what kind of material they were reading at a given time. Rules for a particular type of material were often not compatible from one system to the next (the rule-sets for literary, mathematical, and computer-related material sometimes conflicted, and the differing approaches to encoding mathematics were themselves mutually incompatible), so a reader had to be notified when the text in a book moved from computer braille code for programming, to Nemeth Code for mathematics, to standard literary braille. Moreover, the braille rule-sets used for mathematics and computer science, and even to some extent braille for literary purposes, differed among the various English-speaking countries.
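The 63-character limit follows directly from the geometry of the cell: each of the six dots is either raised or flat, giving 2^6 = 64 patterns, one of which (the blank cell) is reserved for the space. The short Python sketch below counts the cells via the Unicode braille-patterns block (U+2800 through U+283F for the 6-dot cells); it is illustrative only and is not part of any braille standard.

```python
# Enumerate the 6-dot braille cells via the Unicode braille-patterns block.
# U+2800 is the blank cell; each of the low six bits of the code point
# corresponds to one dot (dot 1 .. dot 6) being raised.

six_dot_cells = [chr(0x2800 + bits) for bits in range(64)]  # 2**6 patterns

print(len(six_dot_cells))           # 64 patterns in total
print(len(six_dot_cells) - 1)       # 63 usable characters once the blank cell
                                    # (the space) is excluded
print("".join(six_dot_cells[1:9]))  # a few sample cells: ⠁⠂⠃⠄⠅⠆⠇⠈
```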
Unified English Braille is intended to provide one set of rules, the same everywhere in the world, which can be applied across various types of English-language material. The notable exception to this unification is Music Braille, which UEB specifically does not encompass because it is already well standardized internationally. Unified English Braille is designed to be readily understood by people familiar with literary braille (the code used in standard prose writing), while also including support for specialized mathematical and scientific symbols, computer-related symbols (the @ sign [1] as well as more specialised programming-language syntax), foreign alphabets, and visual effects (bullets, bold type, accent marks, and so on).
According to the original [2] 1991 specification [3] for UEB, the goals were:
Some goals were specifically and explicitly called out as key objectives, not all of which are mentioned above:
Specifically excluded from the UEB upgrade process was the ability to handle languages outside the Roman alphabet (cf. the various national variants of ASCII in the ISO 8859 series, versus the modern pan-universal Unicode standard, which governs how writing systems are encoded for computerized use).
Work on UEB formally began in 1991, [4] and a preliminary draft standard was published in March 1995 (as UBC), [5] then upgraded several times thereafter. Unified English Braille (UEB) was originally known as Unified Braille Code (UBC), with its English-specific nature being implied, but later[when?] the word "English" was formally incorporated into its name, Unified English Braille Code (UEBC), and still more recently[when?] it has come to be called Unified English Braille (UEB). [6] On April 2, 2004, the International Council on English Braille (ICEB) gave the go-ahead for the unification of the various English braille codes. This decision was reached after 13 years of analysis, research, and debate. The ICEB said that Unified English Braille was sufficiently complete for recognition as an international standard for English braille, which the seven ICEB member countries could consider for adoption as their national code. [7] [8] South Africa adopted UEB almost immediately (in May 2004 [9]). During the following year, the standard was adopted by Nigeria (February 5, 2005 [10]), Australia (May 14, 2005 [11]), and New Zealand (November 2005 [12]). On April 24, 2010, the Canadian Braille Authority (CBA) voted to adopt UEB, making Canada the fifth nation to adopt UEB officially. [13] On October 21, 2011, the UK Association for Accessible Formats voted to adopt UEB as the preferred[clarification needed] code in the UK. [14] On November 2, 2012, the Braille Authority of North America (BANA) became the sixth of the seven member countries of the ICEB to officially adopt UEB. [15]
The major criticism of UEB is that it fails to handle mathematics or computer science as compactly as codes designed to be optimal for those disciplines. Besides requiring more space to represent and more time to read and write, the verbosity of UEB can make learning mathematics more difficult. [16] Nemeth Braille, officially used in the United States since 1952, [17] and as of 2002 the de facto standard [18] for teaching and doing mathematics in braille in the US, was specifically invented [17] to correct the cumbersomeness of doing mathematics in braille. However, although the Nemeth encoding standard was officially adopted by the JUTC of the US and the UK in the 1950s, in practice only the USA switched its mathematical braille to the Nemeth system, whereas the UK continued to use the traditional Henry Martyn Taylor coding (not to be confused with Hudson Taylor, who was involved with the use of Moon type for the blind in China during the 1800s) for its braille mathematics. Programmers in the United States who write their program code files in braille, as opposed to in ASCII text with the use of a screen reader for example, tend to use Nemeth-syntax numerals, whereas programmers in the UK use yet another system (not Taylor numerals and not literary numerals). [19]
The key difference [20] between Nemeth Braille and Taylor (and UEB, which uses an upgraded version of the Taylor encoding for math) is that Nemeth uses "down-shifted" numerals from the fifth decade of the braille alphabet (overwriting various punctuation characters), whereas UEB/Taylor retains the traditional 1800s approach of "up-shifted" numerals from the first decade of the (English) braille alphabet (overwriting the first ten letters, namely ABCDEFGHIJ). Traditional 1800s braille, and also UEB, require the insertion of numeral prefixes wherever numerals appear, which makes representing some mathematical equations 42% more verbose. [4] Alternatives to UEB were proposed in 2001 [4] and 2009, [21] and most recently these were the subject of various technical workshops during 2012. [22] Although UEB adopts some features of Nemeth, the final version of UEB mandates up-shifted numerals, [1] which are the heart of the controversy. According to BANA, which adopted UEB in 2012, the official braille codes for the USA will be UEB and Nemeth Braille (as well as Music Braille for vocals and instrumentals plus IPA Braille for phonetic linguistics), [23] despite the contradictory representation of numerals and arithmetical symbols in the UEB and Nemeth encodings. Thus, although UEB has officially been adopted in most English-speaking ICEB member countries, in the USA (and possibly the UK, where UEB is only the "preferred" system) the new encoding will not be the sole encoding.
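To make the numeral difference concrete, the sketch below (Python, using the Unicode braille-patterns block) maps a string of digits to up-shifted UEB-style cells (the numeric indicator ⠼ followed by the letter shapes a–j) and to the down-shifted lower cells used for Nemeth digits. It is only an illustration of the two digit mappings under simplified assumptions: real UEB and Nemeth transcription involves many additional rules (numeric-mode termination, contractions, code-switching indicators) that are not modelled here.

```python
# Illustrative digit mappings only; not a full UEB or Nemeth transcriber.

UEB_DIGITS = dict(zip("1234567890", "⠁⠃⠉⠙⠑⠋⠛⠓⠊⠚"))    # letters a-j, "up-shifted"
NEMETH_DIGITS = dict(zip("1234567890", "⠂⠆⠒⠲⠢⠖⠶⠦⠔⠴"))  # "down-shifted" lower cells
NUMERIC_INDICATOR = "⠼"  # dots 3-4-5-6, required before UEB numerals

def ueb_number(digits: str) -> str:
    """Up-shifted numerals: numeric indicator followed by letter-shaped digits."""
    return NUMERIC_INDICATOR + "".join(UEB_DIGITS[d] for d in digits)

def nemeth_number(digits: str) -> str:
    """Down-shifted numerals; within Nemeth math context no prefix is needed."""
    return "".join(NEMETH_DIGITS[d] for d in digits)

print(ueb_number("2024"))     # ⠼⠃⠚⠃⠙
print(nemeth_number("2024"))  # ⠆⠴⠆⠲
```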
Another proposed braille notation for encoding math is GS8/GS6, which was specifically invented [24] in the early 1990s as an attempt to get rid of the "up-shifted" numerals used in UEB (see Gardner–Salinas Braille). GS6 implements "extra-dot" numerals [25] from the fourth decade of the English Braille alphabet (overwriting various two-letter ligatures). GS8 expands the braille cell from 2×3 dots to 2×4 dots, quadrupling the available code points from the traditional 64 up to 256, but in GS8 the numerals are still represented in the same way as in GS6 (albeit with a couple of unused dot positions at the bottom). [26]
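The jump from 64 to 256 cells is again simple combinatorics: each additional dot doubles the number of distinct patterns, and the Unicode braille-patterns block (U+2800 through U+28FF) already reserves code points for all 256 8-dot cells. A brief illustrative check in Python, not tied to any particular GS8 table:

```python
# Counting cells: each extra dot doubles the number of distinct patterns.
print(2 ** 6)   # 64  six-dot cells (traditional braille)
print(2 ** 8)   # 256 eight-dot cells (as used by GS8 and 8-dot computer braille)

# Unicode allocates the whole range U+2800..U+28FF to braille patterns,
# so every 8-dot cell has a code point:
print(len(range(0x2800, 0x2900)))  # 256
```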
Attempts to give the numerals their own distinct position in braille are not new: the original 1829 specification by Louis Braille gave the numerals their own distinct symbols, with the modern digraph-based literary-braille approach mentioned as an optional fallback. However, after the system was tried out in the classroom, the dashes used in the numerals (as well as in several other rows of special characters) proved too difficult to distinguish from dot pairs, and so the typical digraph-based numerals became the official standard in 1837.
As of 2013, with the majority of English-speaking ICEB member countries having officially adopted UEB, there remain barriers [27] to implementation [28] and deployment. Besides the ICEB member nations, there are also many other countries with blind citizens that teach and use English: India, Hong Kong/China, Pakistan, the Philippines, and so on. Many of these countries use non-UEB math notation; among English-speaking countries specifically, versions of the Nemeth Code were widespread by 1990 (in the United States, Western Samoa, Canada including Quebec, New Zealand, Israel, Greece, India, Pakistan, Sri Lanka, Thailand, Malaysia, Indonesia, Cambodia, Vietnam, and Lebanon), in contrast to the similar-to-UEB-but-not-identical Taylor notation in use in 1990 in the UK, Ireland, Australia, Nigeria, Hong Kong, Jordan, Kenya, Sierra Leone, Singapore, and Zimbabwe. [29] Some countries in the Middle East, namely Iran and Saudi Arabia, used both Nemeth and Taylor math notations as of 1990. As of 2013, it is unclear whether the English-using blind populations of the various ICEB and non-ICEB nations will move to adopt UEB, and if so, at what rate. Beyond official adoption rates in schools and by individuals, there are other difficulties. The vast majority[citation needed] of existing braille materials, both printed and electronic, are in non-UEB encodings. Furthermore, other technologies that compete with braille are becoming ever more widely affordable: screen readers for electronic text-to-speech, software that converts printed pages to electronic text combined with high-resolution digital cameras and high-speed document scanners, and the increasing ubiquity of tablets, smartphones, PDAs, and PCs. The percentage of blind children who are literate in braille is already declining, and even those who know some system tend not to know UEB, since that system is still very new. Still, as of 2012 many of the original goals for UEB have already been fully or partially accomplished: