Unified English Braille

Unified English Braille Code (UEBC, formerly UBC, now usually simply UEB) is an English-language braille code standard, developed to encompass, in a uniform fashion, the wide variety of literary and technical material in use in the English-speaking world today.

Background

Standard 6-dot braille provides only 63 distinct characters (not including the space character), so over the years a number of distinct rule-sets have been developed to represent literary text, mathematics, scientific material, computer software, the @ symbol used in email addresses, and other varieties of written material. Different countries also used differing encodings at various times: during the 1800s, American Braille competed with English Braille and New York Point in the War of the Dots. As a result of the expanding need to represent technical symbolism, and of a century of divergence across countries, braille users who wanted to read or write a wide range of material have needed to learn different sets of rules, depending on what kind of material they were reading at a given time. Rules for a particular type of material were often not compatible from one system to the next (the rule-sets for the literary, mathematical, and computer encoding areas sometimes conflicted, and of course the differing approaches to encoding mathematics were not compatible with each other), so the reader had to be notified when the text in a book moved from computer braille code for programming, to Nemeth Code for mathematics, to standard literary braille. Moreover, the braille rule-sets used for mathematics and computer science, and even to an extent braille for literary purposes, differed among the various English-speaking countries.
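The 63-character limit follows directly from the cell geometry, as this short illustrative sketch shows (it relies on Unicode's Braille Patterns block, where the low six bits of each code point encode dots 1 through 6; the snippet is purely an illustration, not part of any braille standard):

```python
# Each braille cell has 6 dot positions, each either raised or flat,
# giving 2**6 = 64 patterns; excluding the all-flat cell (the space)
# leaves 63 distinct characters.
# Unicode encodes the 6-dot cells at U+2800..U+283F, with dots 1-6
# stored in the low six bits of the code point.
six_dot_cells = [chr(0x2800 + bits) for bits in range(1, 2 ** 6)]

print(len(six_dot_cells))          # 63
print("".join(six_dot_cells[:5]))  # the first five raised-dot patterns
```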

Goals

Unified English Braille is intended to provide one set of rules, the same everywhere in the world, which can be applied across various types of English-language material. The notable exception to this unification is Music Braille, which UEB specifically does not encompass, because it is already well standardized internationally. Unified English Braille is designed to be readily understood by people familiar with literary braille (used in standard prose writing), while also including support for specialized math and science symbols, computer-related symbols (the @ sign [1] as well as more specialised programming-language syntax), foreign alphabets, and visual effects (bullets, bold type, accent marks, and so on).

According to the original [2] 1991 specification [3] for UEB, the goals were:

1. simplify and unify the system of braille used for encoding English, reducing community-fragmentation
2. reduce the overall number of official coding systems, which currently include:
a. literary code (since 1933, English Braille Grade 2 has been the main component)
i. BANA flavor used in North America, et cetera
ii. BAUK flavor used in United Kingdom, etc.
b. Textbook Formats and Techniques code
c. math-notation and science-notation codes
i. Nemeth Code (since 1952, in North America and several other countries)
ii. modern variants of Taylor Code, a subset of literary code (since 18xx, standard elsewhere, alternative in North America)
iii. Extended Nemeth Code With Chemistry Module
iv. Extended Nemeth Code With Ancient Numeration Module
v. Mathematical Diagrams Module (not actually associated with any particular coding-system)
d. Computer Braille Code (since the 1980s, for special characters)
i. the basic CBC
ii. CBC With Flowchart Module
e. Braille Music Code (since 1829, last upgraded/unified 1997, used for vocals and instrumentals—this one explicitly not to be unified nor eliminated)
f. [added later] IPA Braille code (used for phonetic transcriptions; this one did not yet exist in 1991)
3. if possible, unify the literary-code used across English-speaking countries
4. where it is not possible to reduce the number of coding systems, reduce conflicts
a. most especially, rule-conflicts (which make the codes incompatible at a "software" level—in human brains and computer algorithms)
b. symbol conflicts, for example, the characters "$", "%", "]", and "[" are all represented differently in the various code systems
c. sometimes the official coding-systems themselves are not explicitly in conflict, but ambiguity in their rules can lead to accidental conflicts
5. the overall goal of steps 1 to 4 above is to make acquisition of reading, writing, and teaching skill in the use of braille quicker, easier, and more efficient
6. this in turn will help reverse the trend of steadily eroding usage of braille itself (which is increasingly being displaced by electronic alternatives, or simply by illiteracy)
7. besides those practical goals, it is also desired that braille—as a writing system—have the properties required for long-term success:
a. universal, with no special code-system for particular subject-matter, no special-purpose "modules", and no serious disagreements about how to encode English
b. coherent, with no internal conflicts, and thus no need for authoritative fiat to "resolve" such conflicts by picking winners and losers
c. ease of use, with dramatically less need for braille-coding-specific lessons, certifications, workshops, literature, etc.
d. uniform yet extensible, with symbol-assignment giving an unvarying identity-relationship, and new symbols possible without conflicts or overhauls
8. philosophically, an additional goal is to upgrade the braille system to be practical for employment in a workplace, not just for reading recreational and religious texts
a. computer-friendly (braille-production on modern keyboards and braille-consumption via computerized file formats—see also Braille e-book which did not really exist back in 1990)
b. tech-writing-friendly (straightforward handling of notations used in math/science/medical/programming/engineering/similar)
c. precise bidirectional representation (both #8a and #8b can be largely satisfied by a precision writing system…but the existing braille systems as of 1990 were not fully precise, replacing symbols with words, converting unit-systems, altering punctuation, and so on)
9. upgrades to existing braille-codes are required, and then these modified codes can be merged into a unified code (preferably singular plus the music-code)

Some goals, not all of which are listed above, were specially and explicitly called out as key objectives.

A goal that was specifically not part of the UEB upgrade process was the ability to handle languages outside the Roman alphabet (cf. the various national variants of ASCII in the ISO 8859 series versus the modern pan-universal Unicode standard, which governs how writing systems are encoded for computerized use).

History and adoption

Work on UEB formally began in 1991, [4] and a preliminary draft standard was published in March 1995 (as UBC), [5] then upgraded several times thereafter. Unified English Braille (UEB) was originally known as Unified Braille Code (UBC), with its English-specific nature being implied; the word "English" was later formally incorporated into its name, Unified English Braille Code (UEBC), and still more recently it has come to be called Unified English Braille (UEB). [6] On April 2, 2004, the International Council on English Braille (ICEB) gave the go-ahead for the unification of the various English braille codes. This decision was reached after 13 years of analysis, research, and debate. ICEB said that Unified English Braille was sufficiently complete for recognition as an international standard for English braille, which the seven ICEB member countries could consider for adoption as their national code. [7] [8] South Africa adopted UEB almost immediately (in May 2004 [9] ). During the following year, the standard was adopted by Nigeria (February 5, 2005 [10] ), Australia (May 14, 2005 [11] ), and New Zealand (November 2005 [12] ). On April 24, 2010, the Canadian Braille Authority (CBA) voted to adopt UEB, making Canada the fifth nation to adopt UEB officially. [13] On October 21, 2011, the UK Association for Accessible Formats voted to adopt UEB as the preferred code in the UK. [14] On November 2, 2012, the Braille Authority of North America (BANA) became the sixth of the seven ICEB member countries to officially adopt UEB. [15]

Mathematical notation

The major criticism of UEB is that it fails to handle mathematics or computer science as compactly as codes designed to be optimal for those disciplines. Besides requiring more space to represent and more time to read and write, the verbosity of UEB can make learning mathematics more difficult. [16] Nemeth Braille, officially used in the United States since 1952, [17] and as of 2002 the de facto standard [18] for teaching and doing mathematics in braille in the US, was specifically invented [17] to correct the cumbersomeness of doing mathematics in braille. However, although the Nemeth encoding standard was officially adopted by the JUTC of the US and the UK in the 1950s, in practice only the USA switched its mathematical braille to the Nemeth system, whereas the UK continued to use the traditional Henry Martyn Taylor coding (not to be confused with Hudson Taylor, who was involved with the use of Moon type for the blind in China during the 1800s) for its braille mathematics. Programmers in the United States who write their program code files in braille, as opposed to in ASCII text with the use of a screen reader for example, tend to use Nemeth-syntax numerals, whereas programmers in the UK use yet another system (neither Taylor numerals nor literary numerals). [19]

The key difference [20] between Nemeth Braille and Taylor (and UEB, which uses an upgraded version of the Taylor encoding for math) is that Nemeth uses "down-shifted" numerals from the fifth decade of the braille alphabet (overwriting various punctuation characters), whereas UEB/Taylor uses the traditional 1800s approach with "up-shifted" numerals from the first decade of the (English) braille alphabet (overwriting the first ten letters, namely ABCDEFGHIJ). Traditional 1800s braille, and also UEB, require the insertion of numeral prefixes when writing numerals, which makes representing some mathematical equations 42% more verbose. [4] As alternatives to UEB, there were proposals in 2001 [4] and 2009, [21] and most recently these were the subject of various technical workshops during 2012. [22] Although UEB adopts some features of Nemeth, the final version of UEB mandates up-shifted numerals, [1] which are the heart of the controversy. According to BANA, which adopted UEB in 2012, the official braille codes for the USA will be UEB and Nemeth Braille (as well as Music Braille for vocals and instrumentals, plus IPA Braille for phonetic linguistics), [23] despite the contradictory representations of numerals and arithmetical symbols in the UEB and Nemeth encodings. Thus, although UEB has officially been adopted in most English-speaking ICEB member countries, in the USA (and possibly the UK, where UEB is only the "preferred" system) the new encoding will not be the sole encoding.
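The contrast between the two numeral conventions can be sketched in a few lines (illustrative Python; the digit table and helper names are this article's illustration, not part of either specification). UEB prefixes a run of digits with the numeric indicator ⠼ (dots 3-4-5-6) and writes each digit as the corresponding letter A through J; the Nemeth form of each digit is the same dot shape moved one row down, which in Unicode's bit layout amounts to a one-bit shift:

```python
# UEB "up-shifted" digits: a numeric indicator followed by letters a-j.
NUMERIC_INDICATOR = "\u283c"  # ⠼ dots 3-4-5-6
UEB_DIGITS = {                # digit -> braille cell of the letter a..j
    "1": "\u2801", "2": "\u2803", "3": "\u2809", "4": "\u2819",
    "5": "\u2811", "6": "\u280b", "7": "\u281b", "8": "\u2813",
    "9": "\u280a", "0": "\u281a",
}

def ueb_number(digits: str) -> str:
    """One numeric indicator, then one letter-shaped cell per digit."""
    return NUMERIC_INDICATOR + "".join(UEB_DIGITS[d] for d in digits)

def nemeth_digit(ueb_cell: str) -> str:
    """Nemeth "down-shifted" form of an up-shifted digit cell.

    Up-shifted digits use only dots 1, 2, 4, and 5; shifting the Unicode
    bit pattern left by one moves every dot down a row (1->2, 2->3,
    4->5, 5->6), landing on the punctuation-row shapes Nemeth reuses.
    """
    return chr(0x2800 + ((ord(ueb_cell) - 0x2800) << 1))

print(ueb_number("42"))               # ⠼⠙⠃
print(nemeth_digit(UEB_DIGITS["0"]))  # ⠴ (Nemeth zero, dots 3-5-6)
```

Note that the Nemeth cells need no numeric indicator in many contexts, which is the source of its compactness, and of its symbol conflicts with literary punctuation.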

Another proposed braille notation for encoding math is GS8/GS6, which was specifically invented [24] in the early 1990s as an attempt to get rid of the "up-shifted" numerals used in UEB (see Gardner–Salinas Braille). GS6 implements "extra-dot" numerals [25] from the fourth decade of the English Braille alphabet (overwriting various two-letter ligatures). GS8 expands the braille cell from 2×3 dots to 2×4 dots, quadrupling the available codepoints from the traditional 64 to 256, but in GS8 the numerals are still represented in the same way as in GS6 (albeit with a couple of unused dot positions at the bottom). [26]
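The codepoint arithmetic behind the GS8 expansion is straightforward (an illustrative sketch):

```python
# A 2x3 cell has 6 binary dot positions; a 2x4 cell has 8.
six_dot_patterns = 2 ** (2 * 3)    # 64, the traditional ceiling
eight_dot_patterns = 2 ** (2 * 4)  # 256, the GS8 space

print(eight_dot_patterns // six_dot_patterns)  # 4: the space quadruples
```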

Attempts to give the numerals their own distinct position in braille are not new: the original 1829 specification by Louis Braille gave the numerals their own distinct symbols, with the modern digraph-based literary-braille approach mentioned as an optional fallback. However, after the system was tried out in the classroom, the dashes used in the numerals, as well as in several other rows of special characters, were found to be too difficult to distinguish from dot pairs, and so the typical digraph-based numerals became the official standard in 1837.

Implementation

As of 2013, with the majority of English-speaking ICEB member countries having officially adopted UEB, there remain barriers [27] to implementation [28] and deployment. Besides the ICEB member nations, there are many other countries with blind citizens that teach and use English: India, Hong Kong/China, Pakistan, the Philippines, and so on. Many of these countries use non-UEB math notation; among English-speaking countries specifically, versions of the Nemeth Code were widespread by 1990 (in the United States, Western Samoa, Canada including Quebec, New Zealand, Israel, Greece, India, Pakistan, Sri Lanka, Thailand, Malaysia, Indonesia, Cambodia, Vietnam, and Lebanon), in contrast to the similar-to-UEB-but-not-identical Taylor notation (used in 1990 by the UK, Ireland, Australia, Nigeria, Hong Kong, Jordan, Kenya, Sierra Leone, Singapore, and Zimbabwe). [29] Some countries in the Middle East, namely Iran and Saudi Arabia, used both the Nemeth and Taylor math notations as of 1990. As of 2013, it is unclear whether the English-using blind populations of the various ICEB and non-ICEB nations will move to adopt UEB, and if so, at what rate. Beyond official adoption rates in schools and by individuals, there are other difficulties. The vast majority of existing braille materials, both printed and electronic, are in non-UEB encodings. Furthermore, other technologies that compete with braille are ever more widely affordable (screen readers for electronic text-to-speech, plus page-to-text software combined with high-resolution digital cameras and high-speed document scanners, and the increasing ubiquity of tablets, smartphones, PDAs, and PCs). The percentage of blind children who are literate in braille is already declining, and even those who know some system tend not to know UEB, since that system is still very new. Still, as of 2012, many of the original goals for UEB had already been fully or partially accomplished.


References

  1. "Overview of Changes". BrailleAuthority.org.
  2. "Untitled Document". BrailleAuthority.org.
  3. Cranmer; Nemeth. "A Uniform Braille Code". ICEB.org.
  4. Archived 2012-10-30 at the Wayback Machine.
  5. "Unified Braille Code". Archived from the original on 2013-10-30. Retrieved 2013-07-09.
  6. "A Single Braille Code for All English-Speaking Peoples of the World". International Council on English Braille. Retrieved 22 October 2012.
  7. "Green Light for Unified English Braille". International Council on English Braille. Retrieved 20 October 2012.
  8. "Unified English Braille (UEB)". Australian Braille Authority. Retrieved 20 October 2012.
  9. "UEB in South Africa". Australian Braille Authority. Retrieved 20 October 2012.
  10. "The National Braille Council of Nigeria votes to accept the Unified English Braille Code (UEB) for future implementation in Nigeria". Australian Braille Authority. Retrieved 20 October 2012.
  11. "Resolution regarding Unified English Braille passed by the Australian Braille Authority May 2005". Australian Braille Authority. Retrieved 20 October 2012.
  12. "About The Trust". The Braille Authority of New Zealand Aotearoa Trust. Retrieved 20 October 2012.
  13. "News". Canadian Braille Authority. Retrieved 20 October 2012.
  14. "UK Association for Accessible Formats adopts UEB as the braille code for the UK". UK Association for Accessible Formats. Retrieved 20 October 2012.
  15. "BANA Adopts UEB". Braille Authority of North America. November 2012. Retrieved December 18, 2012.
  16. "unifiedbrailleforall.com – unifiedbrailleforall Resources and Information".
  17. "The History of the Nemeth Code: An Interview with Dr. Abraham Nemeth".
  18. "The World of Blind Mathematicians" (PDF). Retrieved 2024-01-28.
  19. "Brief Overview of Current Braille Codes". Archived from the original on 2013-10-29. Retrieved 2013-07-09.
  20. "Unified English Braille".
  21. "NUBS Documents".
  22. "unifiedbrailleforall.com".
  23. "BANA Adopts UEB".
  24. "CNN – Blind physicist creates better Braille – Nov 9, 1995".
  25. "GS Numbers". Archived from the original on 2014-10-02. Retrieved 2013-07-08.
  26. Gardner–Salinas Braille § Digits.
  27. "Considerations and Impact".
  28. "Planning the Transition to UEB".
  29. "chezdom.net » Braille Mathematical Notations". 22 July 2008.
  30. "BANA file".
  31. Contrast the page 12 definition versus the page 13 example, and compare with page 236, which follows the second style.