One-hot

Decimal   Binary   Unary      One-hot
0         000      00000000   00000001
1         001      00000001   00000010
2         010      00000011   00000100
3         011      00000111   00001000
4         100      00001111   00010000
5         101      00011111   00100000
6         110      00111111   01000000
7         111      01111111   10000000

In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0). [1] A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold. [2] In statistics, dummy variables represent a similar technique for representing categorical data.

Applications

Digital circuitry

One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder as the state machine is in the nth state if, and only if, the nth bit is high.
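
In software terms, the difference can be illustrated with a short Python sketch (my illustration, not from the cited sources): with binary encoding, identifying the state requires decoding the whole register, while with one-hot encoding it suffices to test a single bit.

    def in_state_binary(state_reg, n):
        # Binary encoding: the register holds the state number itself,
        # so the whole value must be decoded (compared) to test for state n.
        return state_reg == n

    def in_state_one_hot(state_reg, n):
        # One-hot encoding: the machine is in state n if, and only if,
        # bit n is high, so one flip-flop output answers the question.
        return ((state_reg >> n) & 1) == 1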

A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip-flops chained in series with the Q output of each flip-flop connected to the D input of the next and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state.
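
The behaviour of this ring counter can be sketched in a few lines of Python (a simulation for illustration only; the hardware consists of flip-flops, not lists):

    NUM_STATES = 15

    def reset():
        # On reset, only the first flip-flop is set to '1'.
        return [1] + [0] * (NUM_STATES - 1)

    def clock(q):
        # Each D input takes the previous Q output; the first D input
        # takes the last Q output, closing the ring.
        return [q[-1]] + q[:-1]

    state = reset()
    for _ in range(NUM_STATES + 1):
        print(state.index(1))  # 0, 1, ..., 14, then wraps back to 0
        state = clock(state)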

An address decoder converts from binary to one-hot representation. A priority encoder converts from one-hot representation to binary.
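
In software, the same two conversions reduce to a left shift and a bit-index lookup, as in this minimal Python sketch (my illustration):

    def address_decode(n):
        # Binary -> one-hot: set only bit n (e.g. 3 -> 0b00001000).
        return 1 << n

    def priority_encode(one_hot):
        # One-hot -> binary: the index of the single high bit
        # (e.g. 0b00001000 -> 3).
        return one_hot.bit_length() - 1

    assert priority_encode(address_decode(5)) == 5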

Comparison with other encoding methods

Advantages
  • Determining the state has a low and constant cost of accessing one flip-flop
  • Changing the state has the constant cost of accessing two flip-flops
  • Easy to design and modify
  • Easy to detect illegal states (see the sketch after this list)
  • Takes advantage of an FPGA's abundant flip-flops
  • Using a one-hot implementation typically allows a state machine to run at a faster clock rate than any other encoding of that state machine [3]
Disadvantages
  • Requires more flip-flops than other encodings, making it impractical for PAL devices
  • Many of the states are illegal [4]
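
A one-hot state register holds a legal value exactly when a single bit is set, which the usual clear-lowest-set-bit trick checks cheaply. A minimal Python sketch (my illustration, not from the cited sources):

    def is_legal_one_hot(state_reg):
        # Legal iff exactly one bit is set: the value is nonzero and
        # clearing its lowest set bit (x & (x - 1)) leaves zero.
        return state_reg != 0 and (state_reg & (state_reg - 1)) == 0

    assert is_legal_one_hot(0b00100000)
    assert not is_legal_one_hot(0b00100100)  # two hot bits: illegal
    assert not is_legal_one_hot(0b00000000)  # no hot bit: illegal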

Natural language processing

In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary. [5] The vector consists of 0s in all cells except for a single 1 in the cell that uniquely identifies the word. One-hot encoding ensures that a machine learning model does not assume that higher numbers are more important: the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same holds for words: 'laughter' is not more important than 'laugh'.
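
A small Python sketch of the idea (the three-word vocabulary is invented for illustration):

    vocabulary = ["laugh", "laughter", "smile"]

    def one_hot_vector(word):
        # A 1 x N vector: a single 1 in the cell reserved for this word.
        index = vocabulary.index(word)
        return [1 if i == index else 0 for i in range(len(vocabulary))]

    print(one_hot_vector("laughter"))  # [0, 1, 0]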

Machine learning and statistics

In machine learning, one-hot encoding is a frequently used method for dealing with categorical data. Because many machine learning models require their input variables to be numeric, categorical variables must be transformed during pre-processing. [6]

Label encoding
Food Name   Categorical #   Calories
Apple       1               95
Chicken     2               231
Broccoli    3               50

One-hot encoding
Apple   Chicken   Broccoli   Calories
1       0         0          95
0       1         0          231
0       0         1          50

Categorical data can be either nominal or ordinal. [7] Ordinal data has a ranked order for its values and can therefore be converted to numerical data through ordinal encoding. [8] An example of ordinal data would be the ratings on a test ranging from A to F, which could be ranked using numbers from 6 to 1. Since there is no quantitative relationship between nominal variables' individual values, using ordinal encoding can potentially create a fictional ordinal relationship in the data. [9] Therefore, one-hot encoding is often applied to nominal variables, in order to improve the performance of the algorithm.
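
The test-rating example might be ordinal-encoded with an explicit mapping, as in this sketch (the mapping is my illustration):

    # Ordinal encoding preserves the ranking A > B > ... > F.
    rating_to_number = {"A": 6, "B": 5, "C": 4, "D": 3, "E": 2, "F": 1}

    ratings = ["A", "C", "F"]
    print([rating_to_number[r] for r in ratings])  # [6, 4, 1]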

In this method, a new column is created for each unique value in the original categorical column. These dummy variables are then filled with zeros and ones (1 meaning TRUE, 0 meaning FALSE). [10]
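
In Python, this expansion is performed by pandas' get_dummies function; the sketch below (my example, assuming pandas is available) reproduces the food table above:

    import pandas as pd

    df = pd.DataFrame({
        "Food Name": ["Apple", "Chicken", "Broccoli"],
        "Calories": [95, 231, 50],
    })

    # One new 0/1 column is created per unique value of "Food Name";
    # the dummy columns appear in alphabetical order of the categories.
    encoded = pd.get_dummies(df, columns=["Food Name"], dtype=int)
    print(encoded)
    #    Calories  Food Name_Apple  Food Name_Broccoli  Food Name_Chicken
    # 0        95                1                   0                  0
    # 1       231                0                   0                  1
    # 2        50                0                   1                  0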

Because this process creates multiple new variables, it is prone to creating a 'big p' problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy. [11]
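
A common mitigation for the multicollinearity is to drop one dummy column, since the dropped category is implied when all remaining dummies are zero; pandas supports this directly (a sketch continuing the example above):

    # drop_first=True removes the first dummy column, breaking the exact
    # linear dependence among the dummy variables.
    reduced = pd.get_dummies(df, columns=["Food Name"], drop_first=True, dtype=int)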

Also, if the categorical variable is an output variable, the predicted values may need to be converted back into categorical form in order to present them in an application. [12]
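
Converting back amounts to finding the index of the 1 in each row, as in this minimal sketch (my illustration):

    categories = ["Apple", "Chicken", "Broccoli"]
    one_hot_rows = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

    # The position of the single 1 recovers the original category.
    print([categories[row.index(1)] for row in one_hot_rows])
    # ['Apple', 'Chicken', 'Broccoli']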

In practical usage, this transformation is often performed directly by a function that takes categorical data as input and outputs the corresponding dummy variables. An example is the dummyVars function of the caret package in R. [13]


Related Research Articles

The bit is the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also commonly used.

In digital logic and computing, a counter is a device which stores the number of times a particular event or process has occurred, often in relationship to a clock. The most common type is a sequential digital logic circuit with an input line called the clock and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter.

In electronics, a multiplexer, also known as a data selector, is a device that selects between several analog or digital input signals and forwards the selected input to a single output line. The selection is directed by a separate set of digital inputs known as select lines. A multiplexer of 2^n inputs has n select lines, which are used to select which input line to send to the output.

Pattern recognition is the automated recognition of patterns and regularities in data. It has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power. These activities can be viewed as two facets of the same field of application, and they have undergone substantial development over the past few decades.

In computer programming, Base64 is a group of binary-to-text encoding schemes that represent binary data in sequences of 24 bits that can be represented by four 6-bit Base64 digits.

In digital electronics, a binary decoder is a combinational logic circuit that converts binary information from the n coded inputs to a maximum of 2^n unique outputs. They are used in a wide variety of applications, including instruction decoding, data multiplexing and data demultiplexing, seven-segment displays, and as address decoders for memory and port-mapped I/O.

In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph. LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical maximum for a symmetric memoryless channel. The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear to their block length.

In statistics, a categorical variable is a variable that can take on one of a limited, and usually fixed, number of possible values, assigning each individual or other unit of observation to a particular group or nominal category on the basis of some qualitative property. In computer science and some branches of mathematics, categorical variables are referred to as enumerations or enumerated types. Commonly, each of the possible values of a categorical variable is referred to as a level. The probability distribution associated with a random categorical variable is called a categorical distribution.

Binary data is data whose unit can take on only two possible states. These are often labelled as 0 and 1 in accordance with the binary numeral system and Boolean algebra.

In statistics, classification is the problem of identifying which of a set of categories (sub-populations) an observation belongs to. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient.

A binary-to-text encoding is encoding of data in plain text. More precisely, it is an encoding of binary data in a sequence of printable characters. These encodings are necessary for transmission of data when the channel does not allow binary data or is not 8-bit clean. PGP documentation uses the term "ASCII armor" for binary-to-text encoding when referring to Base64.

A ring counter is a type of counter composed of flip-flops connected into a shift register, with the output of the last flip-flop fed to the input of the first, making a "circular" or "ring" structure.

LEB128 or Little Endian Base 128 is a variable-length code compression used to store arbitrarily large integers in a small number of bytes. LEB128 is used in the DWARF debug file format and the WebAssembly binary encoding for all integer literals.

A priority encoder is a circuit or algorithm that compresses multiple binary inputs into a smaller number of outputs. The output of a priority encoder is the binary representation of the index of the most significant activated line, starting from zero. They are often used to control interrupt requests by acting on the highest priority interrupt input.

In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.

In cryptography, Mutual Irregular Clocking KEYstream generator (MICKEY) is a stream cipher algorithm developed by Steve Babbage and Matthew Dodd. The cipher is designed to be used in hardware platforms with limited resources, and was one of the three ciphers accepted into Profile 2 of the eSTREAM portfolio. The algorithm is not patented and is free for any use.

Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. It is a generalized random-access memory (RAM) for long binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits.

State encoding assigns a unique pattern of ones and zeros to each defined state of a finite-state machine (FSM). Traditionally, design criteria for FSM synthesis were speed, area or both. Following Moore's law, with technology advancement, density and speed of integrated circuits have increased exponentially. With this, power dissipation per area has inevitably increased, which has forced designers for portable computing devices and high-speed processors to consider power dissipation as a critical parameter during design consideration.

A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data. It is used primarily in the fields of natural language processing (NLP) and computer vision (CV).

References

  1. Harris, David; Harris, Sarah (2012-08-07). Digital Design and Computer Architecture (2nd ed.). San Francisco, Calif.: Morgan Kaufmann. p. 129. ISBN 978-0-12-394424-5.
  2. Harrag, Fouzi; Gueliani, Selmene (2020). "Event Extraction Based on Deep Learning in Food Hazard Arabic Texts". arXiv:2008.05014.
  3. Xilinx (1995). "HDL Synthesis for FPGAs Design Guide". Section 3.13: "Encoding State Machines". Appendix A: "Accelerate FPGA Macros with One-Hot Approach".
  4. Cohen, Ben (2002). Real Chip Design and Verification Using Verilog and VHDL. Palos Verdes Peninsula, CA, US: VhdlCohen Publishing. p. 48. ISBN 0-9705394-2-8.
  5. Arnaud, Émilien; Elbattah, Mahmoud; Gignon, Maxime; Dequen, Gilles (August 2021). "NLP-Based Prediction of Medical Specialties at Hospital Admission Using Triage Notes". 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI). Victoria, British Columbia. pp. 548–553. doi:10.1109/ICHI52183.2021.00103. Retrieved 2022-05-22.
  6. Brownlee, Jason (2017). "Why One-Hot Encode Data in Machine Learning?". Machine Learning Mastery. https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/
  7. Stevens, S. S. (1946). "On the Theory of Scales of Measurement". Science, New Series, 103.2684, pp. 677–680. http://www.jstor.org/stable/1671815
  8. Brownlee, Jason (2020). "Ordinal and One-Hot Encodings for Categorical Data". Machine Learning Mastery. https://machinelearningmastery.com/one-hot-encoding-for-categorical-data/
  9. Brownlee, Jason (2020). "Ordinal and One-Hot Encodings for Categorical Data". Machine Learning Mastery. https://machinelearningmastery.com/one-hot-encoding-for-categorical-data/
  10. Yadav, Dinesh (2019). "Categorical Encoding Using Label-Encoding and One-Hot-Encoder". Towards Data Science. https://towardsdatascience.com/categorical-encoding-using-label-encoding-and-one-hot-encoder-911ef77fb5bd
  11. Ye, Andre (2020). "Stop One-Hot Encoding Your Categorical Variables". Towards Data Science. https://towardsdatascience.com/stop-one-hot-encoding-your-categorical-variables-bbb0fba89809
  12. Brownlee, Jason (2017). "Why One-Hot Encode Data in Machine Learning?". Machine Learning Mastery. https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/
  13. Kuhn, Max. "dummyVars". RDocumentation. https://www.rdocumentation.org/packages/caret/versions/6.0-86/topics/dummyVars