Nominal category

Introduction to nominal data

A categorical variable assigns each data point in a set of observations to a qualitative category. Categorical variables use one of two types of scale: ordinal or nominal. [1] An ordinal scale has a natural ordering, with levels defined by a sense of degree or quality; variables measured on such a scale are known as ordinal variables. By contrast, variables with unordered scales are nominal variables. [1]

[Image: Visual difference between nominal and ordinal data (with examples), the two scales of categorical data]

A nominal variable, or nominal group, is a collection of objects or ideas classified together by a particular qualitative characteristic. [3] Nominal variables have no natural order, which means that statistical analyses of these variables produce the same results regardless of the order in which the data are presented. [1] [3]

Although statistical methods designed for ordinal variables cannot be applied to nominal groups, methods for nominal data can be applied to both types of categorical data set. However, treating ordinal data as nominal discards the ordering, so any further analysis of the dataset can yield only nominal-level results. [1]

Valid operations on nominal data

Since nominal data identify each data point only as a member or non-member of a group, an individual data point carries no significance beyond its group membership. Identification of the data also determines whether new nominal groups need to be formed from the available information. [3] Because nominal categories cannot be numerically ordered or ranked, members of a nominal group cannot be placed on an ordinal or ratio scale.

Nominal data are often compared with ordinal and ratio data to determine whether individual data points influence the behavior of quantitatively driven datasets. [1] [4] For example, the effect of race (nominal) on income (ratio) could be investigated by regressing the level of income on one or more dummy variables that specify race. When nominal variables are used in these contexts, the data operations that may validly be performed are limited. Arithmetic operations and measures of central tendency such as the mean and median cannot be computed for nominal categories. Operations that can be performed include comparing frequencies and the frequency distribution, determining the mode, creating pivot tables, applying chi-square goodness-of-fit and independence tests, coding and recoding, and fitting logistic or probit regressions. [1] [3] [4]
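As an illustration of these valid operations, the following sketch (using a hypothetical set of "computer brand owned" responses) computes a frequency distribution, the mode, and a chi-square goodness-of-fit statistic against a uniform expected distribution:

```python
from collections import Counter

# Hypothetical nominal observations: computer brand owned by survey respondents.
brands = ["Apple", "Dell", "Apple", "Lenovo", "Dell", "Apple", "HP", "Dell", "Apple"]

# Frequency distribution: counts per category.
freq = Counter(brands)  # counts: Apple 4, Dell 3, Lenovo 1, HP 1

# Mode: the most frequent category (the only meaningful "average" for nominal data).
mode = freq.most_common(1)[0][0]  # "Apple"

# Chi-square goodness-of-fit statistic against a uniform expected distribution:
# chi2 = sum over categories of (observed - expected)^2 / expected.
n = len(brands)
expected = n / len(freq)  # 9 observations / 4 categories = 2.25
chi2 = sum((obs - expected) ** 2 / expected for obs in freq.values())  # ≈ 3.0
```

Only counting-based summaries like these are meaningful here; a "mean brand" or a ranking of the labels would not be.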

[Image: Collection and description of nominal data (from frequency distribution to bar charts) using qualitative information, such as computer brand owned]

Examples and logical analysis of nominal data

As the term 'nominal' suggests, nominal groups are based on the names of the data they encapsulate. [3] For example, citizenship is a nominal group: a person either is or is not a citizen of a country. A citizen of Canada does not have "more citizenship" than another citizen of Canada; therefore, citizenship cannot be ordered by any mathematical logic.

Another example of name categorization is identifying "words that start with the letter 'a'". Thousands of words start with the letter 'a', but none has "more" of this nominal quality than the others. What matters is whether a word belongs to the group at all, not how strongly it belongs; membership is a yes-or-no property, so the data cannot be quantified as an ordinal group.

With this, correlating two nominal categories is difficult, because some apparent relationships are spurious, with two or more variables incorrectly assumed to correlate with one another. Comparisons within categories may also be unimportant. For example, determining whether proportionally more Canadians than non-Canadians have first names starting with the letter 'a' would be a fairly arbitrary exercise. However, comparing nominal data through a frequency distribution can be effective: cross-tabulating gender with political affiliation, for instance, compares the counts for each party affiliation against the numbers of male and female voters recorded in a dataset.
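The gender-versus-party comparison can be sketched as a simple cross-tabulation (a pivot table of counts); the paired observations below are hypothetical:

```python
from collections import Counter

# Hypothetical paired nominal observations: (gender, party affiliation) per voter.
voters = [
    ("female", "Party X"), ("male", "Party Y"), ("female", "Party X"),
    ("male", "Party X"), ("female", "Party Y"), ("male", "Party Y"),
    ("female", "Party X"),
]

# Cross-tabulation: count each (gender, party) combination.
table = Counter(voters)  # e.g. table[("female", "Party X")] == 3

# Marginal frequencies by gender (the row totals of the pivot table).
gender_totals = Counter(g for g, _ in voters)  # female 4, male 3
```

A chi-square test of independence on a table like this is what would indicate whether the two nominal variables are associated or the pattern is plausibly due to chance.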

From a quantitative analysis perspective, one of the most common operations performed on nominal data is dummy variable assignment, a method introduced earlier. For example, if a nominal variable has three categories (A, B, and C), two dummy variables would be created (for A and B), with C as the reference category, the category that serves as the baseline for comparison. [6] A related technique is indicator variable coding, which assigns a numerical value of 0 or 1 to each data point in a set, identifying whether an individual observation belongs to a particular group (set to one) or not (set to zero). [6] This numerical association allows for more flexibility in nominal data analysis: it captures differences not only between distinct nominal groups but also among data within a set, and supports the systematic study of interactions between nominal variables and other variables. [6]
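A minimal sketch of this dummy/indicator coding, assuming a hypothetical three-category variable with levels A, B, and C, where C is chosen as the reference category:

```python
# Hypothetical nominal observations with three categories; C is the reference
# category, so only dummies for A and B are created and C appears as the
# all-zero pattern across the dummy columns.
observations = ["A", "C", "B", "A", "C"]

def dummy_code(values, reference):
    """Return one 0/1 indicator column per non-reference category."""
    categories = sorted(set(values) - {reference})
    return {cat: [1 if v == cat else 0 for v in values] for cat in categories}

dummies = dummy_code(observations, reference="C")
# {'A': [1, 0, 0, 1, 0], 'B': [0, 0, 1, 0, 0]}
```

These two columns can then enter a regression directly; each coefficient measures the shift in the outcome for that category relative to the C baseline.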

References

  1. Agresti, Alan (2007). An Introduction to Categorical Data Analysis. Wiley Series in Probability and Statistics (2nd ed.). Hoboken, N.J.: Wiley-Interscience. ISBN 978-0-471-22618-5.
  2. Dahouda, Mwamba Kasongo; Joe, Inwhee (2021). "A Deep-Learned Embedding Technique for Categorical Features Encoding". IEEE Access. 9: 114381–114391. doi:10.1109/ACCESS.2021.3104357. ISSN 2169-3536.
  3. Rugg, Gordon; Petre, Marian (2006). A Gentle Guide to Research Methods. McGraw-Hill International. ISBN 9780335219278.
  4. Reynolds, H. T. (1984). Analysis of Nominal Data. SAGE Publications, Inc. doi:10.4135/9781412983303. ISBN 978-1-4129-8330-3.
  5. Reid, Howard M. (2014). Introduction to Statistics: Fundamental Concepts and Procedures of Data Analysis. Los Angeles: SAGE. ISBN 978-1-4522-7196-5.
  6. Ryan, Thomas P. (2009). Solutions Manual to Accompany Modern Regression Methods. Wiley Series in Probability and Statistics (2nd ed.). Hoboken, N.J.: Wiley. ISBN 978-0-470-08186-0.