Complexity

Complexity characterizes the behavior of a system or model whose components interact in multiple ways and follow local rules, leading to non-linearity, randomness, collective dynamics, hierarchy, and emergence. [1] [2]

The term is generally used to characterize something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts. The study of these complex linkages at various scales is the main goal of complex systems theory.

The intuitive criterion of complexity can be formulated as follows: a system would be more complex if more parts could be distinguished, and if more connections between them existed. [3]

As of 2010, a number of approaches to characterizing complexity have been used in science; Zayed et al. [4] reflect many of these. Neil Johnson states that "even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples..." Ultimately Johnson adopts the definition of "complexity science" as "the study of the phenomena which emerge from a collection of interacting objects". [5]

Overview

Definitions of complexity often depend on the concept of a "system" – a set of parts or elements that have relationships among them differentiated from relationships with other elements outside the relational regime. Many definitions tend to postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of relationships among the elements. However, what one sees as complex and what one sees as simple is relative and changes with time.

Warren Weaver posited in 1948 two forms of complexity: disorganized complexity, and organized complexity. [6] Phenomena of 'disorganized complexity' are treated using probability theory and statistical mechanics, while 'organized complexity' deals with phenomena that escape such approaches and confront "dealing simultaneously with a sizable number of factors which are interrelated into an organic whole". [6] Weaver's 1948 paper has influenced subsequent thinking about complexity. [7]

The approaches that embody concepts of systems, multiple elements, multiple relational regimes, and state spaces might be summarized as implying that complexity arises from the number of distinguishable relational regimes (and their associated state spaces) in a defined system.

Some definitions relate to the algorithmic basis for the expression of a complex phenomenon or model or mathematical expression, as later set out herein.

Disorganized vs. organized

One of the problems in addressing complexity has been formalizing the intuitive conceptual distinction between the large number of variances in relationships found in random collections and the sometimes large, but smaller, number of relationships between elements in systems where constraints (related to the correlation of otherwise independent elements) simultaneously reduce the variation from element independence and create distinguishable regimes of more uniform, or correlated, relationships or interactions.

Weaver perceived and addressed this problem, in at least a preliminary way, in drawing a distinction between "disorganized complexity" and "organized complexity".

In Weaver's view, disorganized complexity results from the particular system having a very large number of parts, say millions of parts, or many more. Though the interactions of the parts in a "disorganized complexity" situation can be seen as largely random, the properties of the system as a whole can be understood by using probability and statistical methods.

A prime example of disorganized complexity is a gas in a container, with the gas molecules as the parts. Some would suggest that a system of disorganized complexity may be compared with the (relative) simplicity of planetary orbits – the latter can be predicted by applying Newton's laws of motion. Of course, most real-world systems, including planetary orbits, eventually become theoretically unpredictable even using Newtonian dynamics, as discovered by modern chaos theory. [8]

Organized complexity, in Weaver's view, resides in nothing other than the non-random, or correlated, interaction between the parts. These correlated relationships create a differentiated structure that can, as a system, interact with other systems. The coordinated system manifests properties not carried or dictated by individual parts. The organized aspect of this form of complexity with regard to other systems, rather than the subject system, can be said to "emerge," without any "guiding hand".

The number of parts does not have to be very large for a particular system to have emergent properties. The properties of a system of organized complexity, and the behavior arising among them, may be understood through modeling and simulation, particularly modeling and simulation with computers. An example of organized complexity is a city neighborhood as a living mechanism, with the neighborhood people among the system's parts. [9]

Sources and factors

There are generally rules which can be invoked to explain the origin of complexity in a given system.

The source of disorganized complexity is the large number of parts in the system of interest, and the lack of correlation between elements in the system.

In the case of self-organizing living systems, usefully organized complexity comes from beneficially mutated organisms being selected to survive by their environment for their differential reproductive ability, or at least their success relative to inanimate matter or less organized complex organisms. See, for example, Robert Ulanowicz's treatment of ecosystems. [10]

The complexity of an object or system is a relative property. For instance, for many functions (problems), a computational complexity such as the time of computation is smaller when multitape Turing machines are used than when Turing machines with one tape are used. Random-access machines allow time complexity to be decreased further (Greenlaw and Hoover 1998: 226), while inductive Turing machines can decrease even the complexity class of a function, language or set (Burgin 2005). This shows that the tools of activity can be an important factor of complexity.

Varied meanings

In several scientific fields, "complexity" has a precise meaning, for example generalized Kolmogorov complexity in the theory of computation [11] and statistical complexity in physics. [12] [13] [14]

Other fields introduce less precisely defined notions of complexity, for example in information-theoretic treatments of self-organisation and emergence [15] or in complex network analysis. [16]

Study

Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. From one perspective, that which is somehow complex – displaying variation without being random – is most worthy of interest given the rewards found in the depths of exploration.

The term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions. [17] This means that complex is the opposite of independent, while complicated is the opposite of simple.

While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human brains or social systems. [18] One such interdisciplinary group of fields is relational order theories.

Topics

Behavior

The behavior of a complex system is often said to be due to emergence and self-organization. Chaos theory has investigated the sensitivity of systems to variations in initial conditions as one cause of complex behavior.
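This sensitivity can be demonstrated in a few lines. The sketch below is a minimal illustration; the logistic map, the parameter r = 4, and the perturbation size 10^-6 are chosen here for convenience and are not drawn from the sources above:

```python
# Sensitivity to initial conditions in the chaotic logistic map
# x_{t+1} = r * x_t * (1 - x_t) with r = 4.0.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # reference trajectory
b = logistic_trajectory(0.200001)   # same system, perturbed by 1e-6

for t in (0, 10, 20, 30, 40, 50):
    # The gap grows roughly exponentially until it saturates at O(1),
    # after which the two trajectories are effectively uncorrelated.
    print(f"step {t:2d}: |a - b| = {abs(a[t] - b[t]):.6f}")
```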

Mechanisms

Recent developments in artificial life, evolutionary computation and genetic algorithms have led to an increasing emphasis on complexity and complex adaptive systems.

Simulations

In social science, the study of the emergence of macro-properties from micro-properties is known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology.

Systems

Systems theory has long been concerned with the study of complex systems (in recent times, complexity theory and complex systems have also been used as names of the field). These systems are present in the research of a variety of disciplines, including biology, economics, social studies and technology. Recently, complexity has also become a natural domain of interest in real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear, and difficult to model. In specific circumstances, they may exhibit low-dimensional behavior.

Data

In information theory, algorithmic information theory is concerned with the complexity of strings of data.

Complex strings are harder to compress. While intuition tells us that this may depend on the codec used to compress a string (a codec could be theoretically created in any arbitrary language, including one in which the very small command "X" could cause the computer to output a very complicated string like "18995316"), any two Turing-complete languages can be implemented in each other, meaning that the length of two encodings in different languages will vary by at most the length of the "translation" language – which will end up being negligible for sufficiently large data strings.
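True algorithmic (Kolmogorov) complexity is uncomputable, but any practical compressor gives a rough upper bound, which is enough to illustrate the idea. A minimal sketch using Python's standard zlib codec (the sample strings are arbitrary):

```python
import os
import zlib

# Compressed size as a crude, computable proxy for the algorithmic
# complexity of a string: regular data compresses well, random data
# is essentially incompressible.

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

repetitive = b"ab" * 5000        # 10,000 bytes of pure repetition
random_ish = os.urandom(10000)   # 10,000 bytes of random noise

print("repetitive:", compressed_len(repetitive), "bytes")  # a few dozen bytes
print("random:    ", compressed_len(random_ish), "bytes")  # ~10,000 bytes
```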

These algorithmic measures of complexity tend to assign high values to random noise. However, under a certain understanding of complexity, arguably the most intuitive one, random noise is meaningless and so not complex at all.

Information entropy is also sometimes used in information theory as indicative of complexity, but entropy is also high for randomness. In the case of complex systems, information fluctuation complexity was designed so as not to measure randomness as complex and has been useful in many applications. More recently, a complexity metric was developed for images that can avoid measuring noise as complex by using the minimum description length principle. [19]
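For concreteness, the empirical Shannon entropy of a byte string can be computed directly. The sketch below is an illustrative toy, not any of the cited metrics; it shows the caveat in action, with pure noise scoring the maximum:

```python
import math
import os
from collections import Counter

# Empirical Shannon entropy of a byte string, in bits per byte.

def entropy_bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits_per_byte(b"a" * 8000))       # 0.0: no uncertainty at all
print(entropy_bits_per_byte(b"abcd" * 2000))    # 2.0: four equiprobable symbols
print(entropy_bits_per_byte(os.urandom(8000)))  # ~8.0: maximal, yet meaningless noise
```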

Classification problems

There has also been interest in measuring the complexity of classification problems in supervised machine learning. This can be useful in meta-learning to determine for which data sets filtering (or removing suspected noisy instances from the training set) is the most beneficial [20] and could be expanded to other areas. For binary classification, such measures can consider the overlaps in feature values from differing classes, the separability of the classes, and measures of geometry, topology, and density of manifolds. [21]
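As a concrete example of an overlap-based measure, the sketch below computes a per-feature Fisher discriminant ratio and takes its maximum, in the spirit of the F1 measure of Ho and Basu [21]; the synthetic two-class data and names are invented here for illustration:

```python
import numpy as np

# Fisher's discriminant ratio per feature for a binary problem:
# f = (mu1 - mu2)^2 / (var1 + var2). Taking the maximum over features
# gives an F1-style score; a low value signals heavy class overlap,
# i.e. a more complex classification problem.

def fisher_ratio(X: np.ndarray, y: np.ndarray) -> float:
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0)
    return float(np.max(num / den))

rng = np.random.default_rng(0)
easy = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
hard = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(0.5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

print("well-separated classes, F1 =", round(fisher_ratio(easy, y), 2))  # large
print("overlapping classes,    F1 =", round(fisher_ratio(hard, y), 2))  # small
```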

For non-binary classification problems, instance hardness [22] is a bottom-up approach that first seeks to identify instances that are likely to be misclassified (assumed to be the most complex). The characteristics of such instances are then measured using supervised measures such as the number of disagreeing neighbors or the likelihood of the assigned class label given the input features.
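A minimal sketch of such a disagreeing-neighbors score follows; the choice k = 5 and the synthetic data are illustrative assumptions, not the exact procedure of [22]:

```python
import numpy as np

# A simple instance-hardness estimate: the fraction of an instance's
# k nearest neighbours (excluding itself) that carry a different label.
# Instances surrounded by the other class are likely to be misclassified.

def knn_disagreement(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point itself
    nbrs = np.argsort(d, axis=1)[:, :k]    # indices of k nearest neighbours
    return (y[nbrs] != y[:, None]).mean(axis=1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

hardness = knn_disagreement(X, y)
print("hardest instances:", np.argsort(hardness)[-5:])  # likely near the class boundary
```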

In molecular recognition

A recent study based on molecular simulations and compliance constants describes molecular recognition as a phenomenon of organization. [23] Even for small molecules like carbohydrates, the recognition process cannot be predicted or designed, even assuming that the strength of each individual hydrogen bond is exactly known.

The law of requisite complexity

Building on the law of requisite variety, Boisot and McKelvey formulated the "Law of Requisite Complexity", which holds that, in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts. [24]

Positive, appropriate and negative complexity

The application of the Law of Requisite Complexity in project management, as proposed by Stefan Morcov, is the analysis of positive, appropriate and negative complexity. [25] [26]

In project management

Project complexity is the property of a project which makes it difficult to understand, foresee, and keep under control its overall behavior, even when given reasonably complete information about the project system. [27] [28]

In systems engineering

Maik Maurer considers complexity a reality in engineering. He proposed a methodology for managing complexity in systems engineering [29]:

1. Define the system.
2. Identify the type of complexity.
3. Determine the strategy.
4. Determine the method.
5. Model the system.
6. Implement the method.

Applications

Computational complexity theory is the study of the complexity of problems – that is, the difficulty of solving them. Problems can be classified by complexity class according to the time it takes for an algorithm – usually a computer program – to solve them as a function of the problem size. Some problems are difficult to solve, while others are easy. For example, some difficult problems need algorithms that take an exponential amount of time in terms of the size of the problem to solve. Take the travelling salesman problem, for example. It can be solved, as denoted in Big O notation, in time $O(n^2 2^n)$ (where n is the size of the network to visit – the number of cities the travelling salesman must visit exactly once). As the size of the network of cities grows, the time needed to find the route grows (more than) exponentially.
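As an illustration of that bound, the Held–Karp dynamic programming algorithm solves the travelling salesman problem exactly in $O(n^2 2^n)$ time. A compact sketch (the 4-city distance matrix is an arbitrary example):

```python
from itertools import combinations

# Held-Karp dynamic programme for the travelling salesman problem.
# Runs in O(n^2 * 2^n) time, matching the bound quoted above: exact,
# but already impractical for a few dozen cities.

def held_karp(dist):
    n = len(dist)
    # C[(S, j)] = cost of the cheapest path that starts at city 0,
    # visits every city in frozenset S exactly once, and ends at j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))  # 21, the length of the optimal tour
```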

Even though a problem may be computationally solvable in principle, in actual practice it may not be that simple. These problems might require large amounts of time or an inordinate amount of space. Computational complexity may be approached from many different aspects. Computational complexity can be investigated on the basis of time, memory or other resources used to solve the problem. Time and space are two of the most important and popular considerations when problems of complexity are analyzed.

There exists a certain class of problems that, although solvable in principle, require so much time or space that attempting to solve them is not practical. These problems are called intractable.

There is another form of complexity called hierarchical complexity. It is orthogonal to the forms of complexity discussed so far, which are called horizontal complexity.

Emerging applications in other fields

The concept of complexity is being increasingly used in the study of cosmology, big history, and cultural evolution with increasing granularity, as well as increasing quantification.

Application in cosmology

Eric Chaisson has advanced a cosmological complexity metric [30] which he terms Energy Rate Density. [31] This approach has been expanded in various works, most recently applied to measuring the evolving complexity of nation-states and their growing cities. [32]

Related Research Articles

Kolmogorov complexity – Measure of algorithmic complexity

In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963, and is a generalization of classical information theory.

In the computer science subfield of algorithmic information theory, a Chaitin constant or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer by the mechanical application of mathematical steps, such as an algorithm.

Entropy (information theory) – Expected amount of information needed to specify the output of a stochastic data source

In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable $X$, which takes values in the alphabet $\mathcal{X}$ and is distributed according to $p : \mathcal{X} \to [0, 1]$, the entropy is $\mathrm{H}(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x)$.

Quantum information – Information held in the state of a quantum system

Quantum information is the information of the state of a quantum system. It is the basic entity of study in quantum information theory, and can be manipulated using quantum information processing techniques. Quantum information refers to both the technical definition in terms of von Neumann entropy and the general computational term.

Emergence – Unpredictable phenomenon in complex systems

In philosophy, systems theory, science, and art, emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and which emerge only when the parts interact in a wider whole.

Genetic algorithm – Competitive algorithm for searching a problem space

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, causal inference, etc.

The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.

Theoretical computer science – Subfield of computer science and mathematics

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.

Ray Solomonoff was the inventor of algorithmic probability and its associated General Theory of Inductive Inference, and a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.

Algorithmic probability

In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs.

Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.

Specified complexity is a creationist argument introduced by William Dembski, used by advocates to promote the pseudoscience of intelligent design. According to Dembski, the concept can formalize a property that singles out patterns that are both specified and complex, where in Dembski's terminology, a specified pattern is one that admits short descriptions, whereas a complex pattern is one that is unlikely to occur by chance. Proponents of intelligent design use specified complexity as one of their two main arguments, alongside irreducible complexity.

No free lunch in search and optimization – Average solution cost is the same with any method

In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch", that is, no method offers a "short cut". This is under the assumption that the search space is a probability density function. It does not apply to the case where the search space has underlying structure that can be exploited more efficiently than random search or even has closed-form solutions that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning. Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the current state of machine learning research on the problem of induction.

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."

In theoretical computer science, a computational problem is a problem that may be solved by an algorithm. For example, the problem of factoring a given positive integer into its prime factors is a computational problem.

Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking has been made popular by Edward de Bono. This thinking technique is applied to generate creative ideas and solve problems. Similarly, by applying lateral-computing techniques to a problem, it can become much easier to arrive at a computationally inexpensive, easy to implement, efficient, innovative or unconventional solution.

Project complexity is the property of a project which makes it difficult to understand, foresee, and keep under control its overall behavior, even when given reasonably complete information about the project system. With a lens of systems thinking, project complexity can be defined as an intricate arrangement of the varied interrelated parts in which the elements can change and evolve constantly with an effect on the project objectives. The identification of complex projects is specifically important to multi-project engineering environments.

References

  1. Johnson, Steven (2001). Emergence: The Connected Lives of Ants, Brains, Cities. New York: Scribner. p. 19. ISBN 978-3411040742.
  2. "What is complex systems science? | Santa Fe Institute". www.santafe.edu. Archived from the original on 2022-04-14. Retrieved 2022-04-17.
  3. Heylighen, Francis (1999). "The Growth of Structural and Functional Complexity during Evolution", in F. Heylighen, J. Bollen & A. Riegler (eds.), The Evolution of Complexity. Kluwer Academic, Dordrecht: 17–44.
  4. Zayed, J. M.; Nouvel, N.; Rauwald, U.; Scherman, O. A. (2010). "Chemical Complexity – supramolecular self-assembly of synthetic and biological building blocks in water". Chemical Society Reviews. 39: 2806–2816. http://pubs.rsc.org/en/Content/ArticleLanding/2010/CS/b922348g
  5. Johnson, Neil F. (2009). "Chapter 1: Two's company, three is complexity" (PDF). Simply Complexity: A Clear Guide to Complexity Theory. Oneworld Publications. p. 3. ISBN 978-1780740492. Archived from the original (PDF) on 2015-12-11. Retrieved 2013-06-29.
  6. Weaver, Warren (1948). "Science and Complexity" (PDF). American Scientist. 36 (4): 536–44. PMID 18882675. Archived from the original (PDF) on 2009-10-09. Retrieved 2007-11-21.
  7. Johnson, Steven (2001). Emergence: The Connected Lives of Ants, Brains, Cities, and Software. New York: Scribner. p. 46. ISBN 978-0-684-86875-2.
  8. Debnath, Lokenath. Sir James Lighthill and Modern Fluid Mechanics. The University of Texas-Pan American, US. Imperial College Press, Singapore. ISBN 978-1-84816-113-9, ISBN 1-84816-113-1. p. 31. Online at http://cs5594.userapi.com/u11728334/docs/25eb2e1350a5/Lokenath_Debnath_Sir_James_Lighthill_and_mode.pdf
  9. Jacobs, Jane (1961). The Death and Life of Great American Cities. New York: Random House.
  10. Ulanowicz, Robert (1997). Ecology, the Ascendant Perspective. Columbia.
  11. Burgin, M. (1982). "Generalized Kolmogorov complexity and duality in theory of computations". Notices of the Russian Academy of Sciences. 25 (3): 19–23.
  12. Crutchfield, J.P.; Young, K. (1989). "Inferring statistical complexity". Physical Review Letters. 63 (2): 105–108. Bibcode:1989PhRvL..63..105C. doi:10.1103/PhysRevLett.63.105. PMID 10040781.
  13. Crutchfield, J.P.; Shalizi, C.R. (1999). "Thermodynamic depth of causal states: Objective complexity via minimal representations". Physical Review E. 59 (1): 275–283. Bibcode:1999PhRvE..59..275C. doi:10.1103/PhysRevE.59.275.
  14. Grassberger, P. (1986). "Toward a quantitative theory of self-generated complexity". International Journal of Theoretical Physics. 25 (9): 907–938. Bibcode:1986IJTP...25..907G. doi:10.1007/bf00668821. S2CID 16952432.
  15. Prokopenko, M.; Boschetti, F.; Ryan, A. (2009). "An information-theoretic primer on complexity, self-organisation and emergence". Complexity. 15 (1): 11–28. Bibcode:2009Cmplx..15a..11P. doi:10.1002/cplx.20249.
  16. A complex network analysis example: Grandjean, Martin (2017). "Analisi e visualizzazioni delle reti in storia. L'esempio della cooperazione intellettuale della Società delle Nazioni". Memoria e Ricerca (2): 371–393. doi:10.14647/87204. See also the French version.
  17. Lissack, Michael R.; Roos, Johan (2000). The Next Common Sense: The e-Manager's Guide to Mastering Complexity. Intercultural Press. ISBN 978-1-85788-235-3.
  18. Bastardas-Boada, Albert (January 2019). "Complexics as a meta-transdisciplinary field". Congrès Mondial Pour la Pensée Complexe. Les Défis d'Un Monde Globalisé (Paris, 8-9 December). Unesco.
  19. Mahon, L.; Lukasiewicz, T. (2023). "Minimum Description Length Clustering to Measure Meaningful Image Complexity". Pattern Recognition. 144.
  20. Sáez, José A.; Luengo, Julián; Herrera, Francisco (2013). "Predicting Noise Filtering Efficacy with Data Complexity Measures for Nearest Neighbor Classification". Pattern Recognition. 46 (1): 355–364. Bibcode:2013PatRe..46..355S. doi:10.1016/j.patcog.2012.07.009.
  21. Ho, T.K.; Basu, M. (2002). "Complexity Measures of Supervised Classification Problems". IEEE Transactions on Pattern Analysis and Machine Intelligence. 24 (3): 289–300.
  22. Smith, M.R.; Martinez, T.; Giraud-Carrier, C. (2014). "An Instance Level Analysis of Data Complexity". Machine Learning. 95 (2): 225–256.
  23. Grunenberg, Jorg (2011). "Complexity in molecular recognition". Phys. Chem. Chem. Phys. 13 (21): 10136–10146. Bibcode:2011PCCP...1310136G. doi:10.1039/c1cp20097f. PMID 21503359.
  24. Boisot, M.; McKelvey, B. (2011). "Complexity and organization-environment relations: revisiting Ashby's law of requisite variety". In P. Allen, The Sage Handbook of Complexity and Management: 279–298.
  25. Morcov, Stefan; Pintelon, Liliane; Kusters, Rob J. (2020). "IT Project Complexity Management Based on Sources and Effects: Positive, Appropriate and Negative" (PDF). Proceedings of the Romanian Academy – Series A. 21 (4): 329–336. Archived (PDF) from the original on 2020-12-30.
  26. Morcov, S. (2021). Managing Positive and Negative Complexity: Design and Validation of an IT Project Complexity Management Framework. KU Leuven University. Available at https://lirias.kuleuven.be/retrieve/637007 Archived 2021-11-07 at the Wayback Machine.
  27. Marle, Franck; Vidal, Ludovic-Alexandre (2016). Managing Complex, High Risk Projects – A Guide to Basic and Advanced Project Management. London: Springer-Verlag.
  28. Morcov, Stefan; Pintelon, Liliane; Kusters, Rob J. (2020). "Definitions, characteristics and measures of IT Project Complexity – a Systematic Literature Review" (PDF). International Journal of Information Systems and Project Management. 8 (2): 5–21. doi:10.12821/ijispm080201. S2CID 220545211. Archived (PDF) from the original on 2020-07-11.
  29. Maurer, Maik (2017). Complexity Management in Engineering Design – a Primer. Berlin, Germany. ISBN 978-3-662-53448-9. OCLC 973540283.
  30. Chaisson, Eric J. (2002). Cosmic Evolution – The Rise of Complexity in Nature. Harvard University Press. https://www.worldcat.org/title/1023218202
  31. Chaisson, Eric J. (2011). "Energy rate density. II. Probing further a new complexity metric". Complexity. 17: 44–63. https://onlinelibrary.wiley.com/doi/10.1002/cplx.20373 , https://lweb.cfa.harvard.edu/~ejchaisson/reprints/EnergyRateDensity_II_galley_2011.pdf
  32. Chaisson, Eric J. (2022). "Energy Budgets of Evolving Nations and Their Growing Cities". Energies. 15 (21): 8212.
