Tertiary review

In software engineering, a tertiary review is a systematic review of systematic reviews. [1] It is also referred to as a tertiary study in the software engineering literature; in medicine, the more common term is umbrella review.

Kitchenham et al. [1] suggest that, methodologically, there is no difference between a systematic review and a tertiary review. However, as the software engineering community has started performing tertiary reviews, concerns unique to them have surfaced. These include the challenge of assessing the quality of systematic reviews, [2] the validation of search strings, [3] and the additional risk of double counting. [4]
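The double-counting threat arises when the same primary study is included in several of the systematic reviews being aggregated. A minimal sketch of an overlap check (all review names and study identifiers are invented for illustration):

```python
# Illustrative sketch: find primary studies that appear in more than one
# systematic review, which a naive tertiary aggregation would double-count.
# Review names and study DOIs below are hypothetical.
from collections import Counter

reviews = {
    "Review A": {"10.1000/s1", "10.1000/s2", "10.1000/s3"},
    "Review B": {"10.1000/s2", "10.1000/s4"},
    "Review C": {"10.1000/s3", "10.1000/s2"},
}

# Count how many reviews include each primary study.
counts = Counter(study for studies in reviews.values() for study in studies)
double_counted = {study for study, n in counts.items() if n > 1}
print(sorted(double_counted))  # ['10.1000/s2', '10.1000/s3']
```

In practice, matching relies on normalized identifiers such as DOIs or titles, since the same study can be cited inconsistently across reviews.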

Examples of tertiary reviews in software engineering literature

Examples include tertiary studies assessing the quality of test artifacts [5] and surveying machine learning for software engineering. [6]

Related Research Articles

Artificial neural network: Computational model used in machine learning, based on connected, hierarchical functions

Artificial neural networks are a class of machine learning models built on principles of neuronal organization, studied in connectionism, found in the biological neural networks that constitute animal brains.

Pair programming: Collaborative technique for software development

Pair programming is a software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed. The two programmers switch roles frequently.

A process theory is a system of ideas that explains how an entity changes and develops. Process theories are often contrasted with variance theories, that is, systems of ideas that explain the variance in a dependent variable based on one or more independent variables. While process theories focus on how something happens, variance theories focus on why something happens. Examples of process theories include evolution by natural selection, continental drift and the nitrogen cycle.

Code review is a software quality assurance activity in which one or more people check a program, mainly by viewing and reading parts of its source code, either after implementation or as an interruption of implementation. At least one of the persons must not have authored the code. The persons performing the checking, excluding the author, are called "reviewers".

In the context of software engineering, software quality refers to two related but distinct notions: functional quality, how well the software conforms to its functional requirements, and structural quality, how well it meets the non-functional requirements that support the delivery of those functions.

Experimental software engineering involves running experiments on the processes and procedures involved in the creation of software systems, with the intent that the data be used as the basis of theories about the processes involved in software engineering. A number of research groups primarily use empirical and experimental techniques.

Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large-scale software engineering problems because of their computational complexity or their assumptions about the problem structure. Researchers and practitioners instead use metaheuristic search techniques, which impose few assumptions on the problem structure, to find near-optimal or "good-enough" solutions.
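As an illustration of the SBSE idea, the following sketch uses a simple hill climber, one of the simplest metaheuristics, to select a subset of test cases that balances branch coverage against execution cost. The test suite, coverage data and fitness weights are invented for the example:

```python
# Illustrative SBSE sketch: hill climbing over test-case subsets.
# Hypothetical tests: name -> (covered branches, execution cost).
tests = {
    "t1": ({"b1", "b2"}, 3),
    "t2": ({"b2", "b3"}, 2),
    "t3": ({"b4"}, 1),
    "t4": ({"b1", "b4"}, 4),
}

def fitness(selection):
    """Reward branch coverage, penalise total execution cost."""
    covered = set().union(*(tests[t][0] for t in selection)) if selection else set()
    cost = sum(tests[t][1] for t in selection)
    return 10 * len(covered) - cost

def hill_climb():
    """Repeatedly flip one test in/out while any flip improves fitness."""
    best = frozenset()
    improved = True
    while improved:
        improved = False
        for t in tests:
            neighbour = best ^ {t}
            if fitness(neighbour) > fitness(best):
                best, improved = neighbour, True
    return best

best = hill_climb()
print(sorted(best), fitness(best))  # ['t1', 't2', 't3'] 34
```

Real SBSE studies tackle much larger search spaces (test suites, module structures, release plans), where exhaustive search is infeasible and metaheuristics shine.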

Empirical software engineering (ESE) is a subfield of software engineering (SE) research that uses empirical research methods to study and evaluate an SE phenomenon of interest. The phenomenon may refer to software development tools/technology, practices, processes, policies, or other human and organizational aspects.

Pearl growing is a metaphor, used in information literacy, taken from the way a small grain of sand grows into a pearl. It is also called "snowballing", alluding to the way a snowball accumulates more snow as it rolls. In this context it refers to the process of using one information item to find further information items. This search strategy is most successfully employed at the beginning of the research process, as the searcher uncovers new pearls about his or her topic.
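The snowballing process can be sketched as a traversal of a citation graph, where each newly found paper seeds the next round of searching. The papers and citation links below are invented for illustration:

```python
# Illustrative sketch of backward snowballing as breadth-first search
# over a (hypothetical) citation graph: paper -> papers it cites.
from collections import deque

cites = {
    "seed": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p3", "p4"],
    "p3": [],
    "p4": ["p5"],
    "p5": [],
}

def snowball(start):
    found, queue = {start}, deque([start])
    while queue:
        paper = queue.popleft()
        for ref in cites.get(paper, []):
            if ref not in found:  # each new "pearl" seeds the next round
                found.add(ref)
                queue.append(ref)
    return found

print(sorted(snowball("seed")))  # ['p1', 'p2', 'p3', 'p4', 'p5', 'seed']
```

In a real review, each candidate found this way would still be screened against the inclusion criteria before joining the set of selected studies.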

Continuous deployment (CD) is a software engineering approach in which software functionalities are delivered frequently and through automated deployments.

Software process simulation modelling: Like any simulation, software process simulation (SPS) is the numerical evaluation of a mathematical model that imitates the behavior of the software development process being modeled. SPS has the ability to model the dynamic nature of software development and handle the uncertainty and randomness inherent in it.
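As a minimal illustration of the idea, the following sketch runs a Monte Carlo simulation of a three-phase development process whose phase durations are uncertain. The phases and triangular-distribution parameters are invented for the example:

```python
# Illustrative software process simulation: Monte Carlo estimate of total
# project duration from uncertain phase durations. All numbers hypothetical.
import random

random.seed(42)

# (minimum, most likely, maximum) duration in days for each phase.
phases = {"design": (5, 10, 20), "coding": (10, 15, 30), "testing": (5, 8, 15)}

def simulate_once():
    """One simulated run: sample each phase from a triangular distribution."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in phases.values())

runs = [simulate_once() for _ in range(10_000)]
mean = sum(runs) / len(runs)
p90 = sorted(runs)[int(0.9 * len(runs))]
print(f"mean {mean:.1f} days, 90th percentile {p90:.1f} days")
```

The spread between the mean and the 90th percentile is exactly the kind of uncertainty information that a deterministic schedule estimate cannot provide.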

The GRADE approach is a method of assessing the certainty of evidence and the strength of recommendations in health care. It provides a structured and transparent evaluation of the importance of outcomes of alternative management strategies, acknowledgment of patients' and the public's values and preferences, and comprehensive criteria for downgrading and upgrading the certainty of evidence. It has important implications for those summarizing evidence for systematic reviews, health technology assessments and clinical practice guidelines, as well as for other decision makers.

Benefit dependency network

A benefit dependency network (BDN) is a diagram of cause-and-effect relationships, drawn according to a specific structure that organizes multiple cause-effect relationships into capabilities, changes and benefits. It can be considered a business-oriented counterpart of what engineers would call goal modeling and is usually read from right to left, providing a one-page overview of how a business generates value, starting with the high-level drivers for change, such as digital initiatives or cross-organizational ERP management. First proposed by Cranfield School of Management as part of a benefits management approach, the original model has developed to encompass all the domains required for benefits management, namely Why, What, Who and How. Recent development has added weights to the connections to create a weighted graph, so that causal analysis is possible on the represented value chains and different strategies can be compared according to value and outcome. These chains provide a way to construct a compelling story that shows how the proposed benefits can be realized from the changes being considered. In software engineering, Jabbari et al. report the use of BDN for software process improvement, using it to structure the results of a systematic review on DevOps.
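The weighted-graph idea can be sketched by attaching a weight to each cause-effect link and multiplying weights along a chain to compare alternative strategies. All initiative names and weights below are invented for illustration:

```python
# Illustrative sketch of a weighted benefit dependency network: each edge
# links a change, capability or benefit, and the weight expresses the
# strength of the cause-effect link. All names and weights are hypothetical.
edges = {
    ("automate deployment", "faster releases"): 0.8,
    ("automate deployment", "fewer manual errors"): 0.6,
    ("faster releases", "higher customer value"): 0.7,
    ("fewer manual errors", "lower rework cost"): 0.9,
    ("introduce code review", "fewer manual errors"): 0.5,
}

def chain_strength(path):
    """Multiply link weights along one cause-effect chain."""
    result = 1.0
    for a, b in zip(path, path[1:]):
        result *= edges[(a, b)]
    return result

s1 = chain_strength(["automate deployment", "faster releases", "higher customer value"])
s2 = chain_strength(["introduce code review", "fewer manual errors", "lower rework cost"])
print(f"{s1:.2f} vs {s2:.2f}")  # 0.56 vs 0.45
```

Comparing chain strengths in this way is one simple reading of "causal analysis on the represented value chains"; richer models aggregate over all chains reaching a benefit.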

Magne Jørgensen is a Norwegian scientist and software engineer in the field of scientific computing. Jørgensen is chief research scientist at Simula Research Laboratory and is involved in the Research Group for Programming and Software Engineering as professor at the Department of Informatics at the University of Oslo.

Tore Dybå is a Norwegian scientist and software engineer in the fields of information systems and computer science. He has been a Chief Scientist at SINTEF ICT since 2003.

Metascience is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing inefficiency. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and find where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science". In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."

Rapid reviews are a systematic survey of literature on a topic or question of interest. Compared to a systematic review of literature, in a rapid review, several design decisions and practical steps are undertaken to reduce the time it takes to identify, aggregate and answer the question of interest. The Cochrane Rapid Reviews Methods Group proposes that rapid reviews can take different forms, and they define rapid reviews as: "A form of knowledge synthesis that accelerates the process of conducting a traditional systematic review through streamlining or omitting specific methods to produce evidence for stakeholders in a resource-efficient manner".

Ali Dehghantanha is an academic-entrepreneur in cybersecurity and cyber threat intelligence. He is a Professor of Cybersecurity and a Canada Research Chair in Cybersecurity and Threat Intelligence.

Katrina Groth is an American mechanical engineer and professor. Groth is an associate professor in Mechanical Engineering at the University of Maryland, College Park, where she is the associate director for research for the Center for Risk and Reliability and the director of the Systems Risk and Reliability Analysis lab (SyRRA). Groth previously served as the Principal Research & Development Engineer at Sandia National Laboratories.

References

  1. Kitchenham, B. A. (2016). Evidence-based Software Engineering and Systematic Reviews. Boca Raton. ISBN 9781482228656.
  2. Usman, Muhammad; Ali, Nauman Bin; Wohlin, Claes (2023). "A Quality Assessment Instrument for Systematic Literature Reviews in Software Engineering". E-Informatica Software Engineering Journal. 17: 230105. arXiv:2109.10134. doi:10.37190/e-inf230105. S2CID 237581262.
  3. Börstler, Jürgen; bin Ali, Nauman; Unterkalmsteiner, Michael (2022). "How good are my search strings? Reflections on using an existing review as a quasi-gold standard". E-Informatica Software Engineering Journal. 16 (1): 220103. doi:10.37190/e-inf220103. S2CID 245255682.
  4. Börstler, Jürgen; bin Ali, Nauman; Petersen, Kai (2023). "Double-counting in software engineering tertiary studies — An overlooked threat to validity". Information and Software Technology. 158: 107174. doi:10.1016/j.infsof.2023.107174. ISSN 0950-5849. S2CID 257173845.
  5. Tran, Huynh Khanh Vi; Unterkalmsteiner, Michael; Börstler, Jürgen; Ali, Nauman bin (2021). "Assessing test artifact quality—A tertiary study". Information and Software Technology. 139: 106620. doi:10.1016/j.infsof.2021.106620.
  6. Kotti, Zoe; Galanopoulou, Rafaila; Spinellis, Diomidis (2023). "Machine Learning for Software Engineering: A Tertiary Study". ACM Computing Surveys. 55 (12): 256:1–256:39. arXiv:2211.09425. doi:10.1145/3572905.
  7. Munir, Hussan; Moayyed, Misagh; Petersen, Kai (2014). "Considering rigor and relevance when evaluating test driven development: A systematic review". Information and Software Technology. 56 (4): 375–394. doi:10.1016/j.infsof.2014.01.002.