| Discipline | Software engineering |
|---|---|
| Language | English |
| Edited by | Tao Xie and Robert M. Hierons |
| History | 1991–present |
| Publisher | John Wiley & Sons |
| Frequency | 8/year |
| Impact factor | 1.267 (2020) |
| ISO 4 abbreviation | Softw. Test. Verif. Reliab. |
| CODEN | JTREET |
| ISSN | 0960-0833 (print); 1099-1689 (web) |
| LCCN | 2001212279 |
| OCLC no. | 27920252 |
Software Testing, Verification and Reliability is a peer-reviewed scientific journal in the field of software testing, verification, and reliability, published by John Wiley & Sons.
STVR was founded in 1991 by Derek Yates.[1] Martin Woodward became editor-in-chief in 1992,[2] and was later joined by Lee White. They were succeeded in 2006 by Jeff Offutt,[1] who was joined by Rob Hierons in 2011.[3] Jeff Offutt resigned and Tao Xie became co-editor-in-chief in July 2019.[4]
The journal is abstracted and indexed in the Science Citation Index Expanded and Current Contents/Engineering, Computing & Technology. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.267.[5]
Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a system under test (SUT), or to represent testing strategies and a test environment.
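As a minimal sketch of the first approach, the hypothetical Python example below (all names such as `Turnstile`, `MODEL` and `generate_tests` are invented for illustration) derives test sequences from a small finite-state model of a turnstile and runs them against a toy implementation, using the state predicted by the model as the expected result:

```python
# Hypothetical behavioural model: (state, event) -> next state.
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

class Turnstile:
    """Toy system under test whose behaviour should match MODEL."""
    def __init__(self):
        self.state = "locked"
    def coin(self):
        self.state = "unlocked"
    def push(self):
        self.state = "locked"

def run_model(model, state, events):
    """Predict the state the model reaches after a sequence of events."""
    for event in events:
        state = model[(state, event)]
    return state

def generate_tests(model, start="locked", length=3):
    """Enumerate every event sequence of exactly `length` steps enabled by the model."""
    sequences = [[]]
    for _ in range(length):
        sequences = [seq + [event]
                     for seq in sequences
                     for (state, event) in model
                     if state == run_model(model, start, seq)]
    return sequences

def run_sut(events):
    """Drive the implementation with the same events and report its final state."""
    sut = Turnstile()
    for event in events:
        getattr(sut, event)()
    return sut.state

for seq in generate_tests(MODEL):
    expected = run_model(MODEL, "locked", seq)
    assert run_sut(seq) == expected, f"{seq}: expected {expected}"
print("all model-derived tests passed")
```

In practice, model-based testing tools derive much richer test suites (for example, covering every transition or transition pair) from far larger models, but the division of labour is the same: the model supplies both the stimuli and the expected behaviour.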
Software quality assurance (SQA) is a means and practice of monitoring all software engineering processes, methods, and work products to ensure compliance with defined standards. It may include ensuring conformance to standards or models, such as ISO/IEC 9126, SPICE or CMMI.
Jonathan P. Bowen FBCS FRSA is a British computer scientist and an Emeritus Professor at London South Bank University, where he headed the Centre for Applied Formal Methods. Prof. Bowen is also the Chairman of Museophile Limited and has been a Professor of Computer Science at Birmingham City University, Visiting Professor at the Pratt Institute, University of Westminster and King's College London, and a visiting academic at University College London.
Mutation testing is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant, and a test kills a mutant when it makes the mutant's behaviour observably differ from that of the original version. Test suites are measured by the percentage of mutants that they kill. New tests can be designed to kill additional mutants. Mutants are based on well-defined mutation operators that either mimic typical programming errors or force the creation of valuable tests. The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution. Mutation testing is a form of white-box testing.
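A minimal sketch of the idea, assuming a hypothetical program `is_adult` and a single hand-written mutant (real mutation tools generate mutants automatically by applying mutation operators to the code):

```python
def is_adult(age):            # original program
    return age >= 18

def is_adult_mutant(age):     # mutant: relational operator ">=" replaced by ">"
    return age > 18

# Test suites: (input, expected output) pairs.
TESTS = [(17, False), (30, True)]          # weak suite: misses the boundary value
TESTS_IMPROVED = TESTS + [(18, True)]      # boundary case added

def mutation_score(mutants, tests):
    """Fraction of mutants killed, i.e. exposed by at least one failing test."""
    killed = sum(
        any(m(x) != expected for x, expected in tests) for m in mutants
    )
    return killed / len(mutants)

mutants = [is_adult_mutant]
print(mutation_score(mutants, TESTS))           # 0.0 -> the mutant survives
print(mutation_score(mutants, TESTS_IMPROVED))  # 1.0 -> the mutant is killed
```

Here the weak test suite never exercises the boundary value 18, so the mutant survives; adding the boundary case kills it, which is exactly the kind of weakness in test data that mutation testing is meant to reveal.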
In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools. These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they can be used to infer useful information.
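As a rough illustration only (real formal specifications are written in dedicated notations such as Z, VDM or Alloy and analysed with their own tools), the hypothetical sketch below expresses a sorting routine's precondition and postcondition as executable predicates and checks them at run time:

```python
def precondition(xs):
    # The input is a finite sequence (here: a Python list) of comparable elements.
    return isinstance(xs, list)

def postcondition(xs, ys):
    # The output is ordered and is a permutation of the input.
    return ys == sorted(ys) and sorted(xs) == sorted(ys)

def checked_sort(xs):
    assert precondition(xs), "precondition violated"
    ys = sorted(xs)                      # implementation under analysis
    assert postcondition(xs, ys), "postcondition violated"
    return ys

print(checked_sort([3, 1, 2]))           # [1, 2, 3]
```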
The cleanroom software engineering process is a software development process intended to produce software with a certifiable level of reliability. The central principles are software development based on formal methods, incremental implementation under statistical quality control, and statistically sound testing.
A software regression is a type of software bug in which a feature that worked before stops working. This may happen after changes are applied to the software's source code, including the addition of new features and bug fixes. Regressions may also be introduced by changes to the environment in which the software is running, such as system upgrades, system patching or a change to daylight saving time. A software performance regression is a situation where the software still functions correctly but performs more slowly or uses more memory or resources than before. Various types of software regressions have been identified in practice, such as local regressions (a change introduces a new bug in the changed code itself), remote regressions (a change in one part of the software breaks functionality elsewhere), and unmasked regressions (a change reveals a pre-existing bug that previously had no effect).
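A minimal, hypothetical sketch of how regressions are guarded against in practice: a test pins previously working behaviour so that a later change which breaks it fails the suite (the function `format_price` and its version history are invented for illustration):

```python
import unittest

def format_price(value):
    # Current version of a hypothetical function; an earlier version already
    # returned "$1.50" for format_price(1.5), and that behaviour must be kept.
    return f"${value:.2f}"

class FormatPriceRegressionTest(unittest.TestCase):
    def test_two_decimal_places_preserved(self):
        # Fails, signalling a regression, if a future change drops the trailing zero.
        self.assertEqual(format_price(1.5), "$1.50")

if __name__ == "__main__":
    unittest.main()
```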
Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large-scale software engineering problems because of their computational complexity or their assumptions about the problem structure. Researchers and practitioners instead use metaheuristic search techniques, which impose few assumptions on the problem structure, to find near-optimal or "good-enough" solutions.
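One common SBSE application is search-based test-data generation. The hypothetical sketch below uses a simple hill climber, guided by a branch-distance fitness function, to find an input that reaches an otherwise hard-to-hit branch; the function under test and the numeric constants are illustrative assumptions, and real SBSE tools typically use richer metaheuristics such as genetic algorithms:

```python
def under_test(x):
    if x == 4242:                  # the branch we want a test input for
        return "rare"
    return "common"

def branch_distance(x):
    """Fitness: 0 when the target branch is taken, larger the further x is from it."""
    return abs(x - 4242)

def hill_climb(start, max_steps=5000):
    """Greedy neighbourhood search minimising the branch distance."""
    x = start
    for _ in range(max_steps):
        if branch_distance(x) == 0:
            break                                   # target branch reached
        neighbours = [x + d for d in (-100, -10, -1, 1, 10, 100)]
        best = min(neighbours, key=branch_distance)
        if branch_distance(best) >= branch_distance(x):
            break                                   # stuck in a local optimum
        x = best
    return x

found = hill_climb(start=50_000)
print(found, under_test(found))                     # 4242 rare
```

The same recipe, a fitness function plus a metaheuristic search, generalises to the other software engineering optimization problems mentioned above.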
Dr. William C. Hetzel is an expert in the field of software testing. He compiled the papers from the 1972 Computer Program Test Methods Symposium, also known as the Chapel Hill Symposium, into the book Program Test Methods. The book, published in 1973, details the problems of software validation and testing.
Software and Systems Modeling (SoSyM) is a peer-reviewed scientific journal covering the development and application of software and systems modeling languages and techniques, including modeling foundations, semantics, analysis and synthesis techniques, model transformations, language definition and language engineering issues. It was established in 2002 and is published by Springer Science+Business Media. The editors-in-chief are Jeff Gray and Bernhard Rumpe. They are supported by the associate editors Marsha Chechik, Martin Gogolla, and Jean-Marc Jezequel and the assistant editors Huseyin Ergin and Martin Schindler. The members of the editorial board can be found on http://www.sosym.org/.
In computing, software engineering, and software testing, a test oracle is a mechanism for determining whether a test has passed or failed. The use of oracles involves comparing the output(s) of the system under test, for a given test-case input, to the output(s) that the oracle determines the product should have. The term "test oracle" was first introduced in a paper by William E. Howden. Additional work on different kinds of oracles was explored by Elaine Weyuker.
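As a minimal sketch, assuming a hypothetical string-reversal routine as the system under test, an oracle can be as simple as a trusted reference implementation whose output is compared with the SUT's output for each test input:

```python
def sut_reverse(s):
    """Implementation under test (hypothetical optimised version)."""
    return s[::-1]

def oracle_reverse(s):
    """Oracle: a slow but obviously correct reference implementation."""
    return "".join(reversed(s))

def check(test_input):
    actual = sut_reverse(test_input)
    expected = oracle_reverse(test_input)   # what the oracle says the output should be
    return "pass" if actual == expected else "fail"

for case in ["", "a", "abc", "racecar"]:
    print(repr(case), check(case))
```

Other kinds of oracles, such as specifications, metamorphic relations, or simple sanity properties, play the same role: they supply the expected behaviour against which the observed output is judged.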
Dr Martin R. Woodward was a British computer scientist who made leading contributions in the field of software testing.
Prof. Mark Harman is a British computer scientist. Since 2010, he has been a professor at University College London (UCL), and since 2017 he has been at Facebook London. He founded the Centre for Research on Evolution, Search and Testing (CREST), initially at King's College London in 2006 and latterly at UCL, and was its director until 2017. Harman has received both of the major research awards for software engineering: the IEEE Harlan D. Mills Award, for "fundamental contributions throughout software engineering, including seminal contributions in establishing search-based software engineering, reigniting research in slicing and testing, and founding genetic improvement"; and the ACM SIGSOFT Outstanding Research Award.
The Journal of Software: Evolution and Process is a peer-reviewed scientific journal covering all aspects of software development and evolution. It is published by John Wiley & Sons. The journal was established in 1989 as the Journal of Software Maintenance: Research and Practice, renamed in 2001 to Journal of Software Maintenance and Evolution: Research and Practice, and obtained its current title in 2012. The editors-in-chief are Massimiliano Di Penta, Darren Dalcher, Xin Peng, and David Raffo.
Professor Michael A. Hennell is a British computer scientist who has made leading contributions in the field of software testing.
Jeff Offutt is a professor of Software Engineering at George Mason University. His primary interests are software testing and analysis, web software engineering, and software evolution and change-impact analysis.
Frontiers Media SA is a publisher of peer-reviewed, open access, scientific journals currently active in science, technology, and medicine. It was founded in 2007 by Kamila and Henry Markram. Frontiers is based in Lausanne, Switzerland, with other offices in London, Madrid, Seattle and Brussels. In 2022, Frontiers employed more than 1,400 people, across 14 countries. All Frontiers journals are published under a Creative Commons Attribution License.
Sergiy A. Vilkomir was a Ukrainian-born computer scientist.
Hussein S. M. Zedan was a computer scientist of Egyptian descent, mainly based in the United Kingdom.
Reliability verification, or reliability testing, is a method of evaluating the reliability of a product over its specified lifespan in all of the environments it will encounter, such as expected use, transportation, and storage. The product is exposed to natural or artificial environmental conditions so that its performance under real conditions of use, transportation, and storage can be evaluated, and so that the degree of influence of environmental factors and their mechanisms of action can be analyzed. Environmental test equipment is used to simulate climatic conditions such as high temperature, low temperature, high humidity, and temperature changes, accelerating the product's response to its operating environment and verifying whether it reaches the quality expected from research and development, design, and manufacturing.