Software process simulation

Like any simulation, software process simulation (SPS) is the numerical evaluation of a mathematical model that imitates the behavior of the software development process being modeled. SPS can capture the dynamic nature of software development and handle the uncertainty and randomness inherent in it. [1]
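
As a minimal illustration, the following Python sketch performs a Monte Carlo evaluation of a hypothetical three-phase process model; the phases, triangular duration distributions, and rework probability are invented for the example rather than taken from any study.

```python
import random

# Invented three-phase model: durations (in days) follow triangular
# (min, mode, max) distributions; the numbers and the rework probability
# are illustrative assumptions, not calibrated values.
PHASES = {
    "design":         (5.0, 10.0, 20.0),
    "implementation": (15.0, 25.0, 45.0),
    "testing":        (5.0, 12.0, 25.0),
}
REWORK_PROBABILITY = 0.3

def simulate_project(rng):
    """One stochastic realization of the total project duration."""
    total = sum(rng.triangular(lo, hi, mode) for lo, mode, hi in PHASES.values())
    if rng.random() < REWORK_PROBABILITY:
        lo, mode, hi = PHASES["testing"]
        total += rng.triangular(lo, hi, mode)   # one extra test-and-fix cycle
    return total

rng = random.Random(42)
runs = sorted(simulate_project(rng) for _ in range(10_000))
print(f"median duration:  {runs[len(runs) // 2]:.1f} days")
print(f"90th percentile:  {runs[int(len(runs) * 0.9)]:.1f} days")
```

Repeating the model many times in this way yields a distribution of outcomes rather than a single point estimate, which is how SPS handles the randomness noted above.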

Uses of software process simulation

The following main purposes have been proposed for SPS: strategic management; planning; control and operational management; process improvement and technology adoption; understanding; and training and learning. [2]

How to do software process simulation

Software process simulation starts with identifying a question to be answered. The question could, for example, concern the assessment of an alternative, such as incorporating a new practice into the software development process. Introducing such a change in the actual development process would be expensive, and if its consequences are not positive the implications can be dire for the organization. Simulation therefore offers an initial assessment of the change on a model instead of an active development project. Based on this problem description, an appropriate scope of the process is chosen and a simulation approach is selected to model the development process. The model is then calibrated using empirical data and used to conduct simulation-based investigations. A detailed description of each step in general can be found in Balci's work, [5] and a comprehensive overview specific to software process simulation can be found in Ali et al. [6]
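
The sketch below walks through this workflow on a deliberately simple model, asking whether adding a design inspection step would reduce delivered defects; every numeric parameter stands in for a value that would, in a real study, be calibrated from the organization's empirical data.

```python
import random

# Question: would adding a design inspection step reduce delivered defects?
# Every parameter below stands in for a value that would be calibrated from
# the organization's empirical data; the numbers are assumptions.
CALIBRATED = {
    "defects_injected_per_kloc": 20.0,
    "test_removal_efficiency":   0.70,  # fraction of defects found in test
    "inspection_efficiency":     0.55,  # fraction found by an inspection
    "size_kloc":                 50.0,
}

def delivered_defects(with_inspection, rng):
    injected = (rng.gauss(CALIBRATED["defects_injected_per_kloc"], 4.0)
                * CALIBRATED["size_kloc"])
    remaining = max(injected, 0.0)
    if with_inspection:
        remaining *= 1.0 - CALIBRATED["inspection_efficiency"]
    remaining *= 1.0 - CALIBRATED["test_removal_efficiency"]
    return remaining

rng = random.Random(1)
baseline  = [delivered_defects(False, rng) for _ in range(5_000)]
candidate = [delivered_defects(True,  rng) for _ in range(5_000)]
print(f"mean delivered defects, baseline:        {sum(baseline)/len(baseline):.0f}")
print(f"mean delivered defects, with inspection: {sum(candidate)/len(candidate):.0f}")
```

Comparing the two simulated alternatives gives the kind of initial assessment described above, without disturbing an active development project.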

In a recent initiative by the ACM Special Interest Group on Software Engineering (SIGSOFT), a standard for assessing simulation-based scientific studies has been proposed. [7]

Examples of using software process simulation for practical issues in industrial settings

Software process simulation has, for example, been combined with value stream mapping to evaluate software product development in two industrial cases, [8] and has been used as decision support for determining when to automate software testing. [9]

Key venues

Software process simulation has been an active research area for many decades. Some of the key venues include the International Conference on Software and Systems Process (ICSSP) [10] and its predecessor, the Workshop on Software Process Simulation Modeling (ProSim), which ran from 1998 to 2004. [11]

Related Research Articles

In computer science, static program analysis is the analysis of computer programs performed without executing them, in contrast with dynamic program analysis, which is performed on programs during their execution.

<span class="mw-page-title-main">Software architecture</span> High level structures of a software system

Software architecture is the set of structures needed to reason about a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations.

In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.

<span class="mw-page-title-main">Code review</span> Activity where one or more people check a programs code

Code review is a software quality assurance activity in which one or more people examine the source code of a computer program, either after implementation or during the development process. The persons performing the checking are called "reviewers"; at least one reviewer must not be the code's author.

In software project management, software testing, and software engineering, verification and validation is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification asks: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" Software validation asks: "Was X what we should have built? Does X meet the high-level requirements?"

<span class="mw-page-title-main">Multi-agent system</span> Built of multiple interacting agents

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, and procedural approaches, algorithmic search, or reinforcement learning. With advancements in large language models (LLMs), LLM-based multi-agent systems have emerged as a new area of research, enabling more sophisticated interactions and coordination among agents.
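
Agent-based simulation is also one of the paradigms applied in software process simulation. The toy sketch below models developers as agents pulling tasks from a shared backlog; the team size, skill factors, and task sizes are invented for illustration.

```python
import random

# Toy agent-based model: developer agents repeatedly pull tasks from a
# shared backlog and interact only through it; all numbers are invented.
rng = random.Random(7)
backlog = [rng.uniform(1.0, 8.0) for _ in range(40)]   # task sizes in days
developers = [{"skill": rng.uniform(0.7, 1.3), "busy_until": 0.0}
              for _ in range(4)]

clock = 0.0
while backlog:
    # The next developer to become free acts on the shared environment.
    dev = min(developers, key=lambda d: d["busy_until"])
    clock = max(clock, dev["busy_until"])
    task = backlog.pop(0)
    dev["busy_until"] = clock + task / dev["skill"]    # faster agents finish sooner

print(f"all tasks finished at day {max(d['busy_until'] for d in developers):.1f}")
```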

The cleanroom software engineering process is a software development process intended to produce software with a certifiable level of reliability. The central principles are software development based on formal methods, incremental implementation under statistical quality control, and statistically sound testing.

<span class="mw-page-title-main">Value-stream mapping</span> Lean-management method for analyzing the current state and designing a future state

Value-stream mapping, also known as material- and information-flow mapping, is a lean-management method for analyzing the current state and designing a future state for the series of events that take a product or service from the beginning of the specific process until it reaches the customer. A value stream map is a visual tool that displays all critical steps in a specific process and easily quantifies the time and volume taken at each stage. Value stream maps show the flow of both materials and information as they progress through the process.
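
The quantification that a value-stream map supports amounts to simple arithmetic over the mapped stages, as the following sketch with invented stage and waiting times shows.

```python
# Invented value-stream data for one change flowing to production.
stages = [
    # (stage, processing_hours, waiting_hours)
    ("code",   6.0,  2.0),
    ("review", 1.0, 16.0),
    ("test",   3.0,  8.0),
    ("deploy", 0.5, 24.0),
]

value_adding = sum(p for _, p, _ in stages)
lead_time = value_adding + sum(w for _, _, w in stages)
print(f"lead time: {lead_time:.1f} h, value-adding: {value_adding:.1f} h "
      f"({100 * value_adding / lead_time:.0f}% process-cycle efficiency)")
```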

Quality engineering is the discipline of engineering concerned with the principles and practice of product and service quality assurance and control. In software development, it is the management, development, operation, and maintenance of IT systems and enterprise architectures to a high standard of quality.

Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing, and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large-scale software engineering problems because of their computational complexity or their assumptions on the problem structure. Researchers and practitioners use metaheuristic search techniques, which make few assumptions about the problem structure, to find near-optimal or "good-enough" solutions.
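
As a small illustration, the sketch below applies a plain hill climb (a basic local search standing in for the heavier metaheuristics named above) to an invented test-suite minimization instance, a classic SBSE formulation.

```python
import random

# Invented SBSE instance: choose a subset of test cases that maximizes
# requirement coverage while penalizing execution cost.
rng = random.Random(3)
N_TESTS, N_REQS = 12, 20
covers = [{r for r in range(N_REQS) if rng.random() < 0.25} for _ in range(N_TESTS)]
cost = [rng.uniform(1.0, 5.0) for _ in range(N_TESTS)]

def fitness(selected):
    covered = set()
    total_cost = 0.0
    for i, picked in enumerate(selected):
        if picked:
            covered |= covers[i]
            total_cost += cost[i]
    return len(covered) - 0.1 * total_cost   # coverage minus a cost penalty

current = [rng.random() < 0.5 for _ in range(N_TESTS)]
for _ in range(500):
    i = rng.randrange(N_TESTS)
    neighbour = current.copy()
    neighbour[i] = not neighbour[i]          # flip one test in or out
    if fitness(neighbour) >= fitness(current):
        current = neighbour                  # accept non-worsening moves

print(f"selected {sum(current)} of {N_TESTS} tests, fitness {fitness(current):.2f}")
```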

In software development, effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.
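
One classical formula-based approach is Boehm's basic COCOMO model; the sketch below uses the published organic-mode coefficients, while the 32 KLOC project size is an assumed input for the example.

```python
# Basic COCOMO (Boehm, 1981), organic mode -- one classical effort model.
# Coefficients are the published organic-mode values; the project size
# passed in below is an assumption for illustration.
def cocomo_organic(kloc):
    effort_pm = 2.4 * kloc ** 1.05        # effort in person-months
    duration_m = 2.5 * effort_pm ** 0.38  # schedule in calendar months
    return effort_pm, duration_m

effort, duration = cocomo_organic(32.0)
print(f"effort: {effort:.0f} person-months over {duration:.1f} months")
```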

In software testing, a test oracle is a provider of information that describes correct output based on the input of a test case. Testing with an oracle involves comparing actual results of the system under test (SUT) with the expected results as provided by the oracle.
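
As a minimal sketch, a trusted reference implementation (Python's built-in sorted) can serve as the oracle for a hand-written sorting routine playing the system under test; the names and the insertion sort are illustrative, not a prescribed setup.

```python
import random

# A trusted reference implementation (sorted) acts as the oracle for a
# hand-written insertion sort playing the system under test (SUT).
def sut_sort(xs):
    """Insertion sort -- the (hypothetical) implementation under test."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

rng = random.Random(0)
for _ in range(1_000):
    case = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
    expected = sorted(case)          # the oracle's verdict on the correct output
    actual = sut_sort(case)
    assert actual == expected, f"oracle mismatch on input {case}"
print("all 1000 generated cases agreed with the oracle")
```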

FlexSim is a discrete-event simulation software package developed by FlexSim Software Products, Inc. The FlexSim product family currently includes the general-purpose FlexSim product and a healthcare systems modeling environment.

Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software with greater speed and frequency. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.

Continuous deployment (CD) is a software engineering approach in which software functionalities are delivered frequently and through automated deployments.

<span class="mw-page-title-main">INGENIAS</span>

INGENIAS is an open-source software framework for the analysis, design and implementation of multi-agent systems (MAS).

A digital twin is a digital model of an intended or actual real-world physical product, system, or process that serves as a digital counterpart of it for purposes such as simulation, integration, testing, monitoring, and maintenance.

System-level simulation (SLS) is a collection of practical methods used in the field of systems engineering, in order to simulate, with a computer, the global behavior of large cyber-physical systems.

Predictive engineering analytics (PEA) is a development approach for the manufacturing industry that helps with the design of complex products. It concerns the introduction of new software tools, the integration between those, and a refinement of simulation and testing processes to improve collaboration between analysis teams that handle different applications. This is combined with intelligent reporting and data analytics. The objective is to let simulation drive the design, to predict product behavior rather than to react to issues which may arise, and to install a process that lets design continue after product delivery.

References

  1. Ali, NB; Petersen, K; Wohlin, C (2014). "A Systematic Literature Review on the Industrial Use of Software Process Simulation". Journal of Systems and Software. 97: 65–85. CiteSeerX 10.1.1.717.3797. doi:10.1016/j.jss.2014.06.059.
  2. Kellner, Marc I; Madachy, Raymond J; Raffo, David M (1999). "Software process simulation modeling: Why? What? How?". Journal of Systems and Software. 46 (2–3): 91–105. CiteSeerX 10.1.1.587.8752. doi:10.1016/s0164-1212(99)00003-5.
  3. "Use of simulation for software process education: a case study" (PDF). Archived from the original (PDF) on 2016-03-04. Retrieved 2014-12-01.
  4. von Wangenheim, C.G.; Shull, F. (2009). "To Game or Not to Game?". IEEE Software. 26 (2): 92–94. doi:10.1109/MS.2009.54. S2CID 13354988.
  5. Balci, Osman (2012). "A Life Cycle for Modeling and Simulation". Simulation: Transactions of the Society for Modeling and Simulation International. 88 (7): 870–883.
  6. Ali, N.B.; Petersen, K. (September 2012). "A Consolidated Process for Software Process Simulation: State of the Art and Industry Experience" (PDF). 2012 38th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). pp. 327–336. doi:10.1109/SEAA.2012.69. ISBN 978-0-7695-4790-9.
  7. Franca, Breno. "Simulation (quantitative)". Empirical standards. Retrieved 25 February 2021.
  8. Ali, NB; Petersen, K; de França, BBN (2015). "Evaluation of simulation-assisted value stream mapping for software product development: Two industrial cases". Information and Software Technology. 68: 45–61. doi:10.1016/j.infsof.2015.08.005.
  9. Garousi, Vahid; Pfahl, Dietmar (2015). "When to automate software testing? A decision-support approach based on process simulation". Journal of Software: Evolution and Process.
  10. "Icssp2015". Archived from the original on 2015-02-21. Retrieved 2014-12-01.
  11. Unknown