System integration testing

System integration testing (SIT) involves the overall testing of a complete system composed of many subsystem components or elements. The system under test may consist of electromechanical or computer hardware, software, hardware with embedded software, or hardware and software combined with human-in-the-loop testing. SIT is typically performed on a larger integrated system of components and subassemblies that have previously undergone subsystem testing.

SIT consists, initially, of the "process of assembling the constituent parts of a system in a logical, cost-effective way, comprehensively checking system execution (all nominal and exceptional paths), and including a full functional check-out." [1] Following integration, system test is a process of "verifying that the system meets its requirements, and validating that the system performs in accordance with the customer or user expectations." [1]

In technology product development, the beginning of system integration testing is often the first time that an entire system has been assembled such that it can be tested as a whole. To make system testing most productive, the many constituent assemblies and subsystems will typically have gone through subsystem testing that verified each subsystem meets its requirements at the subsystem interface level.
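
As an illustration of what such a subsystem-level check can look like, the following Python sketch verifies a hypothetical telemetry-parsing subsystem against its interface contract before integration. The parser, its record format, and every name here are assumptions invented for this example, not part of any standard.

    import unittest


    def parse_telemetry(line):
        """Toy subsystem interface: parse a 'key=value;key=value' record."""
        record = {}
        for field in line.strip().split(";"):
            if "=" not in field:
                raise ValueError(f"malformed field: {field!r}")
            key, value = field.split("=", 1)
            record[key] = value
        return record


    class TelemetryParserInterfaceTest(unittest.TestCase):
        """Verifies the subsystem meets its requirements at its interface."""

        def test_parses_well_formed_record(self):
            self.assertEqual(parse_telemetry("temp=21;mode=auto"),
                             {"temp": "21", "mode": "auto"})

        def test_rejects_malformed_record(self):
            with self.assertRaises(ValueError):
                parse_telemetry("temp21")


    if __name__ == "__main__":
        unittest.main()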

In the context of software systems and software engineering, system integration testing is a testing process that exercises a software system's coexistence with others. With multiple integrated systems, assuming that each has already passed system testing,[2] SIT proceeds to test their required interactions. Following this, the deliverables are passed on to acceptance testing.
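
A minimal SIT case might then look like the following Python sketch. It assumes two toy systems, a hypothetical OrderSystem and InventorySystem, that have each passed their own system tests, and it exercises only their required interaction; all names are illustrative.

    import unittest


    class InventorySystem:
        """Toy system A, assumed to have passed its own system tests."""

        def __init__(self, stock):
            self._stock = dict(stock)

        def reserve(self, item, quantity):
            if self._stock.get(item, 0) < quantity:
                return False
            self._stock[item] -= quantity
            return True

        def stock_level(self, item):
            return self._stock.get(item, 0)


    class OrderSystem:
        """Toy system B, which depends on the inventory system's interface."""

        def __init__(self, inventory):
            self._inventory = inventory  # the integration point under test
            self.orders = []

        def place_order(self, item, quantity):
            if not self._inventory.reserve(item, quantity):
                raise RuntimeError("insufficient stock")
            self.orders.append((item, quantity))


    class OrderInventorySitCase(unittest.TestCase):
        """Exercises only the required interaction between the two systems."""

        def test_placing_an_order_reserves_real_stock(self):
            inventory = InventorySystem({"widget": 10})
            orders = OrderSystem(inventory)
            orders.place_order("widget", 4)
            self.assertEqual(inventory.stock_level("widget"), 6)
            self.assertEqual(orders.orders, [("widget", 4)])


    if __name__ == "__main__":
        unittest.main()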

Software system integration testing

For software, SIT is part of the software testing life cycle for collaborative projects. Usually, a round of SIT precedes the user acceptance test (UAT) round. Software providers usually run a pre-SIT round of tests before consumers run their SIT test cases.

For example, if an integrator (company) is providing an enhancement to a customer's existing solution, then it integrates the new application layer and the new database layer with the customer's existing application and database layers. After the integration is complete, users use both the new (extended) part and the old (pre-existing) part of the integrated application to update data. A process should exist to exchange data imports and exports between the two data layers, and this exchange process should keep both systems up-to-date. The purpose of system integration testing is to ensure that all parts of these systems successfully coexist and exchange data where necessary.
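
For this scenario, an SIT case could verify that the exchange process leaves both data layers consistent. The Python sketch below assumes hypothetical new and legacy stores and a sync_layers() exchange process, all of which are invented for illustration.

    import unittest


    class DataLayer:
        """Toy key-value store standing in for either data layer."""

        def __init__(self):
            self.records = {}

        def upsert(self, key, value):
            self.records[key] = value


    def sync_layers(new_layer, legacy_layer):
        """Toy exchange process: propagate records in both directions."""
        merged = {**legacy_layer.records, **new_layer.records}
        new_layer.records = dict(merged)
        legacy_layer.records = dict(merged)


    class DataExchangeSitCase(unittest.TestCase):
        """Checks that the exchange keeps both systems up-to-date."""

        def test_updates_reach_both_layers(self):
            new_layer, legacy_layer = DataLayer(), DataLayer()
            new_layer.upsert("customer:42", {"name": "Ada"})
            legacy_layer.upsert("customer:7", {"name": "Lin"})
            sync_layers(new_layer, legacy_layer)
            self.assertEqual(new_layer.records, legacy_layer.records)
            self.assertIn("customer:42", legacy_layer.records)
            self.assertIn("customer:7", new_layer.records)


    if __name__ == "__main__":
        unittest.main()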

There may be more parties in the integration; for example, the primary customer (consumer) can have its own customers, and there may also be multiple providers.

References

  1. Houser, Pete (November 2011). "Best Practices for Systems Integration" (PDF). dtic.mil. Archived from the original (PDF) on 12 May 2013. Retrieved 15 March 2016.
  2. "What is System integration testing?"