Test harness

In software testing, a test harness is a collection of stubs and drivers configured to assist with the testing of an application or component.[1][2] It acts as imitation infrastructure for test environments or containers where the full infrastructure is either not available or not desired.

Test harnesses allow for the automation of tests. They can call functions with supplied parameters, then print the results and compare them against the desired values. The test harness provides a hook for the developed code, which can be tested using an automation framework.
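
For illustration, here is a minimal sketch of such a harness in plain Java; all names (MinimalHarness, add, check) are hypothetical. A small driver calls the function under test with supplied parameters, compares each result to the desired value, and prints the outcome.

```java
// Minimal test-harness sketch: a driver calls the code under test with
// supplied parameters and compares the results to expected values.
// All names here are hypothetical.
public class MinimalHarness {

    // Hypothetical function under test.
    static int add(int a, int b) {
        return a + b;
    }

    // Run one test case and report whether the actual result matches the expected one.
    static void check(String name, int expected, int actual) {
        if (expected == actual) {
            System.out.println("PASS: " + name);
        } else {
            System.out.println("FAIL: " + name + " - expected " + expected + ", got " + actual);
        }
    }

    public static void main(String[] args) {
        check("adds two positive numbers", 5, add(2, 3));
        check("adds a negative number", -1, add(2, -3));
    }
}
```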

A test harness is used to facilitate testing when all or some of an application's production infrastructure is unavailable. This may be because of licensing costs, security concerns that require air-gapped test environments, or resource limitations; a harness may also be used simply to speed up test execution by supplying pre-defined test data and smaller software components in place of data calculated by the full application.

These individual objectives may be fulfilled by unit test framework tools, stubs, or drivers.[3]
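
As a further sketch, the same kind of check can be expressed with a unit test framework; this example assumes JUnit 5 is available on the classpath, and the class and method names are hypothetical. The framework then acts as the driver that discovers, runs, and reports the tests.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical unit test relying on a unit test framework (JUnit 5 assumed).
// The framework plays the role of the driver: it discovers, runs, and reports the tests.
class AdditionTest {

    // Hypothetical code under test.
    private int add(int a, int b) {
        return a + b;
    }

    @Test
    void addsTwoNumbers() {
        assertEquals(5, add(2, 3));
    }
}
```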

Example

Consider building an application that needs to interface with an application on a mainframe computer when no mainframe is available during development. A test harness may be built to act as a substitute: by providing pre-defined data and responses, it allows normally complex operations to be handled with a small amount of resources, because the calculations performed by the mainframe are not needed.
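
One way such a substitute might be sketched in Java (the interface, class names, and canned data below are hypothetical): a stub implements the same interface the application expects from the mainframe-facing service, but returns pre-defined responses instead of performing the real calculations.

```java
import java.util.Map;

// Hypothetical interface to the mainframe-backed service the application depends on.
interface MainframeGateway {
    String lookupAccountStatus(String accountId);
}

// Stub that stands in for the unavailable mainframe: it returns pre-defined
// responses instead of performing the mainframe's calculations.
class StubMainframeGateway implements MainframeGateway {
    private static final Map<String, String> CANNED_RESPONSES = Map.of(
            "ACC-001", "ACTIVE",
            "ACC-002", "CLOSED");

    @Override
    public String lookupAccountStatus(String accountId) {
        return CANNED_RESPONSES.getOrDefault(accountId, "UNKNOWN");
    }
}

public class MainframeStubDemo {
    public static void main(String[] args) {
        MainframeGateway gateway = new StubMainframeGateway();
        System.out.println(gateway.lookupAccountStatus("ACC-001")); // prints ACTIVE
        System.out.println(gateway.lookupAccountStatus("ACC-999")); // prints UNKNOWN
    }
}
```

Because the application code is written against the interface rather than the stub, the stub can be swapped for the real mainframe-backed implementation once it becomes available.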

A test harness may be part of a project deliverable. It may be kept separate from the application source code and may be reused on multiple projects. A test harness simulates application functionality; it has no knowledge of test suites, test cases or test reports. Those things are provided by a testing framework and associated automated testing tools.

Part of its job is to set up suitable test fixtures.
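
As an illustration, here is a hypothetical fixture set up with JUnit 5 (again assumed to be available): before each test case runs, the harness puts the system under test into a known starting state.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical example of a test fixture: a known starting state (a cart
// pre-populated with test data) is created before every test case.
class ShoppingCartTest {

    private List<String> cart; // the fixture

    @BeforeEach
    void setUpFixture() {
        cart = new ArrayList<>();
        cart.add("book");
        cart.add("pen");
    }

    @Test
    void fixtureStartsWithTwoItems() {
        assertEquals(2, cart.size());
    }

    @Test
    void addingAnItemGrowsTheCart() {
        cart.add("notebook");
        assertEquals(3, cart.size());
    }
}
```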

A test harness will generally be specific to a particular development environment, such as Java. However, interoperability test harnesses have been developed for use in more complex systems.[4]

Related Research Articles

Legacy system – old computing technology or system that remains in use

In computing, a legacy system is an old method, technology, computer system, or application program, "of, relating to, or being a previous or outdated computer system", yet still in use. Often referencing a system as "legacy" means that it paved the way for the standards that would follow it. This can also imply that the system is out of date or in need of replacement.

Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation.

Interoperability is a characteristic of a product or system to work with other products or systems. While the term was initially defined for information technology or systems engineering services to allow for information exchange, a broader definition takes into account social, political, and organizational factors that impact system-to-system performance.

Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases. This is as opposed to software being developed first and test cases created later.

A programming tool or software development tool is a computer program that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs, that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object. The most basic tools are a source code editor and a compiler or interpreter, which are used ubiquitously and continuously. Other tools are used more or less depending on the language, development methodology, and individual engineer, often used for a discrete task, like a debugger or profiler. Tools may be discrete programs, executed separately – often from the command line – or may be parts of a single large program, called an integrated development environment (IDE). In many cases, particularly for simpler use, simple ad hoc techniques are used instead of a tool, such as print debugging instead of using a debugger, manual timing instead of a profiler, or tracking bugs in a text file or spreadsheet instead of a bug tracking system.

In software engineering, the terms frontend and backend refer to the separation of concerns between the presentation layer (frontend), and the data access layer (backend) of a piece of software, or the physical infrastructure or hardware. In the client–server model, the client is usually considered the frontend and the server is usually considered the backend, even when some presentation work is actually done on the server itself.

Windows Management Instrumentation (WMI) consists of a set of extensions to the Windows Driver Model that provides an operating system interface through which instrumented components provide information and notification. WMI is Microsoft's implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the Distributed Management Task Force (DMTF).

In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.

A job scheduler is a computer application for controlling unattended background program execution of jobs. This is commonly called batch scheduling, as execution of non-interactive jobs is often called batch processing, though traditional job scheduling and batch processing are distinguished and contrasted; see the batch processing article for details. Other synonyms include batch system, distributed resource management system (DRMS), distributed resource manager (DRM), and, commonly today, workload automation (WLA). The data structure of jobs to run is known as the job queue.

Uniface (programming language) – low-code development platform

Uniface is a low-code development and deployment platform for enterprise applications that can run in a large range of runtime environments, including mobile, mainframe, web, Service-oriented architecture (SOA), Windows, Java EE, and .NET. Uniface is used to create mission-critical applications.

A software factory is a structured collection of related software assets that aids in producing computer software applications or software components according to specific, externally defined end-user requirements through an assembly process. A software factory applies manufacturing techniques and principles to software development to mimic the benefits of traditional manufacturing. Software factories are generally involved with outsourced software creation.

Mobile app development is the act or process by which a mobile app is developed for one or more mobile devices, which can include personal digital assistants (PDA), enterprise digital assistants (EDA), or mobile phones. Such software applications are specifically designed to run on mobile devices, taking numerous hardware constraints into consideration. Common constraints include CPU architecture and speeds, available memory (RAM), limited data storage capacities, and considerable variation in displays and input methods. These applications can be pre-installed on phones during manufacturing or delivered as web applications, using server-side or client-side processing to provide an "application-like" experience within a web browser.

In Microsoft Windows applications programming, OLE Automation is an inter-process communication mechanism created by Microsoft. It is based on a subset of Component Object Model (COM) that was intended for use by scripting languages – originally Visual Basic – but now is used by several languages on Windows. All automation objects are required to implement the IDispatch interface. It provides an infrastructure whereby applications called automation controllers can access and manipulate shared automation objects that are exported by other applications. It supersedes Dynamic Data Exchange (DDE), an older mechanism for applications to control one another. As with DDE, in OLE Automation the automation controller is the "client" and the application exporting the automation objects is the "server".

Instrument control consists of connecting a desktop instrument to a computer and taking measurements.

In software testing, test stubs are programs that simulate the behaviours of software components that a module undergoing tests depends on. Test stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.

Robotics middleware is middleware to be used in complex robot control software systems.

In software engineering, service virtualization or service virtualisation is a method to emulate the behavior of specific components in heterogeneous component-based applications such as API-driven applications, cloud-based applications and service-oriented architectures. It is used to provide software development and QA/testing teams access to dependent system components that are needed to exercise an application under test (AUT), but are unavailable or difficult to access for development and testing purposes. With the behavior of the dependent components "virtualized", testing and development can proceed without accessing the actual live components. Service virtualization is recognized by vendors, industry analysts, and industry publications as being different from mocking; see the comparison of API simulation tools for related tooling.

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this process comprises both physical equipment, such as bare-metal servers, as well as virtual machines, and associated configuration resources. The definitions may be kept in a version control system. The definition files may contain either scripts or declarative definitions, rather than the infrastructure being maintained through manual processes, but IaC more often employs declarative approaches.

DevOps toolchain

A DevOps toolchain is a set or combination of tools that aid in the delivery, development, and management of software applications throughout the systems development life cycle, as coordinated by an organisation that uses DevOps practices.

In software testing, a test driver is a software component or application that initiates and controls the execution of a program under test, especially when the target module is part of a larger system that is not yet fully implemented or otherwise unavailable and cannot run in isolation. Drivers control applications across various stages of software testing, from unit and integration testing through to system integration testing and acceptance testing.

References

  1. "Test Harness". ISTQB Glossary. Retrieved 10 September 2023.
  2. Rocha, Camila Ribeiro; Martins, Eliane (2008). "A Method for Model Based Test Harness Generation for Component Testing". Journal of the Brazilian Computer Society: 8. Retrieved 10 September 2023.
  3. ISTQB Exam Certification, "What is Test harness / Unit test framework tools in software testing?", accessed 19 October 2015.
  4. Ricardo Jardim-Gonçalves, Jörg Müller, Kai Mertins, Martin Zelm, editors, Enterprise Interoperability II: New Challenges and Approaches, Springer, 2007, p. 674, accessed 19 October 2015
