Classification Tree Method

The Classification Tree Method is a method for test design [1] used in different areas of software development. [2] It was developed by Grimm and Grochtmann in 1993. [3] Classification trees in the sense of the Classification Tree Method must not be confused with decision trees.

The classification tree method consists of two major steps: [4] [5]

  1. Identification of test-relevant aspects (so-called classifications) and their corresponding values (called classes), as well as
  2. Combination of different classes from all classifications into test cases.

The identification of test relevant aspects usually follows the (functional) specification (e.g. requirements, use cases …) of the system under test. These aspects form the input and output data space of the test object.

The second step of test design then follows the principles of combinatorial test design. [4]

While the method can be applied using pen and paper, it is usually carried out with the Classification Tree Editor, a software tool implementing the classification tree method. [6]

Application

The prerequisite for applying the classification tree method (CTM) is the selection (or definition) of a system under test. The CTM is a black-box testing method and supports any type of system under test. This includes (but is not limited to) hardware systems, integrated hardware-software systems, and plain software systems, including embedded software, user interfaces, operating systems, parsers, and others (or subsystems of the mentioned systems).

With a selected system under test, the first step of the classification tree method is the identification of test-relevant aspects. [4] Any system under test can be described by a set of classifications, holding both input and output parameters. (Input parameters can also include environment states, pre-conditions and other, rather uncommon parameters.) [2] Each classification can have any number of disjoint classes, describing the occurrence of the parameter. The selection of classes typically follows the principle of equivalence partitioning for abstract test cases and boundary-value analysis for concrete test cases. [5] Together, all classifications form the classification tree. For semantic purposes, classifications can be grouped into compositions.

The maximum number of test cases is the size of the Cartesian product of the classes of all classifications in the tree, quickly resulting in large numbers for realistic test problems. The minimum number of test cases is the number of classes in the classification containing the most classes.

In the second step, test cases are composed by selecting exactly one class from every classification of the classification tree. The selection of test cases originally [3] was a manual task to be performed by the test engineer.
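
As a minimal sketch of these two steps (Python; the classifications and classes below are invented for illustration), a classification tree reduced to its leaf level can be represented as a mapping from each classification to its disjoint classes, and abstract test cases are then tuples from the Cartesian product:

    from itertools import product
    from math import prod

    # Test-relevant aspects (classifications) and their disjoint classes,
    # reduced to the leaf level of the tree (illustrative values).
    classifications = {
        "Colour": ["Red", "Green"],
        "Size": ["Small", "Medium", "Large"],
    }

    # Maximum number of test cases: size of the Cartesian product of all classes.
    maximum = prod(len(classes) for classes in classifications.values())   # 2 * 3 = 6

    # Minimum number of test cases: size of the largest classification.
    minimum = max(len(classes) for classes in classifications.values())    # 3

    # Step 2: an abstract test case selects exactly one class per classification.
    test_cases = [dict(zip(classifications, combo))
                  for combo in product(*classifications.values())]
    print(maximum, minimum, test_cases[0])
    # 6 3 {'Colour': 'Red', 'Size': 'Small'}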

Example

[Figure: Classification Tree for a Database System]

For a database system, test design has to be performed. Applying the classification tree method, the identification of test-relevant aspects gives the classifications: User Privilege, Operation and Access Method. For the User Privilege, two classes can be identified: Regular User and Administrator User. There are three Operations: Add, Edit and Delete. For the Access Method, again three classes are identified: Native Tool, Web Browser, and API. The Web Browser class is further refined with the test aspect Brand; three possible classes are included here: Internet Explorer, Mozilla Firefox, and Apple Safari.

The first step of the classification tree method is now complete. Of course, there are further possible test aspects to include, e.g. access speed of the connection, number of database records present in the database, etc. Using the graphical representation in terms of a tree, the selected aspects and their corresponding values can quickly be reviewed.

In total, there are 30 possible test cases (2 privileges × 3 operations × 5 access methods). For minimum coverage, 5 test cases are sufficient, as there are 5 access methods (and Access Method is the classification with the highest number of disjoint classes).
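
These numbers can be checked with a short sketch (Python; the data structure is an assumption, the tree layout follows the example above): the refinement of the Web Browser class by the classification Brand is flattened into its leaf classes before counting.

    from math import prod

    # Example tree with the 'Web Browser' class refined into its three brands,
    # so the 'Access Method' classification has five leaf classes.
    tree = {
        "User Privilege": ["Regular User", "Administrator User"],
        "Operation": ["Add", "Edit", "Delete"],
        "Access Method": ["Native Tool",
                          "Internet Explorer", "Mozilla Firefox", "Apple Safari",
                          "API"],
    }

    maximum = prod(len(classes) for classes in tree.values())  # 2 * 3 * 5 = 30
    minimum = max(len(classes) for classes in tree.values())   # 5 (Access Method)
    print(maximum, minimum)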

In the second step, three test cases have been manually selected:

  1. A regular user adds a new data set to the database using the native tool.
  2. An administrator user edits an existing data set using the Firefox browser.
  3. A regular user deletes a data set from the database using the API.

Enhancements

Background

The CTM introduced several advantages [2] over the Category Partition Method [7] (CPM) by Ostrand and Balcer; notably, it allows modeling of hierarchical refinements in the classification tree (also called implicit dependencies), whereas the CPM only offers restrictions to handle this scenario.

Grochtmann and Wegener presented their tool, the Classification Tree Editor (CTE), which supports both partitioning and test case generation. [6]
[Figure: Classification Tree for an Embedded System example containing concrete values, concrete timing, (different) transitions, and a distinction between States and Actions]

Classification Tree Method for Embedded Systems

The classification tree method was first intended for the design and specification of abstract test cases. With the classification tree method for embedded systems, [8] test implementation can also be performed. Several additional features are integrated with the method:

  1. In addition to atomic test cases, test sequences containing several test steps can be specified.
  2. A concrete timing (e.g. in Seconds, Minutes ...) can be specified for each test step.
  3. Signal transitions (e.g. linear, spline, sine ...) between selected classes of different test steps can be specified.
  4. A distinction between event and state can be modelled, represented by different visual marks in a test.

The module and unit testing tool Tessy relies on this extension.
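
A minimal data-model sketch of these four additions (Python; all names and values are illustrative, not the tool's API) could look as follows:

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List

    class Kind(Enum):            # addition 4: distinguish events from states
        STATE = "state"
        EVENT = "event"

    class Transition(Enum):      # addition 3: signal transition between test steps
        LINEAR = "linear"
        SPLINE = "spline"
        SINE = "sine"

    @dataclass
    class TestStep:              # additions 1 and 2: one step of a sequence with concrete timing
        selected_classes: Dict[str, str]            # one class per classification
        duration_s: float                           # concrete timing in seconds
        kind: Kind = Kind.STATE
        transition: Transition = Transition.LINEAR  # how signals move into this step

    @dataclass
    class TestSequence:
        steps: List[TestStep] = field(default_factory=list)

    # Example: a two-step sequence, 0.5 s per step, sine transition into step 2.
    seq = TestSequence(steps=[
        TestStep({"Pedal": "pressed"}, duration_s=0.5),
        TestStep({"Pedal": "released"}, duration_s=0.5, kind=Kind.EVENT,
                 transition=Transition.SINE),
    ])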

Dependency Rules and Automated Test Case Generation

One way of modelling constraints is using the refinement mechanism in the classification tree method. This, however, does not allow for modelling constraints between classes of different classifications. Lehmann and Wegener introduced Dependency Rules based on Boolean expressions with their incarnation of the CTE. [9] Further features include the automated generation of test suites using combinatorial test design (e.g. all-pairs testing).
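
One way to sketch the effect of such dependency rules (Python; the rule and its encoding are illustrative and do not reproduce the CTE XL syntax) is to filter the full set of class combinations with Boolean predicates over a candidate test case:

    from itertools import product

    tree = {
        "User Privilege": ["Regular User", "Administrator User"],
        "Operation": ["Add", "Edit", "Delete"],
        "Access Method": ["Native Tool", "Web Browser", "API"],
    }

    # A dependency rule spanning classes of different classifications
    # (purely illustrative): administrators never access via the API.
    rules = [
        lambda tc: not (tc["User Privilege"] == "Administrator User"
                        and tc["Access Method"] == "API"),
    ]

    all_cases = [dict(zip(tree, combo)) for combo in product(*tree.values())]
    valid = [tc for tc in all_cases if all(rule(tc) for rule in rules)]
    print(len(all_cases), len(valid))   # 18 combinations, 15 satisfy the rule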

Prioritized Test Case Generation

Recent enhancements to the classification tree method include prioritized test case generation: it is possible to assign weights to the elements of the classification tree in terms of occurrence and error probability or risk. These weights are then used during test case generation to prioritize test cases. [10] [11] Statistical testing is also available (e.g. for wear and fatigue tests) by interpreting the element weights as a discrete probability distribution.
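
As a sketch of this idea (Python; the weights are invented for illustration), class weights can be interpreted as a discrete probability distribution and sampled when composing test cases:

    import random

    # Occurrence / error-probability weights per class (illustrative values).
    weighted_tree = {
        "User Privilege": {"Regular User": 0.9, "Administrator User": 0.1},
        "Operation": {"Add": 0.5, "Edit": 0.3, "Delete": 0.2},
        "Access Method": {"Native Tool": 0.2, "Web Browser": 0.7, "API": 0.1},
    }

    def draw_test_case(rng):
        """Sample one class per classification according to the class weights."""
        return {name: rng.choices(list(classes), weights=list(classes.values()))[0]
                for name, classes in weighted_tree.items()}

    rng = random.Random(42)
    suite = [draw_test_case(rng) for _ in range(10)]   # statistically weighted test suite
    print(suite[0])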

Test Sequence Generation

With the addition of valid transitions between individual classes of a classification, classifications can be interpreted as a state machine, and therefore the whole classification tree as a statechart. This defines an allowed order of class usages in test steps and allows test sequences to be created automatically. [12] Different coverage levels are available, such as state coverage, transition coverage and coverage of state pairs and transition pairs.
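
A minimal sketch of the idea (Python; the transition relation and the greedy walk strategy are illustrative): the classes of one classification are treated as states, the allowed transitions define the graph, and test sequences are walks that together cover every transition.

    # Classes of one classification interpreted as states; the allowed transitions
    # define which class may follow which class in a test sequence (illustrative).
    transitions = {
        "Idle": ["Connecting"],
        "Connecting": ["Connected", "Idle"],
        "Connected": ["Idle"],
    }

    def transition_covering_sequences(start, max_len=6):
        """Greedily build walks from `start` until every transition is covered."""
        uncovered = {(a, b) for a, succs in transitions.items() for b in succs}
        sequences = []
        while uncovered:
            covered_before = len(uncovered)
            state, walk = start, [start]
            for _ in range(max_len):
                succs = transitions.get(state, [])
                if not succs:
                    break
                # Prefer a successor whose transition is not yet covered.
                nxt = next((s for s in succs if (state, s) in uncovered), succs[0])
                uncovered.discard((state, nxt))
                walk.append(nxt)
                state = nxt
            sequences.append(walk)
            if len(uncovered) == covered_before:   # no progress: remaining transitions unreachable
                break
        return sequences

    print(transition_covering_sequences("Idle"))
    # e.g. [['Idle', 'Connecting', 'Connected', 'Idle', 'Connecting', 'Idle', 'Connecting']]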

Numerical Constraints

In addition to Boolean dependency rules referring to classes of the classification tree, numerical constraints allow the specification of formulas with classifications as variables, which evaluate to the selected class in a test case. [13]
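
A sketch of a numerical constraint (Python; the classifications, values and formula are invented for illustration): concrete classes supply numeric values for the classification variables, and the formula must hold for a test case to be valid.

    from itertools import product

    # Concrete classes with numeric values for three classifications (illustrative).
    tree = {
        "Distance [m]": [100.0, 500.0],
        "Time [s]": [10.0, 50.0],
        "Speed [m/s]": [2.0, 10.0, 50.0],
    }

    # Numerical constraint with classifications as variables: Speed = Distance / Time.
    def constraint(tc):
        return abs(tc["Speed [m/s]"] - tc["Distance [m]"] / tc["Time [s]"]) < 1e-9

    test_cases = [dict(zip(tree, combo)) for combo in product(*tree.values())]
    valid = [tc for tc in test_cases if constraint(tc)]
    print(valid[0])   # {'Distance [m]': 100.0, 'Time [s]': 10.0, 'Speed [m/s]': 10.0}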

Classification Tree Editor

The Classification Tree Editor (CTE) is a software tool for test design that implements the classification tree method. [14] [15] [16] [17]

Over time, several editions of the CTE tool have appeared, written in various (then popular) programming languages and developed by several companies.

CTE 1

The original version of the CTE was developed at Daimler-Benz Industrial Research [6] [16] in Berlin. It appeared in 1993, was written in Pascal, and was only available on Unix systems.

CTE 2

In 1997 a major re-implementation was performed, leading to CTE 2. Development was again performed at Daimler-Benz Industrial Research. It was written in C and available for win32 systems.

The CTE 2 was licensed to Razorcat in 1997 and is part of the TESSY unit test tool. The classification tree editor for embedded systems [8] [15] is also based upon this edition.

Razorcat has been developing the CTE since 2001 and registered CTE as a brand name in 2003.

The last version of this edition, CTE 3.2, was published with the TESSY 4.0 tool in 2016 (see the Versions table below).

CTE 4

CTE 4 was implemented as an Eclipse plug-in in TESSY 4.1.7 in 2018. As of 2021, the latest CTE 4 version is still being developed as part of TESSY 4.3.

CTE XL

In 2000, Lehmann and Wegener introduced Dependency Rules with their incarnation of the CTE, the CTE XL (eXtended Logics). [9] [14] [17] [18] Further features include the automated generation of test suites using combinatorial test design (e.g. all-pairs testing). [19]

Development was performed by DaimlerChrysler. CTE XL was written in Java, supported on win32 systems, and available for download free of charge.

In 2008, Berner&Mattner acquired all rights to CTE XL and continued its development until CTE XL 1.9.4.

CTE XL Professional

Starting in 2010, CTE XL Professional was developed by Berner&Mattner. [10] A complete re-implementation was done, again using Java but this time Eclipse-based. CTE XL Professional was available on win32 and win64 systems.

New developments included prioritized test case generation, [10] test sequence generation, [12] and numerical constraints [13] (see the Versions table below).

TESTONA

In 2014, Berner&Mattner started releasing its classification tree editor under the brand name TESTONA.

A free edition of TESTONA, with reduced functionality, is still available for download free of charge.

Versions

Version | Date | Comment | Written in | OS
CTE 1.0 | 1993 | Original version, [6] [16] limited to 1000 test cases (fixed limit) | Pascal | Unix
CTE 2.0 | 1998 | Windows version, [15] unlimited number of test cases | C++ | win32
CTE 2.1 | 2003 | Embedded-system version by Razorcat, part of the TESSY tool | C++ | win32
CTE XL 1.0 | 2000 | Dependency Rules, Test Case Generation [9] [14] [17] | Java | win32
CTE XL 1.6 | 2006 | Last version by Daimler-Benz [18] | Java | win32
CTE XL 1.8 | 2008 | Development by Berner&Mattner | Java | win32
CTE XL 1.9 | 2009 | Last Java-only version | Java | win32
CTE XL Professional 2.1 | 2011-02-21 | First Eclipse-based version, Prioritized Test Case Generation, [10] Deterministic Test Case Generation, Requirements Tracing with DOORS | Java 6, Eclipse 3.5 | win32
CTE XL Professional 2.3 | 2011-08-02 | QualityCenter integration, Requirements Coverage Analysis and Traceability Matrix, API | Java 6, Eclipse 3.6 | win32
CTE XL Professional 2.5 | 2011-11-11 | Test result annotation, MindMap import | Java 6, Eclipse 3.6 | win32, win64
CTE XL Professional 2.7 | 2012-01-30 | Bug fix release | Java 6, Eclipse 3.6 | win32, win64
CTE XL Professional 2.9 | 2012-06-08 | Implicit Mark Mode, Default classes, command-line interface | Java 6, Eclipse 3.7 | win32, win64
CTE XL Professional 3.1 | 2012-10-19 | Test Post-Evaluation (e.g. for Root Cause Analysis), Test Sequence Generation, [12] Numerical Constraints [13] | Java 6, Eclipse 3.7 | win32, win64
CTE XL Professional 3.3 | 2013-05-28 | Test Coverage Analysis, Variant Management (e.g. as part of Product Family Engineering), Equivalence Class Testing | Java 6, Eclipse 3.7 | win32, win64
CTE XL Professional 3.5 | 2013-12-18 | Boundary Value Analysis Wizard, Import of AUTOSAR and MATLAB models | Java 7, Eclipse 3.8 | win32, win64
TESTONA 4.1 | 2014-09-22 | Bug fix release | Java 7, Eclipse 3.8 | win32, win64
TESTONA 4.3 | 2015-07-08 | Generation of Executable Test Scripts (Code Generation), Import of Test Results [21] | Java 7, Eclipse 3.8 | win32, win64
TESTONA 4.5 | 2016-01-21 | Enhanced Export Facilities, GUI Improvements | Java 7, Eclipse 3.8 | win32, win64
TESTONA 5.1 | 2016-07-19 | Bug fix release, switch to Java 8 and Eclipse 4.5 | Java 8, Eclipse 4.5 | win32, win64
CTE 4.0 | 2018-08-01 | New implementation by Razorcat as an Eclipse-based plug-in for the TESSY 4.1 tool; supports creating (model-based) test cases | Java | win32, win64

Advantages

Limitations

References

  1. Bath, Graham; McKay, Judy (2008). The Software Test Engineer's Handbook: A Study Guide for the ISTQB Test Analyst and Technical Test Analyst Advanced Level Certificates (1st ed.). Santa Barbara, CA: Rocky Nook. ISBN 9781933952246.
  2. Hass, Anne Mette Jonassen (2008). Guide to Advanced Software Testing. Boston: Artech House. pp. 179–186. ISBN 978-1596932869.
  3. Grochtmann, Matthias; Grimm, Klaus (1993). "Classification Trees for Partition Testing". Software Testing, Verification & Reliability. 3 (2): 63–82. doi:10.1002/stvr.4370030203. S2CID 33987358.
  4. Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu (2013). Introduction to Combinatorial Testing. CRC Press. pp. 76–81. ISBN 978-1466552296.
  5. Henry, Pierre (2008). The Testing Network: An Integral Approach to Test Activities in Large Software Projects. Berlin: Springer. p. 87. ISBN 978-3-540-78504-0.
  6. Grochtmann, Matthias; Wegener, Joachim (1995). "Test Case Design Using Classification Trees and the Classification-Tree Editor CTE" (PDF). Proceedings of the 8th International Software Quality Week (QW '95), San Francisco, USA. Archived from the original (PDF) on 2015-09-24. Retrieved 2013-08-12.
  7. Ostrand, T. J.; Balcer, M. J. (1988). "The category-partition method for specifying and generating functional tests". Communications of the ACM. 31 (6): 676–686. doi:10.1145/62959.62964. S2CID 207647895.
  8. Conrad, Mirko; Krupp, Alexander (1 October 2006). "An Extension of the Classification-Tree Method for Embedded Systems for the Description of Events". Electronic Notes in Theoretical Computer Science. 164 (4): 3–11. doi:10.1016/j.entcs.2006.09.002.
  9. Lehmann, Eckard; Wegener, Joachim (2000). "Test Case Design by Means of the CTE XL" (PDF). Proceedings of the 8th European International Conference on Software Testing, Analysis & Review (EuroSTAR 2000). Archived from the original (PDF) on 2016-03-04. Retrieved 2013-08-12.
  10. Kruse, Peter M.; Luniak, Magdalena (December 2010). "Automated Test Case Generation Using Classification Trees". Software Quality Professional. 13 (1): 4–12.
  11. Franke, M.; Gerke, D.; Hans, C.; et al. (2012). "Method-Driven Test Case Generation for Functional System Verification". Proceedings ATOS 2012, Delft. pp. 36–44.
  12. Kruse, Peter M.; Wegener, Joachim (April 2012). "Test Sequence Generation from Classification Trees". 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation. pp. 539–548. doi:10.1109/ICST.2012.139. ISBN 978-0-7695-4670-4. S2CID 581740.
  13. Kruse, Peter M.; Bauer, Jürgen; Wegener, Joachim (April 2012). "Numerical Constraints for Combinatorial Interaction Testing". 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation. pp. 758–763. doi:10.1109/ICST.2012.170. ISBN 978-0-7695-4670-4. S2CID 16683773.
  14. SAE International (2004). Vehicle Electronics to Digital Mobility: The Next Generation of Convergence. Proceedings of the 2004 International Congress on Transportation Electronics (Convergence 2004), Cobo Center, Detroit, Michigan, USA, October 18–20, 2004. Warrendale, Pa.: Society of Automotive Engineers. pp. 305–306. ISBN 978-0768015430.
  15. Gomes, Luís; Fernandes, João M., eds. (2010). Behavioral Modeling for Embedded Systems and Technologies: Applications for Design and Implementation. Hershey, PA: Information Science Reference. p. 386. ISBN 978-1605667515.
  16. Zander, Justyna; Schieferdecker, Ina; Mosterman, Pieter J., eds. (2011). Model-Based Testing for Embedded Systems. Boca Raton: CRC Press. p. 10. ISBN 978-1439818459.
  17. Rech, Jörg; Bunse, Christian, eds. (2009). Model-Driven Software Development: Integrating Quality Assurance. Hershey: Information Science Reference. p. 101. ISBN 978-1605660073.
  18. Olejniczak, Robert (2008). Systematisierung des funktionalen Tests eingebetteter Software (PDF). Doctoral dissertation, Technical University Munich. pp. 61–63. Archived from the original (PDF) on 6 March 2016. Retrieved 10 October 2013.
  19. Cain, Andrew; Chen, Tsong Yueh; Grant, Doug; Poon, Pak-Lok; Tang, Sau-Fun; Tse, T.H. (2004). "An Automatic Test Data Generation System Based on the Integrated Classification-Tree Methodology". Software Engineering Research and Applications. Lecture Notes in Computer Science. Vol. 3026. pp. 225–238. doi:10.1007/978-3-540-24675-6_18. hdl:10722/43692. ISBN 978-3-540-21975-0. Retrieved 10 October 2013.
  20. Franke, M.; Gerke, D.; Hans, C.; et al. (2012). "Method-Driven Test Case Generation for Functional System Verification". Air Transport and Operations Symposium 2012, Proceedings ATOS, Delft. pp. 354–365.
  21. Berner&Mattner. "Press Release: Test Case Implementation with TESTONA 4.3".
  22. Chen, T.Y.; Poon, P.-L. (1996). "Classification-Hierarchy Table: A methodology for constructing the classification tree". Proceedings of 1996 Australian Software Engineering Conference. pp. 93–104. doi:10.1109/ASWEC.1996.534127. ISBN 978-0-8186-7635-2. S2CID 6789744.