Software prototyping is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing. [1]
A prototype typically simulates only a few aspects of, and may be completely different from, the final product.
Prototyping has several benefits: the software designer and implementer can get valuable feedback from the users early in the project, and the client and the contractor can compare the software built so far against the specification according to which the program is being developed. It also gives the software engineer some insight into the accuracy of initial project estimates and into whether the proposed deadlines and milestones can be met. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s. [2]
The purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Software prototyping provides an understanding of the software's functions and potential threats or issues. [3] Prototyping can also be used by end users to describe and prove requirements that have not been considered, and that can be a key factor in the commercial relationship between developers and their clients. [4] Interaction design in particular makes heavy use of prototyping with that goal.
This process stands in contrast with the monolithic development cycle of the 1960s and 1970s, in which the entire program was built first and any inconsistencies between design and implementation were worked out afterwards, an approach that led to higher software costs and poor estimates of time and cost.[citation needed] The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of having to change a finished software product.
The practice of prototyping is one of the points Frederick P. Brooks makes in his 1975 book The Mythical Man-Month and in "No Silver Bullet", his article marking the book's 10-year anniversary.
An early example of large-scale software prototyping was the implementation of NYU's Ada/ED translator for the Ada programming language. [5] It was implemented in SETL with the intent of producing an executable semantic model for the Ada language, emphasizing clarity of design and user interface over speed and efficiency. The NYU Ada/ED system was the first validated Ada implementation, certified on April 11, 1983. [6]
The process of prototyping involves the following steps:[citation needed]
1. Identify basic requirements, including the input and output information desired. Details, such as security, can typically be ignored.
2. Develop the initial prototype, including only user interfaces (see the horizontal prototype, below).
3. Review: the customers, including end-users, examine the prototype and provide feedback on potential additions or changes.
4. Revise and enhance the prototype: using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced, steps 3 and 4 may be repeated.
Nielsen summarizes the various dimensions of prototypes in his book Usability Engineering:
A common term for a user interface prototype is the horizontal prototype. It provides a broad view of an entire system or subsystem, focusing on user interaction more than low-level system functionality, such as database access. Horizontal prototypes are useful for:
- confirming user interface requirements and system scope,
- providing a demonstration version of the system to obtain buy-in from the business,
- developing preliminary estimates of development time, cost and effort.
A vertical prototype is a complete, in-depth elaboration of a single subsystem or function. It is useful for obtaining detailed requirements for a given function, with the following benefits:
- refinement of the database design,
- information on data volumes and system interface needs, for network sizing and performance engineering,
- clarification of complex requirements by drilling down to actual system functionality.
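The contrast between the two can be pictured with a toy example. The following Python sketch is purely illustrative (all names and behaviour are invented, not drawn from any cited system): the horizontal part stubs out every screen of a small ordering system with no real logic behind them, while the vertical part implements one function, order-total calculation, in full working depth.

```python
# Hypothetical sketch: horizontal vs. vertical prototypes of a small
# ordering system. All names are invented for illustration.

# --- Horizontal prototype: every screen exists, none has real logic. ---
def show_catalog():
    print("[Catalog screen] (static sample data, no database access)")

def show_cart():
    print("[Cart screen] (layout only, buttons do nothing)")

def show_checkout():
    print("[Checkout screen] (no payment processing)")

# --- Vertical prototype: one function built in full depth. ---
def order_total(items, tax_rate=0.08):
    """Fully working total calculation, including tax and rounding."""
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 + tax_rate), 2)

if __name__ == "__main__":
    for screen in (show_catalog, show_cart, show_checkout):
        screen()                                 # breadth: every screen
    print(order_total([(9.99, 2), (4.50, 1)]))   # depth: prints 26.44
```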
Software prototyping has many variants. However, all of the methods are in some way based on two major forms of prototyping: throwaway prototyping and evolutionary prototyping.
Throwaway prototyping, also called close-ended or rapid prototyping, refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented in a finished system.
The most obvious reason for using throwaway prototyping is that it can be done quickly. If the users can get quick feedback on their requirements, they may be able to refine them early in the development of the software. Making changes early in the development lifecycle is extremely cost-effective, since at that point there is nothing to redo. If a project is changed after a considerable amount of work has been done, small changes could require large efforts to implement, since software systems have many dependencies. Speed is crucial in implementing a throwaway prototype: with a limited budget of time and money, little can be expended on a prototype that will be discarded.
Another strength of throwaway prototyping is its ability to construct interfaces that the users can test. The user interface is what the user sees as the system, and by seeing it in front of them, it is much easier to grasp how the system will function.
Prototypes can be classified according to the fidelity with which they resemble the actual product in terms of appearance, interaction and timing. One method of creating a low fidelity throwaway prototype is paper prototyping. The prototype is implemented using paper and pencil, and thus mimics the function of the actual product, but does not look at all like it. Another method to easily build high fidelity throwaway prototypes is to use a GUI Builder and create a click dummy, a prototype that looks like the goal system, but does not provide any functionality.
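As a rough illustration of the click-dummy idea, the short sketch below uses tkinter from the Python standard library (the screens and labels are invented for this example). The window looks like a login screen, but the button merely swaps in a static mock-up of the next screen; there is no authentication or other functionality behind it.

```python
# Hypothetical click dummy built with tkinter: the interface looks
# plausible, but nothing behind it works -- clicking "Log in" only
# replaces the widgets with a static mock-up of the next screen.
import tkinter as tk

def show_dashboard():
    for widget in root.winfo_children():
        widget.destroy()                  # throw the login screen away
    tk.Label(root, text="Dashboard (static mock-up)").pack(padx=40, pady=40)

root = tk.Tk()
root.title("Click dummy")
tk.Label(root, text="Username").pack()
tk.Entry(root).pack()
tk.Label(root, text="Password").pack()
tk.Entry(root, show="*").pack()
tk.Button(root, text="Log in", command=show_dashboard).pack(pady=10)
root.mainloop()
```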
The usage of storyboards, animatics or drawings is not exactly the same as throwaway prototyping, but certainly falls within the same family. These are non-functional implementations but show how the system will look.
Summary: In this approach the prototype is constructed with the idea that it will be discarded and the final system will be built from scratch. The steps in this approach are:
1. Write preliminary requirements.
2. Design the prototype.
3. The user experiences/uses the prototype and specifies new requirements.
4. Repeat steps 2 and 3 if necessary.
5. Write the final requirements.
Evolutionary prototyping (also known as breadboard prototyping) is quite different from throwaway prototyping. The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this approach is that the evolutionary prototype, when built, forms the heart of the new system, and further improvements and requirements are then built on top of it.
When developing a system using evolutionary prototyping, the system is continually refined and rebuilt.
This technique allows the development team to add features, or make changes, that could not have been conceived during the requirements and design phase.
Evolutionary prototypes have an advantage over throwaway prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
In evolutionary prototyping, developers can concentrate on developing the parts of the system that they understand, instead of working on the whole system at once.
In incremental prototyping, the final product is built as separate prototypes, which are merged into an overall design at the end. Incremental prototyping also helps to reduce the time gap between the users and the software developers.
Extreme prototyping as a development process is used especially for developing web applications. It breaks web development down into three phases, each one based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase, the services are implemented.
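A simulated services layer can be as simple as a stand-in object that honours the same interface as the eventual back end. The hypothetical sketch below (written in Python purely to illustrate the idea, rather than in a browser stack) shows second-phase screen logic coded against an abstract interface, with a fake implementation returning canned data until the real services arrive in the third phase.

```python
# Hypothetical sketch of a simulated services layer: screen logic is
# written against an abstract interface; a fake service returns canned
# data until the real implementation replaces it in phase three.
from abc import ABC, abstractmethod

class OrderService(ABC):
    @abstractmethod
    def list_orders(self, customer_id: int) -> list[str]: ...

class SimulatedOrderService(OrderService):
    """Phase-two stand-in: canned data, no network or database access."""
    def list_orders(self, customer_id: int) -> list[str]:
        return [f"order-{customer_id}-001", f"order-{customer_id}-002"]

def render_orders_screen(service: OrderService, customer_id: int) -> None:
    # The screen itself is fully functional; only the service is simulated.
    for order in service.list_orders(customer_id):
        print(f"* {order}")

render_orders_screen(SimulatedOrderService(), customer_id=42)
```

Because the screens depend only on the interface, swapping in the real service in the third phase requires no changes to the screen code.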
There are many advantages to using prototyping in software development – some tangible, some abstract. [13]
Reduced time and costs: Prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software. [8]
Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, letting them provide better and more complete feedback and specifications. [7] The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The final product is more likely to satisfy the user's desire for look, feel and performance.
Using, or perhaps misusing, prototyping can also have disadvantages.
Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a prototype is limited in functionality it may not scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model.
User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require all proposed features be included in the final system this can lead to conflict.
Developer misunderstanding of user objectives: Developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending enterprise software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If developers have committed to delivery before the user requirements were reviewed, they are caught between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements.
Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems, such as attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)
Excessive development time of the prototype: A key property of prototyping is that it is supposed to be done quickly. If the developers lose sight of this fact, they may well try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing it. Users can also become stuck in debates over details of the prototype, holding up the development team and delaying the final product.
Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just begin prototyping without retraining their workers as much as they should.
It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users.
Systems with little user interaction, such as batch processing or systems that mostly do calculations, benefit little from prototyping. Sometimes, the coding needed to perform the system functions may be too intensive and the potential gains that prototyping could provide are too small. [7]
Prototyping is especially good for designing good human–computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human–computer interface design." [8]
Dynamic Systems Development Method (DSDM) [15] is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. It expands upon the most commonly understood definitions of a prototype: according to DSDM, a prototype may be a diagram, a business process, or even a system placed into production. DSDM prototypes are intended to be incremental, evolving from simple forms into more comprehensive ones.
DSDM prototypes may be either throwaway or evolutionary. Evolutionary prototypes may be evolved horizontally (breadth, then depth) or vertically (each section built in detail, with additional iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into final systems.
The four categories of prototypes as recommended by DSDM are:
- Business prototypes – used to design and demonstrate the business processes being automated.
- Usability prototypes – used to define, refine and demonstrate user interface design, usability, accessibility, and look and feel.
- Performance and capacity prototypes – used to define, demonstrate and predict how systems will perform under peak loads, as well as to demonstrate and evaluate other non-functional aspects (transaction rates, data storage volume, response time, etc.).
- Capability/technique prototypes – used to design, demonstrate and evaluate a design approach or concept.
The DSDM lifecycle of a prototype is to:
1. Identify the prototype.
2. Agree to a plan.
3. Create the prototype.
4. Review the prototype.
Operational prototyping was proposed by Alan Davis as a way to integrate throwaway and evolutionary prototyping with conventional system development. "It offers the best of both the quick-and-dirty and conventional-development worlds in a sensible manner. Designers develop only well-understood features in building the evolutionary baseline, while using throwaway prototyping to experiment with the poorly understood features." [9]
Davis argues that trying to "retrofit quality onto a rapid prototype" is not the right approach when combining the two methods. His idea is to engage in an evolutionary prototyping methodology and to rapidly prototype the features of the system after each evolution.
The specific methodology follows these steps: [9]
1. An evolutionary prototype is constructed and made into a baseline using conventional development strategies, specifying and implementing only the requirements that are well understood.
2. Copies of the baseline are sent to multiple customer sites, along with a trained prototyper.
3. At each site, the prototyper watches the user work with the system.
4. Whenever the user encounters a problem or thinks of a new feature or requirement, the prototyper logs it; this frees the user from having to record the problem and allows them to continue working.
5. After the user session is over, the prototyper constructs a throwaway prototype on top of the baseline system.
6. The user now uses the new system and evaluates it. If the new changes are not effective, the prototyper removes them.
7. If the user likes the changes, the prototyper writes feature-enhancement requests and forwards them to the development team.
8. The development team, with the change requests in hand from all the sites, then produces a new evolutionary prototype using conventional methods.
A key to this method is having well-trained prototypers available to go to the user sites. The operational prototyping methodology has many benefits in systems that are complex and have few requirements known in advance.
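The layering that Davis describes can be pictured as a disposable extension of a solid baseline. The Python sketch below is a hypothetical illustration (the class names and features are invented, not taken from Davis's paper): a prototyper at a user site subclasses the baseline to trial a requested feature, and if users reject it, the subclass is simply deleted while the baseline remains intact.

```python
# Hypothetical illustration of operational prototyping: a quality
# evolutionary baseline, plus a quick throwaway layer built on top.

class BaselineReportSystem:
    """The evolutionary baseline: well-understood features only,
    built with conventional engineering."""
    def monthly_report(self, region: str) -> str:
        return f"Monthly report for {region}"

class ExperimentalReportSystem(BaselineReportSystem):
    """Throwaway layer: a poorly understood feature tried at one site.
    If users reject it, this class is discarded; the baseline survives."""
    def trend_forecast(self, region: str) -> str:
        return f"(rough forecast for {region} -- demo only, no real model)"

site_copy = ExperimentalReportSystem()
print(site_copy.monthly_report("EMEA"))   # baseline behaviour, unchanged
print(site_copy.trend_forecast("EMEA"))   # the disposable experiment
```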
Evolutionary Systems Development is a class of methodologies that attempt to formally implement evolutionary prototyping. One particular type, called Systemscraft, is described by John Crinnion in his book Evolutionary Systems Development.
Systemscraft was designed as a 'prototype' methodology that should be modified and adapted to fit the specific environment in which it was implemented.
The basis of Systemscraft, not unlike evolutionary prototyping, is to create a working system from the initial requirements and build upon it in a series of revisions. Systemscraft places heavy emphasis on traditional analysis being used throughout the development of the system.
Evolutionary Rapid Development (ERD) [16] was developed by the Software Productivity Consortium, a technology development and integration agent for the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA).
To elicit customer/user input, frequent scheduled and ad hoc/impromptu meetings with the stakeholders are held. Demonstrations of system capabilities are held to solicit feedback before design/implementation decisions are solidified. Frequent releases (e.g., betas) are made available for use to provide insight into how the system could better support user and customer needs. This assures that the system evolves to satisfy existing user needs.
The design framework for the system is based on using existing published or de facto standards. The system is organized to allow for evolving a set of capabilities that includes considerations for performance, capacities, and functionality. The architecture is defined in terms of abstract interfaces that encapsulate the services and their implementation (e.g., COTS applications). The architecture serves as a template to be used for guiding development of more than a single instance of the system. It allows for multiple application components to be used to implement the services. A core set of functionality not likely to change is also identified and established.
The ERD process is structured to use demonstrated functionality rather than paper products as a way for stakeholders to communicate their needs and expectations. Central to this goal of rapid delivery is the use of the "timebox" method. Timeboxes are fixed periods of time in which specific tasks (e.g., developing a set of functionality) must be performed. Rather than allowing time to expand to satisfy some vague set of goals, the time is fixed (both in terms of calendar weeks and person-hours) and a set of goals is defined that realistically can be achieved within these constraints. To keep development from degenerating into a "random walk," long-range plans are defined to guide the iterations. These plans provide a vision for the overall system and set boundaries (e.g., constraints) for the project. Each iteration within the process is conducted in the context of these long-range plans.
Once an architecture is established, software is integrated and tested on a daily basis. This allows the team to assess progress objectively and identify potential problems quickly. Since small amounts of the system are integrated at one time, diagnosing and removing defects is rapid. User demonstrations can be held at short notice, since the system is generally ready to exercise at all times.
Efficiently using prototyping requires that an organization have the proper tools and a staff trained to use them. Tools used in prototyping vary from individual tools, such as 4th-generation programming languages used for rapid prototyping, to complex integrated CASE tools. 4th-generation visual programming languages like Visual Basic and ColdFusion are frequently used, since they are cheap, well known, and relatively easy and fast to use. CASE tools supporting requirements analysis, like the Requirements Engineering Environment (see below), are often developed or selected by the military or large organizations. Object-oriented tools are also being developed, like LYMB from the GE Research and Development Center. Users may prototype elements of an application themselves in a spreadsheet.
As web-based applications continue to grow in popularity, so too, have the tools for prototyping such applications. Frameworks such as Bootstrap, Foundation, and AngularJS provide the tools necessary to quickly structure a proof of concept. These frameworks typically consist of a set of controls, interactions, and design guidelines that enable developers to quickly prototype web applications.
Screen-generating programs are also commonly used; they enable prototypers to show users systems that do not function, but show what the screens may look like. Developing human–computer interfaces can sometimes be the critical part of the development effort, since to the users the interface essentially is the system.
Software factories can generate code by combining ready-to-use modular components. This makes them ideal for prototyping applications, since this approach can quickly deliver programs with the desired behaviour, with a minimal amount of manual coding.
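In miniature, a software factory amounts to assembling an application from a registry of ready-made parts driven by a declarative specification. The Python sketch below is a hypothetical toy, not any particular software-factory product; the component names and the specification format are invented for illustration.

```python
# Hypothetical toy "software factory": an application is assembled from
# ready-to-use components selected by a declarative specification.
COMPONENTS = {
    "sample_source": lambda: [{"name": "Ada"}, {"name": "Grace"}],
    "upper_names":   lambda rows: [{**r, "name": r["name"].upper()}
                                   for r in rows],
    "print_sink":    lambda rows: print(*rows, sep="\n"),
}

def assemble(spec):
    """Wire the named components into a runnable application."""
    def app():
        rows = COMPONENTS[spec["source"]]()
        for step in spec.get("steps", []):
            rows = COMPONENTS[step](rows)
        COMPONENTS[spec["sink"]](rows)
    return app

# Minimal manual coding: the prototype's behaviour comes from the spec.
prototype = assemble({"source": "sample_source",
                      "steps": ["upper_names"],
                      "sink": "print_sink"})
prototype()
```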
A new class of software, called application definition or simulation software, enables users to rapidly build lightweight, animated simulations of another computer program without writing code. Application simulation software allows both technical and non-technical users to experience, test, collaborate on and validate the simulated program, and provides reports such as annotations, screenshots and schematics. As a solution specification technique, application simulation falls between low-risk but limited text- or drawing-based mock-ups (or wireframes), sometimes called paper-based prototyping, and time-consuming, high-risk code-based prototypes, allowing software professionals to validate requirements and design choices early on, before development begins. In doing so, the risks and costs associated with software implementations can be dramatically reduced. [17]
To simulate applications, one can also use software that simulates real-world software programs for computer-based training, demonstration and customer support, such as screencasting software, as those areas are closely related.
"The Requirements Engineering Environment (REE), under development at Rome Laboratory since 1985, provides an integrated toolset for rapidly representing, building, and executing models of critical aspects of complex systems." [18]
Requirements Engineering Environment is currently used by the United States Air Force to develop systems. It is an integrated set of tools that allows systems analysts to rapidly build functional, user-interface, and performance prototype models of system components.
REE is composed of three parts. The first, called proto, is a CASE tool specifically designed to support rapid prototyping. The second part, called the Rapid Interface Prototyping System (RIP), is a collection of tools that facilitate the creation of user interfaces. The third part is a graphical user interface to RIP and proto, intended to be easy to use.
Rome Laboratory, the developer of REE, intended it to support their internal requirements-gathering methodology. Their method has three main parts: elicitation from various sources, analysis, and validation.
In 1996, Rome Labs contracted Software Productivity Solutions (SPS) to further enhance REE to create "a commercial quality REE that supports requirements specification, simulation, user interface prototyping, mapping of requirements to hardware architectures, and code generation..." [19] This system is named the Advanced Requirements Engineering Workstation or AREW.
Non-relational definition of data (e.g. using Caché or associative models) can help make end-user prototyping more productive by delaying or avoiding the need to normalize data at every iteration of a simulation. This may yield earlier/greater clarity of business requirements, though it does not specifically confirm that requirements are technically and economically feasible in the target production system.
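The point can be made concrete with a schemaless sketch. In the hypothetical Python example below (which illustrates the idea only, and is not a Caché or associative-model API), each record is a free-form dictionary, so an attribute discovered during a user session can be added on the spot, and normalization into relational tables is deferred until the requirements settle.

```python
# Hypothetical schemaless prototyping: records are free-form dicts, so
# each iteration of a simulation can add attributes without redesigning
# a normalized schema. Normalization is deferred until requirements settle.
customers = [
    {"name": "Acme Ltd", "city": "Leeds"},
    # An attribute discovered mid-session is simply added to new records:
    {"name": "Globex", "city": "York", "credit_limit": 50_000},
]

def report(rows, *fields):
    """Tolerant reporting: attributes missing from a record render as '-'."""
    for row in rows:
        print(" | ".join(str(row.get(f, "-")) for f in fields))

report(customers, "name", "city", "credit_limit")
```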
PSDL is a prototype description language to describe real-time software. [20] The associated tool set is CAPS (Computer Aided Prototyping System). [21] Prototyping software systems with hard real-time requirements is challenging because timing constraints introduce implementation and hardware dependencies. PSDL addresses these issues by introducing control abstractions that include declarative timing constraints. CAPS uses this information to automatically generate code and associated real-time schedules, monitor timing constraints during prototype execution, and simulate execution in proportional real time relative to a set of parameterized hardware models. It also provides default assumptions that enable execution of incomplete prototype descriptions, integrates prototype construction with a software reuse repository for rapidly realizing efficient implementations, and provides support for rapid evolution of requirements and designs. [22]
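As a rough illustration of the concept of declarative timing constraints with run-time monitoring (hypothetical Python, not actual PSDL or CAPS syntax), an operator can be declared together with a deadline, and a monitor can check the constraint on every execution:

```python
# Hypothetical illustration of a declarative timing constraint: an
# operator declares its deadline, and a monitor checks each execution.
# This imitates the concept only; it is not PSDL or CAPS syntax.
import functools
import time

def max_execution_time(ms: float):
    """Declare a deadline for an operator; violations are reported."""
    def decorate(op):
        @functools.wraps(op)
        def monitored(*args, **kwargs):
            start = time.perf_counter()
            result = op(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > ms:
                print(f"TIMING VIOLATION: {op.__name__} took "
                      f"{elapsed_ms:.1f} ms (limit {ms} ms)")
            return result
        return monitored
    return decorate

@max_execution_time(ms=5.0)
def read_sensor():
    time.sleep(0.01)   # simulated work that exceeds the 5 ms deadline
    return 42

read_sensor()          # prints a timing-violation report
```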
An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse and Lazarus contain the necessary compiler, interpreter or both; others, such as SharpDevelop and NetBeans, do not.
Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation.
A prototype is an early sample, model, or release of a product built to test a concept or process. It is a term used in a variety of contexts, including semantics, design, electronics, and software programming. A prototype is generally used to evaluate a new design to enhance precision by system analysts and users. Prototyping serves to provide specifications for a real, working system rather than a theoretical one. In some design workflow models, creating a prototype is the step between the formalization and the evaluation of an idea.
Rapid application development (RAD), also called rapid application building (RAB), is both a general term for adaptive software development approaches, and the name for James Martin's method of rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are often used in addition to or sometimes even instead of design specifications.
In human–computer interaction, paper prototyping is a widely used method in the user-centered design process, a process that helps developers to create software that meets the user's expectations and needs – in this case, especially for designing and testing user interfaces. It is throwaway prototyping and involves creating rough, even hand-sketched, drawings of an interface to use as prototypes, or models, of a design. While paper prototyping seems simple, this method of usability testing can provide useful feedback to aid the design of easier-to-use products. This is supported by many usability professionals.
In systems engineering and software engineering, requirements analysis focuses on the tasks that determine the needs or conditions to meet for a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and analyzing, documenting, validating and managing software or system requirements.
In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.
Web development is the work involved in developing a website for the Internet or an intranet. Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers, may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.
Dynamic systems development method (DSDM) is an agile project delivery framework, initially used as a software development method. First released in 1994, DSDM originally sought to provide some discipline to the rapid application development (RAD) method. In later versions the DSDM Agile Project Framework was revised and became a generic approach to project management and solution delivery rather than being focused specifically on software development and code creation and could be used for non-IT projects. The DSDM Agile Project Framework covers a wide range of activities across the whole project lifecycle and includes strong foundations and governance, which set it apart from some other Agile methods. The DSDM Agile Project Framework is an iterative and incremental approach that embraces principles of Agile development, including continuous user/customer involvement.
In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification is: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" On the other hand, software validation is: "Was X what we should have built? Does X meet the high-level requirements?"
Computer-aided production engineering (CAPE) is a relatively new and significant branch of engineering. Global manufacturing has changed the environment in which goods are produced. Meanwhile, the rapid development of electronics and communication technologies has required design and manufacturing to keep pace.
Object-oriented analysis and design (OOAD) is a technical approach for analyzing and designing an application, system, or business by applying object-oriented programming, as well as using visual modeling throughout the software development process to guide stakeholder communication and product quality.
A software factory is a structured collection of related software assets that aids in producing computer software applications or software components according to specific, externally defined end-user requirements through an assembly process. A software factory applies manufacturing techniques and principles to software development to mimic the benefits of traditional manufacturing. Software factories are generally involved with outsourced software creation.
A functional specification in systems engineering and software development is a document that specifies the functions that a system or component must perform.
In software development, the V-model represents a development process that may be considered an extension of the waterfall model, and is an example of the more general V-model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction, respectively.
A metaCASE tool is a type of application software that provides the possibility to create one or more modeling methods, languages or notations for use within the process of software development. Often the result is a modeling tool for that language. MetaCASE tools are thus a kind of language workbench, generally considered as being focused on graphical modeling languages.
In software engineering, a software development process or software development life cycle (SDLC) is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.
The Knowledge Based Software Assistant (KBSA) was a research program funded by the United States Air Force. The goal of the program was to apply concepts from artificial intelligence to the problem of designing and implementing computer software. Software would be described by models in very high-level languages (essentially equivalent to first-order logic), and then transformation rules would transform the specification into efficient code. The Air Force hoped to be able to generate the software to control weapons systems and other command and control systems using this method. As software was becoming ever more critical to USAF weapons systems, it was realized that improving the quality and productivity of the software development process could have significant benefits for the military, as well as for information technology in other major US industries.
VisualSim Architect is an electronic system-level software for modeling and simulation of electronic systems, embedded software and semiconductors. VisualSim Architect is a commercial version of the Ptolemy II research project at University of California Berkeley. The product was first released in 2003. VisualSim is a graphical tool that can be used for performance trade-off analyses using such metrics as bandwidth utilization, application response time and buffer requirements. It can be used for architectural analysis of algorithms, components, software instructions and hardware/ software partitioning.