Deployment environment

In software deployment, an environment or tier is a computer system or set of systems in which a computer program or software component is deployed and executed. In simple cases, such as developing and immediately executing a program on the same machine, there may be a single environment, but in industrial use, the development environment (where changes are originally made) and production environment (what end users use) are separated, often with several stages in between. This structured release management process allows phased deployment (rollout), testing, and rollback in case of problems.

Environments may vary significantly in size: the development environment is typically an individual developer's workstation, while the production environment may be a network of many geographically distributed machines in data centers, or virtual machines in cloud computing. Code, data, and configuration may be deployed in parallel, and need not connect to the corresponding tier—for example, pre-production code might connect to a production database.

Architectures

Deployment architectures vary significantly, but broadly the tiers are bookended by development (DEV) at one end and production (PROD) at the other. A common 4-tier architecture is development, testing, model, production (DEV, TEST, MODL, PROD), with software being deployed to each in order. Other common environments include Quality Control (QC), for acceptance testing; sandbox or experimental (EXP), for experiments that are not intended to proceed to production; and Disaster Recovery, to provide an immediate fallback in case of problems with production. Another common architecture is development, testing, acceptance and production (DTAP).

This terminology is particularly suited to server programs, where servers run in a remote data center; for code that runs on an end user's device, such as applications (apps) or clients, one can refer to the user environment (USER) or local environment (LOCAL) instead.

Exact definitions and boundaries between environments vary – test may be considered part of dev, acceptance may be considered part of test, part of stage, or be separate, and so on. The main tiers are progressed through in order, with new releases being deployed (rolled out or pushed) to each in turn.[1][2] Experimental and recovery tiers, if present, are outside this flow – experimental releases are terminal, while recovery is typically an old or duplicate version of production, deployed after production. In case of problems, one can roll back to the old release, most simply by pushing the old release as if it were a new release. The last step, deploying to production ("pushing to prod"), is the most sensitive, as any problems there result in immediate user impact. For this reason it is often handled differently, at least by being monitored more carefully, and in some cases by using a phased rollout or requiring only the flip of a switch, allowing rapid rollback. It is best to avoid naming a tier Quality Assurance (QA), as QA does not mean software testing; testing is important, but it is distinct from QA.
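
To make the ordered flow concrete, the following is a minimal Python sketch of promoting a release through the tiers in order and rolling back by re-pushing an old release. The tier names, the deploy and checks_pass functions, and the release labels are illustrative assumptions, not any standard tooling.

    # Minimal sketch: releases are promoted through the tiers in order,
    # and a rollback simply re-deploys an older release to a tier.
    TIERS = ["DEV", "TEST", "MODL", "PROD"]

    # Current release deployed to each tier (None = nothing deployed yet).
    deployed = {tier: None for tier in TIERS}

    def deploy(release, tier):
        """Record that a release has been pushed to a tier."""
        print(f"deploying {release} to {tier}")
        deployed[tier] = release

    def checks_pass(release, tier):
        """Placeholder for per-tier verification (unit tests, QC, monitoring)."""
        return True

    def promote(release):
        """Push a release to each tier in order, stopping if a check fails."""
        for tier in TIERS:
            deploy(release, tier)
            if not checks_pass(release, tier):
                print(f"{release} failed in {tier}; stopping rollout")
                return False
        return True

    def rollback(tier, old_release):
        """Roll back by pushing the old release as if it were a new release."""
        deploy(old_release, tier)

    promote("v2.0")
    rollback("PROD", "v1.9")   # if v2.0 misbehaves in production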

Sometimes deployment is done outside of this regular process, primarily to provide urgent or relatively minor changes, without requiring a full release. This may consist of a single patch, a large service pack, or a small hotfix.

Environments can be of very different sizes: development is typically an individual developer's workstation (though there may be thousands of developers), while production may be many geographically distributed machines; test and QC may be small or large, depending on the resources devoted to these, and staging can range from a single machine (similar to canary) to an exact duplicate of production.

Environments

The table below describes a finely divided list of tiers.[citation needed]

Environment / Tier Name – Description
Local – Developer's desktop/workstation
Development / Trunk – Development server acting as a sandbox where unit testing may be performed by the developer
Integration – CI build target, or for developer testing of side effects
Testing / Test / QC / Internal acceptance – The environment where interface testing is performed. A quality control team ensures that the new code will not have any impact on the existing functionality and tests major functionalities of the system after deploying the new code in the test environment.
Staging / Stage / Model / Pre-production / External-client acceptance / Demo – Mirror of production environment
Production / Live – Serves end-users/clients

Development

The development environment (dev) is the environment in which changes to software are developed, most simply an individual developer's workstation. This differs from the ultimate target environment in various ways – the target may not be a desktop computer (it may be a smartphone, embedded system, headless machine in a data center, etc.), and even if otherwise similar, the developer's environment will include development tools like a compiler, integrated development environment, different or additional versions of libraries and support software, etc., which are not present in a user's environment.

In the context of revision control, particularly with multiple developers, finer distinctions are drawn: a developer has a working copy of source code on their machine, and changes are submitted to the repository, being committed either to the trunk or a branch, depending on development methodology. The environment on an individual workstation, in which changes are worked on and tried out, may be referred to as the local environment or a sandbox. Building the repository's copy of the source code in a clean environment is a separate step, part of integration (integrating disparate changes), and this environment may be called the integration environment or the development environment; in continuous integration this is done frequently, as often as for every revision. The source-code-level concept of "committing a change to the repository", followed by building the trunk or branch, corresponds at the release level to pushing from local (an individual developer's environment) to integration (a clean build); a bad release at this step means a change broke the build, and rolling back the release corresponds to either rolling back all changes from that point onward or undoing just the breaking change, if possible.
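
As an illustration of the integration step, here is a small Python sketch in which pending changes are applied to the trunk one at a time and a change that breaks the build is backed out rather than propagated; the build function and the change records are hypothetical stand-ins for a real build system.

    # Sketch of an integration (clean build) gate: each committed change is
    # applied to the trunk copy and built; a breaking change is backed out.
    trunk = []   # the repository's integrated line of development

    def build(source):
        """Placeholder for a clean build of the integrated source."""
        return all(change.get("compiles", True) for change in source)

    def integrate(change):
        trunk.append(change)
        if build(trunk):
            print(f"integrated {change['id']}")
            return True
        # The change broke the build: undo just this change.
        trunk.pop()
        print(f"{change['id']} broke the build and was backed out")
        return False

    integrate({"id": "rev-101"})
    integrate({"id": "rev-102", "compiles": False})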

Testing

The purpose of the test environment is to allow human testers to exercise new and changed code via either automated checks or non-automated techniques. After the developer accepts the new code and configurations through unit testing in the development environment, the items are moved to one or more test environments.[3] Upon test failure, the test environment can remove the faulty code from the test platforms, contact the responsible developer, and provide detailed test and result logs. If all tests pass, the test environment or a continuous integration framework controlling the tests can automatically promote the code to the next deployment environment.
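
Such a gate can be sketched minimally in Python: if every test passes, the build is promoted to the next environment; otherwise the failure and its logs are routed to the responsible developer. The run_tests, notify_developer, and promote functions below are illustrative placeholders, not a real framework's API.

    # Sketch of a test-environment gate that either promotes a build or
    # reports the failure back to the responsible developer.
    def run_tests(build):
        """Placeholder: return (passed, logs) for the build under test."""
        return True, ["all 42 checks passed"]

    def notify_developer(build, logs):
        print(f"notifying owner of {build}: {logs}")

    def promote(build, environment):
        print(f"promoting {build} to {environment}")

    def test_gate(build, next_environment="STAGING"):
        passed, logs = run_tests(build)
        if passed:
            promote(build, next_environment)
        else:
            notify_developer(build, logs)
        return passed

    test_gate("build-2024.10")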

Different types of testing suggest different types of test environments, some or all of which may be virtualized[4] to allow rapid, parallel testing to take place. For example, automated user interface tests[5] may occur across several virtual operating systems and displays (real or virtual). Performance tests may require a normalized physical baseline hardware configuration, so that performance test results can be compared over time. Availability or durability testing may depend on failure simulators in virtual hardware and virtual networks.

Tests may be serial (one after the other) or parallel (some or all at once) depending on the sophistication of the test environment. A significant goal for agile and other high-productivity software development practices is reducing the time from software design or specification to delivery in production.[6] Highly automated and parallelized test environments are important contributors to rapid software development.
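
The difference between serial and parallel execution can be sketched with Python's standard concurrent.futures module; the test cases below are stand-ins for real suites, and the timings only illustrate the relative speed-up.

    # Running the same (illustrative) test cases serially and in parallel.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def test_case(name, duration=0.2):
        time.sleep(duration)      # stand-in for real test work
        return f"{name}: passed"

    cases = ["ui", "api", "db", "perf"]

    start = time.time()
    serial_results = [test_case(c) for c in cases]            # one after the other
    print(f"serial:   {time.time() - start:.2f}s", serial_results)

    start = time.time()
    with ThreadPoolExecutor() as pool:
        parallel_results = list(pool.map(test_case, cases))   # some or all at once
    print(f"parallel: {time.time() - start:.2f}s", parallel_results)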

Staging

A stage, staging or pre-production environment is an environment for testing that exactly resembles a production environment.[7] It seeks to mirror an actual production environment as closely as possible and may connect to other production services and data, such as databases. For example, servers will be run on remote machines, rather than locally (as on a developer's workstation during dev, or on a single test machine during test), which tests the effects of networking on the system.

The primary use of a staging environment is to test all the installation/configuration/migration scripts and procedures before they're applied to a production environment. This ensures all major and minor upgrades to a production environment are completed reliably, without errors, and in a minimum of time.
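
That practice can be sketched in Python as running the same scripts against staging first and only applying them to production once they have all succeeded there; the environment names and the apply_migration function are illustrative assumptions.

    # Sketch: run the same installation/migration scripts against staging
    # first; only if every script succeeds are they applied to production.
    def apply_migration(script, environment):
        """Placeholder for running one installation/configuration/migration script."""
        print(f"applying {script} to {environment}")
        return True   # pretend it succeeded

    def run_release_scripts(scripts):
        for environment in ("staging", "production"):
            for script in scripts:
                if not apply_migration(script, environment):
                    print(f"{script} failed on {environment}; aborting")
                    return False
        return True

    run_release_scripts(["001_add_orders_table.sql", "002_backfill_totals.sql"])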

Another important use of staging is performance testing, particularly load testing, as this is often sensitive to the environment.

Staging is also used by some organizations to preview new features to select customers or to validate integrations with live versions of external dependencies.

Production

The production environment is also known as live, particularly for servers, as it is the environment that users directly interact with.

Deploying to production is the most sensitive step; it may be done by deploying new code directly (overwriting old code, so only one copy is present at a time), or by deploying a configuration change. This can take various forms: deploying a parallel installation of a new version of code, and switching between them with a configuration change; deploying a new version of code with the old behavior and a feature flag, and switching to the new behavior with a configuration change that performs a flag flip; or by deploying separate servers (one running the old code, one the new) and redirecting traffic from old to new with a configuration change at the traffic routing level. These in turn may be done all at once or gradually, in phases.
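
The configuration-change approaches can be sketched in miniature in Python; the routing table, installed versions, and flag store below are illustrative stand-ins for whatever mechanism an organization actually uses.

    # Two configuration-change deployment styles in miniature:
    # a parallel installation selected by a routing entry, and a feature flag.
    routing = {"live": "v1"}                 # which installed version serves traffic
    installed = {"v1": "old code", "v2": "new code"}
    feature_flags = {"new_checkout": False}  # new behavior shipped but switched off

    def switch_live(version):
        """Parallel-installation cut-over: one configuration change redirects traffic."""
        routing["live"] = version

    def flip_flag(name, value):
        """Feature-flag cut-over: the new code path is enabled by a flag flip."""
        feature_flags[name] = value

    def handle_request():
        version = routing["live"]
        behavior = "new checkout" if feature_flags["new_checkout"] else "old checkout"
        return f"served by {version} using {behavior}"

    print(handle_request())      # old code, old behavior
    switch_live("v2")            # configuration change, not a code deploy
    flip_flag("new_checkout", True)
    print(handle_request())      # new code, new behavior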

Deploying a new release generally requires a restart, unless hot swapping is possible, and thus requires either an interruption in service (usual for user software, where applications are restarted), or redundancy – either restarting instances slowly behind a load balancer, or starting up new servers ahead of time and then simply redirecting traffic to the new servers.
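
A rolling restart behind a load balancer can be sketched as follows; the load-balancer operations are hypothetical placeholders, and the point is only that instances leave and rejoin the pool one at a time so traffic is always being served.

    # Sketch of a rolling restart: instances are drained from the load
    # balancer one at a time, restarted on the new release, and re-added.
    instances = ["app-1", "app-2", "app-3"]

    def remove_from_pool(instance):
        print(f"draining {instance} from the load balancer")

    def restart_with(instance, release):
        print(f"restarting {instance} on {release}")

    def add_to_pool(instance):
        print(f"re-adding {instance} to the load balancer")

    def rolling_restart(release):
        for instance in instances:
            remove_from_pool(instance)
            restart_with(instance, release)
            add_to_pool(instance)

    rolling_restart("v2.0")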

When deploying a new release to production, rather than immediately deploying to all instances or users, it may be deployed to a single instance or fraction of users first, and then either deployed to all or gradually deployed in phases, in order to catch any last-minute problems. This is similar to staging, except actually done in production, and is referred to as a canary release, by analogy with coal mining. This adds complexity due to multiple releases being run simultaneously, so the period of overlap is usually kept brief, to avoid compatibility problems.
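
One simple way to send a small, stable fraction of users to the canary release is to hash the user identifier, so a given user consistently sees the same version. The Python sketch below illustrates the idea; the percentage, version labels, and function names are assumptions for illustration.

    # Sketch of canary routing: a fixed percentage of users is sent to the
    # canary release; hashing the user id keeps each user on one version.
    import hashlib

    CANARY_PERCENT = 5   # roughly 5% of users see the new release

    def version_for(user_id):
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return "v2.0-canary" if bucket < CANARY_PERCENT else "v1.9-stable"

    for user in ("alice", "bob", "carol", "dave"):
        print(user, "->", version_for(user))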

Frameworks integration

Development, Staging, and Production are standard environment names known and documented in ASP.NET Core, selected via an environment variable. Depending on the configured environment, different code is executed and content rendered, and different security and debug settings are applied.[8]
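
The same pattern is not specific to ASP.NET Core. The Python sketch below is an analogous illustration, not ASP.NET Core code: a hypothetical APP_ENVIRONMENT variable selects the environment name, which in turn selects debug and error-reporting settings.

    # Analogous sketch (not ASP.NET Core): the deployment environment is read
    # from an environment variable and used to select settings and behavior.
    import os

    environment = os.environ.get("APP_ENVIRONMENT", "Development")

    settings_by_env = {
        "Development": {"debug": True,  "show_detailed_errors": True},
        "Staging":     {"debug": False, "show_detailed_errors": True},
        "Production":  {"debug": False, "show_detailed_errors": False},
    }
    settings = settings_by_env.get(environment, settings_by_env["Production"])

    print(f"running as {environment}: {settings}")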


References

  1. "Traditional Development/Integration/Staging/Production Practice for Software Development". Disruptive Library Technology Jester. December 4, 2006.
  2. "Development Sandboxes: An Agile 'Best Practice'". www.agiledata.org.
  3. Ellison, Richard (2016-06-20). "Software Testing Environments Best Practices". Software Testing Magazine. Martinig & Associates. Retrieved 2016-12-02. Once the developer performs the unit test cases, the code will be moved into QA to start testing. Often you will have a few environments for testing. For example you will have one set up for system testing and another that is used for performance testing and yet another that is used for user acceptance testing (UAT). This is caused by the unique needs for each type of testing.
  4. Dubie, Denise (2008-01-17). "How to keep virtual test environments in check". Network World, Inc. IDG. Retrieved 2016-12-02. Virtual server technology makes it easy for enterprise companies to set up and tear down test environments in which they can ensure applications will run up to par on production servers and client machines.
  5. "Use UI Automation To Test Your Code". Microsoft.com. Microsoft. Retrieved 2016-12-02. Automated tests that drive your application through its user interface (UI) are known as coded UI tests (CUITs). These tests include functional testing of the UI controls. They let you verify that the whole application, including its user interface, is functioning correctly. Coded UI Tests are particularly useful where there is validation or other logic in the user interface, for example in a web page.
  6. Heusser, Matthew (2015-07-07). "Are you over-testing your software?". CIO.com. IDG. Archived from the original on 2017-06-03. Retrieved 2016-12-03. Release candidate testing takes too long. For many agile teams, this is the single biggest challenge. Legacy applications start with a test window longer than the sprint.
  7. Sharma, Anurag (2018). Test Environment Management. ITSM Press. p. 11. ISBN 9781912651269.
  8. "Use multiple environments in ASP.NET Core". docs.microsoft.com. Retrieved 2019-04-05.