Benefit dependency network

Figure: BDN for social networking — a network of cause and effect; in response to drivers, and to work towards desired outcomes, enablers and capabilities support various beneficial changes.

A benefit dependency network (BDN) is a diagram of cause-and-effect relationships, drawn to a specific structure that organizes multiple cause-effect links into capabilities, changes and benefits. It can be considered a business-oriented counterpart of what engineers would call goal modeling. A BDN is usually read from right to left and provides a one-page overview of how a business generates value, starting with the high-level drivers for change, such as those found in digital initiatives [1] or cross-organizational ERP management. [2]

First proposed by Cranfield School of Management as part of a Benefits Management approach, [3] the original model has developed to encompass all the domains required for Benefits Management, [4] namely Why, What, Who and How. More recent development has added weights to the connections, turning the diagram into a weighted graph on which causal analysis can be performed over the represented value chains, so that different strategies can be compared by value and outcome. These chains provide a way to construct a compelling story showing how the proposed benefits can be realized from the changes being considered. In software engineering, Jabbari et al. [5] report the use of a BDN for software process improvement, using it to structure the results of a systematic review on DevOps. [5]
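The weighted-graph development described above can be illustrated with a minimal sketch. The node names, edge weights and the multiplicative scoring rule below are hypothetical illustrations, not taken from the sources: edges run from enablers through changes to a desired outcome, each weight expressing the assumed strength of a causal link, and a strategy is scored by the combined strength of the value chains its enablers activate.

```python
# A hypothetical benefit dependency network as a weighted directed graph.
# On paper a BDN is read right to left; here the data flows left to right:
# enablers -> changes -> benefits -> desired outcome.

# (source, target): weight in [0, 1] -- assumed strength of the causal link
EDGES = {
    ("CRM system", "Single customer view"): 0.9,
    ("Staff training", "Single customer view"): 0.6,
    ("Single customer view", "Faster query handling"): 0.8,
    ("Faster query handling", "Higher customer retention"): 0.7,
    ("Staff training", "Fewer data-entry errors"): 0.5,
    ("Fewer data-entry errors", "Higher customer retention"): 0.4,
}

def chains(start, goal, path=None):
    """Enumerate every causal chain from an enabler to the desired outcome."""
    path = [start] if path is None else path
    if start == goal:
        yield path
        return
    for (src, dst) in EDGES:
        if src == start:
            yield from chains(dst, goal, path + [dst])

def chain_strength(path):
    """Multiply edge weights along one chain: a long, weak chain scores low."""
    strength = 1.0
    for src, dst in zip(path, path[1:]):
        strength *= EDGES[(src, dst)]
    return strength

def strategy_value(enablers, objective):
    """Score a strategy as the summed strength of all chains its enablers drive."""
    return sum(chain_strength(p)
               for e in enablers
               for p in chains(e, objective))

tech_only = strategy_value(["CRM system"], "Higher customer retention")
combined = strategy_value(["CRM system", "Staff training"],
                          "Higher customer retention")
print(f"CRM alone:      {tech_only:.3f}")
print(f"CRM + training: {combined:.3f}")
```

Under this scoring, investing in the CRM system alone activates one chain (0.9 × 0.8 × 0.7 ≈ 0.504), while adding staff training activates two further chains, which is the kind of side-by-side comparison of strategies the weighted BDN is meant to support.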


References

  1. Peppard, Joe (June 2016). "A Tool to Map Your Next Digital Initiative". Harvard Business Review. https://hbr.org/2016/06/a-tool-to-map-your-next-digital-initiative
  2. Eckartz, S.; Daneva, M.; Wieringa, R.; van Hillegersberg, J. (2009). Proceedings of the ACM Symposium on Applied Computing (SAC '09). http://dl.acm.org/citation.cfm?id=1529641
  3. Ward, John; Murray, Peter; Daniel, Elizabeth (2004). Benefits Management Best Practice Guidelines. Cranfield School of Management.
  4. Langner, Torsten. "A look at existing methods". LinkedIn. https://www.linkedin.com/pulse/look-existing-methods-torsten-langner
  5. Jabbari, Ramtin; bin Ali, Nauman; Petersen, Kai; Tanveer, Binish (November 2018). "Towards a benefits dependency network for DevOps based on a systematic literature review". Journal of Software: Evolution and Process. 30 (11): e1957. doi:10.1002/smr.1957. S2CID 53951886.