Application service management

Application service management (ASM) is an emerging discipline within systems management that focuses on monitoring and managing the performance and quality of service of business transactions.

ASM can be defined as a well-defined process, and the use of related tools, to detect, diagnose, remedy, and report on the service quality of complex business transactions, ensuring that they meet or exceed end-users' expectations. Performance measurements relate to how quickly transactions are completed or information is delivered to the end-user by the aggregate of applications, operating systems, hypervisors (if applicable), hardware platforms, and network interconnects. The critical components of ASM include application discovery and mapping, application "health" measurement and management, transaction-level visibility, and incident-related triage. ASM tools and processes are therefore commonly used in roles such as sysop, DevOps, and AIOps.
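
For illustration, a minimal sketch (in Python, with purely hypothetical component names and thresholds) of the transaction-level, per-hop measurement that ASM calls for: each tier's contribution to a business transaction is timed discretely, so end-to-end latency can be checked against a service target and a slow transaction attributed to a specific component.

    from dataclasses import dataclass

    @dataclass
    class HopTiming:
        component: str      # e.g. "web tier", "app server", "database"
        duration_ms: float  # time spent inside this component

    def transaction_health(hops: list[HopTiming], sla_ms: float) -> dict:
        """Report end-to-end latency against a service target and the slowest hop."""
        total = sum(h.duration_ms for h in hops)
        slowest = max(hops, key=lambda h: h.duration_ms)
        return {"total_ms": total,
                "meets_target": total <= sla_ms,
                "slowest_component": slowest.component}

    # A three-tier transaction measured against a 500 ms target.
    trace = [HopTiming("web tier", 40.0),
             HopTiming("app server", 120.0),
             HopTiming("database", 410.0)]
    print(transaction_health(trace, sla_ms=500.0))

In this toy trace the 570 ms total exceeds the 500 ms target and the database tier is flagged as the largest contributor, which is the kind of triage ASM is meant to support.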

ASM is related to application performance management (APM) but serves as a more pragmatic, "top-down" approach that focuses on the delivery of business services. In a strict definition, ASM differs from APM in two critical ways.

  1. APM focuses exclusively on the performance of an instance of an application, ignoring the complex set of interdependencies that may exist behind that application in the data center. ASM specifically mandates that each application or infrastructure software, operating system, hardware platform, and transactional "hop" be discretely measurable, even if that measurement is inferential. This is critical to ASM's requirement to be able to isolate the source of service-impacting conditions.
  2. APM often requires instrumentation of the application for management and measurability. ASM advocates an application-centric approach, asserting that the application and operating system together have comprehensive visibility of an application's transactions and dependencies,[1] whether on-machine or off-machine, as well as of the operating system itself and the hardware platform it runs on. Further, an in-context agent can also infer network latencies with a high degree of accuracy, though with a lesser degree of accuracy when the transaction occurs between instrumented and non-instrumented platforms (see the sketch after this list).
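
A minimal sketch, assuming clock-synchronized hosts (e.g. via NTP), of how network latency can be inferred from agents on two instrumented platforms: the gap between the caller's "request sent" timestamp and the callee's "request received" timestamp approximates one-way network time. The function name and timestamps below are illustrative, not taken from any particular product.

    def infer_network_latency_ms(sent_at_ms: float, received_at_ms: float) -> float:
        """One-way network latency inferred from two in-context agents' timestamps."""
        return max(received_at_ms - sent_at_ms, 0.0)  # clamp residual clock skew to zero

    # The caller's agent stamps the request at 1000.0 ms; the callee's agent
    # records its arrival at 1003.2 ms, implying roughly 3.2 ms on the wire.
    print(infer_network_latency_ms(1000.0, 1003.2))  # -> 3.2

When only one endpoint is instrumented, the same idea yields only a rougher bound, which is why accuracy drops between instrumented and non-instrumented platforms.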

Application service management extends the concepts of end-user experience management and real user monitoring in that measuring the experience of real users is a critical data point. However, ASM also requires the ability to quickly isolate the root cause of those slow-downs, thereby expanding the scope of real user monitoring/management.

The use of application service management is common for complex, multi-tier transactional applications. Further, the introduction of service-oriented architecture and microservices approaches, together with hypervisor-based virtualization technologies, has proven a catalyst for the adoption of ASM technologies, as complex applications are disproportionately affected by the introduction of hypervisors into an existing environment. A study by the Aberdeen Group indicates that most deployments of virtualization technologies are hampered by their impact on complex transactional applications.

Increasingly, ASM approaches are embedded in automated adaptive controllers that take into account service-level agreement,[2] cloud computing, real-time[3] and energy-aware[4] application-control targets.
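
A minimal sketch, assuming a horizontally scalable service, of the kind of adaptive controller the cited work describes: observed response time is compared with a service-level target and the instance count is adjusted, with a lower bound reflecting an energy-saving goal. All thresholds and names are hypothetical.

    def next_instance_count(current: int, observed_ms: float, sla_ms: float,
                            min_instances: int = 1, max_instances: int = 16) -> int:
        if observed_ms > sla_ms:            # SLA at risk: scale out
            return min(current + 1, max_instances)
        if observed_ms < 0.5 * sla_ms:      # ample headroom: scale in to save energy
            return max(current - 1, min_instances)
        return current                      # within band: hold steady

    print(next_instance_count(current=4, observed_ms=620.0, sla_ms=500.0))  # -> 5
    print(next_instance_count(current=4, observed_ms=180.0, sla_ms=500.0))  # -> 3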

Related Research Articles

In software quality assurance, performance testing is in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.


In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.
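
As a simple illustration, the snippet below times an arbitrary workload; running the same script on different machines or configurations and comparing the mean run time is benchmarking in its simplest form. The workload itself is meaningless and chosen only to consume CPU.

    import time

    def workload() -> int:
        # arbitrary CPU-bound work standing in for a "standard test"
        return sum(i * i for i in range(1_000_000))

    def benchmark(runs: int = 5) -> float:
        start = time.perf_counter()
        for _ in range(runs):
            workload()
        return (time.perf_counter() - start) / runs  # mean seconds per run

    print(f"mean run time: {benchmark():.4f} s")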

In online transaction processing (OLTP), information systems typically facilitate and manage transaction-oriented applications.

Transaction processing is a way of computing that divides work into individual, indivisible operations, called transactions. A transaction processing system (TPS) is a software system, or software/hardware combination, that supports transaction processing.
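
A minimal illustration of an indivisible transaction, using Python's built-in sqlite3 module: either both account updates commit, or neither does. The table and values are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
    conn.commit()

    try:
        with conn:  # commits on success, rolls back automatically on error
            conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
            conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    except sqlite3.Error:
        pass  # on failure both updates are undone, leaving the data consistent

    print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 70, 'bob': 30}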

In the fields of information technology and systems management, application performance management (APM) is the monitoring and management of the performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service. APM is "the translation of IT metrics into business meaning."

A virtual appliance is a pre-configured virtual machine image, ready to run on a hypervisor; virtual appliances are a subset of the broader class of software appliances. Installation of a software appliance on a virtual machine and packaging that into an image creates a virtual appliance. Like software appliances, virtual appliances are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.

In software design, web design, and electronic product design, synthetic monitoring is a monitoring technique that is done by using a simulation or scripted recordings of transactions. Behavioral scripts are created to simulate an action or path that a customer or end-user would take on a site, application or other software. Those paths are then continuously monitored at specified intervals for performance, such as: functionality, availability, and response time measures.
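
A minimal synthetic-monitoring sketch using only the Python standard library: a single scripted "user action" (an HTTP GET against a placeholder URL) is replayed at a fixed interval and its availability and response time are recorded. Real tools script full multi-step journeys; this shows only the measurement loop, and the URL, interval, and check count are placeholders.

    import time
    import urllib.request

    URL = "https://example.com/"   # placeholder endpoint
    INTERVAL_S = 60                # how often the synthetic check runs
    CHECKS = 3                     # keep the demonstration short

    for _ in range(CHECKS):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"available={ok} response_time={elapsed_ms:.0f} ms")
        time.sleep(INTERVAL_S)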

Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.


A computer appliance is a computing device with software or firmware that is specifically designed to provide a particular computing resource. Such devices became known as appliances because of their similarity in role and management to home appliances, which are generally closed and sealed and are not serviceable by the user or owner. The hardware and software are delivered as an integrated product and may even be pre-configured before delivery to a customer, providing a turn-key solution for a particular application. Unlike general-purpose computers, appliances are generally not designed to allow customers to change the software or the underlying operating system, or to flexibly reconfigure the hardware.

IT Application Portfolio Management (APM) is a practice that has emerged in mid to large-size information technology (IT) organizations since the mid-1990s. Application Portfolio Management attempts to use the lessons of financial portfolio management to justify and measure the financial benefits of each application in comparison to the costs of the application's maintenance and operations.

In computing, virtualization or virtualisation is the act of creating a virtual version of something, including virtual computer hardware platforms, storage devices, and computer network resources.

Business transaction management (BTM), also known as business transaction monitoring, application transaction profiling or user defined transaction profiling, is the practice of managing information technology (IT) from a business transaction perspective. It provides a tool for tracking the flow of transactions across IT infrastructure, in addition to detection, alerting, and correction of unexpected changes in business or technical conditions. BTM provides visibility into the flow of transactions across infrastructure tiers, including a dynamic mapping of the application topology.


Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each location being a data center. Cloud computing relies on the sharing of resources to achieve coherence and typically uses a "pay-as-you-go" model, which can help reduce capital expenses but may also lead to unexpected operating expenses for unaware users.

Cloud testing is a form of software testing in which web applications use cloud computing environments to simulate real-world user traffic.

An embedded hypervisor is a hypervisor that supports the requirements of embedded systems.

Nastel Technologies is an information technology (IT) monitoring company that sells software for artificial intelligence for IT operations (AIOps), monitoring and managing middleware, transaction tracking and tracing, IT operational analytics (ITOA), decision support systems (DSS), business transaction management (BTM), and application performance management (APM).

IBM cloud computing is a set of cloud computing services for business offered by the information technology company IBM. IBM Cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models, in addition to the components that make up those clouds.

Software-defined storage (SDS) is a marketing term for computer data storage software for policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage typically includes a form of storage virtualization to separate the storage hardware from the software that manages it. The software enabling a software-defined storage environment may also provide policy management for features such as data deduplication, replication, thin provisioning, snapshots and backup.

Cloud management is the management of cloud computing products and services.


A unikernel is a specialised, single address space machine image constructed by using library operating systems. A developer selects, from a modular stack, the minimal set of libraries which correspond to the OS constructs required for the application to run. These libraries are then compiled with the application and configuration code to build sealed, fixed-purpose images (unikernels) which run directly on a hypervisor or hardware without an intervening OS such as Linux or Windows.

References

  1. Alexander Keller; Gautam Kar (5 May 2000). "Dynamic Dependencies in Application Service Management" (PDF). IBM Research Report.
  2. Benny Rochwerger; David Breitgand; Eliezer Levy; Alex Galis; Kenneth Nagin; Ignacio Martín Llorente; Rubén Montero (6 April 2009). "The Reservoir Model and Architecture for Open Federated Cloud Computing" (PDF). IBM Journal of Research and Development 53 (4): 4:1.
  3. Michael Boniface; Bassem Nasser; Juri Papay; Stephen Phillips; Arturo Servin; Xiaoyu Yang; Zlatko Zlatev; Spyridon Gogouvitis; Gregory Katsaros; Kleopatra Konstanteli; George Kousiouris; Andreas Menychtas; Dimosthenis Kyriazis (2010). "Platform-as-a-Service Architecture for Real-Time Quality of Service Management in Clouds" (PDF). Fifth International Conference on Internet and Web Applications and Services. IEEE.
  4. Anton Beloglazov; Jemal Abawajy; Rajkumar Buyya (4 May 2011). "Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing". Future Generation Computer Systems 28 (5): 755–768.
