| Abbreviation | ARM |
|---|---|
| Status | Published |
| Year started | 1996 |
| Latest version | 4.1 version 1 (2007) |
| Organization | The Open Group |
| Authors | Tivoli Software, Hewlett-Packard, The Open Group |
| Domain | Enterprise application integration, Application programming interfaces |
| Website | collaboration |
Application Response Measurement (ARM) is an open standard published by the Open Group for monitoring and diagnosing performance bottlenecks within complex enterprise applications that use loosely-coupled designs or service-oriented architectures.
It includes an API for C and Java that allows timing information associated with each step in processing a transaction to be logged to a remote server for later analysis.
Version 1 of ARM was developed jointly by Tivoli Software and Hewlett-Packard in 1996. Version 2 was developed by an industry partnership (the ARM Working Group) and became available in December 1997 as an open standard approved by the Open Group. ARM 4.0 was released in 2003 and revised in 2004.
As of 2007, ARM 4.1 version 1 is the latest version of the ARM standard.
Modern application designs tend to be more complex and distributed across networks. This creates new challenges for today's development and monitoring tools, which need to provide application developers, system administrators, and application administrators with the information they need.
Within a distributed application it is not easy to determine whether the application is performing well. The following questions help in evaluating distributed applications:
ARM helps answer these questions. It is worth noting that the benefits of ARM as defined here now cover only a subset of the broader application performance management space.
The main approach to using ARM is as follows:
ARM defines the following concepts to provide the described functionality.
Complex distributed applications usually consist of many individual applications (processes). To capture the relationships between these individual applications, the concept of an ARM application was introduced with version 4.0 of the ARM standard. Each ARM transaction executes within exactly one ARM application.
The transaction is the central concept of the ARM standard and represents a single performance measurement. A transaction definition specifies the type (name) and additional attributes of an ARM transaction. A transaction can be executed (started and stopped) several times, which results in multiple measurements. Each measurement has basic attributes such as completion status (good, failed, or aborted), start and stop timestamps, the resulting duration, and the system address (host) on which it was executed. In addition, metrics or context properties can be associated with a transaction measurement.
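For illustration, the following minimal sketch shows how an application might report such a measurement through the ARM 4.0 Java binding. The application and transaction names ("WebShop", "Checkout") are hypothetical, and the factory handling and exact method signatures are assumptions based on the org.opengroup.arm40.transaction package; a vendor's implementation may differ.

```java
// Sketch of instrumenting one transaction with the ARM 4.0 Java binding.
// Class and method names follow the org.opengroup.arm40.transaction package;
// exact signatures and how the factory is obtained are assumptions and may
// differ in a particular vendor's ARM implementation.
import org.opengroup.arm40.transaction.ArmApplication;
import org.opengroup.arm40.transaction.ArmApplicationDefinition;
import org.opengroup.arm40.transaction.ArmConstants;
import org.opengroup.arm40.transaction.ArmTransaction;
import org.opengroup.arm40.transaction.ArmTransactionDefinition;
import org.opengroup.arm40.transaction.ArmTransactionFactory;

public class CheckoutInstrumentation {

    private final ArmTransactionFactory factory;   // supplied by the ARM vendor
    private final ArmApplication app;
    private final ArmTransactionDefinition tranDef;

    public CheckoutInstrumentation(ArmTransactionFactory factory) {
        this.factory = factory;
        // Register the ARM application and transaction type once (hypothetical names).
        ArmApplicationDefinition appDef =
                factory.newArmApplicationDefinition("WebShop", null, null);
        this.app = factory.newArmApplication(appDef, null, null, null);
        this.tranDef = factory.newArmTransactionDefinition(appDef, "Checkout", null, null);
    }

    public void processCheckout(Runnable businessLogic) {
        // Each start()/stop() pair produces one transaction measurement.
        ArmTransaction tran = factory.newArmTransaction(app, tranDef);
        tran.start();
        try {
            businessLogic.run();
            tran.stop(ArmConstants.STATUS_GOOD);
        } catch (RuntimeException e) {
            tran.stop(ArmConstants.STATUS_FAILED);
            throw e;
        }
    }
}
```

The measurement agent behind the binding records the timestamps, duration, status, and host for each start/stop pair and forwards them to the management system for analysis.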
A system address uniquely identifies a host by its name, IP address, or other unique information.
ARM correlators are used to express a correlation between two ARM transactions. This is a synchronous relationship, also known as a parent-child relationship. Typically, a parent transaction triggers a child transaction and continues its own execution only when the child transaction has finished. Using correlators, a complex transaction can be split into several nested child transactions, and each child transaction can have child transactions of its own. The result is a tree of transactions with the topmost parent transaction as its root.
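As a sketch of this relationship, the fragment below assumes that a started parent transaction exposes its correlator via getCorrelator() and that the child's start() accepts a parent correlator, in line with the ARM 4.0 Java binding; these overloads and the helper itself are assumptions and may differ by implementation. In a distributed call, the correlator bytes would typically travel with the request (for example in a message header) so the child transaction can be started with them on the remote host.

```java
// Sketch of nesting a child transaction under a parent via an ARM correlator.
// getCorrelator() and the start(ArmCorrelator) overload are assumed from the
// ARM 4.0 Java binding and may differ in a concrete implementation.
import org.opengroup.arm40.transaction.ArmConstants;
import org.opengroup.arm40.transaction.ArmCorrelator;
import org.opengroup.arm40.transaction.ArmTransaction;

public class CorrelatedCalls {

    /** Runs childWork as a child measurement of the given (already started) parent. */
    static void runChild(ArmTransaction parent, ArmTransaction child, Runnable childWork) {
        // The parent's correlator uniquely identifies its current measurement; passing
        // it to the child's start() links the two measurements into a parent-child tree.
        ArmCorrelator parentToken = parent.getCorrelator();
        child.start(parentToken);
        try {
            childWork.run();
            child.stop(ArmConstants.STATUS_GOOD);
        } catch (RuntimeException e) {
            child.stop(ArmConstants.STATUS_FAILED);
            throw e;
        }
    }
}
```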
ARM 4.1 defines asynchronous relationships to support data flow driven architectures.
ARM metrics can be used to capture additional information about the execution of a transaction. ARM defines a set of metric types for different purposes, such as counters, gauges, and plain numeric values.
Properties are name-value string pairs that qualify an ARM transaction or an ARM application beyond the basic definition of these entities; they allow additional context information to be associated with each transaction measurement.
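Because metrics and properties are simply typed values and name-value strings attached to a measurement, the following purely illustrative model (not the ARM API) sketches what one collected measurement record might carry:

```java
// Purely illustrative model (not the ARM API) of the data attached to one
// transaction measurement: completion status, timestamps, metric values of the
// kinds ARM distinguishes (counter, gauge, numeric value), and name-value
// context properties.
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

public class MeasurementRecord {

    public enum Status { GOOD, FAILED, ABORTED }
    public enum MetricKind { COUNTER, GAUGE, NUMERIC }

    public record Metric(String name, MetricKind kind, double value) {}

    public final String transactionName;
    public final Status status;
    public final Instant start;
    public final Instant stop;
    public final Map<String, Metric> metrics = new LinkedHashMap<>();
    public final Map<String, String> properties = new LinkedHashMap<>(); // name-value pairs

    public MeasurementRecord(String transactionName, Status status,
                             Instant start, Instant stop) {
        this.transactionName = transactionName;
        this.status = status;
        this.start = start;
        this.stop = stop;
    }

    // Duration is derived from the start and stop timestamps.
    public long durationMillis() {
        return stop.toEpochMilli() - start.toEpochMilli();
    }
}
```

A record like this might, for example, carry a gauge metric for queue depth and a hypothetical property such as customer_tier=gold, so that slow measurements can later be grouped by that context.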
The ARM user defines the name of the user on whose behalf a transaction measurement was executed.
The following applications are already instrumented with ARM calls:
In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program to process HTTP or HTTPS user requests.
z/OS is a 64-bit operating system for IBM z/Architecture mainframes, introduced by IBM in October 2000. It derives from and is the successor to OS/390, which in turn was preceded by a string of MVS versions. Like OS/390, z/OS combines a number of formerly separate, related products, some of which are still optional. z/OS has the attributes of modern operating systems but also retains much of the older functionality that originated in the 1960s and is still in regular use; z/OS is designed for backward compatibility.
Jakarta Enterprise Beans is one of several Java APIs for modular construction of enterprise software. EJB is a server-side software component that encapsulates business logic of an application. An EJB web container provides a runtime environment for web-related software components, including computer security, Java servlet lifecycle management, transaction processing, and other web services. The EJB specification is a subset of the Java EE specification.
Jakarta EE, formerly Java Platform, Enterprise Edition and Java 2 Platform, Enterprise Edition (J2EE), is a set of specifications extending Java SE with specifications for enterprise features such as distributed computing and web services. Jakarta EE applications are run on reference runtimes, which can be microservices or application servers, which handle transactions, security, scalability, concurrency, and management of the components they deploy.
IBM CICS is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE.
Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices and service-oriented networks. Those resources are represented by objects called MBeans. In the API, classes can be dynamically loaded and instantiated. Managing and monitoring applications can be designed and developed using the Java Dynamic Management Kit.
Microsoft Message Queuing (MSMQ) is a message queue implementation developed by Microsoft and deployed in its Windows Server operating systems since Windows NT 4 and Windows 95. Windows Server 2016 and Windows 10 also include this component. In addition to its mainstream server platform support, MSMQ has been incorporated into Microsoft Embedded platforms since 1999 and the release of Windows CE 3.0.
Jakarta Connectors are a set of Java programming language tools designed for connecting application servers and enterprise information systems (EIS) as a part of enterprise application integration (EAI). While JDBC is specifically used to establish connections between Java applications and databases, JCA provides a more versatile architecture for connecting to legacy systems.
Tuxedo is a middleware platform used to manage distributed transaction processing in distributed computing environments. Tuxedo is a transaction processing system or transaction-oriented middleware, or enterprise application server, for a variety of systems and programming languages. Developed by AT&T in the 1980s, it became a software product of Oracle Corporation in 2008 when it acquired BEA Systems. Tuxedo is now part of Oracle Fusion Middleware.
WebSphere Application Server (WAS) is a software product that performs the role of a web application server. More specifically, it is a software framework and middleware that hosts Java-based web applications. It is the flagship product within IBM's WebSphere software suite. It was initially created by Donald F. Ferguson, who later became CTO of Software for Dell. The first version was launched in 1998. The project was an offshoot of the IBM HTTP Server team, starting with the Domino Go web server.
In software engineering, profiling is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering.
A hardware security module (HSM) is a physical computing device that safeguards and manages secrets, performs encryption and decryption functions for digital signatures, strong authentication and other cryptographic functions. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. A hardware security module contains one or more secure cryptoprocessor chips.
To quiesce is to pause or alter a device or application to achieve a consistent state, usually in preparation for a backup or other maintenance.
In the fields of information technology and systems management, application performance management (APM) is the monitoring and management of the performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service. APM is "the translation of IT metrics into business meaning."
Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Hadoop was originally designed for computer clusters built from commodity hardware, which is still the common use. It has since also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. The software was created by Russian developer Igor Sysoev and publicly released in 2004. Nginx is free and open-source software, released under the terms of the 2-clause BSD license. A large fraction of web servers use Nginx, often as a load balancer.
Web2py is an open-source web application framework written in the Python programming language. Web2py allows web developers to program dynamic web content using Python. Web2py is designed to help reduce tedious web development tasks, such as developing web forms from scratch, although a web developer may build a form from scratch if required.
Rational Performance Tester is a tool for automated performance testing of web- and server-based applications from the Rational Software division of IBM. It allows users to create tests that mimic user transactions between an application client and server. During test execution, these transactions are replicated in parallel to simulate a large transaction load on the server. Server response time measurements are collected to identify the presence and cause of any potential application bottlenecks. It is primarily used by Software Quality Assurance teams to perform automated software performance testing.
Enduro/X is an open-source middleware platform for distributed transaction processing. It is built on proven APIs such as the X/Open group's XATMI and XA. The platform is designed for building real-time microservices based applications with a clusterization option. Enduro/X functions as an extended drop-in replacement for Oracle Tuxedo. The platform uses in-memory POSIX kernel queues, which ensures high interprocess communication throughput.
Hyperledger is an umbrella project of open source blockchains and related tools that the Linux Foundation started in December 2015. IBM, Intel, and SAP Ariba have contributed to support the collaborative development of blockchain-based distributed ledgers. It was renamed the Hyperledger Foundation in October 2021.