Legacy system

In 2011, MS-DOS was still used in some enterprises to run legacy applications, such as this US Navy food service management system.

In computing, a legacy system is an old method, technology, computer system, or application program, "of, relating to, or being a previous or outdated computer system", [1] yet still in use. Often, referring to a system as "legacy" implies that it paved the way for the standards that followed it. It can also imply that the system is out of date or in need of replacement.


Legacy code is old computer source code that is no longer supported on current hardware and environments, or a codebase that is in some respect obsolete or supports something obsolete. Legacy code may be written in programming languages, or use frameworks, external libraries, architectures, or design patterns, that are no longer considered modern, increasing the mental burden and ramp-up time for software engineers who work on the codebase. Legacy code may have no automated tests, or too few, making refactoring dangerous and likely to introduce bugs. [2] Long-lived code is susceptible to software rot, where changes to the runtime environment, or to surrounding software or hardware, may require maintenance or emulation of some kind to keep it working. Legacy code may be kept to support legacy hardware, a separate legacy system, or a legacy customer using an old feature or software version.

While the term usually refers to source code, it can also apply to executable code that no longer runs on a later version of a system, or requires a compatibility layer to do so. An example would be a classic Macintosh application which will not run natively on macOS, but runs inside the Classic environment, or a Win16 application running on Windows XP using the Windows on Windows feature in XP.

Examples of legacy hardware include legacy ports such as PS/2 and VGA ports, and CPUs with older instruction sets that are incompatible with, for example, newer operating systems. Examples of legacy software include legacy file formats such as .swf for Adobe Flash or .123 for Lotus 1-2-3, and text files encoded with legacy character encodings such as EBCDIC.
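As a small illustration of handling a legacy character encoding, the sketch below (Python, assuming the cp037 EBCDIC code page used on many IBM systems) decodes EBCDIC bytes into Unicode text and re-encodes them as UTF-8 for modern tools; the byte values shown are illustrative.

```python
# Decoding text stored in a legacy EBCDIC encoding into Unicode, then
# re-encoding it for modern tools. "cp037" is one common EBCDIC code page
# (US/Canada); the code page used by a particular legacy system may differ.
ebcdic_bytes = bytes([0xC8, 0x85, 0x93, 0x93, 0x96])  # "Hello" in code page 037

text = ebcdic_bytes.decode("cp037")    # EBCDIC bytes -> Unicode string
print(text)                            # -> Hello

utf8_bytes = text.encode("utf-8")      # modern encoding for downstream systems
```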

Overview

Although unsupported since April 2014, Windows XP has endured continued use in fields such as ATM operating system software.

The first use of the term legacy to describe computer systems probably occurred in the 1960s. [3] By the 1980s it was commonly used to refer to existing computer systems to distinguish them from the design and implementation of new systems. The term was often heard during a conversion process, for example, when moving data from the legacy system to a new database.

While the term may suggest that some engineers consider a system out of date, a legacy system can continue to be used for a variety of reasons. It may simply be that the system still meets the users' needs. The decision to keep an old system may also be influenced by economic factors such as return on investment challenges or vendor lock-in, by the inherent challenges of change management, or by a variety of reasons other than functionality. Backward compatibility (such as the ability of newer systems to handle legacy file formats and character encodings) is a goal that software developers often include in their work.

Even if it is no longer used, a legacy system may continue to impact the organization due to its historical role. Historic data may not have been converted into the new system format and may exist within the new system with the use of a customized schema crosswalk, or may exist only in a data warehouse. In either case, the effect on business intelligence and operational reporting can be significant. A legacy system may include procedures or terminology which are no longer relevant in the current context, and may hinder or confuse understanding of the methods or technologies used.
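A schema crosswalk amounts to a mapping from the legacy system's field names and formats to those of the new system. A minimal sketch of the idea is shown below (Python, with hypothetical field names; a real crosswalk would also handle type, unit, and code-list conversions).

```python
# A minimal schema crosswalk: translate records exported from a legacy
# system onto the field names used by a new system. Field names here are
# hypothetical; real crosswalks also convert types, units, and code lists.
LEGACY_TO_NEW = {
    "CUST_NO": "customer_id",
    "CUST_NM": "customer_name",
    "ORD_DT": "order_date",
}

def crosswalk(legacy_record: dict) -> dict:
    """Map a legacy record into the new schema, dropping unmapped fields."""
    return {new_name: legacy_record[old_name]
            for old_name, new_name in LEGACY_TO_NEW.items()
            if old_name in legacy_record}

print(crosswalk({"CUST_NO": "000123", "CUST_NM": "ACME", "ORD_DT": "1999-12-31"}))
# -> {'customer_id': '000123', 'customer_name': 'ACME', 'order_date': '1999-12-31'}
```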

Organizations can have compelling reasons for keeping a legacy system, such as the system still meeting users' needs, the cost and risk of replacing it, the challenges of change management, and vendor lock-in.

Problems posed by legacy computing

Legacy systems are considered to be potentially problematic by some software engineers for several reasons, [4] including the cost of maintaining obsolete hardware and software, the scarcity of staff who still understand the system, security vulnerabilities that vendors no longer patch, and difficulty integrating with newer systems.

Improvements on legacy software systems

Where it is impossible to replace a legacy system through application retirement, it is still possible to enhance (or "re-face") it. Most development effort goes into adding new interfaces to a legacy system. The most prominent technique is to provide a Web-based interface to a terminal-based mainframe application. This may reduce staff productivity due to slower response times and slower mouse-based operator actions, yet it is often seen as an "upgrade" because the interface style is familiar to unskilled users and is easy for them to use. John McCormick discusses such strategies that involve middleware. [9]
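As an illustration of this kind of re-facing, the sketch below (Python) exposes a terminal-style legacy service through a small web front end. The host, port, and command are hypothetical placeholders, and a plain TCP exchange stands in for the 3270/5250 terminal protocols that middleware products would normally handle.

```python
# A web "facade" over a terminal-based legacy application: an HTTP request
# is translated into one command sent to the legacy service, and the
# resulting screen text is returned as an HTML page.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

LEGACY_HOST, LEGACY_PORT = "legacy.example.com", 2023   # hypothetical endpoint

def fetch_screen(command: bytes) -> str:
    """Send one command to the legacy terminal service and return its reply."""
    with socket.create_connection((LEGACY_HOST, LEGACY_PORT), timeout=5) as conn:
        conn.sendall(command + b"\r\n")
        return conn.recv(4096).decode("ascii", errors="replace")

class WebFacade(BaseHTTPRequestHandler):
    def do_GET(self):
        screen = fetch_screen(b"DISPLAY MENU")          # hypothetical command
        body = f"<html><body><pre>{screen}</pre></body></html>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), WebFacade).serve_forever()
```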

Printing improvements are problematic because legacy software systems often add no formatting instructions, or they use protocols that are not usable with modern PC/Windows printers. A print server can be used to intercept the data and translate it into a more modern format. Rich Text Format (RTF) or PostScript documents may be created in the legacy application and then interpreted at a PC before being printed.
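The sketch below (Python) shows the kind of translation such a print server might perform: plain, unformatted legacy report lines are wrapped in a minimal PostScript program that a modern printer or viewer can interpret. The sample report text is hypothetical.

```python
# Wrap unformatted legacy print output in minimal PostScript so a modern
# printer or print server can render it with a fixed-pitch font.
def text_to_postscript(lines, font="Courier", size=10, left=72, top=720, leading=12):
    def escape(s):  # escape characters that are special inside PostScript strings
        return s.replace("\\", r"\\").replace("(", r"\(").replace(")", r"\)")
    out = ["%!PS", f"/{font} findfont {size} scalefont setfont"]
    y = top
    for line in lines:
        out.append(f"{left} {y} moveto ({escape(line)}) show")
        y -= leading
    out.append("showpage")
    return "\n".join(out)

legacy_report = ["INVENTORY REPORT", "ITEM 001   QTY 42"]   # hypothetical output
print(text_to_postscript(legacy_report))
```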

Biometric security measures are difficult to implement on legacy systems. A workable solution is to use a Telnet or HTTP proxy server to sit between users and the mainframe to implement secure access to the legacy application.
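A minimal sketch of that proxy idea is shown below (Python): an HTTP proxy refuses requests that do not carry a valid token, and only then would forward them to the legacy host. The token check stands in for whatever authentication step (biometric or otherwise) the organization uses; all names are hypothetical.

```python
# An authenticating proxy placed in front of a legacy application: requests
# without a valid token are rejected before anything reaches the legacy host.
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_TOKENS = {"token-issued-after-biometric-login"}   # hypothetical token store

class AuthProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        if token not in VALID_TOKENS:
            self.send_response(401)                     # reject unauthenticated users
            self.end_headers()
            return
        # Forwarding to the legacy mainframe (e.g. over Telnet or HTTP) would
        # happen here; this stub simply confirms that access was granted.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Access to legacy application granted\n")

if __name__ == "__main__":
    HTTPServer(("", 8443), AuthProxy).serve_forever()
```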

A change being undertaken in some organizations is to switch to automated business process (ABP) software, which generates complete systems. These systems can then interface with the organization's legacy systems and use them as data repositories. This approach can provide a number of significant benefits: the users are insulated from the inefficiencies of their legacy systems, and the changes can be incorporated quickly and easily in the ABP software.
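The sketch below (Python, with hypothetical class and record names) illustrates the general pattern: newer process software talks to an abstract repository interface, and an adapter hides the fact that the data actually lives in a legacy system.

```python
# Newer process software uses the legacy system only as a data repository,
# behind an adapter that hides how the data is actually retrieved.
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

class LegacyCustomerRepository(CustomerRepository):
    """Adapter that fronts the legacy system with the repository interface."""
    def get_customer(self, customer_id: str) -> dict:
        # In practice this would query the legacy system (terminal session,
        # file transfer, database gateway); a canned record stands in here.
        return {"customer_id": customer_id, "name": "ACME"}

def new_business_process(repo: CustomerRepository, customer_id: str) -> str:
    customer = repo.get_customer(customer_id)
    return f"Processing order for {customer['name']}"

print(new_business_process(LegacyCustomerRepository(), "000123"))
```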

Model-driven reverse and forward engineering approaches can be also used for the improvement of legacy software. [10]

NASA example

Andreas M. Hein researched the use of legacy systems in space exploration at the Technical University of Munich. According to Hein, legacy systems are attractive for reuse if an organization has the capabilities for verification, validation, testing, and operational history. [11] [12] These capabilities must be integrated into various software life cycle phases such as development, implementation, usage, or maintenance. For software systems, the capabilities to use and maintain the system are crucial. Otherwise the system will become less and less understandable and maintainable.

According to Hein, verification, validation, testing, and operational history increase confidence in a system's reliability and quality. However, accumulating this history is often expensive. NASA's now-retired Space Shuttle program used a large amount of 1970s-era technology. Replacement was cost-prohibitive because of the expensive requirements for flight certification: the original hardware had completed the costly integration and certification requirements for flight, but any new equipment would have had to go through that entire process again. This long and detailed process required extensive tests of the new components in their new configurations before a single unit could be used in the Space Shuttle program. Thus any new system that started the certification process would become a de facto legacy system by the time it was approved for flight.

Additionally, the entire Space Shuttle system, including ground and launch vehicle assets, was designed to work together as a closed system. Since the specifications did not change, all of the certified systems and components performed well in the roles for which they were designed. [13] Even before the Shuttle was scheduled to be retired in 2010, NASA found it advantageous to keep using many pieces of 1970s technology rather than to upgrade those systems and recertify the new components.

Perspectives on legacy code

Some in the software engineering community prefer to describe "legacy code" without the connotation of obsolescence. Among the most prevalent neutral conceptions are source code inherited from someone else and source code inherited from an older version of the software. Eli Lopian, CEO of Typemock, has defined it as "code that developers are afraid to change". [14] Michael Feathers [15] introduced a definition of legacy code as code without tests, which reflects the perspective that legacy code is difficult to work with in part due to a lack of automated regression tests. He also defined characterization tests as a way to start putting legacy code under test.
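A minimal sketch of a characterization test is shown below (Python, with a hypothetical legacy_rounding function standing in for real legacy code): the test records the code's current behaviour, obtained by running it, so that later refactoring can be checked against it.

```python
# A characterization test pins down the current, observed behaviour of
# legacy code before it is changed; the expected values come from running
# the code, not from a specification.
import unittest

def legacy_rounding(amount: float) -> int:
    # Stand-in for untested legacy code whose exact behaviour must be preserved.
    return int(amount + 0.5)

class CharacterizeLegacyRounding(unittest.TestCase):
    def test_existing_behaviour_is_preserved(self):
        self.assertEqual(legacy_rounding(2.5), 3)
        self.assertEqual(legacy_rounding(-2.5), -2)   # surprising, but current behaviour

if __name__ == "__main__":
    unittest.main()
```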

Ginny Hendry characterized the creation of code as a "challenge" to current coders to create code that is "like other legacies in our lives—like the antiques, heirlooms, and stories that are cherished and lovingly passed down from one generation to the next. What if legacy code was something we took pride in?". [16]

Additional uses of the term Legacy in computing

The term legacy support is often used in conjunction with legacy systems. The term may refer to a feature of modern software; for example, operating systems with "legacy support" can detect and use older hardware. It may also refer to a business function, e.g., a software or hardware vendor that is supporting, or providing software maintenance for, older products.

A "legacy" product may be a product that is no longer sold, has lost substantial market share, or is a version of a product that is not current. A legacy product may have some advantage over a modern product making it appealing for customers to keep it around. A product is only truly "obsolete" if it has an advantage to nobody—if no person making a rational decision would choose to acquire it new.

The term "legacy mode" often refers specifically to backward compatibility. A software product that is capable of performing as though it were a previous version of itself, is said to be "running in legacy mode". This kind of feature is common in operating systems and internet browsers, where many applications depend on these underlying components.

The computer mainframe era saw many applications running in legacy mode. In the modern business computing environment, n-tier or 3-tier architectures are more difficult to place into legacy mode, as they include many components making up a single system.

Virtualization technology is a recent innovation allowing legacy systems to continue to operate on modern hardware by running older operating systems and browsers on a software system that emulates legacy hardware.

Brownfield architecture

Programmers have borrowed the term brownfield from the construction industry, where previously developed land (often polluted and abandoned) is described as brownfield. [17]

Alternative view

There is an alternative, favorable opinion—growing since the end of the dot-com bubble in 1999—that legacy systems are simply computer systems in working use:

"Legacy code" often differs from its suggested alternative by actually working and scaling.

IT analysts estimate that the cost of replacing business logic is about five times that of reuse, [18] even discounting the risk of system failures and security breaches. Ideally, businesses would never have to rewrite most core business logic: debits = credits is a perennial requirement.

The IT industry is responding with "legacy modernization" and "legacy transformation": refurbishing existing business logic with new user interfaces, sometimes using screen scraping and service-enabled access through web services. These techniques allow organizations to understand their existing code assets (using discovery tools), provide new user and application interfaces to existing code, improve workflow, contain costs, minimize risk, and enjoy classic qualities of service (near 100% uptime, security, scalability, etc.). [19]

This trend also invites reflection on what makes legacy systems so durable. Technologists are relearning the importance of sound architecture from the start, to avoid costly and risky rewrites. The most common legacy systems tend to be those which embraced well-known IT architectural principles, with careful planning and strict methodology during implementation. Poorly designed systems often do not last, both because they wear out and because their inherent faults invite replacement. Thus, many organizations are rediscovering the value of both their legacy systems and the theoretical underpinnings of those systems.



References

  1. "Merriam-Webster" . Retrieved June 22, 2013.
  2. Feathers, Michael C. (2005). Working Effectively with Legacy Code. Upper Saddle River, NJ: Prentice Hall Professional Technical Reference. p. 15. ISBN 0-13-293174-5. OCLC 660166658.
  3. Tawde, Swati. "Legacy System". educba.
  4. (for example, see Bisbal et al., 1999).
  5. Lamb, John (June 2008). "Legacy systems continue to have a place in the enterprise". Computer Weekly. Retrieved 27 October 2014.
  6. Stephanie Overby (2005-05-01). "Comair's Christmas Disaster: Bound To Fail - CIO.com - Business Technology Leadership". CIO.com. Retrieved 2012-04-29.
  7. Razermouse (2011-05-03). "The Danger of Legacy Systems". Mousesecurity.com. Archived from the original on March 23, 2012. Retrieved 2012-04-29.
  8. "Benefits of Mainframe Modernization". Modernization Hub. Retrieved 2017-08-23.
  9. McCormick, John (2000-06-02). "Mainframe-web middleware". Gcn.com. Retrieved 2012-04-29.
  10. Menychtas, Andreas; Konstanteli, Kleopatra; Alonso, Juncal; Orue-Echevarria, Leire; Gorronogoitia, Jesus; Kousiouris, George; Santzaridou, Christina; Bruneliere, Hugo; Pellens, Bram; Stuer, Peter; Strauss, Oliver; Senkova, Tatiana; Varvarigou, Theodora (2014), "Software modernization and cloudification using the ARTIST migration methodology and framework", Scalable Computing: Practice and Experience, 15 (2), doi: 10.12694/scpe.v15i2.980
  11. A.M. Hein (2014), How to Assess Heritage Systems in the Early Phases?, 6th International Systems & Concurrent Engineering for Space Applications Conference 2014, ESA
  12. A.M. Hein (2016), Heritage Technologies in Space Programs - Assessment Methodology and Statistical Analysis, PhD thesis Faculty of Mechanical Engineering, Technical University of Munich
  13. A.M. Hein (2014), How to Assess Heritage Systems in the Early Phases?, 6th International Systems & Concurrent Engineering for Space Applications Conference 2014, ESA, p. 3
  14. Lopian, Eli (May 15, 2018). "Defining Legacy Code". Retrieved June 10, 2019.
  15. Michael Feathers, Working Effectively with Legacy Code (ISBN 0-13-117705-2).
  16. Ginny Hendry (11 Jul 2014). "Take Pride in Your Legacy (Code)". Retrieved 2021-10-07.
  17. "Definition of greenfield and brownfield deployment". Searchunifiedcommunications.techtarget.com. Retrieved 2012-04-29.
  18. "Cost Considerations For A Mainframe to Cloud Migration Project". Kumaran Systems.
  19. Comella-Dorda, Santiago (2000-04-01). "A Survey of Legacy System Modernization Approaches" (PDF). SEI Digital Library.
