Bare machine computing

Bare Machine Computing (BMC) is a computer architecture based on bare machines. In the BMC paradigm, applications run without the support of any operating system (OS) or centralized kernel; that is, no intermediary software is loaded onto the bare machine before applications run. The applications, called bare machine applications or simply BMC applications, do not use any persistent storage such as a hard disk; instead, they are stored on detachable mass storage such as a USB flash drive. A BMC program consists of a single application, or a small set of applications (an application suite), that runs as a single executable within one address space. BMC applications have direct access to the necessary hardware resources. They are self-contained, self-managed and self-controlled entities that boot, load and run without using any other software components or external software. BMC applications have inherent security due to their design: there are no OS-related vulnerabilities, and each application contains only the necessary (minimal) functionality. There is no privileged mode in a BMC system, since applications run only in user mode. Also, application code is statically compiled, so there is no means to dynamically alter BMC program flow during execution.

History

In the early days of computing, applications communicated directly with the hardware and there was no operating system. As applications grew larger, encompassing various domains, operating systems were invented; they served as middleware, providing hardware abstractions to applications. OSes have since grown immensely in size and complexity, prompting attempts to reduce OS overhead and improve performance, including Microkernel, Exokernel, Tiny-OS, OS-Kit, [1] Palacios and Kitten, [2] IO-Lite, [3] bare-metal Linux, IBM Libra and other lean kernels. In addition to these approaches, embedded systems such as smartphones closely integrate a small, dedicated portion of an OS and a given set of applications with the hardware. There are also a myriad of industrial control and gaming applications that run directly on the hardware. In most of these systems, however, the hardware is not open to running general-purpose applications.

Bare machine computing originated with the application object (AO) concept invented by Karne at Towson University. [4] It evolved over the years into dispersed operating system computing (DOSC), [5] and eventually into the BMC paradigm.

Compared to conventional computing

The BMC paradigm differs from conventional computing in several ways. No centralized kernel or OS runs during the execution of BMC applications. A bare machine in the BMC paradigm also does not own or store valuable resources, and it can be used to run general-purpose computing applications. Such characteristics are not found in conventional computing systems, including embedded systems and systems on a chip (SoC). In addition, the BMC concept is a minimalist approach intended to achieve simplicity, smaller code sizes and security. [6]

In bare machine computing, the computing device is bare and its programs communicate directly with the hardware; application and systems programs are one and the same. There is no user mode or kernel mode, and when a given application suite is running, nothing else runs on the machine. The entire program is written in a single programming language, C/C++, with very little assembly code, and the application programmer controls all hardware resources. Execution is event-driven, which avoids the need for a centralized kernel. A given BMC application suite can run on a given instruction set architecture (ISA) indefinitely, as long as the ISA remains upward compatible. This approach is friendly to green computing, since hardware and software need not be discarded because of planned obsolescence.
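
As a concrete illustration, the following minimal sketch shows in C what an event-driven bare machine program of this kind might look like. It is not code from the BMC project; the entry-point name, the polling loop and the legacy x86 hardware addresses (VGA text memory, PS/2 keyboard ports) are assumptions made for this example, and the code presumes a custom boot loader has already placed the processor in 32-bit protected mode and that the program is compiled as a freestanding executable.

```c
/* Minimal sketch of an event-driven bare-machine program in the spirit of the
 * BMC programming model. Hypothetical example, not BMC project code: it assumes
 * a legacy x86 PC already in 32-bit protected mode and a freestanding build. */
#include <stdint.h>

#define VGA_TEXT_BUFFER ((volatile uint16_t *)0xB8000)  /* legacy VGA text memory */
#define KBD_STATUS_PORT 0x64                            /* PS/2 controller status port */
#define KBD_DATA_PORT   0x60                            /* PS/2 controller data port */

/* Read one byte from an x86 I/O port; the only assembly the program needs. */
static inline uint8_t inb(uint16_t port) {
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Write a character straight into VGA text memory: no driver stack, no system call. */
static void put_char(int pos, char c) {
    VGA_TEXT_BUFFER[pos] = (uint16_t)(unsigned char)c | 0x0700;  /* grey on black */
}

/* Entry point reached from the boot loader; the application owns the whole machine. */
void bare_main(void) {
    int pos = 0;
    for (;;) {                                      /* single event loop, no kernel, no scheduler */
        if (inb(KBD_STATUS_PORT) & 0x01) {          /* keyboard event pending? */
            uint8_t scancode = inb(KBD_DATA_PORT);  /* handle the event directly */
            put_char(pos++ % 2000, '0' + (scancode % 10));
        }
        /* other devices (NIC, USB, timer) would be polled in the same loop */
    }
}
```

In this style the application itself takes on the roles normally played by device drivers and the kernel's scheduler: it polls each device in a single loop and handles every event directly, which is the property the BMC literature describes as avoiding a centralized kernel.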

Applications and research

The BMC paradigm has been used to implement webservers, [7] split servers, [8] [9] VoIP, [10] a SIP server, [11] email, [12] webmail, [13] a text-based browser, [14] security protocols, [15] [16] file systems, [17] [18] [19] RAID, [20] a transformed bare SQLite, [21] [22] middleware for network interface cards (NICs), [23] and Ethernet bonding on a BMC webserver with dual NICs. [24] Success in transforming conventional Windows or Linux applications to run as BMC applications would pave the way for new uses of the BMC paradigm. [25] Design issues in running a webserver on a bare PC with a 32-bit multi-core architecture using TCP have been described, [26] a client/server protocol for web-based communication over UDP using a 32-bit architecture on a bare machine has been demonstrated, [27] and the development and implementation of computer applications without any OS or kernel on a 64-bit multi-core architecture has been shown. [28]

References

  1. "The OS Kit Project". Salt Lake, Utah: School of Computing, University of Utah. June 2002.
  2. J. Lange et al., “Palacios and Kitten: New high performance operating systems for scalable virtualized and native supercomputing,” 24th IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2010, pp. 1-12.
  3. Pai, V. S.; Druschel, P.; Zwaenepoel, W. (February 2000). "IO-Lite: A Unified I/O Buffering and Caching System" (PDF). ACM Transactions on Computer Systems. 18 (1): 37–66. doi:10.1145/332799.332895. S2CID   5280787.
  4. Karne, R. K. (December 1995). "Object-Oriented Computer Architectures for New Generation of Applications". Computer Architecture News. 23 (5): 8–19. doi:10.1145/218328.218332. S2CID 880971.
  5. Karne, R.K, Venkatasamy, K (Karthick Jaganathan), Ahmed, T. Dispersed Operating System Computing (DOSC), Onward Track, OOPSLA 2005, San Diego, CA, October 2005.
  6. Soumya, S.; Guerin, R.; Hosanagar, K. (September 2011). "Functionality-Rich vs. Minimalist Platforms: A Two-Sided Market Analysis". ACM Computer Communication Review. 41 (5): 36–43. doi:10.1145/2043165.2043171. S2CID   890141.
  7. He, L., Karne, R. K., Wijesinha, A.L., and Emdadi, A. Design and Performance of a Bare PC Web Server, International Journal of Computers and Their Applications (IJCA), June 2008.
  8. B. Rawal, R. K. Karne, and A. L. Wijesinha. Splitting HTTP Requests on Two Servers, The Third International Conference on Communication Systems and Networks: COMSNETS 2011, January 2011, Bangalore, India.
  9. B. Rawal, R. K. Karne, and A. L. Wijesinha. “Mini Web server clusters for HTTP request splitting,” IEEE International Conference on High-Performance Computing and Communications (HPCC), pp. 94-100.
  10. G. H. Khaksari, A. L. Wijesinha, R. K. Karne, L. He, and S. Girumala, “A peer-to-peer bare PC VoIP application,” 4th IEEE Consumer Communications and Networking Conference (CCNC), 2007, pp. 803-807.
  11. A. Alexander, R. Yasinovskyy, A. Wijesinha, and R. Karne, "SIP Server Implementation and Performance on a Bare PC," International Journal in Advances on Telecommunications, vol. 4, no. 1 and 2, 2011.
  12. Ford, G. H., Karne, R. K., Wijesinha, A. L., and Appiah-Kubi, P. The Design and Implementation of a Bare PC Email Server, 33rd Annual IEEE International Computer Software and Applications Conference (COMPSAC 2009), Seattle, Washington, July 2009, pp. 480-485.
  13. P. Appiah-kubi, R. K. Karne, and A. L. Wijesinha. The Design and Performance of a Bare PC Webmail Server, The 12th IEEE International Conference on High-Performance Computing and Communications, AHPCC 2010, Sept 1-3, 2010, Melbourne, Australia, p521-526.
  14. S. Almautairi, R. K. Karne and A. L. Wijesinha, A Bare PC Text Based Browser, 2019 Workshop on Computing, Networking and Communications (CNC), Honolulu, Hawaii, February 2019.
  15. N. Kazemi, A. L. Wijesinha, and R. Karne. Design and Implementation of IPsec on a Bare PC, 2nd International Conference on Computer Science and its Applications (CSA), 2009.
  16. A. Emdadi, R. K. Karne, and A. L. Wijesinha. Implementing the TLS Protocol on a Bare PC, ICCRD2010, The 2nd International Conference on Computer Research and Development, Kuala Lumpur, Malaysia, May 2010.
  17. W. V. Thompson, H. Alabsi, R. K. Karne, S. Liang, A. L. Wijesinha, R. Almajed, and H. Chang, A Mass Storage System for Bare PC Applications Using USBs, International Journal on Advances in Internet Technology, vol. 9, no. 3 and 4, 2016, pp. 63-74.
  18. W. Thompson, R. Karne, A. Wijesinha, H. Alabsi, and H. Chang, Implementing a USB File System for Bare PC Applications, ICIW 2016: The Eleventh International Conference on Internet and Web Applications and Services, p58-63.
  19. S. Liang, R. K. Karne, and A. L. Wijesinha, A Lean USB File System for Bare Machine Applications, Proceedings of the 21st International Conference on Software Engineering and Data Engineering, ISCA, June 2012, pp. 191-196.
  20. H. Z. Alabsi, W. V. Thompson, R. K. Karne, A. L. Wijesinha, R. Almajed, F. Almansour, A Bare Machine RAID File System for USBs, SEDE 2017: 26th International Conference on Software Engineering and Data Engineering, pp 113-118.
  21. W. Thompson, R. K. Karne and A.L. Wijesinha, Interoperable SQLite for a Bare PC, 13th International Conference Beyond Database Architectures and Structures (BDAS'17), 2017, p177-188.
  22. U. Okafor, R. K. Karne, A. L. Wijesinha and B. Rawal Transforming SQLITE to Run on a Bare PC, In Proceedings of the 7th International Conference on Software Paradigm Trends, pages 311-314, Rome, Italy, July 2012.
  23. F. Almansour, R. K. Karne, A.L. Wijesinha, H. Alabsi and R. Almajed, Middleware for NICs in Bare PC Applications, 26th International Conference on Computer Communications and Networks (Poster Paper), ICCCN2017, Vancouver, Canada, 2017.
  24. F. Almansour, R. K. Karne, A. L. Wijesinha, B. S. Rawal, “Ethernet Bonding on a Bare PC Web Server with Dual NICs”, The 33rd ACM Symposium on Applied Computing (SAC 2018), April 2018, Pau, France.
  25. Peter, A.; Karne, R.; Wijesinha, A.; Appiah-Kubi, P. (April 4–7, 2013). Transforming a Bare PC Application to Run on an ARM Device. IEEE SoutheastCon. Jacksonville, Florida.
  26. N. Soundararajan, R. Karne, A. Wijesinha, N. Ordouie, H. Chang, Design Issues in Running a Webserver on Bare PC Multi-Core Architecture, IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), 2020.
  27. N. Soundararajan, R. Karne, A. Wijesinha, N. Ordouie, B.S.Rawal, A Novel Client/Server Protocol for Web-based Communication over UDP on a Bare Machine, 18th IEEE Student Conference on Research and Development (SCOReD), 2020.
  28. N. Ordouie, N. Soundararajan, R. Karne, A. Wijesinha, Developing Computer Applications without any OS or Kernel in a Multi-core Architecture, IEEE ISNCC November 2021.