Developer(s) | Mark A. O'Neill |
---|---
Stable release | 3-5.4.9 / October 2023 |
Written in | C |
Operating system | Linux |
Platform | i686, x86-64, ARM/AArch64
License | GNU General Public License version 3 or later |
Website | github |
PUPS/P3 is an organic computing environment for Linux which provides support for the implementation of low-level persistent software agents. [1]
PUPS/P3 is a cluster computing environment derived from the MSPS operating environment implemented on the BBC Micro. [2]
The PUPS/P3 environment has been used in the infrastructure of a number of scientific computing projects, including the Daisy [3] automated species identification system and a number of computational neuroscience projects. [4] [5]
PUPS/P3 processes are homeostatic agents. These agents are able to save their state and migrate between machines running compatible Linux kernels (via CRIU). The PUPS/P3 API also gives them significant access to the state of their environment: like biological organisms they are animate, in that they are able to sense changes in their environment and respond appropriately. For example, a P3 process may elect to save its state or migrate if some resource, for example processor cycles, becomes scarce. Effectively, this is the machine equivalent of an animal electing to hibernate or migrate when its food resources become scarce. PUPS/P3 processes can also share data resources via a low-level persistent object, the shared heap. The semantics of using this are similar to those of the malloc()/free() API supplied by standard C libraries.
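The actual shared-heap calls belong to the PUPS/P3 API and are not reproduced here. The following minimal C sketch instead uses plain POSIX shared memory to convey the same idea of a persistent, shareable object used with malloc()/free()-like semantics; the object name "/p3_demo_heap" is invented for the example (compile with -lrt on older glibc).

```c
/*
 * Illustrative sketch only: plain POSIX shared memory standing in for the
 * PUPS/P3 shared heap, to show the malloc()/free()-style usage pattern.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HEAP_SIZE 4096

int main(void)
{
    /* Create (or attach to) a named, persistent shared object. */
    int fd = shm_open("/p3_demo_heap", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, HEAP_SIZE) < 0) { perror("ftruncate"); return 1; }

    /* Map it into this process; co-operating processes map the same name. */
    char *heap = mmap(NULL, HEAP_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (heap == MAP_FAILED) { perror("mmap"); return 1; }

    /* Use the region much as one would use memory obtained from malloc(). */
    snprintf(heap, HEAP_SIZE, "state visible to peer processes");
    printf("shared heap contains: %s\n", heap);

    /* munmap() is the analogue of free(); the underlying object persists
       until shm_unlink() is called, so its contents outlive this process. */
    munmap(heap, HEAP_SIZE);
    close(fd);
    return 0;
}
```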
Computations can be jointly executed by a cluster of co-operating P3 processes. This cluster is in many ways analogous to a multicellular organism: like cells within an organism, individual P3 processes can specialise. For example, in the case of the Daisy pattern recognition system, the cluster consists of (ipm) processes which pre-process pattern data, (floret) processes which run the PSOM neural nets used to classify those patterns, and (vhtml) processes which communicate the identity of patterns Daisy has discovered to the user. In addition, the Daisy cluster also has specialist (maggot and Kepher) processes to clear and recycle file and memory space, and (lyosome) processes which destroy and replace other processes within the cluster which have become corrupted and therefore non-functional.
In conjunction with virtualisation systems, for example the Oracle VirtualBox system, it is possible to use PUPS/P3 to build homeostatic virtual (Linux) machines which can carry computational payloads while living in a dynamic cloud environment. The latest release of PUPS/P3 also supports container-based operating-system-level virtualisation (via Docker) and checkpointing, with subsequent migration and/or restoration, via CRIU.
The P3 system facilitates dynamic asynchronous peer-to-peer communication between processes, and also dynamic asynchronous communication between processes and the user. In the example process network shown, several of the communication methods implemented in PUPS/P3 are illustrated. These include:
- User to PSRP server via the PSRP client (using the PSRP protocol). This communication mode establishes an asynchronous pseudotty connection between the psrp client (and hence the user) and the PSRP server process.
- Peer-to-peer (between PSRP servers) via a SIC channel. A PSRP server wishing to communicate directly with another server slaves an instance of the psrp client via a Slaved Interaction Client (SIC) channel. It then instructs this slaved psrp client to open a PSRP channel to the peer it wishes to talk to.
- Peer-to-peer (between PSRP servers) via a sensitive file. In this mode a PSRP server sends data to another server via a file. To prevent unintended servers from reading the file, it is tagged with a key which has a matching lock on the recipient server; this lock-and-key system was inspired by enzyme–substrate and biological signalling systems (a conceptual sketch follows below).
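The lock-and-key behaviour can be illustrated with a short, purely conceptual C sketch; the file layout, key names and function names below are invented for the example and do not reflect the actual PUPS/P3 sensitive-file format.

```c
/* Conceptual lock-and-key sketch: a file is accepted only by a server
 * holding a lock that matches the key the file was tagged with. */
#include <stdio.h>
#include <string.h>

/* Sender: tag the payload with a key naming the intended recipient. */
static int send_sensitive(const char *path, const char *key, const char *payload)
{
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%s\n%s\n", key, payload);   /* first line is the key tag */
    return fclose(f);
}

/* Recipient: accept the file only if its key matches our lock. */
static int receive_sensitive(const char *path, const char *lock,
                             char *payload, size_t size)
{
    char key[64] = {0};
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;
    if (fgets(key, sizeof(key), f) == NULL) { fclose(f); return -1; }
    key[strcspn(key, "\n")] = '\0';
    if (strcmp(key, lock) != 0) { fclose(f); return -1; }   /* key does not fit lock */
    if (fgets(payload, (int)size, f) == NULL) { fclose(f); return -1; }
    payload[strcspn(payload, "\n")] = '\0';
    fclose(f);
    return 0;
}

int main(void)
{
    char msg[128];

    /* Sender tags the file with the recipient's key. */
    send_sensitive("signal.dat", "floret-17", "pattern batch ready");

    /* Only a server holding the matching lock accepts the file. */
    if (receive_sensitive("signal.dat", "floret-17", msg, sizeof(msg)) == 0)
        printf("accepted: %s\n", msg);
    if (receive_sensitive("signal.dat", "maggot-02", msg, sizeof(msg)) != 0)
        printf("rejected: key does not match this server's lock\n");
    return 0;
}
```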
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
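A minimal C sketch of the model, assuming a loopback TCP connection on an arbitrary port (9090): the parent process plays the server and a forked child plays the client.

```c
/* Minimal client-server sketch on one machine: parent = server, child = client. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9090);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    /* Server: bind, listen and await incoming requests. */
    int server = socket(AF_INET, SOCK_STREAM, 0);
    bind(server, (struct sockaddr *)&addr, sizeof(addr));
    listen(server, 1);

    if (fork() == 0) {
        /* Client: initiates the session and requests a service. */
        int client = socket(AF_INET, SOCK_STREAM, 0);
        connect(client, (struct sockaddr *)&addr, sizeof(addr));

        char reply[64] = {0};
        read(client, reply, sizeof(reply) - 1);
        printf("client received: %s\n", reply);
        close(client);
        return 0;
    }

    /* Server: accept the connection and share a resource (here, a string). */
    int conn = accept(server, NULL, NULL);
    write(conn, "hello from the server", 21);
    close(conn);
    close(server);
    wait(NULL);
    return 0;
}
```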
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.
In software engineering, multitier architecture is a client–server architecture in which presentation, application processing and data management functions are physically separated. The most widespread use of multitier architecture is the three-tier architecture.
In computer science, a microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).
A server is a computer that provides information to other computers called "clients" on a computer network. This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
In computing, load balancing is the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient. Load balancing can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.
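As an illustration, the following C sketch implements a simple "least-loaded" assignment policy over a set of invented node names: each incoming task goes to whichever node currently has the fewest outstanding tasks.

```c
/* Toy least-loaded balancer: no node is overloaded while others sit idle. */
#include <stdio.h>

int main(void)
{
    const char *nodes[] = { "node-a", "node-b", "node-c" };
    int load[] = { 0, 0, 0 };              /* outstanding tasks per node */
    const int n_nodes = 3;

    for (int task = 0; task < 8; task++) {
        /* Pick the node with the smallest current load. */
        int best = 0;
        for (int i = 1; i < n_nodes; i++)
            if (load[i] < load[best])
                best = i;

        load[best]++;
        printf("task %d -> %s (load now %d)\n", task, nodes[best], load[best]);
    }
    return 0;
}
```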
A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster from inexpensive personal computer hardware.
The Network Information Service, or NIS, is a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network. Sun Microsystems developed the NIS; the technology is licensed to virtually all other Unix vendors.
SuperCollider is an environment and programming language originally released in 1996 by James McCartney for real-time audio synthesis and algorithmic composition.
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
Checkpointing is a technique that provides fault tolerance for computing systems. It consists of saving a snapshot of the application's state, so that the application can restart from that point in case of failure. This is particularly important for long-running applications that are executed in failure-prone computing systems.
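A minimal C sketch of application-level checkpointing, assuming an invented state layout and checkpoint file name: the loop periodically saves its state and, on restart, resumes from the last snapshot if one exists.

```c
/* Minimal checkpointing sketch: save state periodically, resume on restart. */
#include <stdio.h>

struct state { long iteration; double accumulator; };

static void save_checkpoint(const struct state *s)
{
    FILE *f = fopen("checkpoint.dat", "wb");
    if (f != NULL) {
        fwrite(s, sizeof(*s), 1, f);
        fclose(f);
    }
}

static int load_checkpoint(struct state *s)
{
    FILE *f = fopen("checkpoint.dat", "rb");
    if (f == NULL)
        return 0;                         /* no snapshot: start from scratch */
    int ok = fread(s, sizeof(*s), 1, f) == 1;
    fclose(f);
    return ok;
}

int main(void)
{
    struct state s = { 0, 0.0 };
    if (load_checkpoint(&s))
        printf("restarting from iteration %ld\n", s.iteration);

    for (; s.iteration < 1000000; s.iteration++) {
        s.accumulator += 1.0;             /* stand-in for real work */
        if (s.iteration % 100000 == 0)
            save_checkpoint(&s);          /* periodic snapshot */
    }
    printf("done: %f\n", s.accumulator);
    return 0;
}
```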
In computer science, message queues and mailboxes are software-engineering components typically used for inter-process communication (IPC), or for inter-thread communication within the same process. They use a queue for messaging – the passing of control or of content. Group communication systems provide similar kinds of functionality.
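A small C sketch using the POSIX message-queue API (the queue name and message text are arbitrary for the example, and both roles are shown in one process for brevity): one side posts a message onto the queue and the other later takes it off.

```c
/* POSIX message-queue sketch of queue-based IPC (link with -lrt on older glibc). */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* Either communicating process can create or attach to the same queue. */
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Sender side: pass a message (content or control) into the queue. */
    const char *msg = "work item 42";
    mq_send(q, msg, strlen(msg) + 1, 0);

    /* Receiver side: messages are taken off the queue asynchronously. */
    char buf[64];
    if (mq_receive(q, buf, sizeof(buf), NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}
```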
In computer networking, port forwarding or port mapping is an application of network address translation (NAT) that redirects a communication request from one address and port number combination to another while the packets are traversing a network gateway, such as a router or firewall. This technique is most commonly used to make services on a host residing on a protected or masqueraded (internal) network available to hosts on the opposite side of the gateway, by remapping the destination IP address and port number of the communication to an internal host.
In computer science, and networking in particular, a session is a time-delimited two-way link, a practical layer in the TCP/IP protocol suite, enabling interactive information exchange between two or more communicating devices or ends – be they computers, automated systems, or live active users. A session is established at a certain point in time, and then 'torn down' – brought to an end – at some later point. An established communication session may involve more than one message in each direction. A session is typically stateful, meaning that at least one of the communicating parties needs to hold current state information and save information about the session history to be able to communicate, as opposed to stateless communication, where the communication consists of independent requests with responses.
A web framework (WF) or web application framework (WAF) is a software framework that is designed to support the development of web applications including web services, web resources, and web APIs. Web frameworks provide a standard way to build and deploy web applications on the World Wide Web. Web frameworks aim to automate the overhead associated with common activities performed in web development. For example, many web frameworks provide libraries for database access, templating frameworks, and session management, and they often promote code reuse. Although they often target development of dynamic web sites, they are also applicable to static websites.
Mark A. O'Neill is an English computational biologist with interests in artificial intelligence, systems biology, complex systems and image analysis. He is the creator and lead programmer on a number of computational projects including the Digital Automated Identification SYstem (DAISY) for automated species identification and PUPS P3, an organic computing environment for Linux.
Sideband computing is where a user connects to some normal network service and a separate communication channel is opened through which a server distributes tasks to the clients. Using this method, any network server with a large number of clients can form the basis of a large-scale super-computing network. During this process, the resources of the clients can be utilised by the central server as long as the main channel is maintained. Sideband computing is related to distributed computing and to the use of multiple communication channels.
Digital automated identification system (DAISY) is an automated species identification system optimised for the rapid screening of invertebrates by non-experts.
Middleware is a type of computer software program that provides services to software applications beyond those available from the operating system. It can be described as "software glue".
Enduro/X is an open-source middleware platform for distributed transaction processing. It is built on proven APIs such as the X/Open group's XATMI and XA. The platform is designed for building real-time microservices-based applications with a clusterization option. Enduro/X functions as an extended drop-in replacement for Oracle Tuxedo. The platform uses in-memory POSIX kernel queues, which ensures high interprocess communication throughput.