Mobile agent

In computer science, a mobile agent is a composition of computer software and data that is able to migrate from one computer to another autonomously and continue its execution on the destination computer, where it can interact with other agents. Rather than a client requesting data and performing actions itself, a mobile agent is sent to a server to perform those tasks, delegating the work from the client to the server. [1]: v–vi

Definition and overview

A mobile agent is a type of software agent with the features of autonomy, social ability, learning, and, most significantly, mobility.

More specifically, a mobile agent is a process that can transport its state from one environment to another, with its data intact, and still perform appropriately in the new environment. Mobile agents decide when and where to move; the movement mechanism evolved in part from remote procedure call (RPC) methods. Just as a user directs an Internet browser to "visit" a website (the browser merely downloads a copy of the site, or one version of it in the case of dynamic websites), a mobile agent accomplishes a move through data duplication: when it decides to move, it saves its own state (a process image), transports this saved state to the new host, and resumes execution from the saved state.
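
The save-state/transport/resume cycle can be illustrated with a minimal Python sketch. Everything here is hypothetical: the class and its methods are invented for illustration, the "network hop" is simulated with an in-memory byte string, and only the agent's data state is captured, whereas a full mobile-agent platform would also arrange for execution to resume at the right point and would ship the bytes to another machine.

```python
import pickle

# Toy agent: its whole state is a running total and a hop counter.
class CounterAgent:
    def __init__(self):
        self.total = 0
        self.hops = 0

    def work(self, values):
        self.total += sum(values)    # do some work on the current host

    def checkpoint(self) -> bytes:
        return pickle.dumps(self)    # save the agent's state ("process image")

    @staticmethod
    def resume(image: bytes) -> "CounterAgent":
        agent = pickle.loads(image)  # reconstruct the agent from its image
        agent.hops += 1              # record the completed migration
        return agent

# On host A: the agent works, then decides to move.
agent = CounterAgent()
agent.work([1, 2, 3])
image = agent.checkpoint()           # serialized state, ready for transport

# On host B: the transported image is resumed and execution continues.
agent = CounterAgent.resume(image)
agent.work([10])
print(agent.total, agent.hops)       # 16 1
```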

A mobile agent is a specific form of mobile code, within the field of code mobility. However, in contrast to the remote evaluation and code on demand programming paradigms, mobile agents are active in that they can choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.

There are two types of mobile agents, classified by their migration path (the two strategies are contrasted in the sketch below):

  1. Mobile agents with a predefined path: these follow a static migration path fixed in advance.
  2. Free-roaming mobile agents: [2] these have a dynamic migration path; the agent chooses its next hop depending on present network conditions.
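
A minimal sketch of the contrast (the host names and latency probe below are invented stand-ins; a real free-roaming agent would measure live network conditions rather than draw random numbers):

```python
import random

HOSTS = ["hostA", "hostB", "hostC"]

def probe_latency(host: str) -> float:
    # Stand-in for a real measurement such as a ping round-trip time.
    return random.uniform(1.0, 100.0)

def predefined_path(itinerary):
    # Static migration: visit the hosts in the exact order given.
    for host in itinerary:
        yield host

def free_roaming_path(hosts, hops: int):
    # Dynamic migration: at each hop, pick the currently best host.
    for _ in range(hops):
        yield min(hosts, key=probe_latency)

print(list(predefined_path(HOSTS)))       # always: hostA, hostB, hostC
print(list(free_roaming_path(HOSTS, 3)))  # varies with network conditions
```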

An open multi-agent system (MAS) is a system in which agents owned by a variety of stakeholders continuously enter and leave the system.

History and evolution

In the early 1990s, General Magic created the Telescript language and environment for writing and executing mobile agents, and described it with the now-popular "cloud" metaphor; as described by Andy Hertzfeld:

"The beauty of Telescript," says Andy, "is that now, instead of just having a device to program, we now have the entire Cloud out there, where a single program can go and travel to many different sources of information and create a sort of a virtual service. [3]

The company was unsuccessful, however.

Advantages

Some advantages that mobile agents have over conventional agents:

A particular advantage for remote deployment of software is increased portability, which makes system requirements less influential.

Related Research Articles

In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program to process HTTP/S user requests.
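
As a sketch, a minimal CGI program in Python: the server passes request details through environment variables and expects headers, a blank line, and then the body on standard output (installation paths and server configuration are deployment-specific and omitted here).

```python
#!/usr/bin/env python3
import os

# The web server exports request details as environment variables.
method = os.environ.get("REQUEST_METHOD", "GET")
query = os.environ.get("QUERY_STRING", "")

# Output: headers, then a blank line, then the response body.
print("Content-Type: text/plain")
print()
print(f"Method: {method}")
print(f"Query string: {query}")
```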

The Dynamic Host Configuration Protocol (DHCP) is a network management protocol used on Internet Protocol (IP) networks for automatically assigning IP addresses and other communication parameters to devices connected to the network using a client–server architecture.

Kerberos is a computer-network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed it primarily at a client–server model, and it provides mutual authentication—both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.

Thin client

In computer networking, a thin client is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. They are sometimes known as network computers, or in their simplest form as zero clients. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a rich client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

Web server

A web server is computer software and underlying hardware that accepts requests via HTTP or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.
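
The request/response cycle can be sketched with Python's standard library; the port number below is an arbitrary choice for this example, not anything mandated by HTTP.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a tiny web server\n"
        self.send_response(200)                         # status line
        self.send_header("Content-Type", "text/plain")  # headers
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                          # response body

if __name__ == "__main__":
    # Visit http://localhost:8000/ while this runs to see the response.
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```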

Server (computing)

In computing, a server is a piece of computer hardware or software that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.

Load balancing (computing)

In computing, load balancing is the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.
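
One of the simplest balancing policies is round-robin dispatch, sketched below with placeholder backend names; production load balancers add health checks, weighting, and other refinements.

```python
import itertools

backends = ["node-1", "node-2", "node-3"]   # placeholder compute nodes
rotation = itertools.cycle(backends)

def dispatch(task: str) -> str:
    # Hand each task to the next backend in turn, so no node idles
    # while another accumulates a queue.
    return f"{task} -> {next(rotation)}"

for task in ["t1", "t2", "t3", "t4"]:
    print(dispatch(task))   # node-1, node-2, node-3, then node-1 again
```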

Inter-process communication

In computer science, inter-process communication (IPC), also spelled interprocess communication, refers to the mechanisms provided by an operating system for processes to manage shared data. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing.
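
As a sketch, one common OS-provided mechanism (a pipe) used in client/server form; Python's multiprocessing module stands in here for the lower-level primitives an operating system exposes.

```python
from multiprocessing import Pipe, Process

def server(conn):
    request = conn.recv()        # wait for the client's request
    conn.send(request.upper())   # respond to it
    conn.close()

if __name__ == "__main__":
    client_end, server_end = Pipe()
    p = Process(target=server, args=(server_end,))
    p.start()
    client_end.send("hello via ipc")   # client sends a request...
    print(client_end.recv())           # ...and receives: HELLO VIA IPC
    p.join()
```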

Preboot Execution Environment

In computing, the Preboot eXecution Environment (PXE) specification describes a standardized client–server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and it uses a small set of industry-standard network protocols such as DHCP and TFTP.

Dynamic web page

A dynamic web page is a web page constructed at runtime, as opposed to a static web page, which is delivered exactly as stored. A server-side dynamic web page is one whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of each new web page proceeds, including the setting up of further client-side processing. A client-side dynamic web page processes the page using JavaScript running in the browser as it loads; JavaScript can interact with the page via the Document Object Model (DOM) to query and modify page state. Even though a web page can be dynamic on the client side, it can still be hosted on a static hosting service such as GitHub Pages or Amazon S3 as long as no server-side code is included.
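
A sketch of the server-side case: each request assembles a fresh page (here, one embedding the current time), unlike a static file served byte-for-byte. The port number is an arbitrary choice for this example.

```python
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page is constructed at request time, so every visit differs.
        page = f"<html><body><p>Generated at {datetime.now():%H:%M:%S}</p></body></html>"
        body = page.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8001), DynamicHandler).serve_forever()
```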

Edge computing

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.

A home server is a computing server located in a private residence that provides services to other devices inside or outside the household through a home network or the Internet. Such services may include file and printer serving, media center serving, home automation control, web serving, web caching, file sharing and synchronization, video surveillance and digital video recording, calendar and contact sharing and synchronization, account authentication, and backup services.

Oracle Secure Global Desktop (SGD) software provides secure access to both published applications and published desktops running on Microsoft Windows, Unix, mainframe and IBM i systems via a variety of clients ranging from fat PCs to thin clients such as Sun Rays.

A roaming user profile is a file synchronization concept in the Windows NT family of operating systems. It allows users whose computer is joined to a Windows domain to log on to any computer in the same domain, access their documents, and keep a consistent desktop experience, such as applications remembering toolbar positions and preferences or the desktop appearance staying the same, while keeping all related files stored locally so as not to depend continuously on a fast and reliable network connection to a file server.

In distributed computing, code mobility is the ability for running programs, code or objects to be migrated from one machine or application to another. This is the process of moving mobile code across the nodes of a network as opposed to distributed computation where the data is moved.
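
A toy sketch of the remote-evaluation flavor of code mobility, in which the code travels to where the data lives; the "remote" side is simulated in-process, and a real system would add transport, sandboxing, and authentication.

```python
# Source code to be shipped to the node that holds the data.
source = """
def summarize(data):
    return {"count": len(data), "total": sum(data)}
"""

def remote_execute(code: str, data):
    namespace = {}
    exec(code, namespace)                 # install the shipped code
    return namespace["summarize"](data)   # run it next to the data

print(remote_execute(source, [4, 8, 15, 16, 23, 42]))
# {'count': 6, 'total': 108}
```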

Cloud computing

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

GridRPC, in distributed computing, is remote procedure call over a grid. The paradigm was proposed by the GridRPC working group of the Open Grid Forum (OGF), and an API was defined so that clients can access remote servers as simply as making a function call. It is used among numerous grid middleware systems for its simplicity of implementation, and was standardized by the OGF in 2007. For interoperability between the different existing middleware, the API was followed by a document describing the proper use and behavior of the different GridRPC API implementations. Work then continued on GridRPC data management, which was standardized in 2011.

DIET

DIET is middleware for grid computing: it sits between the operating system and the application software. Created in 2000 and designed for high-performance computing, it is developed by INRIA, École Normale Supérieure de Lyon, CNRS, Claude Bernard University Lyon 1, and SysFera. It is open-source software released under the CeCILL license.

Cloud computing architecture

Cloud computing architecture refers to the components and subcomponents required for cloud computing. These typically consist of a front-end platform, back-end platforms, a cloud-based delivery, and a network. Combined, these components make up cloud computing architecture.

A cloudlet is a mobility-enhanced small-scale cloud datacenter located at the edge of the Internet. Its main purpose is to support resource-intensive and interactive mobile applications by providing powerful computing resources to mobile devices with lower latency. It is a new architectural element that extends today's cloud computing infrastructure, representing the middle tier of a three-tier hierarchy: mobile device, cloudlet, cloud. A cloudlet can be viewed as a data center in a box whose goal is to bring the cloud closer. The term was first coined by M. Satyanarayanan, Victor Bahl, Ramón Cáceres, and Nigel Davies, and a prototype implementation was developed at Carnegie Mellon University as a research project. The cloudlet concept is also known as follow-me cloud and mobile micro-cloud.

References

  1. Vigna, Giovanni, ed. (1998). Mobile Agents and Security. Lecture Notes in Computer Science. Vol. 1419. Berlin: Springer. doi:10.1007/3-540-68671-1. ISBN 978-3-540-68671-2. OCLC 657901937. S2CID 32201981.
  2. Linna, Fan; Jun, Liu (2010-06-01). "A free-roaming mobile agent security protocol against colluded truncation attack". 2010 2nd International Conference on Education Technology and Computer. Vol. 5. pp. V5-261–V5-265. doi:10.1109/ICETC.2010.5530034. ISBN 978-1-4244-6367-1. S2CID 13966113.
  3. Levy, Steven (April 1994). "Bill and Andy's Excellent Adventure II". Wired.