Sun RPC

Open Network Computing (ONC) Remote Procedure Call (RPC), commonly known as Sun RPC, is a remote procedure call system. ONC was originally developed by Sun Microsystems in the 1980s as part of its Network File System project.

ONC is based on calling conventions used in Unix and the C programming language. It serializes data using the External Data Representation (XDR), which has also found some use for encoding and decoding data in files that are to be accessed on more than one platform. ONC then delivers the XDR payload using either UDP or TCP. Access to RPC services on a machine is provided via a port mapper that listens for queries on a well-known port (number 111) over UDP and TCP.
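
A minimal client sketch in C shows this flow, assuming the classic Sun RPC programming interface (provided by libtirpc on modern Linux systems): clnt_create consults the remote host's port mapper to locate the service, and clnt_call serializes the argument and result with XDR filter routines. The program, version, and procedure numbers used here are hypothetical placeholders rather than registered values.

```c
/* Minimal ONC RPC client sketch using the classic Sun RPC API
 * (libtirpc on modern Linux; link with -ltirpc).  The program,
 * version and procedure numbers are hypothetical placeholders. */
#include <rpc/rpc.h>
#include <stdio.h>
#include <stdlib.h>

#define EXAMPLE_PROG 0x20000099  /* hypothetical program number (user-defined range) */
#define EXAMPLE_VERS 1           /* hypothetical version number */
#define EXAMPLE_PROC 1           /* hypothetical procedure: square an integer */

int main(int argc, char **argv)
{
    CLIENT *clnt;
    int arg = 7, result = 0;
    struct timeval timeout = { 5, 0 };

    if (argc != 2) {
        fprintf(stderr, "usage: %s host\n", argv[0]);
        return 1;
    }

    /* clnt_create asks the remote port mapper (well-known port 111)
     * where the requested program/version is listening. */
    clnt = clnt_create(argv[1], EXAMPLE_PROG, EXAMPLE_VERS, "udp");
    if (clnt == NULL) {
        clnt_pcreateerror(argv[1]);
        return 1;
    }

    /* Argument and result are serialized with XDR filter routines. */
    if (clnt_call(clnt, EXAMPLE_PROC,
                  (xdrproc_t)xdr_int, (char *)&arg,
                  (xdrproc_t)xdr_int, (char *)&result,
                  timeout) != RPC_SUCCESS) {
        clnt_perror(clnt, "call failed");
    } else {
        printf("result = %d\n", result);
    }

    clnt_destroy(clnt);
    return 0;
}
```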

ONC RPC version 2 was first described in RFC 1050, [1] published in April 1988. It was updated in June 1988 by RFC 1057, and again by RFC 1831, published in August 1995. RFC 5531, published in May 2009, is the current version. All of these documents describe only version 2; version 1 was never covered by an RFC. Authentication mechanisms used by ONC RPC are described in RFC 2695, RFC 2203, and RFC 2623.

Implementations of ONC RPC exist in most Unix-like systems. Microsoft supplied an implementation for Windows in its (now discontinued) Microsoft Windows Services for UNIX product; in addition, a number of third-party implementations of ONC RPC for Windows exist, including versions for C/C++, Java, and .NET.

In 2009, Sun relicensed the ONC RPC code under the standard 3-clause BSD license, [2] and the relicensing was reconfirmed by Oracle Corporation in 2010 following confusion about its scope. [3]

ONC is considered "lean and mean", but has limited appeal as a generalized RPC system for WANs or heterogeneous environments. Systems such as DCE, CORBA and SOAP are generally used in this wider role.

Related Research Articles

The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.

In distributed computing, a remote procedure call (RPC) is when a computer program causes a procedure (subroutine) to execute in a different address space, which is written as if it were a normal (local) procedure call, without the programmer explicitly writing the details for the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. This is a form of client–server interaction, typically implemented via a request–response message-passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote, but usually, they are not identical, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.
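
In ONC RPC terms, the location transparency described above can be sketched from the server side: the remote procedure is written as an ordinary local C function, while the RPC layer handles receiving, decoding, dispatching, and replying. This is a minimal sketch that pairs with the client example earlier in the article and reuses the same hypothetical program, version, and procedure numbers.

```c
/* Minimal ONC RPC server sketch: the remote procedure is written as
 * an ordinary C function, and the RPC layer receives requests,
 * decodes the XDR argument, dispatches, and sends the reply.
 * Numbers match the hypothetical client sketch above. */
#include <rpc/rpc.h>
#include <stdio.h>

#define EXAMPLE_PROG 0x20000099
#define EXAMPLE_VERS 1
#define EXAMPLE_PROC 1

/* Reads like a local function: takes an int, returns its square. */
static char *square_proc(char *argp)
{
    static int result;              /* must outlive the call */
    int n = *(int *)argp;
    result = n * n;
    return (char *)&result;
}

int main(void)
{
    /* Register the procedure with the local port mapper so clients
     * can locate it, then wait for incoming calls. */
    if (registerrpc(EXAMPLE_PROG, EXAMPLE_VERS, EXAMPLE_PROC, square_proc,
                    (xdrproc_t)xdr_int, (xdrproc_t)xdr_int) != 0) {
        fprintf(stderr, "registerrpc failed\n");
        return 1;
    }
    svc_run();                      /* dispatch loop; never returns */
    return 0;
}
```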

Samba is a free software re-implementation of the SMB networking protocol, and was originally developed by Andrew Tridgell. Samba provides file and print services for various Microsoft Windows clients and can integrate with a Microsoft Windows Server domain, either as a Domain Controller (DC) or as a domain member. As of version 4, it supports Active Directory and Microsoft Windows NT domains.

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun) in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call system. NFS is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol.

Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication between software components on networked computers. DCOM, which originally was called "Network OLE", extends Microsoft's COM, and provides the communication substrate under Microsoft's COM+ application server infrastructure.

An ephemeral port is a communications endpoint (port) of a transport layer protocol of the Internet protocol suite that is used for only a short period of time for the duration of a communication session. Such short-lived ports are allocated automatically within a predefined range of port numbers by the IP stack software of a computer operating system. The Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Stream Control Transmission Protocol (SCTP) typically use an ephemeral port for the client-end of a client–server communication. At the server end of the communication session, ephemeral ports may also be used for continuation of communications with a client that initially connected to one of the services listening with a well-known port. For example, the Trivial File Transfer Protocol (TFTP) and Remote Procedure Call (RPC) applications can behave in this manner.
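
A short sketch in C, assuming a POSIX sockets environment, shows how an ephemeral port is obtained: binding to port 0 asks the operating system to choose a free port from its ephemeral range, and getsockname reports which port was assigned.

```c
/* Sketch of ephemeral port allocation with POSIX sockets: binding to
 * port 0 lets the operating system pick a free port from its
 * ephemeral range; getsockname reports which port was chosen. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(0);            /* 0 = let the stack choose */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    getsockname(fd, (struct sockaddr *)&addr, &len);
    printf("ephemeral port assigned: %u\n", (unsigned)ntohs(addr.sin_port));

    close(fd);
    return 0;
}
```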

In computer science, inter-process communication (IPC), also spelled interprocess communication, refers to the mechanisms provided by an operating system for processes to manage shared data. Typically, applications that use IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing.

An interface description language or interface definition language (IDL) is a generic term for a language that lets a program or object written in one language communicate with another program written in an unknown language. IDLs are usually used to describe data types and interfaces in a language-independent way, for example, between those written in C++ and those written in Java.

In computing, the Windows Sockets API (WSA), later shortened to Winsock, is an application programming interface (API) that defines how Windows network application software should access network services, especially TCP/IP. It defines a standard interface between a Windows TCP/IP client application and the underlying TCP/IP protocol stack. The nomenclature is based on the Berkeley sockets API used in BSD for communications between programs.

External Data Representation (XDR) is a standard data serialization format, for uses such as computer network protocols. It allows data to be transferred between different kinds of computer systems. Converting from the local representation to XDR is called encoding. Converting from XDR to the local representation is called decoding. XDR is implemented as a software library of functions which is portable between different operating systems and is also independent of the transport layer.
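
As a small illustration, the classic XDR library can encode values into a memory buffer and decode them back with the same filter routines; the sketch below uses the standard xdrmem_create, xdr_int, and xdr_string calls, with the buffer size and sample values chosen arbitrarily.

```c
/* XDR round trip in a memory buffer using the classic XDR library:
 * encode host values into the canonical XDR form, then decode them
 * back.  Buffer size and sample values are arbitrary. */
#include <rpc/rpc.h>
#include <stdio.h>

int main(void)
{
    char buf[256];
    XDR enc, dec;
    int value = 42;
    char *msg = "hello";
    int out_value = 0;
    char *out_msg = NULL;            /* xdr_string allocates on decode */

    /* Encode: local (host) representation -> XDR */
    xdrmem_create(&enc, buf, sizeof(buf), XDR_ENCODE);
    if (!xdr_int(&enc, &value) || !xdr_string(&enc, &msg, 255))
        return 1;

    /* Decode: XDR -> local (host) representation */
    xdrmem_create(&dec, buf, sizeof(buf), XDR_DECODE);
    if (!xdr_int(&dec, &out_value) || !xdr_string(&dec, &out_msg, 255))
        return 1;

    printf("decoded: %d \"%s\"\n", out_value, out_msg);
    return 0;
}
```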

X/Open group was a consortium founded by several European UNIX systems manufacturers in 1984 to identify and promote open standards in the field of information technology. More specifically, the original aim was to define a single specification for operating systems derived from UNIX, to increase the interoperability of applications and reduce the cost of porting software. Its original members were Bull, ICL, Siemens, Olivetti, and Nixdorf—a group sometimes referred to as BISON. Philips and Ericsson joined in 1985, at which point the name X/Open was adopted.

The Distributed Computing Environment (DCE) is a software system developed in the early 1990s from the work of the Open Software Foundation (OSF), a consortium founded in 1988 that included Apollo Computer, IBM, Digital Equipment Corporation, and others. The DCE supplies a framework and a toolkit for developing client/server applications; the framework includes, among other services, the DCE/RPC remote procedure call system described below.

The port mapper is an Open Network Computing Remote Procedure Call service that runs on network nodes that provide other ONC RPC services.
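
For example, a client can ask a host's port mapper which port a given program is registered on. The sketch below uses the classic pmap_getport call to look up NFS (program number 100003, version 3); the IP address 192.0.2.1 is a documentation placeholder, not a real server.

```c
/* Sketch: query a remote host's port mapper (port 111) for the port
 * on which an RPC program is registered.  Program 100003, version 3
 * is NFS; the IP address 192.0.2.1 is a documentation placeholder. */
#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct sockaddr_in addr;
    unsigned short port;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(111);               /* well-known port mapper port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

    port = pmap_getport(&addr, 100003, 3, IPPROTO_UDP);
    if (port == 0)
        fprintf(stderr, "not registered, or port mapper unreachable\n");
    else
        printf("NFS v3 is on UDP port %u\n", (unsigned)port);
    return 0;
}
```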

Inter-Language Unification or ILU is a method for computer systems to exchange data, bridging differences in the way systems represent the various kinds of data. Even if two systems run on the same computer, or on identical computer hardware, many differences arise from the use of different computer languages to build the systems.

DCE/RPC, short for "Distributed Computing Environment / Remote Procedure Calls", is the remote procedure call system developed for the Distributed Computing Environment (DCE). This system allows programmers to write distributed software as if it were all working on the same computer, without having to worry about the underlying network code.

A network socket is a software structure within a network node of a computer network that serves as an endpoint for sending and receiving data across the network. The structure and properties of a socket are defined by an application programming interface (API) for the networking architecture. Sockets are created only during the lifetime of a process of an application running in the node.

WebNFS is an extension to the Network File System (NFS) for allowing clients to access a file system over the internet using a simplified, firewall-friendly protocol.

The Microsoft Open Specification Promise is a promise by Microsoft, published in September 2006, to not assert its patents, in certain conditions, against implementations of a certain list of specifications.

The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.

References

Notes

  1. "RFC 1050 section 8". April 1988. rpcvers must be equal to 2
  2. Phipps, Simon (2009-02-12). "Old Code and Old Licenses". Sun Microsystems. Archived from the original on 2013-02-23. Retrieved 2012-12-21.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
  3. "The long, sordid tale of Sun RPC, abbreviated somewhat, to protect the guilty and the irresponsible". Tom Callaway, Red Hat . Retrieved 2010-08-26.