Intelligent computer network

An intelligent computer network is a computer network in which the network is in control of application creation and operation. Relatively dumb terminals and devices on the network periphery access centralized network services on behalf of their users. The owners of the network are in complete charge of the type and quantity of applications that exist on the network.

An intelligent computer network is most suited to applications in which reliability and security are prime requirements. Application software is centralized and so can be rigorously verified before deployment. The large scale of the network and the ability to verify application operation allow such networks to address very complicated tasks. The costs of development and testing may be spread across many users.
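
This division of labour can be illustrated with a short Python sketch, in which all names (CentralService, Terminal, the "balance" application) are invented for the example: every piece of application logic lives in the network core, the peripheral device merely relays input and displays output, and the operator alone decides which applications exist.

    class CentralService:
        """Network-side application logic; only the operator can change it."""

        # The operator provisions the full set of available applications.
        APPROVED_APPS = {"balance": lambda acct: f"Balance for {acct}: 100.00"}

        def handle(self, app: str, request: str) -> str:
            if app not in self.APPROVED_APPS:   # the network decides what runs
                return "ERROR: application not provisioned by the operator"
            return self.APPROVED_APPS[app](request)

    class Terminal:
        """Dumb periphery: holds no application logic of its own."""

        def __init__(self, network: CentralService):
            self.network = network

        def key_in(self, app: str, text: str) -> None:
            print(self.network.handle(app, text))   # relay input, show output

    Terminal(CentralService()).key_in("balance", "acct-42")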

The intelligent computer network architecture is at one extreme of a continuum of network architectures. It should be contrasted with the dumb network architecture.

Network architecture is the design of a computer network. It is a framework for the specification of a network's physical components and their functional organization and configuration, its operational principles and procedures, as well as the communication protocols used.

Intelligent computer networks should be distinguished from telecommunication intelligent networks (IN).


Related Research Articles

Client–server model: distributed application structure in computing

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
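
A minimal sketch of this request/response pattern in Python follows; the loopback host, port number, and message contents are arbitrary choices for the demonstration, and a real service would loop over many connections rather than serve one.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 50007           # loopback address; arbitrary port

    def server() -> None:
        # The server passively awaits a request and shares its resource.
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(f"served: {request}".encode())

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                           # crude startup wait for the demo

    # The client initiates the session, as the model prescribes.
    with socket.create_connection((HOST, PORT)) as cli:
        cli.sendall(b"GET resource")
        print(cli.recv(1024).decode())        # -> served: GET resource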

Java virtual machine: runtime environment that executes Java bytecode, the result of compiling programs written in the Java programming language

A Java virtual machine (JVM) is a virtual machine that enables a computer to run Java programs as well as programs written in other languages that are also compiled to Java bytecode. The JVM is detailed by a specification that formally describes what is required in a JVM implementation. Having a specification ensures interoperability of Java programs across different implementations so that program authors using the Java Development Kit (JDK) need not worry about idiosyncrasies of the underlying hardware platform.

Peer-to-peer: type of decentralized and distributed network architecture

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of nodes.
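
The symmetry can be sketched in a few lines of Python: each node answers a query from its own store when it can (acting as a server) and otherwise floods the query to its neighbours (acting as a client). The names and the Gnutella-style flooding search below are illustrative, not any specific protocol's wire format.

    class Peer:
        """Equally privileged node: serves and requests the same service."""

        def __init__(self, name: str, data: dict):
            self.name, self.data = name, data
            self.neighbours = []

        def lookup(self, key: str, seen=None):
            seen = seen if seen is not None else set()
            if self.name in seen:              # suppress duplicate visits
                return None
            seen.add(self.name)
            if key in self.data:               # answer locally: server role
                return self.data[key]
            for n in self.neighbours:          # forward the query: client role
                found = n.lookup(key, seen)
                if found is not None:
                    return found
            return None

    a, b = Peer("a", {"song.mp3": b"..."}), Peer("b", {})
    a.neighbours, b.neighbours = [b], [a]
    print(b.lookup("song.mp3"))    # b holds no copy, fetches it from its peer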

Thin client: lightweight computer optimized for remote server access

A thin client is a lightweight computer that has been optimized for establishing a remote connection with a server-based computing environment. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a fat client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

Computer terminal: computer input/output device

A computer terminal is an electronic or electromechanical hardware device that is used for entering data into, and displaying or printing data from, a computer or a computing system. The teletype was an example of an early hard-copy terminal and predated the use of a computer screen by decades.

In telephony, a stimulus protocol is a type of protocol that is used to carry event notifications between end points. Such a protocol is used to control the operation of devices at each end of the link. However, a stimulus protocol is not sensitive to the system state. In a typical application, such a protocol carries keystroke information from a telephone set to a central call control. It may also carry control information for simple types of text displays. MiNET from Mitel is a typical protocol of this sort.
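
The stimulus pattern can be sketched as follows; the JSON encoding, the four-digit dial plan, and names such as CallControl are invented for illustration (MiNET's actual encoding is not shown). The point is that the telephone set reports raw key presses and holds no call state, while all interpretation happens centrally.

    import json

    def key_event(device_id: str, key: str) -> bytes:
        # The set reports a bare stimulus; it does not know what the key means.
        return json.dumps({"device": device_id, "event": "key", "key": key}).encode()

    class CallControl:
        """Central call control: the only holder of call state."""

        def __init__(self):
            self.digits = {}                      # per-device dial buffers

        def on_message(self, raw: bytes) -> None:
            msg = json.loads(raw)
            buf = self.digits.get(msg["device"], "") + msg["key"]
            self.digits[msg["device"]] = buf
            if len(buf) == 4:                     # toy four-digit dial plan
                print(f"{msg['device']} dials extension {buf}")

    cc = CallControl()
    for digit in "1234":
        cc.on_message(key_event("set-7", digit))  # -> set-7 dials extension 1234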

In telephony, a functional protocol is a type of protocol that is used to carry signaling messages between end points. Such a protocol is used to control the operation of devices at each end of the link. The adjective functional is used to describe protocols that are aware of the system state of the endpoints. Session Initiation Protocol (SIP) is a currently popular protocol for Voice over IP (VoIP) and other applications.
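
As a concrete example, a SIP call attempt begins with an INVITE request whose general shape is defined in RFC 3261; the minimal message below uses placeholder addresses and tags. Unlike a stimulus exchange, the endpoint that sends it tracks the resulting dialog state (trying, ringing, established) itself.

    invite = (
        "INVITE sip:bob@example.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds\r\n"
        "Max-Forwards: 70\r\n"
        "From: Alice <sip:alice@example.com>;tag=1928301774\r\n"
        "To: Bob <sip:bob@example.com>\r\n"
        "Call-ID: a84b4c76e66710@client.example.com\r\n"
        "CSeq: 1 INVITE\r\n"
        "Contact: <sip:alice@client.example.com>\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )
    print(invite)    # a real user agent would send this over UDP, TCP, or TLS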

A context-aware network is a form of computer network that is a synthesis of the properties of the dumb network and intelligent computer network architectures. Dumb networks feature intelligent peripheral devices and a core network which does not control or monitor application creation or operation. Such a network is said to follow the end-to-end principle, in that applications are set up between peripheral end devices with no control being exercised by the network. Such a network assumes that all users and all applications are of equal priority; any conflict or undesired interaction must be handled by the independent applications. As such, the network is best suited to uses in which customization to individual user needs and the addition of new applications are most important. The pure Internet ideal is an example of a dumb network.

Systems management refers to enterprise-wide administration of distributed systems, including computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. Application performance management (APM) technologies are now a subset of systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation, and predictive analysis, all of which are now part of APM.

A network host is a computer connected to a computer network. A host may work as a server offering information resources, services, and applications to users or other nodes on the network. Hosts are assigned at least one network address.

Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where they are needed. Computation is largely or completely performed on distributed device nodes. Edge computing pushes applications, data, and computing power (services) away from centralized points to locations closer to the user. The target of edge computing is any application or general functionality that needs to be closer to the source of the action, where distributed systems technology interacts with the physical world. Edge computing does not require contact with any centralized cloud, although it may interact with one. In contrast to cloud computing, edge computing refers to decentralized data processing at the edge of the network.
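
The trade-off can be made concrete with a toy Python comparison, in which the sample data, byte counts, and function names are all invented: a cloud-centric node ships every raw reading over the backhaul, while an edge node computes locally and forwards only a summary and any alerts.

    readings = [21.4, 21.6, 21.5, 35.0, 21.5]     # e.g. sensor temperatures

    def cloud_centric(samples):
        # Every raw sample crosses the network to the data centre.
        return {"uploaded_bytes": len(samples) * 8, "payload": samples}

    def edge_centric(samples):
        # The edge node processes locally and forwards only what matters.
        alerts = [s for s in samples if s > 30.0]
        summary = {"mean": sum(samples) / len(samples), "alerts": alerts}
        return {"uploaded_bytes": 16 + len(alerts) * 8, "payload": summary}

    print(cloud_centric(readings)["uploaded_bytes"])   # 40 bytes of backhaul
    print(edge_centric(readings)["uploaded_bytes"])    # 24: summary plus alert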

Decentralized computing is the allocation of resources, both hardware and software, to each individual workstation or office location. In contrast, centralized computing exists when the majority of functions are carried out, or obtained, from a remote centralized location. Decentralized computing is a trend in modern-day business environments, the opposite of centralized computing, which was prevalent during the early days of computers. A decentralized computer system has many benefits over a conventional centralized network. Desktop computers have advanced so rapidly that their potential performance far exceeds the requirements of most business applications, with the result that most desktop computers remain largely idle. A decentralized system can use the potential of these machines to maximize efficiency. However, it is debatable whether such networks increase overall effectiveness.

Metacomputing is computing and computing-oriented activity that applies computing knowledge to the research, development, and application of different types of computing. It may also deal with numerous types of computing applications, such as industry, business, management, and human-related management. Newly emerging fields of metacomputing focus on the methodological and technological aspects of developing large computer networks and grids, such as the Internet, intranets, and other territorially distributed computer networks for special purposes.

With regard to a mobile network operator, the term dumb pipe, or dumb network, refers to a simple network that merely transfers bytes between the customer's device and the Internet, with bandwidth high enough that it can afford to be completely neutral with regard to the services and applications the customer accesses, without needing to prioritize content. The term "dumb" refers to the fact that the network operator does not affect the customer's access to the Internet, neither limiting the available services or applications to its own proprietary portal nor offering additional capabilities and services beyond simple connectivity. A dumb pipe primarily provides bandwidth, with network speeds greater than the maximum expected load, thus avoiding the need to discriminate between data types.

In computing, virtualization refers to the act of creating a virtual version of something, including virtual computer hardware platforms, storage devices, and computer network resources.

Cloud computing: form of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations; if a location's connection to the user is relatively close, it may be designated an edge server.

Kernel (operating system): main component of most computer operating systems

The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system. On most systems, it is one of the first programs loaded on start-up. It handles the rest of start-up as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. It handles memory and peripherals like keyboards, monitors, printers, and speakers.
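
Even from a high-level language, every I/O request takes this path through the kernel. In the short Python example below (the file name is arbitrary), each call maps to a system call on a POSIX system, which the kernel then translates into operations on the disk device.

    import os

    # Each line below ends in a system call that only the kernel may perform.
    fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT, 0o644)  # open(2)
    os.write(fd, b"handled by the kernel\n")                   # write(2)
    os.close(fd)                                               # close(2)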

A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. It handles jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. The second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.

Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, making it more like cloud computing than traditional network management. SDN is meant to address the fact that the static architecture of traditional networks is decentralized and complex, while current networks require more flexibility and easier troubleshooting. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets (the data plane) from the routing process (the control plane). The control plane consists of one or more controllers, which are considered the brain of the SDN network, where the whole intelligence is incorporated. However, this centralization of intelligence has its own drawbacks with respect to security, scalability, and elasticity, and this is the main issue of SDN.
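
The separation can be sketched in Python as follows; the class names, addresses, and port labels are invented. The switch only matches packets against a flow table, while the controller is the single place where forwarding decisions are computed and installed.

    class Switch:
        """Data plane: matches packets against installed rules, nothing more."""

        def __init__(self):
            self.flow_table = {}                   # destination -> output port

        def forward(self, dst: str) -> str:
            # On a table miss, a real switch punts the packet to the controller.
            return self.flow_table.get(dst, "send to controller")

    class Controller:
        """Control plane: the centralized 'brain' that computes all routes."""

        def __init__(self, routes: dict):
            self.routes = routes

        def install_rules(self, switch: Switch) -> None:
            switch.flow_table.update(self.routes)  # push match -> action rules

    sw = Switch()
    Controller({"10.0.0.2": "port-1", "10.0.0.3": "port-2"}).install_rules(sw)
    print(sw.forward("10.0.0.2"))   # -> port-1 (no local decision involved)
    print(sw.forward("10.0.0.9"))   # -> send to controller (unknown flow)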