Virtual server may refer to:
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. Windows Server operating systems include it as a set of processes and services. Initially, Active Directory was used only for centralized domain management; over time, however, it became an umbrella title for a broad range of directory-based identity-related services.
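For illustration, the following sketch queries an Active Directory domain controller over LDAP using the third-party Python ldap3 package; the host name, credentials, and base DN are placeholders rather than values from any real deployment.

```python
# Hedged sketch: searching Active Directory over LDAP with the ldap3 package.
# The server name, credentials, and base DN below are placeholders.
from ldap3 import Server, Connection, ALL, NTLM

server = Server("dc01.example.com", get_info=ALL)   # placeholder domain controller
conn = Connection(
    server,
    user="EXAMPLE\\alice",        # DOMAIN\username (placeholder)
    password="secret",            # placeholder credential
    authentication=NTLM,
    auto_bind=True,
)

# Find user objects and read their account names.
conn.search(
    "dc=example,dc=com",          # placeholder base DN
    "(objectClass=user)",
    attributes=["sAMAccountName"],
)
for entry in conn.entries:
    print(entry.sAMAccountName)
```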
In distributed computing, a remote procedure call (RPC) is a mechanism by which a computer program causes a procedure (subroutine) to execute in a different address space, commonly on another computer, coded as if it were a normal (local) procedure call and without the programmer explicitly writing the details of the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. This is a form of client–server interaction, typically implemented via a request–response message-passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote; they are usually not identical, however, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.
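As a minimal sketch of this request–response pattern, the example below uses Python's built-in XML-RPC modules; the port number and the add function are arbitrary choices for illustration only.

```python
# Minimal RPC sketch using Python's built-in XML-RPC modules.
# The remote procedure is invoked with ordinary function-call syntax,
# but it actually executes in the server's address space.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

def add(a, b):
    # Runs on the server; to the client it looks like a local call.
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://localhost:8000")
print(client.add(2, 3))  # request/response messaging happens underneath -> 5
```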
In computing, a virtual machine (VM) is the virtualization or emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination of the two. Virtual machines differ and are organized by their function: system virtual machines provide a substitute for a real machine and can run a complete operating system, while process virtual machines execute individual programs in a platform-independent environment.
A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the compiler is running. For example, a compiler that runs on a PC but generates code that runs on an Android smartphone is a cross compiler.
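As a sketch of cross compilation in practice, the snippet below drives a GNU cross toolchain from the host to produce an ARM64 Linux executable. It assumes the aarch64-linux-gnu-gcc compiler is installed (for example via the gcc-aarch64-linux-gnu package on Debian or Ubuntu); the file names are placeholders.

```python
# Sketch: invoking a GNU cross toolchain on an x86-64 host to build an
# ARM64 (aarch64) Linux executable. Assumes aarch64-linux-gnu-gcc is installed.
import subprocess

SOURCE = "hello.c"          # ordinary C source file on the host (placeholder)
OUTPUT = "hello-arm64"      # binary that will only run on an aarch64 target

subprocess.run(
    ["aarch64-linux-gnu-gcc", SOURCE, "-o", OUTPUT],
    check=True,
)

# Running `file hello-arm64` on the host would report an aarch64 ELF executable,
# even though the compiler itself ran on x86-64.
```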
OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, called containers, zones, virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels, or jails. Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.
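The isolation described above can be illustrated with containers: the same ls / command, run on the host and inside a container, reports different filesystems. The sketch below assumes Docker is installed and usable by the current user; the alpine image is an arbitrary choice.

```python
# Illustration of container isolation: compare the root filesystem seen by the
# host with the one seen inside an Alpine Linux container.
# Assumes Docker is installed and the current user may run containers.
import subprocess

def listing(cmd):
    # Run a command and return the whitespace-separated entries it prints.
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.split()

host_view = listing(["ls", "/"])
container_view = listing(["docker", "run", "--rm", "alpine", "ls", "/"])

# The container sees only the root filesystem of its image, not the host's.
print("host-only entries:", sorted(set(host_view) - set(container_view)))
```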
Software multitenancy is a software architecture in which a single instance of software runs on a server and serves multiple tenants. Systems designed in such a manner are "shared" rather than "dedicated" or "isolated". A tenant is a group of users who share a common access with specific privileges to the software instance. With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance, including its data, configuration, user management, tenant-specific functionality and non-functional properties. Multitenancy contrasts with multi-instance architectures, where separate software instances operate on behalf of different tenants.
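A minimal, hypothetical sketch of the "dedicated share" idea follows: one shared data store serves every tenant, but each query is filtered by a tenant identifier so tenants never see each other's data. All names below (Tenant, documents, tenant_id) are illustrative and not taken from any particular product.

```python
# Hypothetical multitenancy sketch: a single shared instance scopes every
# query to the calling tenant. Names are illustrative only.
from dataclasses import dataclass

documents = [
    {"tenant_id": "acme", "title": "Acme roadmap"},
    {"tenant_id": "globex", "title": "Globex budget"},
]

@dataclass
class Tenant:
    tenant_id: str

def documents_for(tenant: Tenant):
    # Every query is filtered by tenant_id: a dedicated share of shared data.
    return [d for d in documents if d["tenant_id"] == tenant.tenant_id]

print(documents_for(Tenant("acme")))   # only Acme's documents are visible
```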
Application virtualization is a software technology that encapsulates computer programs from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but can be isolated or sandboxed to varying degrees.
The following is a timeline of virtualization development. In computing, virtualization is the use of a computer to simulate another computer. Through virtualization, a host simulates a guest by exposing virtual hardware devices, which may be done through software or by allowing access to a physical device connected to the machine.
Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.
Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.
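As a small, Linux-specific illustration, hypervisors that use hardware-assisted virtualization depend on CPU support, which Intel processors expose as the vmx flag and AMD processors as the svm flag in /proc/cpuinfo; the sketch below checks for either flag and assumes a Linux host.

```python
# Linux-only check for hardware virtualization support: look for the Intel
# "vmx" or AMD "svm" CPU flag in /proc/cpuinfo.
def hw_virt_supported(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("hardware virtualization available:", hw_virt_supported())
```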
Microsoft Application Virtualization (App-V) is an application virtualization and application streaming solution from Microsoft. It was originally developed by Softricity, a company based in Boston, Massachusetts, that Microsoft acquired on July 17, 2006. App-V represents Microsoft's entry into the application virtualization market, alongside its other virtualization technologies such as Hyper-V, Microsoft User Environment Virtualization (UE-V), Remote Desktop Services, and System Center Virtual Machine Manager.
Turbo is a set of software products and services developed by Code Systems Corporation for application virtualization, portable application creation, and digital distribution. Code Systems Corporation is an American corporation headquartered in Seattle, Washington, and is best known for its Turbo products, which include Browser Sandbox, Turbo Studio, TurboServer, and Turbo.
In computing, virtualization or virtualisation is the act of creating a virtual version of something at the same abstraction level, including virtual computer hardware platforms, storage devices, and computer network resources.
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.
A hosted desktop is a product set within the larger cloud-computing sphere. It is generally delivered using a combination of technologies, including hardware virtualization and some form of remote connection software; Citrix XenApp and Microsoft Remote Desktop Services are two of the most common. Processing takes place within the provider's datacentre environment, with traffic between the datacentre and the client consisting primarily of display updates, mouse movements and keyboard activity.
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.
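For illustration, the sketch below uses the Docker SDK for Python (the third-party docker package) to run a short-lived container against a local Docker Engine; the image and command are arbitrary examples, and a running Docker Engine is assumed.

```python
# Minimal sketch using the Docker SDK for Python to talk to a local Docker
# Engine: run a short-lived container and print its output.
# Assumes Docker Engine is running and the "alpine" image is reachable.
import docker

client = docker.from_env()   # connect to the local Docker Engine
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode().strip())
```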
Windows Server 2016 is the eighth release of the Windows Server operating system developed by Microsoft as part of the Windows NT family of operating systems. It was developed alongside Windows 10 and is the successor to the Windows 8.1-based Windows Server 2012 R2. The first early preview version became available on October 1, 2014 together with the first technical preview of System Center. Windows Server 2016 was released on September 26, 2016 at Microsoft's Ignite conference and broadly released for retail sale on October 12, 2016. It was succeeded by Windows Server 2019 and the Windows Server Semi-Annual Channel.
In software deployment, an environment or tier is a computer system or set of systems in which a computer program or software component is deployed and executed. In simple cases, such as developing and immediately executing a program on the same machine, there may be a single environment, but in industrial use, the development environment and production environment are separated, often with several stages in between. This structured release management process allows phased deployment (rollout), testing, and rollback in case of problems.
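A hypothetical sketch of this separation: the same code is deployed unchanged to every tier, and an environment variable selects per-tier settings. The variable name DEPLOY_ENV and the settings shown below are illustrative only.

```python
# Hypothetical sketch of environment/tier selection via an environment variable.
# The variable name and settings are illustrative, not from any real system.
import os

SETTINGS = {
    "development": {"database_url": "sqlite:///dev.db", "debug": True},
    "staging":     {"database_url": "postgresql://staging-db/app", "debug": False},
    "production":  {"database_url": "postgresql://prod-db/app", "debug": False},
}

env = os.environ.get("DEPLOY_ENV", "development")   # default to the dev tier
config = SETTINGS[env]
print(f"running in {env}: debug={config['debug']}")
```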
In computing, a system virtual machine is a virtual machine (VM) that provides a complete system platform and supports the execution of a complete operating system (OS). These usually emulate an existing architecture and are built to provide a platform for running programs where the real hardware is not available for use, to run multiple instances of virtual machines for more efficient use of computing resources in terms of energy consumption and cost effectiveness, or both. A VM was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine".
In computer networking, a bare-metal server is a physical computer server that is used by one consumer, or tenant, only. Each server offered for rental is a distinct physical piece of hardware that is a functional server on its own. They are not virtual servers running on hardware that is shared among multiple tenants.