Native cloud application

A native cloud application (NCA) is a type of computer software that natively utilizes services and infrastructure from cloud computing providers such as Amazon EC2, Force.com, or Microsoft Azure. NCAs are characterized by the combined use of three fundamental technologies.

Related Research Articles

In telecommunication, provisioning involves the process of preparing and equipping a network to allow it to provide new services to its users. In National Security/Emergency Preparedness telecommunications services, "provisioning" equates to "initiation" and includes altering the state of an existing priority service or capability.

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

NetApp, Inc. is an American hybrid cloud data services and data management company headquartered in San Jose, California. It ranked in the Fortune 500 from 2012 to 2021. Founded in 1992, with an IPO in 1995, NetApp offers cloud data services for the management of applications and data both online and physically.

MapReduce is a programming model and an associated implementation for processing and generating big data sets with a parallel, distributed algorithm on a cluster.
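As a minimal, single-process sketch of the model (plain Java, not tied to any particular MapReduce implementation), the classic word-count computation maps each document to (word, 1) pairs, shuffles the pairs so that identical words are grouped together, and reduces each group by summing:

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;
import java.util.function.Function;
import java.util.stream.Collectors;

// Single-process sketch of the MapReduce data flow: a map phase emits
// (key, value) pairs, a shuffle groups them by key, and a reduce phase
// folds each group into a result. A real cluster runs the same phases
// in parallel across many machines.
public class MapReduceSketch {
    public static void main(String[] args) {
        List<String> documents = Arrays.asList("the cat", "the dog", "the cat sat");

        // map: split each document into words (each word stands for a (word, 1) pair);
        // shuffle + reduce: group identical words and count each group
        TreeMap<String, Long> wordCounts = documents.stream()
                .flatMap(doc -> Arrays.stream(doc.split("\\s+")))
                .collect(Collectors.groupingBy(
                        Function.identity(), TreeMap::new, Collectors.counting()));

        System.out.println(wordCounts); // {cat=2, dog=1, sat=1, the=3}
    }
}
```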

A problem solving environment (PSE) is a complete, integrated and specialised piece of computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problems, selecting algorithms, simulating numerical values, and viewing and analysing results.

A virtual appliance is a pre-configured virtual machine image, ready to run on a hypervisor; virtual appliances are a subset of the broader class of software appliances. Installation of a software appliance on a virtual machine and packaging that into an image creates a virtual appliance. Like software appliances, virtual appliances are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.

Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Hadoop was originally designed for computer clusters built from commodity hardware, which is still the common use. It has since also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
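For concreteness, here is a condensed version of the canonical WordCount example from the Hadoop MapReduce documentation; the mapper emits (word, 1) pairs and the reducer sums the counts for each word (the job-submission boilerplate, such as the Job configuration and input/output paths, is omitted):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: the framework groups values by key, so each call
    // receives one word together with all of its counts; sum them.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}
```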

Giovanni is a Web interface that allows users to analyze NASA's gridded data from various satellite and surface observations.

The first major provider of infrastructure as a service (IaaS) was Amazon in 2008. IaaS is a cloud computing service model by means of which computing resources are supplied by a cloud services provider. The IaaS vendor provides the storage, network, servers, and virtualization. This service enables users to free themselves from maintaining an on-premises data center. The IaaS provider hosts these resources in a public cloud, a private cloud, or a hybrid cloud.

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a "pay as you go" model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

Dynamic infrastructure is an information technology concept related to the design of data centers, whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned to meet surges in users' needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.

Fabric computing or unified computing involves constructing a computing fabric consisting of interconnected nodes that look like a weave or a fabric when seen collectively from a distance.

The Laboratory of Parallel and Distributed Systems (LPDS), as a department of MTA SZTAKI, is a research laboratory in distributed grid and cloud technologies. LPDS is a founding member of the Hungarian Grid Competence Centre, the Hungarian National Grid Initiative, and the Hungarian OpenNebula Community and also coordinates several European grid/cloud projects.

Interxion is a European provider of carrier- and cloud-neutral colocation data centre services. Founded in 1998 in the Netherlands, the firm was publicly listed on the New York Stock Exchange from 28 January 2011 until its acquisition by Digital Realty in March 2020. Interxion is headquartered in Schiphol-Rijk, the Netherlands, and delivers its services through 53 data centres in 11 European countries, located in major metropolitan areas including Dublin, London, Frankfurt, Paris, Amsterdam, and Madrid (the six main data centre markets in Europe), as well as Marseille, Interxion’s Internet Gateway.

Cloud gaming, sometimes called gaming on demand or gaming-as-a-service or game streaming, is a type of online gaming that runs video games on remote servers and streams them directly to a user's device, or more colloquially, playing a game remotely from a cloud. It contrasts with traditional means of gaming, wherein a game runs locally on a user's video game console, personal computer, or mobile device.

In computing, hyperscale is the ability of an architecture to scale appropriately as increased demand is added to the system.

In computing, Hazelcast IMDG is an open source in-memory data grid based on Java. It is also the name of the company developing the product. The Hazelcast company is funded by venture capital and headquartered in Palo Alto, California.
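A minimal usage sketch of the Hazelcast IMDG Java API follows (the map name "stock" and its contents are arbitrary). Calling Hazelcast.newHazelcastInstance() starts a cluster member, and getMap returns a distributed map whose entries are partitioned across all members:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class HazelcastSketch {
    public static void main(String[] args) {
        // Start a grid member; members on the same network discover
        // each other and form a cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map: entries are partitioned across members,
        // but the client-side interface is an ordinary java.util.Map.
        Map<String, Integer> stock = hz.getMap("stock");
        stock.put("widgets", 42);
        System.out.println(stock.get("widgets")); // 42

        hz.shutdown();
    }
}
```

Running the same program on a second machine on a network where the default discovery applies would join the existing grid rather than start a new one, and both processes would see the same map entries.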

A distributed file system for cloud is a file system that allows many clients to access the same data and supports operations on that data. Each data file may be partitioned into several parts called chunks, and each chunk may be stored on a different remote machine, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture, and each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured: confidentiality, availability, and integrity are the main requirements of a secure system.
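As a hypothetical sketch of the chunking idea (the 64 MiB chunk size and the round-robin placement are illustrative assumptions, not any particular system's policy):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of chunk placement: a file is split into
// fixed-size chunks and each chunk is assigned to a storage node, so
// that clients can operate on the chunks in parallel.
public class ChunkPlacement {
    static final long CHUNK_SIZE = 64L * 1024 * 1024; // 64 MiB (illustrative)

    static List<String> placeChunks(long fileSize, int nodeCount) {
        long chunkCount = (fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE; // ceiling division
        List<String> placement = new ArrayList<>();
        for (long i = 0; i < chunkCount; i++) {
            // Round-robin placement; real systems also replicate each
            // chunk to several nodes for availability.
            placement.add("chunk " + i + " -> node " + (i % nodeCount));
        }
        return placement;
    }

    public static void main(String[] args) {
        // A 200 MiB file across 3 nodes yields 4 chunks.
        placeChunks(200L * 1024 * 1024, 3).forEach(System.out::println);
    }
}
```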

MBrace is an open-source .NET framework and runtime that introduces a programming model for performing large-scale computations in public, private, and hybrid cloud computing environments. MBrace is inspired by the paradigm of asynchronous workflows introduced in the F# programming language, while borrowing ideas from CloudHaskell and HdpH. MBrace is written entirely in F# and provides bindings for F# and C#.
