Techila Grid

Techila Distributed Computing Engine
Developer(s): Techila Technologies Ltd
Operating system: Windows, Linux
Type: Distributed computing, grid computing, middleware
License: Proprietary
Website: www.techilatechnologies.com

Techila Distributed Computing Engine (formerly known as Techila Grid) is a commercial grid computing software product. It speeds up simulation, analysis, and other computational applications by enabling scalability across the IT resources in a user's on-premises data center and in the user's own cloud account. Techila Distributed Computing Engine is developed and licensed by Techila Technologies Ltd, a privately held company headquartered in Tampere, Finland. The product is also available as an on-demand solution in Google Cloud Launcher, the online marketplace created and operated by Google. According to IDC, [1] the solution enables organizations to create HPC infrastructure without the major capital investments and operating expenses required by new HPC hardware.

Product features

Techila Distributed Computing Engine is a distributed computing middleware and management solution, which can be used to access and manage on-premises and cloud IT resources for various high-performance computing (HPC) uses, including high-throughput computing (HTC) scenarios. It creates a scalable computing service and execution environment that can also support applications deployed within production environments.
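
The difference between tightly coupled HPC and high-throughput computing can be illustrated with a short, self-contained Python sketch (generic code, not part of Techila's products): an embarrassingly parallel Monte Carlo estimate in which every task is independent, which is the kind of workload this class of middleware distributes across many machines.

    # Minimal illustration of a high-throughput (embarrassingly parallel)
    # workload: independent tasks with no communication between them.
    # Standard-library Python only; NOT Techila-specific code.
    import random
    from multiprocessing import Pool

    def monte_carlo_hits(samples: int) -> int:
        """One independent task: count random points inside the unit circle."""
        hits = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        tasks, samples_per_task = 100, 100_000
        # A local process pool stands in for the role the middleware plays
        # across many networked computers.
        with Pool() as pool:
            results = pool.map(monte_carlo_hits, [samples_per_task] * tasks)
        print("pi ~", 4.0 * sum(results) / (tasks * samples_per_task))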

The technology of Techila Distributed Computing Engine is built on an autonomic computing architecture that is patented by Techila Technologies. This has enabled features such as automated system management and fault tolerance, which simplify the deployment, use and administration of large-scale distributed computing systems.
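
As a rough illustration of what fault tolerance means in this context, the generic Python sketch below (not Techila's implementation) re-queues a task when its execution fails, so a job can complete even if individual machines drop out mid-run.

    # Generic sketch of fault-tolerant task scheduling: a failed task is
    # re-queued and retried, up to a limit. Illustrative only; it does not
    # reflect Techila's patented architecture or internal implementation.
    from queue import Queue

    def run_with_retries(tasks, execute, max_attempts=3):
        """Run every task, re-queueing failures up to max_attempts times."""
        pending = Queue()
        for task in tasks:
            pending.put((task, 1))            # (task, attempt number)
        results = {}
        while not pending.empty():
            task, attempt = pending.get()
            try:
                results[task] = execute(task)     # e.g. dispatch to a worker
            except Exception:
                if attempt < max_attempts:
                    pending.put((task, attempt + 1))  # transparent retry
                else:
                    results[task] = None              # permanent failure
        return results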

Techila Distributed Computing Engine includes the following components, described under Architecture below: the Techila Server, the Techila Worker, the Techila SDK, and a web-based Administrator User Interface.

Techila Distributed Computing Engine does not limit the physical location or distance of the computing resources used in the system. The Product Description of Techila Distributed Computing Engine lists three supported IT system designs.

Techila Technologies also offers tools that enable automated provisioning and integration of cloud-based resources into the grid.

Architecture

Techila Server

Techila Server is a Java-based software product that optimizes the performance of a Techila Distributed Computing Engine environment and of the jobs in it. The optimization done by the Techila Server not only supports large jobs but also makes the system suitable for running small computational jobs. The performance of Techila Distributed Computing Engine in different scenarios was evaluated in a thesis at Tampere University of Technology. [2]

Originally, the Techila Server was delivered as an embedded appliance; that product was discontinued in 2012. Currently, the Techila Server is delivered either as a virtual appliance or with cloud-specific deployment tools.

Techila Worker

Techila Worker is the software agent that must be installed on each computer that participates in a Techila Distributed Computing Engine environment. The computers can be physical machines, or they can be virtual machines running on a hypervisor or in a cloud. Techila Distributed Computing Engine supports the following public cloud services: Microsoft Azure, Amazon EC2 (Amazon Elastic Compute Cloud), and Google Compute Engine. Once the Techila Worker software is installed on a computer, it authenticates the computer to the Techila Server using a certificate, and the system uses self-management to automatically configure the computer to run jobs received from the Techila Server.
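
The certificate-based enrollment can be pictured with a generic TLS client-certificate request (a hedged sketch using the Python requests library; the host name, file paths, and endpoint are hypothetical placeholders, not Techila's actual protocol).

    # Generic illustration of certificate-based client authentication, the
    # mechanism a worker can use to prove its identity to a central server.
    # The URL and file names below are hypothetical, not Techila's real API.
    import requests

    response = requests.get(
        "https://techila-server.example.com/worker/register",  # hypothetical endpoint
        cert=("worker-cert.pem", "worker-key.pem"),  # client certificate + key
        verify="server-ca.pem",                      # CA that signed the server cert
        timeout=30,
    )
    response.raise_for_status()
    print("Worker enrolled:", response.status_code)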

Techila Worker is a Java-based client middleware component that can run on Microsoft Windows or Linux. Because of this, the client computers participating in a Techila Distributed Computing Engine system can have different hardware and software platforms. The Techila Worker software runs at the lowest possible priority on the computer. The Techila Worker is also interoperable with batch-queuing systems such as SLURM, TORQUE, and Oracle Grid Engine (previously known as Sun Grid Engine, SGE). This interoperability allows existing HPC users to use their existing infrastructure as part of a Techila Distributed Computing Engine system without the Techila Worker interfering with the other system.
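
Running at the lowest possible priority can be sketched in Python (generic code, not the Techila Worker's actual implementation): a background agent can lower its own scheduling priority so that the computer's other workloads always take precedence.

    # Generic sketch: a background agent dropping to the lowest CPU priority
    # so other workloads on the machine win the scheduler. Not the Techila
    # Worker's actual code.
    import os
    import sys

    def drop_priority() -> None:
        """Move this process to the weakest scheduling priority available."""
        if hasattr(os, "nice"):                # POSIX systems (e.g. Linux)
            os.nice(19)                        # 19 = lowest user priority
        elif sys.platform == "win32":          # Windows equivalent
            import ctypes
            IDLE_PRIORITY_CLASS = 0x0040
            handle = ctypes.windll.kernel32.GetCurrentProcess()
            ctypes.windll.kernel32.SetPriorityClass(handle, IDLE_PRIORITY_CLASS)

    drop_priority()
    # ...the agent would then poll the server for jobs and execute them...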

Techila SDK

Techila SDK (formerly known as Techila Grid Management Kit, or Techila GMK) is a library of software components that connect applications to the Techila Distributed Computing Engine environment. The SDK includes plug-ins for many commonly used research and development tools and languages, such as MATLAB, R, Python, Perl, Java, C#/.NET, C/C++, FORTRAN, and command-line interface scripts. Applications developed using the application programming interfaces in the Techila SDK can also be deployed within production environments and run as a service in a service-oriented architecture (SOA) environment. The Techila SDK supports both Windows and Linux operating systems.
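
As a hedged sketch of how such an SDK is used from Python: the module name "techila" and the "peach" (parallel each) call below follow the naming pattern in Techila's public documentation, but the exact signature shown is an assumption, and the example is illustrative rather than verified against the real SDK.

    # Hypothetical usage sketch of a Techila-SDK-style API from Python.
    # The function "techila.peach" and its parameters are assumptions based
    # on the vendor's documented naming, not a verified signature.
    import techila  # assumption: the SDK's Python package

    def simulate(seed):
        # The function each job executes on a Techila Worker.
        import random
        random.seed(seed)
        return sum(random.random() for _ in range(1_000_000))

    results = techila.peach(
        funcname=simulate,        # function to fan out as independent jobs
        params=['<param>'],       # placeholder filled per job from peachvector
        peachvector=range(100),   # one job per element: 100 jobs in total
    )
    print(sum(results))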

Administrator User Interface

A web-based Administrator User Interface provides administrators with a simplified and easy-to-use interface to the Techila Server. The Administrator User Interface allows administrators to monitor system activity, view and control job execution and execution policies, monitor and control Techila Workers and Techila Worker Groups, control security settings, and manage users.

History

Techila Technologies says that development of the Techila Distributed Computing Engine technology started from the vision of grid computing and of enabling fast simulation and analysis without the complexity of traditional high-performance computing.

The security of Techila Distributed Computing Engine was evaluated by Nixu Ltd in 2008. Nixu is Finland's largest specialist company in information security consulting, with many global corporations as customers. Since then, Techila Distributed Computing Engine has been adopted by security-sensitive industry sectors such as finance and insurance, engineering, and pharmaceuticals.

Techila Distributed Computing Engine was demonstrated by a research team at the University of Helsinki in 2011 as being capable of providing autonomic management to computing environments comprising large numbers of Windows Azure cloud instances. The University of Helsinki has also demonstrated Techila Distributed Computing Engine's ability to enhance the usability and utilization of large-scale cluster resources in projects implemented using MATLAB, R, Python, Java, and C/C++/C#.

In a Techila Distributed Computing Engine system, computational resources can be arranged into device groups for organizational, security, compliance, and administrative control. While the system performs well in large-scale environments such as CSC - IT Center for Science, it is also suitable for smaller environments such as TUTGrid, which utilizes the idle capacity of desktop PCs and other computers at Tampere University of Technology (TUT) for scientific computing.

References

  1. Wu, Jie (2010). The Rise of Grid-Based High-Performance Computing: A Cost-Effective Approach to HPC Acquisition. IDC. p. 7.
  2. Koskinen, Marko (2010). Evaluating the performance of job management systems in different distributed computing environments (PDF). Tampere University of Technology. p. 63.