"Verify mode" (also called dry run) refers to having an ability to determine whether a node is conformant with a guarantee of not modifying it, and typically involves the exclusive use of an internal language supporting read-only mode for all potentially system-modifying operations. Mutual authentication (mutual auth) refers to the client verifying the server and vice versa.
Agent describes whether additional software daemons are required. Depending on the management software, these agents are deployed either on the target systems or on one or more central controller servers. Although "Agent-less = No" is colored red and might seem to be a negative, having an agent can in fact be advantageous: consider the impact if an agent-less tool loses connectivity to a node while making critical changes, leaving the node in an indeterminate state that compromises its (possibly production) function.
Note: This means platforms on which a recent version of the tool has actually been used successfully, not platforms where it should theoretically work because it is written in portable C/C++ or an interpreted language. The platform should also be listed as supported on the project's web site.
Not all tools have the same goal and the same feature set. To help distinguish between all of these software packages, here is a short description of each one.
Ansible combines multi-node deployment, ad-hoc task execution, and configuration management in one package. It manages nodes over SSH and requires Python (2.6+ or 3.5+) to be installed on them.[110] Modules communicate over JSON and standard output and can be written in any language. Ansible uses YAML to express reusable descriptions of systems.
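To illustrate the module contract ("JSON and standard output"), here is a rough sketch of a standalone module written in plain Python: it reads its arguments from a JSON file and prints a JSON result on standard output, which is roughly how Ansible invokes third-party modules that opt into JSON arguments (the WANT_JSON convention). The parameter names are purely illustrative and the exact invocation details vary between Ansible versions.

```python
#!/usr/bin/env python3
# WANT_JSON
# Sketch of an Ansible-style module: JSON arguments in, JSON result out.
import json
import os
import sys

def main() -> None:
    # Ansible passes non-Python "want JSON" modules the path of a file
    # containing the module arguments as a JSON document.
    with open(sys.argv[1]) as f:
        args = json.load(f)

    path = args["path"]                       # illustrative parameter
    mode = int(args.get("mode", "0644"), 8)   # illustrative parameter

    current = os.stat(path).st_mode & 0o777
    changed = current != mode
    if changed:
        os.chmod(path, mode)

    # The module reports its result as a single JSON document on stdout.
    print(json.dumps({"changed": changed, "path": path,
                      "previous_mode": oct(current)}))

if __name__ == "__main__":
    main()
```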
Bcfg2 is software to manage the configuration of a large number of computers using a central configuration model and the client–server paradigm. The system enables reconciliation between each client's state and the central configuration specification. Detailed reports provide a way to identify unmanaged configuration on hosts. Generators enable code- or template-based generation of configuration files from a central data repository.
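The generator idea (rendering per-host configuration files from a central data repository) can be sketched in a few lines of Python; this is only an illustration of the concept, not Bcfg2's generator API, and the hostnames and values are made up.

```python
from string import Template

# A central data repository: per-host facts kept in one place.
REPOSITORY = {
    "web01": {"ntp_server": "ntp1.example.org", "syslog_host": "logs.example.org"},
    "db01":  {"ntp_server": "ntp2.example.org", "syslog_host": "logs.example.org"},
}

NTP_TEMPLATE = Template("server $ntp_server iburst\n")

def generate(host: str) -> str:
    """Render the ntp.conf contents for one host from the central data."""
    return NTP_TEMPLATE.substitute(REPOSITORY[host])

print(generate("web01"))
```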
Lightweight agent system. Manages configuration of a large number of computers using the client–server paradigm or stand-alone. Any client state which is different from the policy description is reverted to the desired state. Configuration state is specified via a declarative language.[111] CFEngine's paradigm is convergent "computer immunology".[112]
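The convergent behaviour described above, repeatedly comparing actual state against the policy and repairing any drift, can be pictured with a small Python sketch. This illustrates the general idea only and is not CFEngine's promise language; the file paths and modes are arbitrary examples and are assumed to exist.

```python
import os

# Desired state ("policy"): each file path maps to the mode it must have.
POLICY = {
    "/etc/ssh/sshd_config": 0o600,
    "/etc/hosts": 0o644,
}

def converge(policy):
    """Revert any client state that differs from the policy description."""
    for path, desired_mode in policy.items():
        actual_mode = os.stat(path).st_mode & 0o777
        if actual_mode != desired_mode:
            print(f"repairing {path}: {oct(actual_mode)} -> {oct(desired_mode)}")
            os.chmod(path, desired_mode)

# Running converge() repeatedly is safe: once the system matches the
# policy it is a no-op, which is the essence of convergence.
converge(POLICY)
```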
cdist is a zero dependency configuration management system: It requires only ssh on the target host, which is usually enabled on all Unix-like machines. Only the administration host needs to have Python 3.2 installed.
Chef is a configuration management tool written in Ruby and Erlang,[113] and uses a pure-Ruby DSL for writing configuration "recipes". These recipes contain resources that should be put into the declared state. Chef can be used as a client–server tool, or used in "solo" mode.[114]
Consfigurator
While Debian and its derivatives are the best supported distributions, Consfigurator also works on other distributions and various Unixes, though with less support for properties that configure specific aspects of the system. Consfigurator can apply properties defined in Lisp; this requires Consfigurator to be installed on the target computer. A more restricted language is also available which works without Consfigurator being installed on the target. Remote configuration is also supported: the configuration of hosts can be defined with Lisp code.
Guix integrates many things in the same tool (a distribution, a package manager, a configuration management tool, a container environment, etc.). To remotely manage systems, it needs the target machines to already run Guix,[115] or it can alternatively deploy configurations to a DigitalOcean Droplet.[116] The machines are configured with Scheme.
ISconf is a tool to execute commands and replicate files on all nodes. The nodes do not need to be up; the commands will be executed when they boot. The system has no central server, so commands can be launched from any node and will replicate to all nodes.
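The underlying idea, an ordered journal of operations that every node eventually replays (even nodes that were down when the command was issued), can be sketched in Python as follows. This is a conceptual sketch only; the file paths and formats are invented and are not ISconf's.

```python
import json
import subprocess
from pathlib import Path

JOURNAL = Path("/var/lib/nodejournal/journal.jsonl")   # hypothetical path
STATE = Path("/var/lib/nodejournal/last_applied")      # hypothetical path

def replay_journal() -> None:
    """Run at boot: execute every journal entry not yet applied on this node."""
    if not JOURNAL.exists():
        return
    last = int(STATE.read_text()) if STATE.exists() else 0
    for line in JOURNAL.read_text().splitlines():
        entry = json.loads(line)
        if entry["seq"] <= last:
            continue                       # already applied before the reboot
        subprocess.run(entry["command"], shell=True, check=True)
        STATE.write_text(str(entry["seq"]))
```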
Juju concentrates on the notion of service, abstracting the notion of machine or server, and defines relations between those services that are automatically updated when two linked services observe a notable modification.
LCFG manages the configuration with a central description language in XML, specifying resources, aspects and profiles. Configuration is deployed using the client–server paradigm. Appropriate scripts on clients (called components) transcribe the resources into configuration files and restart services as needed.
PIKT is foremost a monitoring system that also does configuration management. "PIKT consists of a sophisticated, feature-rich file preprocessor; an innovative scripting language with unique labor-saving features; a flexible, centrally directed process scheduler; a customizing file installer; a collection of powerful command-line extensions; and other useful tools."
Puppet consists of a custom declarative language to describe system configuration, distributed using the client–server paradigm (using the XML-RPC protocol in older versions, and REST more recently), and a library to realize the configuration. The resource abstraction layer enables administrators to describe the configuration in high-level terms, such as users, services and packages. Puppet will then ensure the server's state matches the description. Puppet briefly supported a pure Ruby DSL as an alternative configuration language starting with version 2.6.0; however, this feature was deprecated beginning with version 3.1.[111][114][118][119]
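A resource abstraction layer of this kind maps high-level declarations (users, services, packages) onto platform-specific commands. The following Python sketch illustrates the idea in generic terms; it is not Puppet's implementation, and it assumes an apt-based system purely for the example.

```python
import grp
import subprocess

# High-level declarations: resource type, name, desired state.
CATALOG = [
    {"type": "group", "name": "deploy", "ensure": "present"},
    {"type": "package", "name": "curl", "ensure": "present"},
]

def ensure_group(name: str) -> None:
    try:
        grp.getgrnam(name)                       # already present
    except KeyError:
        subprocess.run(["groupadd", name], check=True)

def ensure_package(name: str) -> None:
    # Debian/apt check, used here only to keep the example concrete.
    result = subprocess.run(["dpkg", "-s", name], capture_output=True, text=True)
    if "Status: install ok installed" not in result.stdout:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

# The abstraction layer: one handler per resource type.
HANDLERS = {"group": ensure_group, "package": ensure_package}

for resource in CATALOG:
    HANDLERS[resource["type"]](resource["name"])
```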
Pyinfra
Pyinfra is an agentless server configuration management tool written in Python. Its execution speed is reported to be up to 10 times faster than Ansible's.[120] Pyinfra is also well suited to system integration, as it can control SSH connections, Docker, Terraform, Ansible, etc. through a mechanism called connectors. Pyinfra can be run ad hoc or through its API.[121]
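A pyinfra deploy is an ordinary Python file that calls operation functions, roughly along these lines; the package, hostnames, and file names are illustrative, and the operation signatures should be checked against the documentation of the pyinfra version in use.

```python
# deploy.py -- a sketch of a pyinfra deploy file.
# Typically run with something like: pyinfra inventory.py deploy.py
from pyinfra.operations import apt, server

apt.packages(
    name="Install nginx",
    packages=["nginx"],
    update=True,
)

server.shell(
    name="Enable and start nginx",
    commands=["systemctl enable --now nginx"],
)
```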
The quattor information model is based on the distinction between the desired state and the actual state. The desired state is registered in a fabric-wide configuration database, using a specially designed configuration language called Pan for expressing and validating configurations, composed out of reusable hierarchical building blocks called templates. Configurations are propagated to and cached on the managed nodes.
Radmind manages host configuration at the file system level. In a similar way to Tripwire (and other configuration management tools), it can detect external changes to managed configuration, and can optionally reverse the changes. Radmind has no higher-level abstraction of configuration elements (services, packages). A graphical interface is available (only) for OS X.
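File-system-level change detection of this kind amounts to comparing current file checksums against a stored baseline. Here is a minimal Python illustration of that idea; it is not Radmind's transcript format, and the manifest path is made up.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("manifest.json")   # hypothetical store of known-good checksums

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record(paths) -> None:
    """Store the current checksums as the known-good baseline."""
    MANIFEST.write_text(json.dumps({str(p): checksum(p) for p in paths}))

def detect_drift() -> list:
    """Return managed files whose contents changed since the baseline."""
    baseline = json.loads(MANIFEST.read_text())
    return [p for p, digest in baseline.items() if checksum(Path(p)) != digest]

if __name__ == "__main__":
    record([Path("/etc/hosts")])
    print("changed files:", detect_drift())
```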
Rex is a remote execution system with integrated configuration management and software deployment capabilities. The admin provides configuration instructions via so-called Rexfiles. They are written in a small DSL but can also contain arbitrary Perl. It integrates well with an automated build system used in CI environments.
Salt started out as a tool for remote server management. As its usage has grown, it has gained a number of extended features, including a more comprehensive mechanism for host configuration, a relatively new capability provided by the Salt States component. Given the traction Salt has gained, support for more features and platforms might continue to grow.
SmartFrog is a Java-based tool to deploy and configure applications distributed across multiple machines. There is no central server; a .SF configuration file can be deployed to any node and distributed to peer nodes according to the distribution information contained inside the deployment descriptor itself.
Spacewalk is an open-source Linux and Solaris systems management solution and the upstream project for the source of Red Hat Network Satellite. Spacewalk works with RHEL, Fedora, and other RHEL-derivative distributions such as CentOS and Scientific Linux, and there are ongoing efforts to get it packaged for inclusion in Fedora. Spacewalk provides systems inventory (hardware and software information), installation and updates of software, collection and distribution of custom software packages into manageable groups, system provisioning, management and deployment of configuration files, system monitoring, virtual guest provisioning, starting/stopping/configuring virtual guests, and delegation of all of these actions to local or LDAP users and system entitlements. As of May 2020, Spacewalk is EOL, with users having moved to either Uyuni or Foreman/Katello.
The Software Testing Automation Framework (STAF) enables users to create cross-platform, distributed software test environments. STAF removes the tedium of building an automation infrastructure, enabling users to focus on building their automation solution. The STAF framework provides the foundation upon which to build higher-level solutions, and provides a pluggable approach supported across a large variety of platforms and languages.
Synctool
Synctool aims to be easy to understand, learn and use. It is written in Python and makes use of SSH (passwordless, with host-based or key-based authentication) and rsync. No specific language is needed to configure Synctool. Synctool has a dry-run capability that enables changes to be applied with surgical precision. Synctool depends on Python 2, which is now EOL, and there are no current plans to migrate it to Python 3.
Key pair: uses public/private key pairs and key fingerprints for mutual authentication, like SSH.
Secure Shell: uses the Secure Shell protocol for encryption.
Certificate and Passwords: uses SSL X.509 certificates and fingerprints for clients to authenticate the server, and passwords for the server to authenticate clients; clients should only share the same password if they are allowed access to each other's configuration data.
SSL: uses the Secure Sockets Layer / Transport Layer Security (TLS) for encryption.
Full support for non-modifying determination of node compliance, including nodes not previously modified by a Bcfg2 configuration pass.
"Recent versions run on Fedora Core (3, 5, 6). Various people have ported some of the LCFG core to other Linux distributions, such as Debian, but these ports have not been incorporated."
"There has been an experimental port to OS X, which does work and includes some Mac-specific components. However, this is not production quality and the lack of uniform packaging system under OS X means that automatic management of installed software is likely to be difficult."
"LCFG core has been ported back to Solaris and we are using this in production, although the software has not been packaged for distribution, and is not so well supported."
Written in Java, so should in theory work on this platform if there is an appropriate JVM version available for it; however, it has not been tested on the platform, which should be considered unsupported.
Will run anywhere Python runs, but handlers for different platforms are untested.
Improved security, which would include an encrypted, mutually authenticated, peer-to-peer message bus, is tracked at "#39 (Implement TCP mesh) - ISconf - Trac". Archived from the original on 2012-07-16. Retrieved 2007-04-17.
LCFG does not provide its own transport mechanism; it relies on an external program, most often Apache. Using Apache it should be possible to do mutual authentication in several ways; however, the documentation at The Complete Guide to LCFG, Section 9.4: Authorization and Security, shows access control based on IP address ranges, implying that the client does not authenticate itself to the server via an SSL certificate; it also does not mention whether the LCFG client checks the validity of the server's SSL certificate (such as via a per-site fingerprint distributed with the client, or a chain of trust to an accredited CA). It mentions that there can be a per-client password in the profile, but also states that "The contents of the LCFG profile should be considered public".
LCFG supports encrypted communications channels (SSL via Apache); however, the documentation at The Complete Guide to LCFG, Section 9.4: Authorization and Security, states that "The contents of the LCFG profile should be considered public".
Robert Osterlund (2014-01-04). "PIKT Licensing". Pikt.org. Retrieved 2014-02-10.
PIKT uses shared secret keys for mutual authentication. "As an option, you can use secret key authentication to prove the master's identity to the slave. [...] If one managed to crack any system in the PIKT domain, one would have access to all common secrets. To solve this problem, you may use per-slave uid, gid, and private_key settings." - from Security Considerations.
"For file installs, file fetches (to diff against the central configuration), and command executions, you can optionally encrypt all such data traffic between master and slave." - from Security Considerations.
↑ "Client to server authentication and vice versa: on one hand, this allows to enforce access policies to sensitive data according to the client "name", on the other hand, clients are guaranteed to talk to the original server." - from Quattor Installation and User Guide: Version 1.1.xArchived 2013-04-06 at the Wayback Machine , page 70
↑ Salt is an open source tool to manage your infrastructure. Easy enough to get running in minutes and fast enough to manage tens of thousands of servers
↑ There is a feature request for a Secure TCP/IP Connection Provider, and one of the developers stated on 2007-04-05 that "You will need to download the source code for OpenSSL and point the build files at it. Other than that, it should just work.", so it looks like there may be working encryption if you build from scratch instead of using the prebuilt binaries. It is unclear what if any authentication building against OpenSSL would give STAF.
↑ Installation: Control Machine Requirements, retrieved May 12, 2015 Can manage any machine with Python 2.4 or later and sshd. Control machine can be any non-Windows machine with Python 2.6 or 2.7 installed. This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.