Developer(s) | Jeffrey Fulmer, et al.
---|---
Stable release | 3.0.9
Repository | github
Available in | English
Type | Load testing
License | GPLv3 or later [1]
Website | www
Siege is a Hypertext Transfer Protocol (HTTP) and HTTPS load testing and web server benchmarking utility developed by Jeffrey Fulmer. It was designed to let web developers measure the performance of their code under stress, to see how it will stand up to load on the internet.
It is licensed under the GNU General Public License (GNU GPL) open-source software license, which means it is free to use, modify, and distribute. [2]
Siege can stress a single URL, or it can read many URLs into memory and stress them simultaneously. It supports basic authentication, cookies, and the HTTP, HTTPS, and FTP protocols. [3]
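A Siege URL file is plain text with one URL per line. The sketch below is illustrative (the host names and the file name urls.txt are invented), and the POST form follows the syntax described in Siege's documentation, so check your version's manual:

    # urls.txt -- siege reads every line into memory before the run
    http://www.example.com/
    http://www.example.com/catalog.jsp
    # POST data may follow a URL:
    http://www.example.com/cgi-bin/login.cgi POST user=jeff&pass=secret

The file is selected with the -f option, for example: siege -f urls.txt -c10 -r5.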
Performance measures include elapsed time of the test, the amount of data transferred (including headers), the response time of the server, its transaction rate, its throughput, its concurrency and the number of times it returned OK. These measures are quantified and reported at the end of each run. [4]
This is a sample of siege output:
    Ben: $ siege -u shemp.whoohoo.com/Admin.jsp -d1 -r10 -c25
    ** Siege 2.65 2006/05/11 23:42:16
    ** Preparing 25 concurrent users for battle.
    The server is now under siege...done
    Transactions:                250 hits
    Elapsed time:              14.67 secs
    Data transferred:        448,000 bytes
    Response time:              0.43 secs
    Transaction rate:          17.04 trans/sec
    Throughput:             30538.51 bytes/sec
    Concurrency:                7.38
    Status code 200:             250
    Successful transactions:     250
    Failed transactions:           0
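The figures in such a report are tied together by simple arithmetic; from the sample above:

    Transaction rate = 250 transactions / 14.67 s  ≈ 17.04 trans/sec
    Throughput       = 448,000 bytes / 14.67 s     ≈ 30,538.51 bytes/sec
    Concurrency      ≈ 17.04 trans/sec × 0.43 s    ≈ 7.3 simultaneous connections

(Siege derives concurrency from accumulated transaction time divided by elapsed time, so the product of transaction rate and response time only approximates the reported 7.38.)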
Siege has essentially three modes of operation: regression, internet simulation and brute force. It can read a large number of URLs from a configuration file and run through them incrementally (regression) or randomly (internet simulation). Or the user may simply pound a single URL with a runtime configuration at the command line (brute force). [4]
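Typical invocations for the three modes look roughly like the following; the host name and file name are illustrative, and flag spellings vary a little between releases (the 2.65 sample above uses -u to name the URL, which later versions drop):

    # Brute force: pound a single URL given on the command line
    $ siege -c25 -r10 http://www.example.com/

    # Regression: step through the URLs in a file, in order
    $ siege -f urls.txt -c25 -r10

    # Internet simulation: pick URLs from the file at random (-i) for one minute
    $ siege -f urls.txt -i -c25 -t1M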
Siege was written on Linux and has been successfully ported to AIX, BSD, HP-UX, and Solaris. It compiles on most UNIX System V variants and on most newer BSD systems. [4]
The Apache HTTP Server, colloquially called Apache, is a free and open-source cross-platform web server software, released under the terms of Apache License 2.0. Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation.
The Hypertext Transfer Protocol (HTTP) is an application layer protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.
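As a concrete illustration, a minimal HTTP/1.1 exchange consists of a plain-text request and response (the host and body size here are invented):

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Length: 1256

    <html>...</html>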
In general terms, throughput is the rate of production or the rate at which something is processed.
A web server is computer software and underlying hardware that accepts requests via HTTP, the network protocol created to distribute web pages, or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource or an error message. The server can also accept and store resources sent from the user agent if configured to do so.
In software quality assurance, performance testing is in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Stress testing is a form of deliberately intense or thorough testing used to determine the stability of a given system, critical infrastructure or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
Load testing is the process of putting demand on a software system and measuring its response.
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.
cURL is a computer software project providing a library (libcurl) and command-line tool (curl) for transferring data using various network protocols. The name stands for "Client URL"; the tool was first released in 1997.
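A few common curl invocations (the URL and output file name are illustrative):

    # Fetch a page and save it to a file
    $ curl -o page.html https://www.example.com/

    # Show only the response headers
    $ curl -I https://www.example.com/

    # Follow redirects, silently
    $ curl -sL https://www.example.com/ > page.html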
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly used to refer to the elaborately designed benchmarking programs themselves.
Capacity management's goal is to ensure that information technology resources are sufficient to meet upcoming business requirements cost-effectively. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.
In computer networks, goodput is the application-level throughput of a communication; i.e. the number of useful information bits delivered by the network to a certain destination per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. This is related to the amount of time from the first bit of the first packet sent until the last bit of the last packet is delivered.
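For an illustrative calculation with invented numbers: suppose delivering a 1,000,000-byte file takes 2 seconds, during which the network actually carries 1,100,000 bytes once headers and retransmitted packets are counted. Then:

    Throughput = 1,100,000 bytes × 8 / 2 s = 4.4 Mbit/s
    Goodput    = 1,000,000 bytes × 8 / 2 s = 4.0 Mbit/s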
Web server benchmarking is the process of estimating a web server's performance in order to determine whether the server can serve a sufficiently high workload.
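Siege itself can serve this purpose: its -b (benchmark) option removes the internal delay between requests. As with the earlier examples, the host name below is illustrative and the flags should be checked against your version:

    # Benchmark mode: no think-time between requests, 50 users for 60 seconds
    $ siege -b -c50 -t60S http://www.example.com/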
The Telecommunication Application Transaction Processing Benchmark (TATP) is a benchmark designed to measure the performance of in-memory database transaction systems.
NBench, short for Native mode Benchmark and later known as BYTEmark, is a synthetic computing benchmark program developed in the mid-1990s by the now-defunct BYTE magazine, intended to measure a computer's CPU, FPU, and memory system speed.
The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet Protocol Suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol provides multi-homing and redundant paths to increase resilience and reliability. SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 4960. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms.
Gatling is an open-source load- and performance-testing framework based on Scala, Akka and Netty. The first stable release was published on January 13, 2012. In 2015, Gatling's founder, Stéphane Landelle, created a company dedicated to the development of the open-source project. According to Gatling Corp's official blog, Gatling was downloaded more than 800,000 times. In June 2016, Gatling officially presented Gatling FrontLine, Gatling's Enterprise Version with additional features.
Lightning Memory-Mapped Database (LMDB) is a software library that provides a high-performance embedded transactional database in the form of a key-value store. LMDB is written in C with API bindings for several programming languages. LMDB stores arbitrary key/data pairs as byte arrays, has a range-based search capability, supports multiple data items for a single key and has a special mode for appending records at the end of the database (MDB_APPEND), which gives a dramatic write performance increase over other similar stores. LMDB is not a relational database; it is strictly a key-value store like Berkeley DB and dbm.
Enduro/X is an open-source middleware platform for distributed transaction processing. It is built on proven APIs such as the X/Open group's XATMI and XA. The platform is designed for building real-time, microservice-based applications with a clusterization option. Enduro/X functions as an extended drop-in replacement for Oracle Tuxedo. The platform uses in-memory POSIX kernel queues, which ensures high interprocess communication throughput.