| Stable release | 11 |
| --- | --- |
| Repository | |
| Written in | C |
| Operating system | Cross-platform |
| Available in | English |
| Type | Module for the Apache HTTP server |
| License | Apache License |
| Website | mod-qos |
mod_qos is a quality of service (QoS) module for the Apache HTTP server implementing control mechanisms that can assign different priorities to different requests.
A web server can serve only a limited number of concurrent requests. QoS mechanisms ensure that important resources stay available under high server load: mod_qos rejects requests to less important resources while granting access to more important applications. Access restrictions can also be lifted, for example for requests to very important resources or from very important users.
Control mechanisms are available at several levels, as illustrated by the use cases below: per-request limits (such as the number of concurrent requests or requests per second to a URL), per-connection limits (such as keep-alive control and the number of connections per client IP address), and response bandwidth limits.
The module can also be useful in a reverse proxy, where it divides the available resources among different backend web servers.
The first use case shows how mod_qos can prevent a service outage of a web server caused by the slow responses of a single application. If an application (here /ccc) is very slow, requests wait until a timeout occurs. Because of the many waiting requests, the web server runs out of free TCP connections and can no longer process requests to the applications /aaa or /bbb. mod_qos limits the number of concurrent requests to an application in order to ensure the availability of other resources.
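A minimal configuration sketch for this use case, using the QS_LocRequestLimit directive from the module's documentation (the module path, the /ccc location, and the limit of 10 are illustrative):

```apache
# Load mod_qos (the module path varies by distribution)
LoadModule qos_module modules/mod_qos.so

# Allow at most 10 concurrent requests to the slow application /ccc;
# additional requests to /ccc are rejected immediately rather than
# queued, so worker connections remain free for /aaa and /bbb.
QS_LocRequestLimit /ccc 10
```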
The keep-alive extension to HTTP/1.1 allows persistent TCP connections to carry multiple request/response pairs. This speeds up access to the web server by reducing network overhead. The disadvantage of persistent connections is that server resources remain blocked even while no data is exchanged between client and server. mod_qos allows a server to support keep-alive as long as sufficient connections are free, and stops keep-alive support when a defined connection threshold is reached.
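A sketch of such a threshold, assuming the QS_SrvMaxConnClose directive (the value of 600 is an arbitrary example and should be chosen relative to the server's MaxRequestWorkers setting):

```apache
# Honor keep-alive only while fewer than 600 TCP connections are open;
# above this threshold, every response closes its connection so that
# idle clients cannot hold workers busy under load.
QS_SrvMaxConnClose 600
```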
A single client may open many simultaneous TCP connections in order to download different content from the web server. While that client holds many connections, other users may be unable to access the server because no free connections remain for them. mod_qos can limit the number of concurrent connections per source IP address.
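An illustrative setting, assuming the QS_SrvMaxConnPerIP directive (the limit of 30 is an example value):

```apache
# Accept no more than 30 simultaneous TCP connections
# from any single client IP address.
QS_SrvMaxConnPerIP 30
```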
mod_qos can also limit the number of requests to a URL by enforcing a maximum number of requests per second. The module can likewise control bandwidth: given a maximum allowed bandwidth, mod_qos starts throttling when that limit is exceeded.
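A sketch combining both controls, assuming the QS_LocRequestPerSecLimit and QS_LocKBytesPerSecLimit directives (the /download URL and the numeric limits are illustrative):

```apache
# Throttle /download to roughly 100 requests per second;
# requests exceeding the rate are delayed rather than rejected.
QS_LocRequestPerSecLimit /download 100

# Cap the download bandwidth for /download at about 5120 kbytes
# per second across all concurrent requests to that location.
QS_LocKBytesPerSecLimit /download 5120
```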
mod_qos may help to protect an Apache web server against low-bandwidth DoS attacks by enforcing a minimum upload/download throughput that a client must generate. [1]
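An example of this defense, assuming the QS_SrvMinDataRate directive (the values 120 and 1200 follow the pattern shown in the module's documentation; appropriate numbers depend on the deployment):

```apache
# Require clients to send/receive at least 120 bytes per second;
# the enforced minimum grows with the number of open connections,
# up to 1200 bytes per second when the server is busy. Connections
# falling below the rate are closed, defeating slow-request attacks.
QS_SrvMinDataRate 120 1200
```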
The initial release of mod_qos was created in May 2007 and published on SourceForge.net [2] as an open-source software project. It was able to limit the number of concurrent HTTP requests for specified resources (the path portion of request URLs) on the web server. More features were added over time, some of which help protect Apache servers against DoS attacks. [3] [4] In 2012, mod_qos was included in the Ubuntu Linux distribution. [5]
Major releases: [6]