In capital markets, low latency is the use of algorithmic trading to react to market events faster than the competition, increasing the profitability of trades. For example, when executing arbitrage strategies the opportunity to "arb" the market may only present itself for a few milliseconds before parity is achieved. To demonstrate the value that clients place on latency, in 2007 a large global investment bank stated that every millisecond lost results in $100m per annum in lost opportunity. [1]
What is considered "low" is therefore relative but also a self-fulfilling prophecy. Many organisations and companies are using the words "ultra low latency" to describe latencies of under 1 millisecond, but it is an evolving definition, with the amount of time considered "low" ever-shrinking.
Firms engaged in low latency trading are willing to invest considerable effort and resources to increase the speed of their trading technology, as the gains can be significant. This is often done in the context of high-frequency trading.
There are many factors which affect the time it takes a trading system to detect an opportunity and to successfully exploit it, including:
From a networking perspective, the speed of light "c" dictates one theoretical latency limit: a trading engine just 150 km (93 miles) down the road from the exchange can never achieve better than 1 ms round-trip times to the exchange, before one even considers the internal latency of the exchange and the trading system. This theoretical limit assumes light travelling in a straight line in a vacuum, which in practice is unlikely: firstly, achieving and maintaining a vacuum over a long distance is difficult; secondly, light cannot easily be beamed and received over long distances due to many factors, including the curvature of the Earth and interference by particles in the air. Light travelling within dark fibre cables does not travel at "c", since the glass is not a vacuum: its refractive index slows propagation to roughly two-thirds of c, and reflections within the fibre lengthen the effective path travelled in comparison to the length of the cable, slowing it further. There are also in practice several routers, switches, other cable links and protocol changes between an exchange and a trading system. As a result, most low latency trading engines will be found physically close to the exchanges, even in the same building as the exchange (co-location), to further reduce latency.
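The speed-of-light bound described above can be sketched numerically. This is an illustrative calculation only, not production code:

```python
# Illustrative sketch: the hard lower bound that the speed of light
# places on round-trip time to an exchange.
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds, assuming a straight
    line in a vacuum and zero processing time at either end."""
    return 2 * distance_km / C_VACUUM_KM_S * 1000

# A trading engine 150 km from the exchange can never beat ~1 ms:
print(round(min_round_trip_ms(150), 3))  # ≈ 1.001 ms
```

Any real path (fibre, routers, protocol hops) only adds to this floor, which is why co-location is attractive.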
To further reduce latency, new technologies are being employed. Wireless data transmission can offer speed advantages over the best cabling options, since signals travel faster through air than through fiber. Wireless transmission can also follow a straighter, more direct path than cabling routes. [2]
A crucial factor in determining the latency of a data channel is its throughput. Data rates are increasing exponentially, which directly affects the speed at which messages can be processed. Low-latency systems need not only to get a message from A to B as quickly as possible, but also to process millions of messages per second. See comparison of latency and throughput for a more in-depth discussion.
When talking about latency in the context of capital markets, consider the round trip between event and trade:
We also need to consider how latency is assembled in this chain of events:
A number of steps contribute to the total latency of a trade:
The systems at a particular venue need to handle events, such as order placement, and get them onto the wire as quickly as possible to be competitive within the marketplace. Some venues offer premium services for clients needing the quickest solutions.
This is one of the areas where the most delay can be added, due to the distances involved, the amount of processing by internal routing engines, hand-offs between different networks, and the sheer amount of data which is being sent, received and processed from various data venues.
Latency is largely a function of the speed of light, which is 299,792,458 metres per second (186,000 miles per second) in a vacuum; this equates to a latency of roughly 3.33 microseconds for every kilometre travelled. However, when measuring the latency of data we need to account for the fiber optic cable. Although it seems "pure", it is not a vacuum, so the refractive index of the glass must be accounted for. For long-haul networks, the resulting latency is about 4.9 microseconds for every kilometre. In shorter metro networks, the per-kilometre latency is a little higher: building risers and cross-connects can push it to around 5 microseconds per kilometre.
It follows that to calculate latency of a connection, one needs to know the full distance travelled by the fiber, which is rarely a straight line, since it has to traverse geographic contours and obstacles, such as roads and railway tracks, as well as other rights-of-way.
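The per-kilometre arithmetic above can be sketched as follows; the 4.9 µs/km long-haul and ~5 µs/km metro figures come from the text, while the route length in the example is illustrative:

```python
# Sketch of the per-kilometre propagation-latency arithmetic.
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

def fibre_latency_us(route_km: float, us_per_km: float = 4.9) -> float:
    """One-way latency over a fibre route in microseconds, using the
    long-haul figure of 4.9 us/km (metro networks run closer to 5 us/km).
    route_km must be the actual fibre path length, not straight-line distance."""
    return route_km * us_per_km

vacuum_us_per_km = 1e6 / C_VACUUM_KM_S   # ≈ 3.34 us/km in a vacuum
print(round(vacuum_us_per_km, 2))
print(fibre_latency_us(1200))            # ≈ 5880 us for a 1200 km route
```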
Due to imperfections in the fiber, light degrades as it is transmitted through it. For distances greater than 100 kilometers, either amplifiers or regenerators need to be deployed. Accepted wisdom has it that amplifiers add less latency than regenerators, though in both cases the added latency can be highly variable, which needs to be taken into account. In particular, legacy spans are more likely to make use of higher latency regenerators.
This area doesn't strictly belong under the umbrella of "low-latency", rather it is the ability of the trading firm to take advantage of High Performance Computing technologies to process data quickly. However, it is included for completeness.
As with delays between Exchange and Application, many trades will involve a brokerage firm's systems. The competitiveness of the brokerage firm in many cases is directly related to the performance of their order placement and management systems.
The amount of time it takes for the execution venue to process and match the order.
Average latency is the mean time for a message to be passed from one point to another – the lower the better. Times under 1 millisecond are typical for a market data system.
Co-location is the act of locating high frequency trading firms' and proprietary traders' computers in the same premises where an exchange's computer servers are located. This gives traders access to stock prices slightly before other investors. Many exchanges have turned co-location into a significant moneymaker by charging trading firms for "low latency access" privileges. Increasing demand for co-location has led many stock exchanges to expand their data centers. [3]
There are many use cases where the predictability of latency in message delivery is just as important as, if not more important than, achieving a low average latency. This latency predictability is also referred to as "low latency jitter", and describes the deviation of latencies around the mean latency measurement.
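The distinction between average latency and jitter can be sketched with some hypothetical one-way latency samples (the values below are illustrative only):

```python
# Sketch: average latency vs. jitter for hypothetical latency samples (us).
# A single slow outlier barely moves the mean but dominates the jitter.
import statistics

samples_us = [850, 845, 900, 2100, 860, 855, 870, 848]

mean_us = statistics.mean(samples_us)      # average latency
jitter_us = statistics.pstdev(samples_us)  # deviation around the mean
worst_us = max(samples_us)                 # tail behaviour matters too

print(f"mean={mean_us:.0f}us jitter={jitter_us:.0f}us worst={worst_us}us")
```

A system with a low mean but a heavy tail may be worse, for some strategies, than one with a slightly higher but predictable latency.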
Throughput can be defined as the amount of data processed per unit of time. Throughput refers to the number of messages being received, sent and processed by the system and is usually measured in updates per second. Throughput correlates with latency measurements: typically, as the message rate increases, so do the latency figures. To give an indication of the volumes involved, the Options Price Reporting Authority (OPRA) predicted peak message rates of 907,000 updates per second (ups) on its network by July 2008. [4] This is just a single venue – most firms will be taking updates from several venues.
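The correlation between message rate and latency can be illustrated with a toy model (not from the source): a single consumer with a fixed service time starts queueing as arrivals approach its capacity.

```python
# Toy illustration of why latency rises with message rate: a single
# consumer with a fixed 1 us service time queues up under load.
def mean_queueing_latency_us(arrival_interval_us: float,
                             service_time_us: float = 1.0,
                             n_messages: int = 100_000) -> float:
    """Mean time from arrival to completion for evenly spaced arrivals."""
    free_at = 0.0       # time at which the consumer next becomes free
    total_wait = 0.0
    for i in range(n_messages):
        arrival = i * arrival_interval_us
        start = max(arrival, free_at)      # wait if the consumer is busy
        free_at = start + service_time_us
        total_wait += free_at - arrival
    return total_wait / n_messages

print(mean_queueing_latency_us(2.0))   # well under capacity: just the 1 us service time
print(mean_queueing_latency_us(0.9))   # over capacity: the backlog grows without bound
```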
Clock accuracy is paramount when testing the latency between systems; any discrepancy will give inaccurate results. Many tests involve locating the publishing node and the receiving node on the same machine to ensure the same clock time is being used. This isn't always possible, however, so clocks on different machines need to be kept in sync using a time protocol such as the Network Time Protocol (NTP) or the Precision Time Protocol (PTP).
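A common way to sidestep clock skew entirely is to measure the round trip on a single clock and halve it, which is valid only if the path is roughly symmetric. The sketch below assumes a hypothetical `echo` callable standing in for a real remote echo endpoint:

```python
# Sketch: estimate one-way latency as RTT/2 on a single clock, avoiding
# any need to synchronize clocks between machines. "echo" is a
# hypothetical stand-in for a real send-and-wait-for-echo call.
import time

def measure_one_way_us(echo, payload: bytes, trials: int = 100) -> float:
    best_rtt_ns = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter_ns()
        echo(payload)                          # send and wait for the echo
        rtt_ns = time.perf_counter_ns() - t0
        best_rtt_ns = min(best_rtt_ns, rtt_ns)  # best run is least noisy
    return best_rtt_ns / 2 / 1000               # ns -> us, one way

# Example with a local no-op "echo" (measures only call overhead):
print(measure_one_way_us(lambda b: b, b"ping"))
```

Taking the minimum over many trials filters out scheduling noise; real deployments typically report percentiles as well.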
Reducing latency in the order chain involves attacking the problem from many angles. Amdahl's Law, commonly used to calculate performance gains of throwing more CPUs at a problem, can be applied more generally to improving latency – that is, improving a portion of a system which is already fairly inconsequential (with respect to latency) will result in minimal improvement in the overall performance. Another strategy for reducing latency involves pushing the decision making on trades to a Network Interface Card. This can alleviate the need to involve the system's main processor, which can create undesirable delays in response time. Known as network-side processing, because the processing involved takes place as close to the network interface as possible, this practice is a design factor for "ultra-low latency systems." [5]
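The Amdahl's Law point above can be made concrete with a small sketch; the component names and microsecond figures are illustrative, not measurements from any real system:

```python
# Sketch of Amdahl's-Law-style reasoning applied to a latency budget:
# speeding up a component that contributes little buys almost nothing.
def total_latency_us(components: dict, speedups=None) -> float:
    """Sum per-component latencies, dividing each by any speedup applied."""
    speedups = speedups or {}
    return sum(lat / speedups.get(name, 1.0)
               for name, lat in components.items())

# Hypothetical latency budget for one order path, in microseconds.
path_us = {"network": 500.0, "feed_handler": 20.0, "strategy": 30.0}

print(total_latency_us(path_us))                          # 550.0 baseline
print(total_latency_us(path_us, {"feed_handler": 10.0}))  # 10x a small part: 532.0
print(total_latency_us(path_us, {"network": 2.0}))        # 2x the big part: 300.0
```

A tenfold speedup of the feed handler shaves only 18 µs, while merely halving network time saves 250 µs, so effort should go where the latency actually lives.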
Latency, from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, often occurring because of network delay in online games.
Myrinet, ANSI/VITA 26-1998, is a high-speed local area networking system designed by the company Myricom to be used as an interconnect between multiple machines to form computer clusters.
Network throughput refers to the rate of message delivery over a communication channel, such as Ethernet or packet radio, in a communication network. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in data packets per second or data packets per time slot.
A microsecond is a unit of time in the International System of Units (SI) equal to one millionth of a second. Its symbol is μs, sometimes simplified to us when Unicode is not available.
In telecommunications, broadband is the wide-bandwidth data transmission that transports multiple signals at a wide range of frequencies and Internet traffic types, which enables messages to be sent simultaneously and is used in fast internet connections. The medium can be coaxial cable, optical fiber, wireless Internet (radio), twisted pair, or satellite.
Satellite Internet access or Satellite Broadband is Internet access provided through communication satellites. This technology enables users to access the Internet regardless of their geographical location. Modern consumer grade satellite Internet service is typically provided to individual users through geostationary satellites that can offer relatively high data speeds, with newer satellites using Ku band to achieve downstream data speeds up to 506 Mbit/s. In addition, new satellite internet constellations are being developed in low-earth orbit to enable low-latency internet access from space.
The RapidIO architecture is a high-performance packet-switched electrical connection technology. RapidIO supports messaging, read/write and cache coherency semantics. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect.
Network performance refers to measures of service quality of a network as seen by the customer.
A network on a chip or network-on-chip is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules.
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
In telecommunication networks, the transmission time is the amount of time from the beginning until the end of a message transmission. In the case of a digital message, it is the time from the first bit until the last bit of a message has left the transmitting node. The packet transmission time in seconds can be obtained from the packet size in bits and the bit rate in bit/s as: transmission time = packet size / bit rate.
Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of infrared or visible light through an optical fiber. The light is a form of carrier wave that is modulated to carry information. Fiber is preferred over electrical cabling when high bandwidth, long distance, or immunity to electromagnetic interference is required. This type of communication can transmit voice, video, and telemetry through local area networks or across long distances.
Latency refers to a short period of delay between when an audio signal enters a system and when it emerges. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in the transmission medium.
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. When it comes to high computer performance, one or more of the following factors might be involved:
CobraNet is a combination of software, hardware, and network protocols designed to deliver uncompressed, multi-channel, low-latency digital audio over a standard Ethernet network. Developed in the 1990s, CobraNet is widely regarded as the first commercially successful audio-over-Ethernet implementation.
A fiber-optic cable, also known as an optical-fiber cable, is an assembly similar to an electrical cable but containing one or more optical fibers that are used to carry light. The optical fiber elements are typically individually coated with plastic layers and contained in a protective tube suitable for the environment where the cable is used. Different types of cable are used for optical communication in different applications, for example long-distance telecommunication or providing a high-speed data connection between different parts of a building.
High-frequency trading (HFT) is a type of algorithmic trading in finance characterized by high speeds, high turnover rates, and high order-to-trade ratios that leverages high-frequency financial data and electronic trading tools. While there is no single definition of HFT, among its key attributes are highly sophisticated algorithms, co-location, and very short-term investment horizons in trading securities. HFT uses proprietary trading strategies carried out by computers to move in and out of positions in seconds or fractions of a second.
RTP-MIDI is a protocol to transport MIDI messages within Real-time Transport Protocol (RTP) packets over Ethernet and WiFi networks. It is completely open and free, and is compatible both with LAN and WAN application fields. Compared to MIDI 1.0, RTP-MIDI includes new features like session management, device synchronization and detection of lost packets, with automatic regeneration of lost data. RTP-MIDI is compatible with real-time applications, and supports sample-accurate synchronization for each MIDI message.
Spread Networks is a company founded by Dan Spivey and backed by James L. Barksdale that claims to offer Internet connectivity between Chicago and New York City at ultra-low latency, high bandwidth, and high reliability, using dark fiber. Its customers are primarily firms engaged in high-frequency trading, where small reductions in latency are important to the extent that they help one close trades before one's competitors.