In computing, redirection is a form of interprocess communication, and is a function common to most command-line interpreters, including the various Unix shells that can redirect standard streams to user-specified locations. The concept of redirection is quite old, dating back to the earliest operating systems (OS).[citation needed] A discussion of the design goals for redirection appears as early as the 1971 description of the input-output subsystem of the Multics OS. [1] However, prior to the introduction of the UNIX OS with its "pipes", redirection in operating systems was difficult or even impossible. [2]
In Unix-like operating systems, programs do redirection with the dup2(2) system call, or its less-flexible but higher-level stdio analogues, freopen(3) and popen(3). [3]
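Shells rely on these calls when setting up redirections for the commands they launch. As a minimal sketch of the same idea at the shell level (the log file name is only an illustrative placeholder), the exec builtin applies a redirection to the current shell process so that it affects all subsequent commands in a script:

#!/bin/sh
# Redirect this shell's standard error to a log file for the rest of the script;
# the shell carries this out with a dup2()-style operation on file descriptor 2.
exec 2> error.log

ls /nonexistent     # this error message is now written to error.log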
Redirection is usually implemented by placing certain characters between commands. Typically, the syntax of these characters is as follows, using < to redirect input and > to redirect output.

command > file1

executes command, placing the output in file1, as opposed to displaying it on the terminal, which is the usual destination for standard output. This will clobber any existing data in file1.

Using

command < file1

executes command, with file1 as the source of input, as opposed to the keyboard, which is the usual source for standard input.

command < infile > outfile

combines the two capabilities: command reads from infile and writes to outfile.
To append output to the end of the file, rather than clobbering it, the >> operator is used:

command1 >> file1
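For example, a minimal sketch of the difference between clobbering and appending (the file name log.txt is illustrative):

$ echo "first run" > log.txt     # creates or truncates log.txt
$ echo "second run" >> log.txt   # appends to the existing file
$ cat log.txt
first run
second run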
To read from a stream literal (an inline file, passed to the standard input), one can use a here document, using the << operator:

$ tr a-z A-Z << END_TEXT
> one two three
> uno dos tres
> END_TEXT
ONE TWO THREE
UNO DOS TRES
To read from a string, one can use a here string, using the <<< operator:

tr a-z A-Z <<< "one two three"

or:

$ NUMBERS="one two three"
$ tr a-z A-Z <<< "$NUMBERS"
ONE TWO THREE
Programs can be run together such that one program reads the output from another with no need for an explicit intermediate file.

command1 | command2

executes command1, using its output as the input for command2 (commonly called piping, with the "|" character being known as the "pipe").
The two programs performing the commands may run in parallel, with the only storage space being working buffers (Linux allows up to 64K for each buffer) plus whatever work space each command's processing requires. For example, a "sort" command is unable to produce any output until all input records have been read, as the very last record received just might turn out to be first in sorted order. Dr. Alexia Massalin's experimental operating system, Synthesis, would adjust the priority of each task as it ran, according to the fullness of its input and output buffers. [4]
This produces the same end result as using two redirects and a temporary file, as in:
$ command1 > tempfile
$ command2 < tempfile
$ rm tempfile
But here, command2 does not start executing until command1 has finished, and a sufficiently large scratch file is required to hold the intermediate results, as well as whatever work space each task requires. As an example, although DOS allows the "pipe" syntax, it employs this second approach. Thus, suppose some long-running program "Worker" produces various messages as it works, and that a second program, TimeStamp, copies each record from stdin to stdout, prefixed by the system's date and time when the record is received. A sequence such as

Worker | TimeStamp > LogFile.txt

would produce timestamps only when Worker had finished, merely showing how swiftly its output file could be read and written.
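One simple way to observe that the stages of a Unix pipeline really do run concurrently is to pipe a program that never terminates on its own into one that reads only a fixed number of lines; the sketch below assumes the standard yes and head utilities:

$ yes | head -n 3
y
y
y

Here head exits after reading three lines, and yes is terminated when it next tries to write into the closed pipe, so the pipeline finishes immediately even though yes by itself would run forever.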
A good example of command piping is combining echo with another command to achieve something interactive in a non-interactive shell, e.g.

echo -e 'user\npass' | ftp localhost

This runs the ftp client with input user, press return, then pass.
In casual use, the initial step of a pipeline is often cat or echo, reading from a file or string. This can often be replaced by input redirection or a here string, and use of cat and piping rather than input redirection is known as useless use of cat. For example, the following commands:

$ cat infile | command
$ echo $string | command
$ echo -e 'user\npass' | ftp localhost

can be replaced by:

$ command < infile
$ command <<< $string
$ ftp localhost <<< $'user\npass'
As echo is often a shell-internal (builtin) command, its use is not criticized as much as that of cat, which is an external command.
In Unix shells derived from the original Bourne shell, the first two actions can be further modified by placing a number (the file descriptor) immediately before the character; this will affect which stream is used for the redirection. [5] The Unix standard I/O streams are: [6]
Handle  Name    Description
0       stdin   Standard input
1       stdout  Standard output
2       stderr  Standard error
For example,

command 2> file1

executes command, directing the standard error stream to file1.
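The two streams can also be sent to different files in a single command, keeping normal output and error messages separate (the file names are illustrative):

command > output.log 2> error.log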
In shells derived from csh (the C shell), the syntax instead appends the & (ampersand) character to the redirect characters, thus achieving a similar result. The reason for this is to distinguish between a file named '1' and stdout, i.e. cat file 2>1 vs cat file 2>&1. In the first case, stderr is redirected to a file named '1', and in the second, stderr is redirected to stdout.
Another useful capability is to redirect one standard file handle to another. The most popular variation is to merge standard error into standard output so error messages can be processed together with (or alternately to) the usual output. For example,

find / -name .profile > results 2>&1

will try to find all files named .profile. Executed without redirection, it will output hits to stdout and errors (e.g. for lack of privilege to traverse protected directories) to stderr. If standard output is directed to file results, error messages appear on the console. To see both hits and error messages in file results, merge stderr (handle 2) into stdout (handle 1) using 2>&1.
If the merged output is to be piped into another program, the file merge sequence 2>&1 must precede the pipe symbol, thus:

find / -name .profile 2>&1 | less
A simplified but non-POSIX-conforming shorthand for

command > file 2>&1

is

command &> file

or

command >& file

This shorthand is not available in the Bourne shell prior to version 4 (final release), nor in the Debian Almquist shell used as the standard shell in Debian/Ubuntu.
It is possible to use 2>&1 before > but the result is commonly misunderstood. The rule is that any redirection sets the handle to the output stream independently. So 2>&1 sets handle 2 to whatever handle 1 points to, which at that point usually is stdout. Then > redirects handle 1 to something else, e.g. a file, but it does not change handle 2, which still points to stdout.
In the following example, standard output is written to file, but errors are redirected from stderr to stdout, i.e. sent to the screen:

command 2>&1 > file
To write both errors and standard output to file, the order should be reversed. Standard output would first be redirected to the file, then stderr would additionally be redirected to the stdout handle that has already been changed to point at the file:

command > file 2>&1
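As a concrete illustration of the difference, using ls on one path that exists and one that does not (the paths and the exact wording of the error message are illustrative and vary between systems):

$ ls /etc/hostname /nonexistent 2>&1 > out.txt
ls: cannot access '/nonexistent': No such file or directory
$ cat out.txt
/etc/hostname

$ ls /etc/hostname /nonexistent > out.txt 2>&1
$ cat out.txt
ls: cannot access '/nonexistent': No such file or directory
/etc/hostname

In the first form the error message still appears on the terminal; in the second form both the listing and the error end up in out.txt (the relative order of the two lines in the file can vary with buffering).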
The redirection and piping tokens can be chained together to create complex commands. For example,

sort infile | uniq -c | sort -n > outfile

sorts the lines of infile in lexicographical order, writes unique lines prefixed by the number of occurrences, sorts the resultant output numerically, and places the final output in outfile. [7] This type of construction is used very commonly in shell scripts and batch files.
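As an illustrative run with a small, hypothetical infile containing duplicated lines:

$ cat infile
banana
apple
apple
$ sort infile | uniq -c | sort -n
      1 banana
      2 apple

The duplicated line apple sorts together, uniq -c collapses it into a single counted line, and the final numeric sort orders the lines by how often they occurred.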
The standard command tee can redirect output from a command to several destinations:

ls -lrt | tee xyz

This directs the file list output to both standard output and the file xyz.
Bash is a Unix shell and command language written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. The shell's name is an acronym for Bourne-Again SHell, a pun on the name of the Bourne shell that it replaces and the notion of being "born again". First released in 1989, it has been used as the default login shell for most Linux distributions and it was one of the first programs Linus Torvalds ported to Linux, alongside GCC. It is available on nearly all modern operating systems.
The Bourne shell (sh) is a shell command-line interpreter for computer operating systems.
The C shell is a Unix shell created by Bill Joy while he was a graduate student at University of California, Berkeley in the late 1970s. It has been widely distributed, beginning with the 2BSD release of the Berkeley Software Distribution (BSD) which Joy first distributed in 1978. Other early contributors to the ideas or the code were Michael Ubell, Eric Allman, Mike O'Brien and Jim Kulp.
rc is the command line interpreter for Version 10 Unix and Plan 9 from Bell Labs operating systems. It resembles the Bourne shell, but its syntax is somewhat simpler. It was created by Tom Duff, who is better known for an unusual C programming language construct.
In computer programming, standard streams are preconnected input and output communication channels between a computer program and its environment when it begins execution. The three input/output (I/O) connections are called standard input (stdin), standard output (stdout) and standard error (stderr). Originally I/O happened via a physically connected system console, but standard streams abstract this. When a command is executed via an interactive shell, the streams are typically connected to the text terminal on which the shell is running, but can be changed with redirection or a pipeline. More generally, a child process inherits the standard streams of its parent process.
dd is a command-line utility for Unix, Plan 9, Inferno, and Unix-like operating systems and beyond, the primary purpose of which is to convert and copy files. On Unix, device drivers for hardware and special device files appear in the file system just like normal files; dd can also read and/or write from/to these files, provided that function is implemented in their respective driver. As a result, dd can be used for tasks such as backing up the boot sector of a hard drive, and obtaining a fixed amount of random data. The dd program can also perform conversions on the data as it is copied, including byte order swapping and conversion to and from the ASCII and EBCDIC text encodings.
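For example, a commonly cited sketch of backing up the first sector of a disk; the device name /dev/sda is an assumption that differs between systems, and the command must be run with sufficient privileges:

# copy one 512-byte sector (e.g. an MBR boot sector) to a file
$ dd if=/dev/sda of=mbr-backup.img bs=512 count=1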
In Unix and Unix-like computer operating systems, a file descriptor is a process-unique identifier (handle) for a file or other input/output resource, such as a pipe or network socket.
In Unix and Unix-like operating systems, iconv is a command-line program and a standardized application programming interface (API) used to convert between different character encodings. "It can convert from any of these encodings to any other, through Unicode conversion."
The Thompson shell was the first Unix shell, introduced in the first version of Unix in 1971, and was written by Ken Thompson. It was a simple command interpreter, not designed for scripting, but nonetheless introduced several innovative features to the command-line interface and led to the development of the later Unix shells.
Expect is an extension to the Tcl scripting language written by Don Libes. The program automates interactions with programs that expose a text terminal interface. Expect, originally written in 1990 for the Unix platform, has since become available for Microsoft Windows and other systems.
In Unix-like computer operating systems, a pipeline is a mechanism for inter-process communication using message passing. A pipeline is a set of processes chained together by their standard streams, so that the output text of each process (stdout) is passed directly as input (stdin) to the next one. The second process is started as the first process is still executing, and they are executed concurrently. The concept of pipelines was championed by Douglas McIlroy at Unix's ancestral home of Bell Labs, during the development of Unix, shaping its toolbox philosophy. It is named by analogy to a physical pipeline. A key feature of these pipelines is their "hiding of internals". This in turn allows for more clarity and simplicity in the system.
In Unix-like operating systems, find is a command-line utility that locates files based on some user-specified criteria and either prints the pathname of each matched object or, if another action is requested, performs that action on each matched object.
In computing, tee is a command in command-line interpreters (shells) using standard streams which reads standard input and writes it to both standard output and one or more files, effectively duplicating its input. It is primarily used in conjunction with pipes and filters. The command is named after the T-splitter used in plumbing.
test is a command-line utility found in Unix, Plan 9, and Unix-like operating systems that evaluates conditional expressions. test was turned into a shell builtin command in 1981 with UNIX System III and at the same time made available under the alternate name [.
In computing, sort is a standard command line program of Unix and Unix-like operating systems that prints the lines of its input, or the concatenation of all files listed in its argument list, in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as the sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance, the "-r" flag will reverse the sort order.
In computing, exec is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable. This act is also referred to as an overlay. It is especially important in Unix-like systems, although it also exists elsewhere. As no new process is created, the process identifier (PID) does not change, but the machine code, data, heap, and stack of the process are replaced by those of the new program.
Toybox is a free and open-source software implementation of over 200 Unix command line utilities such as ls, cp, and mv. The Toybox project was started in 2006, and became a 0BSD licensed BusyBox alternative. Toybox is used for most of Android's command-line tools in all currently supported Android versions, and is also used to build Android on Linux and macOS. All of the tools are tested on Linux, and many of them also work on BSD and macOS.
The script command is a Unix utility that records a terminal session. It dates back to the 1979 3.0 Berkeley Software Distribution (BSD).
In computing, process substitution is a form of inter-process communication that allows the input or output of a command to appear as a file. The command is substituted in-line, where a file name would normally occur, by the command shell. This allows programs that normally only accept files to directly read from or write to another program.
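A minimal sketch in bash, comparing the contents of two directories without creating temporary files (the directory names are illustrative):

$ diff <(ls dir1) <(ls dir2)

Each <(...) construct is replaced by the name of a file-like object (such as /dev/fd/63) connected to the output of the enclosed command, so diff, which expects file name arguments, can read from both pipelines directly.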
cat is a standard Unix utility that reads files sequentially, writing them to standard output. The name is derived from its function to (con)catenate files. It has been ported to a number of operating systems.