This manual documents version 8.23 of the GNU core utilities, including the standard programs for text and file manipulation.
Copyright © 1994-2014 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
--- The Detailed Node Listing ---
Common Options
Output of entire files
Formatting file contents
Output of parts of files
Summarizing files
Operating on sorted files
ptx: Produce permuted indexes
Operating on fields
Operating on characters
tr: Translate, squeeze, and/or delete characters
Directory listing
ls: List directory contents
Basic operations
Special file types
Changing file attributes
Disk usage
Printing text
Conditions
test: Check file types and compare values
expr: Evaluate expression
Redirection
File name manipulation
Working context
stty: Print or change terminal characteristics
User information
System context
date: Print or set system date and time
SELinux context
Modified command invocation
Process control
Delaying
Numeric operations
File permissions
Date input formats
Opening the software toolbox
Copying This Manual
This manual is a work in progress: many sections make no attempt to explain basic concepts in a way suitable for novices. Thus, if you are interested, please get involved in improving this manual. The entire GNU community will benefit.
The GNU utilities documented here are mostly compatible with the POSIX standard. Please report bugs to bug-coreutils@gnu.org. Remember to include the version number, machine architecture, input files, and any other information needed to reproduce the bug: your input, what you expected, what you got, and why it is wrong. Diffs are welcome, but please include a description of the problem as well, since this is sometimes difficult to infer. See Bugs.
This manual was originally derived from the Unix man pages in the distributions, which were written by David MacKenzie and updated by Jim Meyering. What you are reading now is the authoritative documentation for these utilities; the man pages are no longer being maintained. The original fmt man page was written by Ross Paterson. François Pinard did the initial conversion to Texinfo format. Karl Berry did the indexing, some reorganization, and editing of the results. Brian Youmans of the Free Software Foundation office staff combined the manuals for textutils, fileutils, and sh-utils to produce the present omnibus manual. Richard Stallman contributed his usual invaluable insights to the overall process.
Certain options are available in all of these programs. Rather than writing identical descriptions for each program, the options are described here. (In fact, every GNU program accepts (or should accept) these options.)
Normally options and operands can appear in any order, and programs act as if all the options appear before any operands. For example, ‘sort -r passwd -t :’ acts like ‘sort -r -t : passwd’, since ‘:’ is an option-argument of -t. However, if the POSIXLY_CORRECT environment variable is set, options must appear before operands, unless otherwise specified for a particular command.
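The permutation can be observed directly. A minimal sketch, assuming GNU sort is installed (the POSIXLY_CORRECT variant is shown commented out, since it fails by design):

```shell
# GNU 'sort' permutes arguments, so '-r' still acts as an option even
# though it follows the '-' operand; output is reverse-sorted:
printf 'a\nb\n' | sort - -r
# prints:
# b
# a

# With POSIXLY_CORRECT set, option parsing stops at the first operand,
# so '-r' would instead be opened as a file name (and fail):
#   printf 'a\nb\n' | POSIXLY_CORRECT=1 sort - -r
```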
A few programs can usefully have trailing operands with leading ‘-’. With such a program, options must precede operands even if POSIXLY_CORRECT is not set, and this fact is noted in the program description. For example, the env command's options must appear before its operands, since in some cases the operands specify a command that itself contains options.
Most programs that accept long options recognize unambiguous abbreviations of those options. For example, ‘rmdir --ignore-fail-on-non-empty’ can be invoked as ‘rmdir --ignore-fail’ or even ‘rmdir --i’. Ambiguous options, such as ‘ls --h’, are identified as such.
Some of these programs recognize the --help and --version options only when one of them is the sole command line argument. For these programs, abbreviations of the long options are not always recognized.
A single ‘-’ operand is not really an option, though it looks like one. It stands for standard input, or for standard output if that is clear from the context. For example, ‘sort -’ reads from standard input, and is equivalent to plain ‘sort’, and ‘tee -’ writes an extra copy of its input to standard output. Unless otherwise specified, ‘-’ can appear as any operand that requires a file name.
Nearly every command invocation yields an integral exit status that can be used to change how other commands work. For the vast majority of commands, an exit status of zero indicates success. Failure is indicated by a nonzero value—typically ‘1’, though it may differ on unusual platforms as POSIX requires only that it be nonzero.
However, some of the programs documented here do produce other exit status values and a few associate different meanings with the values ‘0’ and ‘1’. Here are some of the exceptions: chroot, env, expr, nice, nohup, numfmt, printenv, sort, stdbuf, test, timeout, tty.
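The convention is easy to check from a script. A sketch, assuming GNU cat and timeout (124 is the value GNU timeout documents for an expired time limit):

```shell
cat /dev/null && echo "status 0: success"
cat /nonexistent 2>/dev/null || echo "nonzero status: $?"

# GNU 'timeout' reserves the distinct value 124 to mean the time
# limit expired before the command finished:
timeout 0.5 sleep 5 || echo "timeout exited with $?"
```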
Some GNU programs (at least cp, install, ln, and mv) optionally make backups of files before writing new versions. These options control the details of these backups. The options are also briefly mentioned in the descriptions of the particular programs.
Note that the short form of this option, -b, does not accept any argument. Using -b is equivalent to using --backup=existing.
This option corresponds to the Emacs variable ‘version-control’; the values for method are the same as those used in Emacs. This option also accepts more descriptive names. The valid methods are (unique abbreviations are accepted):
Some GNU programs (at least df, du, and ls) display sizes in “blocks”. You can adjust the block size and method of display to make sizes easier to read. The block size used for display is independent of any file system block size. Fractional block counts are rounded up to the nearest integer.
The default block size is chosen by examining the following environment variables in turn; the first one that is set determines the block size.
DF_BLOCK_SIZE
This specifies the default block size for the df command. Similarly, DU_BLOCK_SIZE specifies the default for du and LS_BLOCK_SIZE for ls.
BLOCK_SIZE
This specifies the default block size for all three commands, if the above command-specific environment variables are not set.
BLOCKSIZE
This specifies the default block size for all values that are normally printed as blocks, if neither BLOCK_SIZE nor the above command-specific environment variables are set. Unlike the other environment variables, BLOCKSIZE does not affect values that are normally printed as byte counts, e.g., the file sizes contained in ls -l output.
POSIXLY_CORRECT
If neither command_BLOCK_SIZE, nor BLOCK_SIZE, nor BLOCKSIZE is set, but this variable is set, the block size defaults to 512 bytes.
If none of the above environment variables are set, the block size currently defaults to 1024 bytes in most contexts, but this number may change in the future. For ls file sizes, the block size defaults to 1 byte.
A block size specification can be a positive integer specifying the number of bytes per block, or it can be human-readable or si to select a human-readable format. Integers may be followed by suffixes that are upward compatible with the SI prefixes for decimal multiples and with the ISO/IEC 80000-13 (formerly IEC 60027-2) prefixes for binary multiples.
With human-readable formats, output sizes are followed by a size letter such as ‘M’ for megabytes. BLOCK_SIZE=human-readable uses powers of 1024; ‘M’ stands for 1,048,576 bytes. BLOCK_SIZE=si is similar, but uses powers of 1000 and appends ‘B’; ‘MB’ stands for 1,000,000 bytes.
A block size specification preceded by ‘'’ causes output sizes to be displayed with thousands separators. The LC_NUMERIC locale specifies the thousands separator and grouping. For example, in an American English locale, ‘--block-size="'1kB"’ would cause a size of 1234000 bytes to be displayed as ‘1,234’. In the default C locale, there is no thousands separator so a leading ‘'’ has no effect.
An integer block size can be followed by a suffix to specify a multiple of that size. A bare size letter, or one followed by ‘iB’, specifies a multiple using powers of 1024. A size letter followed by ‘B’ specifies powers of 1000 instead. For example, ‘1M’ and ‘1MiB’ are equivalent to ‘1048576’, whereas ‘1MB’ is equivalent to ‘1000000’.
A plain suffix without a preceding integer acts as if ‘1’ were prepended, except that it causes a size indication to be appended to the output. For example, ‘--block-size="kB"’ displays 3000 as ‘3kB’.
The following suffixes are defined. Large sizes like 1Y may be rejected by your computer due to limitations of its arithmetic.
Block size defaults can be overridden by an explicit --block-size=size option. The -k option is equivalent to --block-size=1K, which is the default unless the POSIXLY_CORRECT environment variable is set. The -h or --human-readable option is equivalent to --block-size=human-readable. The --si option is equivalent to --block-size=si. Note that for ls, the -k option does not control the display of the apparent file sizes, whereas the --block-size option does.
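The interaction of these options can be sketched with du; the numbers shown depend on your files, so no outputs are given (the scratch directory is hypothetical):

```shell
dir=$(mktemp -d)                  # hypothetical scratch directory
dd if=/dev/zero of="$dir/f" bs=1024 count=3000 2>/dev/null   # ~3 MiB file

du -s --block-size=1M "$dir"      # counts of 1,048,576-byte blocks
du -s --block-size=MB "$dir"      # powers of 1000, sizes suffixed 'MB'
du -sh "$dir"                     # -h = --block-size=human-readable
du -s --si "$dir"                 # like -h, but powers of 1000

rm -rf "$dir"
```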
Commands that accept or produce floating point numbers employ the floating point representation of the underlying system, and suffer from rounding error, overflow, and similar floating-point issues. Almost all modern systems use IEEE-754 floating point, and it is typically portable to assume IEEE-754 behavior these days. IEEE-754 has positive and negative infinity, distinguishes positive from negative zero, and uses special values called NaNs to represent invalid computations such as dividing zero by itself. For more information, please see David Goldberg's paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Commands that accept floating point numbers as options, operands or input use the standard C functions strtod and strtold to convert from text to floating point numbers. These floating point numbers therefore can use scientific notation like 1.0e-34 and -10e100. Modern C implementations also accept hexadecimal floating point numbers such as -0x.ep-3, which stands for −14/16 times 2^-3, which equals −0.109375. The LC_NUMERIC locale determines the decimal-point character. See Parsing of Floats.
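For example, because ‘sort -g’ converts keys with strtod, scientific notation compares numerically rather than lexically; a sketch assuming GNU sort:

```shell
# -10e100 is the most negative, then the tiny positive 1.0e-34:
printf '1.0e-34\n-10e100\n0.5\n' | sort -g
# prints:
# -10e100
# 1.0e-34
# 0.5
```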
A signal may be a signal name like ‘HUP’, or a signal number like ‘1’, or an exit status of a process terminated by the signal. A signal name can be given in canonical form or prefixed by ‘SIG’. The case of the letters is ignored. The following signal names and numbers are supported on all POSIX compliant systems:
Other supported signal names have system-dependent corresponding numbers. All systems conforming to POSIX 1003.1-2001 also support the following signals:
POSIX 1003.1-2001 systems that support the XSI extension also support the following signals:
POSIX 1003.1-2001 systems that support the XRT extension also support at least eight real-time signals called ‘RTMIN’, ‘RTMIN+1’, ..., ‘RTMAX-1’, ‘RTMAX’.
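A sketch of the name/number correspondence, using the shell's built-in kill (whose table matches the POSIX list above) and the 128+signal exit-status convention:

```shell
kill -l 1        # prints: HUP
kill -l 15       # prints: TERM

# A process terminated by a signal exits with status 128 + the signal
# number; 'kill -l' also accepts such a status:
sleep 30 & kill -s TERM $!
wait $! || kill -l $?    # status 143 maps back to: TERM
```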
Since the user and group arguments to these commands may be specified as names or numeric IDs, there is an apparent ambiguity. What if a user or group name is a string of digits? Should the command interpret it as a user name or as an ID? POSIX requires that these commands first attempt to resolve the specified string as a name, and only once that fails, then try to interpret it as an ID. This is troublesome when you want to specify a numeric ID, say 42, and it must work even in a pathological situation where ‘42’ is a user name that maps to some other user ID, say 1000. Simply invoking ‘chown 42 F’ will set F's owner ID to 1000, which is not what you intended.
GNU chown, chgrp, chroot, and id provide a way to work around this, that at the same time may result in a significant performance improvement by eliminating a database look-up. Simply precede each numeric user ID and/or group ID with a ‘+’, in order to force its interpretation as an integer:
chown +42 F
chgrp +$numeric_group_id another-file
chown +0:+0 /
The name look-up process is skipped for each ‘+’-prefixed string, because a string containing ‘+’ is never a valid user or group name. This syntax is accepted on most common Unix systems, but not on Solaris 10.
The shuf, shred, and sort commands sometimes need random data to do their work. For example, ‘sort -R’ must choose a hash function at random, and it needs random data to make this selection.
By default these commands use an internal pseudorandom generator initialized by a small amount of entropy, but can be directed to use an external source with the --random-source=file option. An error is reported if file does not contain enough bytes.
For example, the device file /dev/urandom could be used as the source of random data. Typically, this device gathers environmental noise from device drivers and other sources into an entropy pool, and uses the pool to generate random bits. If the pool is short of data, the device reuses the internal pool to produce more bits, using a cryptographically secure pseudorandom number generator. But be aware that this device is not designed for bulk random data generation and is relatively slow.
/dev/urandom suffices for most practical uses, but applications requiring high-value or long-term protection of private data may require an alternate data source like /dev/random or /dev/arandom. The set of available sources depends on your operating system.
To reproduce the results of an earlier invocation of a command, you can save some random data into a file and then use that file as the random source in earlier and later invocations of the command.
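A sketch of that technique with shuf (the seed file name is hypothetical): the same saved bytes yield the same permutation on every run.

```shell
# Capture some random bytes once...
head -c 1024 /dev/urandom > saved.seed     # hypothetical file name

# ...then reuse them, so both invocations produce identical output:
shuf --random-source=saved.seed -i 1-10 > first.out
shuf --random-source=saved.seed -i 1-10 > second.out
cmp first.out second.out && echo "identical"

rm -f saved.seed first.out second.out
```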
The cp, install, ln, and mv commands normally treat the last operand specially when it is a directory or a symbolic link to a directory. For example, ‘cp source dest’ is equivalent to ‘cp source dest/source’ if dest is a directory. Sometimes this behavior is not exactly what is wanted, so these commands support the following options to allow more fine-grained control:
In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
The interface for most programs is that after processing options and a finite (possibly zero) number of fixed-position arguments, the remaining argument list is either expected to be empty, or is a list of items (usually files) that will all be handled identically. The xargs program is designed to work well with this convention.
The commands in the mv-family are unusual in that they take a variable number of arguments with a special case at the end (namely, the target directory). This makes it nontrivial to perform some operations, e.g., “move all files from here to ../d/”, because ‘mv * ../d/’ might exhaust the argument space, and ‘ls | xargs ...’ doesn't have a clean way to specify an extra final argument for each invocation of the subject command. (It can be done by going through a shell command, but that requires more human labor and brain power than it should.)
The --target-directory (-t) option allows the cp, install, ln, and mv programs to be used conveniently with xargs. For example, you can move the files from the current directory to a sibling directory, d, like this:
ls | xargs mv -t ../d --
However, this doesn't move files whose names begin with ‘.’. If you use the GNU find program, you can move those files too, with this command:
find . -mindepth 1 -maxdepth 1 \
  | xargs mv -t ../d
But both of the above approaches fail if there are no files in the current directory, or if any file has a name containing a blank or some other special characters. The following example removes those limitations and requires both GNU find and GNU xargs:
find . -mindepth 1 -maxdepth 1 -print0 \
  | xargs --null --no-run-if-empty \
      mv -t ../d
The --target-directory (-t) and --no-target-directory (-T) options cannot be combined.
Some GNU programs (at least cp and mv) allow you to remove any trailing slashes from each source argument before operating on it. The --strip-trailing-slashes option enables this behavior.
This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv, for example, (via the system's rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.
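The difference is easy to demonstrate in a scratch directory (names here are hypothetical), assuming GNU mv:

```shell
tmp=$(mktemp -d) && cd "$tmp"   # hypothetical scratch area
mkdir realdir
ln -s realdir link

# The trailing slash (as left by shell completion) is stripped, so the
# *symlink* is renamed; 'realdir' itself is untouched:
mv --strip-trailing-slashes link/ renamed-link
ls -l renamed-link

cd / && rm -rf "$tmp"
```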
The following options modify how chown and chgrp traverse a hierarchy when the --recursive (-R) option is also specified. If more than one of the following options is specified, only the final one takes effect. These options specify whether processing a symbolic link to a directory entails operating on just the symbolic link or on all files in the hierarchy rooted at that directory.
These options are independent of --dereference and --no-dereference (-h), which control whether to modify a symlink or its referent.
Certain commands can operate destructively on entire hierarchies. For example, if a user with appropriate privileges mistakenly runs ‘rm -rf / tmp/junk’, that may remove all files on the entire system. Since there are so few legitimate uses for such a command, GNU rm normally declines to operate on any directory that resolves to /. If you really want to try to remove all the files on your system, you can use the --no-preserve-root option, but the default behavior, specified by the --preserve-root option, is safer for most purposes.
The commands chgrp, chmod and chown can also operate destructively on entire hierarchies, so they too support these options. Although, unlike rm, they don't actually unlink files, these commands are arguably more dangerous when operating recursively on /, since they often work much more quickly, and hence damage more files before an alert user can interrupt them. Tradition and POSIX require these commands to operate recursively on /, so they default to --no-preserve-root, but using the --preserve-root option makes them safer for most purposes. For convenience you can specify --preserve-root in an alias or in a shell function.
Note that the --preserve-root option also ensures that chgrp and chown do not modify / even when dereferencing a symlink pointing to /.
Some programs like nice can invoke other programs; for example, the command ‘nice cat file’ invokes the program cat by executing the command ‘cat file’. However, special built-in utilities like exit cannot be invoked this way. For example, the command ‘nice exit’ does not have a well-defined behavior: it may generate an error message instead of exiting.
Here is a list of the special built-in utilities that are standardized by POSIX 1003.1-2004.
. : break continue eval exec exit export readonly return set shift times trap unset
For example, because ‘.’, ‘:’, and ‘exec’ are special, the commands ‘nice . foo.sh’, ‘nice :’, and ‘nice exec pwd’ do not work as you might expect.
Many shells extend this list. For example, Bash has several extra special built-in utilities like history, and suspend, and with Bash the command ‘nice suspend’ generates an error message instead of suspending.
In a few cases, the GNU utilities' default behavior is incompatible with the POSIX standard. To suppress these incompatibilities, define the POSIXLY_CORRECT environment variable. Unless you are checking for POSIX conformance, you probably do not need to define POSIXLY_CORRECT.
Newer versions of POSIX are occasionally incompatible with older versions. For example, older versions of POSIX required the command ‘sort +1’ to sort based on the second and succeeding fields in each input line, but starting with POSIX 1003.1-2001 the same command is required to sort the file named +1, and you must instead use the command ‘sort -k 2’ to get the field-based sort.
The GNU utilities normally conform to the version of POSIX that is standard for your system. To cause them to conform to a different version of POSIX, define the _POSIX2_VERSION environment variable to a value of the form yyyymm specifying the year and month the standard was adopted. Three values are currently supported for _POSIX2_VERSION: ‘199209’ stands for POSIX 1003.2-1992, ‘200112’ stands for POSIX 1003.1-2001, and ‘200809’ stands for POSIX 1003.1-2008. For example, if you have a newer system but are running software that assumes an older version of POSIX and uses ‘sort +1’ or ‘tail +10’, you can work around any compatibility problems by setting ‘_POSIX2_VERSION=199209’ in your environment.
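A sketch of the switch in action, assuming a GNU sort recent enough to still honor _POSIX2_VERSION for the obsolete ‘+POS’ key syntax:

```shell
# Obsolete syntax, accepted when _POSIX2_VERSION selects the old standard;
# both commands sort on the second field:
printf 'b 1\na 2\n' | _POSIX2_VERSION=199209 sort +1
printf 'b 1\na 2\n' | sort -k 2
# both print:
# b 1
# a 2
```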
coreutils invokes an individual utility, either implicitly selected by the last component of ‘argv[0]’, or by explicitly calling coreutils with the --coreutils-prog option. Synopsis:
coreutils --coreutils-prog=PROGRAM ...
These commands read and write entire files, possibly transforming them in some way.
cat copies each file (‘-’ means standard input), or standard input if none are given, to standard output. Synopsis:
cat [option] [file]...
The program accepts the following options. Also see Common options.
On systems like MS-DOS that distinguish between text and binary files, cat normally reads and writes in binary mode. However, cat reads in text mode if one of the options -bensAE is used or if cat is reading from standard input and standard input is a terminal. Similarly, cat writes in text mode if one of the options -bensAE is used or if standard output is a terminal.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Output f's contents, then standard input, then g's contents.
cat f - g

# Copy standard input to standard output.
cat
tac copies each file (‘-’ means standard input), or standard input if none are given, to standard output, reversing the records (lines by default) in each separately. Synopsis:
tac [option]... [file]...
Records are separated by instances of a string (newline by default). By default, this separator string is attached to the end of the record that it follows in the file.
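A sketch of both the default behavior and a custom separator, assuming GNU tac:

```shell
printf '1\n2\n3\n' | tac
# prints:
# 3
# 2
# 1

# With -s, each ':' separator stays attached to the record it follows:
printf 'a:b:c:' | tac -s :      # prints: c:b:a:
```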
The program accepts the following options. Also see Common options.
On systems like MS-DOS that distinguish between text and binary files, tac reads and writes in binary mode.
An exit status of zero indicates success, and a nonzero value indicates failure.
Example:
# Reverse a file character by character.
tac -r -s 'x\|[^x]'
nl writes each file (‘-’ means standard input), or standard input if none are given, to standard output, with line numbers added to some or all of the lines. Synopsis:
nl [option]... [file]...
nl decomposes its input into (logical) pages; by default, the line number is reset to 1 at the top of each logical page. nl treats all of the input files as a single document; it does not reset line numbers or logical pages between files.
A logical page consists of three sections: header, body, and footer. Any of the sections can be empty. Each can be numbered in a different style from the others.
The beginnings of the sections of logical pages are indicated in the input file by a line containing exactly one of these delimiter strings:
The two characters from which these strings are made can be changed from ‘\’ and ‘:’ via options (see below), but the pattern and length of each string cannot be changed.
A section delimiter is replaced by an empty line on output. Any text that comes before the first section delimiter string in the input file is considered to be part of a body section, so nl treats a file that contains no section delimiters as a single body section.
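A sketch of a body-section delimiter in use; the middle input line is exactly ‘\:\:’ (the backslashes below are doubled for printf), and it comes out as a blank line while the surrounding body lines are numbered:

```shell
printf 'one\n\\:\\:\ntwo\n' | nl
```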
The program accepts the following options. Also see Common options.
The default line numbering format is ‘rn’ (right justified, without leading zeros).
An exit status of zero indicates success, and a nonzero value indicates failure.
od writes an unambiguous representation of each file (‘-’ means standard input), or standard input if none are given. Synopses:
od [option]... [file]...
od [-abcdfilosx]... [file] [[+]offset[.][b]]
od [option]... --traditional [file] [[+]offset[.][b] [[+]label[.][b]]]
Each line of output consists of the offset in the input, followed by groups of data from the file. By default, od prints the offset in octal, and each group of file data is a C short int's worth of input printed as a single octal number.
If offset is given, it specifies how many input bytes to skip before formatting and writing. By default, it is interpreted as an octal number, but an optional trailing decimal point causes it to be interpreted as decimal. If there is no trailing decimal point and the offset begins with ‘0x’ or ‘0X’, it is interpreted as a hexadecimal number. If there is a trailing ‘b’, the number of bytes skipped will be offset multiplied by 512.
If a command is of both the first and second forms, the second form is assumed if the last operand begins with ‘+’ or (if there are two operands) a digit. For example, in ‘od foo 10’ and ‘od +10’ the ‘10’ is an offset, whereas in ‘od 10’ the ‘10’ is a file name.
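A sketch of both the modern -j option and the traditional offset operand, assuming GNU od (the scratch file name is hypothetical); both skip the first four bytes of the input:

```shell
printf 'abcdefgh' > sample.bin      # hypothetical scratch file

od -An -c -j 4 sample.bin           # modern spelling: skip 4 bytes
od -c sample.bin +4.                # traditional: decimal offset operand

rm -f sample.bin
```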
The program accepts the following options. Also see Common options.
The default is octal.
‘b’  => 512 ("blocks")
‘KB’ => 1000 (KiloBytes)
‘K’  => 1024 (KibiBytes)
‘MB’ => 1000*1000 (MegaBytes)
‘M’  => 1024*1024 (MebiBytes)
‘GB’ => 1000*1000*1000 (GigaBytes)
‘G’  => 1024*1024*1024 (GibiBytes)
and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’.
bytes are interpreted as for the -j option. If bytes is omitted with --strings, the default is 3.
Adding a trailing “z” to any type specification appends a display of the single byte character representation of the printable characters to the output line generated by the type specification.
The type ‘a’ outputs things like ‘sp’ for space, ‘nl’ for newline, and ‘nul’ for a zero byte. Only the least significant seven bits of each byte are used; the high-order bit is ignored. Type ‘c’ outputs ‘ ’, ‘\n’, and ‘\0’, respectively.
Except for types ‘a’ and ‘c’, you can specify the number of bytes to use in interpreting each number in the given data type by following the type indicator character with a decimal integer. Alternately, you can specify the size of one of the C compiler's built-in data types by following the type indicator character with one of the following characters. For integers (‘d’, ‘o’, ‘u’, ‘x’):
For floating point (‘f’):
Output n input bytes per output line. This must be a multiple of the least common multiple of the sizes associated with the specified output types. If this option is not given at all, the default is 16. If n is omitted, the default is 32.
The next several options are shorthands for format specifications. GNU od accepts any combination of shorthands and format specification options. These options accumulate.
od --traditional [file] [[+]offset[.][b] [[+]label[.][b]]]
can be used to specify at most one file and optional arguments specifying an offset and a pseudo-start address, label. The label argument is interpreted just like offset, but it specifies an initial pseudo-address. The pseudo-addresses are displayed in parentheses following any normal address.
An exit status of zero indicates success, and a nonzero value indicates failure.
base64 transforms data read from a file, or standard input, into (or from) base64 encoded form. The base64 encoded form uses printable ASCII characters to represent binary data. Synopses:
base64 [option]... [file]
base64 --decode [option]... [file]
The base64 encoding expands data to roughly 133% of the original. The format conforms to RFC 4648.
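A sketch of encoding, decoding, and a round trip, assuming GNU base64 (-w0 disables line wrapping):

```shell
printf 'hello' | base64                  # prints: aGVsbG8=
printf 'aGVsbG8=\n' | base64 --decode    # prints: hello

# A decode round-trip restores the original bytes exactly:
head -c 100 /dev/urandom | base64 -w0 | base64 --decode | wc -c   # prints: 100
```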
The program accepts the following options. Also see Common options.
The default is to wrap after 76 characters. Use the value 0 to disable line wrapping altogether.
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands reformat the contents of files.
fmt fills and joins lines to produce output lines of (at most) a given number of characters (75 by default). Synopsis:
fmt [option]... [file]...
fmt reads from the specified file arguments (or standard input if none are given), and writes to standard output.
By default, blank lines, spaces between words, and indentation are preserved in the output; successive input lines with different indentation are not joined; tabs are expanded on input and introduced on output.
fmt prefers breaking lines at the end of a sentence, and tries to avoid line breaks after the first word of a sentence or before the last word of a sentence. A sentence break is defined as either the end of a paragraph or a word ending in any of ‘.?!’, followed by two spaces or end of line, ignoring any intervening parentheses or quotes. Like TeX, fmt reads entire “paragraphs” before choosing line breaks; the algorithm is a variant of that given by Donald E. Knuth and Michael F. Plass in “Breaking Paragraphs Into Lines”, Software—Practice & Experience 11, 11 (November 1981), 1119–1184.
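A sketch of refilling ragged input to a 30-column maximum (the exact break points are chosen by fmt's optimizing algorithm, so no output is shown):

```shell
printf 'The quick brown fox jumps\nover\nthe lazy dog.\n' | fmt -w 30
```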
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
numfmt reads numbers in various representations and reformats them as requested. The most common usage is converting numbers to/from human representation (e.g. ‘4G’ ==> ‘4,000,000,000’).
numfmt [option]... [number]
numfmt converts each number on the command-line according to the specified options (see below). If no numbers are given, it reads numbers from standard input. numfmt can optionally extract numbers from specific columns, maintaining proper line padding and alignment.
An exit status of zero indicates success, and a nonzero value indicates failure.
See --invalid for additional information regarding exit status.
The program accepts the following options. Also see Common options.
The following are the possible unit options with --from=UNITS and --to=UNITS:
‘K’ => 1000^1 = 10^3 (Kilo)
‘M’ => 1000^2 = 10^6 (Mega)
‘G’ => 1000^3 = 10^9 (Giga)
‘T’ => 1000^4 = 10^12 (Tera)
‘P’ => 1000^5 = 10^15 (Peta)
‘E’ => 1000^6 = 10^18 (Exa)
‘Z’ => 1000^7 = 10^21 (Zetta)
‘Y’ => 1000^8 = 10^24 (Yotta)
‘K’ => 1024^1 = 2^10 (Kibi)
‘M’ => 1024^2 = 2^20 (Mebi)
‘G’ => 1024^3 = 2^30 (Gibi)
‘T’ => 1024^4 = 2^40 (Tebi)
‘P’ => 1024^5 = 2^50 (Pebi)
‘E’ => 1024^6 = 2^60 (Exbi)
‘Z’ => 1024^7 = 2^70 (Zebi)
‘Y’ => 1024^8 = 2^80 (Yobi)
The iec option uses a single letter suffix (e.g. ‘G’), which is not fully standard, as the IEC standard recommends a two-letter symbol (e.g. ‘Gi’), but in practice this method is common. Compare with the iec-i option.
‘Ki’ => 1024^1 = 2^10 (Kibi)
‘Mi’ => 1024^2 = 2^20 (Mebi)
‘Gi’ => 1024^3 = 2^30 (Gibi)
‘Ti’ => 1024^4 = 2^40 (Tebi)
‘Pi’ => 1024^5 = 2^50 (Pebi)
‘Ei’ => 1024^6 = 2^60 (Exbi)
‘Zi’ => 1024^7 = 2^70 (Zebi)
‘Yi’ => 1024^8 = 2^80 (Yobi)
The iec-i option uses a two-letter suffix symbol (e.g. ‘Gi’), as the IEC standard recommends, but this is not always common in practice. Compare with the iec option.
Converting a single number from/to human representation:
$ numfmt --to=si 500000
500K
$ numfmt --to=iec 500000
489K
$ numfmt --to=iec-i 500000
489Ki
$ numfmt --from=si 1M
1000000
$ numfmt --from=iec 1M
1048576

# with '--from=auto', M=Mega, Mi=Mebi
$ numfmt --from=auto 1M
1000000
$ numfmt --from=auto 1Mi
1048576
Converting from ‘SI’ to ‘IEC’ scales (e.g. when a hard disk capacity is advertised as ‘1TB’, while checking the drive's capacity gives lower values):
$ numfmt --from=si --to=iec 1T
932G
Converting a single field from an input file / piped input (these contrived examples are for demonstration purposes only, as both ls and df support the --human-readable option to output sizes in human-readable format):
# Third field (file size) will be shown in SI representation
$ ls -log | numfmt --field 3 --header --to=si | head -n4
-rw-r--r-- 1   94K Aug 23  2011 ABOUT-NLS
-rw-r--r-- 1  3.7K Jan  7 16:15 AUTHORS
-rw-r--r-- 1   36K Jun  1  2011 COPYING
-rw-r--r-- 1     0 Jan  7 15:15 ChangeLog

# Second field (size) will be shown in IEC representation
$ df --block-size=1 | numfmt --field 2 --header --to=iec | head -n4
File system   1B-blocks        Used  Available Use% Mounted on
rootfs             132G   104741408   26554036  80% /
tmpfs              794M        7580     804960   1% /run/shm
/dev/sdb1          694G   651424756   46074696  94% /home
Output can be tweaked using --padding or --format:
# Pad to 10 characters, right-aligned
$ du -s * | numfmt --to=si --padding=10
      2.5K config.log
       108 config.status
      1.7K configure
        20 configure.ac

# Pad to 10 characters, left-aligned
$ du -s * | numfmt --to=si --padding=-10
2.5K       config.log
108        config.status
1.7K       configure
20         configure.ac

# Pad to 10 characters, right-aligned, using 'format'
$ du -s * | numfmt --to=si --format="%10f"
      2.5K config.log
       108 config.status
      1.7K configure
        20 configure.ac

# Pad to 10 characters, left-aligned, using 'format'
$ du -s * | numfmt --to=si --format="%-10f"
2.5K       config.log
108        config.status
1.7K       configure
20         configure.ac
With locales that support grouping digits, using --grouping or --format enables grouping. In the ‘POSIX’ locale, grouping is silently ignored:
$ LC_ALL=C numfmt --from=iec --grouping 2G
2147483648

$ LC_ALL=en_US.utf8 numfmt --from=iec --grouping 2G
2,147,483,648

$ LC_ALL=ta_IN numfmt --from=iec --grouping 2G
2,14,74,83,648

$ LC_ALL=C ./src/numfmt --from=iec --format="==%'15f==" 2G
==     2147483648==

$ LC_ALL=en_US.utf8 ./src/numfmt --from=iec --format="==%'15f==" 2G
==  2,147,483,648==

$ LC_ALL=en_US.utf8 ./src/numfmt --from=iec --format="==%'-15f==" 2G
==2,147,483,648  ==

$ LC_ALL=ta_IN ./src/numfmt --from=iec --format="==%'15f==" 2G
== 2,14,74,83,648==
pr writes each file (‘-’ means standard input), or standard input if none are given, to standard output, paginating and optionally outputting in multicolumn format; optionally it merges all files, printing them in parallel, one per column. Synopsis:
pr [option]... [file]...
By default, a 5-line header is printed on each page: two blank lines; a line with the date, the file name, and the page count; and two more blank lines. A footer of five blank lines is also printed. The default page_length is 66 lines. The default number of text lines is therefore 56. The text line of the header takes the form ‘date string page’, with spaces inserted around string so that the line takes up the full page_width. Here, date is the date (see the -D or --date-format option for details), string is the centered header string, and page identifies the page number. The LC_MESSAGES locale category affects the spelling of page; in the default C locale, it is ‘Page number’ where number is the decimal page number.
Form feeds in the input cause page breaks in the output. Multiple form feeds produce empty pages.
Columns are of equal width, separated by an optional string (default is ‘space’). For multicolumn output, lines will always be truncated to page_width (default 72), unless you use the -J option. For single column output no line truncation occurs by default. Use the -W option to truncate lines in that case.
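The page arithmetic above (66-line page: 5-line header, 56 text lines, 5-line footer) can be checked directly. A small sketch, assuming the default page length:

```shell
# Even a 3-line input is padded out to one full 66-line page:
# 5 header lines + 56 text-area lines + 5 footer lines.
seq 3 | pr | wc -l    # 66
```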
The program accepts the following options. Also see Common options.
The default date format is ‘%Y-%m-%d %H:%M’ (for example, ‘2001-12-04 23:59’); but if the POSIXLY_CORRECT environment variable is set and the LC_TIME locale category specifies the POSIX locale, the default is ‘%b %e %H:%M %Y’ (for example, ‘Dec  4 23:59 2001’).
Time stamps are listed according to the time zone rules specified by
the TZ environment variable, or by the system default rules if
TZ is not set. See Specifying the Time Zone with TZ.
An exit status of zero indicates success, and a nonzero value indicates failure.
fold writes each file (- means standard input), or standard input if none are given, to standard output, breaking long lines. Synopsis:
fold [option]... [file]...
By default, fold breaks lines wider than 80 columns. The output is split into as many lines as necessary.
fold counts screen columns by default; thus, a tab may count more than one column, backspace decreases the column count, and carriage return sets the column to zero.
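As a quick illustration of column-based line breaking, using the documented -w option:

```shell
# Break a 10-character line at every 4th column:
printf 'abcdefghij\n' | fold -w4
# abcd
# efgh
# ij
```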
The program accepts the following options. Also see Common options.
For compatibility fold supports an obsolete option syntax -width. New scripts should use -w width instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands output pieces of the input.
head prints the first part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of -. Synopsis:
head [option]... [file]...
If more than one file is specified, head prints a one-line header consisting of:
==> file name <==
before the output for each file.
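For example (a small sketch; a.txt and b.txt are hypothetical file names):

```shell
printf '1\n2\n' > a.txt
printf '3\n4\n' > b.txt
head -n 1 a.txt b.txt
# ==> a.txt <==
# 1
#
# ==> b.txt <==
# 3
```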
The program accepts the following options. Also see Common options.
‘b’  =>             512 ("blocks")
‘KB’ =>            1000 (KiloBytes)
‘K’  =>            1024 (KibiBytes)
‘MB’ =>       1000*1000 (MegaBytes)
‘M’  =>       1024*1024 (MebiBytes)
‘GB’ =>  1000*1000*1000 (GigaBytes)
‘G’  =>  1024*1024*1024 (GibiBytes)
and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’.
For compatibility head also supports an obsolete option syntax -countoptions, which is recognized only if it is specified first. count is a decimal number optionally followed by a size letter (‘b’, ‘k’, ‘m’) as in -c, or ‘l’ to mean count by lines, or other option letters (‘cqv’). Scripts intended for standard hosts should use -c count or -n count instead. If your script must also run on hosts that support only the obsolete syntax, it is usually simpler to avoid head, e.g., by using ‘sed 5q’ instead of ‘head -5’.
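The suggested replacement can be checked directly; both commands print the first five lines:

```shell
seq 10 | head -n 5    # standard syntax
seq 10 | sed 5q       # portable equivalent: print, then quit after line 5
```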
An exit status of zero indicates success, and a nonzero value indicates failure.
tail prints the last part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of ‘-’. Synopsis:
tail [option]... [file]...
If more than one file is specified, tail prints a one-line header consisting of:
==> file name <==
before the output for each file.
GNU tail can output any amount of data (some other versions of tail cannot). It also has no -r option (print in reverse), since reversing a file is really a different job from printing the end of a file; BSD tail (which is the one with -r) can only reverse files that are at most as large as its buffer, which is typically 32 KiB. A more reliable and versatile way to reverse files is the GNU tac command.
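For example, reversing line order with tac:

```shell
printf 'a\nb\nc\n' | tac
# c
# b
# a
```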
The program accepts the following options. Also see Common options.
‘b’  =>             512 ("blocks")
‘KB’ =>            1000 (KiloBytes)
‘K’  =>            1024 (KibiBytes)
‘MB’ =>       1000*1000 (MegaBytes)
‘M’  =>       1024*1024 (MebiBytes)
‘GB’ =>  1000*1000*1000 (GigaBytes)
‘G’  =>  1024*1024*1024 (GibiBytes)
and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’.
There are two ways to specify how you'd like to track files with this option, but that difference is noticeable only when a followed file is removed or renamed. If you'd like to continue to track the end of a growing file even after it has been unlinked, use --follow=descriptor. This is the default behavior, but it is not useful if you're tracking a log file that may be rotated (removed or renamed, then reopened). In that case, use --follow=name to track the named file, perhaps by reopening it periodically to see if it has been removed and recreated by some other program. Note that the inotify-based implementation handles this case without the need for any periodic reopening.
No matter which method you use, if the tracked file is determined to have shrunk, tail prints a message saying the file has been truncated and resumes tracking the end of the file from the newly-determined endpoint.
When a file is removed, tail's behavior depends on whether it is following the name or the descriptor. When following by name, tail can detect that a file has been removed and gives a message to that effect, and if --retry has been specified it will continue checking periodically to see if the file reappears. When following a descriptor, tail does not detect that the file has been unlinked or renamed and issues no message; even though the file may no longer be accessible via its original name, it may still be growing.
The option values ‘descriptor’ and ‘name’ may be specified only with the long form of the option, not with -f.
The -f option is ignored if no file operand is specified and standard input is a FIFO or a pipe. Likewise, the -f option has no effect for any operand specified as ‘-’, when standard input is a FIFO or a pipe.
With kernel inotify support, output is triggered by file changes and is generally very prompt. Otherwise, tail sleeps for one second between checks (use --sleep-interval=n to change that default), which can make the output appear slightly less responsive or bursty. When using tail without inotify support, you can make it more responsive by using a sub-second sleep interval, e.g., via an alias like this:
alias tail='tail -s.1'
When following by file descriptor (i.e., with --follow=descriptor), this option only affects the initial open of the file, as after a successful open, tail will start following the file descriptor.
When following by name (i.e., with --follow=name), tail infinitely retries to re-open the given files until killed.
Without this option, when tail encounters a file that doesn't
exist or is otherwise inaccessible, it reports that fact and
never checks it again.
For example, to save the output of a build in a file and to watch the file grow, invoke make and tail like this; the tail process will then stop when your build completes. Without this option, you would have had to kill the tail -f process yourself.
$ make >& makerr & tail --pid=$! -f makerr
If you specify a pid that is not in use or that does not correspond
to the process that is writing to the tailed files, then tail
may terminate long before any files stop growing or it may not
terminate until long after the real writer has terminated.
Note that --pid cannot be supported on some systems; tail
will print a warning if this is the case.
open/fstat the file to determine if that file name is
still associated with the same device/inode-number pair as before.
When following a log file that is rotated, this is approximately the
number of seconds between when tail prints the last pre-rotation lines
and when it prints the lines that have accumulated in the new log file.
This option is meaningful only when polling (i.e., without inotify)
and when following by name.
For compatibility tail also supports an obsolete usage ‘tail -[count][bcl][f] [file]’, which is recognized only if it does not conflict with the usage described above. This obsolete form uses exactly one option and at most one file. In the option, count is an optional decimal number optionally followed by a size letter (‘b’, ‘c’, ‘l’) to mean count by 512-byte blocks, bytes, or lines, optionally followed by ‘f’ which has the same meaning as -f.
On older systems, the leading ‘-’ can be replaced by ‘+’ in the obsolete option syntax with the same meaning as in counts, and obsolete usage overrides normal usage when the two conflict. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance).
Scripts intended for use on standard hosts should avoid obsolete syntax and should use -c count[b], -n count, and/or -f instead. If your script must also run on hosts that support only the obsolete syntax, you can often rewrite it to avoid problematic usages, e.g., by using ‘sed -n '$p'’ rather than ‘tail -1’. If that's not possible, the script can use a test like ‘if tail -c +1 </dev/null >/dev/null 2>&1; then ...’ to decide which syntax to use.
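The portable rewrite mentioned above behaves identically for the common case:

```shell
seq 5 | tail -n 1       # prints 5
seq 5 | sed -n '$p'     # portable equivalent: print only the last line
```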
Even if your script assumes the standard behavior, you should still beware usages whose behaviors differ depending on the POSIX version. For example, avoid ‘tail - main.c’, since it might be interpreted as either ‘tail main.c’ or as ‘tail -- - main.c’; avoid ‘tail -c 4’, since it might mean either ‘tail -c4’ or ‘tail -c 10 4’; and avoid ‘tail +4’, since it might mean either ‘tail ./+4’ or ‘tail -n +4’.
An exit status of zero indicates success, and a nonzero value indicates failure.
split creates output files containing consecutive or interleaved sections of input (standard input if none is given or input is ‘-’). Synopsis:
split [option] [input [prefix]]
By default, split puts 1000 lines of input (or whatever is left over for the last section) into each output file.
The output files' names consist of prefix (‘x’ by default) followed by a group of characters (‘aa’, ‘ab’, ... by default), such that concatenating the output files in traditional sorted order by file name produces the original input file (except -nr/n). By default split will initially create files with two generated suffix characters, and will increase this width by two when the next most significant position reaches the last character. (‘yz’, ‘zaaa’, ‘zaab’, ...). In this way an arbitrary number of output files are supported, which sort as described above, even in the presence of an --additional-suffix option. If the -a option is specified and the output file names are exhausted, split reports an error without deleting the output files that it did create.
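A minimal sketch of the default behavior described above (the input file name is arbitrary; xaa, xab, ... are the documented default output names):

```shell
# Split 2500 lines into 1000-line pieces; the last piece gets the remainder.
seq 2500 > input
split input
wc -l xa?
# xaa and xab have 1000 lines each; xac has the remaining 500,
# and concatenating xaa xab xac reproduces the input.
```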
The program accepts the following options. Also see Common options.
For compatibility split also supports an obsolete
option syntax -lines. New scripts should use
-l lines instead.
‘b’  =>             512 ("blocks")
‘KB’ =>            1000 (KiloBytes)
‘K’  =>            1024 (KibiBytes)
‘MB’ =>       1000*1000 (MegaBytes)
‘M’  =>       1024*1024 (MebiBytes)
‘GB’ =>  1000*1000*1000 (GigaBytes)
‘G’  =>  1024*1024*1024 (GibiBytes)
and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’.
xz -dc BIG.xz | split -b200G --filter='xz > $FILE.xz' - big-
Assuming a 10:1 compression ratio, that would create about fifty 20GiB files
with names big-aa.xz, big-ab.xz, big-ac.xz, etc.
n generate n files based on current size of input
k/n only output kth of n to stdout
l/n generate n files without splitting lines
l/k/n likewise but only output kth of n to stdout
r/n like ‘l’ but use round robin distribution
r/k/n likewise but only output kth of n to stdout
Any excess bytes remaining after dividing the input into n chunks are assigned to the last chunk. Any excess bytes appearing after the initial calculation are discarded (except when using ‘r’ mode).
All n files are created even if there are fewer than n lines, or the input is truncated.
For ‘l’ mode, chunks are approximately input size / n. The input is partitioned into n equal sized portions, with the last assigned any excess. If a line starts within a partition it is written completely to the corresponding file. Since lines are not split even if they overlap a partition, the files written can be larger or smaller than the partition size, and even empty if a line is so long as to completely overlap the partition.
For ‘r’ mode, the size of input is irrelevant,
and so can be a pipe for example.
An exit status of zero indicates success, and a nonzero value indicates failure.
Here are a few examples to illustrate how the --number (-n) option works:
Notice how, by default, one line may be split between two or more output files:
$ seq -w 6 10 > k; split -n3 k; head xa?
==> xaa <==
06
07
==> xab <==

08
0
==> xac <==
9
10
Use the "l/" modifier to suppress that:
$ seq -w 6 10 > k; split -nl/3 k; head xa?
==> xaa <==
06
07

==> xab <==
08
09

==> xac <==
10
Use the "r/" modifier to distribute lines in a round-robin fashion:
$ seq -w 6 10 > k; split -nr/3 k; head xa?
==> xaa <==
06
09

==> xab <==
07
10

==> xac <==
08
You can also extract just the kth chunk. This extracts and prints just the 7th "chunk" of 33:
$ seq 100 > k; split -nl/7/33 k
20
21
22
csplit creates zero or more output files containing sections of input (standard input if input is ‘-’). Synopsis:
csplit [option]... input pattern...
The contents of the output files are determined by the pattern arguments, as detailed below. An error occurs if a pattern argument refers to a nonexistent line of the input file (e.g., if no remaining line matches a given regular expression). After every pattern has been matched, any remaining input is copied into one last output file.
By default, csplit prints the number of bytes written to each output file after it has been created.
The types of pattern arguments are:
The output files' names consist of a prefix (‘xx’ by default) followed by a suffix. By default, the suffix is an ascending sequence of two-digit decimal numbers from ‘00’ to ‘99’. In any case, concatenating the output files in sorted order by file name produces the original input file.
By default, if csplit encounters an error or receives a hangup, interrupt, quit, or terminate signal, it removes any output files that it has created so far before it exits.
The program accepts the following options. Also see Common options.
printf(3)-style conversion specification, possibly including format specification flags, a field width, a precision specification, or all of these kinds of modifiers. The format letter must convert a binary unsigned integer argument to readable form. The format letters ‘d’ and ‘i’ are aliases for ‘u’, and the ‘u’, ‘o’, ‘x’, and ‘X’ conversions are allowed. The entire suffix is given (with the current output file number) to sprintf(3) to form the file name suffixes for each of the individual output files in turn. If this option is used, the --digits option is ignored.
An exit status of zero indicates success, and a nonzero value indicates failure.
Here is an example of its usage. First, create an empty directory for the exercise, and cd into it:
$ mkdir d && cd d
Now, split the sequence of 1..14 on lines that end with 0 or 5:
$ seq 14 | csplit - '/[05]$/' '{*}'
8
10
15
Each number printed above is the size of an output file that csplit has just created. List the names of those output files:
$ ls
xx00  xx01  xx02
Use head to show their contents:
$ head xx*
==> xx00 <==
1
2
3
4

==> xx01 <==
5
6
7
8
9

==> xx02 <==
10
11
12
13
14
Example of splitting input by empty lines:
$ csplit --suppress-matched input.txt '/^$/' '{*}'
These commands generate just a few numbers representing the entire contents of files.
wc counts the number of bytes, characters, whitespace-separated words, and newlines in each given file, or standard input if none are given or for a file of ‘-’. Synopsis:
wc [option]... [file]...
wc prints one line of counts for each file, and if the file was given as an argument, it prints the file name following the counts. If more than one file is given, wc prints a final line containing the cumulative counts, with the file name ‘total’. The counts are printed in this order: newlines, words, characters, bytes, maximum line length. Each count is printed right-justified in a field with at least one space between fields so that the numbers and file names normally line up nicely in columns. The width of the count fields varies depending on the inputs, so you should not depend on a particular field width. However, as a GNU extension, if only one count is printed, it is guaranteed to be printed without leading spaces.
By default, wc prints three counts: the newline, word, and byte counts. Options can specify that only certain counts be printed. Options do not undo others previously given, so
wc --bytes --words
prints both the byte counts and the word counts.
With the --max-line-length option, wc prints the length of the longest line per file, and if there is more than one file it prints the maximum (not the sum) of those lengths. The line lengths here are measured in screen columns, according to the current locale and assuming tab positions in every 8th column.
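Both points can be seen with a short input; the counts shown follow the rules above:

```shell
# Selected counts are printed in the canonical order: words, then bytes.
printf 'hello world\n' | wc --words --bytes
# 2 12
# -L reports the longest line, here 'a longer line' (13 columns):
printf 'short\na longer line\n' | wc -L
# 13
```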
The program accepts the following options. Also see Common options.
For example, to find the length of the longest line in any .c or .h file in the current hierarchy, do this:
find . -name '*.[ch]' -print0 | wc -L --files0-from=- | tail -n1
An exit status of zero indicates success, and a nonzero value indicates failure.
sum computes a 16-bit checksum for each given file, or standard input if none are given or for a file of ‘-’. Synopsis:
sum [option]... [file]...
sum prints the checksum for each file followed by the number of blocks in the file (rounded up). If more than one file is given, file names are also printed (by default). (With the --sysv option, corresponding file names are printed when there is at least one file argument.)
By default, GNU sum computes checksums using an algorithm compatible with BSD sum and prints file sizes in units of 1024-byte blocks.
The program accepts the following options. Also see Common options.
sum is provided for compatibility; the cksum program (see next section) is preferable in new applications.
An exit status of zero indicates success, and a nonzero value indicates failure.
cksum computes a cyclic redundancy check (CRC) checksum for each given file, or standard input if none are given or for a file of ‘-’. Synopsis:
cksum [option]... [file]...
cksum prints the CRC checksum for each file along with the number of bytes in the file, and the file name unless no arguments were given.
cksum is typically used to ensure that files transferred by unreliable means (e.g., netnews) have not been corrupted, by comparing the cksum output for the received files with the cksum output for the original files (typically given in the distribution).
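A sketch of that comparison workflow (the file names original and copy are hypothetical):

```shell
printf 'GNU\n' > original
cp original copy
cksum original copy
# An identical CRC and byte count for both files indicates the
# copy's content matches the original.
```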
The CRC algorithm is specified by the POSIX standard. It is not compatible with the BSD or System V sum algorithms (see the previous section); it is more robust.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
md5sum computes a 128-bit checksum (or fingerprint or message-digest) for each specified file.
Note: The MD5 digest is more reliable than a simple CRC (provided by the cksum command) for detecting accidental file corruption, as the chances of accidentally having two files with identical MD5 are vanishingly small. However, it should not be considered secure against malicious tampering: although finding a file with a given MD5 fingerprint is considered infeasible at the moment, it is known how to modify certain files, including digital certificates, so that they appear valid when signed with an MD5 digest. For more secure hashes, consider using SHA-2. See sha2 utilities.
If a file is specified as ‘-’ or if no files are given, md5sum computes the checksum for the standard input. md5sum can also determine whether a file and checksum are consistent. Synopsis:
md5sum [option]... [file]...
For each file, ‘md5sum’ outputs the MD5 checksum, a flag indicating binary or text input mode, and the file name. If the file name contains a backslash or newline, the line is started with a backslash, and each problematic character in the file name is escaped with a backslash, making the output unambiguous even in the presence of arbitrary file names. If file is omitted or specified as ‘-’, standard input is read.
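A sketch of computing a checksum and later verifying it with md5sum's --check mode (greeting.txt is a hypothetical file name):

```shell
printf 'hello\n' > greeting.txt
md5sum greeting.txt
# b1946ac92492d2347c6235b4d2611184  greeting.txt
md5sum greeting.txt > greeting.md5
md5sum --check greeting.md5
# greeting.txt: OK
```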
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
sha1sum computes a 160-bit checksum for each specified file. The usage and options of this command are precisely the same as for md5sum. See md5sum invocation.
Note: The SHA-1 digest is more secure than MD5, and no collisions of it are known (different files having the same fingerprint). However, it is known that they can be produced with considerable, but not unreasonable, resources. For this reason, it is generally considered that SHA-1 should be gradually phased out in favor of the more secure SHA-2 hash algorithms. See sha2 utilities.
The commands sha224sum, sha256sum, sha384sum and sha512sum compute checksums of various lengths (respectively 224, 256, 384 and 512 bits), collectively known as the SHA-2 hashes. The usage and options of these commands are precisely the same as for md5sum. See md5sum invocation.
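For example, hashing the same input with one of the SHA-2 commands:

```shell
printf 'hello\n' | sha256sum
# 5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  -
```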
Note: The SHA384 and SHA512 digests are considerably slower to compute, especially on 32-bit computers, than SHA224 or SHA256.
These commands work with (or produce) sorted files.
sort sorts, merges, or compares all the lines from the given files, or standard input if none are given or for a file of ‘-’. By default, sort writes the results to standard output. Synopsis:
sort [option]... [file]...
sort has three modes of operation: sort (the default), merge, and check for sortedness. The following options change the operation mode:
A pair of lines is compared as follows: sort compares each pair of fields, in the order specified on the command line, according to the associated ordering options, until a difference is found or no fields are left. If no key fields are specified, sort uses a default key of the entire line. Finally, as a last resort when all keys compare equal, sort compares entire lines as if no ordering options other than --reverse (-r) were specified. The --stable (-s) option disables this last-resort comparison so that lines in which all fields compare equal are left in their original relative order. The --unique (-u) option also disables the last-resort comparison.
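The effect of the last-resort comparison and of --stable can be demonstrated with lines whose keys compare equal (a small sketch):

```shell
# Without -s, the last-resort whole-line comparison reorders '2 b' and '2 a':
printf '2 b\n1 a\n2 a\n' | sort -k1,1n
# 1 a
# 2 a
# 2 b

# With -s, equal-keyed lines keep their original relative order:
printf '2 b\n1 a\n2 a\n' | sort -s -k1,1n
# 1 a
# 2 b
# 2 a
```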
Unless otherwise specified, all comparisons use the character collating sequence specified by the LC_COLLATE locale.
GNU sort (as specified for all GNU utilities) has no limit on input line length or restrictions on bytes allowed within lines. In addition, if the final byte of an input file is not a newline, GNU sort silently supplies one. A line's trailing newline is not part of the line for comparison purposes.
0 if no error occurred
1 if invoked with -c or -C and the input is not sorted
2 if an error occurred
If the environment variable TMPDIR is set, sort uses its value as the directory for temporary files instead of /tmp. The --temporary-directory (-T) option in turn overrides the environment variable.
The following options affect the ordering of output lines. They may be specified globally or as part of a specific key field. If no key fields are specified, global options apply to comparison of entire lines; otherwise the global options are inherited by key fields that do not specify any special options of their own. In pre-POSIX versions of sort, global options affect only later key fields, so portable shell scripts should specify global options first.
Use this option only if there is no alternative; it is much slower than
--numeric-sort (-n) and it can lose information when
converting to floating point.
Comparison is exact; there is no rounding error.
Neither a leading ‘+’ nor exponential notation is recognized.
To compare such strings numerically, use the
--general-numeric-sort (-g) option.
If multiple random sort fields are specified, the same random hash function is used for all fields. To use different random hash functions for different fields, you can invoke sort more than once.
The choice of hash function is affected by the --random-source option.
Other options are:
With no arguments, prog must compress standard input to standard output, and when given the -d option it must decompress standard input to standard output.
Terminate with an error if prog exits with nonzero status.
White space and the backslash character should not appear in
prog; they are reserved for future use.
Each pos has the form ‘f[.c][opts]’, where f is the number of the field to use, and c is the number of the first character from the beginning of the field. Fields and character positions are numbered starting with 1; a character position of zero in pos2 indicates the field's last character. If ‘.c’ is omitted from pos1, it defaults to 1 (the beginning of the field); if omitted from pos2, it defaults to 0 (the end of the field). opts are ordering options, allowing individual keys to be sorted according to different rules; see below for details. Keys can span multiple fields.
Example: To sort on the second field, use --key=2,2
(-k 2,2). See below for more notes on keys and more examples.
See also the --debug option to help determine the part
of the line being used in the sort.
When sort has to merge more than nmerge inputs, it merges them in groups of nmerge, saving the result in a temporary file, which is then used as an input in a subsequent merge.
A large value of nmerge may improve merge performance and decrease temporary storage utilization at the expense of increased memory usage and I/O. Conversely a small value of nmerge may reduce memory requirements and I/O at the expense of temporary storage consumption and merge performance.
The value of nmerge must be at least 2. The default value is currently 16, but this is implementation-dependent and may change in the future.
The value of nmerge may be bounded by a resource limit for open
file descriptors. The commands ‘ulimit -n’ or ‘getconf
OPEN_MAX’ may display limits for your systems; these limits may be
modified further if your program already has some files open, or if
the operating system has other limits on the number of open files. If
the value of nmerge exceeds the resource limit, sort
silently uses a smaller value.
Commands like ‘sort -o F F’ and ‘cat F | sort -o F’ are safe, since sort reads all input before opening the output file. However, sort with --merge (-m) can open the output file before reading all input, so a command like ‘cat F | sort -m -o F - G’ is not safe as sort might start writing F before cat is done reading it.
On newer systems, -o cannot appear after an input file if
POSIXLY_CORRECT is set, e.g., ‘sort F -o F’. Portable
scripts should specify -o output-file before any input
files.
This option can improve the performance of sort by causing it
to start with a larger or smaller sort buffer than the default.
However, this option affects only the initial buffer size. The buffer
grows beyond size if sort encounters input lines larger
than size.
That is, given the input line ‘ foo bar’, sort breaks it into fields ‘ foo’ and ‘ bar’. The field separator is not considered to be part of either the field preceding or the field following, so with ‘sort -t " "’ the same input line has three fields: an empty field, ‘foo’, and ‘bar’. However, fields that extend to the end of the line, as -k 2, or fields consisting of a range, as -k 2,3, retain the field separators present between the endpoints of the range.
To specify ASCII NUL as the field separator,
use the two-character string ‘\0’, e.g., ‘sort -t '\0'’.
This option also disables the default last-resort comparison.
The commands ‘sort -u’ and ‘sort | uniq’ are equivalent, but this equivalence does not extend to arbitrary sort options. For example, ‘sort -n -u’ inspects only the value of the initial numeric string when checking for uniqueness, whereas ‘sort -n | uniq’ inspects the entire line. See uniq invocation.
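The difference is easy to demonstrate with two lines whose numeric keys are equal but whose contents differ:

```shell
printf '1 a\n1 b\n' | sort -n -u | wc -l       # 1: keys compare equal, one line survives
printf '1 a\n1 b\n' | sort -n | uniq | wc -l   # 2: the whole lines differ
```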
Historical (BSD and System V) implementations of sort have differed in their interpretation of some options, particularly -b, -f, and -n. GNU sort follows the POSIX behavior, which is usually (but not always!) like the System V behavior. According to POSIX, -n no longer implies -b. For consistency, -M has been changed in the same way. This may affect the meaning of character positions in field specifications in obscure cases. The only fix is to add an explicit -b.
A position in a sort field specified with -k may have any of the option letters ‘MbdfghinRrV’ appended to it, in which case no global ordering options are inherited by that particular field. The -b option may be independently attached to either or both of the start and end positions of a field specification, and if it is inherited from the global options it will be attached to both. If input lines can contain leading or adjacent blanks and -t is not used, then -k is typically combined with -b or an option that implicitly ignores leading blanks (‘Mghn’) as otherwise the varying numbers of leading blanks in fields can cause confusing results.
If the start position in a sort field specifier falls after the end of the line or after the end field, the field is empty. If the -b option was specified, the ‘.c’ part of a field specification is counted from the first nonblank character of the field.
On older systems, sort supports an obsolete origin-zero syntax ‘+pos1 [-pos2]’ for specifying sort keys. The obsolete sequence ‘sort +a.x -b.y’ is equivalent to ‘sort -k a+1.x+1,b’ if y is ‘0’ or absent, otherwise it is equivalent to ‘sort -k a+1.x+1,b+1.y’.
This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance); it can also be enabled when POSIXLY_CORRECT is not set by using the obsolete syntax with ‘-pos2’ present.
Scripts intended for use on standard hosts should avoid obsolete syntax and should use -k instead. For example, avoid ‘sort +2’, since it might be interpreted as either ‘sort ./+2’ or ‘sort -k 3’. If your script must also run on hosts that support only the obsolete syntax, it can use a test like ‘if sort -k 1 </dev/null >/dev/null 2>&1; then ...’ to decide which syntax to use.
Here are some examples to illustrate various combinations of options.
Sort in descending (reverse) numeric order:

sort -n -r
Run no more than 4 sorts concurrently, using a buffer size of 10M:

sort --parallel=4 -S 10M
Sort alphabetically, omitting the first and second fields and the blanks at the start of the third field:

sort -k 3b
Sort numerically on the second field and resolve ties by sorting alphabetically on the third and fourth characters of field five, using ‘:’ as the field delimiter:

sort -t : -k 2,2n -k 5.3,5.4
Note that if you had written -k 2n instead of -k 2,2n sort would have used all characters beginning in the second field and extending to the end of the line as the primary numeric key. For the large majority of applications, treating keys spanning more than one field as numeric will not do what you expect.
Also note that the ‘n’ modifier was applied to the field-end specifier for the first key. It would have been equivalent to specify -k 2n,2 or -k 2n,2n. All modifiers except ‘b’ apply to the associated field, regardless of whether the modifier character is attached to the field-start and/or the field-end part of the key specifier.
sort -t : -k 5b,5 -k 3,3n /etc/passwd
sort -t : -n -k 5b,5 -k 3,3 /etc/passwd
sort -t : -b -k 5,5 -k 3,3n /etc/passwd
These three commands have equivalent effect. The first specifies that the first key's start position ignores leading blanks and the second key is sorted numerically. The other two commands rely on global options being inherited by sort keys that lack modifiers. The inheritance works in this case because -k 5b,5b and -k 5b,5 are equivalent, as the location of a field-end lacking a ‘.c’ character position is not affected by whether initial blanks are skipped.
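The equivalence of these forms can be checked directly with a small passwd-style file (made-up sample data) in place of /etc/passwd; LC_ALL=C pins the collating order so the result is predictable:

```shell
# Fields: login:pw:uid:gid:gecos:home:shell.  Note the leading blank in
# some GECOS fields, which the 'b' modifier skips.
printf '%s\n' \
  'bob:x:42:42: Bob B:/home/bob:/bin/sh' \
  'ann:x:7:7:Ann A:/home/ann:/bin/sh' \
  'cal:x:42:42: Cal C:/home/cal:/bin/sh' > users.txt

# All three commands sort by GECOS (skipping its leading blanks) and
# break ties numerically on the UID field; they print identical output.
LC_ALL=C sort -t : -k 5b,5 -k 3,3n users.txt
LC_ALL=C sort -t : -n -k 5b,5 -k 3,3 users.txt
LC_ALL=C sort -t : -b -k 5,5 -k 3,3n users.txt
```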
Sort a set of log files, primarily by IPv4 address and secondarily by time stamp. If two lines' primary and secondary keys are identical, output the lines in the same order that they were input. The logs contain lines that look like this:

4.150.156.3 - - [01/Apr/2004:06:31:51 +0000] message 1
211.24.3.231 - - [24/Apr/2004:20:17:39 +0000] message 2
Fields are separated by exactly one space. Sort IPv4 addresses lexicographically, e.g., 212.61.52.2 sorts before 212.129.233.201 because 61 is less than 129.
sort -s -t ' ' -k 4.9n -k 4.5M -k 4.2n -k 4.14,4.21 file*.log |
sort -s -t '.' -k 1,1n -k 2,2n -k 3,3n -k 4,4n
This example cannot be done with a single sort invocation, since IPv4 address components are separated by ‘.’ while dates come just after a space. So it is broken down into two invocations of sort: the first sorts by time stamp and the second by IPv4 address. The time stamp is sorted by year, then month, then day, and finally by hour-minute-second field, using -k to isolate each field. Except for hour-minute-second there's no need to specify the end of each key field, since the ‘n’ and ‘M’ modifiers sort based on leading prefixes that cannot cross field boundaries. The IPv4 addresses are sorted lexicographically. The second sort uses ‘-s’ so that ties in the primary key are broken by the secondary key; the first sort uses ‘-s’ so that the combination of the two sorts is stable.
find src -type f -print0 | sort -z -f | xargs -0 etags --append
The use of -print0, -z, and -0 in this case means that file names that contain blanks or other special characters are not broken up by the sort operation.
Use the common DSU (Decorate, Sort, Undecorate) idiom to sort lines according to their length:

awk '{print length, $0}' /etc/passwd | sort -n | cut -f2- -d' '
In general this technique can be used to sort data that the sort command does not support, or is inefficient at, sorting directly.
Shuffle a list of directories, but preserve the order of files within each directory. For instance, one could use this to generate a music playlist in which albums are shuffled but the songs of each album are played in order:

ls */* | sort -t / -k 1,1R -k 2,2
shuf shuffles its input by outputting a random permutation of its input lines. Each output permutation is equally likely. Synopses:
shuf [option]... [file]
shuf -e [option]... [arg]...
shuf -i lo-hi [option]...
shuf has three modes of operation that affect where it obtains its input lines. By default, it reads lines from standard input. The following options change the operation mode:
shuf's other options can affect its behavior in all operation modes:
shuf reads all of its input before opening the output file, so you can safely shuffle a file in place by using commands like ‘shuf -o F <F’ and ‘cat F | shuf -o F’.
For example:
shuf <<EOF
A man,
a plan,
a canal:
Panama!
EOF
might produce the output
Panama!
A man,
a canal:
a plan,
Similarly, the command:
shuf -e clubs hearts diamonds spades
might output:
clubs
diamonds
spades
hearts
and the command ‘shuf -i 1-4’ might output:
4
2
1
3
The above examples all have four input lines, so shuf might produce any of the twenty-four possible permutations of the input. In general, if there are n input lines, there are n! (i.e., n factorial, or n * (n - 1) * ... * 1) possible output permutations.
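Since shuf only permutes its input, sorting its output always recovers the original lines exactly; a quick sanity check (sketch):

```shell
# Any permutation of 1..10 sorts back to 1..10.
seq 10 | shuf | sort -n
```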
To output 50 random numbers each in the range 0 through 9, use:
shuf -r -n 50 -i 0-9
To simulate 100 coin flips, use:
shuf -r -n 100 -e Head Tail
An exit status of zero indicates success, and a nonzero value indicates failure.
uniq writes the unique lines in the given input, or standard input if nothing is given or for an input name of ‘-’. Synopsis:
uniq [option]... [input [output]]
By default, uniq prints its input lines, except that it discards all but the first of adjacent repeated lines, so that no output lines are repeated. Optionally, it can instead discard lines that are not repeated, or all repeated lines.
The input need not be sorted, but repeated input lines are detected only if they are adjacent. If you want to discard non-adjacent duplicate lines, perhaps you want to use sort -u. See sort invocation.
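The adjacency rule is easy to see with a tiny made-up input:

```shell
# uniq collapses only *adjacent* repeats, so the final 'a' survives.
printf 'a\na\nb\na\n' | uniq        # prints: a, b, a (one per line)

# To drop non-adjacent duplicates too, sort first (or use sort -u).
printf 'a\na\nb\na\n' | sort -u     # prints: a, b
```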
Comparisons honor the rules specified by the LC_COLLATE locale category.
If no output file is specified, uniq writes to standard output.
The program accepts the following options. Also see Common options.
For compatibility uniq supports an obsolete option syntax -n. New scripts should use -f n instead.

On older systems, uniq supports an obsolete option syntax +n. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance), but portable scripts should avoid commands whose behavior depends on this variable. For example, use ‘uniq ./+10’ or ‘uniq -s 10’ rather than the ambiguous ‘uniq +10’.
Note that when groups are delimited and the input stream contains two or more consecutive blank lines, then the output is ambiguous. To avoid that, filter the input through ‘tr -s '\n'’ to replace each sequence of consecutive newlines with a single newline.
This is a GNU extension.
An exit status of zero indicates success, and a nonzero value indicates failure.
comm writes to standard output lines that are common, and lines that are unique, to two input files; a file name of ‘-’ means standard input. Synopsis:
comm [option]... file1 file2
Before comm can be used, the input files must be sorted using the collating sequence specified by the LC_COLLATE locale. If an input file ends in a non-newline character, a newline is silently appended. The sort command with no options always outputs a file that is suitable input to comm.
With no options, comm produces three-column output. Column one contains lines unique to file1, column two contains lines unique to file2, and column three contains lines common to both files. Columns are separated by a single TAB character.
The options -1, -2, and -3 suppress printing of the corresponding columns (and separators). Also see Common options.
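For example, with two small sorted files (sample data; LC_ALL=C pins the collating order):

```shell
printf 'apple\nbanana\ncherry\n' > left.txt
printf 'banana\ncherry\ndate\n'  > right.txt

# Full three-column output: unique-to-left, unique-to-right, common.
LC_ALL=C comm left.txt right.txt

# Suppress columns 1 and 2, leaving only the lines common to both.
LC_ALL=C comm -12 left.txt right.txt   # prints: banana, cherry
```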
Unlike some other comparison utilities, comm has an exit status that does not depend on the result of the comparison. Upon normal completion comm produces an exit code of zero. If there is an error it exits with nonzero status.
If the --check-order option is given, unsorted inputs will cause a fatal error message. If the option --nocheck-order is given, unsorted inputs will never cause an error message. If neither of these options is given, wrongly sorted inputs are diagnosed only if an input file is found to contain unpairable lines. If an input file is diagnosed as being unsorted, the comm command will exit with a nonzero status (and the output should not be used).
Forcing comm to process wrongly sorted input files containing unpairable lines by specifying --nocheck-order is not guaranteed to produce any particular output. The output will probably not correspond with whatever you hoped it would be.
Other options are:
The delimiter str given with --output-delimiter may not be empty.
ptx reads a text file and essentially produces a permuted index, with each keyword in its context. The calling sketch is either one of:
ptx [option ...] [file ...]
ptx -G [option ...] [input [output]]
The -G (or its equivalent: --traditional) option disables all GNU extensions and reverts to traditional mode, thus introducing some limitations and changing several of the program's default option values. When -G is not specified, GNU extensions are always enabled. GNU extensions to ptx are documented wherever appropriate in this document. See Compatibility in ptx, for the full list.
Individual options are explained in the following sections.
When GNU extensions are enabled, there may be zero, one or several files after the options. If there is no file, the program reads the standard input. If there is one or several files, they give the name of input files which are all read in turn, as if all the input files were concatenated. However, there is a full contextual break between each file and, when automatic referencing is requested, file names and line numbers refer to individual text input files. In all cases, the program outputs the permuted index to the standard output.
When GNU extensions are not enabled, that is, when the program operates in traditional mode, there may be zero, one or two parameters besides the options. If there are no parameters, the program reads the standard input and outputs the permuted index to the standard output. If there is only one parameter, it names the text input to be read instead of the standard input. If two parameters are given, they give respectively the name of the input file to read and the name of the output file to produce. Be very careful to note that, in this case, the contents of the file given by the second parameter are destroyed. This behavior is dictated by System V ptx compatibility; GNU Standards normally discourage output parameters not introduced by an option.
Note that for any file named as the value of an option or as an input text file, a single dash ‘-’ may be used, in which case standard input is assumed. However, it would not make sense to use this convention more than once per program invocation.
An exit status of zero indicates success, and a nonzero value indicates failure.
As it is set up now, the program assumes that the input file is coded using 8-bit ISO 8859-1 code, also known as Latin-1 character set, unless it is compiled for MS-DOS, in which case it uses the character set of the IBM-PC. (GNU ptx is not known to work on smaller MS-DOS machines anymore.) Compared to 7-bit ASCII, the set of characters which are letters is different; this alters the behavior of regular expression matching. Thus, the default regular expression for a keyword allows foreign or diacriticized letters. Keyword sorting, however, is still crude; it obeys the underlying character set ordering quite blindly.
When GNU extensions are enabled, the only way to avoid newline as a break character is to write all the break characters in the file with no newline at all, not even at the end of the file. When GNU extensions are disabled, spaces, tabs and newlines are always considered as break characters even if not included in the Break file.
There is no default for the Only file. When both an Only file and an Ignore file are specified, a word is considered a keyword only if it is listed in the Only file and not in the Ignore file.
Using this option, the program does not try very hard to remove references from contexts in output, but it succeeds in doing so when the context ends exactly at the newline. If option -r is used with the -S default value, or when GNU extensions are disabled, this condition is always met and references are completely excluded from the output contexts.
[.?!][]\"')}]*\\($\\|\t\\| \\)[ \t\n]*
Whenever GNU extensions are disabled, or if the -r option is used, ends of lines are used; in this case, the default regexp is just:
\n
Using an empty regexp is equivalent to completely disabling end of line or end of sentence recognition. In this case, the whole file is considered to be a single big line or sentence. The user might want to disallow all truncation flag generation as well, through option -F "". See Syntax of Regular Expressions.
When the keywords happen to be near the beginning of the input line or sentence, this often creates an unused area at the beginning of the output context line; when the keywords happen to be near the end of the input line or sentence, this often creates an unused area at the end of the output context line. The program tries to fill those unused areas by wrapping around context in them; the tail of the input line or sentence is used to fill the unused area on the left of the output line; the head of the input line or sentence is used to fill the unused area on the right of the output line.
As a matter of convenience to the user, many usual backslashed escape sequences from the C language are recognized and converted to the corresponding characters by ptx itself.
An empty regexp is equivalent to not using this option. See Syntax of Regular Expressions.
As a matter of convenience to the user, many usual backslashed escape sequences, as found in the C language, are recognized and converted to the corresponding characters by ptx itself.
Output format is mainly controlled by the -O and -T options described in the table below. When neither -O nor -T is selected, and if GNU extensions are enabled, the program chooses an output format suitable for a dumb terminal. Each keyword occurrence is output to the center of one line, surrounded by its left and right contexts. Each field is properly justified, so the concordance output can be readily observed. As a special feature, if automatic references are selected by option -A and are output before the left context, that is, if option -R is not selected, then a colon is added after the reference; this nicely interfaces with GNU Emacs next-error processing. In this default output format, each white space character, like newline and tab, is merely changed to exactly one space, with no special attempt to compress consecutive spaces. This might change in the future. Except for those white space characters, every other character of the underlying set of 256 characters is transmitted verbatim.
Output format is further controlled by the following options.
This option is automatically selected whenever GNU extensions are disabled.
string may have more than one character, as in -F .... Also, in the particular case when string is empty (-F ""), truncation flagging is disabled and no truncation marks are appended.
As a matter of convenience to the user, many usual backslashed escape sequences, as found in the C language, are recognized and converted to the corresponding characters by ptx itself.
.xx "tail" "before" "keyword_and_after" "head" "ref"
so it will be possible to write a ‘.xx’ roff macro to take care of the output typesetting. This is the default output format when GNU extensions are disabled. Option -M can be used to change ‘xx’ to another macro name.
In this output format, each non-graphical character, like newline and tab, is merely changed to exactly one space, with no special attempt to compress consecutive spaces. Each quote character ‘"’ is doubled so it will be correctly processed by nroff or troff.
\xx {tail}{before}{keyword}{after}{head}{ref}
so it will be possible to write a \xx definition to take care of the output typesetting. Note that when references are not being produced, that is, neither option -A nor option -r is selected, the last parameter of each \xx call is inhibited. Option -M can be used to change ‘xx’ to another macro name.
In this output format, some special characters, like ‘$’, ‘%’, ‘&’, ‘#’ and ‘_’ are automatically protected with a backslash. Curly brackets ‘{’, ‘}’ are protected with a backslash and a pair of dollar signs (to force mathematical mode). The backslash itself produces the sequence \backslash{}. Circumflex and tilde diacritical marks produce the sequence ^\{ } and ~\{ } respectively. Other diacriticized characters of the underlying character set produce an appropriate TeX sequence as far as possible. The other non-graphical characters, like newline and tab, and all other characters which are not part of ASCII, are merely changed to exactly one space, with no special attempt to compress consecutive spaces. Let me know how to improve this special character processing for TeX.
This version of ptx contains a few features which do not exist in System V ptx. These extra features are suppressed by using the -G command line option, unless overridden by other command line options. Some GNU extensions cannot be recovered by overriding, so the simple rule is to avoid -G if you care about GNU extensions. Here are the differences between this program and System V ptx.
Having output parameters not introduced by options is a dangerous practice which GNU avoids as far as possible. So, for using ptx portably between GNU and System V, you should always use it with a single input file, and always expect the result on standard output. You might also want to automatically configure in a -G option to ptx calls in products using ptx, if the configurator finds that the installed ptx accepts -G.
tsort performs a topological sort on the given file, or standard input if no input file is given or for a file of ‘-’. For more details and some history, see tsort background. Synopsis:
tsort [option] [file]
tsort reads its input as pairs of strings, separated by blanks, indicating a partial ordering. The output is a total ordering that corresponds to the given partial ordering.
For example
tsort <<EOF
a b c
d
e f
b c d e
EOF
will produce the output
a
b
c
d
e
f
Consider a more realistic example.
You have a large set of functions all in one file, and they may all be declared static except one. Currently that one (say main) is the first function defined in the file, and the ones it calls directly follow it, followed by those they call, etc. Let's say that you are determined to take advantage of prototypes, so you have to choose between declaring all of those functions (which means duplicating a lot of information from the definitions) and rearranging the functions so that as many as possible are defined before they are used. One way to automate the latter process is to get a list for each function of the functions it calls directly. Many programs can generate such lists. They describe a call graph. Consider the following list, in which a given line indicates that the function on the left calls the one on the right directly.
main parse_options
main tail_file
main tail_forever
tail_file pretty_name
tail_file write_header
tail_file tail
tail_forever recheck
tail_forever pretty_name
tail_forever write_header
tail_forever dump_remainder
tail tail_lines
tail tail_bytes
tail_lines start_lines
tail_lines dump_remainder
tail_lines file_lines
tail_lines pipe_lines
tail_bytes xlseek
tail_bytes start_bytes
tail_bytes dump_remainder
tail_bytes pipe_bytes
file_lines dump_remainder
recheck pretty_name
then you can use tsort to produce an ordering of those functions that satisfies your requirement.
example$ tsort call-graph | tac
dump_remainder
start_lines
file_lines
pipe_lines
xlseek
start_bytes
pipe_bytes
tail_lines
tail_bytes
pretty_name
write_header
tail
recheck
parse_options
tail_file
tail_forever
main
tsort detects any cycles in the input and writes the first cycle encountered to standard error.
Note that for a given partial ordering, generally there is no unique total ordering. In the context of the call graph above, the function parse_options may be placed anywhere in the list as long as it precedes main.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
tsort exists because very early versions of the Unix linker processed an archive file exactly once, and in order. As ld read each object in the archive, it decided whether it was needed in the program based on whether it defined any symbols which were undefined at that point in the link.
This meant that dependencies within the archive had to be handled specially. For example, scanf probably calls read. That means that in a single pass through an archive, it was important for scanf.o to appear before read.o, because otherwise a program which calls scanf but not read might end up with an unexpected unresolved reference to read.
The way to address this problem was to first generate a set of dependencies of one object file on another. This was done by a shell script called lorder. The GNU tools don't provide a version of lorder, as far as I know, but you can still find it in BSD distributions.
Then you ran tsort over the lorder output, and you used the resulting sort to define the order in which you added objects to the archive.
This whole procedure has been obsolete since about 1980, because Unix archives now contain a symbol table (traditionally built by ranlib, now generally built by ar itself), and the Unix linker uses the symbol table to effectively make multiple passes over an archive file.
Anyhow, that's where tsort came from: it was written to solve an old problem with the way the linker handled archive files, a problem that has since been solved in different ways.
cut writes to standard output selected parts of each line of each input file, or standard input if no files are given or for a file name of ‘-’. Synopsis:
cut option... [file]...
In the table which follows, the byte-list, character-list, and field-list are one or more numbers or ranges (two numbers separated by a dash) separated by commas. Bytes, characters, and fields are numbered starting at 1. Incomplete ranges may be given: ‘-m’ means ‘1-m’; ‘n-’ means ‘n’ through end of line or last field. The list elements can be repeated, can overlap, and can be specified in any order; but the selected input is written in the same order that it is read, and is written exactly once.
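A short sketch of list selection (made-up passwd-style input):

```shell
# Select fields 1 and 3 of ':'-separated input; the delimiter is
# reproduced between the selected fields.
printf 'root:x:0:0\ndaemon:x:1:1\n' | cut -d : -f 1,3
# prints: root:0 then daemon:1

# Character ranges work the same way: characters 2 through 4.
printf 'abcdef\n' | cut -c 2-4      # prints: bcd
```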
The program accepts the following options. Also see Common options.
Note that awk supports more sophisticated field processing, and by default will use (and discard) runs of blank characters to separate fields and ignore leading and trailing blanks.
awk '{print $2}'      # print the second field
awk '{print $(NF-1)}' # print the penultimate field
awk '{print $2,$1}'   # reorder the first two fields
In the unlikely event that awk is unavailable, one can use the join command to process blank characters as awk does above.
join -a1 -o 1.2 - /dev/null     # print the second field
join -a1 -o 1.2,1.1 - /dev/null # reorder the first two fields
An exit status of zero indicates success, and a nonzero value indicates failure.
paste writes to standard output lines consisting of sequentially corresponding lines of each given file, separated by a TAB character. Standard input is used for a file name of ‘-’ or if no input files are given.
Synopsis:
paste [option]... [file]...
For example, with:
$ cat num2
1
2
$ cat let3
a
b
c
Take lines sequentially from each file:
$ paste num2 let3
1	a
2	b
	c
Duplicate lines from a file:
$ paste num2 let3 num2
1	a	1
2	b	2
	c
Intermix lines from stdin:
$ paste - let3 - < num2
1	a	2
	b
	c
Join consecutive lines with a space:
$ seq 4 | paste -d ' ' - -
1 2
3 4
The program accepts the following options. Also see Common options.
$ paste -s num2 let3
1	2
a	b	c
$ paste -d '%_' num2 let3 num2
1%a_1
2%b_2
%c_
An exit status of zero indicates success, and a nonzero value indicates failure.
join writes to standard output a line for each pair of input lines that have identical join fields. Synopsis:
join [option]... file1 file2
Either file1 or file2 (but not both) can be ‘-’, meaning standard input. file1 and file2 should be sorted on the join fields.
Normally, the sort order is that of the collating sequence specified by the LC_COLLATE locale. Unless the -t option is given, the sort comparison ignores blanks at the start of the join field, as in sort -b. If the --ignore-case option is given, the sort comparison ignores the case of characters in the join field, as in sort -f.
The sort and join commands should use consistent locales and options if the output of sort is fed to join. You can use a command like ‘sort -k 1b,1’ to sort a file on its default join field, but if you select a non-default locale, join field, separator, or comparison options, then you should do so consistently between join and sort. If ‘join -t ''’ is specified then the whole line is considered, which matches the default operation of sort.
If the input has no unpairable lines, a GNU extension is available; the sort order can be any order that considers two fields to be equal if and only if the sort comparison described above considers them to be equal. For example:
$ cat file1
a a1
c c1
b b1
$ cat file2
a a2
c c2
b b2
$ join file1 file2
a a1 a2
c c1 c2
b b1 b2
If the --check-order option is given, unsorted inputs will cause a fatal error message. If the option --nocheck-order is given, unsorted inputs will never cause an error message. If neither of these options is given, wrongly sorted inputs are diagnosed only if an input file is found to contain unpairable lines, and when both input files are non-empty. If an input file is diagnosed as being unsorted, the join command will exit with a nonzero status (and the output should not be used).
Forcing join to process wrongly sorted input files containing unpairable lines by specifying --nocheck-order is not guaranteed to produce any particular output. The output will probably not correspond with whatever you hoped it would be.
The defaults are:
The program accepts the following options. Also see Common options.
Otherwise, construct each output line according to the format in field-list. Each element in field-list is either the single character ‘0’ or has the form m.n where the file number, m, is ‘1’ or ‘2’ and n is a positive field number.
A field specification of ‘0’ denotes the join field. In most cases, the functionality of the ‘0’ field spec may be reproduced using the explicit m.n that corresponds to the join field. However, when printing unpairable lines (using either of the -a or -v options), there is no way to specify the join field using m.n in field-list if there are unpairable lines in both files. To give join that functionality, POSIX invented the ‘0’ field specification notation.
The elements in field-list are separated by commas or blanks. Blank separators typically need to be quoted for the shell. For example, the commands ‘join -o 1.2,2.2’ and ‘join -o '1.2 2.2'’ are equivalent.
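A sketch of -o with unpairable lines, using two made-up two-field files that are already sorted on the join field:

```shell
printf 'a 1\nb 2\n' > f1.txt
printf 'a x\nc y\n' > f2.txt

# '0' names the join field itself; -a 1 keeps unpairable lines from
# file 1, and -e - fills their missing output fields with '-'.
LC_ALL=C join -a 1 -o 0,1.2,2.2 -e - f1.txt f2.txt
# prints: "a 1 x" then "b 2 -"
```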
All output lines, including those printed because of any -a or -v option, are subject to the specified field-list.
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands operate on individual characters.
tr [option]... set1 [set2]
tr copies standard input to standard output, performing one of the following operations:
The set1 and (if given) set2 arguments define ordered sets of characters, referred to below as set1 and set2. These sets are the characters of the input that tr operates on. The --complement (-c, -C) option replaces set1 with its complement (all of the characters that are not in set1).
Currently tr fully supports only single-byte characters. Eventually it will support multibyte characters; when it does, the -C option will cause it to complement the set of characters, whereas -c will cause it to complement the set of values. This distinction will matter only when some values are not characters, and this is possible only in locales using multibyte encodings when the input contains encoding errors.
The program accepts the --help and --version options. See Common options. Options must precede operands.
An exit status of zero indicates success, and a nonzero value indicates failure.
The format of the set1 and set2 arguments resembles the format of regular expressions; however, they are not regular expressions, only lists of characters. Most characters simply represent themselves in these strings, but the strings can contain the shorthands listed below, for convenience. Some of them can be used only in set1 or set2, as noted below.
While a backslash followed by a character not listed above is interpreted as that character, the backslash also effectively removes any special significance, so it is useful to escape ‘[’, ‘]’, ‘*’, and ‘-’.
GNU tr does not support the System V syntax that uses square brackets to enclose ranges. Translations specified in that format sometimes work as expected, since the brackets are often transliterated to themselves. However, they should be avoided because they sometimes behave unexpectedly. For example, ‘tr -d '[0-9]'’ deletes brackets as well as digits.
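The difference is easy to see with a small sketch:

```shell
# System V-style brackets: '[' and ']' are just members of the set,
# so they are deleted along with the digits.
printf 'a[1]b2c\n' | tr -d '[0-9]'   # prints: abc

# Without brackets, only the digits are deleted.
printf 'a[1]b2c\n' | tr -d '0-9'     # prints: a[]bc
```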
Many historically common and even accepted uses of ranges are not portable. For example, on EBCDIC hosts using the ‘A-Z’ range will not do what most would expect because ‘A’ through ‘Z’ are not contiguous as they are in ASCII. If you can rely on a POSIX compliant version of tr, then the best way to work around this is to use character classes (see below). Otherwise, it is most portable (and most ugly) to enumerate the members of the ranges.
The notation ‘[:class:]’ expands to all of the characters in the (predefined) class. The characters expand in an unspecified order, except for the upper and lower classes, which expand in ascending order. When the --delete (-d) and --squeeze-repeats (-s) options are both given, any character class can be used in set2. Otherwise, only the character classes lower and upper are accepted in set2, and then only if the corresponding character class (upper and lower, respectively) is specified in the same relative position in set1. Doing this specifies case conversion. The class names are given below; an error results when an invalid class name is given.
alnum: Letters and digits.
alpha: Letters.
blank: Horizontal whitespace.
cntrl: Control characters.
digit: Digits.
graph: Printable characters, not including space.
lower: Lowercase letters.
print: Printable characters, including space.
punct: Punctuation characters.
space: Horizontal or vertical whitespace.
upper: Uppercase letters.
xdigit: Hexadecimal digits.
tr performs translation when set1 and set2 are both given and the --delete (-d) option is not given. tr translates each character of its input that is in set1 to the corresponding character in set2. Characters not in set1 are passed through unchanged. When a character appears more than once in set1 and the corresponding characters in set2 are not all the same, only the final one is used. For example, these two commands are equivalent:
tr aaa xyz
tr a z
A common use of tr is to convert lowercase characters to uppercase. This can be done in many ways. Here are three of them:
tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ
tr a-z A-Z
tr '[:lower:]' '[:upper:]'
But note that using ranges like a-z above is not portable.
When tr is performing translation, set1 and set2 typically have the same length. If set1 is shorter than set2, the extra characters at the end of set2 are ignored.
On the other hand, making set1 longer than set2 is not portable; POSIX says that the result is undefined. In this situation, BSD tr pads set2 to the length of set1 by repeating the last character of set2 as many times as necessary. System V tr truncates set1 to the length of set2.
By default, GNU tr handles this case like BSD tr. When the --truncate-set1 (-t) option is given, GNU tr handles this case like the System V tr instead. This option is ignored for operations other than translation.
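A minimal sketch of the difference:

```shell
# GNU/BSD default: set2 ('x') is padded to the length of set1 ('ab'),
# so both characters are translated.
printf 'ab\n' | tr ab x      # prints: xx

# With -t, set1 is truncated to the length of set2 (System V style),
# so only 'a' is translated.
printf 'ab\n' | tr -t ab x   # prints: xb
```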
Acting like System V tr in this case breaks the relatively common BSD idiom:
tr -cs A-Za-z0-9 '\012'
because it converts only zero bytes (the first element in the complement of set1), rather than all non-alphanumerics, to newlines.
By the way, the above idiom is not portable because it uses ranges, and it assumes that the octal code for newline is 012. Assuming a POSIX compliant tr, here is a better way to write it:
tr -cs '[:alnum:]' '[\n*]'
When given just the --delete (-d) option, tr removes any input characters that are in set1.
When given just the --squeeze-repeats (-s) option, tr replaces each input sequence of a repeated character that is in set1 with a single occurrence of that character.
When given both --delete and --squeeze-repeats, tr first performs any deletions using set1, then squeezes repeats from any remaining characters using set2.
The --squeeze-repeats option may also be used when translating, in which case tr first performs translation, then squeezes repeats from any remaining characters using set2.
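A small sketch of both combinations (made-up input):

```shell
# Delete the 'a's first, then squeeze runs of 'c' among what remains.
printf 'aabbccdd\n' | tr -ds a c      # prints: bbcdd

# With translation: 'l' -> 'L' and 'o' -> 'O', then runs of set2
# characters are squeezed to a single occurrence.
printf 'helloo\n' | tr -s lo LO       # prints: heLO
```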
Here are some examples to illustrate various combinations of options:
Remove all zero bytes:

tr -d '\0'
Put all words on lines by themselves. This converts all non-alphanumeric characters to newlines, then squeezes each string of repeated newlines into a single newline:

tr -cs '[:alnum:]' '[\n*]'
Convert each sequence of repeated newlines to a single newline. That is, delete blank lines:

tr -s '\n'
Print each word of the input that is immediately repeated, after folding case and treating punctuation and blanks as word separators:

#!/bin/sh
cat -- "$@" \
  | tr -s '[:punct:][:blank:]' '[\n*]' \
  | tr '[:upper:]' '[:lower:]' \
  | uniq -d
To delete the three characters ‘a’, ‘x’, and ‘M’, you can use:

tr -d axM
However, when ‘-’ is one of those characters, it can be tricky because ‘-’ has special meanings. Performing the same task as above but also removing all ‘-’ characters, we might try tr -d -axM, but that would fail because tr would try to interpret -a as a command-line option. Alternatively, we could try putting the hyphen inside the string, tr -d a-xM, but that wouldn't work either because it would make tr interpret a-x as the range of characters ‘a’...‘x’ rather than the three.
One way to solve the problem is to put the hyphen at the end of the list
of characters:
tr -d axM-
Or you can use ‘--’ to terminate option processing:
tr -d -- -axM
More generally, use the character class notation [=c=]
with ‘-’ (or any other character) in place of the ‘c’:
tr -d '[=-=]axM'
Note how single quotes are used in the above example to protect the square brackets from interpretation by a shell.
expand writes the contents of each given file, or standard input if none are given or for a file of ‘-’, to standard output, with tab characters converted to the appropriate number of spaces. Synopsis:
expand [option]... [file]...
By default, expand converts all tabs to spaces. It preserves backspace characters in the output; they decrement the column count for tab calculations. The default action is equivalent to -t 8 (set tabs every 8 columns).
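For instance (a minimal illustration with made-up input), a tab after one character advances to column 8 by default, or to the next custom stop given with -t:

```shell
# Default tab stops every 8 columns: the tab becomes 7 spaces here.
printf 'a\tb\n' | expand
# With -t 4, the same tab advances only to column 4 (3 spaces).
printf 'a\tb\n' | expand -t 4
```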
The program accepts the following options. Also see Common options.
For compatibility, GNU expand also accepts the obsolete
option syntax, -t1[,t2].... New scripts
should use -t t1[,t2]... instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
unexpand writes the contents of each given file, or standard input if none are given or for a file of ‘-’, to standard output, converting blanks at the beginning of each line into as many tab characters as needed. In the default POSIX locale, a blank is a space or a tab; other locales may specify additional blank characters. Synopsis:
unexpand [option]... [file]...
By default, unexpand converts only initial blanks (those that precede all non-blank characters) on each line. It preserves backspace characters in the output; they decrement the column count for tab calculations. By default, tabs are set at every 8th column.
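A quick illustration (the input is made up for the example): eight leading spaces become one tab, while blanks after the first non-blank character are left alone by default:

```shell
# Leading run of 8 spaces -> one tab; the interior spaces survive.
printf '        x  y\n' | unexpand
```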
The program accepts the following options. Also see Common options.
For compatibility, GNU unexpand supports the obsolete option syntax,
-tab1[,tab2]..., where tab stops must be
separated by commas. (Unlike -t, this obsolete option does
not imply -a.) New scripts should use --first-only -t
tab1[,tab2]... instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
This chapter describes the ls command and its variants dir and vdir, which list information about files.
The ls program lists information about files (of any type, including directories). Options and file arguments can be intermixed arbitrarily, as usual.
For non-option command-line arguments that are directories, by default ls lists the contents of directories, not recursively, and omitting files with names beginning with ‘.’. For other non-option arguments, by default ls lists just the file name. If no non-option argument is specified, ls operates on the current directory, acting as if it had been invoked with a single argument of ‘.’.
By default, the output is sorted alphabetically, according to the locale settings in effect. If standard output is a terminal, the output is in columns (sorted vertically) and control characters are output as question marks; otherwise, the output is listed one per line and control characters are output as-is.
Because ls is such a fundamental program, it has accumulated many options over the years. They are described in the subsections below; within each section, options are listed alphabetically (ignoring case). The division of options into the subsections is not absolute, since some options affect more than one aspect of ls's operation.
0
success
1
minor problems (e.g., failure to access a file or directory not specified as a command line argument. This happens when listing a directory in which entries are actively being removed or renamed.)
2
serious trouble (e.g., memory exhausted, invalid option, failure to access a file or directory specified as a command line argument or a directory loop)
Also see Common options.
These options determine which files ls lists information for. By default, ls lists files and the contents of any directories on the command line, except that in directories it ignores files whose names start with ‘.’.
This option can be useful in shell aliases. For example, if
lx is an alias for ‘ls --hide='*~'’ and ly is
an alias for ‘ls --ignore='*~'’, then the command ‘lx -A’
lists the file README~ even though ‘ly -A’ would not.
$ ls --ignore='.??*' --ignore='.[^.]' --ignore='#*'
The first option ignores names of length 3 or more that start with ‘.’,
the second ignores all two-character names that start with ‘.’
except ‘..’, and the third ignores names that start with ‘#’.
These options affect the information that ls displays. By default, only file names are shown.
//DIRED// beg1 end1 beg2 end2 ...
The begn and endn are unsigned integers that record the byte position of the beginning and end of each file name in the output. This makes it easy for Emacs to find the names, even when they contain unusual characters such as space or newline, without fancy searching.
If directories are being listed recursively (-R), output a similar line with offsets for each subdirectory name:
//SUBDIRED// beg1 end1 ...
Finally, output a line of the form:
//DIRED-OPTIONS// --quoting-style=word
where word is the quoting style (see Formatting the file names).
Here is an actual example:
$ mkdir -p a/sub/deeper a/sub2
$ touch a/f1 a/f2
$ touch a/sub/deeper/file
$ ls -gloRF --dired a
  a:
  total 8
  -rw-r--r-- 1    0 Jun 10 12:27 f1
  -rw-r--r-- 1    0 Jun 10 12:27 f2
  drwxr-xr-x 3 4096 Jun 10 12:27 sub/
  drwxr-xr-x 2 4096 Jun 10 12:27 sub2/

  a/sub:
  total 4
  drwxr-xr-x 2 4096 Jun 10 12:27 deeper/

  a/sub/deeper:
  total 0
  -rw-r--r-- 1    0 Jun 10 12:27 file

  a/sub2:
  total 0
//DIRED// 48 50 84 86 120 123 158 162 217 223 282 286
//SUBDIRED// 2 3 167 172 228 240 290 296
//DIRED-OPTIONS// --quoting-style=literal
Note that the pairs of offsets on the ‘//DIRED//’ line above delimit these names: f1, f2, sub, sub2, deeper, file. The offsets on the ‘//SUBDIRED//’ line delimit the following directory names: a, a/sub, a/sub/deeper, a/sub2.
Here is an example of how to extract the fifth entry name, ‘deeper’, corresponding to the pair of offsets, 222 and 228:
$ ls -gloRF --dired a > out
$ dd bs=1 skip=222 count=6 < out 2>/dev/null; echo
deeper
Note that although the listing above includes a trailing slash for the ‘deeper’ entry, the offsets select the name without the trailing slash. However, if you invoke ls with --dired along with an option like --escape (aka -b) and operate on a file whose name contains special characters, notice that the backslash is included:
$ touch 'a b'
$ ls -blog --dired 'a b'
  -rw-r--r-- 1 0 Jun 10 12:28 a\ b
//DIRED// 30 34
//DIRED-OPTIONS// --quoting-style=escape
If you use a quoting style that adds quote marks
(e.g., --quoting-style=c), then the offsets include the quote marks.
So beware that the user may select the quoting style via the environment
variable QUOTING_STYLE. Hence, applications using --dired
should either specify an explicit --quoting-style=literal option
(aka -N or --literal) on the command line, or else be
prepared to parse the escaped names.
Normally the size is printed as a byte count without punctuation, but this can be overridden (see Block size). For example, -h prints an abbreviated, human-readable count, and ‘--block-size="'1"’ prints a byte count with the thousands separator of the current locale.
For each directory that is listed, preface the files with a line ‘total blocks’, where blocks is the total disk allocation for all files in that directory. The block size currently defaults to 1024 bytes, but this can be overridden (see Block size). The computed blocks count each hard link separately; this is arguably a deficiency.
The file type is one of the following characters:
The file mode bits listed are similar to symbolic mode specifications (see Symbolic Modes). But ls combines multiple bits into the third character of each set of permissions as follows:
Following the file mode bits is a single character that specifies whether an alternate access method such as an access control list applies to the file. When the character following the file mode bits is a space, there is no alternate access method. When it is a printing character, then there is such a method.
GNU ls uses a ‘.’ character to indicate a file with a security context, but no other alternate access method.
A file with any other combination of alternate access methods
is marked with a ‘+’ character.
Normally the disk allocation is printed in units of 1024 bytes, but this can be overridden (see Block size).
For files that are NFS-mounted from an HP-UX system to a BSD system,
this option reports sizes that are half the correct values. On HP-UX
systems, it reports sizes that are twice the correct values for files
that are NFS-mounted from BSD systems. This is due to a flaw in HP-UX;
it also affects the HP-UX ls program.
These options change the order in which ls sorts the information it outputs. By default, sorting is done by character code (e.g., ASCII order).
Version sorting handles the fact that file names frequently include indices or version numbers. Standard sorting usually does not produce the order that one expects because comparisons are made on a character-by-character basis. Version sorting is especially useful when browsing directories that contain many files with indices/version numbers in their names:
$ ls -1           $ ls -1v
abc.zml-1.gz      abc.zml-1.gz
abc.zml-12.gz     abc.zml-2.gz
abc.zml-2.gz      abc.zml-12.gz
Version-sorted strings are compared such that if ver1 and ver2 are version numbers and prefix and suffix (suffix matching the regular expression ‘(\.[A-Za-z~][A-Za-z0-9~]*)*’) are strings then ver1 < ver2 implies that the name composed of “prefix ver1 suffix” sorts before “prefix ver2 suffix”.
Note also that leading zeros of numeric parts are ignored:
$ ls -1           $ ls -1v
abc-1.007.tgz     abc-1.01a.tgz
abc-1.012b.tgz    abc-1.007.tgz
abc-1.01a.tgz     abc-1.012b.tgz
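GNU sort offers the same ordering via its -V option, which also uses filevercmp; this can be handy for previewing how ls -v will order a set of names:

```shell
# sort -V applies version ordering, so 2 sorts before 10:
printf 'abc-10.tgz\nabc-2.tgz\n' | sort -V
```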
This functionality is implemented using gnulib's filevercmp function, which has some caveats worth noting. For example, some suffixes are not matched by the regular expression mentioned above:

abc-1.2.3.4.7z
abc-1.2.3.7z

abc-1.2.3.4.x86_64.rpm
abc-1.2.3.x86_64.rpm
These options affect the appearance of the overall output.
Piping a colorized listing through a pager usually produces unreadable results; however, ‘more -f’ does seem to work.
Note that using the --color option may incur a noticeable performance penalty when run in a directory with very many entries, because the default settings require that ls stat every single file it lists.
However, if you would like most of the file-type coloring but can live without the other coloring options (e.g., executable, orphan, sticky, other-writable, capability), use dircolors to set the LS_COLORS environment variable like this,

eval $(dircolors -p | perl -pe \
  's/^((CAP|S[ET]|O[TR]|M|E)\w+).*/$1 00/' | dircolors -)

and on a dirent.d_type-capable file system, ls will perform only one stat call per command line argument.
The -k or --kibibytes option affects the
per-directory block count written by the -l and similar
options, and the size written by the -s or --size
option. It does not affect the file size written by -l.
Some terminal emulators might not properly align columns to the right of a TAB following a non-ASCII byte. You can avoid that issue by using the -T0 option or by putting TABSIZE=0 in your environment, to tell ls to align using spaces, not tabs.
By default, file timestamps are listed in abbreviated form, using a date like ‘Mar 30 2002’ for non-recent timestamps, and a date-without-year and time like ‘Mar 30 23:45’ for recent timestamps. This format can change depending on the current locale as detailed below.
A timestamp is considered to be recent if it is less than six months old, and is not dated in the future. If a timestamp dated today is not listed in recent form, the timestamp is in the future, which means you probably have clock skew problems which may break programs like make that rely on file timestamps.
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ.
The following option changes how file timestamps are printed.
If format contains two format strings separated by a newline,
the former is used for non-recent files and the latter for recent
files; if you want output columns to line up, you may need to insert
spaces in one of the two formats.
This is useful because the time output includes all the information that
is available from the operating system. For example, this can help
explain make's behavior, since GNU make
uses the full timestamp to determine whether a file is out of date.
newline='
'
ls -l --time-style="+%Y-%m-%d $newline%m-%d %H:%M"
ls -l --time-style="iso"
The LC_TIME locale category specifies the timestamp format. The default POSIX locale uses timestamps like ‘Mar 30 2002’ and ‘Mar 30 23:45’; in this locale, the following two ls invocations are equivalent:
newline='
'
ls -l --time-style="+%b %e %Y$newline%b %e %H:%M"
ls -l --time-style="locale"
Other locales behave differently. For example, in a German locale,
--time-style="locale" might be equivalent to
--time-style="+%e. %b %Y $newline%e. %b %H:%M"
and might generate timestamps like ‘30. Mär 2002 ’ and
‘30. Mär 23:45’.
You can specify the default value of the --time-style option with the environment variable TIME_STYLE; if TIME_STYLE is not set the default style is ‘locale’. GNU Emacs 21.3 and later use the --dired option and therefore can parse any date format, but if you are using Emacs 21.1 or 21.2 and specify a non-POSIX locale you may need to set ‘TIME_STYLE="posix-long-iso"’.
To avoid certain denial-of-service attacks, timestamps that would be longer than 1000 bytes may be treated as errors.
These options change how file names themselves are printed.
You can specify the default value of the --quoting-style option
with the environment variable QUOTING_STYLE. If that environment
variable is not set, the default value is ‘literal’, but this
default may change to ‘shell’ in a future version of this package.
dir is equivalent to ls -C -b; that is, by default files are listed in columns, sorted vertically, and special characters are represented by backslash escape sequences.

See ls.
vdir is equivalent to ls -l -b; that is, by default files are listed in long format and special characters are represented by backslash escape sequences.
dircolors outputs a sequence of shell commands to set up the terminal for color output from ls (and dir, etc.). Typical usage:
eval "$(dircolors [option]... [file])"
If file is specified, dircolors reads it to determine which colors to use for which file types and extensions. Otherwise, a precompiled database is used. For details on the format of these files, run ‘dircolors --print-database’.
To make dircolors read a ~/.dircolors file if it exists, you can put the following lines in your ~/.bashrc (or adapt them to your favorite shell):
d=.dircolors
test -r $d && eval "$(dircolors $d)"
The output is a shell command to set the LS_COLORS environment variable. You can specify the shell syntax to use on the command line, or dircolors will guess it from the value of the SHELL environment variable.
The program accepts the following options. Also see Common options.
C shell syntax is assumed if the value of SHELL ends with csh or tcsh.
An exit status of zero indicates success, and a nonzero value indicates failure.
This chapter describes the commands for basic file manipulation: copying, moving (renaming), and deleting (removing).
cp copies files (or, optionally, directories). The copy is completely independent of the original. You can either copy one file to another, or copy arbitrarily many files to a destination directory. Synopses:
cp [option]... [-T] source dest
cp [option]... source... directory
cp [option]... -t directory source...
Generally, files are written just as they are read. For exceptions, see the --sparse option below.
By default, cp does not copy directories. However, the -R, -a, and -r options cause cp to copy recursively by descending into source directories and copying files to corresponding destination directories.
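As a small sketch (the directory names here are invented for the example), -R recreates the source hierarchy under the destination:

```shell
# Copy a directory tree; dest/sub and dest/sub/f are created.
cd "$(mktemp -d)"            # work in a scratch directory
mkdir -p src/sub
echo hello > src/sub/f
cp -R src dest
cat dest/sub/f
```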
When copying from a symbolic link, cp normally follows the link only when not copying recursively or when --link (-l) is used. This default can be overridden with the --archive (-a), -d, --dereference (-L), --no-dereference (-P), and -H options. If more than one of these options is specified, the last one silently overrides the others.
When copying to a symbolic link, cp follows the link only when it refers to an existing regular file. However, when copying to a dangling symbolic link, cp refuses by default, and fails with a diagnostic, since the operation is inherently dangerous. This behavior is contrary to historical practice and to POSIX. Set POSIXLY_CORRECT to make cp attempt to create the target of a dangling destination symlink, in spite of the possible risk. Also, when an option like --backup or --link acts to rename or remove the destination before copying, cp renames or removes the symbolic link rather than the file it points to.
By default, cp copies the contents of special files only when not copying recursively. This default can be overridden with the --copy-contents option.
cp generally refuses to copy a file onto itself, with the following exception: if --force --backup is specified with source and dest identical, and referring to a regular file, cp will make a backup file, either regular or numbered, as specified in the usual ways (see Backup options). This is useful when you simply want to make a backup of an existing file before changing it.
The program accepts the following options. Also see Common options.
#!/bin/sh
# Usage: backup FILE...
# Create a GNU-style backup of each listed FILE.
fail=0
for i; do
  cp --backup --force --preserve=all -- "$i" "$i" || fail=1
done
exit $fail
cp -R --copy-contents will hang indefinitely trying to read from FIFOs and special files like /dev/console, and it will fill up your destination disk if you use it to copy /dev/zero. This option has no effect unless copying recursively, and it does not affect the copying of symbolic links.
This option is independent of the --interactive or -i option: neither cancels the effect of the other.
This option is ignored when the --no-clobber or -n option is also used.
Many systems now provide the utimensat function, which makes it possible to preserve timestamps even for symbolic links.
$ mkdir c; : > a; ln -s a b; cp -aH a b c; ls -i1 c
74161745 a
74161745 b
Note the inputs: b is a symlink to regular file a, yet the files in destination directory, c/, are hard-linked. Since -a implies --no-dereference it would copy the symlink, but the later -H tells cp to dereference the command line arguments where it then sees two files with the same inode number. Then the --preserve=links option also implied by -a will preserve the perceived hard link.
Here is a similar example that exercises cp's -L option:
$ mkdir b c; (cd b; : > a; ln -s a b); cp -aL b c; ls -i1 c/b
74163295 a
74163295 b
Using --preserve with no attribute_list is equivalent to --preserve=mode,ownership,timestamps.
In the absence of this option, the permissions of existing destination
files are unchanged, while each new file is created with the
mode bits of the corresponding source file, minus the bits set in the
umask and minus the set-user-ID and set-group-ID bits.
See File permissions.
cp --parents a/b/c existing_dir

copies the file a/b/c to existing_dir/a/b/c, creating any missing intermediate directories.
The when value can be one of the following:
This option is overridden by the --link, --symbolic-link and --attributes-only options, thus allowing it to be used to configure the default data copying behavior for cp. For example, with the following alias, cp will use the minimum amount of space supported by the file system.
alias cp='cp --reflink=auto --sparse=always'
The when value can be one of the following:
An exit status of zero indicates success, and a nonzero value indicates failure.
dd copies a file (from standard input to standard output, by default) with a changeable I/O block size, while optionally performing conversions on it. Synopses:
dd [operand]...
dd option
The only options are --help and --version. See Common options. dd accepts the following operands, whose syntax was inspired by the DD (data definition) statement of OS/360 JCL.
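For example (the data is made up), the ‘bs=’, ‘skip=’, and ‘count=’ operands select a region of the input; status=none, a GNU extension, suppresses the transfer statistics:

```shell
# Copy bytes 6..10 of the input: 1-byte blocks, skip 6, take 5.
printf 'hello world\n' | dd bs=1 skip=6 count=5 status=none
```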
Conversions:
The ‘ascii’, ‘ebcdic’, and ‘ibm’ conversions are
mutually exclusive. If you use any of these options, you should also
use the ‘cbs=’ option.
The ‘block’ and ‘unblock’ conversions are mutually exclusive.
The ‘lcase’ and ‘ucase’ conversions are mutually exclusive.
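For example, ‘conv=ucase’ upcases the data as it is copied (the sample text is invented; status=none is a GNU extension that quiets the statistics):

```shell
# conv=ucase maps the input to upper case on the way through.
printf 'MiXeD case\n' | dd conv=ucase status=none
```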
The following “conversions” are really file flags and don't affect internal processing:
The ‘excl’ and ‘nocreat’ conversions are mutually exclusive.
Here are the flags. Not every flag is supported on every operating system.
For example, with oflag=direct, writes will fail with EINVAL if the output buffer size is not a multiple of 512.
# Advise to drop cache for whole file
dd if=ifile iflag=nocache count=0
# Ensure drop cache for the whole file
dd of=ofile oflag=nocache conv=notrunc,fdatasync count=0
# Drop cache for part of file
dd if=ifile iflag=nocache skip=10 count=10 of=/dev/null
# Stream data using just the read-ahead cache
dd if=ifile of=ofile iflag=nocache oflag=nocache
The read system call may return early if a full block is not available. When that happens, continue calling read to fill the remainder of the block. This flag can be used only with iflag. The flag is useful with pipes, for example, as they may return short reads; in that case, this flag is needed to ensure that a ‘count=’ argument is interpreted as a block count rather than a count of read operations.
This flag can be used only with iflag.

This flag can be used only with iflag.

This flag can be used only with oflag.
These flags are not supported on all systems, and ‘dd’ rejects attempts to use them when they are not supported. When reading from standard input or writing to standard output, the ‘nofollow’ and ‘noctty’ flags should not be specified, and the other flags (e.g., ‘nonblock’) can affect how other processes behave with the affected file descriptors, even after dd exits.
The numeric-valued strings above (n and bytes) can be followed by a multiplier: ‘b’=512, ‘c’=1, ‘w’=2, ‘xm’=m, or any of the standard block size suffixes like ‘k’=1024 (see Block size).
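For example, since ‘b’ multiplies by 512, ‘bs=2b’ means 1024-byte blocks (status=none is a GNU extension used here to suppress the statistics):

```shell
# One 2b (= 2*512 = 1024 byte) block of zeros:
dd if=/dev/zero bs=2b count=1 status=none | wc -c
```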
Any block size you specify via ‘bs=’, ‘ibs=’, ‘obs=’, ‘cbs=’ should not be too large—values larger than a few megabytes are generally wasteful or (as in the gigabyte..exabyte case) downright counterproductive or error-inducing.
To process data that is at an offset or size that is not a multiple of the I/O block size, you can use the ‘skip_bytes’, ‘seek_bytes’ and ‘count_bytes’ flags. Alternatively the traditional method of separate dd invocations can be used. For example, the following shell commands copy data in 512 KiB blocks between a disk and a tape, but do not save or restore a 4 KiB label at the start of the disk:
disk=/dev/rdsk/c0t1d0s2
tape=/dev/rmt/0

# Copy all but the label from disk to tape.
(dd bs=4k skip=1 count=0 && dd bs=512k) <$disk >$tape
# Copy from tape back to disk, but leave the disk label alone.
(dd bs=4k seek=1 count=0 && dd bs=512k) <$tape >$disk
For failing disks, other tools come with a great variety of extra functionality to ease the saving of as much data as possible before the disk finally dies, e.g. GNU ddrescue. However, in some cases such a tool is not available or the administrator feels more comfortable with the handling of dd. As a simple rescue method, call dd as shown in the following example: the options ‘conv=noerror,sync’ are used to continue after read errors and to pad out bad reads with NULs, while ‘iflag=fullblock’ caters for short reads (which traditionally never occur on disk based devices):
# Rescue data from an (unmounted!) partition of a failing disk. dd conv=noerror,sync iflag=fullblock </dev/sda1 > /mnt/rescue.img
Sending an ‘INFO’ signal to a running dd process makes it print I/O statistics to standard error and then resume copying. In the example below, dd is run in the background to copy 10 million blocks. The kill command makes it output intermediate I/O statistics, and when dd completes normally or is killed by the SIGINT signal, it outputs the final statistics.
$ dd if=/dev/zero of=/dev/null count=10MB & pid=$!
$ kill -s INFO $pid; wait $pid

3385223+0 records in
3385223+0 records out
1733234176 bytes (1.7 GB) copied, 6.42173 seconds, 270 MB/s

10000000+0 records in
10000000+0 records out
5120000000 bytes (5.1 GB) copied, 18.913 seconds, 271 MB/s
On systems lacking the ‘INFO’ signal dd responds to the ‘USR1’ signal instead, unless the POSIXLY_CORRECT environment variable is set.
An exit status of zero indicates success, and a nonzero value indicates failure.
install copies files while setting their file mode bits and, if possible, their owner and group. Synopses:
install [option]... [-T] source dest
install [option]... source... directory
install [option]... -t directory source...
install [option]... -d directory...
install is similar to cp, but allows you to control the attributes of destination files. It is typically used in Makefiles to copy programs into their destination directories. It refuses to copy files onto themselves.
install never preserves extended attributes (xattr).
The program accepts the following options. Also see Common options.
The default is root. owner may be either a user name or a numeric user ID.
An exit status of zero indicates success, and a nonzero value indicates failure.
mv moves or renames files (or directories). Synopses:
mv [option]... [-T] source dest
mv [option]... source... directory
mv [option]... -t directory source...
mv can move any type of file from one file system to another.
Prior to version 4.0 of the fileutils, mv could move only regular files between file systems. For example, now mv can move an entire directory hierarchy including special device files from one partition to another. It first uses some of the same code that's used by cp -a to copy the requested directories and files, then (assuming the copy succeeded) it removes the originals. If the copy fails, then the part that was copied to the destination partition is removed. If you were to copy three directories from one partition to another and the copy of the first directory succeeded, but the second didn't, the first would be left on the destination partition and the second and third would be left on the original partition.
mv always tries to copy extended attributes (xattr), which may include SELinux context, ACLs or Capabilities. Upon failure all but ‘Operation not supported’ warnings are output.
If a destination file exists but is normally unwritable, standard input is a terminal, and the -f or --force option is not given, mv prompts the user for whether to replace the file. (You might own the file, or have write permission on its directory.) If the response is not affirmative, the file is skipped.
Warning: Avoid specifying a source name with a trailing slash, when it might be a symlink to a directory. Otherwise, mv may do something very surprising, since its behavior depends on the underlying rename system call. On a system with a modern Linux-based kernel, it fails with errno=ENOTDIR. However, on other systems (at least FreeBSD 6.1 and Solaris 10) it silently renames not the symlink but rather the directory referenced by the symlink. See Trailing slashes.
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
rm removes each given file. By default, it does not remove directories. Synopsis:
rm [option]... [file]...
If the -I or --interactive=once option is given, and there are more than three files or the -r, -R, or --recursive option is given, then rm prompts the user for whether to proceed with the entire operation. If the response is not affirmative, the entire command is aborted.
Otherwise, if a file is unwritable, standard input is a terminal, and the -f or --force option is not given, or the -i or --interactive=always option is given, rm prompts the user for whether to remove the file. If the response is not affirmative, the file is skipped.
Any attempt to remove a file whose last file name component is . or .. is rejected without any prompting, as mandated by POSIX.
Warning: If you use rm to remove a file, it is usually possible to recover the contents of that file. If you want more assurance that the contents are truly unrecoverable, consider using shred.
The program accepts the following options. Also see Common options.
This option is useful when removing a build “chroot” hierarchy,
which normally contains no valuable data. However, it is not uncommon
to bind-mount /home into such a hierarchy, to make it easier to
use one's start-up file. The catch is that it's easy to forget to
unmount /home. Then, when you use rm -rf to remove
your normally throw-away chroot, that command will remove everything
under /home, too.
Use the --one-file-system option, and it will
warn about and skip directories on other file systems.
Of course, this will not save your /home if it and your
chroot happen to be on the same file system.
One common question is how to remove files whose names begin with a
‘-’. GNU rm, like every program that uses the getopt
function to parse its arguments, lets you use the ‘--’ option to
indicate that all following arguments are non-options. To remove a file
called -f in the current directory, you could type either:
rm -- -f
or:
rm ./-f
The Unix rm program's use of a single ‘-’ for this purpose predates the development of the getopt standard syntax.
An exit status of zero indicates success, and a nonzero value indicates failure.
shred overwrites devices or files, to help prevent even very expensive hardware from recovering the data.
Ordinarily when you remove a file (see rm invocation), the data is not actually destroyed. Only the index listing where the file is stored is destroyed, and the storage is made available for reuse. There are undelete utilities that will attempt to reconstruct the index and can bring the file back if the parts were not reused.
On a busy system with a nearly-full drive, space can get reused in a few seconds. But there is no way to know for sure. If you have sensitive data, you may want to be sure that recovery is not possible by actually overwriting the file with non-sensitive data.
However, even after doing that, it is possible to take the disk back to a laboratory and use a lot of sensitive (and expensive) equipment to look for the faint “echoes” of the original data underneath the overwritten data. If the data has only been overwritten once, it's not even that hard.
The best way to remove something irretrievably is to destroy the media it's on with acid, melt it down, or the like. For cheap removable media like floppy disks, this is the preferred method. However, hard drives are expensive and hard to melt, so the shred utility tries to achieve a similar effect non-destructively.
This uses many overwrite passes, with the data patterns chosen to maximize the damage they do to the old data. While this will work on floppies, the patterns are designed for best effect on hard drives. For more details, see the source code and Peter Gutmann's paper Secure Deletion of Data from Magnetic and Solid-State Memory, from the proceedings of the Sixth USENIX Security Symposium (San Jose, California, July 22–25, 1996).
Please note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. Exceptions include:
Log-structured or journaled file systems, such as Ext3 (in data=journal mode), BFS, NTFS, etc., when they are configured to journal data.
In the particular case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file, as documented in the mount man page (man mount).
If you are not sure how your file system operates, then you should assume that it does not overwrite data in place, which means that shred cannot reliably operate on regular files in your file system.
Generally speaking, it is more reliable to shred a device than a file, since this bypasses the problem of file system design mentioned above. However, even shredding devices is not always completely reliable. For example, most disks map out bad sectors invisibly to the application; if the bad sectors contain sensitive data, shred won't be able to destroy it.
shred makes no attempt to detect or report this problem, just as it makes no attempt to do anything about backups. However, since it is more reliable to shred devices than files, shred by default does not truncate or remove the output file. This default is more suitable for devices, which typically cannot be truncated and should not be removed.
Finally, consider the risk of backups and mirrors. File system backups and remote mirrors may contain copies of the file that cannot be removed, and that will allow a shredded file to be recovered later. So if you keep any data you may later want to destroy using shred, be sure that it is not backed up or mirrored.
shred [option]... file[...]
The program accepts the following options. Also see Common options.
You might use the following command to erase all trace of the file system you'd created on the floppy disk in your first drive. That command takes about 20 minutes to erase a “1.44MB” (actually 1440 KiB) floppy.
shred --verbose /dev/fd0
Similarly, to erase all data on a selected partition of your hard disk, you could give a command like this:
shred --verbose /dev/sda5
On modern disks, a single pass should be adequate, and it will take one third the time of the default three-pass approach.
# 1 pass, write pseudo-random data; 3x faster than the default
shred --verbose -n1 /dev/sda5
To be on the safe side, use at least one pass that overwrites using pseudo-random data. I.e., don't be tempted to use ‘-n0 --zero’, in case some disk controller optimizes the process of writing blocks of all zeros, and thereby does not clear all bytes in a block. Some SSDs may do just that.
A file of ‘-’ denotes standard output. The intended use of this is to shred a removed temporary file. For example:
i=$(mktemp)
exec 3<>"$i"
rm -- "$i"
echo "Hello, world" >&3
shred - >&3
exec 3>-
However, the command ‘shred - >file’ does not shred the contents of file, since the shell truncates file before invoking shred. Use the command ‘shred file’ or (if using a Bourne-compatible shell) the command ‘shred - 1<>file’ instead.
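The difference can be demonstrated directly (a sketch assuming GNU shred and a Bourne-compatible shell; demo_file is an illustrative name):

```shell
# Why ‘shred - >file’ fails: the shell truncates file before shred runs.
printf 'secret data here' > demo_file    # 16 bytes of "sensitive" data

shred - > demo_file       # '>' already truncated demo_file to 0 bytes
wc -c < demo_file         # 0: there was nothing left to overwrite

printf 'secret data here' > demo_file
shred -x - 1<> demo_file  # '1<>' opens read/write without truncating;
                          # -x (--exact) avoids rounding up to a full block
wc -c < demo_file         # 16: the original bytes were overwritten in place
```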
An exit status of zero indicates success, and a nonzero value indicates failure.
This chapter describes commands which create special types of files (and rmdir, which removes directories, one special file type).
Although Unix-like operating systems have markedly fewer special file types than others, not everything can be treated only as the undifferentiated byte stream of normal files. For example, when a file is created or removed, the system must record this information, which it does in a directory—a special type of file. Although you can read directories as normal files, if you're curious, in order for the system to do its job it must impose a structure, a certain order, on the bytes of the file. Thus it is a “special” type of file.
Besides directories, other special file types include named pipes (FIFOs), symbolic links, sockets, and so-called special files.
link creates a single hard link at a time. It is a minimalist interface to the system-provided link function. See Hard Links. It avoids the bells and whistles of the more commonly-used ln command (see ln invocation). Synopsis:
link filename linkname
filename must specify an existing file, and linkname must specify a nonexistent entry in an existing directory. link simply calls link (filename, linkname) to create the link.
On a GNU system, this command acts like ‘ln --directory --no-target-directory filename linkname’. However, the --directory and --no-target-directory options are not specified by POSIX, and the link command is more portable in practice.
If filename is a symbolic link, it is unspecified whether linkname will be a hard link to the symbolic link or to the target of the symbolic link. Use ln -P or ln -L to specify which behavior is desired.
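A short sketch of the -P/-L distinction (assumes GNU ln; the file names are illustrative):

```shell
printf 'data\n' > target_file
ln -s target_file sym           # sym is a symlink to target_file

ln -P sym hard_to_symlink       # hard link to the symlink itself
ln -L sym hard_to_target        # hard link to what the symlink points to

ls -l hard_to_symlink           # shows a symlink (same inode as sym)
cat hard_to_target              # prints "data"
```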
An exit status of zero indicates success, and a nonzero value indicates failure.
ln makes links between files. By default, it makes hard links; with the -s option, it makes symbolic (or soft) links. Synopses:
ln [option]... [-T] target linkname
ln [option]... target
ln [option]... target... directory
ln [option]... -t directory target...
Normally ln does not remove existing files. Use the --force (-f) option to remove them unconditionally, the --interactive (-i) option to remove them conditionally, and the --backup (-b) option to rename them.
A hard link is another name for an existing file; the link and the original are indistinguishable. Technically speaking, they share the same inode, and the inode contains all the information about a file—indeed, it is not incorrect to say that the inode is the file. Most systems prohibit making a hard link to a directory; on those where it is allowed, only the super-user can do so (and with caution, since creating a cycle will cause problems to many other utilities). Hard links cannot cross file system boundaries. (These restrictions are not mandated by POSIX, however.)
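The shared-inode property is easy to observe with stat (a sketch; the file names are illustrative):

```shell
printf 'hello\n' > original
ln original copy                # a second name for the same inode

stat -c %i original             # both names report the same inode number
stat -c %i copy
stat -c %h original             # the link count is now 2
```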
Symbolic links (symlinks for short), on the other hand, are a special file type (which not all kernels support: System V release 3 (and older) systems lack symlinks) in which the link file actually refers to a different file, by name. When most operations (opening, reading, writing, and so on) are passed the symbolic link file, the kernel automatically dereferences the link and operates on the target of the link. But some operations (e.g., removing) work on the link file itself, rather than on its target. The owner and group of a symlink are not significant to file access performed through the link, but do have implications on deleting a symbolic link from a directory with the restricted deletion bit set. On the GNU system, the mode of a symlink has no significance and cannot be changed, but on some BSD systems, the mode can be changed and will affect whether the symlink will be traversed in file name resolution. See Symbolic Links.
Symbolic links can contain arbitrary strings; a dangling symlink occurs when the string in the symlink does not resolve to a file. There are no restrictions against creating dangling symbolic links. There are trade-offs to using absolute or relative symlinks. An absolute symlink always points to the same file, even if the directory containing the link is moved. However, if the symlink is visible from more than one machine (such as on a networked file system), the file pointed to might not always be the same. A relative symbolic link is resolved in relation to the directory that contains the link, and is often useful in referring to files on the same device without regards to what name that device is mounted on when accessed via networked machines.
When creating a relative symlink in a different location than the current directory, the resolution of the symlink will be different than the resolution of the same string from the current directory. Therefore, many users prefer to first change directories to the location where the relative symlink will be created, so that tab-completion or other file resolution will find the same target as what will be placed in the symlink.
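A minimal illustration of how a relative symlink is resolved (directory and file names are hypothetical):

```shell
mkdir -p d1/d2
printf 'x\n' > d1/afile

# The string '../afile' is resolved against d1/d2 (the directory that
# contains the link), not against the directory that was current when
# ln ran:
ln -s ../afile d1/d2/alink
cat d1/d2/alink                 # resolves d1/d2/../afile, i.e. d1/afile
```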
The program accepts the following options. Also see Common options.
When the destination is an actual directory (not a symlink to one), there is no ambiguity. The link is created in that directory. But when the specified destination is a symlink to a directory, there are two ways to treat the user's request. ln can treat the destination just as it would a normal directory and create the link in it. On the other hand, the destination can be viewed as a non-directory—as the symlink itself. In that case, ln must delete or backup that symlink before creating the new link. The default is to treat a destination that is a symlink to a directory just like a directory.
This option is weaker than the --no-target-directory (-T) option, so it has no effect if both options are given.
Example:
ln -srv /a/file /tmp
'/tmp/file' -> '../a/file'
Relative symbolic links are generated based on their canonicalized containing directory and canonicalized targets, i.e., all symbolic links in these file names will be resolved. See realpath invocation, which gives greater control over relative file name generation, as demonstrated in the following example:
ln--relative() {
  test "$1" = --no-symlinks && { nosym=$1; shift; }
  target="$1"
  test -d "$2" && link="$2/." || link="$2"
  rtarget="$(realpath $nosym -m "$target" \
    --relative-to "$(dirname "$link")")"
  ln -s -v "$rtarget" "$link"
}
If -L and -P are both given, the last one takes precedence. If -s is also given, -L and -P are silently ignored. If neither option is given, then this implementation defaults to -P if the system link supports hard links to symbolic links (such as the GNU system), and -L if link follows symbolic links (such as on BSD).
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
Bad Example:

# Create link ../a pointing to a in that directory.
# Not really useful because it points to itself.
ln -s a ..

Better Example:

# Change to the target before creating symlinks to avoid being confused.
cd ..
ln -s adir/a .

Bad Example:

# Hard coded file names don't move well.
ln -s $(pwd)/a /some/dir/

Better Example:

# Relative file names survive directory moves and also
# work across networked file systems.
ln -s afile anotherfile
ln -s ../adir/afile yetanotherfile
mkdir creates directories with the specified names. Synopsis:
mkdir [option]... name...
mkdir creates each directory name in the order given. It reports an error if name already exists, unless the -p option is given and name is a directory.
The program accepts the following options. Also see Common options.
Normally the directory has the desired file mode bits at the moment it is created. As a GNU extension, mode may also mention special mode bits, but in this case there may be a temporary window during which the directory exists but its special mode bits are incorrect. See Directory Setuid and Setgid, for how the set-user-ID and set-group-ID bits of directories are inherited unless overridden in this way.
To set the file permission bits of any newly-created parent directories to a value that includes ‘u+wx’, you can set the umask before invoking mkdir. For example, if the shell command ‘(umask u=rwx,go=rx; mkdir -p P/Q)’ creates the parent P it sets the parent's permission bits to ‘u=rwx,go=rx’. To set a parent's special mode bits as well, you can invoke chmod after mkdir. See Directory Setuid and Setgid, for how the set-user-ID and set-group-ID bits of newly-created parent directories are inherited.
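For instance, the umask example from the text can be checked with stat (a sketch; P and Q are the directory names used above):

```shell
# The umask in the subshell determines the parent P's permission bits:
(umask u=rwx,go=rx; mkdir -p P/Q)
stat -c %a P                    # 755, i.e. u=rwx,go=rx
```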
An exit status of zero indicates success, and a nonzero value indicates failure.
mkfifo creates FIFOs (also called named pipes) with the specified names. Synopsis:
mkfifo [option] name...
A FIFO is a special file type that permits independent processes to communicate. One process opens the FIFO file for writing, and another for reading, after which data can flow as with the usual anonymous pipe in shells or elsewhere.
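A minimal demonstration in a Bourne-compatible shell (the FIFO name chan is illustrative):

```shell
mkfifo chan

echo 'ping' > chan &    # the writer blocks until a reader opens the FIFO
read line < chan        # the reader receives the writer's data
echo "$line"            # prints "ping"
wait                    # reap the background writer
```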
The program accepts the following option. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
mknod creates a FIFO, character special file, or block special file with the specified name. Synopsis:
mknod [option]... name type [major minor]
Unlike the phrase “special file type” above, the term special file has a technical meaning on Unix: something that can generate or receive data. Usually this corresponds to a physical piece of hardware, e.g., a printer or a disk. (These files are typically created at system-configuration time.) The mknod command is what creates files of this type. Such devices can be read either a character at a time or a “block” (many characters) at a time, hence we say there are block special files and character special files.
Due to shell aliases and built-in mknod functions, using an unadorned mknod interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., env mknod ...) to avoid interference from the shell.
The arguments after name specify the type of file to make:
When making a block or character special file, the major and minor device numbers must be given after the file type. If a major or minor device number begins with ‘0x’ or ‘0X’, it is interpreted as hexadecimal; otherwise, if it begins with ‘0’, as octal; otherwise, as decimal.
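For example (a sketch; creating device nodes requires appropriate privileges, so only the privilege-free FIFO case is shown executing, and my_fifo and my_null are illustrative names):

```shell
mknod my_fifo p                 # a FIFO: no major/minor numbers needed
ls -l my_fifo                   # the first character 'p' marks a FIFO

# With root privileges, a character special file could be created, e.g.
# the conventional Linux null device (major 1, minor 3; equivalently
# 0x1 0x3 in hexadecimal):
#   mknod my_null c 1 3
```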
The program accepts the following option. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
readlink may work in one of two supported modes:
readlink [option]... file...
By default, readlink operates in readlink mode.
The program accepts the following options. Also see Common options.
The readlink utility first appeared in OpenBSD 2.1.
The realpath command without options operates like readlink in canonicalize mode.
An exit status of zero indicates success, and a nonzero value indicates failure.
rmdir removes empty directories. Synopsis:
rmdir [option]... directory...
If any directory argument does not refer to an existing empty directory, it is an error.
The program accepts the following options. Also see Common options.
See rm invocation, for how to remove non-empty directories (recursively).
An exit status of zero indicates success, and a nonzero value indicates failure.
unlink deletes a single specified file name. It is a minimalist interface to the system-provided unlink function. See Deleting Files. It avoids the bells and whistles of the more commonly-used rm command (see rm invocation). Synopsis:
unlink filename
On some systems unlink can be used to delete the name of a directory. On others, it can be used that way only by a privileged user. In the GNU system unlink can never delete the name of a directory.
The unlink command honors the --help and --version options. To remove a file whose name begins with ‘-’, prefix the name with ‘./’, e.g., ‘unlink ./--help’.
An exit status of zero indicates success, and a nonzero value indicates failure.
A file is not merely its contents, a name, and a file type (see Special file types). A file also has an owner (a user ID), a group (a group ID), permissions (what the owner can do with the file, what people in the group can do, and what everyone else can do), various timestamps, and other information. Collectively, we call these a file's attributes.
These commands change file attributes.
chown changes the user and/or group ownership of each given file to new-owner or to the user and group of an existing reference file. Synopsis:
chown [option]... {new-owner | --reference=ref_file} file...
If used, new-owner specifies the new owner and/or group as follows (with no embedded white space):
[owner] [ : [group] ]
Specifically:
If owner or group is intended to represent a numeric user or group ID, then you may specify it with a leading ‘+’. See Disambiguating names and IDs.
Some older scripts may still use ‘.’ in place of the ‘:’ separator. POSIX 1003.1-2001 (see Standards conformance) does not require support for that, but for backward compatibility GNU chown supports ‘.’ so long as no ambiguity results. New scripts should avoid the use of ‘.’ because it is not portable, and because it has undesirable results if the entire owner‘.’group happens to identify a user whose name contains ‘.’.
It is system dependent whether a user can change the group to an arbitrary one, or the more portable behavior of being restricted to setting a group of which the user is a member.
The chown command sometimes clears the set-user-ID or set-group-ID permission bits. This behavior depends on the policy and functionality of the underlying chown system call, which may make system-dependent file mode modifications outside the control of the chown command. For example, the chown command might not affect those bits when invoked by a user with appropriate privileges, or when the bits signify some function other than executable permission (e.g., mandatory locking). When in doubt, check the underlying system behavior.
The program accepts the following options. Also see Common options.
root might run:

find / -owner OLDUSER -print0 | xargs -0 chown -h NEWUSER
But that is dangerous because the interval between when the find tests the existing file's owner and when the chown is actually run may be quite large. One way to narrow the gap would be to invoke chown for each file as it is found:
find / -owner OLDUSER -exec chown -h NEWUSER {} \;
But that is very slow if there are many affected files. With this option, it is safer (the gap is narrower still) though still not perfect:
chown -h -R --from=OLDUSER NEWUSER /
This mode relies on the lchown system call. On systems that do not provide the lchown system call, chown fails when a file specified on the command line is a symbolic link. By default, no diagnostic is issued for symbolic links encountered during a recursive traversal, but see --verbose.

If a symbolic link is encountered on a system without the lchown system call, and --no-dereference is in effect, then issue a diagnostic saying neither the symbolic link nor its referent is being changed.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Change the owner of /u to "root".
chown root /u

# Likewise, but also change its group to "staff".
chown root:staff /u

# Change the owner of /u and subfiles to "root".
chown -hR root /u
chgrp changes the group ownership of each given file to group (which can be either a group name or a numeric group ID) or to the group of an existing reference file. See chown invocation. Synopsis:
chgrp [option]... {group | --reference=ref_file} file...
If group is intended to represent a numeric group ID, then you may specify it with a leading ‘+’. See Disambiguating names and IDs.
It is system dependent whether a user can change the group to an arbitrary one, or the more portable behavior of being restricted to setting a group of which the user is a member.
The program accepts the following options. Also see Common options.
This mode relies on the lchown system call. On systems that do not provide the lchown system call, chgrp fails when a file specified on the command line is a symbolic link. By default, no diagnostic is issued for symbolic links encountered during a recursive traversal, but see --verbose.

If a symbolic link is encountered on a system without the lchown system call, and --no-dereference is in effect, then issue a diagnostic saying neither the symbolic link nor its referent is being changed.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Change the group of /u to "staff".
chgrp staff /u

# Change the group of /u and subfiles to "staff".
chgrp -hR staff /u
chmod changes the access permissions of the named files. Synopsis:
chmod [option]... {mode | --reference=ref_file} file...
chmod never changes the permissions of symbolic links, since the chmod system call cannot change their permissions. This is not a problem since the permissions of symbolic links are never used. However, for each symbolic link listed on the command line, chmod changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered during recursive directory traversals.
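For example (file names are illustrative): chmod applied to a symlink named on the command line changes the pointed-to file.

```shell
printf 'x\n' > t
ln -s t s

chmod 600 s                     # acts on the target t, not on the symlink
stat -c %a t                    # 600
```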
A successful use of chmod clears the set-group-ID bit of a regular file if the file's group ID does not match the user's effective group ID or one of the user's supplementary group IDs, unless the user has appropriate privileges. Additional restrictions may cause the set-user-ID and set-group-ID bits of mode or ref_file to be ignored. This behavior depends on the policy and functionality of the underlying chmod system call. When in doubt, check the underlying system behavior.
If used, mode specifies the new file mode bits. For details, see the section on File permissions. If you really want mode to have a leading ‘-’, you should use -- first, e.g., ‘chmod -- -w file’. Typically, though, ‘chmod a-w file’ is preferable, and ‘chmod -w file’ (without the --) complains if it behaves differently from what ‘chmod a-w file’ would do.
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
touch changes the access and/or modification times of the specified files. Synopsis:
touch [option]... file...
Any file argument that does not exist is created empty, unless option --no-create (-c) or --no-dereference (-h) was in effect.
A file argument string of ‘-’ is handled specially and causes touch to change the times of the file associated with standard output.
By default, touch sets file timestamps to the current time. Because touch acts on its operands left to right, the resulting timestamps of earlier and later operands may disagree. Also, the determination of what time is “current” depends on the platform. Platforms with network file systems often use different clocks for the operating system and for file systems; because touch typically uses file systems' clocks by default, clock skew can cause the resulting file timestamps to appear to be in a program's “future” or “past”.
The touch command sets the file's timestamp to the greatest representable value that is not greater than the requested time. This can differ from the requested time for several reasons. First, the requested time may have a higher resolution than supported. Second, a file system may use different resolutions for different types of times. Third, file timestamps may use a different resolution than operating system timestamps. Fourth, the operating system primitives used to update timestamps may employ yet a different resolution. For example, in theory a file system might use 10-microsecond resolution for access time and 100-nanosecond resolution for modification time, and the operating system might use nanosecond resolution for the current time and microsecond resolution for the primitive that touch uses to set a file's timestamp to an arbitrary value.
When setting file timestamps to the current time, touch can change the timestamps for files that the user does not own but has write permission for. Otherwise, the user must own the files. Some older systems have a further restriction: the user must own the files unless both the access and modification times are being set to the current time.
Although touch provides options for changing two of the times (the times of last access and modification) of a file, there is actually a standard third one as well: the inode change time. This is often referred to as a file's ctime. The inode change time represents the time when the file's meta-information last changed. One common example of this is when the permissions of a file change. Changing the permissions doesn't access the file, so the atime doesn't change, nor does it modify the file, so the mtime doesn't change. Yet, something about the file itself has changed, and this must be noted somewhere. This is the job of the ctime field. This is necessary, so that, for example, a backup program can make a fresh copy of the file, including the new permissions value. Another operation that modifies a file's ctime without affecting the others is renaming. In any case, it is not possible, in normal operations, for a user to change the ctime field to a user-specified value.

Some operating systems and file systems support a fourth time: the birth time, when the file was first created; by definition, this timestamp never changes.
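The ctime behavior described above can be observed with GNU stat, where %Y is the mtime and %Z is the ctime in seconds since the epoch (a sketch; the file name f is illustrative):

```shell
printf 'x\n' > f
m1=$(stat -c %Y f); c1=$(stat -c %Z f)

sleep 1
chmod u+x f                     # metadata-only change: no read, no write

m2=$(stat -c %Y f); c2=$(stat -c %Z f)
# The mtime is unchanged, but the ctime has advanced:
test "$m1" -eq "$m2" && test "$c2" -gt "$c1" && echo 'ctime advanced'
```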
Time stamps assume the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ. You can avoid ambiguities during daylight saving transitions by using UTC time stamps.
The program accepts the following options. Also see Common options.
On older systems, touch supports an obsolete syntax, as follows. If no timestamp is given with any of the -d, -r, or -t options, and if there are two or more files and the first file is of the form ‘mmddhhmm[yy]’ and this would be a valid argument to the -t option (if the yy, if any, were moved to the front), and if the represented year is in the range 1969–1999, that argument is interpreted as the time for the other files instead of as a file name. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance), but portable scripts should avoid commands whose behavior depends on this variable. For example, use ‘touch ./12312359 main.c’ or ‘touch -t 12312359 main.c’ rather than the ambiguous ‘touch 12312359 main.c’.
An exit status of zero indicates success, and a nonzero value indicates failure.
No disk can hold an infinite amount of data. These commands report how much disk storage is in use or available, report other file and file status information, and write buffers to disk.
df reports the amount of disk space used and available on file systems. Synopsis:
df [option]... [file]...
With no arguments, df reports the space used and available on all currently mounted file systems (of all types). Otherwise, df reports on the file system containing each argument file.
Normally the disk space is printed in units of 1024 bytes, but this can be overridden (see Block size). Non-integer quantities are rounded up to the next higher unit.
For bind mounts, and without arguments, df outputs statistics only for the device with the shortest mount point name in the list of file systems (mtab); i.e., it hides duplicate entries, unless the -a option is specified.
By the same logic, df elides a mount entry of a dummy pseudo device if there is another mount entry of a real block device for that mount point with the same device number. For example, the early-boot pseudo file system ‘rootfs’ is not shown by default once the real root device has been mounted.
If an argument file resolves to a special file containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node. GNU df does not attempt to determine the disk usage on unmounted file systems, because on most kinds of systems doing so requires extremely nonportable intimate knowledge of file system structures.
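For example, to report only on the file system containing the current directory (a sketch assuming GNU df):

```shell
df -h .                         # human-readable report for one file system
df --output=target,pcent .      # just the mount point and usage percentage
```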
The program accepts the following options. Also see Common options.
For the grand total line, df prints ‘"total"’ into the source column, and ‘"-"’ into the target column. If there is no source column (see --output), then df prints ‘"total"’ into the target column, if present.
Do not invoke the sync system call before getting any usage data. This may make df run significantly faster on systems with many disks, but on some systems (notably SunOS) the results may be slightly out of date. This is the default.
The --output option is mutually exclusive with each of the options -i, -P, and -T.

FIELD_LIST is a comma-separated list of columns to be included in df's output, and therefore effectively controls the order of the output columns. Each field can be placed wherever desired, but may be used only once.
Valid field names in the field_list are:
The fields for block and inodes statistics are affected by the scaling options like -h as usual.
The definition of the field_list can even be split among several --output uses.
#!/bin/sh
# Print the TARGET (i.e., the mount point) along with the percentage
# usage of blocks and inodes.
df --out=target --output=pcent,ipcent

# Print all available fields.
df --o
Invoke the sync system call before getting any usage data. On some systems (notably SunOS), doing this yields more up to date results, but in general this option makes df much slower, especially when there are many or very busy file systems.
An exit status of zero indicates success, and a nonzero value indicates failure. Failure includes the case where no output is generated, so you can inspect the exit status of a command like ‘df -t ext3 -t reiserfs dir’ to test whether dir is on a file system of type ‘ext3’ or ‘reiserfs’.
Since the list of file systems (mtab) is needed to determine the file system type, failure includes the cases when that list cannot be read and one or more of the options -a, -l, -t or -x is used together with a file name argument.
du reports the amount of disk space used by the specified files and for each subdirectory (of directory arguments). Synopsis:
du [option]... [file]...
With no arguments, du reports the disk space for the current directory. Normally the disk space is printed in units of 1024 bytes, but this can be overridden (see Block size). Non-integer quantities are rounded up to the next higher unit.
If two or more hard links point to the same file, only one of the hard links is counted. The file argument order affects which links are counted, and changing the argument order may change the numbers that du outputs.
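The single-counting of hard links is easy to verify (a sketch; d is a hypothetical directory name):

```shell
mkdir d
dd if=/dev/zero of=d/a bs=1024 count=64 2>/dev/null   # a 64 KiB file

before=$(du -s d | cut -f1)
ln d/a d/b                      # second hard link to the same data
after=$(du -s d | cut -f1)

# The data behind d/a and d/b is counted only once:
test "$before" -eq "$after" && echo 'counted once'
```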
The program accepts the following options. Also see Common options.
The apparent size of a file is the number of bytes reported by wc -c on regular files, or more generally, ls -l --block-size=1 or stat --format=%s. For example, a file containing the word ‘zoo’ with no newline would, of course, have an apparent size of 3. Such a small file may require anywhere from 0 to 16 KiB or more of disk space, depending on the type and configuration of the file system on which the file resides. However, a sparse file created with this command:

dd bs=1 seek=2GiB if=/dev/null of=big

has an apparent size of 2 GiB, yet on most modern systems, it actually uses almost no disk space. The -b (--bytes) option is equivalent to ‘--apparent-size --block-size=1’.

‘du --max-depth=0’ is equivalent to ‘du -s’.
If size is positive, then du will only print entries with a size greater than or equal to that.
If size is negative, then du will only print entries with a size smaller than or equal to that.
Although GNU find can be used to find files of a certain size, du's --threshold option can be used to also filter directories based on a given size.
Please note that the --threshold option can be combined with the --apparent-size option, in which case entries are elided based on their apparent sizes.

Please note that the --threshold option can be combined with the --inodes option, in which case entries are elided based on their inode counts.
Here's how you would use --threshold to find directories with a size greater than or equal to 200 megabytes:
du --threshold=200MB
Here's how you would use --threshold to find directories and files (note the -a) with an apparent size smaller than or equal to 500 bytes:
du -a -t -500 --apparent-size
Here's how you would use --threshold to find directories on the root file system with 20000 or more inodes used in the directory tree below them:
du --inodes -x --threshold=20000 /
You can specify the default value of the --time-style option with the environment variable TIME_STYLE; if TIME_STYLE is not set the default style is ‘long-iso’. For compatibility with ls, if TIME_STYLE begins with ‘+’ and contains a newline, the newline and any later characters are ignored; if TIME_STYLE begins with ‘posix-’ the ‘posix-’ is ignored; and if TIME_STYLE is ‘locale’ it is ignored.
For example, ‘du --exclude='*.o'’ excludes files whose names end in ‘.o’.
On BSD systems, du reports sizes that are half the correct values for files that are NFS-mounted from HP-UX systems. On HP-UX systems, it reports sizes that are twice the correct values for files that are NFS-mounted from BSD systems. This is due to a flaw in HP-UX; it also affects the HP-UX du program.
An exit status of zero indicates success, and a nonzero value indicates failure.
stat displays information about the specified file(s). Synopsis:
stat [option]... [file]...
With no option, stat reports all information about the given files. It can also report information about the file systems on which the given files are located. If the files are links, stat can also give information about the files the links point to.
Due to shell aliases and built-in stat functions, using an unadorned stat interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., env stat ...) to avoid interference from the shell.
$ stat --format=%d:%i / /usr
2050:2
2057:2
$ stat --printf='%d:%i\n' / /usr
2050:2
2057:2
The valid format directives for files with --format and --printf are:
The ‘%t’ and ‘%T’ formats operate on the st_rdev member of the stat(2) structure, and are only defined for character and block special files. On some systems or file types, st_rdev may be used to represent other quantities.
The ‘%W’, ‘%X’, ‘%Y’, and ‘%Z’ formats accept a precision preceded by a period to specify the number of digits to print after the decimal point. For example, ‘%.3X’ outputs the last access time to millisecond precision. If a period is given but no precision, stat uses 9 digits, so ‘%.X’ is equivalent to ‘%.9X’. When discarding excess precision, time stamps are truncated toward minus infinity.
zero pad:
  $ stat -c '[%015Y]' /usr
  [000001288929712]
space align:
  $ stat -c '[%15Y]' /usr
  [     1288929712]
  $ stat -c '[%-15Y]' /usr
  [1288929712     ]
precision:
  $ stat -c '[%.3Y]' /usr
  [1288929712.114]
  $ stat -c '[%.Y]' /usr
  [1288929712.114951834]
The mount point printed by ‘%m’ is similar to that output by df, except that:
When listing file system information (--file-system (-f)), you must use a different set of format directives:
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ.
An exit status of zero indicates success, and a nonzero value indicates failure.
sync writes any data buffered in memory out to disk. This can include (but is not limited to) modified superblocks, modified inodes, and delayed reads and writes. This must be implemented by the kernel; the sync program does nothing but exercise the sync system call.
The kernel keeps data in memory to avoid doing (relatively slow) disk reads and writes. This improves performance, but if the computer crashes, data may be lost or the file system corrupted as a result. The sync command ensures everything in memory is written to disk.
Any arguments are ignored, except for a lone --help or --version (see Common options).
An exit status of zero indicates success, and a nonzero value indicates failure.
truncate shrinks or extends the size of each file to the specified size. Synopsis:
truncate option... file...
Any file that does not exist is created.
If a file is larger than the specified size, the extra data is lost. If a file is shorter, it is extended and the extended part (or hole) reads as zero bytes.
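A minimal sketch of both behaviors, using a throwaway file name:

```shell
# Extending pads with a hole that reads as zero bytes.
printf 'abc' > demo.txt       # 3 bytes
truncate -s 8 demo.txt        # extend to 8 bytes
wc -c < demo.txt              # 8
# Shrinking discards the extra data.
truncate -s 2 demo.txt
cat demo.txt                  # ab
```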
The program accepts the following options. Also see Common options.
‘KB’ =>           1000 (KiloBytes)
‘K’  =>           1024 (KibiBytes)
‘MB’ =>      1000*1000 (MegaBytes)
‘M’  =>      1024*1024 (MebiBytes)
‘GB’ => 1000*1000*1000 (GigaBytes)
‘G’  => 1024*1024*1024 (GibiBytes)
and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’.
size may also be prefixed by one of the following to adjust the size of each file based on its current size:
‘+’ => extend by
‘-’ => reduce by
‘<’ => at most
‘>’ => at least
‘/’ => round down to multiple of
‘%’ => round up to multiple of
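For instance, a sketch of the size prefixes in action (the file name is illustrative):

```shell
printf '12345' > f.bin     # 5 bytes
truncate -s '%8' f.bin     # round up to a multiple of 8 -> 8 bytes
truncate -s '/3' f.bin     # round down to a multiple of 3 -> 6 bytes
truncate -s '+2' f.bin     # extend by 2 -> 8 bytes
stat -c %s f.bin           # prints 8
```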
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that display text strings.
echo writes each given string to standard output, with a space between each and a newline after the last one. Synopsis:
echo [option]... [string]...
Due to shell aliases and built-in echo functions, using an unadorned echo interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., env echo ...) to avoid interference from the shell.
The program accepts the following options. Also see Common options. Options must precede operands, and the normally-special argument ‘--’ has no special meaning and is treated like any other string.
If the POSIXLY_CORRECT environment variable is set, then when echo's first argument is not -n it outputs option-like arguments instead of treating them as options. For example, echo -ne hello outputs ‘-ne hello’ instead of plain ‘hello’.
POSIX does not require support for any options, and says that the behavior of echo is implementation-defined if any string contains a backslash or if the first argument is -n. Portable programs can use the printf command if they need to omit trailing newlines or output control characters or backslashes. See printf invocation.
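For example, these printf equivalents avoid echo's portability pitfalls (a sketch; the strings are arbitrary):

```shell
printf 'no trailing newline'        # omit the newline portably
printf '%s\n' '-n'                  # print a string that looks like an option
printf 'one\ttab\n'                 # control characters via the format string
```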
An exit status of zero indicates success, and a nonzero value indicates failure.
printf does formatted printing of text. Synopsis:
printf format [argument]...
printf prints the format string, interpreting ‘%’ directives and ‘\’ escapes to format numeric and string arguments in a way that is mostly similar to the C ‘printf’ function. See printf format directives, for details. The differences are listed below.
Due to shell aliases and built-in printf functions, using an unadorned printf interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., env printf ...) to avoid interference from the shell.
A floating-point argument must use a period before any fractional digits, but is printed according to the LC_NUMERIC category of the current locale. For example, in a locale whose radix character is a comma, the command ‘printf %g 3.14’ outputs ‘3,14’ whereas the command ‘printf %g 3,14’ is an error. See Floating point.
printf interprets ‘\ooo’ in format as an octal number (if ooo is 1 to 3 octal digits) specifying a byte to print, and ‘\xhh’ as a hexadecimal number (if hh is 1 to 2 hex digits) specifying a character to print. Note however that when ‘\ooo’ specifies a number larger than 255, printf ignores the ninth bit. For example, ‘printf '\400'’ is equivalent to ‘printf '\0'’.
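A quick sketch of these byte escapes (invoked via env, as recommended above, to bypass any shell built-in):

```shell
env printf '\101\n'    # octal 101 = byte 65 = 'A'
env printf '\x41\n'    # hex 41 is the same byte
```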
printf interprets two character syntaxes introduced in ISO C 99: ‘\u’ for 16-bit Unicode (ISO/IEC 10646) characters, specified as four hexadecimal digits hhhh, and ‘\U’ for 32-bit Unicode characters, specified as eight hexadecimal digits hhhhhhhh. printf outputs the Unicode characters according to the LC_CTYPE locale. Unicode characters in the ranges U+0000...U+009F, U+D800...U+DFFF cannot be specified by this syntax, except for U+0024 ($), U+0040 (@), and U+0060 (`).
The processing of ‘\u’ and ‘\U’ requires a full-featured iconv facility. It is activated on systems with glibc 2.2 (or newer), or when libiconv is installed prior to this package. Otherwise ‘\u’ and ‘\U’ will print as-is.
The only options are a lone --help or --version. See Common options. Options must precede operands.
The Unicode character syntaxes are useful for writing strings in a locale independent way. For example, a string containing the Euro currency symbol
$ env printf '\u20AC 14.95'
will be output correctly in all locales supporting the Euro symbol (ISO-8859-15, UTF-8, and others). Similarly, a Chinese string
$ env printf '\u4e2d\u6587'
will be output correctly in all Chinese locales (GB2312, BIG5, UTF-8, etc).
Note that in these examples, the printf command has been invoked via env to ensure that we run the program found via your shell's search path, and not a shell alias or a built-in function.
For larger strings, you don't need to look up the hexadecimal code values of each character one by one. The encoding of ASCII characters mixed with \u escape sequences is also known as the JAVA source file encoding. You can use GNU recode 3.5c (or newer) to convert strings to this encoding. Here is how to convert a piece of text into a shell script which will output this text in a locale-independent way:
$ LC_CTYPE=zh_CN.big5 /usr/local/bin/printf \
    '\u4e2d\u6587\n' > sample.txt
$ recode BIG5..JAVA < sample.txt \
    | sed -e "s|^|/usr/local/bin/printf '|" -e "s|$|\\\\n'|" \
    > sample.sh
An exit status of zero indicates success, and a nonzero value indicates failure.
yes prints the command line arguments, separated by spaces and followed by a newline, forever until it is killed. If no arguments are given, it prints ‘y’ followed by a newline forever until killed.
Upon a write error, yes exits with status ‘1’.
The only options are a lone --help or --version. To output an argument that begins with ‘-’, precede it with --, e.g., ‘yes -- --help’. See Common options.
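Since yes runs until killed, it is typically paired with a consumer that closes the pipe; a small sketch:

```shell
# head exits after two lines; yes then receives SIGPIPE and stops.
yes hello | head -n 2
```

A common idiom is piping yes into a command that asks y/n questions, so every prompt is answered affirmatively.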
This section describes commands that are primarily useful for their exit status, rather than their output. Thus, they are often used as the condition of shell if statements, or as the last command in a pipeline.
false does nothing except return an exit status of 1, meaning failure. It can be used as a place holder in shell scripts where an unsuccessful command is needed. In most modern shells, false is a built-in command, so when you use ‘false’ in a script, you're probably using the built-in command, not the one documented here.
false honors the --help and --version options.
This version of false is implemented as a C program, and is thus more secure and faster than a shell script implementation, and may safely be used as a dummy shell for the purpose of disabling accounts.
Note that false (unlike all other programs documented herein) exits unsuccessfully, even when invoked with --help or --version.
Portable programs should not assume that the exit status of false is 1, as it is greater than 1 on some non-GNU hosts.
true does nothing except return an exit status of 0, meaning success. It can be used as a place holder in shell scripts where a successful command is needed, although the shell built-in command : (colon) may do the same thing faster. In most modern shells, true is a built-in command, so when you use ‘true’ in a script, you're probably using the built-in command, not the one documented here.
true honors the --help and --version options.
Note, however, that it is possible to cause true to exit with nonzero status: with the --help or --version option, and with standard output already closed or redirected to a file that evokes an I/O error. For example, using a Bourne-compatible shell:
$ ./true --version >&-
./true: write error: Bad file number
$ ./true --version > /dev/full
./true: write error: No space left on device
This version of true is implemented as a C program, and is thus more secure and faster than a shell script implementation, and may safely be used as a dummy shell for the purpose of disabling accounts.
test returns a status of 0 (true) or 1 (false) depending on the evaluation of the conditional expression expr. Each part of the expression must be a separate argument.
test has file status checks, string operators, and numeric comparison operators.
test has an alternate form that uses opening and closing square brackets instead of a leading ‘test’. For example, instead of ‘test -d /’, you can write ‘[ -d / ]’. The square brackets must be separate arguments; for example, ‘[-d /]’ does not have the desired effect. Since ‘test expr’ and ‘[ expr ]’ have the same meaning, only the former form is discussed below.
Synopses:
test expression
test [ expression ]
[ ]
[ option
Due to shell aliases and built-in test functions, using an unadorned test interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., env test ...) to avoid interference from the shell.
If expression is omitted, test returns false. If expression is a single argument, test returns false if the argument is null and true otherwise. The argument can be any string, including strings like ‘-d’, ‘-1’, ‘--’, ‘--help’, and ‘--version’ that most other programs would treat as options. To get help and version information, invoke the commands ‘[ --help’ and ‘[ --version’, without the usual closing brackets. See Common options.
0 if the expression is true, 1 if the expression is false, 2 if an error occurred.
These options test for particular types of files. (Everything's a file, but not all files are the same!)
These options test for particular access permissions.
These options test other file characteristics.
These options test string characteristics. You may need to quote string arguments for the shell. For example:
test -n "$V"
The quotes here prevent the wrong arguments from being passed to test if ‘$V’ is empty or contains special characters.
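A sketch of what goes wrong without the quotes, using an empty variable:

```shell
V=''
test -n "$V" && echo nonempty || echo empty   # prints 'empty'
# Unquoted, $V expands to nothing, so test sees only '-n',
# a single non-null argument, which is true:
test -n $V && echo nonempty || echo empty     # prints 'nonempty'
```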
Numeric relational operators. The arguments must be entirely numeric (possibly negative), or the special expression -l string, which evaluates to the length of string. For example:
test -1 -gt -2 && echo yes
⇒ yes
test -l abc -gt 1 && echo yes
⇒ yes
test 0x100 -eq 1
error--> test: integer expression expected before -eq
The usual logical connectives.
expr evaluates an expression and writes the result on standard output. Each token of the expression must be a separate argument.
Operands are either integers or strings. Integers consist of one or more decimal digits, with an optional leading ‘-’. expr converts anything appearing in an operand position to an integer or a string depending on the operation being applied to it.
Strings are not quoted for expr itself, though you may need to quote them to protect characters with special meaning to the shell, e.g., spaces. However, regardless of whether it is quoted, a string operand should not be a parenthesis or any of expr's operators like +, so you cannot safely pass an arbitrary string $str to expr merely by quoting it to the shell. One way to work around this is to use the GNU extension + (e.g., + "$str" = foo); a more portable way is to use " $str" and to adjust the rest of the expression to take the leading space into account (e.g., " $str" = " foo").
You should not pass a negative integer or a string with leading ‘-’ as expr's first argument, as it might be misinterpreted as an option; this can be avoided by parenthesization. Also, portable scripts should not use a string operand that happens to take the form of an integer; this can be worked around by inserting leading spaces as mentioned above.
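A sketch of both workarounds (the values are arbitrary):

```shell
x=-3
# Without parentheses, a leading '-3' could be taken as an option.
expr \( $x \) + 5    # prints 2
# For string comparisons, prefix a leading space on both sides instead:
expr " $x" = " -3"   # prints 1 (the strings match)
```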
Operators may be given as infix symbols or prefix keywords. Parentheses may be used for grouping in the usual manner. You must quote parentheses and many operators to avoid the shell evaluating them, however.
When built with support for the GNU MP library, expr uses arbitrary-precision arithmetic; otherwise, it uses native arithmetic types and may fail due to arithmetic overflow.
The only options are --help and --version. See Common options. Options must precede operands.
0 if the expression is neither null nor 0, 1 if the expression is null or 0, 2 if the expression is invalid, 3 if an internal error occurred (e.g., arithmetic overflow).
expr supports pattern matching and other string operators. These have higher precedence than both the numeric and relational operators (in the next sections).
grep) regular expression, with a ^ implicitly prepended. The first argument is then matched against this regular expression. If the match succeeds and regex uses ‘\(’ and ‘\)’, the : expression returns the part of string that matched the subexpression; otherwise, it returns the number of characters matched. If the match fails, the : operator returns the null string if ‘\(’ and ‘\)’ are used in regex, otherwise 0.
Only the first ‘\( ... \)’ pair is relevant to the return value; additional pairs are meaningful only for grouping the regular expression operators.
In the regular expression, \+, \?, and \| are operators which respectively match one or more, zero or one, or separate alternatives. SunOS and other expr's treat these as regular characters. (POSIX allows either behavior.)
See Regular Expression Library, for details of
regular expression syntax. Some examples are in Examples of expr.
/. This makes it possible to test expr length + "$x" or expr + "$x" : '.*/\(.\)' and have it do the right thing even if the value of $x happens to be (for example) / or index. This operator is a GNU extension. Portable shell scripts should use " $token" : ' \(.*\)' instead of + "$token".
To make expr interpret keywords as strings, you must use the quote operator.
expr supports the usual numeric operators, in order of increasing precedence. These numeric operators have lower precedence than the string operators described in the previous section, and higher precedence than the connectives (next section).
expr supports the usual logical connectives and relations. These have lower precedence than the string and numeric operators (previous sections). Here is the list, lowest-precedence operator first.
== is a synonym for =. expr first tries to convert both arguments to integers and do a numeric comparison; if either conversion fails, it does a lexicographic comparison using the character collating sequence specified by the LC_COLLATE locale.
Here are a few examples, including quoting for shell metacharacters.
To add 1 to the shell variable foo, in Bourne-compatible shells:
foo=$(expr $foo + 1)
To print the non-directory part of the file name stored in $fname, which need not contain a /:
expr $fname : '.*/\(.*\)' '|' $fname
An example showing that \+ is an operator:

expr aaa : 'a\+'
⇒ 3
expr abc : 'a\(.\)c'
⇒ b
expr index abcdef cz
⇒ 3
expr index index a
error--> expr: syntax error
expr index + index a
⇒ 0
Unix shells commonly provide several forms of redirection—ways to change the input source or output destination of a command. But one useful redirection is performed by a separate command, not by the shell; it's described here.
The tee command copies standard input to standard output and also to any files given as arguments. This is useful when you want not only to send some data down a pipe, but also to save a copy. Synopsis:
tee [option]... [file]...
If a file being written to does not already exist, it is created. If a file being written to already exists, the data it previously contained is overwritten unless the -a option is used.
A file of ‘-’ causes tee to send another copy of input to standard output, but this is typically not that useful as the copies are interleaved.
The program accepts the following options. Also see Common options.
The tee command is useful when you happen to be transferring a large amount of data and also want to summarize that data without reading it a second time. For example, when you are downloading a DVD image, you often want to verify its signature or checksum right away. The inefficient way to do it is simply:
wget http://example.com/some.iso && sha1sum some.iso
One problem with the above is that it makes you wait for the download to complete before starting the time-consuming SHA1 computation. Perhaps even more importantly, the above requires reading the DVD image a second time (the first was from the network).
The efficient way to do it is to interleave the download and SHA1 computation. Then, you'll get the checksum for free, because the entire process parallelizes so well:
# slightly contrived, to demonstrate process substitution
wget -O - http://example.com/dvd.iso \
  | tee >(sha1sum > dvd.sha1) > dvd.iso
That makes tee write not just to the expected output file, but also to a pipe running sha1sum and saving the final checksum in a file named dvd.sha1.
Note, however, that this example relies on a feature of modern shells called process substitution (the ‘>(command)’ syntax, above; see Process Substitution), so it works with zsh, bash, and ksh, but not with /bin/sh. So if you write code like this in a shell script, be sure to start the script with ‘#!/bin/bash’.
Since the above example writes to one file and one process, a more conventional and portable use of tee is even better:
wget -O - http://example.com/dvd.iso \
  | tee dvd.iso | sha1sum > dvd.sha1
You can extend this example to make tee write to two processes, computing MD5 and SHA1 checksums in parallel. In this case, process substitution is required:
wget -O - http://example.com/dvd.iso \
  | tee >(sha1sum > dvd.sha1) \
        >(md5sum > dvd.md5) \
  > dvd.iso
This technique is also useful when you want to make a compressed copy of the contents of a pipe. Consider a tool to graphically summarize disk usage data from ‘du -ak’. For a large hierarchy, ‘du -ak’ can run for a long time, and can easily produce terabytes of data, so you won't want to rerun the command unnecessarily. Nor will you want to save the uncompressed output.
Doing it the inefficient way, you can't even start the GUI until after you've compressed all of the du output:
du -ak | gzip -9 > /tmp/du.gz
gzip -dc /tmp/du.gz | xdiskusage -a
With tee and process substitution, you start the GUI right away and eliminate the decompression completely:
du -ak | tee >(gzip -9 > /tmp/du.gz) | xdiskusage -a
Finally, if you regularly create more than one type of compressed tarball at once, for example when make dist creates both gzip-compressed and bzip2-compressed tarballs, there may be a better way. Typical automake-generated Makefile rules create the two compressed tar archives with commands in sequence, like this (slightly simplified):
tardir=your-pkg-M.N
tar chof - "$tardir" | gzip -9 -c > your-pkg-M.N.tar.gz
tar chof - "$tardir" | bzip2 -9 -c > your-pkg-M.N.tar.bz2
However, if the hierarchy you are archiving and compressing is larger than a couple megabytes, and especially if you are using a multi-processor system with plenty of memory, then you can do much better by reading the directory contents only once and running the compression programs in parallel:
tardir=your-pkg-M.N
tar chof - "$tardir" \
  | tee >(gzip -9 -c > your-pkg-M.N.tar.gz) \
  | bzip2 -9 -c > your-pkg-M.N.tar.bz2
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that manipulate file names.
basename removes any leading directory components from name. Synopsis:
basename name [suffix]
basename option... name...
If suffix is specified and is identical to the end of name, it is removed from name as well. Note that since trailing slashes are removed prior to suffix matching, suffix will do nothing if it contains slashes. basename prints the result on standard output.
Together, basename and dirname are designed such that if ‘ls "$name"’ succeeds, then the command sequence ‘cd "$(dirname "$name")"; ls "$(basename "$name")"’ will, too. This works for everything except file names containing a trailing newline.
POSIX allows the implementation to define the results if name is empty or ‘//’. In the former case, GNU basename returns the empty string. In the latter case, the result is ‘//’ on platforms where // is distinct from /, and ‘/’ on platforms where there is no difference.
The program accepts the following options. Also see Common options. Options must precede operands.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Output "sort".
basename /usr/bin/sort

# Output "stdio".
basename include/stdio.h .h

# Output "stdio".
basename -s .h include/stdio.h

# Output "stdio" followed by "stdlib".
basename -a -s .h include/stdio.h include/stdlib.h
dirname prints all but the final slash-delimited component of each name. Slashes on either side of the final component are also removed. If the string contains no slash, dirname prints ‘.’ (meaning the current directory). Synopsis:
dirname [option] name...
name need not be a file name, but if it is, this operation effectively lists the directory that contains the final component, including the case when the final component is itself a directory.
Together, basename and dirname are designed such that if ‘ls "$name"’ succeeds, then the command sequence ‘cd "$(dirname "$name")"; ls "$(basename "$name")"’ will, too. This works for everything except file names containing a trailing newline.
POSIX allows the implementation to define the results if name is ‘//’. With GNU dirname, the result is ‘//’ on platforms where // is distinct from /, and ‘/’ on platforms where there is no difference.
The program accepts the following option. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Output "/usr/bin".
dirname /usr/bin/sort
dirname /usr/bin//.//

# Output "dir1" followed by "dir2".
dirname dir1/str dir2/str

# Output ".".
dirname stdio.h
pathchk checks validity and portability of file names. Synopsis:
pathchk [option]... name...
For each name, pathchk prints an error message if any of these conditions is true:
A nonexistent name is not an error, so long as a file with that name could be created under the above conditions.
The program accepts the following options. Also see Common options. Options must precede operands.
0 if all specified file names passed all checks, 1 otherwise.
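For example, a sketch of using pathchk as a guard before creating a file (the name here is illustrative):

```shell
name='report-2024.txt'
if pathchk -- "$name"; then
  : > "$name"          # only create the file if the name checks out
fi
```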
mktemp manages the creation of temporary files and directories. Synopsis:
mktemp [option]... [template]
Safely create a temporary file or directory based on template, and print its name. If given, template must include at least three consecutive ‘X’s in the last component. If omitted, the template ‘tmp.XXXXXXXXXX’ is used, and option --tmpdir is implied. The final run of ‘X’s in the template will be replaced by alpha-numeric characters; thus, on a case-sensitive file system, and with a template including a run of n instances of ‘X’, there are ‘62**n’ potential file names.
Older scripts used to create temporary files by simply joining the name of the program with the process id (‘$$’) as a suffix. However, that naming scheme is easily predictable, and suffers from a race condition where the attacker can create an appropriately named symbolic link, such that when the script then opens a handle to what it thought was an unused file, it is instead modifying an existing file. Using the same scheme to create a directory is slightly safer, since the mkdir will fail if the target already exists, but it is still inferior because it allows for denial of service attacks. Therefore, modern scripts should use the mktemp command to guarantee that the generated name will be unpredictable, and that knowledge of the temporary file name implies that the file was created by the current script and cannot be modified by other users.
When creating a file, the resulting file has read and write permissions for the current user, but no permissions for the group or others; these permissions are reduced if the current umask is more restrictive.
Here are some examples (although note that if you repeat them, you will most likely get different file names):
$ mktemp file.XXXX
file.H47c

$ mktemp --suffix=.txt file-XXXX
file-H08W.txt
$ mktemp file-XXXX-XXXX.txt
file-XXXX-eI9L.txt

$ dir=$(mktemp -p "${TMPDIR:-.}" -d dir-XXXX) || exit 1
$ fifo=$dir/fifo
$ mkfifo "$fifo" || { rmdir "$dir"; exit 1; }

$ file=$(mktemp -q) && {
>   # Safe to use $file only within this block.  Use quotes,
>   # since $TMPDIR, and thus $file, may contain whitespace.
>   echo ... > "$file"
>   rm "$file"
> }

$ mktemp -u XXX
Gb9
$ mktemp -u XXX
nzC
The program accepts the following options. Also see Common options.
0 if the file was created, 1 otherwise.
realpath expands all symbolic links and resolves references to ‘/./’, ‘/../’ and extra ‘/’ characters. By default, all but the last component of the specified files must exist. Synopsis:
realpath [option]... file...
The program accepts the following options. Also see Common options.
realpath --relative-to=/usr /tmp /usr/bin
⇒ ../tmp
⇒ bin
realpath --relative-base=/usr /tmp /usr/bin
⇒ /tmp
⇒ bin
0 if all file names were printed without issue, 1 otherwise.
This section describes commands that display or alter the context in which you are working: the current directory, the terminal settings, and so forth. See also the user-related commands in the next section.
pwd prints the name of the current directory. Synopsis:
pwd [option]...
The program accepts the following options. Also see Common options.
If -L and -P are both given, the last one takes precedence. If neither option is given, then this implementation uses -P as the default unless the POSIXLY_CORRECT environment variable is set.
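A sketch contrasting the two modes through a symbolic link (the directories are created on the fly), invoked via env for the reason described below:

```shell
dir=$(mktemp -d)
mkdir "$dir/real"
ln -s "$dir/real" "$dir/link"
cd "$dir/link"
env pwd -L    # logical: the path ends in /link
env pwd -P    # physical: symlinks resolved, ends in /real
```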
Due to shell aliases and built-in pwd functions, using an unadorned pwd interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., env pwd ...) to avoid interference from the shell.
An exit status of zero indicates success, and a nonzero value indicates failure.
stty prints or changes terminal characteristics, such as baud rate. Synopses:
stty [option] [setting]...
stty [option]
If given no line settings, stty prints the baud rate, line discipline number (on systems that support it), and line settings that have been changed from the values set by ‘stty sane’. By default, mode reading and setting are performed on the tty line connected to standard input, although this can be modified by the --file option.
stty accepts many non-option arguments that change aspects of the terminal line operation, as described below.
The program accepts the following options. Also see Common options.
O_NONDELAY flag to prevent a POSIX tty from blocking until the carrier detect line is high if the clocal flag is not set. Hence, it is not always possible to allow the shell to open the device in the traditional manner.
Many settings can be turned off by preceding them with a ‘-’. Such arguments are marked below with “May be negated” in their description. The descriptions themselves refer to the positive case, that is, when not negated (unless stated otherwise, of course).
Some settings are not available on all POSIX systems, since they use extensions. Such arguments are marked below with “Non-POSIX” in their description. On non-POSIX systems, those or other settings also may not be available, but it's not feasible to document all the variations: just try it and see.
An exit status of zero indicates success, and a nonzero value indicates failure.
These settings control operations on data received from the terminal.
stop character when the system input buffer is almost full, and start character when it becomes almost empty again. May be negated.
These settings control operations on data sent to the terminal.
interrupt, quit, and suspend special characters. May be negated.
erase, kill, werase, and rprnt special characters. May be negated.
erase characters as backspace-space-backspace. May be negated.
kill character. May be negated.
interrupt and quit special characters. May be negated.
icanon is set. Non-POSIX. May be negated.
kill special character by erasing each character on the line as indicated by the echoprt and echoe settings, instead of by the echoctl and echok settings. Non-POSIX. May be negated.
parenb -parodd cs7. May be negated. If negated, same as -parenb cs8.
parenb parodd cs7. May be negated. If negated, same as -parenb cs8.
-icrnl -onlcr. May be negated. If negated, same as icrnl -inlcr -igncr onlcr -ocrnl -onlret.
erase and kill special characters to their default values.
cread -ignbrk brkint -inlcr -igncr icrnl -ixoff -iuclc -ixany imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
and also sets all special characters to their default values.
brkint ignpar istrip icrnl ixon opost isig icanon, plus sets the eof and eol characters to their default values if they are the same as the min and time characters. May be negated. If negated, same as raw.
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel -opost -isig -icanon -xcase min 1 time 0
May be negated. If negated, same as cooked.
icanon.
-parenb -istrip cs8. May be negated. If negated, same as parenb istrip cs7.
parenb istrip opost cs7.
Same as tab0. Non-POSIX. May be negated. If negated, same as tab3.
Same as xcase iuclc olcuc. Non-POSIX. May be negated. (Used for terminals with uppercase characters only.)
Same as echoe echoctl echoke.
Same as echoe echoctl echoke -ixany intr ^C erase ^? kill C-u.
The special characters' default values vary from system to system. They are set with the syntax ‘name value’, where the names are listed below and the value can be given either literally, in hat notation (‘^c’), or as an integer which may start with ‘0x’ to indicate hexadecimal, ‘0’ to indicate octal, or any other digit to indicate decimal.
For GNU stty, giving a value of ^- or undef disables that special character. (This is incompatible with Ultrix stty, which uses a value of ‘u’ to disable a special character. GNU stty treats a value ‘u’ like any other, namely to set that special character to <U>.)
exta and extb: exta is the same as 19200; extb is the same as 38400. Many systems, including GNU/Linux, support higher speeds. The stty command includes support for speeds of 57600, 115200, 230400, 460800, 500000, 576000, 921600, 1000000, 1152000, 1500000, 2000000, 2500000, 3000000, 3500000, or 4000000 where the system supports these. 0 hangs up the line if -clocal is set.
printenv prints environment variable values. Synopsis:
printenv [option] [variable]...
If no variables are specified, printenv prints the value of every environment variable. Otherwise, it prints the value of each variable that is set, and nothing for those that are not set.
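A quick sketch of both behaviors (DEMO_VAR is a hypothetical variable set just for this example):

```shell
# Set a variable for illustration; printenv prints only the values of
# variables that actually exist in the environment.
DEMO_VAR=hello
export DEMO_VAR
printenv DEMO_VAR NO_SUCH_VAR_12345
echo "exit status: $?"
```

Since one of the requested variables was not found, printenv prints only ‘hello’ and exits with status 1.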
The program accepts the following option. Also see Common options.
0 if all variables specified were found
1 if at least one specified variable was not found
2 if a write error occurred
tty prints the file name of the terminal connected to its standard input. It prints ‘not a tty’ if standard input is not a terminal. Synopsis:
tty [option]...
The program accepts the following option. Also see Common options.
0 if standard input is a terminal
1 if standard input is not a terminal
2 if given incorrect arguments
3 if a write error occurs
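For example, forcing standard input away from the terminal shows both the message and the exit status (a minimal sketch):

```shell
# With standard input redirected from /dev/null, it is not a terminal,
# so tty prints the message and exits with status 1.
tty </dev/null
echo "exit status: $?"
```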
This section describes commands that print user-related information: logins, groups, and so forth.
id prints information about the given user, or the process running it if no user is specified. Synopsis:
id [option]... [user]
user can be either a user ID or a name, with name look-up taking precedence unless the ID is specified with a leading ‘+’. See Disambiguating names and IDs.
By default, it prints the real user ID, real group ID, effective user ID if different from the real user ID, effective group ID if different from the real group ID, and supplemental group IDs. In addition, if SELinux is enabled and the POSIXLY_CORRECT environment variable is not set, then print ‘context=c’, where c is the security context.
Each of these numeric values is preceded by an identifying string and followed by the corresponding user or group name in parentheses.
The options cause id to print only part of the above information. Also see Common options.
Example:
$ id -Gn --zero
users <NUL> devs <NUL>
Primary and supplementary groups for a process are normally inherited from its parent and are usually unchanged since login. This means that if you change the group database after logging in, id will not reflect your changes within your existing login session. Running id with a user argument causes the user and group database to be consulted afresh, and so will give a different result.
An exit status of zero indicates success, and a nonzero value indicates failure.
logname prints the calling user's name, as found in a system-maintained file (often /var/run/utmp or /etc/utmp), and exits with a status of 0. If there is no entry for the calling process, logname prints an error message and exits with a status of 1.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
whoami prints the user name associated with the current effective user ID. It is equivalent to the command ‘id -un’.
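The stated equivalence can be checked directly (a minimal sketch):

```shell
# whoami and 'id -un' should agree on the effective user name.
[ "$(whoami)" = "$(id -un)" ] && echo "equivalent"
```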
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
groups prints the names of the primary and any supplementary groups for each given username, or the current process if no names are given. If more than one name is given, the name of each user is printed before the list of that user's groups and the user name is separated from the group list by a colon. Synopsis:
groups [username]...
The group lists are equivalent to the output of the command ‘id -Gn’.
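A small sketch of that equivalence for the current process (with no username arguments, groups prints just the group list):

```shell
# The group list for the current process should match 'id -Gn'.
[ "$(groups)" = "$(id -Gn)" ] && echo "identical"
```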
The only options are --help and --version. See Common options.
Primary and supplementary groups for a process are normally inherited from its parent and are usually unchanged since login. This means that if you change the group database after logging in, groups will not reflect your changes within your existing login session. Running groups with a list of users causes the user and group database to be consulted afresh, and so will give a different result.
An exit status of zero indicates success, and a nonzero value indicates failure.
users prints on a single line a blank-separated list of user names of users currently logged in to the current host. Each user name corresponds to a login session, so if a user has more than one login session, that user's name will appear the same number of times in the output. Synopsis:
users [file]
With no file argument, users extracts its information from a system-maintained file (often /var/run/utmp or /etc/utmp). If a file argument is given, users uses that file instead. A common choice is /var/log/wtmp.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
who prints information about users who are currently logged on. Synopsis:
who [option] [file] [am i]
If given no non-option arguments, who prints the following information for each user currently logged on: login name, terminal line, login time, and remote hostname or X display.
If given one non-option argument, who uses that instead of a default system-maintained file (often /var/run/utmp or /etc/utmp) as the name of the file containing the record of users logged on. /var/log/wtmp is commonly given as an argument to who to look at who has previously logged on.
If given two non-option arguments, who prints only the entry for the user running it (determined from its standard input), preceded by the hostname. Traditionally, the two arguments given are ‘am i’, as in ‘who am i’.
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ.
The program accepts the following options. Also see Common options.
‘+’ allowing write messages
‘-’ disallowing write messages
‘?’ cannot find terminal device
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that print or change system-wide information.
date [option]... [+format]
date [-u|--utc|--universal] [ MMDDhhmm[[CC]YY][.ss] ]
Invoking date with no format argument is equivalent to invoking it with a default format that depends on the LC_TIME locale category. In the default C locale, this format is ‘+%a %b %e %H:%M:%S %Z %Y’, so the output looks like ‘Thu Mar  3 13:47:51 PST 2005’.
Normally, date uses the time zone rules indicated by the TZ environment variable, or the system default rules if TZ is not set. See Specifying the Time Zone with TZ.
If given an argument that starts with a ‘+’, date prints the current date and time (or the date and time specified by the --date option, see below) in the format defined by that argument, which is similar to that of the strftime function. Except for conversion specifiers, which start with ‘%’, characters in the format string are printed unchanged. The conversion specifiers are described below.
An exit status of zero indicates success, and a nonzero value indicates failure.
date conversion specifiers related to times.
date conversion specifiers related to dates.
date conversion specifiers that produce literal strings.
Unless otherwise specified, date normally pads numeric fields with zeros, so that, for example, numeric months are always output as two digits. Seconds since the epoch are not padded, though, since there is no natural width for them.
As a GNU extension, date recognizes any of the following optional flags after the ‘%’:
Here are some examples of padding:
date +%d/%m -d "Feb 1"     ⇒ 01/02
date +%-d/%-m -d "Feb 1"   ⇒ 1/2
date +%_d/%_m -d "Feb 1"   ⇒ 1/ 2
As a GNU extension, you can specify the field width (after any flag, if present) as a decimal number. If the natural size of the output of the field has less than the specified number of characters, the result is written right adjusted and padded to the given size. For example, ‘%9B’ prints the right adjusted month name in a field of width 9.
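For example, in the C locale ‘February’ is eight characters, so a width of 9 pads it with one leading space (a small sketch; the brackets are only there to make the padding visible):

```shell
# Width 9 right-adjusts the month name within the field.
LC_ALL=C date -u -d 2005-02-01 '+[%9B]'
# → [ February]
```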
An optional modifier can follow the optional flag and width specification. The modifiers are:
If the format supports the modifier but no alternate representation is available, it is ignored.
If given an argument that does not start with ‘+’, date sets the system clock to the date and time specified by that argument (as described below). You must have appropriate privileges to set the system clock. Note for changes to persist across a reboot, the hardware clock may need to be updated from the system clock, which might not happen automatically on your system.
The argument must consist entirely of digits, which have the following meaning:
Note, the --date and --set options may not be used with an argument in the above format. The --universal option may be used with such an argument to indicate that the specified date and time are relative to Coordinated Universal Time rather than to the local time zone.
The program accepts the following options. Also see Common options.
date -d "$(LC_TIME=C date)"
See Date input formats.
The argument timespec specifies the number of additional terms of the time to include. It can be one of the following:
If showing any time terms, then include the time zone using the format
‘%z’.
Fri, 09 Sep 2005 13:51:39 -0700
This format conforms to Internet RFCs 2822 and 822, the current and previous standards for Internet email.
The argument timespec specifies how much of the time to include. It can be one of the following:
Here are a few examples. Also see the documentation for the -d option in the previous section.
date --date='2 days ago'
date --date='3 months 1 day'
date --date='25 Dec' +%j
date '+%B %d'
But this may not be what you want because for the first nine days of the month, the ‘%d’ expands to a zero-padded two-digit field, for example ‘date -d 1may '+%B %d'’ will print ‘May 01’.
date -d 1may '+%B %-d'
date +%m%d%H%M%Y.%S
date --set='+2 minutes'
Fri, 09 Sep 2005 13:51:39 -0700
date --date='1970-01-01 00:02:00 +0000' +%s
120
If you do not specify time zone information in the date string, date uses your computer's idea of the time zone when interpreting the string. For example, if your computer's time zone is that of Cambridge, Massachusetts, which was then 5 hours (i.e., 18,000 seconds) behind UTC:
# local time zone used
date --date='1970-01-01 00:02:00' +%s
18120
date --date='2000-01-01 UTC' +%s
946684800
An alternative is to use the --utc (-u) option. Then you may omit ‘UTC’ from the date string. Although this produces the same result for ‘%s’ and many other format sequences, with a time zone offset different from zero, it would give a different result for zone-dependent formats like ‘%z’.
date -u --date=2000-01-01 +%s
946684800
To convert such an unwieldy number of seconds back to a more readable form, use a command like this:
# local time zone used
date -d '1970-01-01 UTC 946684800 seconds' +"%Y-%m-%d %T %z"
1999-12-31 19:00:00 -0500
Or if you do not mind depending on the ‘@’ feature present since coreutils 5.3.0, you could shorten this to:
date -d @946684800 +"%F %T %z"
1999-12-31 19:00:00 -0500
Often it is better to output UTC-relative date and time:
date -u -d '1970-01-01 946684800 seconds' +"%Y-%m-%d %T %z"
2000-01-01 00:00:00 +0000
Here is how the two kinds of systems handle the leap second at 2012-06-30 23:59:60 UTC:
# Typical systems ignore leap seconds:
date --date='2012-06-30 23:59:59 +0000' +%s
1341100799
date --date='2012-06-30 23:59:60 +0000' +%s
date: invalid date '2012-06-30 23:59:60 +0000'
date --date='2012-07-01 00:00:00 +0000' +%s
1341100800
# Atypical systems count leap seconds:
date --date='2012-06-30 23:59:59 +0000' +%s
1341100823
date --date='2012-06-30 23:59:60 +0000' +%s
1341100824
date --date='2012-07-01 00:00:00 +0000' +%s
1341100825
arch prints the machine hardware name, and is equivalent to ‘uname -m’. Synopsis:
arch [option]
The program accepts the Common options only.
An exit status of zero indicates success, and a nonzero value indicates failure.
Print the number of processing units available to the current process, which may be less than the number of online processors. If this information is not accessible, then print the number of processors installed. If the OMP_NUM_THREADS environment variable is set, then it will determine the returned value. The result is guaranteed to be greater than zero. Synopsis:
nproc [option]
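For example, OMP_NUM_THREADS overrides the detected count (a sketch; the value actually detected varies from machine to machine):

```shell
# The environment variable takes precedence over the hardware count.
OMP_NUM_THREADS=3 nproc
# Without the variable, nproc reports the processing units available:
nproc
```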
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
uname prints information about the machine and operating system it is run on. If no options are given, uname acts as if the -s option were given. Synopsis:
uname [option]...
If multiple options or -a are given, the selected information is printed in this order:
kernel-name nodename kernel-release kernel-version machine processor hardware-platform operating-system
The information may contain internal spaces, so such output cannot be parsed reliably. In the following example, release is ‘2.2.18ss.e820-bda652a #4 SMP Tue Jun 5 11:24:08 PDT 2001’:
uname -a
⇒ Linux dumdum 2.2.18 #4 SMP Tue Jun 5 11:24:08 PDT 2001 i686 unknown unknown GNU/Linux
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
With no arguments, hostname prints the name of the current host system. With one argument, it sets the current host name to the specified string. You must have appropriate privileges to set the host name. Synopsis:
hostname [name]
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
hostid prints the numeric identifier of the current host in hexadecimal. This command accepts no arguments. The only options are --help and --version. See Common options.
For example, here's what it prints on one system I use:
$ hostid
1bac013d
On that system, the 32-bit quantity happens to be closely related to the system's Internet address, but that isn't always the case.
An exit status of zero indicates success, and a nonzero value indicates failure.
uptime prints the current time, the system's uptime, the number of logged-in users and the current load average.
If an argument is specified, it is used as the file to be read to discover how many users are logged in. If no argument is specified, a system default is used (uptime --help indicates the default setting).
The only options are --help and --version. See Common options.
For example, here's what it prints right now on one system I use:
$ uptime
 14:07  up 3:35, 3 users, load average: 1.39, 1.15, 1.04
The precise method of calculation of load average varies somewhat between systems. Some systems calculate it as the average number of runnable processes over the last 1, 5 and 15 minutes, but some systems also include processes in the uninterruptible sleep state (that is, those processes which are waiting for disk I/O). The Linux kernel includes uninterruptible processes.
This section describes commands for operations with SELinux contexts.
chcon changes the SELinux security context of the selected files. Synopses:
chcon [option]... context file...
chcon [option]... [-u user] [-r role] [-l range] [-t type] file...
chcon [option]... --reference=rfile file...
Change the SELinux security context of each file to context. With --reference, change the security context of each file to that of rfile.
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
runcon runs file in specified SELinux security context.
Synopses:
runcon context command [args]
runcon [-c] [-u user] [-r role] [-t type] [-l range] command [args]
Run command with completely-specified context, or with current or transitioned security context modified by one or more of level, role, type and user.
If none of -c, -t, -u, -r, or -l is specified, the first argument is used as the complete context. Any additional arguments after command are interpreted as arguments to the command.
With neither context nor command, print the current security context.
The program accepts the following options. Also see Common options.
126 if command is found but cannot be invoked
127 if runcon itself fails or if command cannot be found
the exit status of command otherwise
This section describes commands that run other commands in some context different than the current one: a modified environment, as a different user, etc.
chroot runs a command with a specified root directory. On many systems, only the super-user can do this. Synopses:
chroot option newroot [command [args]...]
chroot option
Ordinarily, file names are looked up starting at the root of the directory structure, i.e., /. chroot changes the root to the directory newroot (which must exist) and then runs command with optional args. If command is not specified, the default is the value of the SHELL environment variable or /bin/sh if not set, invoked with the -i option. command must not be a special built-in utility (see Special built-in utilities).
The program accepts the following options. Also see Common options. Options must precede operands.
The user and group name look-up performed by the --userspec and --groups options, is done both outside and inside the chroot, with successful look-ups inside the chroot taking precedence. If the specified user or group items are intended to represent a numeric ID, then a name to ID resolving step is avoided by specifying a leading ‘+’. See Disambiguating names and IDs.
Here are a few tips to help avoid common problems in using chroot. To start with a simple example, make command refer to a statically linked binary. If you were to use a dynamically linked executable, then you'd have to arrange to have the shared libraries in the right place under your new root directory.
For example, if you create a statically linked ls executable, and put it in /tmp/empty, you can run this command as root:
$ chroot /tmp/empty /ls -Rl /
Then you'll see output like this:
/:
total 1023
-rwxr-xr-x 1 0 0 1041745 Aug 16 11:17 ls
If you want to use a dynamically linked executable, say bash, then first run ‘ldd bash’ to see what shared objects it needs. Then, in addition to copying the actual binary, also copy the listed files to the required positions under your intended new root directory. Finally, if the executable requires any other files (e.g., data, state, device files), copy them into place, too.
125 if chroot itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
env runs a command with a modified environment. Synopses:
env [option]... [name=value]... [command [args]...]
env
Operands of the form ‘variable=value’ set the environment variable variable to value value. value may be empty (‘variable=’). Setting a variable to an empty value is different from unsetting it. These operands are evaluated left-to-right, so if two operands mention the same variable the earlier is ignored.
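For example, when the same variable appears twice, the later assignment is the one the command sees (a minimal sketch):

```shell
# A=1 is evaluated first, then overridden by A=2.
env A=1 A=2 sh -c 'echo "$A"'
# → 2
```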
Environment variable names can be empty, and can contain any characters other than ‘=’ and ASCII NUL. However, it is wise to limit yourself to names that consist solely of underscores, digits, and ASCII letters, and that begin with a non-digit, as applications like the shell do not work well with other names.
The first operand that does not contain the character ‘=’ specifies the program to invoke; it is searched for according to the PATH environment variable. Any remaining arguments are passed as arguments to that program. The program should not be a special built-in utility (see Special built-in utilities).
Modifications to PATH take effect prior to searching for command. Use caution when reducing PATH; behavior is not portable when PATH is undefined or omits key directories such as /bin.
In the rare case that a utility contains a ‘=’ in the name, the only way to disambiguate it from a variable assignment is to use an intermediate command for command, and pass the problematic program name via args. For example, if ./prog= is an executable in the current PATH:
env prog= true                      # runs 'true', with prog= in environment
env ./prog= true                    # runs 'true', with ./prog= in environment
env -- prog= true                   # runs 'true', with prog= in environment
env sh -c '\prog= true'             # runs 'prog=' with argument 'true'
env sh -c 'exec "$@"' sh prog= true # also runs 'prog='
If no command name is specified following the environment specifications, the resulting environment is printed. This is like specifying the printenv program.
For some examples, suppose the environment passed to env contains ‘LOGNAME=rms’, ‘EDITOR=emacs’, and ‘PATH=.:/gnubin:/hacks’:
$ env | LC_ALL=C sort
EDITOR=emacs
LOGNAME=rms
PATH=.:/gnubin:/hacks
env - PATH="$PATH" foo
env foo
env DISPLAY=gnu:0 LOGNAME=foo nemacs
env -u EDITOR PATH=/energy -- e=mc2 bar baz
The program accepts the following options. Also see Common options. Options must precede operands.
0 if no command is specified and the environment is output
125 if env itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
nice prints a process's niceness, or runs a command with modified niceness. niceness affects how favorably the process is scheduled in the system. Synopsis:
nice [option]... [command [arg]...]
If no arguments are given, nice prints the current niceness. Otherwise, nice runs the given command with its niceness adjusted. By default, its niceness is incremented by 10.
Niceness values range at least from −20 (process has high priority and gets more resources, thus slowing down other processes) through 19 (process has lower priority and runs slowly itself, but has less impact on the speed of other running processes). Some systems may have a wider range of niceness values; conversely, other systems may enforce more restrictive limits. An attempt to set the niceness outside the supported range is treated as an attempt to use the minimum or maximum supported value.
A niceness should not be confused with a scheduling priority, which lets applications determine the order in which threads are scheduled to run. Unlike a priority, a niceness is merely advice to the scheduler, which the scheduler is free to ignore. Also, as a point of terminology, POSIX defines the behavior of nice in terms of a nice value, which is the nonnegative difference between a niceness and the minimum niceness. Though nice conforms to POSIX, its documentation and diagnostics use the term “niceness” for compatibility with historical practice.
command must not be a special built-in utility (see Special built-in utilities).
Due to shell aliases and built-in nice functions, using an unadorned nice interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., ‘env nice ...’) to avoid interference from the shell.
Note that to change the niceness of an existing process, you need to use the renice command.
The program accepts the following option. Also see Common options. Options must precede operands.
For compatibility nice also supports an obsolete option syntax -adjustment. New scripts should use -n adjustment instead.
0 if no command is specified and the niceness is output
125 if nice itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
It is sometimes useful to run a non-interactive program with reduced niceness.
$ nice factor 4611686018427387903
Since nice prints the current niceness, you can invoke it through itself to demonstrate how it works.
The default behavior is to increase the niceness by ‘10’:
$ nice
0
$ nice nice
10
$ nice -n 10 nice
10
The adjustment is relative to the current niceness. In the next example, the first nice invocation runs the second one with niceness 10, and it in turn runs the final one with a niceness that is 3 more:
$ nice nice -n 3 nice
13
Specifying a niceness larger than the supported range is the same as specifying the maximum supported value:
$ nice -n 10000000000 nice
19
Only a privileged user may run a process with lower niceness:
$ nice -n -1 nice
nice: cannot set niceness: Permission denied
0
$ sudo nice -n -1 nice
-1
nohup runs the given command with hangup signals ignored, so that the command can continue running in the background after you log out. Synopsis:
nohup command [arg]...
If standard input is a terminal, it is redirected from /dev/null so that terminal sessions do not mistakenly consider the terminal to be used by the command. This is a GNU extension; programs intended to be portable to non-GNU hosts should use ‘nohup command [arg]... </dev/null’ instead.
If standard output is a terminal, the command's standard output is appended to the file nohup.out; if that cannot be written to, it is appended to the file $HOME/nohup.out; and if that cannot be written to, the command is not run. Any nohup.out or $HOME/nohup.out file created by nohup is made readable and writable only to the user, regardless of the current umask settings.
If standard error is a terminal, it is normally redirected to the same file descriptor as the (possibly-redirected) standard output. However, if standard output is closed, standard error terminal output is instead appended to the file nohup.out or $HOME/nohup.out as above.
To capture the command's output to a file other than nohup.out you can redirect it. For example, to capture the output of make:
nohup make > make.log
nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an ‘&’. Also, nohup does not alter the niceness of command; use nice for that, e.g., ‘nohup nice command’.
command must not be a special built-in utility (see Special built-in utilities).
The only options are --help and --version. See Common options. Options must precede operands.
125 if nohup itself fails, and POSIXLY_CORRECT is not set
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
If POSIXLY_CORRECT is set, internal failures give status 127 instead of 125.
stdbuf allows one to modify the buffering operations of the three standard I/O streams associated with a program. Synopsis:
stdbuf option... command
command must start with the name of a program that uses the ISO C FILE streams for input/output (note the programs dd and cat don't do that), and that does not adjust the buffering of its standard streams. Any additional args are passed as additional arguments to the command.
The program accepts the following options. Also see Common options.
The mode can be specified as follows:
fread will still block until EOF or error, even if the underlying read returns less data than requested.
‘KB’ => 1000 (KiloBytes)
‘K’  => 1024 (KibiBytes)
‘MB’ => 1000*1000 (MegaBytes)
‘M’  => 1024*1024 (MebiBytes)
‘GB’ => 1000*1000*1000 (GigaBytes)
‘G’  => 1024*1024*1024 (GibiBytes)
and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’.
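For example, a filter in the middle of a pipeline can be switched to line buffering so each line is forwarded as soon as it is complete (a sketch; the buffering effect only becomes visible with a slow producer ahead of the filter):

```shell
# Without -oL, tr writing to a pipe would accumulate output in larger
# blocks; with -oL each translated line is flushed immediately.
printf 'one\ntwo\n' | stdbuf -oL tr a-z A-Z
```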
125 if stdbuf itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
timeout runs the given command and kills it if it is still running after the specified time interval. Synopsis:
timeout [option] duration command [arg]...
command must not be a special built-in utility (see Special built-in utilities).
The program accepts the following options. Also see Common options. Options must precede operands.
Note that in this mode of operation, any children of command will not be timed out.
duration is a floating point number followed by an optional unit:
‘s’ for seconds (the default)
‘m’ for minutes
‘h’ for hours
‘d’ for days
A duration of 0 disables the associated timeout. Note that the actual timeout duration is dependent on system conditions, which should be especially considered when specifying sub-second timeouts.
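A minimal sketch: a command that overruns its limit is killed and timeout exits with status 124, while one that finishes in time passes its own exit status through.

```shell
# sleep 10 exceeds the 0.5-second limit and is terminated.
timeout 0.5 sleep 10
echo "status: $?"    # → status: 124
# true finishes well within 5 seconds; its status is passed through.
timeout 5 true
echo "status: $?"    # → status: 0
```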
124 if command times out
125 if timeout itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
137 if command is sent the KILL(9) signal (128+9)
the exit status of command otherwise
The kill command sends a signal to processes, causing them to terminate or otherwise act upon receiving the signal in some way. Alternatively, it lists information about signals. Synopses:
kill [-s signal | --signal signal | -signal] pid...
kill [-l | --list | -t | --table] [signal]...
Due to shell aliases and built-in kill functions, using an unadorned kill interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., ‘env kill ...’) to avoid interference from the shell.
The first form of the kill command sends a signal to all pid arguments. The default signal to send if none is specified is ‘TERM’. The special signal number ‘0’ does not denote a valid signal, but can be used to test whether the pid arguments specify processes to which a signal could be sent.
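For example, signal 0 can probe whether a process still exists without affecting it (a minimal sketch using a background sleep):

```shell
sleep 10 & pid=$!
kill -0 "$pid" && echo "process exists"
kill -TERM "$pid"          # TERM is the default signal
wait "$pid" 2>/dev/null    # reap the terminated process
kill -0 "$pid" 2>/dev/null || echo "process gone"
```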
If pid is positive, the signal is sent to the process with the process ID pid. If pid is zero, the signal is sent to all processes in the process group of the current process. If pid is −1, the signal is sent to all processes for which the user has permission to send a signal. If pid is less than −1, the signal is sent to all processes in the process group that equals the absolute value of pid.
If pid is not positive, a system-dependent set of system processes is excluded from the list of processes to which the signal is sent.
If a negative pid argument is desired as the first one, it should be preceded by --. However, as a common extension to POSIX, -- is not required with ‘kill -signal -pid’. The following commands are equivalent:
kill -15 -1
kill -TERM -1
kill -s TERM -- -1
kill -- -1
The first form of the kill command succeeds if every pid argument specifies at least one process that the signal was sent to.
The second form of the kill command lists signal information. Either the -l or --list option, or the -t or --table option must be specified. Without any signal argument, all supported signals are listed. The output of -l or --list is a list of the signal names, one per line; if signal is already a name, the signal number is printed instead. The output of -t or --table is a table of signal numbers, names, and descriptions. This form of the kill command succeeds if all signal arguments are valid and if there is no output error.
The kill command also supports the --help and --version options. See Common options.
A signal may be a signal name like ‘HUP’, or a signal number like ‘1’, or an exit status of a process terminated by the signal. A signal name can be given in canonical form or prefixed by ‘SIG’. The case of the letters is ignored, except for the -signal option which must use upper case to avoid ambiguity with lower case option letters. See Signal specifications, for a list of supported signal names and numbers.
sleep pauses for an amount of time specified by the sum of the values of the command line arguments. Synopsis:
sleep number[smhd]...
Each argument is a number followed by an optional unit; the default is seconds. The units are:
Historical implementations of sleep have required that number be an integer, and only accepted a single argument without a suffix. However, GNU sleep accepts arbitrary floating point numbers. See Floating point.
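For example, fractional values and multiple arguments can be combined; this sketch (GNU sleep assumed, invoked via env to bypass any shell builtin) pauses for 0.3 seconds either way:

```shell
# GNU sleep sums its arguments, so both of these commands
# pause for a total of 0.3 seconds.
env sleep 0.3
env sleep 0.1 0.2
```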
The only options are --help and --version. See Common options.
Due to shell aliases and built-in sleep functions, using an unadorned sleep interactively or in a script may get you different functionality than that described here. Invoke it via env (i.e., ‘env sleep ...’) to avoid interference from the shell.
An exit status of zero indicates success, and a nonzero value indicates failure.
These programs do numerically-related operations.
factor prints prime factors. Synopses:
factor [number]...
factor option
If no number is specified on the command line, factor reads numbers from standard input, delimited by newlines, tabs, or spaces.
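For example, both invocation styles produce the same kind of output (a sketch; ‘42’ and ‘2011’ are arbitrary sample numbers):

```shell
# Factor a command line argument, then a number read from stdin.
factor 42                  # prints "42: 2 3 7"
printf '2011\n' | factor   # prints "2011: 2011" (2011 is prime)
```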
The factor command supports only a small number of options:
Factoring the product of the eighth and ninth Mersenne primes takes about 30 milliseconds of CPU time on a 2.2 GHz Athlon.
M8=$(echo 2^31-1 | bc)
M9=$(echo 2^61-1 | bc)
n=$(echo "$M8 * $M9" | bc)
/usr/bin/time -f %U factor $n
4951760154835678088235319297: 2147483647 2305843009213693951
0.03
Similarly, factoring the eighth Fermat number 2^256+1 takes about 20 seconds on the same machine.
Factoring large numbers is, in general, hard. The Pollard Rho algorithm used by factor is particularly effective for numbers with relatively small factors. If you wish to factor large numbers which do not have small factors (for example, numbers which are the product of two large primes), other methods are far better.
If factor is built without using GNU MP, only single-precision arithmetic is available, and so large numbers (typically 2^64 and above) will not be supported. The single-precision code uses an algorithm which is designed for factoring smaller numbers.
An exit status of zero indicates success, and a nonzero value indicates failure.
seq prints a sequence of numbers to standard output. Synopses:
seq [option]... last
seq [option]... first last
seq [option]... first increment last
seq prints the numbers from first to last by increment. By default, each number is printed on a separate line. When increment is not specified, it defaults to ‘1’, even when first is larger than last. first also defaults to ‘1’. So ‘seq 1’ prints ‘1’, but ‘seq 0’ and ‘seq 10 5’ produce no output.
The sequence of numbers ends when the sum of the current number and increment would become greater than last, so ‘seq 1 10 10’ only produces ‘1’.
Floating-point numbers may be specified. See Floating point.
The program accepts the following options. Also see Common options. Options must precede operands.
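For instance, the -w (equal width) and -s (separator) options control padding and joining; a brief sketch:

```shell
# -w pads with leading zeros to a uniform width;
# -s joins the numbers with the given separator.
seq -w 8 10    # prints 08, 09, 10 on separate lines
seq -s, 1 5    # prints 1,2,3,4,5
```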
The default format is derived from first, step, and last. If these all use a fixed point decimal representation, the default format is ‘%.pf’, where p is the minimum precision that can represent the output numbers exactly. Otherwise, the default format is ‘%g’.
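For instance, the operand with the most digits after the decimal point determines p (a sketch):

```shell
# The increment 0.25 has two decimal places, so the default
# format is %.2f.
seq 1 0.25 2    # prints 1.00 1.25 1.50 1.75 2.00, one per line
```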
You can get finer-grained control over output with -f:
$ seq -f '(%9.2E)' -9e5 1.1e6 1.3e6
(-9.00E+05)
( 2.00E+05)
( 1.30E+06)
If you want hexadecimal integer output, you can use printf to perform the conversion:
$ printf '%x\n' $(seq 1048575 1024 1050623)
fffff
1003ff
1007ff
For very long lists of numbers, use xargs to avoid system limitations on the length of an argument list:
$ seq 1000000 | xargs printf '%x\n' | tail -n 3
f423e
f423f
f4240
To generate octal output, use the printf ‘%o’ format instead of ‘%x’.
On most systems, seq can produce whole-number output for values up to at least 2^53. Larger integers are approximated. The details differ depending on your floating-point implementation. See Floating point. A common case is that seq works with integers through 2^64, and larger integers may not be numerically correct:
$ seq 50000000000000000000 2 50000000000000000004
50000000000000000000
50000000000000000000
50000000000000000004
However, when seq is limited to non-negative whole numbers, an increment of 1, and no format-specifying option, it can print arbitrarily large numbers.
Be careful when using seq with outlandish values: otherwise you may see surprising results, as seq uses floating point internally. For example, on the x86 platform, where the internal representation uses a 64-bit fraction, the command:
seq 1 0.0000000000000000001 1.0000000000000000009
outputs 1.0000000000000000007 twice and skips 1.0000000000000000008.
An exit status of zero indicates success, and a nonzero value indicates failure.
Each file has a set of file mode bits that control the kinds of access that users have to that file. They can be represented either in symbolic form or as an octal number.
The file mode bits have two parts: the file permission bits, which control ordinary access to the file, and special mode bits, which affect only some files.
There are three kinds of permissions that a user can have for a file:
There are three categories of users who may have different permissions to perform any of the above operations on a file:
Files are given an owner and group when they are created. Usually the owner is the current user and the group is the group of the directory the file is in, but this varies with the operating system, the file system the file is created on, and the way the file is created. You can change the owner and group of a file by using the chown and chgrp commands.
In addition to the three sets of three permissions listed above, the file mode bits have three special components, which affect only executable files (programs) and, on most systems, directories:
For regular files on some older systems, the bit saves the program's text image on the swap device so it will load more quickly when run; this is called the sticky bit.
In addition to the file mode bits listed above, there may be file attributes specific to the file system, e.g., access control lists (ACLs), whether a file is compressed, whether a file can be modified (immutability), and whether a file can be dumped. These are usually set using programs specific to the file system. For example:
Even if a file's mode bits allow an operation on that file, that operation may still fail, because:
For example, if the immutable attribute is set on a file, it cannot be modified, regardless of the fact that you may have just run ‘chmod a+w FILE’.
Symbolic modes represent changes to files' mode bits as operations on single-character symbols. They allow you to modify either all or selected parts of files' mode bits, optionally based on their previous values, and perhaps on the current umask as well (see Umask and Protection).
The format of symbolic modes is:
[ugoa...][-+=]perms...[,...]
where perms is either zero or more letters from the set ‘rwxXst’, or a single letter from the set ‘ugo’.
The following sections describe the operators and other details of symbolic modes.
The basic symbolic operations on a file's permissions are adding, removing, and setting the permission that certain users have to read, write, and execute or search the file. These operations have the following format:
users operation permissions
The spaces between the three parts above are shown for readability only; symbolic modes cannot contain spaces.
The users part tells which users' access to the file is changed. It consists of one or more of the following letters (or it can be empty; see Umask and Protection, for a description of what happens then). When more than one of these letters is given, the order that they are in does not matter.
u
g
o
a
The operation part tells how to change the affected users' access to the file, and is one of the following symbols:
+
-
=
The permissions part tells what kind of access to the file should be changed; it is normally zero or more of the following letters. As with the users part, the order does not matter when more than one letter is given. Omitting the permissions part is useful only with the ‘=’ operation, where it gives the specified users no access at all to the file.
r
w
x
For example, to give everyone permission to read and write a regular file, but not to execute it, use:
a=rw
To remove write permission for all users other than the file's owner, use:
go-w
The above command does not affect the access that the owner of the file has to it, nor does it affect whether other users can read or execute the file.
To give everyone except a file's owner no permission to do anything with that file, use the mode below. Other users could still remove the file, if they have write permission on the directory it is in.
go=
Another way to specify the same thing is:
og-rwx
You can base a file's permissions on its existing permissions. To do this, instead of using a series of ‘r’, ‘w’, or ‘x’ letters after the operator, you use the letter ‘u’, ‘g’, or ‘o’. For example, the mode
o+g
adds the permissions for users who are in a file's group to the permissions that other users have for the file. Thus, if the file started out as mode 664 (‘rw-rw-r--’), the above mode would change it to mode 666 (‘rw-rw-rw-’). If the file had started out as mode 741 (‘rwxr----x’), the above mode would change it to mode 745 (‘rwxr--r-x’). The ‘-’ and ‘=’ operations work analogously.
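The 664 to 666 example above can be checked directly; this sketch assumes GNU stat for the ‘%a’ format and uses a throwaway temporary file:

```shell
# 'o+g' copies the group permissions onto the other-users bits,
# so mode 664 becomes 666.
f=$(mktemp)
chmod 664 "$f"
chmod o+g "$f"
stat -c %a "$f"    # prints 666
rm -f "$f"
```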
In addition to changing a file's read, write, and execute/search permissions, you can change its special mode bits. See Mode Structure, for a summary of these special mode bits.
To change the file mode bits to set the user ID on execution, use ‘u’ in the users part of the symbolic mode and ‘s’ in the permissions part.
To change the file mode bits to set the group ID on execution, use ‘g’ in the users part of the symbolic mode and ‘s’ in the permissions part.
To set both user and group ID on execution, omit the users part of the symbolic mode (or use ‘a’) and use ‘s’ in the permissions part.
To change the file mode bits to set the restricted deletion flag or sticky bit, omit the users part of the symbolic mode (or use ‘a’) and use ‘t’ in the permissions part.
For example, to set the set-user-ID mode bit of a program, you can use the mode:
u+s
To remove both set-user-ID and set-group-ID mode bits from it, you can use the mode:
a-s
To set the restricted deletion flag or sticky bit, you can use the mode:
+t
The combination ‘o+s’ has no effect. On GNU systems the combinations ‘u+t’ and ‘g+t’ have no effect, and ‘o+t’ acts like plain ‘+t’.
The ‘=’ operator is not very useful with special mode bits. For example, the mode:
o=t
does set the restricted deletion flag or sticky bit, but it also removes all read, write, and execute/search permissions that users not in the file's group might have had for it.
See Directory Setuid and Setgid, for additional rules concerning set-user-ID and set-group-ID bits and directories.
There is one more special type of symbolic permission: if you use ‘X’ instead of ‘x’, execute/search permission is affected only if the file is a directory or already had execute permission.
For example, this mode:
a+X
gives all users permission to search directories, or to execute files if anyone could execute them before.
The format of symbolic modes is actually more complex than described above (see Setting Permissions). It provides two ways to make multiple changes to files' mode bits.
The first way is to specify multiple operation and permissions parts after a users part in the symbolic mode.
For example, the mode:
og+rX-w
gives users other than the owner of the file read permission and, if it is a directory or if someone already had execute permission to it, gives them execute/search permission; and it also denies them write permission to the file. It does not affect the permission that the owner of the file has for it. The above mode is equivalent to the two modes:
og+rX og-w
The second way to make multiple changes is to specify more than one simple symbolic mode, separated by commas. For example, the mode:
a+r,go-w
gives everyone permission to read the file and removes write permission on it for all users except its owner. Another example:
u=rwx,g=rx,o=
sets all of the permission bits for the file explicitly. (It gives users who are not in the file's group no permission at all for it.)
The two methods can be combined. The mode:
a+r,g+x-w
gives all users permission to read the file, and gives users who are in the file's group permission to execute/search it as well, but not permission to write to it. The above mode could be written in several different ways; another is:
u+r,g+rx,o+r,g-w
If the users part of a symbolic mode is omitted, it defaults to ‘a’ (affect all users), except that any permissions that are set in the system variable umask are not affected. The value of umask can be set using the umask command. Its default value varies from system to system.

Omitting the users part of a symbolic mode is generally not useful with operations other than ‘+’. It is useful with ‘+’ because it allows you to use umask as an easily customizable protection against giving away more permission to files than you intended to.
As an example, if umask has the value 2, which removes write permission for users who are not in the file's group, then the mode:
+w
adds permission to write to the file to its owner and to other users who are in the file's group, but not to other users. In contrast, the mode:
a+w
ignores umask, and does give write permission for the file to all users.
As an alternative to giving a symbolic mode, you can give an octal (base 8) number that represents the mode. This number is always interpreted in octal; you do not have to add a leading ‘0’, as you do in C. Mode ‘0055’ is the same as mode ‘55’. (However, modes of five digits or more, such as ‘00055’, are sometimes special. See Directory Setuid and Setgid.)
A numeric mode is usually shorter than the corresponding symbolic mode, but it is limited in that normally it cannot take into account the previous file mode bits; it can only set them absolutely. The set-user-ID and set-group-ID bits of directories are an exception to this general limitation. See Directory Setuid and Setgid. Also, operator numeric modes can take previous file mode bits into account. See Operator Numeric Modes.
The permissions granted to the user, to other users in the file's group, and to other users not in the file's group each require three bits, which are represented as one octal digit. The three special mode bits also require one bit each, and they are as a group represented as another octal digit. Here is how the bits are arranged, starting with the lowest valued bit:
Value in Mode   Corresponding Mode Bit

                Other users not in the file's group:
      1         Execute/search
      2         Write
      4         Read

                Other users in the file's group:
     10         Execute/search
     20         Write
     40         Read

                The file's owner:
    100         Execute/search
    200         Write
    400         Read

                Special mode bits:
   1000         Restricted deletion flag or sticky bit
   2000         Set group ID on execution
   4000         Set user ID on execution
For example, numeric mode ‘4755’ corresponds to symbolic mode ‘u=rwxs,go=rx’, and numeric mode ‘664’ corresponds to symbolic mode ‘ug=rw,o=r’. Numeric mode ‘0’ corresponds to symbolic mode ‘a=’.
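The correspondence is easy to verify; this sketch (GNU stat assumed) shows that a numeric mode and its symbolic equivalent set identical bits:

```shell
# Numeric mode 664 and symbolic mode ug=rw,o=r are equivalent.
f=$(mktemp)
chmod 664 "$f";       a=$(stat -c %A "$f")
chmod ug=rw,o=r "$f"; b=$(stat -c %A "$f")
[ "$a" = "$b" ] && echo "both give $a"
rm -f "$f"
```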
An operator numeric mode is a numeric mode that is prefixed by a ‘-’, ‘+’, or ‘=’ operator, which has the same interpretation as in symbolic modes. For example, ‘+440’ enables read permission for the file's owner and group, ‘-1’ disables execute permission for other users, and ‘=600’ clears all permissions except for enabling read-write permissions for the file's owner. Operator numeric modes can be combined with symbolic modes by separating them with a comma; for example, ‘=0,u+r’ clears all permissions except for enabling read permission for the file's owner.
The commands ‘chmod =755 dir’ and ‘chmod 755 dir’ differ in that the former clears the directory dir's setuid and setgid bits, whereas the latter preserves them. See Directory Setuid and Setgid.
Operator numeric modes are a GNU extension.
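As a sketch of the ‘+440’ example above (GNU chmod and stat assumed, using a throwaway temporary file):

```shell
# '+440' enables read for the owner and group, leaving the
# other existing bits alone, so mode 600 becomes 640.
f=$(mktemp)
chmod 600 "$f"
chmod +440 "$f"
stat -c %a "$f"    # prints 640
rm -f "$f"
```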
On most systems, if a directory's set-group-ID bit is set, newly created subfiles inherit the same group as the directory, and newly created subdirectories inherit the set-group-ID bit of the parent directory. On a few systems, a directory's set-user-ID bit has a similar effect on the ownership of new subfiles and the set-user-ID bits of new subdirectories. These mechanisms let users share files more easily, by lessening the need to use chmod or chown to share new files.
These convenience mechanisms rely on the set-user-ID and set-group-ID bits of directories. If commands like chmod and mkdir routinely cleared these bits on directories, the mechanisms would be less convenient and it would be harder to share files. Therefore, a command like chmod does not affect the set-user-ID or set-group-ID bits of a directory unless the user specifically mentions them in a symbolic mode, or uses an operator numeric mode such as ‘=755’, or sets them in a numeric mode, or clears them in a numeric mode that has five or more octal digits. For example, on systems that support set-group-ID inheritance:
# These commands leave the set-user-ID and
# set-group-ID bits of the subdirectories alone,
# so that they retain their default values.
mkdir A B C
chmod 755 A
chmod 0755 B
chmod u=rwx,go=rx C
mkdir -m 755 D
mkdir -m 0755 E
mkdir -m u=rwx,go=rx F
If you want to try to set these bits, you must mention them explicitly in the symbolic or numeric modes, e.g.:
# These commands try to set the set-user-ID
# and set-group-ID bits of the subdirectories.
mkdir G
chmod 6755 G
chmod +6000 G
chmod u=rwx,go=rx,a+s G
mkdir -m 6755 H
mkdir -m +6000 I
mkdir -m u=rwx,go=rx,a+s J
If you want to try to clear these bits, you must mention them explicitly in a symbolic mode, or use an operator numeric mode, or specify a numeric mode with five or more octal digits, e.g.:
# These commands try to clear the set-user-ID
# and set-group-ID bits of the directory D.
chmod a-s D
chmod -6000 D
chmod =755 D
chmod 00755 D
This behavior is a GNU extension. Portable scripts should not rely on requests to set or clear these bits on directories, as POSIX allows implementations to ignore these requests. The GNU behavior with numeric modes of four or fewer digits is intended for scripts portable to systems that preserve these bits; the behavior with numeric modes of five or more digits is for scripts portable to systems that do not preserve the bits.
Our units of temporal measurement, from seconds on up to months, are so complicated, asymmetrical and disjunctive so as to make coherent mental reckoning in time all but impossible. Indeed, had some tyrannical god contrived to enslave our minds to time, to make it all but impossible for us to escape subjection to sodden routines and unpleasant surprises, he could hardly have done better than handing down our present system. It is like a set of trapezoidal building blocks, with no vertical or horizontal surfaces, like a language in which the simplest thought demands ornate constructions, useless particles and lengthy circumlocutions. Unlike the more successful patterns of language and science, which enable us to face experience boldly or at least level-headedly, our system of temporal calculation silently and persistently encourages our terror of time.

...It is as though architects had to measure length in feet, width in meters and height in ells; as though basic instruction manuals demanded a knowledge of five different languages. It is no wonder then that we often look into our own immediate past or future, last Tuesday or a week from Sunday, with feelings of helpless confusion. ...
—Robert Grudin, Time and the Art of Living.
This section describes the textual date representations that GNU programs accept. These are the strings you, as a user, can supply as arguments to the various programs. The C interface (via the parse_datetime function) is not described here.
A date is a string, possibly empty, containing many items separated by whitespace. The whitespace may be omitted when no ambiguity arises. The empty string means the beginning of today (i.e., midnight). Order of the items is immaterial. A date string may contain many flavors of items:
We describe each of these item types in turn, below.
A few ordinal numbers may be written out in words in some contexts. This is most useful for specifying day of the week items or relative items (see below). Among the most commonly used ordinal numbers, the word ‘last’ stands for -1, ‘this’ stands for 0, and ‘first’ and ‘next’ both stand for 1. Because the word ‘second’ stands for the unit of time there is no way to write the ordinal number 2, but for convenience ‘third’ stands for 3, ‘fourth’ for 4, ‘fifth’ for 5, ‘sixth’ for 6, ‘seventh’ for 7, ‘eighth’ for 8, ‘ninth’ for 9, ‘tenth’ for 10, ‘eleventh’ for 11 and ‘twelfth’ for 12.
When a month is written this way, it is still considered to be written numerically, instead of being “spelled in full”; this changes the allowed strings.
In the current implementation, only English is supported for words and abbreviations like ‘AM’, ‘DST’, ‘EST’, ‘first’, ‘January’, ‘Sunday’, ‘tomorrow’, and ‘year’.
The output of the date command is not always acceptable as a date string, not only because of the language problem, but also because there is no standard meaning for time zone items like ‘IST’. When using date to generate a date string intended to be parsed later, specify a date format that is independent of language and that does not use time zone items other than ‘UTC’ and ‘Z’. Here are some ways to do this:
$ LC_ALL=C TZ=UTC0 date
Mon Mar  1 00:21:42 UTC 2004
$ TZ=UTC0 date +'%Y-%m-%d %H:%M:%SZ'
2004-03-01 00:21:42Z
$ date --rfc-3339=ns  # --rfc-3339 is a GNU extension.
2004-02-29 16:21:42.692722128-08:00
$ date --rfc-2822  # a GNU extension
Sun, 29 Feb 2004 16:21:42 -0800
$ date +'%Y-%m-%d %H:%M:%S %z'  # %z is a GNU extension.
2004-02-29 16:21:42 -0800
$ date +'@%s.%N'  # %s and %N are GNU extensions.
@1078100502.692722128
Alphabetic case is completely ignored in dates. Comments may be introduced between round parentheses, as long as included parentheses are properly nested. Hyphens not followed by a digit are currently ignored. Leading zeros on numbers are ignored.
Invalid dates like ‘2005-02-29’ or times like ‘24:00’ are rejected. In the typical case of a host that does not support leap seconds, a time like ‘23:59:60’ is rejected even if it corresponds to a valid leap second.
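For example, with GNU date (a sketch):

```shell
# An impossible date is rejected; a real leap day is accepted.
date -d '2005-02-29' +%F 2>/dev/null || echo rejected   # prints "rejected"
date -d '2004-02-29' +%F                                # prints 2004-02-29
```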
A calendar date item specifies a day of the year. It is specified differently, depending on whether the month is specified numerically or literally. All these strings specify the same calendar date:
1972-09-24     # ISO 8601.
72-9-24        # Assume 19xx for 69 through 99,
               # 20xx for 00 through 68.
72-09-24       # Leading zeros are ignored.
9/24/72        # Common U.S. writing.
24 September 1972
24 Sept 72     # September has a special abbreviation.
24 Sep 72      # Three-letter abbreviations always allowed.
Sep 24, 1972
24-sep-72
24sep72
The year can also be omitted. In this case, the last specified year is used, or the current year if none. For example:
9/24
sep 24
Here are the rules.
For numeric months, the ISO 8601 format ‘year-month-day’ is allowed, where year is any positive number, month is a number between 01 and 12, and day is a number between 01 and 31. A leading zero must be present if a number is less than ten. If year is 68 or smaller, then 2000 is added to it; otherwise, if year is less than 100, then 1900 is added to it. The construct ‘month/day/year’, popular in the United States, is accepted. Also ‘month/day’, omitting the year.
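The two-digit year rule can be seen with GNU date (a sketch; the sample dates are arbitrary):

```shell
# Years 00-68 are taken as 20xx; years 69-99 as 19xx.
date -d '68-01-02' +%Y    # prints 2068
date -d '69-01-02' +%Y    # prints 1969
```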
Literal months may be spelled out in full: ‘January’, ‘February’, ‘March’, ‘April’, ‘May’, ‘June’, ‘July’, ‘August’, ‘September’, ‘October’, ‘November’ or ‘December’. Literal months may be abbreviated to their first three letters, possibly followed by an abbreviating dot. It is also permitted to write ‘Sept’ instead of ‘September’.
When months are written literally, the calendar date may be given as any of the following:
day month year
day month
month day year
day-month-year
Or, omitting the year:
month day
A time of day item in date strings specifies the time on a given day. Here are some examples, all of which represent the same time:
20:02:00.000000
20:02
8:02pm
20:02-0500  # In EST (U.S. Eastern Standard Time).
More generally, the time of day may be given as ‘hour:minute:second’, where hour is a number between 0 and 23, minute is a number between 0 and 59, and second is a number between 0 and 59 possibly followed by ‘.’ or ‘,’ and a fraction containing one or more digits. Alternatively, ‘:second’ can be omitted, in which case it is taken to be zero. On the rare hosts that support leap seconds, second may be 60.
If the time is followed by ‘am’ or ‘pm’ (or ‘a.m.’ or ‘p.m.’), hour is restricted to run from 1 to 12, and ‘:minute’ may be omitted (taken to be zero). ‘am’ indicates the first half of the day, ‘pm’ indicates the second half of the day. In this notation, 12 is the predecessor of 1: midnight is ‘12am’ while noon is ‘12pm’. (This is the zero-oriented interpretation of ‘12am’ and ‘12pm’, as opposed to the old tradition derived from Latin which uses ‘12m’ for noon and ‘12pm’ for midnight.)
The time may alternatively be followed by a time zone correction, expressed as ‘shhmm’, where s is ‘+’ or ‘-’, hh is a number of zone hours and mm is a number of zone minutes. The zone minutes term, mm, may be omitted, in which case the one- or two-digit correction is interpreted as a number of hours. You can also separate hh from mm with a colon. When a time zone correction is given this way, it forces interpretation of the time relative to Coordinated Universal Time (UTC), overriding any previous specification for the time zone or the local time zone. For example, ‘+0530’ and ‘+05:30’ both stand for the time zone 5.5 hours ahead of UTC (e.g., India). This is the best way to specify a time zone correction by fractional parts of an hour. The maximum zone correction is 24 hours.
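For example, with GNU date (a sketch; TZ=UTC0 makes the output zone explicit):

```shell
# 06:30 at zone correction +0530 is 01:00 UTC.
TZ=UTC0 date -d '2004-10-31 06:30 +0530' +'%F %T'   # 2004-10-31 01:00:00
```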
Either ‘am’/‘pm’ or a time zone correction may be specified, but not both.
A time zone item specifies an international time zone, indicated by a small set of letters, e.g., ‘UTC’ or ‘Z’ for Coordinated Universal Time. Any included periods are ignored. By following a non-daylight-saving time zone by the string ‘DST’ in a separate word (that is, separated by some white space), the corresponding daylight saving time zone may be specified. Alternatively, a non-daylight-saving time zone can be followed by a time zone correction, to add the two values. This is normally done only for ‘UTC’; for example, ‘UTC+05:30’ is equivalent to ‘+05:30’.
Time zone items other than ‘UTC’ and ‘Z’ are obsolescent and are not recommended, because they are ambiguous; for example, ‘EST’ has a different meaning in Australia than in the United States. Instead, it's better to use unambiguous numeric time zone corrections like ‘-0500’, as described in the previous section.
If neither a time zone item nor a time zone correction is supplied, time stamps are interpreted using the rules of the default time zone (see Specifying time zone rules).
The ISO 8601 date and time of day extended format consists of an ISO 8601 date, a ‘T’ character separator, and an ISO 8601 time of day. This format is also recognized if the ‘T’ is replaced by a space.
In this format, the time of day should use 24-hour notation. Fractional seconds are allowed, with either comma or period preceding the fraction. ISO 8601 fractional minutes and hours are not supported. Typically, hosts support nanosecond timestamp resolution; excess precision is silently discarded.
Here are some examples:
2012-09-24T20:02:00.052-0500
2012-12-31T23:59:59,999999999+1100
1970-01-01 00:00Z
The explicit mention of a day of the week will forward the date (only if necessary) to reach that day of the week in the future.
Days of the week may be spelled out in full: ‘Sunday’, ‘Monday’, ‘Tuesday’, ‘Wednesday’, ‘Thursday’, ‘Friday’ or ‘Saturday’. Days may be abbreviated to their first three letters, optionally followed by a period. The special abbreviations ‘Tues’ for ‘Tuesday’, ‘Wednes’ for ‘Wednesday’ and ‘Thur’ or ‘Thurs’ for ‘Thursday’ are also allowed.
A number may precede a day of the week item to move forward supplementary weeks. It is best used in expressions like ‘third monday’. In this context, ‘last day’ or ‘next day’ is also acceptable; they move one week before or after the day that day by itself would represent.
A comma following a day of the week item is ignored.
Relative items adjust a date (or the current date if none) forward or backward. The effects of relative items accumulate. Here are some examples:
1 year
1 year ago
3 years
2 days
The unit of time displacement may be selected by the string ‘year’ or ‘month’ for moving by whole years or months. These are fuzzy units, as years and months are not all of equal duration. More precise units are ‘fortnight’ which is worth 14 days, ‘week’ worth 7 days, ‘day’ worth 24 hours, ‘hour’ worth 60 minutes, ‘minute’ or ‘min’ worth 60 seconds, and ‘second’ or ‘sec’ worth one second. An ‘s’ suffix on these units is accepted and ignored.
The unit of time may be preceded by a multiplier, given as an optionally signed number. Unsigned numbers are taken as positively signed. No number at all implies 1 for a multiplier. Following a relative item by the string ‘ago’ is equivalent to preceding the unit by a multiplier with value -1.
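For example, with GNU date (a sketch; TZ=UTC0 avoids daylight saving surprises):

```shell
# 'ago' reverses the direction of the displacement.
TZ=UTC0 date -d '2004-02-29 2 days'     +%F   # prints 2004-03-02
TZ=UTC0 date -d '2004-02-29 2 days ago' +%F   # prints 2004-02-27
```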
The string ‘tomorrow’ is worth one day in the future (equivalent to ‘day’), the string ‘yesterday’ is worth one day in the past (equivalent to ‘day ago’).
The strings ‘now’ or ‘today’ are relative items corresponding to zero-valued time displacement; these strings come from the fact that a zero-valued time displacement represents the current time when not otherwise changed by previous items. They may be used to stress other items, as in ‘12:00 today’. The string ‘this’ also has the meaning of a zero-valued time displacement, but is preferred in date strings like ‘this thursday’.
When a relative item causes the resulting date to cross a boundary where the clocks were adjusted, typically for daylight saving time, the resulting date and time are adjusted accordingly.
The fuzz in units can cause problems with relative items. For example, ‘2003-07-31 -1 month’ might evaluate to 2003-07-01, because 2003-06-31 is an invalid date. To determine the previous month more reliably, you can ask for the month before the 15th of the current month. For example:
$ date -R
Thu, 31 Jul 2003 13:02:39 -0700
$ date --date='-1 month' +'Last month was %B?'
Last month was July?
$ date --date="$(date +%Y-%m-15) -1 month" +'Last month was %B!'
Last month was June!
Also, take care when manipulating dates around clock changes such as daylight saving leaps. In a few cases these have added or subtracted as much as 24 hours from the clock, so it is often wise to adopt universal time by setting the TZ environment variable to ‘UTC0’ before embarking on calendrical calculations.
The precise interpretation of a pure decimal number depends on the context in the date string.
If the decimal number is of the form yyyymmdd and no other calendar date item (see Calendar date items) appears before it in the date string, then yyyy is read as the year, mm as the month number and dd as the day of the month, for the specified calendar date.
If the decimal number is of the form hhmm and no other time of day item appears before it in the date string, then hh is read as the hour of the day and mm as the minute of the hour, for the specified time of day. mm can also be omitted.
If both a calendar date and a time of day appear to the left of a number in the date string, but no relative item, then the number overrides the year.
If you precede a number with ‘@’, it represents an internal time stamp as a count of seconds. The number can contain an internal decimal point (either ‘.’ or ‘,’); any excess precision not supported by the internal representation is truncated toward minus infinity. Such a number cannot be combined with any other date item, as it specifies a complete time stamp.
Internally, computer times are represented as a count of seconds since an epoch—a well-defined point of time. On GNU and POSIX systems, the epoch is 1970-01-01 00:00:00 UTC, so ‘@0’ represents this time, ‘@1’ represents 1970-01-01 00:00:01 UTC, and so forth. GNU and most other POSIX-compliant systems support such times as an extension to POSIX, using negative counts, so that ‘@-1’ represents 1969-12-31 23:59:59 UTC.
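For example (assuming GNU date; printing under ‘TZ=UTC0’ so the epoch displays as itself):

```shell
TZ=UTC0 date --date='@0' +'%F %T'    # 1970-01-01 00:00:00
TZ=UTC0 date --date='@1' +'%F %T'    # 1970-01-01 00:00:01
TZ=UTC0 date --date='@-1' +'%F %T'   # 1969-12-31 23:59:59 (negative counts are an extension)
```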
Traditional Unix systems count seconds with 32-bit two's-complement integers and can represent times from 1901-12-13 20:45:52 through 2038-01-19 03:14:07 UTC. More modern systems use 64-bit counts of seconds with nanosecond subcounts, and can represent all the times in the known lifetime of the universe to a resolution of 1 nanosecond.
On most hosts, these counts ignore the presence of leap seconds. For example, on most hosts ‘@915148799’ represents 1998-12-31 23:59:59 UTC, ‘@915148800’ represents 1999-01-01 00:00:00 UTC, and there is no way to represent the intervening leap second 1998-12-31 23:59:60 UTC.
Normally, dates are interpreted using the rules of the current time zone, which in turn are specified by the TZ environment variable, or by a system default if TZ is not set. To specify a different set of default time zone rules that apply just to one date, start the date with a string of the form ‘TZ="rule"’. The two quote characters (‘"’) must be present in the date, and any quotes or backslashes within rule must be escaped by a backslash.
For example, with the GNU date command you can answer the question “What time is it in New York when a Paris clock shows 6:30am on October 31, 2004?” by using a date beginning with ‘TZ="Europe/Paris"’ as shown in the following shell transcript:
$ export TZ="America/New_York"
$ date --date='TZ="Europe/Paris" 2004-10-31 06:30'
Sun Oct 31 01:30:00 EDT 2004
In this example, the --date operand begins with its own TZ setting, so the rest of that operand is processed according to ‘Europe/Paris’ rules, treating the string ‘2004-10-31 06:30’ as if it were in Paris. However, since the output of the date command is processed according to the overall time zone rules, it uses New York time. (Paris was normally six hours ahead of New York in 2004, but this example refers to a brief Halloween period when the gap was five hours.)
A TZ value is a rule that typically names a location in the ‘tz’ database. A recent catalog of location names appears in the TWiki Date and Time Gateway. A few non-GNU hosts require a colon before a location name in a TZ setting, e.g., ‘TZ=":America/New_York"’.
The ‘tz’ database includes a wide variety of locations ranging from ‘Arctic/Longyearbyen’ to ‘Antarctica/South_Pole’, but if you are at sea and have your own private time zone, or if you are using a non-GNU host that does not support the ‘tz’ database, you may need to use a POSIX rule instead. Simple POSIX rules like ‘UTC0’ specify a time zone without daylight saving time; other rules can specify simple daylight saving regimes. See Specifying the Time Zone with TZ.
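As a sketch of such a rule (assuming GNU date), the POSIX string ‘EST5EDT,M4.1.0,M10.5.0’ describes US Eastern time with the pre-2007 daylight saving transitions (first Sunday in April, last Sunday in October) encoded directly in the rule, with no ‘tz’ database needed:

```shell
# July falls inside the daylight saving span, so 12:00 UTC shows as 08:00 EDT:
TZ='EST5EDT,M4.1.0,M10.5.0' date --date='2004-07-01 12:00 UTC' +'%H:%M %Z'
```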