Improve Your Unix Logging with Advanced I/O Redirection

Tuesday Mar 16th 2010 by Charlie Schluting

Keeping good logs on a Unix or Linux system can be a challenge, but with a little work up front you can make sure error messages make it to the people who need to see them and get vital troubleshooting information into a permanent log.

Beyond the basic shell I/O redirection tools, there are a few interesting tricks to learn. One trick, swapping stderr and stdout, is highly useful in many situations. Command lengths tend to grow out of control and become difficult to understand, but taking the time to break them apart into each component reveals a simple explanation.

Briefly, let's make sure everyone is on the same page. In Unix I/O, there are three file descriptors that always exist: stdin (0), stdout (1), and stderr (2). Standard error is the troublesome one, as we often want all command output sent to a file. Running ./command > logfile will send only stdout to the file, and stderr output will be sent to the terminal. To combat this, we can redirect the stderr stream into stdout, so both stream into the file: ./command >logfile 2>&1. Note the order: redirections are processed left to right, so 2>&1 must come after the file redirection, or stderr will still go to the terminal. This works wonderfully, but often severely limits our options.
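Because redirections are processed left to right, getting the order backwards is a common mistake. Here is a quick way to see the difference, using a throwaway emit function in place of a real command (all names here are illustrative):

```shell
#!/bin/sh
# emit stands in for a real command; all file names are illustrative.
emit() {
  echo "out"
  echo "err" >&2
}

# Wrong order: 2>&1 is processed first, duplicating stderr onto
# wherever stdout points at that moment. terminal.txt stands in for
# the terminal here, and it is where "err" ends up; only "out"
# reaches wrong.log.
{ emit 2>&1 >wrong.log; } >terminal.txt

# Right order: point stdout at the file first, then send stderr to
# the same place. Both lines land in right.log.
emit >right.log 2>&1
```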

Take the following, which is all one line (backslashes mean continue on the next line):

(((/root/ | tee -a /var/log/backuplog) \
3>&1 1>&2 2>&3 | tee -a /var/log/backuplog) \
3>&1 1>&2 2>&3) >/dev/null

This trick involves three nested subshells and all kinds of weird file handle manipulation. The above command will log both stderr and stdout into the /var/log/backuplog file, and also emit any errors to the terminal on stderr, as it should. Short of creating your own named pipes with mkfifo, or perhaps using bash's process substitution, this is the only way to accomplish those requirements.
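The mkfifo route can be sketched as follows. Here emit stands in for the real command, and the pipe and file names are all illustrative:

```shell
#!/bin/sh
# emit stands in for the real command; errpipe and the log/capture
# file names are all illustrative.
emit() {
  echo "out"
  echo "err" >&2
}

rm -f errpipe
mkfifo errpipe

# A background tee drains stderr from the named pipe, appending it to
# the log and re-emitting it (captured in fifo_console.txt, standing
# in for the terminal).
tee -a fifo.log <errpipe >fifo_console.txt &

# stdout goes through its own tee; stderr is diverted into the pipe.
emit 2>errpipe | tee -a fifo.log >fifo_stdout.txt

wait         # let the background tee finish before inspecting files
rm errpipe
```

This does the same job as the nested-subshell version, at the cost of managing a filesystem object and a background process.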

The reason this command gets so crazy is that 'tee' doesn't understand stderr. The tee command takes whatever it receives on stdin and writes it to stdout and to the specified file. If you approach this the normal way, combining stderr into stdout (2>&1) before sending it to tee, you've just lost the ability to write only stderr to the console. Both streams are combined, with no way to break them apart again.

Why is this useful? When the above command is run from cron, it will log everything to the file and if anything goes wrong, stderr will go to the console, which gets e-mailed to an administrator. If desired, you could also log stdout and stderr to two distinct files.

How It Works

Starting with the basic command, we're simply writing the stdout output to a log file, and back to stdout again:

| tee -a /var/log/backuplog

Any errors going to stderr are preserved, because we haven't redirected anything yet. Unfortunately, we've also lost control of any stderr output the command produced: after this command runs, stderr simply gets written straight to the console.

To test this, simply write a script that outputs to both stderr and stdout:

echo "test stdout" >&1
echo "test stderr" >&2
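For instance, saved to an executable file (the name test.sh is arbitrary), the two streams can be captured independently:

```shell
#!/bin/sh
# Save the test script to a file (test.sh is an arbitrary name).
cat > test.sh <<'EOF'
#!/bin/sh
echo "test stdout" >&1
echo "test stderr" >&2
EOF
chmod +x test.sh

# The two streams are independent: capture each one by itself.
./test.sh >only_out.txt 2>only_err.txt
```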

Now, to resolve the issue with stderr being written to the console before we've had a chance to play with it, we use a subshell:

( | tee -a ./log)

We now have our two distinct file descriptors to work with again. A subshell will capture stdout and stderr, and those file handles will both be available for the next process. If we just stopped there, we could now write those two items to distinct log files, like so:

( | tee -a ./log) 1>out.log 2> err.log
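Filled in with the test script's output, that looks like the following sketch (test_cmd and the file names are placeholders):

```shell
#!/bin/sh
# test_cmd stands in for the test script; file names are placeholders.
test_cmd() {
  echo "test stdout"
  echo "test stderr" >&2
}

# The subshell keeps the streams separate: tee's output (stdout) and
# the untouched stderr can each be sent to their own file.
( test_cmd | tee -a demo.log ) 1>out.log 2>err.log
```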

We're already writing stdout to the backuplog file, so this probably isn't useful in this situation. The next step is to swap stdout and stderr. Remember, 'tee' can only operate on stdout, so in order to have it write stderr to the log, we have to swap the two. Done carefully, this doesn't permanently combine the two streams; we can still pull them back apart.

( | tee -a ./log) 3>&1 1>&2 2>&3

This works by creating a new handle, 3, and pointing it at stdout. Then stdout is pointed at stderr, and finally stderr is pointed at handle 3. The third file handle essentially "saves" the original stdout for later use, which is why 2>&3 ends up mapping stderr to the original stdout.
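The swap can be verified in isolation. Here a test_cmd function stands in for the test script above (the function and file names are illustrative):

```shell
#!/bin/sh
# test_cmd stands in for the test script (illustrative name).
test_cmd() {
  echo "test stdout"
  echo "test stderr" >&2
}

# After the swap, whatever the command wrote to stdout emerges on
# stderr and vice versa; capture each side to prove it.
( test_cmd 3>&1 1>&2 2>&3 ) >now_stdout.txt 2>now_stderr.txt
```

The files come out crossed: the stderr line lands in now_stdout.txt and the stdout line in now_stderr.txt.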

At this point, "test stdout" has been written into the log file, and the stdout and stderr file handles have been swapped.

Now, we can write stderr into the log file:

(( | tee -a ./log) 3>&1 1>&2 2>&3 | tee -a ./log)

This is the same thing we did before, including the subshell. This is where the 2nd level of subshell comes from; we want to retain both stdout and stderr so we can swap them back to normal again.

At this point, you could also modify the above to write to two different log files, one for standard output and one for standard error. Now let's return our file handles to the proper state by using that swap trick again:

((( | tee -a ./log) 3>&1 1>&2 2>&3 | tee -a ./log) \
3>&1 1>&2 2>&3)

We're now back to where we started, except both stdout and stderr have been written to the log file! You can proceed as normal, and redirect the I/O however you wish. In the original command above, we added >/dev/null to the end, so that only standard error reaches the terminal. This is great for cron scripts where you only want e-mail if something goes wrong, but you also wish to have all output logged for debugging purposes. There is one caveat with this approach, however: if both streams produce output quickly, lines can arrive in the log file out of order. There is no way around this with this technique, so be aware that it may happen.
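Putting it all together, here is a self-contained sketch of the full pipeline, with capture files standing in for the terminal and /dev/null so the result can be inspected afterwards (test_cmd and all file names are illustrative):

```shell
#!/bin/sh
# test_cmd stands in for the real backup script; cron_console.txt
# stands in for the terminal and cron_null.txt for /dev/null, so we
# can inspect everything afterwards.
test_cmd() {
  echo "test stdout"
  echo "test stderr" >&2
}

( ( (test_cmd | tee -a backuplog.txt) 3>&1 1>&2 2>&3 \
      | tee -a backuplog.txt ) 3>&1 1>&2 2>&3 ) \
  >cron_null.txt 2>cron_console.txt
```

After running it, backuplog.txt holds both lines, only the stderr line appears where the terminal would be, and only the stdout line is discarded.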
