Shell scripts
- Write a shell script that runs a command or series of commands for a fixed set of files.
- Run a shell script from the command line.
- Write a shell script that operates on a set of files defined by the user on the command line.
- Create pipelines that include user-written shell scripts.
We are finally ready to see what makes the shell such a powerful programming environment. We are going to take the commands we repeat frequently and save them in files so that we can re-run all those operations again later by typing a single command. For historical reasons, a bunch of commands saved in a file is usually called a shell script, but make no mistake: these are actually small programs.
Let's start by going back to molecules/
$ cd molecules
and putting the following line in the file middle.sh:
head -15 octane.pdb | tail -5
This is a variation on the pipe we constructed earlier:
it selects lines 11-15 of the file octane.pdb
(head -15 prints the first 15 lines, and tail -5 then keeps only the last 5 of those).
Remember, we are not running it as a command just yet:
we are putting the commands in a file.
Once we have saved the file, we need to give ourselves execute permission on it:
$ chmod u+x middle.sh
and then run the script:
$ ./middle.sh
Sure enough, our script's output is exactly what we would get if we ran that pipeline directly.
We usually call programs like Microsoft Word or LibreOffice Writer "text
editors", but we need to be a bit more careful when it comes to
programming. By default, Microsoft Word uses .docx
files to store not
only text, but also formatting information about fonts, headings, and so
on. This extra information isn't stored as characters, and doesn't mean
anything to tools like head: they expect input files to contain
nothing but the letters, digits, and punctuation on a standard computer
keyboard. When editing programs, therefore, you must either use a plain
text editor, or be careful to save files as plain text.
As an example, copy a normal Microsoft Word document and take a peek:
$ cp /lustre/projects/staton/teaching/epp622/3/example.docx .
$ head example.docx
(This was a very boring, very short budget justification.) So don't code in Word! Consider learning Vim. Otherwise, nano will work just fine.
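If you are ever unsure whether something is plain text, the file command can usually tell you; it is a standard Unix tool, though its exact wording varies between systems:
$ file example.docx    # typically reported as a Microsoft Word / zip-style document
$ file octane.pdb      # typically reported as ASCII text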
What if we want to select lines from an arbitrary file?
We could edit middle.sh
each time to change the filename,
but that would probably take longer than just retyping the command.
Instead,
let's edit middle.sh
and replace octane.pdb
with a special variable called $1:
$ cat middle.sh
head -15 "$1" | tail -5
Inside a shell script,
$1
means "the first filename (or other parameter) on the command line".
We can now run our script like this:
$ ./middle.sh octane.pdb
or on a different file like this:
$ ./middle.sh pentane.pdb
We put the $1 inside of double-quotes in case the filename happens to contain any spaces. The shell uses whitespace to separate arguments, so we have to be careful when using arguments that might have whitespace in them. If we left out these quotes, and $1 expanded to a filename like methyl butane.pdb, the command in the script would effectively be:
head -15 methyl butane.pdb | tail -5
This would call head on two separate files, methyl and butane.pdb, which is probably not what we intended.
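To see the difference for yourself, you could make up a file whose name contains a space (the file below is invented purely for this test) and compare the quoted and unquoted commands:
$ touch 'methyl butane.pdb'
$ head -15 methyl butane.pdb      # unquoted: head looks for two files, 'methyl' and 'butane.pdb'
$ head -15 'methyl butane.pdb'    # quoted: head sees a single (empty) file
$ rm 'methyl butane.pdb'          # clean up the test file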
We still need to edit middle.sh
each time we want to adjust the range of lines,
though.
Let's fix that by using the special variables $2
and $3:
$ cat middle.sh
head "$2" "$1" | tail "$3"
$ ./middle.sh pentane.pdb -20 -5
This works,
but it may take the next person who reads middle.sh
a moment to figure out what it does.
We can improve our script by adding some comments at the top:
$ cat middle.sh
# Select lines from the middle of a file.
# Usage: middle.sh filename -end_line -num_lines
head "$2" "$1" | tail "$3"
A comment starts with a #
character and runs to the end of the line.
The computer ignores comments,
but they're invaluable for helping people understand and use scripts.
What if we want to process many files in a single pipeline?
For example, if we want to sort our .pdb
files by length, we would type:
$ wc -l *.pdb | sort -n
because wc -l
lists the number of lines in the files
(recall that wc stands for 'word count'; adding the -l flag makes it count lines instead)
and sort -n
sorts things numerically.
We could put this in a file,
but then it would only ever sort a list of .pdb
files in the current directory.
If we want to be able to get a sorted list of other kinds of files,
we need a way to get all those names into the script.
We can't use $1, $2, and so on
because we don't know how many files there are.
Instead, we use the special variable $@,
which means
"all of the command-line parameters to the shell script".
We should also put $@ inside double-quotes
to handle the case of parameters containing spaces
("$@" is equivalent to "$1" "$2" ...).
Here's an example:
$ cat sorted.sh
wc -l "$@" | sort -n
$ ./sorted.sh *.pdb ../creatures/*.dat
What happens if a script is supposed to process a bunch of files, but we don't give it any filenames? For example, what if we type:
$ ./sorted.sh
but don't say *.dat (or anything else)? In this case, $@ expands to nothing at all, so the pipeline inside the script is effectively:
wc -l | sort -n
Since it doesn't have any filenames, wc assumes it is supposed to process standard input, so it just sits there and waits for us to give it some data interactively. From the outside, though, all we see is it sitting there: the script doesn't appear to do anything.
If you have accidentally run a command that is waiting for input, you can always type Control-C to kill the command.
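One way to protect against this (a sketch, not part of the original lesson) is to have sorted.sh check how many arguments it received before running the pipeline; the special variable $# holds that count:
# List files sorted by number of lines.
# Print a usage message and exit if no filenames are supplied.
if [ $# -eq 0 ]
then
    echo "Usage: sorted.sh filename [filename ...]"
    exit 1
fi
wc -l "$@" | sort -n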
We have two more things to do before we're finished with our simple shell scripts. If you look at a script like:
wc -l "$@" | sort -n
you can probably puzzle out what it does. On the other hand, if you look at this script:
# List files sorted by number of lines.
wc -l "$@" | sort -n
you don't have to puzzle it out --- the comment at the top tells you what it does. A line or two of documentation like this make it much easier for other people (including your future self) to re-use your work. The only caveat is that each time you modify the script, you should check that the comment is still accurate: an explanation that sends the reader in the wrong direction is worse than none at all.
Second, suppose we have just run a series of commands that did something useful --- for example, that created a graph we'd like to use in a paper. We'd like to be able to re-create the graph later if we need to, so we want to save the commands in a file. Instead of typing them in again (and potentially getting them wrong) we can do this:
$ history | tail -4 > redo-figure-3.sh
The file redo-figure-3.sh
now contains:
297 ./goostats -r NENE01729B.txt stats-NENE01729B.txt
298 ./goodiff stats-NENE01729B.txt /data/validated/01729.txt > 01729-differences.txt
299 cut -d ',' -f 2-3 01729-differences.txt > 01729-time-series.txt
300 ygraph --format scatter --color bw --borders none 01729-time-series.txt figure-3.png
After a moment's work in an editor to remove the serial numbers on the commands, we have a completely accurate record of how we created that figure.
You could also use colrm (short for "column removal") to remove the serial numbers on the previous commands. Its parameters are the range of characters to strip from its input:
$ history | tail -5
  173  cd /tmp
  174  ls
  175  mkdir bakup
  176  mv bakup backup
  177  history | tail -5
$ history | tail -5 | colrm 1 7
cd /tmp
ls
mkdir bakup
mv bakup backup
history | tail -5
history | tail -5 | colrm 1 7
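If colrm isn't installed on your system, cut can do much the same job with its -c option; the sketch below assumes the history numbers and their surrounding spaces take up the first seven characters, as in the colrm 1 7 example:
$ history | tail -5 | cut -c 8-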
In practice, most people develop shell scripts by running commands at the shell prompt a few times
to make sure they're doing the right thing,
then saving them in a file for re-use.
This style of work allows people to recycle
what they discover about their data and their workflow with one call to history
and a bit of editing to clean up the output
and save it as a shell script.
Let's revisit Nelle's data. We need to get back to the directory with the goostats file... where was that again? Can you use the find command?
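One possibility (just a sketch; the starting point ~ is a guess, so adjust it to wherever your copy of the data lives):
$ find ~ -name goostats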
An off-hand comment from her supervisor has made Nelle realize that
she should have provided a couple of extra parameters to goostats
when she processed her files.
This might have been a disaster if she had done all the analysis by hand,
but thanks to for loops,
it will only take a couple of hours to re-do.
But experience has taught her that if something needs to be done twice, it will probably need to be done a third or fourth time as well. She runs the editor and writes the following:
# Calculate reduced stats for data files at J = 100 c/bp.
for datafile in "$@"
do
echo $datafile
./goostats -J 100 -r $datafile stats-$datafile
done
(The parameters -J 100
and -r
are the ones her supervisor said she should have used.)
She saves this in a file called do-stats.sh
so that she can now re-do the first stage of her analysis by typing:
$ chmod u+x do-stats.sh
$ ./do-stats.sh NE*[AB].txt
She can also do this:
$ ./do-stats.sh NE*[AB].txt | wc -l
so that the output is just the number of files processed rather than the names of the files that were processed.
One thing to note about Nelle's script is that it lets the person running it decide what files to process. She could have written it as:
# Calculate reduced stats for Site A and Site B data files at J = 100 c/bp.
for datafile in *[AB].txt
do
echo $datafile
./goostats -J 100 -r $datafile stats-$datafile
done
The advantage is that this always selects the right files:
she doesn't have to remember to exclude the 'Z' files.
The disadvantage is that it always selects just those files --- she can't run it on all files
(including the 'Z' files),
or on the 'G' or 'H' files her colleagues in Antarctica are producing,
without editing the script.
If she wanted to be more adventurous,
she could modify her script to check for command-line parameters,
and use *[AB].txt
if none were provided.
Of course, this introduces another tradeoff between flexibility and complexity.
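A sketch of what that might look like (an illustration only, reusing the same goostats options as above, not part of Nelle's actual script):
# Calculate reduced stats for data files at J = 100 c/bp.
# Default to the Site A and Site B files if no filenames are given.
if [ $# -eq 0 ]
then
    set -- *[AB].txt
fi
for datafile in "$@"
do
    echo "$datafile"
    ./goostats -J 100 -r "$datafile" "stats-$datafile"
done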
This material is made available under the Creative Commons Attribution license. Much of the material has been adapted from lessons by Software Carpentry.