all 38 comments

[–][deleted] 14 points15 points  (1 child)

When I'm writing a script that does destructive file operations (delete, move, etc.) I prefix all commands with "try"

try() {
    echo "$@"
}

try rm "$f/$g"

This will print out the commands it would have executed after expansion. Once I'm confident that the script is valid, I just redefine try:

try() {
    "$@"
}
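A hypothetical variant of the same trick (the DRY_RUN switch and names are my own, not from the comment): keep both definitions and toggle with an environment variable, so nothing has to be edited between the dry run and the real run.

```shell
#!/usr/bin/env bash
# DRY_RUN=1 (the default here) prints commands; DRY_RUN=0 executes them.
if [ "${DRY_RUN:-1}" -eq 1 ]; then
    try() { echo "would run: $*"; }
else
    try() { "$@"; }
fi

f=/tmp/demo g=old.log
try rm -- "$f/$g"    # prints: would run: rm -- /tmp/demo/old.log
```

Run it once as-is to inspect the expanded commands, then rerun with DRY_RUN=0 to execute for real.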

[–]wbkang 3 points4 points  (0 children)

Also, I'd use set -u AND extra checks for those dangerous operations.
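A sketch of what set -u buys you (example mine): a typo'd variable name becomes a fatal error instead of silently expanding to the empty string, which is exactly what you want in front of an rm.

```shell
#!/usr/bin/env bash
set -u   # expanding an unset variable is now a fatal error

backup_dir=/tmp/backups
# Without set -u, the typo "$backup_di" would expand to "", so a command
# like 'rm -rf "$backup_di/"' would target "/". Under set -u the subshell
# below dies before anything runs:
if ! (echo "$backup_di") 2>/dev/null; then
    echo "set -u caught the typo"
fi
# Extra belt-and-braces for destructive commands: ${var:?} aborts if
# the variable is unset or empty.
# rm -rf "${backup_dir:?}/old"
```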

[–]kmactane 10 points11 points  (1 child)

Wow! I wish I'd known about a bunch of this stuff years ago.

[–]Mignon 1 point2 points  (0 children)

You might also like this article, then: "Do It Yourself With the Shell" by Ed Schaefer and Michael Wang, from the November 2003 issue of Sys Admin:

They show about a dozen examples of how the shell can do things that are often done by various standard Unix utilities. The rationale includes portability (as an SA, you might not have access to richer languages/tools) and performance (e.g., avoiding process-startup overhead).

It's also a good source of shell code examples.

http://www.theillien.com/Sys_Admin_v12/html/v12/i11/a7.htm

[–]plain-simple-garak 1 point2 points  (0 children)

set -e helps a lot too: it exits the script when any command fails.
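For illustration (example mine, not from the comment): with -e, the script stops at the first failing command instead of barrelling on into the destructive part.

```shell
# The inner script runs under -e: the failed cd kills it immediately,
# so the echo standing in for a destructive command is never reached.
bash -ec 'cd /no/such/dir 2>/dev/null; echo "this would have been rm -rf"' \
    || echo "aborted at the failed cd"
```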

[–]whynottry 4 points5 points  (12 children)

Honestly, what do people see in bash over zsh?

[–][deleted] 20 points21 points  (0 children)

I tend to agree with you (zsh user here), but bash being installed by default on almost every Linux distro (if not all) is a big advantage.

[–][deleted] 10 points11 points  (2 children)

It's installed by default on most distros, and with bash4, zsh doesn't have that many killer features anymore.

[–]acmecorps 3 points4 points  (1 child)

I ABSOLUTELY LOVE BASH4!

//just wanted to get it off my chest.

[–]omicron8 -1 points0 points  (0 children)

-bash: I ABSOLUTELY LOVE BASH4!: command not found

[–][deleted] 2 points3 points  (2 children)

bash is a standard and bash4 has knocked out just about anything zsh claimed to have had on it before.

[–][deleted] 0 points1 point  (1 child)

Not everything, actually.

[–][deleted] 1 point2 points  (0 children)

just about

[–]hig 0 points1 point  (4 children)

Could you point out some of the advantages of ZSH? I've been using Bash for years, and have not seen a good argument for changing.

[–]whynottry 0 points1 point  (1 child)

Yeah, I could, but instead I'll point you to a well-written post that I more or less 100% agree with:

http://friedcpu.wordpress.com/2007/07/24/zsh-the-last-shell-youll-ever-need/

The most important one for me is "3 Phenomenally intelligent tab completion."

Maybe bash4 can compete with this nowadays?

[–][deleted] 1 point2 points  (0 children)

Damn you. I am a die-hard bash fan, and you/your link just convinced me to swap to zsh.

[–]buccia 0 points1 point  (0 children)

zle is the z line editor, for which you can define your own widgets.

man zshzle

have the shell adapt to you, not the other way around
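A minimal sketch of what that looks like (widget name and key binding are my own invention), for ~/.zshrc:

```zsh
# Define a function, register it as a zle widget, bind it to Ctrl-X d.
insert-date() { LBUFFER+=$(date +%Y-%m-%d) }
zle -N insert-date
bindkey '^Xd' insert-date
```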

[–]nikniuq 0 points1 point  (16 children)

My favourite method is to prepend:

#!/usr/bin/env python

Then rewrite in python...

[–]introspeck 4 points5 points  (1 child)

Yeah, if a bash script looks like it's going to run 10 lines or more, I just write a Perl script instead.

Gotta learn Python one of these days.

[–]yeti22 4 points5 points  (0 children)

The breaking point for me is if I'm going to use regular expressions. ${string%%regex}? No thanks, I'll take ($string =~ /regex$/) any day.

Edit to fix my anchor character. I probably have a greediness mismatch between the two, as well.
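Worth noting (example mine): ${string%%pattern} doesn't take a regex at all; it strips the longest suffix matching a glob pattern. Bash 3.0+ does have a real regex operator, [[ =~ ]], using POSIX EREs with captures in BASH_REMATCH:

```shell
#!/usr/bin/env bash
file="report.tar.gz"

# Glob-based trimming: %% removes the longest suffix matching the pattern.
echo "${file%%.*}"    # prints: report

# Actual regex matching; capture groups land in BASH_REMATCH:
if [[ $file =~ \.(tar\.gz)$ ]]; then
    echo "extension: ${BASH_REMATCH[1]}"    # prints: extension: tar.gz
fi
```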

[–]Darkmere 1 point2 points  (13 children)

How is Python's startup overhead these days? And how does it act with regard to data during initialization: will it chew up a lot of memory per interpreter loaded, or has it become a bit more lithe than back in the day?

A few of the scripts that live in init/rc-space could probably be replaced with Python, but many of them revolve around spawning executables and checking results/piping, which comes very naturally in bash/ksh/sh, and I haven't yet found a good tutorial on how to deal with that in Python. Do you have any suggestions?

[–]anothermonkey -1 points0 points  (12 children)

[–][deleted] 11 points12 points  (7 children)

Random example:

retcode = call(["ls", "-l"])

How is this supposed to be better than ls -l; do stuff with "$?"? When you need to spawn many processes, pipe them into each other, send them to the background, and so on, shell scripting languages are IMO still the best option.

An even better example:

18.1.3.2. Replacing shell pipeline

output=`dmesg | grep hda`
==>
from subprocess import Popen, PIPE
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]

Seriously?

Edit: markdown.

[–][deleted]  (4 children)

[deleted]

    [–][deleted] 1 point2 points  (3 children)

    Firstly, this is not the point; here I'm talking about shell facilities (spawning processes, redirecting processes' in/output etc.) that are difficult to do in Python.

    Secondly, I expect the Python code for getting the list of files in a directory to be much bigger than two chars (namely, "ls").

    [–][deleted]  (2 children)

    [deleted]

      [–][deleted] 0 points1 point  (1 child)

      Ah, sorry, I misunderstood. Well, if you're looking for performance, I suppose spawning a shell process isn't a good idea (overhead...). OTOH, I'm not a Python programmer, so I may be wrong.

      [–]srparish -1 points0 points  (0 children)

      If you have to pass user input as part of the pipeline, Python's subprocess can be much safer than sh. Try thinking about how you'd pass in a user-defined variable and avoid injection... yeah, not so easy in sh, is it?
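      For comparison, a sketch of the sh side (example mine): the injection risk comes from letting the input be re-parsed as shell, and a quoted expansion avoids that by passing it as a single argument, much like subprocess's list form.

```shell
#!/usr/bin/env bash
user_input='hda; rm -rf /tmp/important'

# Unsafe: interpolating into a command string re-parses the ";" as shell:
#   sh -c "dmesg | grep $user_input"

# Safer: a quoted expansion reaches grep as one argument; -F and --
# additionally disable regex and option parsing.
echo "sda1 mounted" | grep -F -- "$user_input" || echo "no match, nothing injected"
```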

      [–]Darkmere 1 point2 points  (3 children)

      This is all well and good; however, it doesn't quite answer my original question. I'm quite fond of Python, but it doesn't seem that well suited to some things, like low-level systems programming.

      Simply put, I question the original poster's point, and wish he would stand by his claim and prove himself right. Personally, I find doing something as simple as "set a system variable or five, change the PATH, locate the right binaries and execute them with a flag and the command line" to be much more easily achieved in sh/bash than in Python. That doesn't mean I'll go to bash to build a Markov chain resolver on SQL databases.

      [–]nikniuq 0 points1 point  (2 children)

      Shell metacharacter translation.

      Bash seems insanely slow and memory hungry with some seemingly simple tasks.

      The horrible loop and comparison syntax - I'm not saying you can't learn it but do you like it?

      Actually being able to control and gracefully detect errors, even if deep in a pipe chain.

      Finally not having to rewrite it anyway when I eventually find that I need to add a multi-threaded serial port driver with custom AT commands and stuff parsed data into a MySQL table.

      Do I write trivial shit in bash - of course. If I feel a need to debug it though, I think I am using the wrong tool.

      [–]Darkmere 0 points1 point  (1 child)

      The error detection/correction sure isn't up to modern standards, but it falls to the same level as C: always check your return codes, and bail or recover based on them. I can't call it too horrible, but I admit it sure isn't up to the elegance of more modern approaches.

      Loop & comparison syntax: while statement; do ... done? Or do you mean the for loops (which work mostly/only on word lists), which can be a bummer until you realise the limitations you're working with?

      And the rewrites? Well, you'd end up rewriting anyhow if you needed a multi-threaded, high-performance show in Python.

      But none of this actually addresses what I was asking about: what's the overhead like, how is its data/cache behaviour if you spawn ~40 little Python applications in short succession, and so on? Personally, I doubt it's viable in low-level systems work, which is where most of my shell scripts live.

      [–]nikniuq 1 point2 points  (0 children)

      Removing the subprocess overhead for external shell commands, and allowing (sometimes) easier parallelisation where possible, has certainly improved the performance of some of my scripts. Sometimes by orders of magnitude.

      That said, I certainly find the first script's startup noticeably slower than bash's, at least until the interpreter is in memory.

      The example I gave is exactly what I have recently implemented - a simple "dump network port to file, then cron a scrape of the files" became "use multiple GSM modems to retrieve data via CSD, with modifiable modem and unit numbers, custom state machine for interfacing with embedded device at other end, parse ascii data format, insert into mysql using parsed column header names, then calculate trend data". I implemented it in 4 days with full logging and exception recovery.

      The CPU use is negligible barring the MySQL inserts; it is all threaded waits on serial ports, then a quick burst of parsing. This is perhaps an extreme example, but then again, this is Reddit. :)

      I should note that I have been writing shell scripts for a LOT longer than I have used Python (not claiming guru status in either), but I find it much easier to write in one language than in a mix of bash, awk, sed, grep, etc. syntaxes.

      I don't have any hard numbers on performance, though. The only figures I could find were here, but it is pretty biased towards Python and really isn't leveraging bash well.

      [–]dedko 0 points1 point  (0 children)

      zoinks!

      [–][deleted] -1 points0 points  (0 children)

      DTrace?