
...making Linux just a little more fun!

How to get a random file name from a directory

Suramya Tomar [security at suramya.com]


Tue, 25 Nov 2008 00:40:30 +0530

Hi Everyone,

I am working on a collage creation script in bash and am using find to get a list of all the jpg files in a given directory. However, find always returns the image names in the same order, so the collage tends to have images from the same photo set; i.e., at any given time it will have most of its images from a particular subfolder.

What I want to do is get a list of random image file names from the system and then use it to create the collage, so that the collage looks more varied.

Here's what I have done so far to get a random file name:

   awk "NR==$(($RANDOM % $(wc -l fi.txt| awk '{print $1}')))" fi.txt

where fi.txt was created using

   find . -iname "*.jpg" -true | sed 's/ /\\ /g' >fi.txt

Now, this way works but I have to create a temp file which I want to avoid. Is there some way of getting find to return results in a random order? I tried searching the web but didn't find any useful results.

I am attaching the current version of the Collage creation script with this email to give you an idea of how it works. This version is without randomization of the filenames.

Thanks in advance for the help.

- Suramya

PS: Please do give your feedback/suggestions on how the script can be improved in addition to a solution :)

#!/bin/bash
 
#########################################################################
# CreateCollage.sh Ver 0.5                                              #
# Script to Create a collage of images using the specified image set    #
# Created by Suramya Tomar (suramya@suramya.com)                        #
# Last updated 25th Nov 2008                                            #
#-----------------------------------------------------------------------#
# This program is free software; you can redistribute it and/or modify  #
# it under the terms of the GNU General Public License as published by  #
# the Free Software Foundation; either version 2 of the License, or     #
# (at your option) any later version.                                   #
#                                                                       #
# This program is distributed in the hope that it will be useful, but   #
# WITHOUT ANY WARRANTY; without even the implied warranty of            #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU      #
# General Public License for more details.                              #
#                                                                       #
# You should have received a copy of the GNU General Public License     #
# along with this program; if not, write to the                         #
# Free Software Foundation, Inc.,                                       #
# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA                 #
#########################################################################
 
 
if [ $# -le 3 ]
then
 
 echo "
 $0 missing file operand.
 
 Usage: $0 <Width> <Height> <Directory> <OutputImage>
 The script creates a collage sized width x height out of the images in the specified directory. 
The resulting collage will be written as a jpg file to the specified OutputImage.
 
e.g. CreateCollage.sh 640 480 images/ collage.jpg
 
will create a 640x480 image called collage.jpg and build a collage in it from the photos in the images directory.
 
"
 exit 1
fi
 
# We assign the command line parameters to variables
 
width=$1
height=$2
SourceDirectory=$3
OutputImage=$4
 
# First we create a blank file for the collage
 
convert -size ${width}x${height} xc:black "$OutputImage"
 
# Then we search for all the image files. I am just searching for jpg files, as all my photos are jpgs,
# and loop over each search result found, adding it to the collage.
 
find "$SourceDirectory" -iname "*.jpg" -true | sed 's/ /\\ /g' | while read file
 
#awk "NR==$(($RANDOM % $(wc -l fi.txt| awk '{print $1}')))" fi.txt
 
# Now we start creating the collage
 
do
 
  # We have to seed the random number generator, otherwise it tends to return the same set of results.
  # We use the number of nanoseconds from the current date/time to seed it.
 
  RANDOM=`date '+%N'`
  x_location=$(( RANDOM * width / 32767 ))
  RANDOM=`date '+%N'`
  y_location=$(( RANDOM * height / 32767 ))
 
 
  echo "$file"
  # We insert the current image in the location selected by the random generator above in the output image
 
  composite -geometry 150x150+$x_location+$y_location "$file" "$OutputImage" "$OutputImage"
 
done




Thomas Adam [thomas.adam22 at gmail.com]


Mon, 24 Nov 2008 19:48:50 +0000

2008/11/24 Suramya Tomar <security@suramya.com>:

> Now, this way works but I have to create a temp file which I want to
> avoid. Is there some way of getting find to return results in a random
> order? I tried searching the web but didn't find any useful results.

So something like:

cd /some/where && \
myfiles=(*.foo); num=${#myfiles[@]}; somecommand "${myfiles[RANDOM % num]}"

Where "myfiles" will be an array from the glob, "num" is the size of the array, and "${myfiles[RANDOM % num]}" selects a random element. Change at will.
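Fleshed out, the glob-array idea might look like the sketch below; the directory and the "*.foo" pattern are placeholders from the snippet above, so substitute your own.

```shell
#!/bin/bash
# Print one randomly chosen *.foo file from the given directory,
# using a glob array and the $RANDOM builtin.
pick_random() {
    local dir=$1
    shopt -s nullglob              # empty glob -> empty array, not a literal "*.foo"
    local myfiles=("$dir"/*.foo)
    local num=${#myfiles[@]}
    (( num > 0 )) || return 1      # nothing matched
    # RANDOM ranges 0..32767, so RANDOM % num is fine for modestly sized directories
    printf '%s\n' "${myfiles[RANDOM % num]}"
}
```

Calling `pick_random /some/where` then prints one matching filename per invocation.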

-- Thomas Adam




Will [will at willstuff.net]


Mon, 24 Nov 2008 14:54:51 -0500

Suramya Tomar wrote:

> Hi Everyone,
>
> I am working on a collage creation script in bash and am using find to 
> get a list of all jpg files in a given directory. However find always 
> returns the image names in the same order so the collage tends to have 
> images from the same photo set i.e. at a given time it will have the 
> most of the images from a particular subfolder.
>
> What I want to do is get a list of random image file names from the 
> system and then use it to create the collage, so that the collage 
> looks more varied.
>
There is a program in the "coreutils" package that should help: shuf. It will take lines of input and display them in a randomized order:

http://www.gnu.org/software/coreutils/manual/html_node/shuf-invocation.html

You could pipe the output of find to it, like this:

find . -iname "*.jpg" -print | shuf

Here's a simple test I ran with filenames "file1.jpg" through "file5.jpg"

$ find . -iname "*.jpg" -print | shuf
./file3.jpg
./file2.jpg
./file4.jpg
./file5.jpg
./file1.jpg
$ find . -iname "*.jpg" -print | shuf
./file4.jpg
./file2.jpg
./file3.jpg
./file1.jpg
./file5.jpg
$ find . -iname "*.jpg" -print | shuf
./file4.jpg
./file1.jpg
./file3.jpg
./file5.jpg
./file2.jpg

Hope that helps.




Thomas Adam [thomas.adam22 at gmail.com]


Mon, 24 Nov 2008 19:59:39 +0000

2008/11/24 Will <will@willstuff.net>:

> There is program in the "coreutils" package that should help--shuf. It
> will take lines of input and display them in a randomized order:

AFAIK this is non-portable on things like BSD. Besides, if you're wanting to be coreutil specific, sort learnt the -R flag for randomising a sort as of version 6.something.

-- Thomas Adam




Jim Jackson [jj at franjam.org.uk]


Mon, 24 Nov 2008 21:23:05 +0000 (GMT)

I remember something similar I squirrelled away from the TAG archives. As Ben hasn't chipped in yet, let me on his behalf give you a Perl one-liner that could be utilised thus...

  cd directory
  find .  -maxdepth 1 -type f |\
   perl -we'rand($.) < 1 && ($pick = $_) while <>; print $pick'

Here's the TAG email...

Date: Wed, 8 Aug 2007 15:04:10 -0400
From: Ben Okopnik <ben@linuxgazette.net>
To: The Answer Gang <tag@lists.linuxgazette.net>
There's a nifty little "random picker" algorithm that I got from Randal Schwartz a while ago - I don't think he invented it, but it works really well for choosing a random line from a file:

perl -we'rand($.) < 1 && ($pick = $_) while <>; print $pick' file
1. line 1: test 'rand(1)<1' (100% chance: 'rand' always returns <1)
2. line 2: test 'rand(2)<1' (1/2 chance that line 2 will replace $pick)
3. line 3: test 'rand(3)<1' (1/3 chance that line 3 will replace $pick)
...

Pretty cool stuff. The important thing is the chance of replacement on every line; it ends up spreading out very fairly.
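For anyone who wants to see the moving parts, the same "replace with probability 1/n" trick can be written in pure bash; a sketch (note that $RANDOM only reaches 32767, so the selection is only fair for inputs shorter than that):

```shell
#!/bin/bash
# Reservoir sampling: read lines one at a time, replacing the current
# pick with line n with probability 1/n. Every line ends up equally
# likely to be the survivor, without storing the whole list.
pick_line() {
    local n=0 pick= line
    while IFS= read -r line; do
        n=$(( n + 1 ))
        # RANDOM % n is 0 with probability 1/n (RANDOM is 0..32767)
        if (( RANDOM % n == 0 )); then
            pick=$line
        fi
    done
    printf '%s\n' "$pick"
}
```

Used as `find . -maxdepth 1 -type f | pick_line`, it does the same job as the Perl one-liner.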

> Err but line 300 has a chance of only 1/300 of replacing $pick. Still
> don't quite get
> it, I'll eventually......

Yes - but the chances for any line of both being picked and NOT being replaced are equal. That's the trick to it.




Kapil Hari Paranjape [kapil at imsc.res.in]


Tue, 25 Nov 2008 06:15:38 +0530

Hello,

On Mon, 24 Nov 2008, Jim Jackson wrote:

> ```
>   cd directory
>   find .  -maxdepth 1 -type f |\
>    perl -we'rand($.) < 1 && ($pick = $_) while <>; print $pick'
> '''

While this works fine in principle (as does "shuf"), the disadvantage of both is that they are permuting a (possibly large) list in order to select a random element.

Ideally, to do the latter one would:

 1. get the number of elements in the list of files
    (possibly by doing a "stat" on the directory inode)
 2. pick a random number which is between 0 and that number
 3. run an "ls" and pick that-eth element from the output

For really large file lists I believe this method (which is more or less what Suramya did with his awk program) would be faster.
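In shell, that recipe might look something like the sketch below. Two caveats: it runs find twice, so it assumes the directory doesn't change in between, and $RANDOM's 32767 ceiling again limits it to modest lists.

```shell
#!/bin/bash
# Count the matches, choose a random 1-based line number, then print
# just that line with sed -- no shuffle of the whole list required.
pick_nth() {
    local dir=$1 count n
    count=$(find "$dir" -iname '*.jpg' | wc -l)
    (( count > 0 )) || return 1
    n=$(( RANDOM % count + 1 ))    # +1: sed line numbers start at 1
    find "$dir" -iname '*.jpg' | sed -n "${n}p"
}
```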

Kapil. --




Ben Okopnik [ben at linuxgazette.net]


Mon, 24 Nov 2008 21:42:13 -0500

On Mon, Nov 24, 2008 at 09:23:05PM +0000, Jim Jackson wrote:

> 
> I remember something similar I squirrelled away from the TAG archives.
> As Ben hasn't chipped in yet, let me on his behalf give you a perl one 
> liner that could be utilised thus...
> 
> ```
>   cd directory
>   find .  -maxdepth 1 -type f |\
>    perl -we'rand($.) < 1 && ($pick = $_) while <>; print $pick'
> '''

I was just about to repost that one. :) Well done, Jim!

The same thing can also be done using Perl's looping option:

find /my/dir -name '*jpg'|perl -wne'rand($.)<1&&($x=$_);END{print $x}'
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Ben Okopnik [ben at linuxgazette.net]


Mon, 24 Nov 2008 22:05:24 -0500

On Tue, Nov 25, 2008 at 06:15:38AM +0530, Kapil Hari Paranjape wrote:

> 
> While this works fine in principle (as does "shuf"), the disadvantage
> of both is that they are permuting a (possibly large) list in order
> to select a random element.
> 
> Ideally, to do the latter one would:
>  1. get the number of elements in the list of files
>     (possibly by doing a "stat" on the directory inode)
>  2. pick a random number which is between 0 and that number
>  3. run an "ls" and pick that-eth element from the output
> 
> For really large file lists I believe this method (which is more or
> less what Suramya did with his awk program) would be faster.

You have a point. I've done something like that in the past, in a shell script; in fact, now that I've looked for it and found it, it does almost exactly what Suramya is doing (except I was searching for XPMs, and using 'locate'.) Here's a slightly modified version:

a=(`find /my/dir -name '*jpg'`); echo ${a[$(($RANDOM*${#a[*]}/32768))]}
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Deividson Okopnik [deivid.okop at gmail.com]


Tue, 25 Nov 2008 09:26:32 -0300

> You have a point. I've done something like that in the past, in a shell
> script; in fact, now that I've looked for it and found it, it does
> almost exactly what Suramya is doing (except I was searching for XPMs,
> and using 'locate'.) Here's a slightly modified version:
>
> ``
> a=(`find /my/dir -name '*jpg'`); echo ${a[$(($RANDOM*${#a[*]}/32768))]}
> ''

That would be fine to select one random jpg, but wouldn't it create duplicates if you run it like he is? For what he is using I think shuf would be the best, unless displaying the same photo several times is not a problem.




Suramya Tomar [security at suramya.com]


Tue, 25 Nov 2008 19:57:23 +0530

Hey Will,

> There is program in the "coreutils" package that should help--shuf. It 
> will take lines of input and display them in a randomized order:

Thanks. This is exactly what I was looking for. :)

> Hope that helps.

It definitely did help.

Thanks,

Suramya




Suramya Tomar [security at suramya.com]


Tue, 25 Nov 2008 20:04:04 +0530

Hey,

> That would be fine to select one random jpg, but wouldn't it create
> duplicates if you run it like he is? For what he is using I think shuf
> would be the best, unless displaying the same photo several times is
> not a problem.

It does return duplicates, but I will have to run it on a large file set to see how many repetitions there are. A few duplicates won't be that big a deal, but if it starts returning the same fileset over and over then the collage would look a bit funny.

I will run a test once I get home to see how many repetitions there are.

Another concern I have is about memory usage and runtime. I would prefer to use an option that is faster. I will try to time both and will post my findings, but I think the one using 'shuf' would be faster.

Thanks,

Suramya




Ben Okopnik [ben at linuxgazette.net]


Tue, 25 Nov 2008 09:50:59 -0500

On Tue, Nov 25, 2008 at 09:26:32AM -0300, Deividson Okopnik wrote:

> > You have a point. I've done something like that in the past, in a shell
> > script; in fact, now that I've looked for it and found it, it does
> > almost exactly what Suramya is doing (except I was searching for XPMs,
> > and using 'locate'.) Here's a slightly modified version:
> >
> > ``
> > a=(`find /my/dir -name '*jpg'`); echo ${a[$(($RANDOM*${#a[*]}/32768))]}
> > ''
> 
> That would be fine to select one random jpg, but wouldn't it create
> duplicates if you run it like he is? For what he is using I think shuf
> would be the best, unless displaying the same photo several times is
> not a problem.

Ah - I (and presumably Kapil also) had lost track of the fact that Suramya wants all the files sorted in random order rather than just one random file. Yes, at that point, 'shuf' is a good solution - or, if you don't have 'shuf', there's always Perl.

#!/usr/bin/perl -wl
# Created by Ben Okopnik on Tue Nov 25 09:44:15 EST 2008
use strict;
use File::Find;
use List::Util 'shuffle';
 
my @list;
find(sub {push @list, $_ if /jpg$/}, "/home/ben/Pics");
print for shuffle(@list);

Both File::Find and List::Util are Perl core modules, meaning that if you have Perl, you already have these installed. "List::Util::shuffle" does a Fisher-Yates shuffle, so the list is quite nicely randomized.
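For reference, the Fisher-Yates shuffle itself is short enough to sketch in bash as well (operating here on a global array named arr, which is an arbitrary choice):

```shell
#!/bin/bash
# In-place Fisher-Yates shuffle of the global array "arr": walk from the
# last element down, swapping each position with a randomly chosen
# earlier (or same) position. Each permutation is equally likely.
shuffle() {
    local i j tmp
    for (( i = ${#arr[@]} - 1; i > 0; i-- )); do
        j=$(( RANDOM % (i + 1) ))
        tmp=${arr[i]}; arr[i]=${arr[j]}; arr[j]=$tmp
    done
}
```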

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Ben Okopnik [ben at linuxgazette.net]


Tue, 25 Nov 2008 09:56:33 -0500

On Tue, Nov 25, 2008 at 09:50:59AM -0500, Benjamin Okopnik wrote:

> 
> my @list;
> find(sub {push @list, $_ if /jpg$/}, "/home/ben/Pics");

Actually, that should be

find(sub {push @list, $File::Find::name if /jpg$/}, "/my/pic/dir");

- assuming you want the full paths (and not just the filenames) to be printed.

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Michael Makuch [mike8 at makuch.org]


Tue, 25 Nov 2008 09:29:03 -0600

Deividson Okopnik wrote:

>> You have a point. I've done something like that in the past, in a shell
>> a=(`find /my/dir -name '*jpg'`); echo ${a[$(($RANDOM*${#a[*]}/32768))]}
>>     
>
> That would be fine to select one random jpg, but wouldn't it create
> duplicates if you run it like he is? For what he is using I think shuf
>
>   
What's wrong with:

    a=`find /path2pics | sort -R`




Thomas Adam [thomas.adam22 at gmail.com]


Tue, 25 Nov 2008 15:59:59 +0000

2008/11/25 Michael Makuch <mike8 at makuch dot org>:

> What's wrong with;
>
>     a=`find /path2pics | sort -R`

As I mentioned in my post regarding its use -- it's Linux specific.

-- Thomas Adam




Kapil Hari Paranjape [kapil at imsc.res.in]


Tue, 25 Nov 2008 22:05:03 +0530

On Tue, 25 Nov 2008, Ben Okopnik wrote:

> Ah - I (and presumably Kapil also) had lost track of the fact that
> Suramya wants all the files sorted in random order rather than just one
> random file.

The subject of his mail says he wants to pick a file at random from a list of files!

On reading his mail I realised that he said he wanted to get the files in a different (random) order.

An interesting side light is the "birthday paradox", which says that if you pick by "my method" (pick a random number and pick that-eth picture), and do this enough times, it is likely that a few pictures will be picked more than once.
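This effect is easy to demonstrate with a quick simulation (a bash sketch; the pool size and number of picks passed in are arbitrary):

```shell
#!/bin/bash
# Pick "picks" times at random, with replacement, from a pool of "n"
# items, and count how many picks landed on an already-seen item.
count_repeats() {
    local n=$1 picks=$2 i idx repeats=0
    local -A seen                    # associative array (bash 4+)
    for (( i = 0; i < picks; i++ )); do
        idx=$(( RANDOM % n ))
        if [ -n "${seen[$idx]}" ]; then
            repeats=$(( repeats + 1 ))
        fi
        seen[$idx]=1
    done
    printf '%s\n' "$repeats"
}
```

For instance, `count_repeats 100 30` will more often than not print a nonzero number: even 30 picks from a pool of 100 almost always produce at least one repeat.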

Kapil. --




Ben Okopnik [ben at linuxgazette.net]


Tue, 25 Nov 2008 16:51:20 -0500

[ Michael - you're probably unaware of this, but you sent your message in HTML format. This doubled the size of your message without any benefit in return, and will create extra work for our Mailbag editor.

Please change your mailer's settings to stop it from doing this. For more info, please see <http://expita.com/nomime.html>. ]

On Tue, Nov 25, 2008 at 09:29:03AM -0600, Michael Makuch wrote:

>    Deividson Okopnik wrote:
> 
>  You have a point. I've done something like that in the past, in a shell
>  a=(`find /my/dir -name '*jpg'`); echo ${a[$(($RANDOM*${#a[*]}/32768))]}
>     
> 
>  That would be fine to select one random jpg, but wouldn't it create
>  duplicates if you run it like he is? For what he is using I think shuf
> 
>    What's wrong with;
> 
>        a=`find /path2pics | sort -R`

Lack of portability. Bash and Perl are found on most versions of Unix these days, but 'sort' that supports '-R' is still a rare bird.

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Ben Okopnik [ben at linuxgazette.net]


Tue, 25 Nov 2008 17:07:27 -0500

On Tue, Nov 25, 2008 at 10:05:03PM +0530, Kapil Hari Paranjape wrote:

> On Tue, 25 Nov 2008, Ben Okopnik wrote:
> > Ah - I (and presumably Kapil also) had lost track of the fact that
> > Suramya wants all the files sorted in random order rather than just one
> > random file.
> 
> The subject of his mail says he wants to pick a file at random from a list of
> files!

Oh - you actually read and believe those things? :)

> On reading his mail I realised that he said he wanted to get the
> files in a different (random) order.
> 
> An interesting side light is the "birthday paradox" which says that
> if you pick by "my method" (pick a random number and pick that-eth
> picture), and do this enough times, it is likely that a few pictures
> will be picked more than once.

I hadn't thought about it, but you're right, of course.

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Deividson Okopnik [deivid.okop at gmail.com]


Tue, 25 Nov 2008 21:34:06 -0300

>> An interesting side light is the "birthday paradox" which says that
>> if you pick by "my method" (pick a random number and pick that-eth
>> picture), and do this enough times, it is likely that a few pictures
>> will be picked more than once.

Problem number two is: you can never know when you've got all the files.




Samuel Bisbee [sbisbee at computervip.com]


Tue, 25 Nov 2008 20:24:13 -0500

Deividson Okopnik wrote:

>>> An interesting side light is the "birthday paradox" which says that
>>> if you pick by "my method" (pick a random number and pick that-eth
>>> picture), and do this enough times, it is likely that a few pictures
>>> will be picked more than once.
>
> Problem number two is: you can never know when you've got all the files.

Eh, not sure I'm with you there, Deividson. This would really depend on your implementation: whether you hold the whole set in memory and select/remove items from the set, or whether you select items from the set and leave them in there (silly). Even sillier would be to run `ls` every time.

The first (according to me, "not silly") way is:

1. Store the file paths in memory (array or language's type of choice)
2. Shuffle the array
3. Iterate over the array, throwing images into the collage.

However, this can be made better: depending on the shuffle method used, we might be iterating over all the elements in the set twice. No good, so here's a better solution in pseudocode (assumes the set/array/whatever is 0-indexed):

array set = paths from ls
while(set.length > 0)
{
   int i = RANDOM % set.length
   putInCollage(set[i])
   switch(i)
   {
     0: set = array(set[1 .. set.length-1])
     set.length-1: set = array(set[0 .. set.length-2])
     *: set = array(set[0 .. i-1] + set[i+1 .. set.length-1])
   }
}

Please excuse the roughness of the "code", but I just got home from work. :-)
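One way to render that loop in bash is sketched below; printf stands in for the hypothetical putInCollage, and instead of rebuilding the array on every pick it swaps the chosen element with the last one and shrinks the array, which avoids the repeated slicing.

```shell
#!/bin/bash
# Emit the arguments in random order, one per line, never repeating:
# pick a random index, output that element, then move the last element
# into the hole and drop the tail.
consume_randomly() {
    local set=("$@") len i
    while (( ${#set[@]} > 0 )); do
        len=${#set[@]}
        i=$(( RANDOM % len ))
        printf '%s\n' "${set[i]}"     # stand-in for putInCollage(set[i])
        set[i]=${set[len - 1]}        # fill the gap with the last element
        unset 'set[len - 1]'          # ...and shrink the array by one
    done
}
```

So `consume_randomly *.jpg` prints every matching file exactly once, in random order.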

Of course, I could have just misunderstood you, in which case I hope someone finds this useful for other reasons.

Peace,

-- 
Sam Bisbee




Will [will at willstuff.net]


Wed, 26 Nov 2008 09:53:41 -0500

Thomas Adam wrote:

> 2008/11/24 Will <will@willstuff.net>:
>   
>> There is program in the "coreutils" package that should help--shuf. It
>> will take lines of input and display them in a randomized order:
>>     
>
> AFAIK this is non-portable on things like BSD.  Besides, if you're
> wanting to be coreutil specific, sort learnt the -R flag for
> randomising a sort as of version 6.something.

AFAIK this is a Linux help group. So it's completely portable across the systems concerned here (every distro will have coreutils). The simplest answer is usually best.




Michael Makuch [linuxgazette at makuch.org]


Wed, 26 Nov 2008 09:18:57 -0600

Will wrote:

> Thomas Adam wrote:
>   
>> 2008/11/24 Will <will@willstuff.net>:
>>   
>>     
>>> There is program in the "coreutils" package that should help--shuf. It
>>> will take lines of input and display them in a randomized order:
>>>     
>>>       
>> AFAIK this is non-portable on things like BSD.  Besides, if you're
>> wanting to be coreutil specific, sort learnt the -R flag for
>> randomising a sort as of version 6.something.
>>
>>   
>>     
> AFAIK this is a Linux help group. So it's completely portable across the 
> systems concerned here (every distro will have coreutils). The simplest 
> answer is usually best.
>   

I was wondering about that. Interesting that the Linux-only solution was so quickly dismissed.

Isn't it more instructive to discuss all potential good solutions? Or do the dominant opinions here feel that one should always strive for 100% portability? Even at the expense of more development time? Where do you draw the line?

I am all in favor of promoting portability, but I also code in the real world.

Mike




Samuel Bisbee-vonKaufmann [sbisbee at computervip.com]


Wed, 26 Nov 2008 15:43:51 +0000

>Will wrote:
>> Thomas Adam wrote:
>>   
>>> 2008/11/24 Will <will@willstuff.net>:
>>>   
>>>     
>>>> There is program in the "coreutils" package that should help--shuf. It
>>>> will take lines of input and display them in a randomized order:
>>>>     
>>>>       
>>> AFAIK this is non-portable on things like BSD.  Besides, if you're
>>> wanting to be coreutil specific, sort learnt the -R flag for
>>> randomising a sort as of version 6.something.
>>>
>>>   
>>>     
>> AFAIK this is a Linux help group. So it's completely portable across the 
>> systems concerned here (every distro will have coreutils). The simplest 
>> answer is usually best.
>>   
>
>I was wondering about that. Interesting that the Linux-only solution was 
>so quickly dismissed.
>
>Isn't it more instructive to discuss all potential good solutions? 

That's exactly what people are doing. You will find on this list that people are constantly pushing and challenging, together trying to find the best solution. Usually in a friendly manner. :-)

>Or do 
>the dominant opinions here
>feel that one should always strive for 100% portability? Even at the 
>expense of more development
>time? Where do you draw the line?
>

I don't think anyone is suggesting 100% portability (don't forget MINIX!). However, this was a shell scripting question and there's often little reason to not have shell scripts be portable. It's really one of those "why use /bin/bash when /bin/sh would run all that syntax?" questions.

Also, I would argue that "development time" really wasn't that much more.

>I am all in favor of promoting portability, but I also code in the real 
>world.
>

I always love these lines, "I code for big business, so that makes my experience with programming more realistic than yours." As someone who has coded for all sizes of companies/groups/fun, I have found that it's all the same. The only difference(s) are logistical/political, which manifest differently depending on the environment. However, that has no bearing on what the best solution to the problem is. Just because you have to meet a deadline and therefore go with an exponential time solution doesn't make that solution faster than a linear time solution.

-- 
Sam Bisbee




Ben Okopnik [ben at linuxgazette.net]


Wed, 26 Nov 2008 20:24:10 -0500

On Wed, Nov 26, 2008 at 09:53:41AM -0500, Will wrote:

> Thomas Adam wrote:
> > 2008/11/24 Will <will@willstuff.net>:
> >   
> >> There is program in the "coreutils" package that should help--shuf. It
> >> will take lines of input and display them in a randomized order:
> >>     
> >
> > AFAIK this is non-portable on things like BSD.  Besides, if you're
> > wanting to be coreutil specific, sort learnt the -R flag for
> > randomising a sort as of version 6.something.
> >
> >   
> AFAIK this is a Linux help group.

Actually, we often focus on broader issues whenever possible - as in this case. There's little or nothing restricting solutions to a problem like this to being Linux-only, so why not make it as usable as possible? As I've often noted before, the smartest thing to do in this group (as in many other learning environments) is to take constructive criticism with gratitude rather than, say, bristling at it.

> So it's completely portable across the 
> systems concerned here (every distro will have coreutils).

Well, no. Tom's RootBoot doesn't, for example; most of the utilities in it, including "sort", are built with busybox - and I'm pretty sure that it doesn't support the "-R" option. There are a number of other Linuxen listed in 'http://en.wikipedia.org/wiki/List_of_Linux_distributions#Others', many of which would be an even-money bet for that, too.

> The simplest 
> answer is usually best.

That is certainly a common and trivially-cited platitude - one which is not in the least helpful. What is "simplest"? Doesn't it depend on the problem? Perhaps chopping someone's head off when they need brain surgery would be simplest, but it would not necessarily be best.

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Ben Okopnik [ben at linuxgazette.net]


Wed, 26 Nov 2008 20:31:23 -0500

On Wed, Nov 26, 2008 at 09:18:57AM -0600, Michael Makuch wrote:

> Will wrote:
> >     
> > AFAIK this is a Linux help group. So it's completely portable across the 
> > systems concerned here (every distro will have coreutils). The simplest 
> > answer is usually best.
> 
> I was wondering about that. Interesting that the Linux-only solution was 
> so quickly dismissed.

It wasn't "dismissed", as I recall; rather, it was noted that it wasn't portable. The decision about dismissing it or not is left up to the person who needs to solve the problem - which is squarely where that responsibility should fall. Providing more, better, and broader answers is what we do here so as to give more options to those who need to make those decisions.

> Isn't it more instructive to discuss all potential good solutions? Or do 
> the dominant opinions here
> feel that one should always strive for 100% portability?

Yes to both.

> Even at the 
> expense of more development
> time? Where do you draw the line?

How much time do you have? Feel free to post however many appropriate solutions you see consonant with the time you feel you have to spend on it; others will, theoretically, critique your solutions (again, based on the time that they have) - and out of the discussion comes the best approach that we have. That's what people do in discussion groups.

> I am all in favor of promoting portability, but I also code in the real 
> world.

I'd be fascinated to know what alternatives you believe exist to coding in the real world. Do you believe that some people code in, say, a world that's 90 degrees to this one? Or perhaps in Emerald City? I'd like to see a job app for one of those - or better yet, a Trip-Tik. :)

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Ben Okopnik [ben at linuxgazette.net]


Wed, 26 Nov 2008 20:34:27 -0500

On Wed, Nov 26, 2008 at 03:43:51PM +0000, Samuel Bisbee-vonKaufmann wrote:

> 
> I don't think anyone is suggesting 100% portability (don't forget
> MINIX!). However, this was a shell scripting question and there's
> often little reason to not have shell scripts be portable. It's really
> one of those "why use /bin/bash when /bin/sh would run all that
> syntax?" questions.
> 
> Also, I would argue that "development time" really wasn't that much more.

Often, it comes down to simply using "printf" instead of "echo" and not using {Bash,KSH,C,etc.}-isms.
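To make the echo/printf point concrete, here are two small helpers (a sketch; the function names are made up):

```shell
#!/bin/sh
# echo's -n and -e flags differ between shells (some shells print "-n"
# literally); printf is specified by POSIX and behaves the same everywhere.
say_inline() {
    printf '%s' "$1"       # portable replacement for: echo -n "$1"
}
say_line() {
    printf '%b\n' "$1"     # portable replacement for: echo -e "$1"
}
```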

> >I am all in favor of promoting portability, but I also code in the real 
> >world.
> >
> 
> I always love these lines, "I code for big business, so that makes my
> experience with programming more realistic than yours." As someone who
> has coded for all sizes of companies/groups/fun, I have found that
> it's all the same.

Sam, that implies that your professional ethics are more important than the environment in which you work. How dare you, sir!... I knew I liked you for a reason. :)

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *




Rick Moen [rick at linuxmafia.com]


Wed, 26 Nov 2008 18:56:07 -0800

Quoting Ben Okopnik (ben@linuxgazette.net):

> Actually, we often focus on broader issues whenever possible - as in
> this case. There's little or nothing restricting solutions to a problem
> like this to being Linux-only, so why not make it as usable as possible?

(/me dons his Rhetorical Questions Answered Cheap hat.)

I have noticed over the years that bashisms and Linuxisms tend to increase readability and conciseness -- at least a bit. The examples I can most quickly recall are

  export variable=value 
as opposed to
  variable=value
  export variable
and
  ln -sf foo bar
as opposed to
  if [ -f bar ]; then unlink bar; fi
  ln -s foo bar
Point is, portability can exact a cost. (If you want your Bourne scripts to work on systems that don't support symlinks at all, you'll need to cruft up those "ln -s" commands with a bunch more logic, nei?)

Extensions are crunchy and delicious! ;->

-- 
Cheers,                "I'm sorry Dan, what's right isn't always popular, 
Rick Moen              and what's popular isn't always right."
rick@linuxmafia.com                     -- George R. Moscone, Nov. 27, 1978




Ben Okopnik [ben at linuxgazette.net]


Wed, 26 Nov 2008 22:37:34 -0500

On Wed, Nov 26, 2008 at 06:56:07PM -0800, Rick Moen wrote:

> 
> Point is, portability can exact a cost.  (If you want your Bourne
> scripts to work on systems that don't support symlinks at all, you'll
> need to cruft up those "ln -s" commands with a bunch more logic, nei?)

In Perl, at least, it's simple:

# What do you mean, "Flash Gordon has escaped"???
die "Die, scum-sucking demon from Hell!!!\n" unless eval { symlink('',''); 1 };
 
# Rest of script follows

In Bash, it's a little more crunchy - although there's $OSTYPE to lead you to water.
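For completeness, a plain-sh version might probe the capability directly instead of guessing from $OSTYPE (a sketch; mktemp is assumed to be available):

```shell
#!/bin/sh
# Return 0 if the system lets us create a symlink, nonzero otherwise.
# We try the operation in a scratch directory rather than trusting
# version strings or OS names.
supports_symlinks() {
    tmp=$(mktemp -d) || return 1
    if ln -s target "$tmp/probe" 2>/dev/null; then
        rm -rf "$tmp"
        return 0
    fi
    rm -rf "$tmp"
    return 1
}
```

A script would then bail out early with `supports_symlinks || exit 1`.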

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *

