programming

Oh, the cursed Quicksilver

Blog
Oh, cursed Quicksilver, how quickly you have come to dominate my computing experience. How easy is navigation on my fruity box now that you have entered my world.

No longer can I be content with just an xterminal and the command line. Now that I have seen the power of the three-key launch of any program, or the four-keystroke opening of any file on my system, my mouse and touchpad are forsaken, and all applications are at my fingertips.

Nevermore shall I be satisfied with the Quicksilver-free interface. No longer will I be able to use the blight that is Windows or the cripple of the Gnomed KDE, without full cringing and much angst.

And cursed is the 43 Folders for showing me the light that is Quicksilver. May you experience the pain of QS-free navigation on all your boxes until QS is available everywhere.

10 best programming practices

Book page

From: http://www.perl.com/lpt/a/2005/07/14/bestpractices.html

Perl.com: Ten Essential Development Practices

Ten Essential Development Practices
By Damian Conway
July 14, 2005

The following ten tips come from Perl Best Practices, a new book of Perl coding and development guidelines by Damian Conway.

1. Design the Module's Interface First

The most important aspect of any module is not how it implements the facilities it provides, but the way in which it provides those facilities in the first place. If the module's API is too awkward, or too complex, or too extensive, or too fragmented, or even just poorly named, developers will avoid using it. They'll write their own code instead. In that way, a poorly designed module can actually reduce the overall maintainability of a system.

Designing module interfaces requires both experience and creativity. Perhaps the easiest way to work out how an interface should work is to "play test" it: to write examples of code that will use the module before implementing the module itself. These examples will not be wasted when the design is complete. You can usually recycle them into demos, documentation examples, or the core of a test suite.

The key, however, is to write that code as if the module were already available, and write it the way you'd most like the module to work.

Once you have some idea of the interface you want to create, convert your "play tests" into actual tests (see Tip #2). Then it's just a Simple Matter Of Programming to make the module work the way that the code examples and the tests want it to.

Of course, it may not be possible for the module to work the way you'd most like, in which case attempting to implement it that way will help you determine what aspects of your API are not practical, and allow you to work out what might be an acceptable alternative.
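
For example, a "play test" for a small configuration-file module might look like the following. The module name Local::Config and its methods are invented purely for illustration; nothing here exists yet, which is exactly the point:

   # Play-test code, written before Local::Config is implemented,
   # in the way we'd most like the interface to work...
   use Local::Config;

   # One call should locate, load, and parse the configuration...
   my $config = Local::Config->load('app.conf');

   # Named access to single values, with a sensible default...
   my $timeout = $config->get('timeout', default => 30);

   # ...and to lists of values...
   my @servers = $config->get_list('servers');

   # Updating and saving should be just as direct...
   $config->set(timeout => 2 * $timeout);
   $config->save();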

2. Write the Test Cases Before the Code

Probably the single best practice in all of software development is writing your test suite first.

A test suite is an executable, self-verifying specification of the behavior of a piece of software. If you have a test suite, you can--at any point in the development process--verify that the code works as expected. If you have a test suite, you can--after any changes during the maintenance cycle--verify that the code still works as expected.

Write the tests first. Write them as soon as you know what your interface will be (see #1). Write them before you start coding your application or module. Unless you have tests, you have no unequivocal specification of what the software should do, and no way of knowing whether it does it.

Writing tests always seems like a chore, and an unproductive chore at that: you don't have anything to test yet, so why write tests? Yet most developers will--almost automatically--write driver software to test their new module in an ad hoc way:

> cat try_inflections.pl

# Test my shiny new English inflections module...

use Lingua::EN::Inflect qw( inflect );

# Try some plurals (both standard and unusual inflections)...

my %plural_of = (
   'house'         => 'houses',
   'mouse'         => 'mice',
   'box'           => 'boxes',
   'ox'            => 'oxen',
   'goose'         => 'geese',
   'mongoose'      => 'mongooses', 
   'law'           => 'laws',
   'mother-in-law' => 'mothers-in-law',
);
 
# For each of them, print both the expected result and the actual inflection...

for my $word ( keys %plural_of ) {
   my $expected = $plural_of{$word};
   my $computed = inflect( "PL_N($word)" );
 
   print "For $word:\n", 
         "\tExpected: $expected\n",
         "\tComputed: $computed\n";
}

A driver like that is actually harder to write than a test suite, because you have to worry about formatting the output in a way that is easy to read. It's also much harder to use the driver than it would be to use a test suite, because every time you run it you have to wade through that formatted output and verify "by eye" that everything is as it should be. That's also error-prone; eyes are not optimized for picking out small differences in the middle of large amounts of nearly identical text.

Instead of hacking together a driver program, it's easier to write a test program using the standard Test::Simple module. Instead of print statements showing what's being tested, you just write calls to the ok() subroutine, specifying as its first argument the condition under which things are okay, and as its second argument a description of what you're actually testing:

> cat inflections.t

use Lingua::EN::Inflect qw( inflect );

use Test::Simple qw( no_plan );

my %plural_of = (
   'mouse'         => 'mice',
   'house'         => 'houses',
   'ox'            => 'oxen',
   'box'           => 'boxes',
   'goose'         => 'geese',
   'mongoose'      => 'mongooses', 
   'law'           => 'laws',
   'mother-in-law' => 'mothers-in-law',
);

for my $word ( keys %plural_of ) {
   my $expected = $plural_of{$word};
   my $computed = inflect( "PL_N($word)" );

   ok( $computed eq $expected, "$word -> $expected" );
}

Note that this code loads Test::Simple with the argument qw( no_plan ). Normally that argument would be tests => count, indicating how many tests to expect, but here the tests are generated from the %plural_of table at run time, so the final count will depend on how many entries are in that table. Specifying a fixed number of tests when loading the module is useful if you happen to know that number at compile time, because then the module can also "meta-test": verify that you carried out all the tests you expected to.
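
For example, if the %plural_of table were frozen at its current eight entries, you could declare the plan explicitly when loading the module:

   use Test::Simple tests => 8;

Then Test::Simple would complain if the script ever ran more or fewer than eight tests.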

The Test::Simple program is slightly more concise and readable than the original driver code, and the output is much more compact and informative:

> perl inflections.t

ok 1 - house -> houses
ok 2 - law -> laws
not ok 3 - mongoose -> mongooses
#     Failed test (inflections.t at line 21)
ok 4 - goose -> geese
ok 5 - ox -> oxen
not ok 6 - mother-in-law -> mothers-in-law
#     Failed test (inflections.t at line 21)
ok 7 - mouse -> mice
ok 8 - box -> boxes
1..8
# Looks like you failed 2 tests of 8. 

More importantly, this version requires far less effort to verify the correctness of each test. You just scan down the left margin looking for a not and a comment line.

You might prefer to use the Test::More module instead of Test::Simple. Then you can specify the actual and expected values separately, by using the is() subroutine, rather than ok():

use Lingua::EN::Inflect qw( inflect );
use Test::More qw( no_plan ); # Now using more advanced testing tools

my %plural_of = (
   'mouse'         => 'mice',
   'house'         => 'houses',
   'ox'            => 'oxen',
   'box'           => 'boxes',
   'goose'         => 'geese',
   'mongoose'      => 'mongooses', 
   'law'           => 'laws',
   'mother-in-law' => 'mothers-in-law',
);

for my $word ( keys %plural_of ) {
   my $expected = $plural_of{$word};
   my $computed = inflect( "PL_N($word)" );

   # Test expected and computed inflections for string equality...
   is( $computed, $expected, "$word -> $expected" );
}

Apart from no longer having to type the eq yourself, this version also produces more detailed error messages:

> perl inflections.t

ok 1 - house -> houses
ok 2 - law -> laws
not ok 3 - mongoose -> mongooses
#     Failed test (inflections.t at line 20)
#          got: 'mongeese'
#     expected: 'mongooses'
ok 4 - goose -> geese
ok 5 - ox -> oxen
not ok 6 - mother-in-law -> mothers-in-law
#     Failed test (inflections.t at line 20)
#          got: 'mothers-in-laws'
#     expected: 'mothers-in-law'
ok 7 - mouse -> mice
ok 8 - box -> boxes
1..8
# Looks like you failed 2 tests of 8.

The Test::Tutorial documentation that comes with Perl 5.8 provides a gentle introduction to both Test::Simple and Test::More.

3. Create Standard POD Templates for Modules and Applications

One of the main reasons documentation can often seem so unpleasant is the "blank page effect." Many programmers simply don't know how to get started or what to say.

Perhaps the easiest way to make writing documentation less forbidding (and hence, more likely to actually occur) is to circumvent that initial empty screen by providing a template that developers can cut and paste into their code.

For a module, that documentation template might look something like this:

=head1 NAME

<Module::Name> - <One-line description of module's purpose>

=head1 VERSION

The initial template usually just has:

This documentation refers to <Module::Name> version 0.0.1.

=head1 SYNOPSIS

   use <Module::Name>;

   # Brief but working code example(s) here showing the most common usage(s)
   # This section will be as far as many users bother reading, so make it as
   # educational and exemplary as possible.

=head1 DESCRIPTION

A full description of the module and its features.

May include numerous subsections (i.e., =head2, =head3, etc.).

=head1 SUBROUTINES/METHODS

A separate section listing the public components of the module's interface.

These normally consist of either subroutines that may be exported, or methods
that may be called on objects belonging to the classes that the module
provides.

Name the section accordingly.

In an object-oriented module, this section should begin with a sentence (of the
form "An object of this class represents ...") to give the reader a high-level
context to help them understand the methods that are subsequently described.

=head1 DIAGNOSTICS

A list of every error and warning message that the module can generate (even
the ones that will "never happen"), with a full explanation of each problem,
one or more likely causes, and any suggested remedies.

=head1 CONFIGURATION AND ENVIRONMENT

A full explanation of any configuration system(s) used by the module, including
the names and locations of any configuration files, and the meaning of any
environment variables or properties that can be set. These descriptions must
also include details of any configuration language used.

=head1 DEPENDENCIES

A list of all of the other modules that this module relies upon, including any
restrictions on versions, and an indication of whether these required modules
are part of the standard Perl distribution, part of the module's distribution,
or must be installed separately.

=head1 INCOMPATIBILITIES

A list of any modules that this module cannot be used in conjunction with.
This may be due to name conflicts in the interface, or competition for system
or program resources, or due to internal limitations of Perl (for example, many
modules that use source code filters are mutually incompatible).

=head1 BUGS AND LIMITATIONS

A list of known problems with the module, together with some indication of
whether they are likely to be fixed in an upcoming release.

Also, a list of restrictions on the features the module does provide: data types
that cannot be handled, performance issues and the circumstances in which they
may arise, practical limitations on the size of data sets, special cases that
are not (yet) handled, etc.

The initial template usually just has:

There are no known bugs in this module.

Please report problems to <Maintainer name(s)> (<contact address>)

Patches are welcome.

=head1 AUTHOR

<Author name(s)>  (<contact address>)

=head1 LICENSE AND COPYRIGHT

Copyright (c) <year> <copyright holder> (<contact address>).
All rights reserved.

followed by whatever license you wish to release it under.

For Perl code that is often just:

This module is free software; you can redistribute it and/or modify it under
the same terms as Perl itself. See L<perlartistic>.  This program is
distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

Of course, the specific details that your templates provide may vary from those shown here, according to your other coding practices. The most likely variation will be in the license and copyright, but you may also have specific in-house conventions regarding version numbering, the grammar of diagnostic messages, or the attribution of authorship.

4. Use a Revision Control System

Maintaining control over the creation and modification of your source code is utterly essential for robust team-based development. And not just over source code: you should be revision controlling your documentation, and data files, and document templates, and makefiles, and style sheets, and change logs, and any other resources your system requires.

Just as you wouldn't use an editor without an Undo command or a word processor that can't merge documents, so too you shouldn't use a file system you can't rewind, or a development environment that can't integrate the work of many contributors.

Programmers make mistakes, and occasionally those mistakes will be catastrophic. They will reformat the disk containing the most recent version of the code. Or they'll mistype an editor macro and write zeros all through the source of a critical core module. Or two developers will unwittingly edit the same file at the same time and half their changes will be lost. Revision control systems can prevent those kinds of problems.

Moreover, occasionally the very best debugging technique is to just give up, stop trying to get yesterday's modifications to work correctly, roll the code back to a known stable state, and start over again. Less drastically, comparing the current condition of your code with the most recent stable version from your repository (even just a line-by-line diff) can often help you isolate your recent "improvements" and work out which of them is the problem.
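
With Subversion, for instance, that comparison and rollback can be as simple as:

> svn diff Module.pm      # line-by-line diff against the version you last checked out or committed

> svn revert Module.pm    # give up and restore that pristine version

Every system mentioned below offers an equivalent.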

Revision control systems such as RCS, CVS, Subversion, Monotone, darcs, Perforce, GNU arch, or BitKeeper can protect against calamities, and ensure that you always have a working fallback position if maintenance goes horribly wrong. The various systems have different strengths and limitations, many of which stem from fundamentally different views on what exactly revision control is. It's a good idea to audition the various revision control systems, and find the one that works best for you. Pragmatic Version Control Using Subversion, by Mike Mason (Pragmatic Bookshelf, 2005) and Essential CVS, by Jennifer Vesperman (O'Reilly, 2003) are useful starting points.

5. Create Consistent Command-Line Interfaces

Command-line interfaces have a strong tendency to grow over time, accreting new options as you add features to the application. Unfortunately, the evolution of such interfaces is rarely designed, managed, or controlled, so the set of flags, options, and arguments that a given application accepts are likely to be ad hoc and unique.

This also means they're likely to be inconsistent with the unique ad hoc sets of flags, options, and arguments that other related applications provide. The result is inevitably a suite of programs, each of which is driven in a distinct and idiosyncratic way. For example:

> orchestrate source.txt -to interim.orc

> remonstrate +interim.rem -interim.orc 

> fenestrate  --src=interim.rem --dest=final.wdw
Invalid input format

> fenestrate --help
Unknown option: --help.
Type 'fenestrate -hmo' for help

Here, the orchestrate utility expects its input file as its first argument, while the -to flag specifies its output file. The related remonstrate tool uses -infile and +outfile options instead, with the output file coming first. The fenestrate program seems to require GNU-style "long options:" --src=infile and --dest=outfile, except, apparently, for its oddly named help flag. All in all, it's a mess.

When you're providing a suite of programs, all of them should appear to work the same way, using the same flags and options for the same features across all applications. This enables your users to take advantage of existing knowledge--instead of continually asking you.

Those three programs should work like this:

> orchestrate -i source.txt -o dest.orc

> remonstrate -i source.orc -o dest.rem

> fenestrate  -i source.rem -o dest.wdw
Input file ('source.rem') not a valid Remora file
(type "fenestrate --help" for help)

> fenestrate --help
fenestrate - convert Remora .rem files to Windows .wdw format
Usage: fenestrate [-i <infile>] [-o <outfile>] [-cstq] [-h|-v]
Options:
   -i <infile> Specify input source [default: STDIN]
   -o <outfile> Specify output destination [default: STDOUT]
   -c Attempt to produce a more compact representation
   -h Use horizontal (landscape) layout
   -v Use vertical (portrait) layout
   -s Be strict regarding input
   -t Be extra tolerant regarding input
   -q Run silent
   --version Print version information
   --usage Print the usage line of this summary
   --help Print this summary
   --man Print the complete manpage

Here, every application that takes input and output files uses the same two flags to do so. A user who wants to use the substrate utility (to convert that final .wdw file to a subroutine) is likely to be able to guess correctly the required syntax:

> substrate  -i dest.wdw -o dest.sub

Anyone who can't guess that probably can guess that:

> substrate --help

is likely to render aid and comfort.

A large part of making interfaces consistent is being consistent in specifying the individual components of those interfaces. Some conventions that may help to design consistent and predictable interfaces include the following (a sketch of how they might be implemented with Getopt::Long appears after the list):

  • Require a flag preceding every piece of command-line data, except filenames.

    Users don't want to have to remember that your application requires "input file, output file, block size, operation, fallback strategy," and requires them in that precise order:

    > lustrate sample_data proc_data 1000 normalize log

    They want to be able to say explicitly what they mean, in any order that suits them:

    > lustrate sample_data proc_data -op=normalize -b1000 --fallback=log
  • Provide a flag for each filename, too, especially when a program can be given files for different purposes.

    Users might also not want to remember the order of the two positional filenames, so let them label those arguments as well, and specify them in whatever order they prefer:

    > lustrate -i sample_data -op normalize -b1000 --fallback log -o proc_data
  • Use a single - prefix for short-form flags, up to three letters (-v, -i, -rw, -in, -out).

    Experienced users appreciate short-form flags as a way of reducing typing and limiting command-line clutter. Don't make them type two dashes in these shortcuts.

  • Use a double -- prefix for longer flags (--verbose, --interactive, --readwrite, --input, --output).

    Flags that are complete words improve the readability of a command line (in a shell script, for example). The double dash also helps to distinguish between the longer flag name and any nearby file names.

  • If a flag expects an associated value, allow an optional = between the flag and the value.

    Some people prefer to visually associate a value with its preceding flag:

    > lustrate -i=sample_data -op=normalize -b=1000 --fallback=log -o=proc_data

    Others don't:

    > lustrate -i sample_data -op normalize -b1000 --fallback log -o proc_data

    Still others want a bit each way:

    > lustrate -i sample_data -o proc_data -op=normalize -b=1000 --fallback=log

    Let the user choose.

  • Allow single-letter options to be "bundled" after a single dash.

    It's irritating to have to type repeated dashes for a series of flags:

    > lustrate -i sample_data -v -l -x

    Allow experienced users to also write:

    > lustrate -i sample_data -vlx
  • Provide a multi-letter version of every single-letter flag.

    Short-form flags may be nice for experienced users, but they can be troublesome for new users: hard to remember and even harder to recognize. Don't force people to do either. Give them a verbose alternative to every concise flag; full words that are easier to remember, and also more self-documenting in shell scripts.

  • Always allow - as a special filename.

    A widely used convention is that a dash (-) where an input file is expected means "read from standard input," and a dash where an output file is expected means "write to standard output."

  • Always allow -- as a file list marker.

    Another widely used convention is that the appearance of a double dash (--) on the command line marks the end of any flagged options, and indicates that the remaining arguments are a list of filenames, even if some of them look like flags.
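
Most of these conventions can be implemented with the standard Getopt::Long module. The following is only a rough sketch for the hypothetical lustrate utility used in the earlier examples; the exact flag set, defaults, and usage message are assumptions, not part of any real application:

   # Sketch only: command-line handling for the hypothetical lustrate utility...
   use Getopt::Long;

   # Allow single-letter flags to be bundled (-vlx)...
   Getopt::Long::Configure( 'bundling' );

   # Treat '-' as meaning STDIN/STDOUT by default...
   my %opt = ( input => q{-}, output => q{-} );

   GetOptions(
      'i|input=s'     => \$opt{input},      # -i <infile>  or  --input=<infile>
      'o|output=s'    => \$opt{output},     # -o <outfile> or  --output=<outfile>
      'op=s'          => \$opt{op},         # --op=<operation>
      'b|blocksize=i' => \$opt{blocksize},  # -b<size>     or  --blocksize=<size>
      'fallback=s'    => \$opt{fallback},   # --fallback=<strategy>
      'v|verbose'     => \$opt{verbose},    # -v           or  --verbose
      'help'          => \$opt{help},       # --help
   ) or die "Usage: lustrate [options] [--] [file ...]\n";

   # Getopt::Long stops option processing at '--', so anything
   # left in @ARGV is a plain list of filenames...
   my @files = @ARGV;

Getopt::Long also accepts an optional = between a long flag and its value, so both --fallback log and --fallback=log work without any extra effort.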

6. Agree Upon a Coherent Layout Style and Automate It with perltidy

Formatting. Indentation. Style. Code layout. Whatever you choose to call it, it's one of the most contentious aspects of programming discipline. More and bloodier wars have been fought over code layout than over just about any other aspect of coding.

What is the best practice here? Should you use classic Kernighan and Ritchie style? Or go with BSD code formatting? Or adopt the layout scheme specified by the GNU project? Or conform to the Slashcode coding guidelines?

Of course not! Everyone knows that <insert your personal coding style here> is the One True Layout Style, the only sane choice, as ordained by <insert your favorite Programming Deity here> since Time Immemorial! Any other choice is manifestly absurd, willfully heretical, and self-evidently a Work of Darkness!

That's precisely the problem. When deciding on a layout style, it's hard to decide where rational choices end and rationalized habits begin.

Adopting a coherently designed approach to code layout, and then applying that approach consistently across all your coding, is fundamental to best-practice programming. Good layout can improve the readability of a program, help detect errors within it, and make the structure of your code much easier to comprehend. Layout matters.

However, most coding styles--including the four mentioned earlier--confer those benefits almost equally well. While it's true that having a consistent code layout scheme matters very much indeed, the particular code layout scheme you ultimately decide upon does not matter at all! All that matters is that you adopt a single, coherent style; one that works for your entire programming team, and, having agreed upon that style, that you then apply it consistently across all your development.

In the long term, it's best to train yourself and your team to code in a consistent, rational, and readable style. However, the time and commitment necessary to accomplish that isn't always available. In such cases, a reasonable compromise is to prescribe a standard code-formatting tool that must be applied to all code before it's committed, reviewed, or otherwise displayed in public.

There is now an excellent code formatter available for Perl: perltidy. It provides an extensive range of user-configurable options for indenting, block delimiter positioning, column-like alignment, and comment positioning.

Using perltidy, you can convert code like this:

if($sigil eq '$'){
   if($subsigil eq '?'){ 
       $sym_table{substr($var_name,2)}=delete $sym_table{locate_orig_var($var)};
       $internal_count++;$has_internal{$var_name}++
   } else {
       ${$var_ref} =
           q{$sym_table{$var_name}}; $external_count++; $has_external{$var_name}++;
}} elsif ($sigil eq '@'&&$subsigil eq '?') {
   @{$sym_table{$var_name}} = grep
       {defined $_} @{$sym_table{$var_name}};
} elsif ($sigil eq '%' && $subsigil eq '?') {
delete $sym_table{$var_name}{$EMPTY_STR}; } else
{
${$var_ref}
=
q{$sym_table{$var_name}}
}

into something readable:

if ( $sigil eq '$' ) {
   if ( $subsigil eq '?' ) {
       $sym_table{ substr( $var_name, 2 ) }
           = delete $sym_table{ locate_orig_var($var) };
       $internal_count++;
       $has_internal{$var_name}++;
   }
   else {
       ${$var_ref} = q{$sym_table{$var_name}};
       $external_count++;
       $has_external{$var_name}++;
   }
}
elsif ( $sigil eq '@' && $subsigil eq '?' ) {
   @{ $sym_table{$var_name} }
       = grep {defined $_} @{ $sym_table{$var_name} };
}
elsif ( $sigil eq '%' && $subsigil eq '?' ) {
   delete $sym_table{$var_name}{$EMPTY_STR};
}
else {
   ${$var_ref} = q{$sym_table{$var_name}};
}

Mandating that everyone use a common tool to format their code can also be a simple way of sidestepping the endless objections, acrimony, and dogma that always surround any discussion on code layout. If perltidy does all the work for them, then it will cost developers almost no effort to adopt the new guidelines. They can simply set up an editor macro that will "straighten" their code whenever they need to.
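
For example, the agreed style might boil down to a single command (the specific flags here are just one possible configuration; see the perltidy documentation for the full set):

> perltidy -l=78 -i=4 -ce mymodule.pm    # writes the reformatted code to mymodule.pm.tdy

> perltidy -b mymodule.pm                # or reformats in place, keeping mymodule.pm.bak

The same flags can be placed in a shared .perltidyrc file so that everyone's invocation stays identical.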

7. Code in Commented Paragraphs

A paragraph is a collection of statements that accomplish a single task: in literature, it's a series of sentences conveying a single idea; in programming, a series of instructions implementing a single step of an algorithm.

Break each piece of code into sequences that achieve a single task, placing a single empty line between each sequence. To further improve the maintainability of the code, place a one-line comment at the start of each such paragraph, describing what the sequence of statements does. Like so:

# Process an array that has been recognized...
sub addarray_internal {
   my ($var_name, $needs_quotemeta) = @_;

   # Cache the original...
   $raw .= $var_name;

   # Build meta-quoting code, if requested...
   my $quotemeta = $needs_quotemeta ?  q{map {quotemeta $_} } : $EMPTY_STR;

   # Expand elements of variable, conjoin with ORs...
   my $perl5pat = qq{(??{join q{|}, $quotemeta \@{$var_name}})};

   # Insert debugging code if requested...
   my $type = $quotemeta ? 'literal' : 'pattern';
   debug_now("Adding $var_name (as $type)");
   add_debug_mesg("Trying $var_name (as $type)");

   return $perl5pat;
}

Paragraphs are useful because humans can focus on only a few pieces of information at once. Paragraphs are one way of aggregating small amounts of related information, so that the resulting "chunk" can fit into a single slot of the reader's limited short-term memory. Paragraphs enable the physical structure of a piece of writing to reflect and emphasize its logical structure.

Adding comments at the start of each paragraph further enhances the chunking by explicitly summarizing the purpose of each chunk (note: the purpose, not the behavior). Paragraph comments need to explain why the code is there and what it achieves, not merely paraphrase the precise computational steps it's performing.

Note, however, that the contents of paragraphs are only of secondary importance here. It is the vertical gaps separating each paragraph that are critical. Without them, the readability of the code declines dramatically, even if the comments are retained:

sub addarray_internal {
   my ($var_name, $needs_quotemeta) = @_;
   # Cache the original...
   $raw .= $var_name;
   # Build meta-quoting code, if required...
   my $quotemeta = $needs_quotemeta ?  q{map {quotemeta $_} } : $EMPTY_STR;
   # Expand elements of variable, conjoin with ORs...
   my $perl5pat = qq{(??{join q{|}, $quotemeta \@{$var_name}})};
   # Insert debugging code if requested...
   my $type = $quotemeta ? 'literal' : 'pattern';
   debug_now("Adding $var_name (as $type)");
   add_debug_mesg("Trying $var_name (as $type)");
   return $perl5pat;
}

8. Throw Exceptions Instead of Returning Special Values or Setting Flags

Returning a special error value on failure, or setting a special error flag, is a very common error-handling technique. Collectively, they're the basis for virtually all error notification from Perl's own built-in functions. For example, the built-ins eval, exec, flock, open, print, stat, and system all return special values on error. Unfortunately, they don't all use the same special value. Some of them also set a flag on failure. Sadly, it's not always the same flag. See the perlfunc manpage for the gory details.

Apart from the obvious consistency problems, error notification via flags and return values has another serious flaw: developers can silently ignore flags and return values, and ignoring them requires absolutely no effort on the part of the programmer. In fact, in a void context, ignoring return values is Perl's default behavior. Ignoring an error flag that has suddenly appeared in a special variable is just as easy: you simply don't bother to check the variable.

Moreover, because ignoring a return value is the void-context default, there's no syntactic marker for it. There's no way to look at a program and immediately see where a return value is deliberately being ignored, which means there's also no way to be sure that it's not being ignored accidentally.

The bottom line: regardless of the programmer's (lack of) intention, an error indicator is being ignored. That's not good programming.

Ignoring error indicators frequently causes programs to propagate errors in entirely the wrong direction. For example:

# Find and open a file by name, returning the filehandle
# or undef on failure...
sub locate_and_open {
   my ($filename) = @_;

   # Check acceptable directories in order...
   for my $dir (@DATA_DIRS) {
       my $path = "$dir/$filename";

       # If file exists in an acceptable directory, open and return it...
       if (-r $path) {
           open my $fh, '<', $path;
           return $fh;
       }
   }

   # Fail if all possible locations tried without success...
   return;
}

# Load file contents up to the first <DATA/> marker...
sub load_header_from {
   my ($fh) = @_;

   # Use DATA tag as end-of-"line"...
   local $/ = '<DATA/>';

   # Read to end-of-"line"...
   return <$fh>;
}

# and later...
for my $filename (@source_files) {
   my $fh = locate_and_open($filename);
   my $head = load_header_from($fh);
   print $head;
}

The locate_and_open() subroutine simply assumes that the call to open works, immediately returning the filehandle ($fh), whatever the actual outcome of the open. Presumably, the expectation is that whoever calls locate_and_open() will check whether the return value is a valid filehandle.

Except, of course, "whoever" doesn't check. Instead of testing for failure, the main for loop takes the failure value and immediately propagates it "across" the block, to the rest of the statements in the loop. That causes the call to load_header_from() to propagate the error value "downwards." It's in that subroutine that the attempt to treat the failure value as a filehandle eventually kills the program:

readline() on unopened filehandle at demo.pl line 28.

Code like that--where an error is reported in an entirely different part of the program from where it actually occurred--is particularly onerous to debug.

Of course, you could argue that the fault lies squarely with whoever wrote the loop, for using locate_and_open() without checking its return value. In the narrowest sense, that's entirely correct--but the deeper fault lies with whoever actually wrote locate_and_open() in the first place, or at least, whoever assumed that the caller would always check its return value.

Humans simply aren't like that. Rocks almost never fall out of the sky, so humans soon conclude that they never do, and stop looking up for them. Fires rarely break out in their homes, so humans soon forget that they might, and stop testing their smoke detectors every month. In the same way, programmers inevitably abbreviate "almost never fails" to "never fails," and then simply stop checking.

That's why so very few people bother to verify their print statements:

if (!print 'Enter your name: ') {
   print {*STDLOG} warning => 'Terminal went missing!'
}

It's human nature to "trust but not verify."

Human nature is why returning an error indicator is not best practice. Errors are (supposed to be) unusual occurrences, so error markers will almost never be returned. Those tedious and ungainly checks for them will almost never do anything useful, so eventually they'll be quietly omitted. After all, leaving the tests off almost always works just fine. It's so much easier not to bother. Especially when not bothering is the default!

Don't return special error values when something goes wrong; throw an exception instead. The great advantage of exceptions is that they reverse the usual default behaviors, bringing untrapped errors to immediate and urgent attention. On the other hand, ignoring an exception requires a deliberate and conspicuous effort: you have to provide an explicit eval block to neutralize it.

The locate_and_open() subroutine would be much cleaner and more robust if the errors within it threw exceptions:

use Carp qw( croak );   # provides croak()

# Find and open a file by name, returning the filehandle
# or throwing an exception on failure...
sub locate_and_open {
   my ($filename) = @_;

   # Check acceptable directories in order...
   for my $dir (@DATA_DIRS) {
       my $path = "$dir/$filename";

       # If file exists in acceptable directory, open and return it...
       if (-r $path) {
           open my $fh, '<', $path
               or croak( "Located $filename at $path, but could not open");
           return $fh;
       }
   }

   # Fail if all possible locations tried without success...
   croak( "Could not locate $filename" );
}

# and later...
for my $filename (@source_files) {
   my $fh = locate_and_open($filename);
   my $head = load_header_from($fh);
   print $head;
}

Notice that the main for loop didn't change at all. The developer using locate_and_open() still assumes that nothing can go wrong. Now there's some justification for that expectation, because if anything does go wrong, the thrown exception will automatically terminate the loop.

Exceptions are a better choice even if you are the careful type who religiously checks every return value for failure:

SOURCE_FILE:
for my $filename (@source_files) {
   my $fh = locate_and_open($filename);
   next SOURCE_FILE if !defined $fh;
   my $head = load_header_from($fh);
   next SOURCE_FILE if !defined $head;
   print $head;
}

Constantly checking return values for failure clutters your code with validation statements, often greatly decreasing its readability. In contrast, exceptions allow an algorithm to be implemented without having to intersperse any error-handling infrastructure at all. You can factor the error-handling out of the code and either relegate it to after the surrounding eval, or else dispense with it entirely:

for my $filename (@directory_path) {

   # Just ignore any source files that don't load...
   eval {
       my $fh = locate_and_open($filename);
       my $head = load_header_from($fh);
       print $head;
   }
}
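
And when you do need to react to a failure, the error-handling can still live in a single place after the eval, rather than being interleaved with the algorithm. A minimal sketch using the built-in $@ variable:

for my $filename (@source_files) {

   # Attempt the whole sequence; any exception is caught by the eval...
   eval {
       my $fh   = locate_and_open($filename);
       my $head = load_header_from($fh);
       print $head;
   };

   # ...and is handled (or merely reported) in one place afterwards...
   if ($@) {
       warn "Skipping $filename: $@";
   }
}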

9. Add New Test Cases Before You Start Debugging

The first step in any debugging process is to isolate the incorrect behavior of the system, by producing the shortest demonstration of it that you reasonably can. If you're lucky, this may even have been done for you:

To: DCONWAY@cpan.org
From: sascha@perlmonks.org
Subject: Bug in inflect module

Zdravstvuite,

I have been using your Lingua::EN::Inflect module to normalize terms in a
data-mining application I am developing, but there seems to be a bug in it,
as the following example demonstrates:

   use Lingua::EN::Inflect qw( PL_N );
   print PL_N('man'), "\n";       # Prints "men", as expected
   print PL_N('woman'), "\n";     # Incorrectly prints "womans"

Once you have distilled a short working example of the bug, convert it to a series of tests, such as:

use Lingua::EN::Inflect qw( PL_N );
use Test::More qw( no_plan );
is( PL_N('man'),   'men',   'man -> men'     );
is( PL_N('woman'), 'women', 'woman -> women' );

Don't try to fix the problem straight away, though. Instead, immediately add those tests to your test suite. If that testing has been well set up, that can often be as simple as adding a couple of entries to a table:

my %plural_of = (
   'mouse'         => 'mice',
   'house'         => 'houses',
   'ox'            => 'oxen',
   'box'           => 'boxes',
   'goose'         => 'geese',
   'mongoose'      => 'mongooses', 
   'law'           => 'laws',
   'mother-in-law' => 'mothers-in-law', 

   # Sascha's bug, reported 27 August 2004...
   'man'           => 'men',
   'woman'         => 'women',
);

The point is: if the original test suite didn't report this bug, then that test suite was broken. It simply didn't do its job (finding bugs) adequately. Fix the test suite first by adding tests that cause it to fail:

> perl inflections.t
ok 1 - house -> houses
ok 2 - law -> laws
ok 3 - man -> men
ok 4 - mongoose -> mongooses
ok 5 - goose -> geese
ok 6 - ox -> oxen
not ok 7 - woman -> women
#     Failed test (inflections.t at line 20)
#          got: 'womans'
#     expected: 'women'
ok 8 - mother-in-law -> mothers-in-law
ok 9 - mouse -> mice
ok 10 - box -> boxes
1..10
# Looks like you failed 1 tests of 10.

Once the test suite is detecting the problem correctly, then you'll be able to tell when you've correctly fixed the actual bug, because the tests will once again fall silent.

This approach to debugging is most effective when the test suite covers the full range of manifestations of the problem. When adding test cases for a bug, don't just add a single test for the simplest case. Make sure you include the obvious variations as well:

my %plural_of = (
   'mouse'         => 'mice',
   'house'         => 'houses',
   'ox'            => 'oxen',
   'box'           => 'boxes',
   'goose'         => 'geese',
   'mongoose'      => 'mongooses', 
   'law'           => 'laws',
   'mother-in-law' => 'mothers-in-law', 

   # Sascha's bug, reported 27 August 2004...
   'man'           => 'men',
   'woman'         => 'women',
   'human'         => 'humans',
   'man-at-arms'   => 'men-at-arms', 
   'lan'           => 'lans',
   'mane'          => 'manes',
   'moan'          => 'moans',
);

The more thoroughly you test the bug, the more completely you will fix it.

10. Don't Optimize Code--Benchmark It

If you need a function to remove duplicate elements of an array, it's natural to think that a "one-liner" like this:

sub uniq { return keys %{ { map {$_=>1} @_ } } }

will be more efficient than two statements:

sub uniq {
   my %seen;
   return grep {!$seen{$_}++} @_;
}

Unless you are deeply familiar with the internals of the Perl interpreter (in which case you already have far more serious personal issues to deal with), intuitions about the relative performance of two constructs are exactly that: unconscious guesses.

The only way to know for sure which of two--or more--alternatives will perform better is to actually time each of them. The standard Benchmark module makes that easy:

# A short list of not-quite-unique values...
our @data = qw( do re me fa so la ti do );

# Various candidates...
sub unique_via_anon {
   return keys %{ { map {$_=>1} @_ } };
}

sub unique_via_grep {
   my %seen;
   return grep { !$seen{$_}++ } @_;
}

sub unique_via_slice {
   my %uniq;
   @uniq{@_} = ();
   return keys %uniq;
}

# Compare the current set of data in @data
sub compare {
   my ($title) = @_;
   print "\n[$title]\n";

   # Create a comparison table of the various timings, making sure that
   # each test runs at least 10 CPU seconds...
   use Benchmark qw( cmpthese );
   cmpthese -10, {
       anon  => 'my @uniq = unique_via_anon(@data)',
       grep  => 'my @uniq = unique_via_grep(@data)',
       slice => 'my @uniq = unique_via_slice(@data)',
   };

   return;
}

compare('8 items, 10% repetition');

# Two copies of the original data...
@data = (@data) x 2;
compare('16 items, 56% repetition');

# One hundred copies of the original data...
@data = (@data) x 50;
compare('800 items, 99% repetition');

The cmpthese() subroutine takes a number, followed by a reference to a hash of tests. The number specifies either the exact number of times to run each test (if the number is positive), or the absolute number of CPU seconds to run the test for (if the number is negative). Typical values are around 10,000 repetitions or ten CPU seconds, but the module will warn you if the test is too short to produce an accurate benchmark.

The keys of the test hash are the names of your tests, and the corresponding values specify the code to be tested. Those values can be either strings (which are eval'd to produce executable code) or subroutine references (which are called directly).
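
For example, the comparison above could pass closures instead of strings, which avoids the eval step entirely:

   cmpthese -10, {
      anon  => sub { my @uniq = unique_via_anon(@data)  },
      grep  => sub { my @uniq = unique_via_grep(@data)  },
      slice => sub { my @uniq = unique_via_slice(@data) },
   };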

The benchmarking code shown above would print out something like the following:

[8 items, 10% repetition]
          Rate  anon  grep slice
anon   28234/s    --  -24%  -47%
grep   37294/s   32%    --  -30%
slice  53013/s   88%   42%    --

[16 items, 56% repetition]
          Rate  anon  grep slice
anon   21283/s    --  -28%  -51%
grep   29500/s   39%    --  -32%
slice  43535/s  105%   48%    --

[800 items, 99% repetition]
          Rate  anon  grep slice
anon     536/s    --  -65%  -89%
grep    1516/s  183%    --  -69%
slice   4855/s  806%  220%    --

Each of the tables printed has a separate row for each named test. The first column lists the absolute speed of each candidate in repetitions per second, while the remaining columns allow you to compare the relative performance of any two tests. For example, in the final test, tracing across the grep row to the anon column reveals that the grepped solution was 1.83 times (183 percent) faster than using an anonymous hash. Tracing further across the same row also indicates that grepping was 69 percent slower (-69 percent faster) than slicing.

Overall, the indication from the three tests is that the slicing-based solution is consistently the fastest for this particular set of data on this particular machine. It also appears that as the data set increases in size, slicing also scales much better than either of the other two approaches.

However, those two conclusions are effectively drawn from only three data points (namely, the three benchmarking runs). To get a more definitive comparison of the three methods, you'd also need to test other possibilities, such as a long list of non-repeating items, or a short list with nothing but repetitions.

Better still, test on the real data that you'll actually be "unique-ing."

For example, if that data is a sorted list of a quarter of a million words, with only minimal repetitions, and which has to remain sorted, then test exactly that:

use Perl6::Slurp;    # assuming slurp() comes from the Perl6::Slurp module
our @data = slurp '/usr/share/biglongwordlist.txt';

use Benchmark qw( cmpthese );

cmpthese 10, {
    # Note: the non-grepped solutions need a post-uniqification re-sort
    anon  => 'my @uniq = sort(unique_via_anon(@data))',
    grep  => 'my @uniq = unique_via_grep(@data)',
    slice => 'my @uniq = sort(unique_via_slice(@data))',
};

Not surprisingly, this benchmark indicates that the grepped solution is markedly superior on a large sorted data set:

       s/iter  anon slice  grep
anon     4.28    --   -3%  -46%
slice    4.15    3%    --  -44%
grep     2.30   86%   80%    --

Perhaps more interestingly, the grepped solution still benchmarks as being marginally faster when the two hash-based approaches aren't re-sorted. This suggests that the better scalability of the sliced solution as seen in the earlier benchmark is a localized phenomenon, and is eventually undermined by the growing costs of allocation, hashing, and bucket-overflows as the sliced hash grows very large.

Above all, that last example demonstrates that benchmarks only benchmark the cases you actually benchmark, and that you can only draw useful conclusions about performance from benchmarking real data.

Perl.com Compilation Copyright © 1998-2005 O'Reilly Media, Inc.

My butt is numb

Blog
It's 9:20 at night. I have just finished up and launched the website, the user-facing part of online rostering for the UPA. I have nominally been sitting for, according to the time clock I use, 11.5 hours. That's eleven and a half billable hours.

No wonder my butt is numb.

I need a run.

MySQL: dumping data from a single table

To dump data from a single table, use the --tables option. Otherwise, mysqldump may interpret the table name as a database name.

mysqldump -p -T mySubdir --tables myDB myTable
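
Here -p prompts for a password and -T mySubdir makes the server write a tab-delimited myTable.txt data file (plus a myTable.sql file containing the CREATE TABLE statement) into mySubdir, so that directory must be writable by the MySQL server. To capture ordinary SQL statements instead, drop -T and redirect the output:

mysqldump -p --tables myDB myTable > myTable.sql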

Bad Performance Review

Blog
I received a bad performance review today. I'll admit it was a bit of a shock, though in retrospect, not surprising.

In the past two weeks, I've had food poisoning (-2 days), a migraine (-1 day), traveled to Virginia for my father-in-law's open-heart surgery recovery (-4 days), and to Pasadena to deal with a condo flooding (-2 days). I desperately want to say, "Look! I'm not making shit up! I'm not making up excuses!" but the end result is that I'm behind in a project and it's affecting not only one client/customer/project, but also other projects.

And I don't like it one bit.

I'm going full tilt (20 minutes work, 5 minutes pause, 14+ hours today) to get this stuff done, but I don't feel like I'm getting any closer to the end. The more I do the more I see I have left to do. Geez, does it ever end?

Ta-da!

I have officially posted my most boring, whiny post ever. This is why blogs suck. It's someone whining about a life that is actually pretty damn fucking good, with just a hint of stress in it.

The good thing about today? I didn't cry. I realized that, well, you know, crying isn't going to help a darn thing. When I'm done, I'm still going to have all this work to do.

Nothing to be done about it? Then don't worry about it.

Postnuke to Drupal Conversions: phpbb2.0 forums

Book page

Converting phpBB 2.0 forums to Drupal 4.5 forums. This is from a comment at http://drupal.org/node/12311.

# Forums
# Add the phpbb forum topics
#Replace XXX with the vid of the vocabulary you want to create, e.g., on a fresh drupal install, you can use "1" - otherwise, check your sequences table for the next available number
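# e.g., to find the next free vid: SELECT MAX(vid) FROM vocabulary; (then use the next number up)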
INSERT INTO vocabulary
VALUES (XXX, "Forum", "Topics for forums", "", 0, 1, 0, 1, "forum", -10);
# add the forum head topics by forum categories
#Replace YYY with the next available term data TID from your sequences table.
#For a fresh install, you can just delete "+ YYY". Replace XXX with the number you used above.
INSERT INTO term_data (tid, vid, name, description, weight)
SELECT cat_id + YYY, XXX, cat_title, cat_title, 0 FROM phpbb_categories;
#YYY same number as above or delete
INSERT INTO term_hierarchy (tid, parent)
SELECT cat_id + YYY, 0 FROM phpbb_categories;
# add the forum specific topics.
#Check your term_data table and find the highest TID number
#and replace ZZZ with a higher number.
#Use same XXX as above.
INSERT INTO term_data (tid, vid, name, description, weight)
SELECT forum_id + ZZZ, XXX, forum_name, forum_desc, 0 FROM phpbb_forums;
INSERT INTO term_hierarchy (tid, parent)
SELECT forum_id + ZZZ, cat_id + YYY FROM phpbb_forums;
#
# Create temporary tables for sorting topics and comments.
#
DROP TABLE IF EXISTS temp_posts;
CREATE TABLE temp_posts (
post_id mediumint(8) UNSIGNED NOT NULL auto_increment,
topic_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
forum_id smallint(5) UNSIGNED DEFAULT '0' NOT NULL,
poster_id mediumint(8) DEFAULT '0' NOT NULL,
post_time int(11) DEFAULT '0' NOT NULL,
post_edit_time int(11),
post_subject char(120),
post_text text,
PRIMARY KEY (post_id),
KEY forum_id (forum_id),
KEY topic_id (topic_id),
KEY poster_id (poster_id),
KEY post_time (post_time)
);
DROP TABLE IF EXISTS temp_node;
CREATE TABLE temp_node (
post_id mediumint(8) UNSIGNED NOT NULL auto_increment,
topic_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
PRIMARY KEY (post_id),
KEY topic_id (topic_id)
);
#
# Copy into temporary table topics without comments
#
INSERT INTO temp_node (post_id,topic_id)
SELECT MIN(post_id), topic_id
FROM phpbb_posts
GROUP BY topic_id;
INSERT INTO temp_posts (post_id, topic_id,forum_id,poster_id, post_time,post_edit_time,post_subject,post_text)
SELECT c.post_id, c.topic_id, a.forum_id, IF(a.poster_id='-1','0',a.poster_id), a.post_time, a.post_edit_time, REPLACE(b.post_subject, CONCAT(':',b.bbcode_uid),''), REPLACE(b.post_text, CONCAT(':',b.bbcode_uid),'')
FROM phpbb_posts AS a, phpbb_posts_text AS b, temp_node AS c
WHERE c.post_id=a.post_id AND c.post_id=b.post_id;
#
# Insert nid and tid from temp_posts into term_node
#
#check your node table and find the highest NID
#and replace WWW with a higher number.
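# e.g.: SELECT MAX(nid) FROM node; (then pick a higher number)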
#Use same ZZZ as above
INSERT INTO term_node (nid,tid)
SELECT WWW+topic_id,ZZZ+forum_id
FROM temp_posts;
ALTER TABLE term_node ORDER BY nid;
#
# Insert forum topics from temp_posts into node
#Use same WWW as above
INSERT INTO node (nid,type,title,uid,created,comment,body,changed)
SELECT WWW+topic_id,'forum',post_subject,poster_id,post_time,'2',post_text,IF(post_edit_time<>'NULL',post_edit_time,post_time)
FROM temp_posts;
ALTER TABLE node ORDER BY nid;
#
# Insert nid into forum
#Use same WWW and ZZZ as above
DELETE FROM forum;
INSERT INTO forum (nid,tid)
SELECT WWW+topic_id,ZZZ+forum_id
FROM temp_posts;
#
# Insert comments into comments for topics from temp_posts
#Use same WWW as above
INSERT INTO comments (nid,uid,subject,comment,hostname,timestamp,users)
SELECT WWW+a.topic_id,
CASE WHEN a.poster_id='-1' THEN '0' ELSE a.poster_id END,
REPLACE(c.post_subject, CONCAT(':',c.bbcode_uid),''),
REPLACE(c.post_text, CONCAT(':',c.bbcode_uid),''),
CONCAT_WS('.',CONV(SUBSTRING(a.poster_ip,1,2),16,10),CONV(SUBSTRING(a.poster_ip,3,2),16,10),CONV(SUBSTRING(a.poster_ip,5,2),16,10),CONV(SUBSTRING(a.poster_ip,7,2),16,10)),
a.post_time,'a:1:{i:0;i:0;}'
FROM phpbb_posts AS a LEFT JOIN temp_posts AS b ON a.post_id=b.post_id,phpbb_posts_text AS c
WHERE b.post_id IS NULL AND a.post_id=c.post_id;
ALTER TABLE comments ORDER BY cid;
UPDATE comments,node
SET comments.subject=IF(comments.subject='',CONCAT('Re:',node.title),comments.subject)
WHERE comments.nid=node.nid;
DROP TABLE IF EXISTS temp_posts;
DROP TABLE IF EXISTS temp_node;
#replace UUU with number higher than your highest current UID,
#or delete +UUU if this is fresh install
INSERT INTO users (uid,name,pass,mail,signature,timestamp,status,init,rid)
SELECT user_id+UUU,username,user_password,user_email,user_sig,IF(user_session_time='0',user_regdate,user_session_time),'1',user_email,'2'
FROM phpbb_users
WHERE user_id>1;
#replace WWW
INSERT INTO node_comment_statistics(
nid,
cid,
last_comment_timestamp,
last_comment_name,
last_comment_uid,
comment_count
)
SELECT
t.topic_id + WWW,
0,
t.topic_time,
p.username,
t.topic_poster,
t.topic_replies
FROM phpbb_topics t, users p
WHERE t.topic_poster = p.uid;
#replace WWW
UPDATE node_comment_statistics n, phpbb_topics z SET
n.last_comment_timestamp = z.topic_last_post_id
WHERE n.nid = z.topic_id + WWW AND z.topic_last_post_id != 0;
UPDATE node_comment_statistics n, users z, phpbb_posts p SET
n.last_comment_name = z.username, n.last_comment_uid = z.uid
WHERE p.post_id = n.last_comment_timestamp and p.poster_id = z.uid;
UPDATE node_comment_statistics n, phpbb_posts p SET
n.last_comment_timestamp = p.post_time
WHERE p.post_id = n.last_comment_timestamp AND n.last_comment_timestamp != 0 ;
#
# Update Drupal variables
# This may not work and you may have to update the sequences table manually
SELECT @term_data_tid:=MAX(tid) FROM term_data;
SELECT @comments_cid:=MAX(cid) FROM comments;
SELECT @node_nid:=MAX(nid) FROM node WHERE type = 'forum';
SELECT @users_uid:=MAX(uid) FROM users;
UPDATE sequences SET id=@term_data_tid WHERE name='term_data_tid';
UPDATE sequences SET id=@comments_cid WHERE name = 'comments_cid';
UPDATE sequences SET id=@node_nid WHERE name = 'node_nid';
UPDATE sequences SET id=@users_uid WHERE name = 'users_uid';
#Now you have to install the Drupal BB code module AND the Drupal Quote Module, and you have to hack them:
#In the quote module replace function _quote_filter_process($text) with this:
function _quote_filter_process($text) {
  // Quoting with or without specifying the source (code borrowed from bbcode.module)
  // Thanks: function based on code from punbb.org
  if (strpos($text, '[quote') !== false) {
    $text = preg_replace('#\[quote=(?:&quot;|"|\')?(.*?)["\']?(?:&quot;|"|\')?\]#si', '
'.'\\1'." ".t("wrote:").'
', $text);
    $text = str_replace('[quote]', '
'.t("Quote:").'
', $text);
    $text = str_replace('[/quote]', '
', $text);
    $text = preg_replace('#\[quote:(.*?)=(?:&quot;|"|\')?(.*?)["\']?(?:&quot;|"|\')?\]#si', '
'.'\\2'." ".t("wrote:").'
', $text);
    $text = str_replace('[quote]', '
'.t("Quote:").'
', $text);
    $text = preg_replace('#\[/quote:(.*?)\]#', '
', $text);
  }
  return $text;
}
#In the BB code module, file bb-code-filter.inc, comment out the following lines:
// Quoting with or without specifying the source
'#\[quote(?::\w+)?\](?:[\r\n])*(.*?)\[/quote(?::\w+)?\]#si' => '
'.$quote_text.':
\\1
',
'#\[quote:(.*?)=(?:&quot;|"|\')?(.*?)["\']?(?:&quot;|"|\')?\](?:[\r\n])*(.*?)\[/quote(?::\w+)?\]#si' => '
'.$quote_user.':
\\2
',

ultimateteam.org launch

Blog

After nearly 4 years of talking about it, and 3 years of much of nothing, ultimateteam.org has finally launched.

It needs a lot of work. It's based on an old version of the open-source release of sourceforge.net. I think I'd like to switch it to a Drupal code base eventually.

However, it's up. It's running. It might actually be working. We'll see.

Ultimate teams out there: enjoy!

PostNuke to Drupal Conversions: Basic conversion Reference

Book page
This is drawn from http://www.phrixus.net/migration, which at some point drew information from one of my Drupal comments and from my postnuke-forums-to-drupal-forums script.

That said, here's the original post:


Recently this website was migrated from PHP-Nuke to Drupal. Importing the data from one CMS to another presented a number of problems since the database tables are quite different.

The biggest difference is that Drupal treats everything as a 'node' and therefore uses one table for most entries. PHP-Nuke has separate tables for most sections of the site. Migrating the data from PHP-Nuke to Drupal requires that the table ids be changed to avoid conflicts. The trick is to keep all of the IDs relative to each other so the comments and other entries match up properly.

The following snippets are the MySQL code I used to migrate Phrixus from PHP-Nuke to Drupal. Each snippet is a separate file and they should be executed in the order in which they appear.

Executing these scripts as a file requires copying the contents into a file and running the following command:

$ mysql -p drupal < file.sql


Assumptions
  • The database names are 'drupal' and 'nuke'
  • The user doing the migration has full access to both databases.
  • The drupal database is empty aside from the entries created by database.mysql


Credits: Much of the following code was based on the PostNuke to Drupal migration scripts (pn2drupal.sql and pn2drupal_forums.sql) created by David Poblador Garcia and kitt (from drupal.org).

  1. Users
    The first step is to migrate all of the users.

    -- User Migration: PHP-Nuke to Drupal --

    -- Delete existing data --
    DELETE FROM drupal.users;

    INSERT INTO drupal.users
        ( uid, name, pass, mail, timestamp, status, init, rid )
      SELECT
        user_id, username, user_password, user_email, user_lastvisit, 1,
        user_email, 2
      FROM
        nuke.nuke_users ;


  2. Stories
    The next step is to migrate all of the stories and their associated comments.

    -- Story Migration --

    -- Delete existing vocabulary and terms --
    DELETE FROM drupal.vocabulary;
    DELETE FROM drupal.term_data;
    DELETE FROM drupal.term_hierarchy;

    INSERT INTO drupal.vocabulary VALUES
        ( 1, "Content", "Articles, blogs, and other short entry-based content",
          0, 1, 0, 1, "blog,poll,story", 0 );

    INSERT INTO drupal.term_data
        ( tid, vid, name, description, weight )
      SELECT
        topicid, 1, topicname, topictext, 0
      FROM
        nuke.nuke_topics;
     
    INSERT INTO drupal.term_hierarchy
        ( tid, parent )
      SELECT
        topicid, 0
      FROM
        nuke.nuke_topics;


    -- Migrate Stories --

    -- Delete existing nodes --
    DELETE FROM drupal.node;
    DELETE FROM drupal.term_node;

    INSERT INTO drupal.node
        ( nid, type, title, uid, created, comment, promote, teaser, body, changed )
      SELECT
        s.sid, "story", s.title, u.user_id, UNIX_TIMESTAMP(s.time), 2, 1,
        s.hometext,
        CONCAT(s.hometext, "", s.bodytext), now()
      FROM
        nuke.nuke_stories s, nuke.nuke_users u
      WHERE
        s.informant=u.username;
     
    INSERT INTO drupal.term_node
        ( nid, tid )
      SELECT
        s.sid, s.topic
      FROM
        nuke.nuke_stories s;

    -- Migrate Story Comments --

    DELETE FROM drupal.comments;

    INSERT INTO drupal.comments
        ( cid, pid, nid, uid, subject, comment, hostname, timestamp )
      SELECT
        c.tid, c.pid, c.sid, u.user_id, c.subject, c.comment, c.host_name,
        UNIX_TIMESTAMP(c.date)
      FROM
        nuke.nuke_comments c, nuke.nuke_users u
      WHERE c.name=u.username;
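
    Note that the inner join on c.name = u.username silently skips comments
    whose author no longer matches a row in nuke_users (anonymous posts,
    renamed or deleted accounts). A query along these lines (an extra check,
    not part of the original scripts) shows how many would be left behind:

    -- Story comments the join above would skip --
    SELECT COUNT(*)
      FROM nuke.nuke_comments c
      LEFT JOIN nuke.nuke_users u ON c.name = u.username
      WHERE u.user_id IS NULL;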


  3. Polls
    The next step is to migrate the polls. Since polls in Drupal are also considered nodes, some id offsets need to be set before this script is run.

    -- Migrate Polls --

    -- Make sure new polls don't conflict with existing NIDs --
    -- Use the following query to set the variable --
    -- SELECT MAX(sid) FROM nuke.nuke_stories --
    SET @POLL_NID_OFFSET=87;

    -- Make sure poll comments don't conflict with any existing CIDs --
    -- Use the following query to set the variable --
    -- SELECT MAX(tid) FROM nuke.nuke_comments  --
    SET @POLL_CID_OFFSET=368;


    -- delete any existing data --
    DELETE FROM drupal.poll;
    DELETE FROM drupal.poll_choices;

    INSERT INTO drupal.node
        ( nid, type, title, score, votes, uid, status, created, comment, promote,
          moderate, users, teaser, body, changed, revisions, static )
      SELECT
        pollID+@POLL_NID_OFFSET, "poll", pollTitle, 1, 1, 0, 1, timeStamp, 2, 1, 0,
        "", pollTitle, "", NOW(), "", 0
      FROM
        nuke.nuke_poll_desc;


    -- Migrate Polls --
    INSERT INTO drupal.poll
        ( nid, runtime, voters, active )
      SELECT
        pollID+@POLL_NID_OFFSET, timeStamp, voters, 1
      FROM
        nuke.nuke_poll_desc;

    INSERT INTO drupal.poll_choices
        ( chid, nid, chtext, chvotes, chorder )
      SELECT
        0, pollID+@POLL_NID_OFFSET, optionText, optionCount, voteID
      FROM
        nuke.nuke_poll_data;


    -- Migrate Poll Comments --

    INSERT INTO drupal.comments
        ( cid, pid, nid, uid, subject, comment, hostname, timestamp )
      SELECT
        c.tid + @POLL_CID_OFFSET, IF(c.pid, c.pid+@POLL_CID_OFFSET, 0),
        c.pollID+@POLL_NID_OFFSET, u.user_id, c.subject, c.comment, c.host_name,
        UNIX_TIMESTAMP(c.date)
      FROM
        nuke.nuke_pollcomments c, nuke.nuke_users u
      WHERE c.name=u.username;
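
    As an aside, instead of running the commented-out SELECT MAX(...) queries
    by hand and pasting the results into the SET statements, the offsets could
    probably be computed in the same session. This is just a sketch of the
    idea, not what the original scripts do:

    -- Possible alternative: derive the offsets instead of hard-coding them --
    SET @POLL_NID_OFFSET = ( SELECT MAX(sid) FROM nuke.nuke_stories );
    SET @POLL_CID_OFFSET = ( SELECT MAX(tid) FROM nuke.nuke_comments );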


  4. Forums
    The next step is the forums. This was the most difficult script to create, since it alters so many different tables (node, comments, term_data, term_hierarchy, vocabulary, forum, and term_node).

    -- Migrate Forums --

    -- Make sure new forum containers don't conflict with existing TIDs --
    -- Use the following query to set the variable --
    -- SELECT MAX(tid) FROM drupal.term_data --
    SET @FORUM_CONTAINER_OFFSET=5;

    -- Make sure new forums don't conflict with existing TIDs --
    -- Use the SUM of the following two queries to set the variable --
    -- SELECT MAX(tid) FROM drupal.term_data --
    -- SELECT COUNT(*) FROM nuke.nuke_bbcategories  --
    SET @FORUM_TERM_OFFSET=7;

    -- Make sure new forum topics don't conflict with existing NIDs --
    -- Use the following query to set the variable --
    -- SELECT MAX(nid) FROM drupal.node --
    SET @FORUM_NID_OFFSET=101;

    -- Make sure new forum comments don't conflict with existing CIDs --
    -- Use the following query to set the variable --
    -- SELECT MAX(cid) FROM drupal.comments --
    SET @FORUM_CID_OFFSET=418;

    -- Create a new vocabulary ID for forums --
    -- Use the following query to set the variable --
    -- SELECT MAX(vid)+1 FROM drupal.vocabulary --
    SET @FORUM_VID=2;


    -- delete existing data --
    DELETE FROM drupal.forum;
    DELETE FROM drupal.vocabulary WHERE vid=@FORUM_VID;

    -- Create the Forums --

    INSERT INTO drupal.vocabulary
        VALUES ( @FORUM_VID, "Forums", "Topics for forums", 0, 1, 0, 1,
                 "forum", 0 ) ;


    INSERT INTO drupal.term_data
        ( tid, vid, name, description, weight )
      SELECT
        cat_id + @FORUM_CONTAINER_OFFSET, @FORUM_VID, cat_title, cat_title, 0
      FROM
        nuke.nuke_bbcategories;


    INSERT INTO drupal.term_hierarchy
        ( tid, parent )
      SELECT
        cat_id + @FORUM_CONTAINER_OFFSET, 0
      FROM
        nuke.nuke_bbcategories;


    INSERT INTO drupal.term_data
        ( tid, vid, name, description, weight )
      SELECT
        forum_id + @FORUM_TERM_OFFSET, @FORUM_VID, forum_name, forum_desc, 0
      FROM
        nuke.nuke_bbforums;


    INSERT INTO drupal.term_hierarchy
        ( tid, parent )
      SELECT
        forum_id + @FORUM_TERM_OFFSET, cat_id + @FORUM_CONTAINER_OFFSET
      FROM
        nuke.nuke_bbforums;

       
    -- Add the forum topics (posts become comments to these) --

    INSERT INTO drupal.node
        ( nid, type, title, uid, status, created, comment, promote, moderate,
          users, teaser, body, changed, revisions, static )
      SELECT
        t.topic_id + @FORUM_NID_OFFSET, "forum", t.topic_title,
        t.topic_poster, 1, t.topic_time, 2, 1, 0, "",
        t.topic_title, t.topic_title, NOW(), "", 0
      FROM
        nuke.nuke_bbtopics t;


    INSERT INTO drupal.forum
        ( nid, tid )
      SELECT
        topic_id + @FORUM_NID_OFFSET, forum_id + @FORUM_TERM_OFFSET
      FROM
        nuke.nuke_bbtopics;

    INSERT INTO drupal.term_node
        ( nid, tid )
      SELECT
        topic_id + @FORUM_NID_OFFSET, forum_id + @FORUM_TERM_OFFSET
      FROM
        nuke.nuke_bbtopics;


    -- Add the forum posts as comments --

    INSERT INTO drupal.comments
        ( cid, pid, nid, uid, subject, comment, timestamp )
      SELECT
        c.post_id + @FORUM_CID_OFFSET, 0, c.topic_id + @FORUM_NID_OFFSET,
        c.poster_id, t.post_subject, t.post_text, c.post_time
      FROM
        nuke.nuke_bbposts c, nuke.nuke_bbposts_text t
      WHERE
        c.post_id=t.post_id;
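
    One side effect of this last insert is that every post gets a pid of 0, so
    the migrated forum posts come across as a flat list of comments under
    their topic, which matches how the phpBB-style forum stored them anyway.
    As a rough check that nothing was dropped (an extra query, assuming
    @FORUM_CID_OFFSET is still set in the session):

    -- Post count and new comment count should line up --
    SELECT COUNT(*) FROM nuke.nuke_bbposts;
    SELECT COUNT(*) FROM drupal.comments WHERE cid > @FORUM_CID_OFFSET;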


  5. Journals
    PHP-Nuke has a journals module that I regrettably made use of. The journal section of PHP-Nuke was not well designed, especially with regard to the database schema. Because of this, the migration to Drupal was not as smooth as it could have been, even though the query itself is easy to execute. The problem is that the PHP-Nuke journal table uses VARCHAR for its date fields instead of DATE. While it's possible these dates could be salvaged, I gave up after trying numerous queries. The following script migrates all of the journal content but sets a static date of Jan 01, 2003 for all journals.

    -- Migrate Journals to Personal Blog Entries --

    -- Make sure new journals (blogs) don't conflict with existing NIDs --
    -- Use the following query to set the variable --
    -- SELECT MAX(nid) FROM drupal.node --
    SET @JOURNAL_NID_OFFSET=179;

    INSERT INTO drupal.node
        ( nid, type, title, uid, status, created, comment, promote, moderate,
          users, teaser, body, changed, revisions, static )
      SELECT
        j.jid + @JOURNAL_NID_OFFSET, "blog", j.title, u.user_id, 1,
        UNIX_TIMESTAMP('2003-01-01'), 2, 1, 0, "", j.title, j.bodytext,
        UNIX_TIMESTAMP('2003-01-01'),"", 0
      FROM
        nuke.nuke_journal j, nuke.nuke_users u
      WHERE
        j.status='yes'
      AND
        j.aid=u.username;
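
    If the journal dates happen to be stored in a text format MySQL can parse
    (mine apparently weren't, or I never found the right incantation),
    something like STR_TO_DATE might recover them instead of the fixed date.
    This is purely hypothetical: the column name (pdate) and the format string
    are guesses, and STR_TO_DATE needs MySQL 4.1.1 or later.

    -- Hypothetical date salvage; only works if the journal date column
    -- really holds strings like '2003-01-01' --
    SELECT UNIX_TIMESTAMP(STR_TO_DATE(pdate, '%Y-%m-%d'))
      FROM nuke.nuke_journal;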


  6. Private Messages
    The migration of private messages requires the use of the privatemsg module in Drupal.

    -- Migrate Private Messages --

    -- delete existing data --
    DELETE FROM drupal.privatemsg;

    INSERT INTO drupal.privatemsg
        ( id, author, recipient, subject, message, timestamp )
      SELECT
        p.privmsgs_id, p.privmsgs_from_userid, p.privmsgs_to_userid,
        p.privmsgs_subject, t.privmsgs_text, p.privmsgs_date
      FROM
        nuke.nuke_bbprivmsgs p, nuke.nuke_bbprivmsgs_text t
      WHERE
        t.privmsgs_text_id = p.privmsgs_id ;
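
    Since the drupal.privatemsg table only exists once the privatemsg module
    has been installed, it's worth confirming the table is there before
    running this snippet (a simple check, not part of the original scripts):

    -- Should return one row if the privatemsg module's table exists --
    SHOW TABLES IN drupal LIKE 'privatemsg';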



  7. Sequences and Database Fixes
    The last step is to update the sequences table so new entries can be created and to fix some of the migration discrepancies that occurred.

    -- Fix some Nuke/Drupal discrepancies --

    -- Set the Drupal site admin username/uid here --
    SET @SITE_ADMIN='david';
    SET @SITE_ADMIN_NUKE_UID=2;

    -- Get the max IDs for various tables in order to update drupal.sequences --
    -- Use the following queries to set the variables --

    -- SELECT MAX(uid) FROM drupal.users --
    SET @MAX_UID=57;

    -- SELECT MAX(nid) FROM drupal.node --
    SET @MAX_NID=256;

    -- SELECT MAX(cid) FROM drupal.comments --
    SET @MAX_CID=947;

    -- SELECT MAX(vid) FROM drupal.vocabulary --
    SET @MAX_VID=2;

    -- SELECT MAX(tid) FROM drupal.term_data --
    SET @MAX_TID=16;
     

    -- PHP-Nuke has UID 1 as 'Anonymous'. Replace with the drupal site admin --
    DELETE FROM drupal.users WHERE uid='1';
    UPDATE drupal.users SET uid='1' WHERE name=@SITE_ADMIN;
    UPDATE drupal.node SET uid=1 WHERE uid=@SITE_ADMIN_NUKE_UID;
    UPDATE drupal.comments SET uid=1 WHERE uid=@SITE_ADMIN_NUKE_UID;
    UPDATE drupal.privatemsg SET author=1 WHERE author=@SITE_ADMIN_NUKE_UID;
    UPDATE drupal.privatemsg SET recipient=1 WHERE recipient=@SITE_ADMIN_NUKE_UID;
       
     
    -- Add the UID 0 so the drupal Anonymous user works properly --
    INSERT INTO drupal.users (uid,rid) VALUES (0,1);
       

    -- Update the sequences table so new entries can be created --
    INSERT INTO drupal.sequences (name, id) VALUES ('users_uid', @MAX_UID);
    INSERT INTO drupal.sequences (name, id) VALUES ('node_nid', @MAX_NID);
    INSERT INTO drupal.sequences (name, id) VALUES ('comments_cid', @MAX_CID);
    INSERT INTO drupal.sequences (name, id) VALUES ('vocabulary_vid', @MAX_VID);
    INSERT INTO drupal.sequences (name, id) VALUES ('term_data_tid', @MAX_TID);
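
    As with the offsets earlier, the hard-coded MAX values here could probably
    be derived straight from the tables; a possible variation (not what was
    actually run for this migration) would be:

    -- Possible variation: derive the sequence values directly --
    INSERT INTO drupal.sequences (name, id)
      SELECT 'users_uid', MAX(uid) FROM drupal.users;
    INSERT INTO drupal.sequences (name, id)
      SELECT 'node_nid', MAX(nid) FROM drupal.node;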


Hopefully these scripts will be useful to others facing a similar situation. Just as a note: these scripts do not come with any warranty and are not guaranteed to work. That said, they did work for my migration, and with minimal tweaking they should at least make a PHP-Nuke to Drupal migration easier.


End of post from other site.

Starting a Freecycle module

Blog

I started my Freecycle module for Drupal. You can see an example of it working on my site, though it's in a state of flux and may not be working at any given moment.

Freecycle is a growing, grassroots movement that reduces landfill trash by promoting the free exchange of used but still usable goods. In other words, "One man's trash is another man's treasure."

The basic concept is that goods one person no longer needs can still be useful to someone else. Rather than throwing out usable goods, the owner can post the item to a list, offering it to others. Anyone who needs the item can respond and request it.

Part of the problem I have with the process is the difficulty of selecting one person to give the item to, or of asking for an item (oooo! pick me! I want it! I need it. I hate sob-story emails from strangers.). When I post items (and I've posted a lot to my local group), I often get a flood of emails. I then have to figure out which person to give the item to, arrange for pickup, wait to see if they actually pick it up (no-shows are a big deal), and re-offer the item if it isn't picked up. I think the "Sorry, already taken." emails I send after the first n responses (where n varies with how much I think someone really wants the item and is likely to pick it up) suck the most.

This Freecycle module will alleviate some of those issues by having people sign up online. I'll be able to configure how many responses I accept before the listing automatically closes, provide a giveaway/pickup status for unclaimed items, limit how many items someone can claim (by email address, IP address, etc.), and provide feedback (a la eBay) about no-shows.

Nothing like scratching an itch for the common good.

Project chartering notes

Book page
From a person who managed to attend the Bay Area XP December session on project chartering. I note that Brian Slesinsky uses the same email address scheme I do (with both .org and -yahoo).

More information on yahoogroups.com/groups/bayxp

A project charter is an agreement between the developers and 
the gold-owner (project funder, money spender).

- No technology, protocol, user interface, or other design 
  decisions.  
 
  The charter assumes the project will build a black box 
  containing perfect technology.

Prerequisites for charter:

Vision: hazy statement of the overall company goal (1 sentence)
Mission: the direction we will take to achieve that goal (1 sentence)

The Vision and Mission are persistent across multiple projects.

Project Charter components:

External Objectives:
   - not a feature list or a list of stories
   - a binary, measurable way of evaluating the product or 
     service (success or failure)
   - has an assessment date attached (time at which we 
     measure)
   - may be multiple, sequential objectives (milestones), or 
     repeated assessments
   - out of the team or gold-owner's direct control
   - hard evidence used by gold-owner to justify expense
   Examples:
     - car gets X miles per gallon (but be careful not to 
       make it a feature list)
     - survey of beta-testers shows that 90% are satisfied 
       with the beta version
     - three out of the top five consumer magazines give it
       a positive review
     - three of five main suppliers order the product
       (but typically not sales goals since that's too 
       late/indirect for software developers)

Internal Objectives:
   - things like improving reuse, process maturity, etc.

Project Boundaries:
   - names the external actors
   - names their inputs and outputs
     (doesn't include technology; no specified protocol)
   - events:
       - actor-initiated, detectable events
       - scheduled events (e.g. due date arrives)

Committed Resources:
   - money, people's time, tools
   - work environment
   - access to information
   - access to decision-makers
   - permission to iterate (don't ship the demo)
   - agreement to re-negotiate charter if a committed 
     resource becomes unavailable
   - the developers must say no if committed resources are
     insufficient

Authorizing players:
   - must be able to make a decision on two questions, and
     make it stick:
       - Is what we've done so far okay?
       - Can we continue?
   - approve and champion objectives
   - must be actual people, not policy or job titles
