Test::More Assertions
These are the topics I plan to cover today. This won't cover everything you need to know about unit testing in Perl, but it will give you more than the intro talk.
This just sets up some definitions so we will be on the same page when I begin talking about tests. A test usually requires some quantity of setup, the execution of code, and then one or more assertions about the result of the code.
Test::More
Assertions
is( $actual, $expected, $name )
isnt( $actual, $expected, $name )
like( $actual, $regex, $name )
unlike( $actual, $regex, $name )
Test::More
provides several wrappers around ok
that do a much better job of expressing intent and providing troubleshooting
information. These are the most obvious ones.
is() and isnt() use string comparisons. Most of the time that doesn't matter. But, once in a while you need to know.
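Here is a minimal sketch of where the string comparison bites (the failing assertion is intentional, to show the difference):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# is() compares with eq: '1.0' and '1' are different strings,
# even though they are numerically equal, so this assertion fails.
is( '1.0', '1', 'string comparison sees different values' );

# cmp_ok() lets you choose the operator; '==' compares numerically.
cmp_ok( '1.0', '==', '1', 'numeric comparison sees equal values' );
```

When you care about numeric equality, reach for cmp_ok with '=='.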
Test::More
Assertions
#!/usr/bin/perl
use Test::More tests => 4;
use strict;
use warnings;
my $var = 1;
is( $var, 1, 'Initial sanity verified' );
like( $var, qr/^\d+$/, 'Sanity still exists' );
is( 2+2, 5, 'Sanity has left the building' );
isnt( 2+2, 5, 'Sanity has been restored' );
These assertions are more expressive. These don't really do much more
than the straight ok()
function if the assertion succeeds.
But, if the assertion fails, they provide more useful diagnostic
output.
Test::More
Assertions
1..4
ok 1 - Initial sanity verified
ok 2 - Sanity still exists
not ok 3 - Sanity has left the building
# Failed test 'Sanity has left the building'
# at examples/sanity.t line 11.
# got: '4'
# expected: '5'
ok 4 - Sanity has been restored
# Looks like you failed 1 test of 4.
As you can see, the failure condition actually gives useful output this time. If I had given actual useful names to the assertions, you might even be able to tell what the problem is from here.
Test::More
Assertions
use_ok
require_ok
These assertions are not nearly as useful as they appear at first glance.
They attempt to use
(or require
) the specified module.
A successful load passes the assertion and an unsuccessful load fails it. This
seems relatively straightforward and reasonable.
The main point where they fall down: use_ok happens at runtime, unlike use.
is_deeply( $actual, $expect, $name )
cmp_ok( $actual, $op, $expect, $name )
isa_ok( $actual, $type, $name )
can_ok( $obj_or_class, @methods )
#!/usr/bin/perl
use Test::More tests => 4;
use strict;
use warnings;
my %hash = qw/c 3 b 2 a 1/;
is_deeply( {a=>1, b=>2, c=>3}, \%hash, 'hash items' );
cmp_ok( 4, '<', 5, 'integer ordering' );
isa_ok( \%hash, 'HASH', '%hash' );
can_ok( 'Test::More', qw/ok is isnt like unlike/ );
Use of all of these is pretty much as you might expect.
Each assertion returns a true value on success and false on failure.
This is a feature of the assertions that can sometimes be exploited to make your test suites even more useful.
diag() - display output as comments in TAP
note() - display output only when verbose
explain() - dumper-like tool
These are sometimes used to display information that the maintainer of the suite may find interesting. Unfortunately, this information is not always as useful the 400th time you see it.
#!/usr/bin/perl
use Test::More tests => 1;
use strict;
use warnings;
my %hash = qw/c 3 b 2 a 1/;
# Pretend that $output came from a function under test.
my $output = {a=>1, b=>2, c=>3};
is_deeply( $output, \%hash, 'hash items' )
or note explain $output;
The or note explain idiom is much more useful than what we normally do:
print Dumper( $output )
after a use Data::Dumper we forgot.
Unique names are easier to find in the file when an assertion fails. A quick grep is all it takes. Saves you from troubleshooting the wrong assertion.
If the name is expressive enough, you may be able to go right to the problem in the code, instead of spending time relearning the tests. It also documents what you are testing.
Don't assume the person troubleshooting a failure knows why you are making this assertion.
Test::NoWarnings
Test::Warn
Test::Exception
Test::Output
Test::NoWarnings
It adds an assertion, at the end of the run, that no warnings were seen while the tests ran. The fact that it adds an extra (hidden) assertion can be a source of confusion.
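A minimal sketch: the hidden assertion counts toward the plan, so one real test means a plan of two.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;   # one real test + the hidden no-warnings test
use Test::NoWarnings;

is( 2 + 2, 4, 'arithmetic still works' );
# Test::NoWarnings supplies assertion 2 automatically at the end of the run.
```

Forgetting to count that extra assertion is the usual way this module surprises people.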
Test::Warn
#!/usr/bin/perl
use Test::More tests => 5;
use Test::Warn;
use strict;
use warnings;
warning_is { noisy() } "Bad things\n",
'noisy triggers a warning';
warnings_are { annoying() } ["Bad things\n", "More bad\n"],
'annoying triggers multiple warnings';
warning_like { noisy() } qr/Bad/, 'noisy triggers a warning';
warnings_like { annoying() } [qr/Bad/, qr/More/],
'annoying triggers multiple warnings';
warnings_exist { annoying() } [ qr/Bad/ ],
'At least this warning';
These also have special forms for Carp-based warnings.
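A minimal sketch of the Carp form, assuming a hypothetical carper() function that warns via carp():

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::Warn;
use Carp;

# Hypothetical function under test that warns from the caller's perspective.
sub carper { carp "Bad things" }

# The {carped => ...} form tells Test::Warn to expect a Carp::carp warning.
warning_like { carper() } { carped => qr/Bad things/ },
    'carper warns via carp';
```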
Test::Exception
#!/usr/bin/perl
use Test::More tests => 4;
use Test::Exception;
use strict;
use warnings;
lives_ok { robust() } 'robust() never dies';
dies_ok { fragile() } 'fragile() dies, as expected';
throws_ok { fragile() } qr/Badness happens/, 'fragile() dies';
throws_ok { fragile2() } 'Acme::Exception',
'fragile2() throws class';
throws_ok can match either a regex against the exception message or an exception class name, as the last example shows.
Test::Output
Example
use Test::Output;
stdout_is { print "Hello World"; } 'Hello World';
stdout_like { print 'Hello Wade'; } qr/Wade/;
stderr_is { print STDERR 'Hello'; } 'Hello';
combined_is { print STDOUT 'Hello '; print STDERR 'All'; }
'Hello All';
output_is { print STDOUT 'Hello '; print STDERR 'All'; }
'Hello ', 'All';
Test::Output
methods check the appropriate streams for
expected output. You can test either STDOUT, STDERR, or both. You can
test the output for an exact match or for a regular expression.
use lib
is your friend
use lib "t/lib";
use FindBin;
use lib "$FindBin::Bin/lib";
When thinking about unit testing at a high level, you begin to see certain patterns and practices emerge.
die - stops entire test file
BAIL_OUT() - to stop entire run
The simplest and most common failure in a test suite manifests as one (or a few) individual assertion failures. These are normally pretty easy to localize. Something changed. Find and fix.
Sometimes there is a failure that pretty much makes the rest of the test
file immaterial. You need to perform some tests on a database and
connection fails. No sense in going any further. Using die
to
fail the whole test file is appropriate here.
Finally, there is the rare circumstance where there's a problem so
severe that you need to abort the entire test suite. In that case, the
BAIL_OUT()
function is the right choice.
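A minimal sketch of both escalation levels; connect_to_test_db() and run_query() are hypothetical functions under test:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical suite-wide sanity check: without this file, every
# test file in the run would fail, so abort the whole suite.
BAIL_OUT( 'test config missing' ) unless -e 't/test.conf';

# A failure here makes the rest of this file meaningless, but other
# test files may still be worth running, so die rather than bail out.
my $dbh = connect_to_test_db()
    or die "Cannot connect to test database\n";

ok( $dbh->ping, 'database connection is alive' );
is( run_query( $dbh ), 'expected', 'query returns expected value' );
```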
#!/usr/bin/perl
use Test::More tests => 4;
use strict;
use warnings;
SKIP: {
    skip 'Root access needed', 2 if $< != 0;
    ok( do_root_action(), 'Root does action' );
    is( get_root_information(), 'root stuff',
        'Root gets info' );
}
ok( perform_unprivileged(), 'Any user action' );
is( get_information(), 'unprivileged',
'Get normal information' );
Sometimes you need to run tests in only certain circumstances. The
skip
function allows you to bypass assertions, treating
them as successful for the sake of the test suite.
#!/usr/bin/perl
use Test::More ($^O eq 'MSWin32'
? (skip_all => 'Cannot test under windows')
: (tests => 4));
If all of the assertions in a given file must be bypassed for the same reason, skip_all is the better tool.
#!/usr/bin/perl
use Test::More tests => 4;
use strict;
use warnings;
ok( working_method(), 'this works' );
TODO: {
    local $TODO = 'foo method is not finished';
    ok( !foo(), 'foo with no arguments' );
    is_deeply( foo(qw/a b c/), [qw/C B A/],
        'foo with arguments' );
}
ok( other_working_method(), 'this does too' );
These tests serve as reminders that we need to write more functionality. However, they are not counted as failures, even though the assertions do not succeed. Just as importantly, they also report if they begin functioning.
use Test::More 'no_plan';
use Test::More;
...
done_testing();
use Test::More 'tests' => 42;
plan( ... );
The no_plan option is a really bad idea. You can accidentally
bypass or miss assertions without any indication from the test harness. The
done_testing
option is safer and becoming more popular. However,
it also allows the possibility of skipped assertions without any indication.
Explicitly setting the number of tests is the safest option. But it is more work to keep it properly configured when building your test file.
I find myself using this pattern quite a bit. The idea is that the initial string should describe the overall function, method, or behavior we are testing and the individual assertions are the details of how we know the test passes.
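A minimal sketch of that naming style; parse_date() and its details are illustrative, not from the talk:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 3;

# parse_date() is a hypothetical function under test.
is( parse_date('2024-01-31')->day,   31, 'parse_date(): extracts day' );
is( parse_date('2024-01-31')->month,  1, 'parse_date(): extracts month' );
ok( !defined parse_date('bogus'),        'parse_date(): rejects garbage' );
```

A grep for 'parse_date():' then finds every assertion about that function at once.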
The first two are mostly useful for reducing global dependencies and removing coupling between independent systems. The third through fifth are ways to think about generating new tests. The final is magic.
Bugs lurk in corners and congregate at boundaries.
— Boris Beizer
Most of the inputs of a function are pretty much the same. Boundaries are where the behavior of the function changes. Concentrate on those areas more.
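A minimal sketch of boundary-focused assertions, assuming a hypothetical classify_age() whose behavior changes at 18:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 4;

# classify_age() is a hypothetical function whose behavior
# changes at the boundary value 18.
is( classify_age(17), 'minor', 'just below the boundary' );
is( classify_age(18), 'adult', 'exactly at the boundary' );
is( classify_age(19), 'adult', 'just above the boundary' );
ok( !defined classify_age(-1), 'impossible input rejected' );
```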
undef
undef elements
Random inputs in the hopes of triggering unusual error conditions.
Often used to attack code.
How do you know how well you have tested your code?
# One test where get_value() > 0 exercises every statement here,
# but the implicit "condition false" path is never checked.
if( get_value() > 0 ) {
    do_it();
}
do_other();
# The explicit else and the compound condition add paths: branch
# coverage needs both true and false, and condition coverage needs
# each side of the && exercised.
if( defined $var && $var > 0 ) {
    do_it();
}
else {
    do_other();
}
Devel::Cover
CPAN module that instruments code to determine what parts of it have been exercised.
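The basic invocation, as given in the Devel::Cover documentation, looks like:

```
# Run the program with coverage instrumentation...
perl -MDevel::Cover yourprog.pl

# ...then generate the coverage report.
cover
```

For a CPAN-style distribution, `cover -test` runs the whole test suite under coverage and reports in one step.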
100% coverage is necessary for complete testing, but it may not be sufficient.
Think about what needs to be tested rather than try to hit every line/branch.