Interviews

All about the new Test2 framework and how it will help your tests

May 30, 2016 CPAN, Interviews, Perl 5, Tools

The new Test2 framework has been released after a couple years of development. I wanted to find out about what this means for users of Test::Simple and Test::More, so I chatted with the project leader, Chad Granum (exodist).

Andy Lester: So Test2 has just been released after a couple of years of work, and a lot of discussion. For those of us who haven’t followed its development, what is Test2 and why is it a good thing?

Chad Granum: The big changes will be for people who write test modules. The old Test::Builder was tied to producing one specific form of TAP output. That has been replaced with a flexible event system.

It all started when David Golden submitted a patch to change the indentation of a comment intended for humans who read the test. The change would help people, but meant nothing to the machine. I had to reject the patch because it broke a lot of downstream modules. Things broke because they tested that Test::Builder produced the message in its original form. I thought that was crazy, and wanted to make things easier to maintain, test, and improve.

Andy: Test::Builder’s internals were pretty fragile?

Chad: That is true, but that’s not the whole picture. The real problem was the tools people used to validate testing tools. Test::Builder::Tester was the standard, and it boiled down to giant string comparisons of TAP output, which mixes messages meant for the computer with messages meant for humans.

While most of the changes are under the hood, there are improvements for people who just want to write tests. Test2 has a built-in synchronization system for forking/threading. If you modify a test to load Test2::IPC before loading Test::More, then you can fork in your tests and it will work in sane/reasonable ways. Up until now doing this required external tools such as Test::SharedFork which had severe limitations.
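
For illustration, here is a minimal sketch of the forking pattern Chad describes, assuming a Test2-backed Test::More with Test2::IPC loaded first; the test names are placeholders.

# Load Test2::IPC before Test::More so results from child processes
# are synchronized back to the parent.
use Test2::IPC;
use Test::More;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    ok(1, 'assertion made in the child process');
    exit 0;
}

ok(1, 'assertion made in the parent process');
waitpid($pid, 0);

done_testing();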

Another thing I want to note is an improvement in how Test2 tracks the file and line number used for error reporting. As you know, diagnostics are reported when a test fails, giving you the filename and line number of the failure. Test::Builder used a global variable, $Test::Builder::Level, which people were required to localize and bump whenever they added a stack frame to their tool. This was confusing and easy to get wrong.
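
As a reminder of the old idiom, here is a sketch of a wrapper (the name my_ok is made up) where every layer of wrapping had to bump $Test::Builder::Level so failures were reported at the caller:

use Test::More;

sub my_ok {    # a hypothetical wrapper around ok()
    my ($value, $name) = @_;
    # Bump the level so a failure is reported at my_ok's caller, not here.
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    ok($value, $name);
}

my_ok(1, 'wrapped assertion');
done_testing();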

Test2 now uses a Context object. This object solves the problem by locking in the “context” (file + line) when the tool is first called. All nested tools will then find that context. The context object also doubles as the primary interface to Test2 for tool writers, which means it will not be obscure like the $Level variable was.
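
And a sketch of the same hypothetical wrapper written against the Test2 context API instead:

use Test2::API qw(context);
use Test::More;

sub my_ok {
    my ($value, $name) = @_;
    my $ctx = context();        # locks in the caller's file and line
    $ctx->ok($value, $name);    # a failure is reported at the caller's location
    $ctx->release;              # every context must be released
    return $value;
}

my_ok(1, 'wrapped assertion');
done_testing();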

Andy: I just counted 1045 instances of $Test::Builder::Level in my codebase at work. Are you saying that I can throw them all away when I start using Test2?

Chad: Yes, if you switch to using Test2 in those tools you can stop counting your stack frames. That said, the $Level variable will continue to work forever for backwards compatibility.

Andy: Will the TAP output be the same? We’re still using an ancient install of Smolder as our CI tool and I believe it expects TAP to look a certain way.

Chad: Extreme care was taken to ensure that the TAP output did not change in any significant ways. The one exception is David Golden’s change that started all this:

ok 1 - random test
# a subtest
    ok 1 - subtest result
    1..1
ok 2 - a subtest

This has changed to:

ok 1 - random test
    # a subtest
    ok 1 - subtest result
    1..1
ok 2 - a subtest

That is the change that started all this, and it had the potential to break CPAN.

Andy: So Test2 is all about possibilities for the future. It’s going to make it easier for people to create new Test:: modules. As the author of a couple of Test:: modules myself, I know that the testing of the tests is always a big pain. There’s lots of cut & paste from past modules that work and tweaking things until they finally pass the tests. What’s different between the old way of doing the module testing and now?

Chad: Test::Builder assumed TAP would be the final product, and it did not give you any control over, or hooks into, what happened between your tool and the TAP. As a result, you had to test your final TAP output, which often included text you did not produce yourself. In Test2 we drop those assumptions: TAP is no longer assumed, and you have hooks into almost every step of the process between your tool and the final output.

Many of the actions Test::Builder would accomplish have been turned into Event objects. Test tools do their thing, then fire events off to Test2 for handling. Eventually these events hit a formatter (TAP by default) and are rendered for a harness. Along with the hooks, Test2::API provides a tool called intercept: it takes a code block, and all events generated inside that block are captured and returned rather than rendered, without affecting the global test state. Once you capture your events you can test them as data structures, and ignore the ones that are not relevant to your tool.

The Test::Builder::Tester way may seem simpler at first, but that is deceptive: there is a huge loss of information. And if Test::Builder ever changes how it renders TAP, such as dropping the ‘-’, everything breaks.

Using Test::Builder::Tester

test_out("ok 1 - a passing test");
ok(1, 'a passing test');
test_test("Got expected line of TAP output");

Using intercept and basic Test::More tools

use Test::More;
use Test2::API qw(intercept);

my $events = intercept {
    ok(1, 'a passing test');
};

my $e = shift @$events;

ok($e->pass, "passing test event");
is($e->name, "a passing test", "got event name");
is_deeply(
    $e->trace->frame,
    [__PACKAGE__, __FILE__, 42, 'Test2::Tools::Basic::ok'],  # 42 stands in for the line of the ok() call above
    "Got package, file, line and sub name"
);

Using Test2::Tools::Compare

use Test2::API qw(intercept);
use Test2::Tools::Basic qw(ok);
use Test2::Tools::Compare qw(like array event call prop);

like(
    intercept {
        ok(1, 'a passing test');
    },
    array {
        event Ok => sub {
            call pass => 1;
            call name => 'a passing test';

            prop file    => __FILE__;
            prop package => __PACKAGE__;
            prop line    => 42;  # 42 stands in for the line of the ok() call above
            prop subname => 'Test2::Tools::Basic::ok';
        };
    },
    'A passing test'
);

Andy: What other features does Test2 include for users who aren’t creating Test:: modules?

Chad: Test2’s core, which is included in the Test-Simple distribution, does not have new features at the user level. However, Test2-Suite was released at the same time as Test2/Test-Simple; it contains new versions of all the Test::More tools and adds some things people have been requesting for years but that were not possible with the old Test::Builder.

The biggest example would be “die/bail on fail”, which lets you tell the test suite to stop after the first failure. The old stuff could not do this because there was no good hook point, and important diagnostics would be lost.

It’s as simple as using one of these two modules:

use Test2::Plugin::DieOnFail;
use Test2::Plugin::BailOnFail;

The difference is that DieOnFail calls die under the hood, while BailOnFail sends a bail-out event that aborts the current file and, depending on the harness, may stop the entire test run.
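
A quick sketch of how this looks in practice, assuming Test2::Plugin::BailOnFail is installed; the test names are placeholders:

use Test2::Plugin::BailOnFail;
use Test::More;

ok(1, 'first test passes');
ok(0, 'this failure triggers a bail-out');
ok(1, 'never reached, because the file was aborted');

done_testing();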

Andy: So how do I start using Test2? At my day job, our code base has 1,200 *.t files totalling 282,000 lines of code. Can I expect to install the new version of Test::Simple (version 1.302019) that includes Test2 and everything will “just work”?

Chad: For the vast majority of cases the answer is “yes”. Back-compatibility was one of the most significant concerns for the project. That said, some things did unfortunately break. A good guide to what breaks, and why can be found in this document. Usually things that break do so because they muck about with the Test::Builder internals in nasty ways. Usually these modules had no choice due to Test::Builder’s limitations. When I found such occurrences I tried to add hooks or APIs to do those things in sane/reasonable ways.

Andy: Do I have to upgrade? Can I refuse to go up to Test-Simple 1.302019? What are the implications of that?

Chad: Well, nobody is going to come to you and force you to install the latest version. If you want to keep using your old version you can. You might run into trouble down the line if other Test:: tools you use decide to make use of Test2-specific features, at which point you would need to lock in old versions of those as well. You also would not be able to start using any new tools people build using Test2.

Andy: And the tools you’re talking about are Test:: modules, right? The command line tool prove and make test haven’t changed, because they’re part of Test::Harness?

Chad: Correct. Test::Harness has not been touched; it will work on any test files that produce TAP, and Test2 still produces TAP by default. That said, I do have a project in the works to create an alternative harness specifically for Test2 stuff, but it will never be a requirement to use it; things will always work on Test::Harness.

Andy: So if I’m understanding the Changes file correctly, Test-Simple 1.302012 was the last old-style version and 1.302014 is the new version with Test2?

Chad: No, Test-Simple-1.001014 is the last STABLE release of Test-Simple that did not have Test2, and Test-Simple-1.302015 was the first stable release to include Test2. There were a lot of development releases between the two, but no stable ones. The version numbers had to be carefully crafted to follow the old scheme, but we also had to keep them below 1.5xxxxx because the previous maintainers’ projects used that number, as well as 2.0. Some downstream users had code that switched behavior based on the version number and expected an API that never came to be. Most of these downstream distributions have been fixed now, but we are using a “safe” version number just in case.

Andy: What has development for this been like? This has been in the works for, what, two years now? I remember talking to you briefly about it at OSCON 2014.

Chad: At the point we talked I had just been given Test-Simple, and did not have any plans to make significant changes. What we actually talked about was my project Fennec which was a separate Test::Builder based test framework. Some features from Fennec made their way into Test2, enough so that Fennec will be deprecated once I have a stable Test2::Workflow release.

Initially development started as a refactor of Test::Builder that was intended to be fairly small. The main idea was to introduce the events, and a way to capture them. From there it ballooned out as I fixed bugs, or made other changes necessary to support events.

At one point the changes were significant enough, and broke enough downstream modules that I made it a complete fork under the name Test-Stream. I figured it would be easier to make Test::Builder a compatibility wrapper.

In 2015, I attended the QA hackathon in Berlin, and my Test-Stream fork was a huge topic of conversation. The conversation resulted in a general agreement (not unanimous) that it would be nice to have these changes. There was also a list of requests (demands?) for the project before it could go stable. We called it the punch-list.

After the Berlin hackathon there was more interest in the project. Other toolchain people such as Graham Knop (Haarg), Daniel Dragan (bulk88), Ricardo Signes (rjbs), Matt Trout (mst), Karen Etheridge (ether), Leon Timmermans (leont), Joel Berger (jberger), Kent Fredric (kentnl), Peter Rabbitson (ribasushi), etc. started reviewing my code, making suggestions and reporting bugs. This was one of the most valuable experiences. The project as it is now is much different than it was in Berlin, and it is much better for the extra eyes and hands.

A month ago there was another QA hackathon, in Rugby, UK, and once again Test2 was a major topic. This time the general agreement was that it was ready now. The only new requirements on the table were related to making the broken downstream modules very well known, and also getting a week of extra cpan-testers results prior to release.

I must note that at both QA hackathons the decisions were not unanimous, but in both cases there was a very clear majority.

Andy: So what’s next? I see that you have a grant for more documentation. Tell me about that, and what can people do to help?

Chad: The Test2 core API is not small, and it has more moving pieces than Test::Builder did. Right now there is plenty of technical/module documentation, but there is a lack of overview documentation. There is a need for a manual that helps people find solutions to their problems and ties the various parts together. The first part of the manual will be aimed at tool authors.

Test2::Suite is also not small, but provides a large set of tools for people to use, some are improvements on old tools, some are completely new. The manual will have a second section on using these new tools. This second part of the manual will be geared towards people writing tests.

The best way for people to help would be to start using Test2::Suite in their tests, and Test2 in their test tools. People will undoubtedly find places where more documentation is needed, or where things are not clear. Reporting such documentation gaps would help me to write better documentation. (Test::More repo, Test2::Suite repo)

Apart from the documentation, I have 2 other Test2 related projects nearing completion: Test2-Workflow, which is an implementation of the tools from Fennec that are not a core part of Test2, and Test2-Harness which is an optional alternative to Test::Harness. Both are pretty much code-complete on GitHub, but neither has the test coverage I feel is necessary before putting them on CPAN.

Andy: Thanks for all the work that’s gone into this, both to you and the rest of those who’ve contributed. It sounds like we’ll soon see more tools to make testing easier and more robust.

Perlbuzz news roundup for 2010-07-27

July 27, 2010 CPAN, Interviews, Perl 5, Perl 6, Perl Foundation, Rakudo

These links are collected from the
Perlbuzz Twitter feed.
If you have suggestions for news bits, please mail me at
andy@perlbuzz.com.

Big interview with Damian Conway

August 21, 2008 Interviews

O’Reilly interviewed Damian Conway at OSCON. There’s surprisingly little craziness, but lots of good discussion of programming languages, programming curricula and of course, Perl 6. Oh, and a fair amount of mocking of American accents. Laugh it up, Mr. I-Live-On-A-Giant-Penal-Colony-Island!

Video: https://www.youtube.com/watch?v=RqU-G_ptdGU

The O’Reilly page has a transcription if you don’t want to devote 36 minutes of your life to it, but why wouldn’t you?

Interviews with Michaud & Dice

February 16, 2008 Community, Interviews, Perl Foundation

Here are a couple of interviews for your reading enjoyment. Patrick Michaud talks about Perl 6 in advance of FOSDEM ’08, a conference in Brussels. The interview is a bit old, pre-dating the naming of Rakudo.


In the second interview, Richard Dice talks about his 14 years with Perl, and current news about the Perl Foundation.

You know, I’m guessing there’s other good content in $foo perl magazin, but since Richard’s interview is the only thing in English, it’s going to have to stay at the guessing stage.

Gerard Goossen talks about Kurila, a Perl 5 fork

November 15, 2007 Interviews, Perl 5

A few days ago Gerard Goossen released version 1.5 of his kurila project to the CPAN, a fork of Perl 5, both the language and the implementation. I talked with him about the history of this new direction.


Andy: Why Kurila? Who would want to use it? What are your goals?

Gerard: Kurila is a fork of Perl 5. Perl Kurila is a
dialect of Perl. Kurila is currently unstable; the language is
continuously changing, and the project has only just started.

There are a few goals, not all of them going in
the same direction. One of the goals is to simplify the Perl internals
to make hacking on it easier. Another is to make the Perl syntax
more consistent, remove some of the oddities, most of them historical
legacy.

What is currently being done is removing some of the more
error-prone syntax, like indirect object syntax, and removing
symbolic references. Neither of these changes is very radical yet;
most modern Perl doesn’t use indirect object syntax or symbolic
references.

But I am now at the stage of doing more radical changes, like
not doing the sigil-change, so that my %foo; $foo{bar}
would become my %foo; %foo{bar}.
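
To make the earlier removals concrete, here is a rough sketch contrasting the constructs Kurila drops with the plain Perl 5 spellings that remain valid (the Widget class is hypothetical); the proposed sigil change is not shown, since %foo{bar} for element access is not valid standard Perl 5.

use strict;
use warnings;

package Widget;
sub new { my ($class) = @_; return bless {}, $class }

package main;

# Indirect object syntax, removed in Kurila:   my $w = new Widget;
# The explicit method call works either way:
my $w = Widget->new;

# Symbolic references, removed in Kurila (and already fatal under strict):
#     my $name = 'count'; $$name = 1;
# A real reference instead:
my $count = 1;
my $ref   = \$count;
$$ref     = 2;

print "ok\n" if ref($w) && $$ref == 2;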

Andy: Where do you see Kurila getting used? Who’s the
target audience for it?

Gerard: Kurila would be used for anything where currently
Perl is being used. I am using Perl for large websites so changes
will be favored in that direction.

I am working for TTY Internet Solutions,
a web development company. We develop and maintain websites in
Perl, Ruby and Java. Websites we develop include www.2dehands.be,
www.sellaband.com, www.ingcard.nl and www.nationalevacaturebank.nl.
Of these www.2dehands.be and www.nationalevacaturebank.nl are
entirely written in Perl.

We are not yet using kurila in production, but I have a testing
environment of www.2dehands.nl which is running on Kurila. Developing
Kurila is part of my work at TTY.

Many of the changes in Kurila are inspired by bugs/mistakes we
made developing these sites. It started with the UTF8 flag. We
encountered many problems making our websites UTF-8 compatible. In
many cases the UTF8 flag got “lost” somewhere, and after combining
it with another string, the string got internally upgraded and our
good UTF-8 was destroyed, since everything we have is UTF-8 by default.
The idea was simply to make UTF-8 the default encoding, instead of
the current default of Latin-1.
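
A minimal sketch of the failure mode Gerard describes (the strings here are just examples): concatenating raw UTF-8 bytes with an upgraded, UTF8-flagged string causes Perl to reinterpret the bytes as Latin-1 and mangle the text.

use strict;
use warnings;
use Encode qw(encode decode);

my $bytes = encode('UTF-8', "caf\x{e9}");   # raw UTF-8 bytes, UTF8 flag off
my $wide  = " \x{2603}";                    # character string, UTF8 flag on

# Joining them upgrades $bytes as if it were Latin-1, so the two bytes
# of the encoded "é" turn into two separate characters.
my $broken = $bytes . $wide;

# The safe pattern: decode byte strings to character strings first.
my $fixed = decode('UTF-8', $bytes) . $wide;

print length($broken) == length($fixed) ? "same\n" : "lengths differ\n";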

Andy: Did you raise the possibility of changing the default
encoding in Perl?

Gerard: The problem with changing the default encoding to
UTF-8 is that it destroys the identity between bytes and
codepoints. So it’s not a possibility for Perl 5. For example, what
does chr(255) do? Does it create a byte with value 255 or a
character with codepoint 255?
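
A small sketch of that ambiguity: chr(255) yields a single character, which is one byte in Latin-1 but two bytes once encoded as UTF-8.

use strict;
use warnings;
use Encode qw(encode);

my $c      = chr(255);                  # one character, codepoint 255 ("ÿ")
my $latin1 = encode('ISO-8859-1', $c);  # one byte:  0xFF
my $utf8   = encode('UTF-8', $c);       # two bytes: 0xC3 0xBF

printf "chars: %d, latin-1 bytes: %d, utf-8 bytes: %d\n",
    length($c), length($latin1), length($utf8);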

I made a patch removing the UTF-8 flag and changing the default
encoding to UTF-8 and sent it to p5p.

Andy: What was the response?

Gerard: There was as good as no response to it, I guess
because it was obvious that it seriously broke backwards compatibility
and the patch was quite big, making it difficult to understand.

About two weeks after the utf8 patch, I announced that I wanted
to change the current Perl 5 development to make it a language which
evolves to experiment with new ideas, try new syntax and not be
held back by old failed experiments. One of the interesting things
about Perl is that it has a lot of different ideas and these are
coupled to the syntax.

There was of course the question of why not Perl 6; the argument
that it should or could be done in a backwards-compatible way; and
the argument that there is no way of making the Perl internals
clean, so it is better to start over.

And about half a year ago I announced that I had started Kurila,
my proof-of-concept for the development of Perl 7. Rewriting
software from scratch is much more difficult than it seems, and I
think starting with a well-proven, working base is much easier.
Perl 5 is there, it works very well, has few bugs, etc., but
it can be much better if you don’t have to worry about possibly
breaking someone’s code, and can just fix those oddities.

Andy: Do you have a website for it?  Are you looking for
help?

Gerard: There isn’t a website yet, and also no specific
mailing list; currently all the discussion is on p5p. There is a
public git repository at git://dev.tty.nl/perl.

Andy: What can someone do if he/she is interested in
helping?

Gerard: Contact me at gerard at tty dot nl. Make
a clone of git://dev.tty.nl/perl and start making changes.

Andy Armstrong talks about the road to Test::Harness 3.00

November 7, 2007 Interviews

Test::Harness 3.00 has finally been released, and it’s a huge opportunity for anyone who writes tests with Perl, if only for the ability to run prove -j and run tests in parallel. I took a few minutes as
the maintainer of the old 2.xx series to interview Andy Armstrong, the new maintainer of 3.x, about the history of the new Test::Harness, and what it took to get here.

Andy Lester: So, Andy Armstrong, a joyous day has come: Test::Harness 3.00 has been released.

Andy Armstrong: Yes. I was relieved.

Andy Lester: What has it taken to get to this point?

Andy Armstrong: I think we had a fair bit of paranoia about breaking the toolchain for everyone, and thus becoming extraordinarily unpopular.
That made us cautious. We spent a lot of time building our own smoke testing setup. And running lots of people’s tests against our code.

Andy Lester: Who is the “we” in this? Back in June of 2006, Schwern and I kicked off Test::Harness 3.00 at YAPC::NA in Chicago. What’s happened since then?

Andy Armstrong: Ovid got the code started and had just about everything in place by April 2007.
Around then I volunteered to look at a Windows problem.
And I sort of got dragged in. I really liked the code Ovid had written and enjoyed working on it – so that was an attraction.
The Windows problem took a few minutes – but I’m still here.

Andy Lester: You’ve done a lot more than being dragged in. You hosted the Subversion repository, and the mailing list. What else?

Andy Armstrong: My monopolist plan laid bare for all to see…
I have a server which is nominally there so I can do things like that – so then I have to do them to justify its existence.
So I’m hosting the perl-qa wiki, the TAP wiki. Just sites that needed a home. Like an orphanage 🙂

Andy Lester: You’ve uploaded T::H 3. Are you now the maintainer? I thought Ovid was going to be the maintainer of T::H3. (I ask both for the benefit of the Perlbuzz readers, and for my own knowledge:-))

Andy Armstrong: I think I made a move on Ovid somewhere back there and he didn’t struggle. So now I’m it. I honestly can’t remember how that happened.

Andy Lester: Glad to have two Andys maintaining different versions of the same module. 🙂
So why does someone want to upgrade to Test::Harness 3? What’s in it for the average Perl user?

Andy Armstrong: If you do nothing else – just install it – you’ll get better looking test reports. Color even 🙂
And when people start writing test suites that use TAP version 13 features you’ll get even more informative reports as an indirect result of T::H 3.00.
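
For readers who haven’t seen it, here is a rough sketch of the kind of TAP version 13 output being referred to, with a structured YAML diagnostic block attached to a failing test (the values shown are invented):

TAP version 13
ok 1 - addition works
not ok 2 - subtraction works
  ---
  message: 'values do not match'
  got: 5
  expected: 3
  ...
1..2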

Andy Lester: And it’s completely compatible?

Andy Armstrong: It’s very slightly more fussy about completely crazy syntax errors. But generally yes, compatible – foibles and all.
That’s syntax errors in TAP (Test Anything Protocol) – just for folk who don’t know what’s going on behind the scenes.

Andy Lester: So what’s in the future for Test::Harness and prove, its command-line interface?

Andy Armstrong: Well we’re just talking about TSP (Test Steering Protocol) on
the perl-qa list. And we need to do something interesting with the
YAML diagnostic syntax we have now. I’ve written a module for
TextMate that uses that so that the cursor jumps to the right line
in the test program when you click on the diagnostic.

Andy Lester: What’s the benefit of TSP? How would a tester use that?

Andy Armstrong: It would give a test suite more active control over its own execution.
Particularly in the case of things like user interface toolkits or
modules that are highly platform or configuration dependent you may
have large number of tests you’d like to skip conditionally. So
TSP would be a convenient way to have a single controller program
that would decide which other parts of the test suite to execute.
And then you’d probably grow on that to expose more control over
which tests to run via prove or whatever UI you’d be using.

Andy Lester: So a more advanced version of SKIP blocks, and you wouldn’t have to figure out what tests to run when you ran Makefile.PL or Build.PL.

Andy Armstrong: Yes. And an area where people are likely to find new applications too.

Andy Lester: Anything else people should know?

Andy Armstrong: That there’s still plenty more to do with
T::H and testing in general. And that I’m surprisingly cheap 🙂 I
also want to thank people who have worked on Test::Harness 3, in
alphabetical order: Sebastien Aperghis-Tramoni, Shlomi Fish, David
Golden, Jim Keenan, Andy Lester, Michael Peters, Curtis “Ovid” Poe,
Michael Schwern, Gabor Szabo and Eric Wilhelm. All helped immensely
– even you 🙂

Andy Lester: Well, I do want to thank all of you for doing
this massive overhaul of Test::Harness. Abandoning the existing
code and starting from scratch has given T::H a new lease on life,
and a new platform to move forward.

Richard Dice talks about changes and projects at the Perl Foundation

October 1, 2007 Interviews, Perl Foundation

I talked today with Richard Dice,
the newly-elected president of the Perl Foundation,
about the recent changes in TPF, and what TPF has been working on lately.
If you’ve asked “What does TPF do? Why should I support it?”, this
interview should help answer that.

Andy: Richard, you’re now President of
The Perl Foundation,
Jim Brandt is Vice-President, and Bill Odom is Chairman. What do these
changes mean for TPF and for the Perl community?

Richard: Regarding the first of the two questions you have
embedded in there, what it means for TPF, there’s a pretty
straightforward answer – it means that I am now the person entrusted
with the abilities of the President, per Article V, item 5.05 of
the Bylaws of The Perl Foundation.
From the point of view of TPF being a corporation the abilities of
the President are pretty standard President-stuff. It basically
means that I’m the guy able to sign contracts and am responsible
for the general management of the corporation. The President is
also a member of the Board of Directors so I have a voice within
that group and a vote in all voting matters of the Board. I don’t
think that there are too many surprises as far as any of that goes.
It’s vanilla-corporation-legal stuff.

For the previous two years the TPF President had been
Bill Odom. In the past few months Bill had been considering what
his own personal strengths, interests and abilities to commit time
would be in the future, and mostly he was thinking that what he
wanted to invest his energies into were Board considerations. That
is, organizing how the Board would conduct its responsibilities.
And that’s a Chairman job. The chairman for the previous few years
had been Kevin Lenzo. After all the years Kevin had been involved
he felt as though he had done all he wanted and needed to in terms
of active participation. So the Board thought that Bill would be
the right person to take on that role. I was the Steering Committee
Chair of TPF for the almost-two years up to that point. Bill and I
did plenty of work and discussion together because of that, and I
got a level of familiarity with much of the rest of the Board over
that time as well. They thought I would be a good choice to fill
the position.

I think that the Board liked one aspect of my
thinking in particular pertaining to the Perl community. That is:
the Perl community is just fine. Better than fine. The community
is great. TPF exists to support the community. So what we
have to pay attention to is the areas where the community is not
great.

Andy: What areas would those be?

Richard: We need to help the rest of the world understand
what Perl has to offer them. We need to talk with the rest of the
world and gather together what they have to tell us, organize it,
and present it back to the community in a coherent way so that we
understand what the perceptions of the rest of the world are regarding
what Perl and its community are all about. This kind of communication
is a pre-condition for the next step, which is figuring out how the
community and the rest of the world can help each other.

Andy: Any plans or grand ideas to share along those lines?

Richard: I’ll share one plan that has already come to pass.
Forrester Research approached
TPF back in April 2007, asking us to participate in a survey of
dynamic languages (Perl, Python, Ruby, PHP, Javascript) they were
putting together. That was really important.
Forrester has a lot of reach into the corporate IT
world, at the VP and CIO/CTO level. I thought it was very important
for us to get the word “Perl” prominently placed within that survey.
What followed was a few weeks of brain-wracking work, not just mine
but with a ton of help from about a dozen people inside and outside
of TPF. But it was also important for us to participate because
just seeing what sorts of considerations Forrester put into the
survey was a reflection of what they thought their audience was interested
in. Participation was an excellent two-way communication opportunity.

Andy: What were the results of their report?

Richard: The results were quite good I thought. Forrester
“Wave” surveys have a
pretty standard format; in it, Perl was considered a “Leader” in
this space. TPF will issue a larger press release about the results
of this survey later. (The citation guidelines are complicated and
we have to spend some real time in sifting through it all before
we can make an official and detailed statement of the results.)

Another project that I’m involved with now is trying to make Perl
5.8.8 an official part of the
Linux Standard Base
3.2 spec. This is a really good idea, as it means that any Linux
ISVs that make a product that targets LSB 3.2 can assume the presence
of a (sane) Perl distribution and so they don’t have to ship it
themselves.

These two examples suggest what I think will be a theme of the next
year, which is TPF working with other organizations in alliances.
Everyone is good at something. No one is good at everything. We have
to be able to offer our expertise to other organizations, and we
have to be willing to work with others in order to take advantage
of their expertise. Trying to do things another way is a recipe for
frustration and limited results.

Andy: Is the LSB project something that needs to happen at
the TPF level? Is this one of those things that couldn’t happen if
TPF weren’t doing it?

Richard: That’s a good question. I don’t immediately see a
reason why TPF would have to be involved.
Linux Foundation
could have tried doing this without our help. However, this goes
back to what I said about everyone having their own areas of
expertise. The people in Linux Foundation aren’t experts about Perl.
From my perspective, the things I helped them with on this are
pretty minor. But I saved them a ton of time helping them stay
away from blind alleys in where they were going with this. And I
could give them confidence that this was an effort that was worth
undertaking. If they wanted to include Perl in LSB and they couldn’t
find a “Perl door” to knock on to get help in what they’re doing,
maybe they’d think that it wasn’t worth the effort because Perl
wasn’t vibrant, active and supported.

[Note: Allison Randal noted after this
interview was published: “In fact, the Linux Foundation did try to do it without our help, but had a hard time figuring out who to talk to in the community.” — Andy]

As I said before, I think TPF has a huge role to fill in interfacing
between people on the inside of the community and people on the
outside. Perhaps some Perlbuzz readers can’t imagine other people
thinking that Perl isn’t vibrant, active and supported. But that’s
exactly my point — without an organization like TPF to speak
for Perl in these kinds of situations, that’s exactly the kind of
impression that would be conveyed. Some aspects of what TPF does
are simple, but they’re crucial.

Andy: So did the Linux Foundation come to TPF asking about
getting Perl in LSB 3.2?

Richard: I’m not quite sure what the mechanics of the
engagement were. Allison Randal, a TPF Board member, was the one
who started up the discussion with Linux Foundation. Once she made
initial contact I inherited the doing of the work.

Andy: So what can Perlbuzz readers do to help out with TPF?

Richard: The first thing I’d urge Perlbuzz readers to do is
to be involved with the Perl community. Go to
YAPC conferences, go to Perl
Workshops and Hackathons, go to your local
Perl Mongers meetings. This
strengthens the whole Perl community, not just TPF. (And as an
aside, it’s something I’ve found enormously personally rewarding
and enjoyable. I recommend it to anyone.)

Be eyes and ears on the ground and in the local Perl and IT scenes.
If you see something interesting that you think has implications
for Perl, let us know. Email pr@perlfoundation.org.
Pay attention to websites like
news.perlfoundation.org,
use.perl.org,
perlbuzz.com and
yapc.org. Every now and then something
can happen where TPF could use specific help. These are the places
where the news would first go out. There is also the #tpf IRC
channel at irc.perl.org. If you want to talk to TPF folk, you can
look for us there.

Andy: I should note that I am the pr@perlfoundation.org
contact, and that Perlbuzz is sort of an outgrowth of my PR role
for TPF, although entirely separate from TPF.

Richard: That’s it for projects right now, but please track
me down for another interview in a few months. We can cover what’s
been going on then. And thanks for the great work with perlbuzz.com!
And while it’s separate from your Perl Foundation PR hat, I think
the most important thing is that Perl gets promoted! You and Skud
are doing this fantastically well with perlbuzz.com so I’m a big
supporter.

Andy: Any time you have something to say to the community,
Richard, I’m glad to publicize it. Thanks for your time.

Perl::Critic: an interview with Chris Dolan

September 14, 2007 Interviews

Chris Dolan recently received a Perl Foundation Grant to write 20 new policy modules for Perl::Critic. I managed to catch up with Chris by email and ask him a bunch of questions about his work on Perl::Critic, the TPF grant, and more.

Kirrily: So, how did you first get involved in Perl::Critic?

Chris: Funny, I had to look back at my email to remember this. You’d think I could remember two years ago…

I read Damian Conway’s Perl Best Practices (aka PBP) in summer 2005 and thought, “Finally! The solution to the ‘Perl is not readable’ myth!” I tried applying all of the ideas manually in a new module I was just about to release to CPAN (Net::IP::Match::Regexp). I found the process rewarding, but a little tedious and error-prone.

After working with the PBP recommendations manually, I thought about writing some code to judge some of the recommendations automatically, but happily I decided to check CPAN first. I found Perl::Metric::Basic and posted some RT reports. Then I found Perl::Critic, which had just been renamed from Perl::Review. I found the code much more approachable than Perl::Metric and soon filed a few RT reports and a cpanratings review. Jeff Thalhammer, the P::C creator and leader, emailed me back personally thanking me for the feedback. After that, we had about a dozen back-and-forths where Jeff sent me pre-release tarballs of Perl::Critic and Test::Perl::Critic to test before sending them to CPAN. Then, I wrote my first from-scratch policy (Modules::RequireEndWithOne) about a month later, and I was hooked.

Kirrily: What prompted you to apply for a TPF grant? How did you convince them to give you money? How did you find the grant process?

Chris: I hope nobody criticizes me for admitting this, but my reason for applying for the grant was self-motivation. Our second child was born Jan 1, 2007 and the company where I worked was struggling, so it was a hard time to find energy for my open source efforts. I usually work best under pressure, so I decided the publicity of a TPF grant would force me to get something done (and the money will be a nice bonus too).

I wrote a ton of grant proposals in my former life as an astronomer and, of course, tons of business proposals as a web/programming consultant, so I found the TPF grant process pretty easy. The positive impact of Perl::Critic is obvious on the community of people who care about good Perl code, so the justification part was straightforward. The harder part was deciding what part of P::C to
propose to build.

Kirrily: Is there a theme or themes to the 20 modules you’re doing?

Chris: Yep. We keep an extensive TODO.pod file in the P::C svn repository. One portion of that file is devoted to PBP recommendations that seemed feasible to implement in code. I had created that list over a year earlier by paging through the PBP book and brainstorming implementations for each recommendation. Needless to say, some of the PBP recommendations will never make it into P::C — for example “Never assume warning-free compilation implies correctness” (page 432).

It turned out there were an even 20 PBP-based ideas in the TODO list (vs. about 40 non-PBP ideas). Twenty seemed like a nice round number and a challenging-but-doable quantity. At that time, I was the primary author of 27 of the 93 finished policies, so I had a pretty good idea of what was involved.

Kirrily: Why should people use Perl Critic? Why shouldn’t they?

Chris: In my opinion, it’s good for two things:

1) Finding bugs (real bugs and potential bugs)

2) Making your code more approachable

Both are especially important for open source code that might be used by thousands of other programmers. Why write bad open source code?

One reason not to use Perl::Critic is to put a badge of honor on your code. Achieving P::C compliance is a good thing to announce to the world because it tells users that you care about the quality and consistency of your code. But P::C doesn’t help make your code better directly. It just tells you that your code isn’t broken in one of about 100+ known ways. And some of those 100+ contradict each other! Compare that to the innumerable ways to write bad code. So, you’ve got to start with good code and use P::C as a tool to make it a little better.

One piece of advice I like to share is that you should not be afraid to turn off P::C policies that don’t work for your code. I think Damian gave similar advice in PBP. In the P::C code itself, we have 98 instances of ## no critic(...) which turns off certain policies. Even Perl::Critic is not 100% Perl::Critic compliant! But
98 out of 27,000 lines of code isn’t bad.
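
For anyone who hasn’t used the annotation, here is a tiny sketch of silencing a single policy on one line (the policy named is a real core policy, but which ones you silence is up to your configuration):

use strict;
use warnings;

my $listing = `ls -l`;  ## no critic (InputOutput::ProhibitBacktickOperators)

print $listing;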

Another controversial topic is whether .t files should be P::C compliant. Personally, I don’t bother. Writing test code needs to be fast and easy or it falls by the wayside.

Kirrily: Do you know of any really interesting uses of P::C out there? Famous companies using it?

Chris: Good question. We should ask Jeff Thalhammer about that — as P::C founder, he has a closer connection to our user community. There was brief talk about creating a for-profit consulting entity to help big companies write better code using P::C, but I don’t think anything came of that (I was too busy to participate).

What I find interesting is the people/groups/companies who have privately implemented their own policies for P::C. Parrot, for example, had a few custom policies the last time I looked. Another example is MathWorks, who funded the Perl::Critic developers to write some specific policies they desired. Now that’s a business model any open source developer craves!

Kirrily: What next after you’ve implemented these P::C modules?

Chris: Hmm, I have no real plans. More policies, I guess. I’d love to get more people writing them. They aren’t too hard to create and without them P::C is worthless. If anyone wants to get involved, I’d be happy to be a mentor. Giving fish vs. teaching fishing, y’know.

In my day job, I do a lot of Java coding (and often enjoy it). We use a commercial package called CodePro by Instantiations which is conceptually similar to Perl::Critic but is much more polished. They have over 900 policies to choose from, many of them specifically designed to squelch anti-patterns in the big APIs like Spring, Hibernate, Struts, etc. Jealously, I’ve often thought that they have an easier job than we do because Java is so much easier to parse than Perl, and the code smells seem to fall in more easily identifiable patterns.

When Perl 6 becomes popular, I’d like to see Perl::Critic be implemented against it. Perl 6 will have a grammar that is much more amenable to grepping. Behind the scenes, I’d like to see the Perl 6 version of P::C work against de-sugared syntax or perhaps even the abstract syntax tree to simplify the policy writing. Unless a huge amount of work goes into improving PPI, that’s not going to happen for Perl 5. At one point, we talked about writing P::C policies using an XPath-like notation, but nobody ever championed that goal.
I discovered that FindBugs (another Java code analysis tool) took that approach successfully. Maybe P::C can benefit from the Parrot tree grammar engine (TGE). Or the optimizer. After all, the syntax
pattern matching that P::C has to do is not that different from the opcode pattern matching that an optimizer performs.

Overall, what I’d really love is for Perl 6 to avoid the “write-only language” moniker that got slapped on Perl 4 and Perl 5. If P::C can help with that, then it has succeeded.

Thanks, Chris!

Find out more about Perl::Critic at PerlCritic.com or listen to this PerlCast interview with Jeff Thalhammer, creator of Perl::Critic.