*Sent to editor of DrDobbs/InformationWeek*
I enjoyed Sid Sidner’s [article on static code analysis tools](http://www.drdobbs.com/tools/224600102), but was surprised to see two big omissions, especially since the omitted tools may provide a low-cost point of entry for an organization just starting to look at static analysis.
First, [PC-Lint](http://www.gimpel.com/) is a relatively low-cost tool that does a fine job of C/C++ analysis. It’s been around for years, and it found many C bugs in my code back in the early ’90s. I’ve also used the open source [Splint](http://splint.org/) for years on the [Perl 5](http://www.perl.org/) and [Parrot](http://parrot.org/) open source projects. Although Splint is nowhere near as complete a package as Coverity’s Scan product (Coverity runs Scan on dozens of open source projects for free as a service to the community), it’s a great introduction to the power of static code analysis. I also suggest readers check the [“List of tools for static code analysis” page](http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis) on Wikipedia.
Second, the article missed a crucial point: any of these tools will require tuning. Splint will generate hundreds of warnings per source file on its first run against your code, since nobody in the real world is as pedantic as the tool is. Each organization will have to decide which policies are worth following, and which are just noise.
Finally, static code analysis isn’t limited to C++ and Java; many dynamic languages have similar tools. For example, [Perl::Critic](http://perlcritic.com) is a fantastic tool for analyzing Perl code, as well as an extensible framework that lets each organization create custom policies to fit its own development practices.
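That per-organization tuning typically lives in a `.perlcriticrc` file. The policy names below are real Perl::Critic policies, but which ones to silence or promote is a hypothetical team choice, shown only to illustrate the format:

```ini
# Apply all policies of severity 3 ("harsh") and above.
severity = 3

# Silence a policy this team has judged to be noise.
[-ValuesAndExpressions::ProhibitMagicNumbers]

# Promote a check this team considers critical.
[TestingAndDebugging::RequireUseStrict]
severity = 5
```

A leading minus disables a policy entirely, while a plain section lets you override its parameters, so the noise-versus-policy decision is explicit and version-controlled.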