More looking at open source quality processes

; Date: Tue Aug 16 2005

Tags: Java

Like I said in my previous posting, I'm looking at the quality processes in open source projects. I'm studying how we in the Java quality team might be more open about what we're doing.

On my way home tonight I stopped at a geek bookstore and found Succeeding with Open Source. It has part of a chapter devoted to quality processes, and I thought I'd share a little.

The context of the chapter is how to assess an open source project, and it appears to be aimed at an executive trying to decide whether to incorporate some OSS project into what they're doing inside their company. The section I'm looking at is titled "Assessing Product Quality".

It starts with a description of the typical commercial quality process. The typical commercial product has an opaque quality process that gives outsiders no clue how extensively the product is tested. That is a very important point to consider, and it brings back memories of a startup company I used to work for.

I'm not going to name the startup, but it was very small at the time (around 50 people). We had no QA, just one person who handled support calls, some marketing, and a bunch of developers, many of whom were assigned to bug fixing (my role). I remember one customer visit where they showed me a regression (a bug we'd fixed before) and the fella turned to me and asked point blank, "What do you guys do for testing?" I knew full well we didn't do much testing, but I covered somehow without admitting to it, and we went on with the meeting and they continued being a customer.

In the book's assessment of "product quality" for an open source project, it lists a few parameters:

  • Source code inspection
  • Consistent coding style
  • Code written in a clear, well-maintained manner
  • Presence of tests
  • Count the number of tests
  • Assess the tests themselves in much the same way you'd assess the source code
  • Look at test coverage
  • Number of outstanding bugs
  • Number of checkins

I don't fully grok why this assessment process is useful, as it appears to be very time consuming. For example, source code inspection is repeatedly mentioned in this section, but my experience is that it's very laborious to read code well enough to grasp its overall workings. You can scan the code and see whether the indentation is even or messy, and perhaps do some shallow analysis for obvious programming errors. With a Java project you could take it one step further and run FindBugs or a similar package to see how many potential bugs it flags. But if you're going to understand a large package at any depth, e.g. gain an architectural understanding, that takes a long time mapping out dependencies and whatnot.
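
To make the "shallow analysis" point concrete, here's a contrived fragment of my own (not from the book) with the kind of mistakes a static checker like FindBugs flags in seconds, but a quick visual scan tends to miss:

    public class Sketchy {
        public boolean isAdmin(String role) {
            return role == "admin"; // compares references, not contents; almost always a bug
        }

        public int length(String s) {
            if (s == null) {
                System.err.println("warning: null input");
            }
            return s.length(); // s may still be null here: possible NullPointerException
        }
    }

If I remember right, FindBugs reports both of those out of the box, which is exactly the sort of shallow-but-real signal an assessor can get cheaply, without the deep architectural reading.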

The one attribute I'm in complete agreement with is code coverage. Code coverage is a crude measurement, but it's a danged useful crude measurement.
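
To show what "crude" means here, consider this contrived sketch (my example, not the book's). The test exercises every line of abs(), so a line-coverage tool reports 100%, yet a real bug survives untouched:

    import junit.framework.TestCase;

    public class AbsTest extends TestCase {
        // Hypothetical method under test.
        static int abs(int x) {
            return x < 0 ? -x : x;
        }

        // Both branches run, so line coverage is 100% even though
        // abs(Integer.MIN_VALUE) overflows and stays negative.
        public void testAbs() {
            assertEquals(5, abs(5));
            assertEquals(5, abs(-5));
        }
    }

Coverage tells you which code the tests never touch; it says nothing about whether the touched code is actually verified. That's why it's crude, and why the holes it reveals are still danged useful.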

On the "number of checkins" the book suggests a high checkin rate indicates active development, and that's a good thing. Well, I tend to agree, but for one thing: a high checkin rate is also a high churn rate, and each new bit of code checked in is a batch of new potential bugs. Quality professionals know that unmodified code has known quality, while recently modified code has unknown quality.

In this section the book repeats one fallacy: that the availability of product source code is what defines the term open source. Well, that's obviously not true (hint: Java's source code is available, but the license doesn't fit the accepted definitions of open source), but that's not going to make me discount the whole book.

Overall, the one useful point I'm taking from this section is how the opacity of the typical commercial software quality team keeps any potential customer from making a good assessment of the quality of that software.

Source: (web.archive.org) weblogs.java.net

Comments

"I don't fully grok why this assessment (of tests) process is useful"

I'm going to follow your lead and refrain from naming names, but unfortunately I have witnessed many "unit tests" checked in over the years that were incapable of failing.

--John

Posted by: johnreynolds on August 17, 2005 at 06:12 AM

I don't know how to say it better ... but "I don't fully grok" doesn't mean I disagree with everything the book said. No, there's much to agree with. What I meant is that I don't yet understand why that assessment process is the best one to follow.

And, yes, writing tests is hard ... and it's not so uncommon to find tests that are incapable of failing. Hence the book's point about assessing the quality of the tests themselves is a good one.
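
To make that concrete, here's a contrived sketch (not from any real codebase) of a JUnit test that reports a pass no matter what happens:

    import junit.framework.TestCase;

    public class CannotFailTest extends TestCase {
        public void testParse() {
            try {
                int n = Integer.parseInt("not a number"); // always throws
                assertEquals(42, n);                      // never reached
            } catch (Throwable t) {
                // swallowed: the method returns normally, so JUnit reports a pass
            }
        }
    }

Catching Throwable swallows both the NumberFormatException and any assertion failure, so this "test" literally cannot fail. A reviewer who only counts tests would still count it.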

Posted by: robogeek on August 17, 2005 at 11:22 AM

Quoting johnreynolds: "I have witnessed many 'unit tests' checked in over the years that were incapable of failing." In other words: who's responsible for testing the tests for correctness?

Posted by: jwenting on August 18, 2005 at 11:16 PM

About the Author(s)

(davidherron.com) David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.