An experiment in community process

; Date: Sun Oct 09 2005

Tags: Java

On Friday I was reading some discussion about open source projects. Among the claims was the typical statement that thousands of eyeballs result in high quality. The assumption is that with that many people there is continual ad-hoc testing going on, and any problems get spotted quickly.

Well, I don't think that's actually the case. First, it presumes the project has a lot of active participation. Projects with few participants don't have thousands of eyeballs, do they? The model only works if there are thousands of users each doing their thing, who then effectively serve as an ad-hoc test team. With few users, the daily usage of the participants doesn't cover much of the code, and there could easily be bugs lurking in the corners.

The second presumption is that daily usage by those thousands of users covers a large part of the code. Any quality professional knows that valuable testing has broad coverage of the code being tested; your testing only validates the code the tests cover. If you rely on the users to run the software and effectively ad-hoc test it for you, but they all have the same or similar usage patterns, they're all going to be hitting the same lines of code. Hence, there will still be bugs lurking in the corners.

In any case, let's get back to my experiment.

When I read that on Friday I thought, "let's test this." I remembered a slashdot story from several months ago about someone posting a bogus/random article on the Wikipedia and then waiting. The Wikipedia is supposed to have the same advantage that open source projects have: lots of eyeballs and the ability of anybody to change/edit anything. One would think this would lead to chaos, but the Wikipedia shows that it actually leads to a pretty good result. For the most part the Wikipedia has a lot of good information in it.

However, in the experiment mentioned on slashdot, the guy left the article there for a week. Nobody touched it, until he deleted it himself. Hmmm... so much for the wisdom of the masses?

So, here's what I did for my experiment.

First, I took this article (Hash Tables Considered Harmful) from my personal web site. This is a randomly generated computer science paper produced by some software also previously discussed on slashdot. The final citation in the paper links to the software.

Second, I created myself an account on wikipedia. (Robogeek)

Third, I posted the article. It took me quite a while to clean it up for the Wikipedia formatting, and I dropped the images because I didn't want to pollute the Wikipedia with more than this one article.

Fourth, I waited.

The article: You'll notice the article is no longer there. It was deleted within 18 hours of posting. Heh, "Patent nonsense - and a lot of it". Why, thank you.

Anyway, this experiment shows that, in this instance at least, the Wikipedia community process worked.


A few points toward understanding. The idea is that with enough eyeballs, no bug is shallow, meaning "deep" bugs will be turned up more quickly than in a closed source project.

Next, don't confuse users with developers. The number of developers counts, not users. It used to be that all users in open source were developers as well, but that time has passed.

I don't see how a wikipedia test is the same as a source code test.

If you want to read more on how it all works, and what is required to make it work, check out

Posted by: neelm on October 10, 2005 at 06:25 AM

About the Author(s)

David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.