Tags: Java
This article at TheServerSide was pointed to by a poster on javalobby claiming it's an example of how "ugly" annotations are. Hmmm... They're different alright, but ugly? To each their own I suppose.
Test framework comparison, by Justin Lee, July 2005, TheServerSide.com
He goes over a couple of new test frameworks and how annotations have affected their design. I guess JUnit is no longer the king of test frameworks.
For the record, in my team (Java SE SQE) we use none of those. Our team began in the earliest days of Java, and hence we've got tools and practices that predate everything y'all have done in the Open Source world around Java. One of the things I'm looking at is how (or if) this team can make use of the testing tools developed in the Open Source community, and how we can collaborate on further tool development.
Back to the article ... It discusses these frameworks:
JTiger ( http://www.jtiger.org)
TestNG ( http://www.testng.org/)
JUnit ( http://junit.org/)
The main point of the article is looking at annotations and how they are used in the frameworks. Annotations are a new (1.5) language feature, and they are (to my eye) an obvious way to collect pointers to test methods. I'm glad the open source test frameworks have adopted this technique.
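To make that concrete, here's a rough sketch of what annotation-driven test declaration looks like. The @Test annotation below is a made-up marker, not the actual TestNG or JTiger annotation, but the idea is the same: the test class needs no special superclass, and a runner can find the test methods by reflection.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // A hypothetical marker annotation, retained at runtime so a runner
    // can discover annotated methods reflectively.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {
    }

    // No special superclass needed; the annotation alone marks the tests.
    class StringTests {
        @Test
        public void concatenation() {
            if (!"foo".concat("bar").equals("foobar")) {
                throw new AssertionError("concat is broken");
            }
        }

        @Test
        public void emptyStringHasLengthZero() {
            if ("".length() != 0) {
                throw new AssertionError("length of empty string isn't 0");
            }
        }
    }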
I don't quite buy the claim that the annotations make the source ugly. Like I said, ugliness or beauty is really in the eye of the beholder. In particular, annotations can be interpreted by IDEs and displayed nicely. While I'm a longtime Emacs user (XEmacs specifically), I've stopped using that kind of editor and moved to the IDEs. They add so much to the experience of editing source code that I can't see ever going back. For example, being able to find source to edit by browsing a package and class hierarchy is simply lovely. Why should I have to keep in my head the mapping of the class hierarchy to the files in the file system?
The last point I want to make is to contrast this approach to writing test cases with what we do in the Java SQE team.
JUnit, and apparently JTiger and TestNG as well, all mix together two separate things which the SQE team's toolset keeps separate. Instead of a "test framework" we use a "test harness". The role of the test harness is to encapsulate a set of tests, execute them (perhaps executing only a subset based on some filtering rules), and collect the status, results, and output of each test executed. The test cases themselves have few requirements placed on them: they are not expected to extend some class like TestCase, and they can be run standalone if needed.
The distinction is this:
A "Test Framework" provides various classes and infrastructure to ease writing test cases.
A "test harness" encapsulates a set of tests, reliably executes them, and produces a useful report.
They are rather orthogonal pieces of functionality, and I wonder why the two concepts are mixed together in these test frameworks. I don't know the answer, as I've not studied them in depth (yet), and I'd be curious to learn more about this from the community.
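For what it's worth, here's a minimal sketch of the harness idea, building on the hypothetical @Test marker from the earlier snippet. This is not how our internal harness (or TestNG, or JTiger) is actually written; it just illustrates that running tests, collecting their status, and reporting can live entirely outside the test code itself.

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    // A toy harness: it knows nothing about how the tests are written
    // beyond the @Test marker. It invokes each annotated method,
    // records the outcome, and prints a simple report.
    class SimpleHarness {

        static class Result {
            final String name;
            final Throwable failure;   // null means the test passed
            Result(String name, Throwable failure) {
                this.name = name;
                this.failure = failure;
            }
        }

        static List<Result> run(Class<?> testClass) throws Exception {
            List<Result> results = new ArrayList<Result>();
            Object instance = testClass.newInstance();
            for (Method m : testClass.getMethods()) {
                if (m.isAnnotationPresent(Test.class)) {
                    try {
                        m.invoke(instance);
                        results.add(new Result(m.getName(), null));
                    } catch (InvocationTargetException e) {
                        results.add(new Result(m.getName(), e.getCause()));
                    }
                }
            }
            return results;
        }

        public static void main(String[] args) throws Exception {
            for (Result r : run(StringTests.class)) {
                System.out.println(r.name + ": "
                        + (r.failure == null ? "PASS" : "FAIL - " + r.failure));
            }
        }
    }

In a sketch like this, filtering, output capture, and report generation would all live in the harness, while a framework's job would be limited to making the bodies of the test methods easier to write.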
Source: weblogs.java.net