Introduction
Read almost any software developer journal or website and we're told that responsible developers write test cases. If those developers are Java developers then, most likely, they use JUnit for those test cases. JUnit is probably the oldest and certainly the most popular Java-based testing framework around. Created by Erich Gamma and Kent Beck, JUnit has become the de facto standard for unit testing. Over time, however, other frameworks have been built to address perceived faults and deficiencies in JUnit.
One such option is JTiger, written by Tony Morris. JTiger is an annotation-based testing framework that leverages many of the new features JDK 1.5 introduced. Another option is TestNG, by Cedric Beust and Alexandru Popescu. Both offer new features over JUnit 3.x, but as we'll see at the end of this article, the forthcoming version 4 of JUnit will include many of these features as well.
Though many Java developers are familiar with JUnit, a brief discussion follows to allow those unfamiliar with JUnit to get caught up. To facilitate the discussion of these frameworks, and to provide apples-to-apples comparisons of features, I have written a simple test case to serve as a basis for comparison.
JUnit
JUnit is one of the oldest and most popular testing frameworks available for Java development. Because of its early adoption, most Java developers are probably familiar with it. There are numerous extensions available for JUnit to allow for testing just about any component of a system, from XML files to JavaScript.
A basic JUnit test case extends junit.framework.TestCase, and tests can be bundled together into a suite using a junit.framework.TestSuite instance, allowing the test developer(s) to group similar tests. TestCase provides the basic methods for building a system of unit tests, including the assert methods which provide the actual testing and results mechanism.
public class PersonTest extends TestCase {
    private Person _person;

    public void setUp() {
        _person = PersonManager.create("John", "Cleese");
        Department department = new Department("Ministry of Silly Walks");
        _person.setDepartment(department);
        _person.setSalary(50000d);
    }

    public void testGetFromManager() {
        assertNotNull(PersonManager.get(_person.getID()));
        assertNull(PersonManager.get(-12345L));
    }
    ...
<snip/>
There is a setUp() method to set up the data for the tests; this particular test case has no tear down method. Notice that the test extends TestCase and that each test method begins with the word test and takes no arguments. These tests can throw any exception the developer wants; however, any exception thrown from the method is treated as a failure by JUnit. To test methods that should throw exceptions, the test has to look something like this:
try {
    shouldFail();
    fail("This test should have thrown an exception.");
} catch (SomeException se) {
    // success
}
The actual tests are done using a variety of assertion methods like the assertNotNull() and assertNull() shown above. These methods validate data and provide pass/fail semantics for a test: if an assertion fails, an AssertionFailedError is thrown and the test is marked as failed. TestCase inherits such methods as assertTrue(), assertEquals(), and assertNotNull() from junit.framework.Assert, and they notify the JUnit framework of the pass or fail status of a given test. There are a variety of asserts available, and each test can make as many asserts as the developer needs to thoroughly test a given test case.
As in the case above, sometimes set up work needs to be done before a test can run, e.g., setting up data in the database or starting a utility thread. TestCase provides the setUp() and tearDown() methods for this; they are called before and after (respectively) each test method is run. Test methods are required to have the form public void testFoo(), where Foo is some name. The name is free form and can be whatever the developer wants, but the method name must start with test, return void, and take no parameters. Again, a test method can declare any exception desired, but any exception thrown at runtime is considered a failure.
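This name-based lookup is easy to picture in plain Java, since JUnit discovers test methods reflectively. The sketch below is a simplified illustration of the convention, not JUnit's actual implementation; the class and method names are made up:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class Discovery {
    // Finds public, no-arg, void methods whose names start with "test" --
    // the JUnit 3.x naming convention described above.
    public static List<String> findTestMethods(Class<?> clazz) {
        List<String> names = new ArrayList<String>();
        for (Method m : clazz.getMethods()) {
            if (m.getName().startsWith("test")
                    && m.getParameterTypes().length == 0
                    && m.getReturnType() == void.class) {
                names.add(m.getName());
            }
        }
        return names;
    }

    // A stand-in "test case" class for illustration only.
    public static class SampleTest {
        public void testGetFromManager() { }   // discovered
        public void setUp() { }                // skipped: no "test" prefix
        public void testWithArg(String s) { }  // skipped: takes a parameter
    }

    public static void main(String[] args) {
        System.out.println(findTestMethods(SampleTest.class));
    }
}
```

Only testGetFromManager() is picked up: setUp() lacks the test prefix, and testWithArg() takes a parameter.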
JUnit tests are simple enough to create and should be easily learned by most developers. There is no external configuration except what is needed to actually run the tests via ant or the command line. These tests can be run in a variety of ways, including a Swing-based test runner and a command line-based test runner. Probably the most common form, and the only form we'll deal with here, is the ant task provided by JUnit.
Executing JUnit Tests
The following snippet is basically how I run my JUnit tests:
<junit printsummary="yes" showoutput="no" fork="true">
    <formatter type="xml"/>
    <classpath refid="test.class.path"/>
    <batchtest todir="${test.results}">
        <fileset dir="${test.dir}" includes="**/*TestSuite.java"/>
    </batchtest>
</junit>
Of course, for this to work, the junit jar must either be in ant's lib directory or be referenced in the property test.class.path. This will recursively find and execute all the test suites (I use the naming convention FooTestSuite.java to denote my test suites) in the test.dir directory. The results are stored as XML and can later be converted to an HTML report using junitreport.
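That conversion is done with ant's junitreport task. A snippet along these lines would do it (the directory properties here reuse the hypothetical ones from the junit task above):

```xml
<junitreport todir="${test.results}">
    <fileset dir="${test.results}" includes="TEST-*.xml"/>
    <report format="frames" todir="${test.results}/html"/>
</junitreport>
```

The fileset picks up the XML files written by the junit task, and the report element renders them as framed HTML in the given directory.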
JTiger
JTiger is a newer framework which relies heavily on the new facilities of JDK 1.5, the most obvious to the user probably being annotations and static imports. JTiger requires no external configuration files to execute the tests. For me, JTiger passed the ten-minute test easily: using the user guide, I was able to convert my JUnit test and have tests running quite quickly.
Test Example
Below is the JTiger version of the unit test we've already seen. I'll list it here and analyze it in the following sections.
package com.antwerkz.testing.jtiger;

import com.antwerkz.testing.Person;
import com.antwerkz.testing.PersonManager;
import com.antwerkz.testing.Department;

import static org.jtiger.assertion.Basic.assertEqual;
import static org.jtiger.assertion.Basic.assertFalse;
import static org.jtiger.assertion.Basic.assertNotNull;
import static org.jtiger.assertion.Basic.assertNull;
import static org.jtiger.assertion.Basic.assertTrue;

import org.jtiger.framework.SetUp;
import org.jtiger.framework.Test;
import org.jtiger.framework.Category;
import org.jtiger.framework.Fixture;
import org.jtiger.framework.Ignore;

@Fixture("Tests various aspects of Person functionality")
public class PersonTest {
    private Person _person;

    @SetUp
    public void setUpTestData() {
        _person = PersonManager.create("John", "Cleese");
        Department department = new Department("Ministry of Silly Walks");
        _person.setDepartment(department);
        _person.setSalary(50000d);
    }

    @Category("manager")
    @Test(value="managerGets", description="Tests to ensure that we can get from the manager correctly")
    public void managerGets() {
        assertNotNull(PersonManager.get(_person.getID()));
        assertNull(PersonManager.get(-12345L));
    }

    @Category("manager")
    @Test(value="manager removes", description="Tests removes from the manager")
    public void managerRemoves() {
        assertTrue(PersonManager.remove(_person.getID()));
        assertFalse(PersonManager.remove(-12345L));
    }

    @Category("bean")
    @Test(value="attribute tests", description="Test the attribute getters and setters")
    public void attributes() {
        Person person = PersonManager.get(_person.getID());
        assertEqual(_person.getDepartment(), person.getDepartment());
        assertEqual(_person.getSalary(), person.getSalary());
        assertEqual(_person.getFirstName(), person.getFirstName());
        assertEqual(_person.getLastName(), person.getLastName());
    }

    @Category("bean")
    @Test(value="departments", description="Test the relationships")
    public void departments() {
        Department dept = PersonManager.get(_person.getID()).getDepartment();
        int count = dept.size();
        PersonManager.get(_person.getID()).setDepartment(null);
        assertNull(PersonManager.get(_person.getID()).getDepartment());
        assertTrue(count == (dept.size() + 1));
    }

    @Category("manager")
    @Test(value = "ignored test", description = "this test should be ignored")
    @Ignore
    public void ignoreMe() {
        assertTrue(false);
    }
}
This test is basically the JUnit version ported over to JTiger. I have renamed the test methods to highlight the fact that method names can be whatever makes sense to the developers. As one can see, the test is very simple: there is not much to the test code, and it requires no configuration beyond what is shown here. There are two categories of tests defined here, manager and bean, and one of the manager tests is ignored. Notice the static imports of the assert methods; while static imports are not necessary, most developers will probably use them to improve readability.
Tests
Like JUnit, the core of JTiger is the test case, or fixture. JTiger tests are denoted with the @Test annotation, which can take either a single string for the name or named parameters for a name and a description. This example uses the named parameter approach, where value is the name of the test and description is, well, the description. Any method marked with this annotation will be executed by JTiger. Sometimes, however, only a certain subset of tests needs to be run; JTiger provides the @Category annotation to allow for logical grouping of tests. A given test can be in multiple categories, and the @Category annotation can be applied to the class definition so that all the tests on the class are in that category. Categories are selected via a regular expression passed to JTiger when the tests are run.
As with JUnit, a fixture in JTiger must have either a default constructor or one that takes only a single String. A test on a fixture is potentially any method on the class; these methods must take no arguments but can return something other than void if a result message is desired. If something other than void is returned, JTiger calls toString() on the returned object to retrieve the result message. If a method marked with @Test has any parameters, it is marked as "Ignored (Cannot Invoke)." We'll discuss result types a little later.
JTiger provides many different assertion capabilities. These assertions are found in the package org.jtiger.assertion and range from Basic.assertTrue() and Basic.assertFalse() to assertions as complex as ObjectFactoryContract.assertObjectFactoryXConsistentlyEqual(). Unlike JUnit, a JTiger test case does not inherit these assertion facilities, since JTiger tests are not required to subclass anything; instead, these methods are provided as static methods on various classes. Typical usage is through static imports, but that is not mandated by the framework. Developers are also free to add new assertions as necessary, of course.
Test Results
There are six different types of results for JTiger tests: "Success," "Ignored (Annotated)," "Ignored (Cannot Invoke)," "Failure (Set Up)," "Failure," and "Failure (Tear Down)." "Success" is fairly obvious, as is "Failure," but let's take a quick look at the other types.
"Ignored (Annotated)" results from the use of another annotation JTiger provides: when a test is marked with @Ignore, it will show up as "Ignored (Annotated)" in the report. This annotation is useful for temporarily disabling a test without running the risk of losing track of it. "Failure (Set Up)" and "Failure (Tear Down)" reflect failures during the set up or tear down of tests. And, as mentioned earlier, "Ignored (Cannot Invoke)" is used when JTiger cannot invoke the test method for various reasons.
Available Annotations
JTiger provides a number of annotations by default. We have seen some already, but I will list them here with a brief bit of coverage just for completeness.
Annotation | Description |
@Test | Defines a test case to be executed |
@Category | Specifies the category (or categories) to which the test belongs. Can be attached to a method or class declaration |
@Ignore | Tells JTiger to ignore the test |
@SetUp | Defines the set up method to be used for the tests |
@TearDown | Defines the tear down method to be used for the tests |
@ExpectException | Tells JTiger to expect an exception to be thrown from the test. Takes the class of the expected exception with an optional boolean controlling whether subclasses are allowed. This removes the need for idioms like: try { shouldFail(); fail(); } catch (SomeException e) { /* success */ } |
@Repeat | Tells JTiger to repeat the test n times. Each run has a distinct result; if n is less than 0, then 1 is used for n. |
@Fixture | Used to document the fixture class with more useful names and descriptions. |
The advantage of using @SetUp and @TearDown to denote the set up and tear down code for tests is that it prevents the typographical errors which can result in subtle bugs that take time to track down. Using JUnit, it is possible to define a method called setup() and expect JUnit to run it before each test, but due to the difference in case, that method is never run. If a developer mistypes @SetUp, the compiler will complain. The use of @Override in JDK 1.5 mitigates this to some extent but requires the developer(s) to remember to use it.
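The difference is easy to demonstrate with plain Java reflection. The sketch below defines a stand-in @SetUp annotation (the real one lives in org.jtiger.framework; everything here is illustrative) and shows that an annotated method is found no matter what it is named:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookup {
    // Stand-in for JTiger's @SetUp annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface SetUp { }

    public static class Fixture {
        @SetUp
        public void prepareData() { }  // any name works: the annotation marks it

        public void setup() { }        // a name-based lookup for "setUp" would miss this
    }

    // Returns the name of the first @SetUp-annotated method, or null if none.
    public static String findSetUp(Class<?> clazz) {
        for (Method m : clazz.getMethods()) {
            if (m.isAnnotationPresent(SetUp.class)) {
                return m.getName();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(findSetUp(Fixture.class));
    }
}
```

A mistyped annotation name simply would not compile, whereas the mistyped setup() above compiles cleanly and is silently ignored by a name-based framework.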
Test Execution
Like JUnit, JTiger provides an ant task to run the tests. To run JTiger from ant, the build.xml needs a snippet similar to the following:
<target name="jtiger" depends="main">
    <taskdef name="jtiger" classname="org.jtiger.ant.JTigerTask" classpathref="test.class.path"/>
    <jtiger>
        <category regex="manager"/>
        <fixtures>
            <fileset dir="${test.dir}">
                <include name="**/jtiger/*.java"/>
            </fileset>
        </fixtures>
        <result name="~html">
            <param value="${build}"/>
        </result>
        <java failonerror="true">
            <classpath refid="test.class.path"/>
        </java>
    </jtiger>
</target>
This target will find all the test fixtures in the category "manager" under the jtiger tree and execute them using the classpath given in the ant property test.class.path. The <result name="~html"> tag tells JTiger to process the results into HTML form and place the reports in the directory ${build}.
TestNG
TestNG is a testing framework that was started due to some frustrations with JUnit. These issues include such things as awkwardness in the assertions, executions of batched tests, and test initialization code that only needs to be run once. (Further details can be found here: http://www.beust.com/weblog/archives/000173.html). Like JTiger, TestNG also takes the annotation approach for marking up tests but takes the extra step of providing a javadoc mechanism for users still on JDK 1.4.
TestNG provides a few more features beyond what JUnit provides. TestNG has a flexible test configuration system (partly through the XML configuration). It also has built-in logging, a powerful execution model and configuration, test grouping, and dependent methods. TestNG allows a developer to define the chain of methods to execute before a given test case can be executed. This has the potential to be quite powerful and lead to less work as dependent tests will not be listed as failures because they rely on buggy code that has already failed. But we'll get to that in a bit.
Tests
The annotations for a TestNG test look very similar to a JTiger test case. However, there are a few differences. Let's take a look at a TestNG version of the JUnit test we've already seen and then we'll take it bit by bit:
public class PersonTest {
    private Person _person;

    @Configuration(beforeTestMethod = true)
    public void setUpPersonData() {
        _person = PersonManager.create("John", "Cleese");
        Department department = new Department("Ministry of Silly Walks");
        _person.setDepartment(department);
        _person.setSalary(50000d);
    }

    @Test(groups = {"manager"})
    public void managerGets() {
        assert null != PersonManager.get(_person.getID());
        assert null == PersonManager.get(-12345L);
    }

    @Test(groups = {"manager"})
    public void removes() {
        assert PersonManager.remove(_person.getID());
        assert !PersonManager.remove(-12345L);
    }

    @Test(groups = {"bean"})
    public void attributes() {
        Person person = PersonManager.get(_person.getID());
        assert _person.getDepartment().equals(person.getDepartment());
        assert _person.getSalary() == person.getSalary();
        assert _person.getFirstName().equals(person.getFirstName());
        assert _person.getLastName().equals(person.getLastName());
    }

    @Test(groups = {"bean"})
    public void testSetDepartment() {
        Department dept = PersonManager.get(_person.getID()).getDepartment();
        int count = dept.size();
        PersonManager.get(_person.getID()).setDepartment(null);
        assert PersonManager.get(_person.getID()).getDepartment() == null;
        assert count == (dept.size() + 1);
    }

    @Test(groups = {"manager"}, enabled = false)
    public void ignoreMe() {
        assert false;
    }
}
This test looks very much like the JTiger test we just discussed. Note, however, the lack of the @Category annotation that JTiger uses; TestNG instead defines groups in the @Test annotation. Notice, also, the enabled = false, which provides the same functionality as JTiger's @Ignore. One big difference, however, is that disabled tests will not be displayed in the report, so any disabled tests run the risk of getting lost. A test can also belong to multiple groups with the syntax @Test(groups = {"group1", "group2"}). @Test can be applied to methods as well as classes; when applied to the class, all public methods on the class are considered to be test cases, and subsequent @Test annotations can be used to add groups to various methods, if desired. Like JTiger, TestNG tests can be repeated, using @Test(invocationCount = 10).
TestNG also has the idea of dependencies on tests. A test can depend on one or more other tests, and on groups as well. The syntax for these dependencies is @Test(dependsOnMethods = {"method1", "method2"}) and @Test(dependsOnGroups = {"group1", "init.*"}); group dependencies can use regular expressions if desired. Method and group dependencies are mutually exclusive, so only one can be used on a given test. If a dependency fails, the dependent methods are marked as skipped, which reduces the number of failures on the reports due to upstream failures and helps highlight the actual error. There are three possible results for a TestNG test: SUCCESS, FAIL, or SKIP.
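The skip semantics can be sketched in a few lines of stand-alone Java (this is an illustration of the idea, not TestNG's implementation; the test names are made up): a test whose dependency did not succeed is marked SKIP rather than being run and reported as another failure.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SkipDemo {
    public enum Result { SUCCESS, FAIL, SKIP }

    // Runs "tests" in declaration order; a test whose dependency did not
    // succeed is skipped, and a test named in the failing list fails.
    public static Map<String, Result> run(Map<String, List<String>> deps, List<String> failing) {
        Map<String, Result> results = new LinkedHashMap<String, Result>();
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            boolean depNotSuccessful = false;
            for (String dep : e.getValue()) {
                if (results.get(dep) != Result.SUCCESS) {
                    depNotSuccessful = true;
                }
            }
            if (depNotSuccessful) {
                results.put(e.getKey(), Result.SKIP);
            } else if (failing.contains(e.getKey())) {
                results.put(e.getKey(), Result.FAIL);
            } else {
                results.put(e.getKey(), Result.SUCCESS);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<String, List<String>>();
        deps.put("initDatabase", Arrays.<String>asList());
        deps.put("insertPerson", Arrays.asList("initDatabase"));
        deps.put("queryPerson", Arrays.asList("insertPerson"));
        // initDatabase fails, so both downstream tests are skipped, not failed
        System.out.println(run(deps, Arrays.asList("initDatabase")));
    }
}
```

With the upstream failure isolated this way, the report shows one FAIL and two SKIPs instead of three failures, which is exactly the noise reduction described above.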
Unlike JUnit and JTiger, TestNG methods can take parameters. @Parameters({"param1", "param2"}) defines the parameters param1 and param2. Values for these parameters are defined in the XML configuration file we'll see in a later section. TestNG also supports factories to allow for dynamic test generation and parallel execution of tests with timeouts. I mention them for the curious but won't cover them here.
Also note that the TestNG examples in the user guide use Java's assertion facilities even though richer assertion methods are offered in the org.testng.Assert class. TestNG also ships with JUnit's assertion methods in the class org.testng.AssertJUnit. These two classes provide approximately the same types of asserts; however, the TestNG asserts put the message at the end of the method signature. The developers believe this arrangement leads to more readable code, as the values being compared come first.
One annotation I haven't mentioned yet is @Configuration. Using this annotation, a developer can specify exactly when a method executes during a test run. Valid parameters for this annotation are before/afterSuite, before/afterTest, before/afterTestClass, and before/afterTestMethod. Suite and Test are defined in the XML configuration file examined below.
XML Configuration
TestNG uses a fairly simple XML configuration file to define test runs. A simple configuration file might look like this:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Antwerkz Sample Suite">
    <test name="Sample test">
        <classes>
            <class name="com.antwerkz.testing.testng.PersonTest"/>
        </classes>
    </test>
</suite>
This will execute the test and build the report in build/testng. The current configuration results in all successes, of course; however, if the annotation on ignoreMe() is changed to enabled = true, then we'll get one failure. TestNG generates a file in the report directory called testng-failed.xml listing all the failed tests. This file can then be used to run only the failed tests, so a developer can focus on fixing the failures without having to wait for an entire run again. It should be noted that the test ignoreMe() does not show up on the report when enabled = false. As mentioned earlier, TestNG tests can take parameters; parameter values are defined in this same XML file. The tests can be run from ant with a target like the following:
<target name="testng">
    <taskdef name="testng" classname="org.testng.TestNGAntTask" classpathref="test.class.path"/>
    <testng classpathref="test.class.path" outputDir="build/testng" sourcedir="${test.dir}" haltOnfailure="true">
        <xmlfileset dir="etc" includes="testng.xml"/>
    </testng>
</target>
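As for the parameters mentioned earlier, the values are supplied in the suite file itself. A sketch of what that might look like, using TestNG's <parameter> element (the parameter name and value here are hypothetical):

```xml
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Antwerkz Sample Suite">
    <!-- supplies the value for a test method parameter declared with
         @Parameters({"firstName"}) -->
    <parameter name="firstName" value="John"/>
    <test name="Sample test">
        <classes>
            <class name="com.antwerkz.testing.testng.PersonTest"/>
        </classes>
    </test>
</suite>
```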
Initially, I was not all that excited about having an XML file to configure which tests to run, but the reasoning is that a developer should not need to change Java code to change which tests are run. It also affords the nice benefit of being able to run only the failed tests. With this set up, it is also possible to provide different testing configurations for different groups of developers, so that only the tests on the appropriate subsystems are run. The ability to define parameter values is also very handy and would be awkward without an XML configuration file. However, the XML file is optional; the tests can be run via ant, for example, by passing a classfileset to the ant task rather than the xmlfileset.
Reporting
I would be remiss if I didn't take a moment to discuss reporting in TestNG. TestNG supports a pluggable reporting system: it has its own built-in report mechanism but also supports JUnitReport. As I prefer the JUnitReport output over the default TestNG format, this is a great feature for me to have. What's interesting to note is that this compatibility is achieved using the pluggable reporting framework; it would be interesting to see what kinds of integration are possible with continuous integration frameworks using this feature.
Available Annotations
Here is the list of available annotations provided by TestNG:
Annotation | Description |
@Test | Defines a test case to be executed |
@Configuration | Configuration information for a TestNG class. Provides facilities for grouping and parameters. |
@ExpectedExceptions | Like the JTiger annotation, tells TestNG to expect certain exceptions. Takes an array of class definitions. |
@Factory | Marks a factory for the dynamic test generation mentioned earlier |
JUnit 4
Before I finish this article, I would like to take a look at what is coming in JUnit version 4. I didn't spend much time at the beginning talking about JUnit because the framework is so popular that I didn't feel it warranted much discussion, but some of the new features planned for JUnit 4 do warrant a brief look.
To quote the developers, the goal of JUnit 4 is to "encourage more developers to write tests by simplifying JUnit." IDE integration improvements are high on the list as well as simplifying the suites. JUnit 4 will also move to static assert methods like JTiger and TestNG use.
The JUnit developers got a lot of feedback from their users asking for annotations after having seen how useful they were in NUnit (http://www.nunit.org/). As a result, JUnit 4 is moving to an annotation based system. Set up and tear down will be handled with @Before and @After. A developer can use as many of these as desired and inherited @Before and @After annotations will be run before and after (respectively) those on subclasses. Shared set up is handled with @BeforeClass and @AfterClass which is run before and after (respectively) any tests on the class. JUnit 4 will introduce @Ignore like JTiger has rather than go the route of enabling/disabling tests like TestNG does. Tests marked with @Ignore will be reported as such so tests won't get lost.
Test methods in JUnit 4 will no longer be required to start with test; instead, JUnit 4 will introduce the @Test annotation. The @Test annotation will support expected exceptions as well as timeouts, so developers will be able to fail tests that take too long to execute, which will make testing response times much easier. A test annotated with @Test(expected=NullPointerException.class) will fail if that exception is not thrown but will be marked as successful if it is.
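The mechanics behind expected exceptions are easy to sketch with reflection. The code below uses a stand-in @Test annotation (the real one will live in org.junit; the class and method names here are made up) and illustrates the idea rather than JUnit's implementation: the runner invokes the method, catches anything thrown, and compares it against the expected class.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class ExpectedExceptionDemo {
    // Stand-in for JUnit 4's @Test(expected=...).
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Test {
        Class<? extends Throwable> expected();
    }

    public static class Sample {
        @Test(expected = NullPointerException.class)
        public void throwsAsExpected() {
            String s = null;
            s.length();  // throws NullPointerException
        }

        @Test(expected = NullPointerException.class)
        public void throwsNothing() { }
    }

    // Returns true if the test passes: it must throw the expected exception type.
    public static boolean passes(Object fixture, Method m) throws Exception {
        Class<? extends Throwable> expected = m.getAnnotation(Test.class).expected();
        try {
            m.invoke(fixture);
            return false;  // nothing thrown: the test fails
        } catch (InvocationTargetException ite) {
            return expected.isInstance(ite.getCause());
        }
    }

    public static void main(String[] args) throws Exception {
        Sample sample = new Sample();
        System.out.println(passes(sample, Sample.class.getMethod("throwsAsExpected")));
        System.out.println(passes(sample, Sample.class.getMethod("throwsNothing")));
    }
}
```

Compare this with the try/catch/fail idiom shown in the JUnit section: the annotation moves that boilerplate out of every test and into the runner.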
JUnit 4 will introduce test ordering and prioritization as well as categorization and filtering. It will also provide "forward compatibility" for current test runners so that they can run the new tests, as well as backwards compatibility allowing existing tests to run on top of the new framework; this compatibility effort has accounted for half the work on the new version. There's far too much time and effort invested in the current architecture to throw it away. JUnit 4 tests can be run inside old test runners by using a compatibility wrapper class, so the new tests will work in, say, Eclipse until the test runners catch up to the new way of doing things.
JUnit 4 will see a number of other updates:
- a new assert, assertEquals(Object[], Object[]), as well as support for Java's assert keyword and AssertionError
- removal of the distinction between failures and errors
- a new package structure: org.junit
- native logging support
The developers are also targeting 100% code coverage before release.
That's a little of what to expect in JUnit 4. JUnit hasn't had any major steps forward in a while (a testimony to its maturity and robustness), but with Java 5 out, this represents a good chance to shake things up a bit. The presence of other frameworks such as NUnit, JTiger, and TestNG has also provided a good testing ground for other ways to approach testing, and JUnit seems unashamed to borrow these ideas.
Summary
When I started this article, I had never used either JTiger or TestNG, so I have hopefully avoided any biases in evaluating the two. I am, however, a long-time JUnit user, so I was curious what the other options were. Both frameworks provide some intriguing new features over what the current JUnit offers, though JUnit 4 will close that gap considerably. The features that stand out to me include TestNG's ability to run only the failed tests from the last run and JTiger's fine-grained success/failure reporting.
Choosing a framework is a hard call for me; all three have some really nice features. My advice is to try more than one on something more significant than just a couple of classes and see how each feels. Each framework has strengths and weaknesses, and each will fit into your particular environment differently. The point is to pick something and test as much as possible.
The testing space is finally seeing some major action again. It will be interesting to see how things shake out over the next year or so. I'll be especially interested in seeing how the many JUnit-based projects adapt to the new JUnit. With luck, the increasing number of options and approaches to testing will encourage more developers to rigorously test their applications. There are certainly fewer and fewer reasons not to do so.