Summary
By Andy Schneider
Extreme Programming's rise in popularity among the Java community has prompted more development teams to use JUnit: a simple test framework for building and executing unit tests. Like any toolkit, JUnit can be used effectively and ineffectively. In this article, Andy Schneider discusses good and bad ways to use JUnit and provides practical recommendations for its use by development teams. In addition, he explains simple mechanisms to support:
- Automatic construction of composite tests
- Multithreaded test cases
This article assumes some familiarity with JUnit. (4,000 words)
JUnit is a typical toolkit: used with care and with recognition of its idiosyncrasies, it will help you develop good, robust tests. Used blindly, it may produce a pile of spaghetti instead of a test suite. This article presents some guidelines that can help you avoid the pasta nightmare. The guidelines occasionally contradict one another -- this is deliberate. In my experience, there are rarely hard and fast rules in development, and guidelines that claim to be are misleading.
We'll also closely examine two useful additions to the developer's toolkit:
- A mechanism that automatically builds test suites from the classfiles in the system
- A TestCase that better supports tests in multiple threads
When faced with unit testing, many teams end up producing some kind of testing framework. JUnit, available as open source, eliminates this onerous task by providing a ready-made framework for unit testing. JUnit, best used as an integral part of a development testing regime, provides a mechanism that developers can use to consistently write and execute tests. So, what are the JUnit best practices?
Do not use the test-case constructor to set up a test case
Setting up a test case in the constructor is not a good idea. Consider:
public class SomeTest extends TestCase {
   public SomeTest (String testName) {
      super (testName);
      // Perform test setup
   }
}
Imagine that while performing the setup, the setup code throws an IllegalStateException. In response, JUnit would throw an AssertionFailedError, indicating that the test case could not be instantiated. Here is an example of the resulting stack trace:
junit.framework.AssertionFailedError: Cannot instantiate test case: test1
   at junit.framework.Assert.fail(Assert.java:143)
   at junit.framework.TestSuite$1.runTest(TestSuite.java:178)
   at junit.framework.TestCase.runBare(TestCase.java:129)
   at junit.framework.TestResult$1.protect(TestResult.java:100)
   at junit.framework.TestResult.runProtected(TestResult.java:117)
   at junit.framework.TestResult.run(TestResult.java:103)
   at junit.framework.TestCase.run(TestCase.java:120)
   at junit.framework.TestSuite.run(TestSuite.java, Compiled Code)
   at junit.ui.TestRunner$12.run(TestRunner.java:429)
This stack trace proves rather uninformative; it only indicates that the test case could not be instantiated. It doesn't identify the original error's type or where it was thrown, which makes it hard to deduce the exception's underlying cause.
Instead of setting up the data in the constructor, perform test setup by overriding setUp(). Any exception thrown within setUp() is reported correctly. Compare this stack trace with the previous example:
java.lang.IllegalStateException: Oops
   at bp.DTC.setUp(DTC.java:34)
   at junit.framework.TestCase.runBare(TestCase.java:127)
   at junit.framework.TestResult$1.protect(TestResult.java:100)
   at junit.framework.TestResult.runProtected(TestResult.java:117)
   at junit.framework.TestResult.run(TestResult.java:103)
...
This stack trace is much more informative; it shows which exception was thrown (IllegalStateException) and from where. That makes it far easier to explain the test setup's failure.
Don't assume the order in which tests within a test case run
You should not assume that tests will be called in any particular order. Consider the following code segment:
public class SomeTestCase extends TestCase {
public SomeTestCase (String testName) {
super (testName);
}
public void testDoThisFirst () {
...
}
public void testDoThisSecond () {
}
}
In this example, you cannot be certain that JUnit will run these tests in any specific order, because it uses reflection to find them. Running the tests on different platforms and Java VMs may therefore yield different results, unless your tests are designed to run in any order. Avoiding temporal coupling will make the test case more robust, since changes in the order will not affect other tests. If the tests are coupled, the errors that result from a minor update may prove difficult to find.
In situations where ordering tests makes sense -- when it is more efficient for tests to operate on some shared data that establishes a fresh state as each test runs -- use a static suite() method like this one to ensure the ordering:
public static Test suite() {
   TestSuite suite = new TestSuite();
   suite.addTest(new SomeTestCase ("testDoThisFirst"));
   suite.addTest(new SomeTestCase ("testDoThisSecond"));
   return suite;
}
There is no guarantee in the JUnit API documentation as to the order in which your tests will be called, because JUnit employs a Vector to store tests. However, you can expect the above tests to be executed in the order they were added to the test suite.
Avoid writing test cases with side effects
Test cases that have side effects exhibit two problems:
- They can affect data that other test cases rely on
- They may leave the system in a state that requires manual intervention before they can run again
In the first situation, the individual test case may operate correctly.
However, if incorporated into a TestSuite
that runs every test case
on the system, it may cause other test cases to fail. That failure mode can be
difficult to diagnose, and the error may be located far from the test failure.
In the second situation, a test case may have updated some system state so that it cannot run again without manual intervention, such as deleting test data from the database. Think carefully before introducing such requirements. First, the manual intervention will need to be documented. Second, the tests can no longer run unattended, removing your ability to run them overnight or as part of some automated periodic test run.
Call a superclass's setUp() and tearDown() methods when subclassing
Consider:
public class SomeTestCase extends AnotherTestCase {
   // A connection to a database
   private Database theDatabase;
   public SomeTestCase (String testName) {
      super (testName);
   }
   public void testFeatureX () {
      ...
   }
   public void setUp () {
      // Clear out the database
      theDatabase.clear ();
   }
}
Can you spot the deliberate mistake? setUp() should call super.setUp() to ensure that the environment defined in AnotherTestCase initializes. Of course, there are exceptions: if you design the base class to work with arbitrary test data, there won't be a problem.
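The pattern can be sketched without JUnit; the class names echo the example above, but the flag and demo class are illustrative only:

```java
// Hypothetical base fixture that prepares a shared environment.
class AnotherTestCase {
    boolean baseInitialized = false;

    protected void setUp() {
        baseInitialized = true; // environment defined by the base class
    }
}

// The subclass's setUp() must invoke super.setUp() before its own work.
class SomeTestCase extends AnotherTestCase {
    protected void setUp() {
        super.setUp();          // without this, baseInitialized stays false
        // subclass-specific setup goes here
    }
}

public class SetUpChainDemo {
    public static void main(String[] args) {
        SomeTestCase test = new SomeTestCase();
        test.setUp();
        System.out.println(test.baseInitialized); // prints "true"
    }
}
```

Dropping the super.setUp() call would leave baseInitialized false, silently skipping the base class's environment setup.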
Do not load data from hard-coded locations on a filesystem
Tests often need to load data from some location in the filesystem. Consider the following:
public void setUp () {
   FileInputStream inp = new FileInputStream ("C:\\TestData\\dataSet1.dat");
   ...
}
The code above relies on the data set being in the C:\TestData path. That assumption is incorrect in two situations:
- A tester has no room on C: and stores the test data on another disk
- The tests run on another platform, such as Unix
One solution might be:
public void setUp () {
   FileInputStream inp = new FileInputStream ("dataSet1.dat");
   ...
}
However, that solution depends on the test running from the same directory as the test data. If several different test cases assume this, it is difficult to integrate them into one test suite without continually changing the current directory.
To solve the problem, access the dataset using either Class.getResource() or Class.getResourceAsStream(). Using them, however, means that resources load from a location relative to the class's origin.
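For example, a setUp() could obtain its data from getResourceAsStream() instead of a FileInputStream. The sketch below demonstrates the mechanism by loading a resource guaranteed to be on the classpath -- the class's own classfile; in a real test you would instead place a data file such as dataSet1.dat alongside the classes:

```java
import java.io.InputStream;

public class ResourceLoadDemo {
    public static void main(String[] args) throws Exception {
        // A name with no leading '/' is resolved relative to this class's
        // package; a leading '/' is resolved from the classpath root.
        // Here we load this class's own classfile, which is always present
        // wherever the class itself was loaded from (directory or jar).
        InputStream inp =
            ResourceLoadDemo.class.getResourceAsStream("ResourceLoadDemo.class");
        System.out.println(inp != null); // true when found on the classpath
        if (inp != null) inp.close();
    }
}
```

Because the lookup is relative to the class's origin, the test works unchanged whether the classes live in a directory tree or a jar, on Windows or Unix.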
Test data should, if possible, be stored with the source code in a configuration management (CM) system. However, if you're using the aforementioned resource mechanism, you'll need to write a script that moves all the test data from the CM system into the classpath of the system under test. A less ungainly approach is to store the test data in the source tree along with the source files. With this approach, you need a location-independent mechanism to locate the test data within the source tree. One such mechanism is the class itself. If a class can be mapped to a specific source directory, you could write code like this:
InputStream inp = SourceResourceLoader.getResourceAsStream
(this.getClass (), "dataSet1.dat");
Now you must only determine how to map from a class to the directory that contains the relevant source file. You can identify the root of the source tree (assuming it has a single root) by a system property. The class's package name can then identify the directory where the source file lies. The resource loads from that directory. For Unix and NT, the mapping is straightforward: replace every instance of '.' with File.separatorChar.
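A minimal sketch of that mapping (the class name SourcePathMapper is hypothetical, not part of the article's code):

```java
import java.io.File;

public class SourcePathMapper {
    /**
     * Map a class to the directory that holds its source file, given the
     * root of the source tree. The package name becomes the directory path
     * by replacing every '.' with the platform's separator character.
     */
    public static File sourceDirFor(Class c, File sourceRoot) {
        Package pkg = c.getPackage();
        String packageName = (pkg == null) ? "" : pkg.getName();
        String relative = packageName.replace('.', File.separatorChar);
        return new File(sourceRoot, relative);
    }

    public static void main(String[] args) {
        File dir = sourceDirFor(java.util.List.class, new File("src"));
        System.out.println(dir.getPath()); // e.g. "src/java/util" on Unix
    }
}
```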
Keep tests in the same location as the source code
If the test source is kept in the same location as the
tested classes, both test and class will compile during a build. This forces you
to keep the tests and classes synchronized during development. Indeed, unit
tests not considered part of the normal build quickly become dated and useless.
Name tests properly
Name the test case TestClassUnderTest. For example, the test case for the class MessageLog should be TestMessageLog. That makes it simple to work out what class a test case tests. Test methods' names within the test case should describe what they test:
testLoggingEmptyMessage()
testLoggingNullMessage()
testLoggingWarningMessage()
testLoggingErrorMessage()
Proper naming helps code readers understand each test's purpose.
Ensure that tests are time-independent
Where possible, avoid using data that may expire; such data should be either manually or programmatically refreshed. It is often simpler to instrument the class under test with a mechanism for changing its notion of today. The test can then operate in a time-independent manner without having to refresh the data.
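One common way to instrument the class under test is to route every date lookup through an overridable hook method, which a test can then fix to a known instant. The class below is a hypothetical example of the pattern, not code from the article:

```java
import java.util.Date;

// Hypothetical class under test: decides whether a record is too old.
class ExpiryChecker {
    private final long maxAgeMillis;

    ExpiryChecker(long maxAgeMillis) { this.maxAgeMillis = maxAgeMillis; }

    // All "what time is it?" questions go through this overridable hook.
    protected Date now() { return new Date(); }

    boolean isExpired(Date created) {
        return now().getTime() - created.getTime() > maxAgeMillis;
    }
}

public class FixedClockDemo {
    public static void main(String[] args) {
        // The test fixes "today" so the test data never goes stale.
        ExpiryChecker checker = new ExpiryChecker(1000) {
            protected Date now() { return new Date(5000); } // frozen clock
        };
        System.out.println(checker.isExpired(new Date(1000))); // prints "true"
        System.out.println(checker.isExpired(new Date(4500))); // prints "false"
    }
}
```

Because the test controls the clock, it produces the same result today, next month, and in five years.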
Consider locale when writing tests
Consider a test that uses dates. One approach to creating dates would be:
Date date = DateFormat.getInstance ().parse ("dd/mm/yyyy");
Unfortunately, that code doesn't work on a machine with a different locale. Therefore, it would be far better to write:
Calendar cal = Calendar.getInstance ();
cal.set (yyyy, mm - 1, dd);
Date date = cal.getTime ();
The second approach is far more resilient to locale changes.
Utilize JUnit's assert/fail methods and exception handling for clean test code
Many JUnit novices make the mistake of generating elaborate try and catch blocks to catch unexpected exceptions and flag a test failure. Here is a trivial example:
public void exampleTest () {
   try {
      // do some test
   }
   catch (SomeApplicationException e) {
      fail ("Caught SomeApplicationException exception");
   }
}
JUnit automatically catches exceptions. It considers uncaught exceptions to be errors, which means the above example has redundant code in it.
Here's a far simpler way to achieve the same result:
public void exampleTest () throws SomeApplicationException {
// do some test
}
In this example, the redundant code has been removed, making the test easier to read and maintain (since there is less code).
Use the wide variety of assert methods to express your intention in a simpler fashion. Instead of writing:
assert (creds == 3);
Write:
assertEquals ("The number of credentials should be 3", 3, creds);
The above example is much more useful to a code reader. And if the assertion fails, it provides the tester with more information. JUnit also supports floating point comparisons:
assertEquals ("some message", expected, result, delta);
When you compare floating point numbers, this useful function saves you from repeatedly writing code to compute the difference between the result and the expected value.
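The delta matters because floating point arithmetic accumulates rounding error, so an exact comparison often fails even when the computation is correct:

```java
public class DeltaCompareDemo {
    public static void main(String[] args) {
        double expected = 0.3;
        double result = 0.1 + 0.2;              // accumulates rounding error
        System.out.println(result == expected); // prints "false"

        // Comparison within a tolerance, as assertEquals with a delta does:
        double delta = 1e-9;
        System.out.println(Math.abs(expected - result) <= delta); // prints "true"
    }
}
```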
Use assertSame() to test that two references point to the same object. Use assertEquals() to test that two objects are equal.
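The distinction mirrors Java's == (identity) versus equals() (equality):

```java
public class SameVsEqualsDemo {
    public static void main(String[] args) {
        String a = new String("credentials");
        String b = new String("credentials");

        // Two distinct objects with equal contents:
        System.out.println(a.equals(b)); // prints "true"  -- assertEquals would pass
        System.out.println(a == b);      // prints "false" -- assertSame would fail

        // The same reference twice:
        String c = a;
        System.out.println(a == c);      // prints "true"  -- assertSame would pass
    }
}
```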
Document tests in javadoc
Test plans documented in a word processor tend to be error-prone and tedious to create. Also, word-processor-based documentation must be kept synchronized with the unit tests, adding another layer of complexity to the process. If possible, a better solution is to include the test plans in the tests' javadoc, ensuring that all test plan data reside in one place.
Avoid visual inspection
Testing servlets, user interfaces, and other systems that produce complex output is often left to visual inspection. Visual inspection -- a human inspecting output data for errors -- requires patience, the ability to process large quantities of information, and great attention to detail: attributes not often found in the average human being. Below are some basic techniques that will help reduce the visual inspection component of your test cycle.
Swing
When testing a Swing-based UI, you can write tests that programmatically verify the interface's structure and behavior -- for example, that components are wired up correctly and that the layout behaves as expected.
A more thorough treatment of this can be found in the worked example of testing a GUI, referenced in the Resources section.
XML
When testing classes that process XML, it pays to write a routine that compares two XML DOMs for equality. You can then programmatically define the correct DOM in advance and compare it with the actual output from your processing methods.
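As a sketch of such a routine, here is a recursive DOM comparison using only the JDK's built-in XML parser. The class name DomComparator and its equality rules (node name, value, attributes, and children) are my own choices, not from the article:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class DomComparator {
    /** Recursively compare two DOM nodes: type, name, value, attributes, children. */
    public static boolean sameDom(Node a, Node b) {
        if (a.getNodeType() != b.getNodeType()) return false;
        if (!eq(a.getNodeName(), b.getNodeName())) return false;
        if (!eq(a.getNodeValue(), b.getNodeValue())) return false;
        // Compare attributes by name, order-independently.
        NamedNodeMap aa = a.getAttributes(), ab = b.getAttributes();
        int la = (aa == null) ? 0 : aa.getLength();
        int lb = (ab == null) ? 0 : ab.getLength();
        if (la != lb) return false;
        for (int i = 0; i < la; i++) {
            Node attr = aa.item(i);
            Node match = ab.getNamedItem(attr.getNodeName());
            if (match == null || !eq(attr.getNodeValue(), match.getNodeValue())) return false;
        }
        // Compare child nodes in order.
        NodeList ca = a.getChildNodes(), cb = b.getChildNodes();
        if (ca.getLength() != cb.getLength()) return false;
        for (int i = 0; i < ca.getLength(); i++) {
            if (!sameDom(ca.item(i), cb.item(i))) return false;
        }
        return true;
    }

    private static boolean eq(String x, String y) {
        return (x == null) ? (y == null) : x.equals(y);
    }

    static Element parse(String xml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        return builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                      .getDocumentElement();
    }

    public static void main(String[] args) throws Exception {
        Element expected = parse("<log><entry level='warn'>msg</entry></log>");
        Element actual   = parse("<log><entry level='warn'>msg</entry></log>");
        Element other    = parse("<log><entry level='error'>msg</entry></log>");
        System.out.println(sameDom(expected, actual)); // prints "true"
        System.out.println(sameDom(expected, other));  // prints "false"
    }
}
```

A production version would likely also normalize whitespace text nodes before comparing; this sketch assumes the expected and actual documents are built the same way.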
Servlets
With servlets, a couple of approaches can work. You can write a dummy servlet framework and preconfigure it during a test. The framework must contain derivations of classes found in the normal servlet environment. These derivations should allow you to preconfigure their responses to method calls from the servlet.
For example:
- HttpServletRequest can be subclassed to allow the test class to specify the header, method, path info, and other request data
- HttpServletResponse can be subclassed to return an output stream that stores the servlet's responses in a string for later checking
A simpler solution is to use HttpUnit to test your servlets. HttpUnit provides a DOM view of a request's results, which makes it relatively simple to compare actual data with expected results.
You can avoid visual inspection in many ways. However, sometimes it is more cost-effective to use visual inspection or a more specialized testing tool. For example, testing a UI's dynamic behavior within JUnit is complicated, but possible. It may be a better idea to purchase one of the many UI record/playback testing tools available, or to perform some visual inspection as part of testing. However, that doesn't mean the general rule -- don't visually inspect -- should be ignored.
Keep tests small and fast
Executing every test for the entire system shouldn't take hours. Indeed, developers will more consistently run tests that execute quickly. Without regularly running the full set of tests, it will be difficult to validate the entire system when changes are made. Errors will start to creep back in, and the benefits of unit testing will be lost. This means stress tests and load tests for single classes or small frameworks of classes shouldn't be run as part of the unit test suite; they should be executed separately.
Use the reflection-driven JUnit API
Allowing TestSuite to populate itself with test cases using reflection reduces maintenance time. Reflection ensures that you don't need to update the suite() implementation whenever a new test is added.
Build a test case for the entire system
It is important to build a test case for the entire system.
If one test case exercises the whole system, then developers can test the impact
their changes will have on every class in the system. This increases the chance
of errors resulting from unanticipated side effects being caught earlier.
Without a universal test case, developers tend to test only the class they have
modified. Also, running all the tests for the system becomes a painstaking
manual process.
If we built a test case for the entire system, it would consist of all the other test cases, already defined. The test case would define the suite() method, which would add all test cases defined in the system to a TestSuite. This test suite would then be returned from the suite() method. If you had many test cases, building such a test suite would be time-consuming. In addition, you would have to update the universal test case when new test cases were added or existing test cases were renamed or deleted. Instead of manually building and maintaining the test suite, build a test case that automatically builds a TestSuite from all of your system's test cases. Here is an outline of the requirements for such a test case:
- It must find all the test cases in the system
- It must distinguish between different types of tests (unit tests, load tests, stress tests, and so on)
- It must exclude TestCases that are meant to be subclassed, not directly executed
We can use the Java type system to determine what sort of test a test case represents. We could have test cases extend classes like UnitTest, StressTest, LoadTest, and so on. However, this would make test case classes difficult to reuse between test types, because the test-type decision is made near the root of the inheritance hierarchy; it should be made at each leaf instead. As an alternative, we can distinguish tests using a field: public static final String TEST_ALL_TEST_TYPE. Test cases will be loaded if they declare this field with a value matching a string that the automatic test case has been configured with. To build this, we'll implement three classes:
- ClassFinder recursively searches a directory tree for classfiles. Each classfile is loaded and the class's full class name is extracted. That class name is added to a list for later loading.
- TestCaseLoader loads each class in the list found by ClassFinder and determines if it is a test case. If it is, it is added to a list.
- TestAll is a subclass of TestCase with an implementation of suite() that loads the set of test cases found by TestCaseLoader.
Let's look at each class in turn.
ClassFinder
ClassFinder locates the classes within the system to be tested. It is constructed with the directory that holds the system's classes. ClassFinder then finds all the classes in the directory tree and stores them for later use. The first part of ClassFinder's implementation is below:
public class ClassFinder {
   // The cumulative list of classes found.
   final private Vector classNameList = new Vector ();

   /**
    * Find all classes stored in classfiles under classPathRoot.
    * Inner classes are not supported.
    */
   public ClassFinder (final File classPathRoot) throws IOException {
      findAndStoreTestClasses (classPathRoot);
   }

   /**
    * Recursive method that adds the names of all classes stored in
    * classfiles it finds in currentDirectory (and below).
    */
   private void findAndStoreTestClasses (final File currentDirectory) throws IOException {
      String files[] = currentDirectory.list ();
      for (int i = 0; i < files.length; i++) {
         File file = new File (currentDirectory, files[i]);
         String fileBase = file.getName ();
         int idx = fileBase.indexOf (".class");
         final int CLASS_EXTENSION_LENGTH = 6;
         if (idx != -1 && (fileBase.length () - idx) == CLASS_EXTENSION_LENGTH) {
In the code above, we iterate over all the files in a directory. If a filename has a ".class" extension, we determine the fully qualified class name of the class stored in the classfile, as seen here:
            JcfClassInputStream inputStream = new JcfClassInputStream (new FileInputStream (file));
            JcfClassFile classFile = new JcfClassFile (inputStream);
            System.out.println ("Processing: " + classFile.getFullName ().replace ('/', '.'));
            classNameList.add (classFile.getFullName ().replace ('/', '.'));
This code uses the JCF package to load the classfile and determine the name of the class stored within it. The JCF package is a set of utility classes for loading and examining classfiles. (See Resources for more information.) The JCF package allows us to find each class's full class name. We could infer the class name from the directory name, but that doesn't work well for build systems that don't store classes according to this structure. Nor does it work for inner classes.
Lastly, we check to see if the file is actually a directory. (See the code snippet below.) If it is, we recurse into it. This allows us to discover all the classes in a directory tree:
         } else if (file.isDirectory ()) {
            findAndStoreTestClasses (file);
         }
      }
   }

   /**
    * Return an iterator over the collection of class names (Strings).
    */
   public Iterator getClasses () {
      return classNameList.iterator ();
   }
}
TestCaseLoader
TestCaseLoader finds the test cases among the class names from ClassFinder. This code snippet shows the top-level method for adding a class that represents a TestCase to the list of test cases:
public class TestCaseLoader {
   final private Vector classList = new Vector ();
   final private String requiredType;

   /**
    * Adds testCaseClass to the list of classes
    * if the class is a test case we wish to load. Calls
    * shouldAddTestCase () to determine that.
    */
   private void addClassIfTestCase (final Class testCaseClass) {
      if (shouldAddTestCase (testCaseClass)) {
         classList.add (testCaseClass);
      }
   }

   /**
    * Determine if we should load this test case. Calls
    * isATestCaseOfTheCorrectType () to determine if the
    * test case should be added to the class list.
    */
   private boolean shouldAddTestCase (final Class testCaseClass) {
      return isATestCaseOfTheCorrectType (testCaseClass);
   }
You'll find the meat of the class in the isATestCaseOfTheCorrectType() method, listed below. For each class being considered, it:
- Checks whether the class derives from TestCase. If not, it is not a test case.
- Checks whether the class's public static final field TEST_ALL_TEST_TYPE has a value matching that specified in the member field requiredType.
Here's the code:
private boolean isATestCaseOfTheCorrectType (final Class testCaseClass) {
   boolean isOfTheCorrectType = false;
   if (TestCase.class.isAssignableFrom (testCaseClass)) {
      try {
         Field testAllIgnoreThisField = testCaseClass.getDeclaredField ("TEST_ALL_TEST_TYPE");
         final int EXPECTED_MODIFIERS = Modifier.STATIC | Modifier.PUBLIC | Modifier.FINAL;
         if (((testAllIgnoreThisField.getModifiers () & EXPECTED_MODIFIERS) != EXPECTED_MODIFIERS) ||
             (testAllIgnoreThisField.getType () != String.class)) {
            throw new IllegalArgumentException ("TEST_ALL_TEST_TYPE should be a public static final String");
         }
         String testType = (String)testAllIgnoreThisField.get (testCaseClass);
         isOfTheCorrectType = requiredType.equals (testType);
      } catch (NoSuchFieldException e) {
         // Not tagged for automatic loading; ignore it.
      } catch (IllegalAccessException e) {
         throw new IllegalArgumentException ("The field " + testCaseClass.getName () +
            ".TEST_ALL_TEST_TYPE is not accessible.");
      }
   }
   return isOfTheCorrectType;
}
Next, the loadTestCases() method examines each class name. It loads the class (if it can be loaded); if the class is a test case of the required type, the method adds the class to its list of test cases:
public void loadTestCases (final Iterator classNamesIterator) {
   while (classNamesIterator.hasNext ()) {
      String className = (String)classNamesIterator.next ();
      try {
         Class candidateClass = Class.forName (className);
         addClassIfTestCase (candidateClass);
      }
      catch (ClassNotFoundException e) {
         System.err.println ("Cannot load class: " + className);
      }
   }
}

/**
 * Construct this instance. Only test cases whose TEST_ALL_TEST_TYPE
 * field matches requiredType will be loaded.
 * @param requiredType The type of test cases to load
 */
public TestCaseLoader (final String requiredType) {
   if (requiredType == null) throw new IllegalArgumentException ("requiredType is null");
   this.requiredType = requiredType;
}

/**
 * Obtain an iterator over the collection of test case classes loaded by loadTestCases.
 */
public Iterator getClasses () {
   return classList.iterator ();
}
TestAll
TestAll pulls everything together. It uses the aforementioned classes to build a list of test cases defined in the system. It adds those test cases to a TestSuite and returns the TestSuite as part of its implementation of the suite() method. The result: a test case that automatically extracts every defined test case in the system, ready for execution by JUnit.
public class TestAll extends TestCase {
The addAllTests() method iterates over the classes loaded by the TestCaseLoader and adds them to the test suite:
private static int addAllTests (final TestSuite suite, final Iterator classIterator)
      throws java.io.IOException {
   int testClassCount = 0;
   while (classIterator.hasNext ()) {
      Class testCaseClass = (Class)classIterator.next ();
      suite.addTest (new TestSuite (testCaseClass));
      System.out.println ("Loaded test case: " + testCaseClass.getName ());
      testClassCount++;
   }
   return testClassCount;
}
With suite(), the test cases are added to the TestSuite, then returned to JUnit for execution. It obtains, from the system property "class_root", the directory where the classes are stored. It obtains, from the system property "test_type", the type of test cases to load. It uses the ClassFinder to find all the classes, and the TestCaseLoader to load all the appropriate test cases. It then adds these to a new TestSuite:
public static Test suite () throws Throwable {
   try {
      String classRootString = System.getProperty ("class_root");
      if (classRootString == null)
         throw new IllegalArgumentException ("System property class_root must be set.");
      String testType = System.getProperty ("test_type");
      if (testType == null)
         throw new IllegalArgumentException ("System property test_type must be set.");
      File classRoot = new File (classRootString);
      ClassFinder classFinder = new ClassFinder (classRoot);
      TestCaseLoader testCaseLoader = new TestCaseLoader (testType);
      testCaseLoader.loadTestCases (classFinder.getClasses ());
      TestSuite suite = new TestSuite ();
      int numberOfTests = addAllTests (suite, testCaseLoader.getClasses ());
      System.out.println ("Number of test classes found: " + numberOfTests);
      return suite;
   } catch (Throwable t) {
      // This ensures we have extra information. Otherwise we get a
      // "Could not invoke the suite method." message.
      t.printStackTrace ();
      throw t;
   }
}
/**
 * Basic constructor - called by the test runners.
 */
public TestAll (String s) {
   super (s);
}
}
To test an entire system using these classes, execute the following command (in a Windows command shell):
java -cp C:\project\classes;C:\junit3.2\junit.jar;C:\jcf\jcfutils.zip -Dclass_root=C:\project\classes -Dtest_type=UNIT junit.ui.TestRunner bp.TestAll
This command loads and runs all test cases of type UNIT whose classes are stored under C:\project\classes.
Test thread safety
You'll want to guarantee the status of supposedly thread-safe classes by testing them. Such tests prove difficult using JUnit 3.2's existing set of facilities. You can use junit.extensions.ActiveTest to run a test case in a different thread. However, TestSuite assumes that a test case is complete when it returns from run(); with junit.extensions.ActiveTest, it is not. We could work hard to define a properly working ActiveTestSuite; instead, let's look at a simpler solution: MultiThreadedTestCase. First, I'll show how MultiThreadedTestCase assists with multithreaded testing. Then I'll show how MultiThreadedTestCase is implemented.
To use MultiThreadedTestCase, we implement the standard elements of a TestCase, but we derive from MultiThreadedTestCase. The standard elements are the class declaration, the constructor, and, since we're using TestAll, the definition of the test type:
public class MTTest extends MultiThreadedTestCase {
   /**
    * Basic constructor - called by the test runners.
    */
   public MTTest (String s) {
      super (s);
   }
   public static final String TEST_ALL_TEST_TYPE = "UNIT";
A multithreaded test case needs to spawn a number of threads that perform some operation. We need to start those threads, wait until they've executed, and then return the results to JUnit -- all done in the code below. The code is trivial; in practice, this code would spawn multiple threads that performed different operations on the class under test. After each operation the class invariants and post-conditions would be tested to ensure that the class was behaving properly.
public void testMTExample () {
   // Create 100 threads containing the test case.
   TestCaseRunnable tct [] = new TestCaseRunnable [100];
   for (int i = 0; i < tct.length; i++) {
      tct[i] = new TestCaseRunnable () {
         public void runTestCase () {
            assert (true);
         }
      };
   }
   // Run the 100 threads, wait for them to complete,
   // and return the results to JUnit.
   runTestCaseRunnables (tct);
}
}
Now that I've shown how to use MultiThreadedTestCase, I'll examine the implementation. First, we declare the class and add an array where the running threads will be stored:
public class MultiThreadedTestCase extends TestCase {
   /**
    * The threads that are executing.
    */
   private Thread threads[] = null;
The field testResult, seen below, holds the TestResult that the test case's run() will be passed. We override run() so we can store the TestResult for later population by the test threads:
/**
 * The test's TestResult.
 */
private TestResult testResult = null;

/**
 * Simple constructor.
 */
public MultiThreadedTestCase (final String s) {
   super (s);
}

/**
 * Override run so we can save the test result.
 */
public void run (final TestResult result) {
   testResult = result;
   super.run (result);
   testResult = null;
}
runTestCaseRunnables() runs each TestCaseRunnable in a separate thread. All the threads are created and then started at the same time. The method waits until every thread has finished and then returns:
protected void runTestCaseRunnables (final TestCaseRunnable[] runnables) {
   if (runnables == null) {
      throw new IllegalArgumentException ("runnables is null");
   }
   threads = new Thread[runnables.length];
   for (int i = 0; i < threads.length; i++) {
      threads[i] = new Thread (runnables[i]);
   }
   for (int i = 0; i < threads.length; i++) {
      threads[i].start ();
   }
   try {
      for (int i = 0; i < threads.length; i++) {
         threads[i].join ();
      }
   }
   catch (InterruptedException ignore) {
      System.out.println ("Thread join interrupted.");
   }
   threads = null;
}
Exceptions caught in the test threads must be propagated into the testResult instance we saved from the run() method. handleException(), below, does just that:
/**
 * Handle an exception. Since the test threads' exceptions won't be
 * caught by JUnit, the threads must catch them manually and call
 * handleException().
 * @param t Exception to handle.
 */
private void handleException (final Throwable t) {
   synchronized (testResult) {
      if (t instanceof AssertionFailedError) {
         testResult.addFailure (this, (AssertionFailedError)t);
      }
      else {
         testResult.addError (this, t);
      }
   }
}
Finally, we define the class that each test thread extends. The purpose of this class is to provide an environment (runTestCase()) where thrown exceptions will be caught and passed to JUnit. The implementation of this class is:
/**
 * A test case thread. Override runTestCase () and define
 * the behaviour of the test in there.
 */
protected abstract class TestCaseRunnable implements Runnable {
   /**
    * Override this to define the test.
    */
   public abstract void runTestCase () throws Throwable;

   /**
    * Run the test in an environment where we can handle
    * the exceptions generated by the test method.
    */
   public void run () {
      try {
         runTestCase ();
      }
      catch (Throwable t) {
         // Handle the exception, then interrupt the other threads.
         handleException (t);
         interruptThreads ();
      }
   }
}
}
The implementation above helps to develop multithreaded test cases. It handles exceptions thrown in the multiple testing threads and passes them back to JUnit. JUnit only sees a test case that behaves like a single-threaded test. The unit test developer can extend that test case to develop multithreaded tests, without spending much time developing thread-handling code.
Conclusion
Using JUnit to develop robust tests takes some practice (as does writing the tests themselves). This article contains a number of techniques for improving your tests' usefulness, ranging from avoiding basic mistakes (such as failing to use setUp()) to design-level issues (such as avoiding intertest coupling). I've covered some basic ideas to help you use JUnit to test parts of your UI or Web application. I've also shown how to build an automated test suite that removes the overhead of maintaining hand-coded test suites, and a mechanism that reduces the effort of developing multithreaded JUnit test cases.
JUnit is an excellent framework for unit-testing Java applications. One final thought: If you just started using JUnit to produce unit tests, stick at it. For the first few weeks, you may not see any real reward for your labors. In fact, you may feel that the whole process slows you down. However, after a few weeks, you'll begin to enhance existing code. Then you'll run your tests, pick up new bugs, and fix them. You'll be far more confident in your code base and you will see the value of unit testing.
About the author
Andy Schneider is a technical architect for BJSS. He has been using object technology since 1988 to build both large- and small-scale systems. Schneider has been using xUnit in projects for over 18 months. His interests include distributed architectures and development processes.
(c) Copyright 2000 ITworld.com, Inc., an IDG Communications company