Category Archives: Testing

The Java lowercase conversion surprise in Turkey

You can convert Strings to lowercase in Java with String.toLowerCase().

The following code will work anywhere in the world except Turkey:

import org.junit.Assert;
import org.junit.Test;

public class LocaleTest {

  @Test
  public void test() {
    Assert.assertEquals("title", "TITLE".toLowerCase());
  }

}

In Turkey, this will give you:

org.junit.ComparisonFailure: expected:<t[i]tle> but was:<t[ı]tle>
	at org.junit.Assert.assertEquals(Assert.java:115)
	at org.junit.Assert.assertEquals(Assert.java:144)
	at com.garygregory.util.LocaleTest.testBug2(LocaleTest.java:23)
	... 24 more

Why is that?

Under the covers, the call to toLowerCase() becomes toLowerCase(Locale.getDefault()), and of course getDefault() changes depending on your country. So what really happens in Turkey is toLowerCase(Locale.forLanguageTag("tr")).

In the Turkish locale, the Unicode LATIN CAPITAL LETTER I becomes a LATIN SMALL LETTER DOTLESS I. That’s not a lowercase “i”.

If you index maps with String keys that include the letter I and you normalize keys to lowercase, your program will have a bug when running on the Turkish locale.
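
For example, here is a minimal sketch of that failure mode (the class name is made up, and it assumes the default locale is Turkish):

import java.util.HashMap;
import java.util.Map;

public class LowerCaseKeyBug {
    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<String, String>();
        // Under a Turkish default locale, this key becomes "tıtle" (with a dotless i).
        settings.put("TITLE".toLowerCase(), "My App");
        // The lookup below then misses.
        System.out.println(settings.get("title")); // null in Turkey, "My App" elsewhere
    }
}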

How are you supposed to deal with that?! Well, luckily, you can use a Locale that does not perform such conversions, namely Locale.ENGLISH or, more generically, Locale.ROOT.

Your defensively written code becomes one of:

Assert.assertEquals("title", "TITLE".toLowerCase(Locale.ENGLISH));
Assert.assertEquals("title", "TITLE".toLowerCase(Locale.ROOT));

If you write your code “in English”, Locale.ENGLISH is fine, but consider Locale.ROOT for more general support.

Java defines Locale.ROOT as:

/**
 * Useful constant for the root locale. The root locale is the locale whose
 * language, country, and variant are empty ("") strings. This is regarded
 * as the base locale of all locales, and is used as the language/country
 * neutral locale for the locale sensitive operations.
 *
 * @since 1.6
 */
static public final Locale ROOT = createConstant("", "");

As a reminder, Java provides localization support with the java.util.Locale class, from the Javadoc:

A Locale object represents a specific geographical, political, or cultural region. An operation that requires a Locale to perform its task is called locale-sensitive and uses the Locale to tailor information for the user. For example, displaying a number is a locale-sensitive operation— the number should be formatted according to the customs and conventions of the user’s native country, region, or culture.

The Locale class implements IETF BCP 47 which is composed of RFC 4647 “Matching of Language Tags” and RFC 5646 “Tags for Identifying Languages” with support for the LDML (UTS#35, “Unicode Locale Data Markup Language”) BCP 47-compatible extensions for locale data exchange.

For further experimentation, the following Java system properties affect how Java initializes the default locale (a small example follows the list):

  • user.language
  • user.region
  • user.script
  • user.country
  • user.variant
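
If you do not want to change the JVM launch options, you can also set the default locale programmatically to experiment, as in this small sketch (the class name is made up):

import java.util.Locale;

public class TurkishLocaleDemo {
    public static void main(String[] args) {
        // Roughly equivalent to launching the JVM with -Duser.language=tr -Duser.country=TR
        Locale.setDefault(Locale.forLanguageTag("tr-TR"));
        System.out.println("TITLE".toLowerCase());            // tıtle (dotless i)
        System.out.println("TITLE".toLowerCase(Locale.ROOT)); // title
    }
}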

Here is the full source code for the test:

package com.garygregory.util;

import java.util.Locale;

import org.junit.Assert;
import org.junit.Assume;
import org.junit.ComparisonFailure;
import org.junit.Test;

public class LocaleTest {

  private static final Locale TURKISH = Locale.forLanguageTag("tr");

  @Test
  public void testBug1() {
    Assume.assumeTrue(!TURKISH.getLanguage().equals(Locale.getDefault().getLanguage()));
    Assert.assertEquals("title", "TITLE".toLowerCase());
    Assert.assertEquals("title", "TITLE".toLowerCase(Locale.getDefault()));
  }

  @Test(expected = ComparisonFailure.class)
  public void testBug2() {
    Assert.assertEquals("title", "TITLE".toLowerCase(TURKISH));
  }

  @Test
  public void testFix() {
    Assert.assertEquals("title", "TITLE".toLowerCase(Locale.ENGLISH));
    Assert.assertEquals("title", "TITLE".toLowerCase(Locale.ROOT));
  }

}

In summary, use toLowerCase(Locale.ROOT) instead of toLowerCase() for internal maps.

Happy Coding,
Gary Gregory

The Art of Test Driven Development: Understanding Logging

Modern logging frameworks include the concepts of levels, loggers, markers, and filters. How and when should you use each? This is what I want to clarify, based on discussions recently taking place on the Log4j user mailing list.

While logging is obviously a Martha Stewart-like “good thing”, what, when, and how to log might not be obvious to some. The how can be quite tricky, or not even possible with older frameworks. This article will clarify basic and advanced logging concepts using Java and Apache Log4j 2 as our reference platform.

Log4j 2 is the best Java logging library out there, I claim. Yes, better than Slf4j, Logback, and without a doubt, JUL. Granted, I am partial, as one of its contributors.

Central to any logging system is the log event: this is the piece of information you are sending to the log system, like “hello, I just started running.” Let’s start with the simplest concept: the log event’s level.

Understanding Log Levels

I think of levels as “how important is this event” and “who needs to know this”. Some of the art of logging is deciding who your audience is. To help your development team chase down a bug, you want to make sure that your app logs interesting events at the DEBUG and TRACE levels. This is different from “what it is I am telling my audience”, which is where loggers come in.

To help you rate the importance of log events, Log4j provides the following levels: FATAL, ERROR, WARN, INFO, DEBUG, and TRACE. When I enable a level, Log4j logs events at that level and at all levels above it. For example, enabling WARN will show me WARN through FATAL events, but not INFO, DEBUG, or TRACE. In addition, Log4j bookends these levels with two special levels: OFF and ALL.

Let’s look at logging events in an authentication module. A user successfully logging in is normal and logs an INFO event. The module rejects a user after he enters the wrong password and logs a WARN event. After three unsuccessful attempts, the module locks out this user, and logs an ERROR event:

INFO : User alice logged in.
WARN : User bob entered an invalid password.
ERROR: User bob entered an invalid password three times, user locked out.

Then from a different part of the app, the email agent, for example:

DEBUG: Loading user bob from database jdbc:...
INFO : Emailed user bob: three login attempts rejected, user locked out.
ERROR: Email to alice bounced back; subject: Your weekly summary.

Depending on who does the logging, the meaning of what is normal vs. an error can be very different.
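
In code, the authentication module from the first example might look roughly like this (a sketch; the class and method names are made up):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AuthenticationModule {

    private static final Logger logger = LogManager.getLogger(AuthenticationModule.class);

    public void recordLoginAttempt(String user, boolean success, int failedAttempts) {
        if (success) {
            logger.info("User {} logged in.", user);
        } else if (failedAttempts < 3) {
            logger.warn("User {} entered an invalid password.", user);
        } else {
            logger.error("User {} entered an invalid password three times, user locked out.", user);
        }
    }
}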

To wrap up this section, I usually configure logging when I start the application, through a configuration file; for example, telling Log4j to log all events at the WARN level and above. This lets me see only non-mundane events, stuff I should probably pay attention to. It is also possible to reconfigure Log4j while an application is up and running, a topic I’ll leave outside the scope of this article; it’s worth noting that Log4j can watch its configuration file for changes.

Using Custom Log Levels

Advanced logging frameworks like Log4j allow you to define custom levels. Custom levels can help you understand levels better by applying the level concept to your application domain. Let’s take a little detour and look at some interesting level nomenclatures.
DEFCON levels
My favorite example is a War Games module that implements the five defense readiness conditions (DEFCONs), the alert states used by the United States Armed Forces:

“It prescribes five graduated levels of readiness (or states of alert) for the U.S. military, and increase in severity from DEFCON 5 (least severe) to DEFCON 1 (most severe) to match varying military situations.”[1]

The U.S. Department of Defense uses exercise terms when referring to DEFCON levels, the intent being to preclude confusing exercise commands with actual operational commands. The exercise terms are: COCKED PISTOL (DEFCON 1), FAST PACE, ROUND HOUSE, DOUBLE TAKE, and FADE OUT (DEFCON 5).

DEFCON levels:

  • DEFCON 1 (White): Maximum force readiness.
  • DEFCON 2 (Red): Armed Forces ready to deploy and engage in less than 6 hours.
  • DEFCON 3 (Yellow): Air Force ready to mobilize in 15 minutes.
  • DEFCON 4 (Green): Above normal readiness.
  • DEFCON 5 (Blue): Normal peacetime readiness.

The DEFCONs are perfect candidates for custom log levels, as are their exercise terms.
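
As a sketch of how such custom levels could be declared with Log4j 2’s Level.forName (the class name and the chosen int values are illustrative; as with the built-in levels, smaller values are more severe):

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class DefconLogging {

    // The built-in levels use FATAL=100, ERROR=200, WARN=300, INFO=400, DEBUG=500, TRACE=600.
    public static final Level DEFCON_1 = Level.forName("DEFCON_1", 150);
    public static final Level DEFCON_5 = Level.forName("DEFCON_5", 350);

    private static final Logger logger = LogManager.getLogger(DefconLogging.class);

    public static void main(String[] args) {
        logger.log(DEFCON_5, "Normal peacetime readiness.");
        logger.log(DEFCON_1, "Maximum force readiness.");
    }
}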

You can also imagine a set of custom levels based on the Saffir-Simpson hurricane wind scale.

[Graphic: All North Atlantic and Eastern North Pacific major hurricanes (at least Category 3 on the Saffir-Simpson Hurricane Scale).]

The National Hurricane Center defines the Saffir-Simpson Hurricane Wind Scale as “a 1 to 5 rating based on a hurricane’s sustained wind speed. This scale estimates potential property damage. Hurricanes reaching Category 3 and higher are considered major hurricanes because of their potential for significant loss of life and damage. Category 1 and 2 storms are still dangerous, however, and require preventative measures. In the western North Pacific, the term ‘super typhoon’ is used for tropical cyclones with sustained winds exceeding 150 mph.”

Saffir-Simpson Hurricane Wind Scale (more):

  • Category 5 (major): 157 mph or higher (137 kt or higher; 252 km/h or higher). Catastrophic damage will occur.
  • Category 4 (major): 130-156 mph (113-136 kt; 209-251 km/h). Catastrophic damage will occur.
  • Category 3 (major): 111-129 mph (96-112 kt; 178-208 km/h). Devastating damage will occur.
  • Category 2: 96-110 mph (83-95 kt; 154-177 km/h). Extremely dangerous winds will cause extensive damage.
  • Category 1: 74-95 mph (64-82 kt; 119-153 km/h). Very dangerous winds will produce some damage.

This is another domain where custom log levels would be a natural fit.

I hope these examples make it clear, if not already, that the level is about the relative importance of log events, regardless of who sends events.

To recap: A log level describes how important an event is: Is this a low-level step, or a rare and dangerous once-in-a-100-year event?

Next up are loggers, the kind of logging object an application interacts with most often.

Understanding Loggers

An application sends log events like “I woke up” to the logging system through a logger.

The log event includes a level, so what the application is really saying is, “Here is an interesting bit of information (nothing alarming); I just woke up.”

A car could log a warning event saying, “Hey, driver, I’m running low on gas!” In code, this usually looks something like this:

logger.warn("I'm low on gas.");

A better message would include how much gas remains:

logger.warn("I'm low on gas; {} gallons remaining.", gallonsRemaining);

In this example, the {} is a parameter placeholder that Log4j replaces with the value of gallonsRemaining.

A logger has a name, in this case, we could call ours “car”.
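
With Log4j 2, obtaining such a named logger might look like this minimal sketch (the class name and the fuel threshold are made up):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Car {

    // A logger named "car", as in the example above.
    private static final Logger logger = LogManager.getLogger("car");

    public void checkFuel(double gallonsRemaining) {
        if (gallonsRemaining < 2.0) {
            logger.warn("I'm low on gas; {} gallons remaining.", gallonsRemaining);
        }
    }
}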

If you imagine a large enough system, using the same logger to log all events will make it harder to find any single event or an interesting subset out of the whole event set. Let’s say my system includes my house, office, home office, kitchen, each room in the house, people, cars, toys, appliances, and so on. You can see that we will end up with a lot of logging events, from all over the place. To remedy this issue, you use a distinct logger for each kind of object and give each one a unique name.

If I want to analyze my driving and still be alerted of all other important events, I can now configure Log4j to log all car events at the DEBUG level and all other loggers at the WARN level.

Refining Logging with Hierarchical Loggers

Let’s take this a step further, looking at logging for our kitchen subsystem, and imagine that we look at all events for one day, regardless of the levels involved.

Every time the refrigerator door opens, I get an event, when it closes, another event. Same for the kitchen door, the oven door, the dishwasher door; you get the idea. The refrigerator can also warn me when its water filter needs replacing. The oven can tell me when preheating to a preset temperature is done, when the timer goes off and my cake is done.

What I then get is a flood of events from the kitchen logger. Even if I can throttle the importance of events through levels, I still get to hear from all my appliances. What if I am only interested in the oven? Now, I know what you’re thinking: “Gary, just create one logger for each appliance.” “Well, yes, you’re right,” I’d say, “but what would you call each logger?” “I’d call them refrigerator, dishwasher, oven door, oven timer, and so on.”

OK, but what if I want to listen to all kitchen events again, not just the one appliance? I’d have to set the level on each appliance logger; if I have dozens (imagine hundreds) of loggers, that’s quite a pain. There should be a way to set the level for the whole kitchen, including everything in it.

And there is: By using hierarchical loggers.

To create a logger hierarchy, all you have to do is use periods in the name of your loggers, kind of like with internet domains. For example:

  • Kitchen
  • Kitchen.Oven
  • Kitchen.Oven.BottomOven
  • Kitchen.Oven.BottomOven.Door
  • Kitchen.Oven.BottomOven.Timer
  • Kitchen.Oven.BottomOven.Light
  • Kitchen.Oven.TopOven.Door
  • Kitchen.Oven.TopOven.Timer
  • Kitchen.Oven.TopOven.Light
  • Kitchen.Dishwasher
  • Kitchen.Refrigerator
  • Kitchen.Refrigerator.Door
  • Kitchen.Refrigerator.Filter
  • Kitchen.Refrigerator.Light
  • Kitchen.Refrigerator.Freezer.Door
  • Kitchen.Refrigerator.Freezer.Light
  • And so on

If my configuration only sets the Kitchen level to INFO, then all loggers that start with the string Kitchen inherit its level, INFO. In addition, if I configure the Kitchen.Oven logger to DEBUG, then all loggers that start with the string Kitchen.Oven will log at the DEBUG level. That’s how level inheritance works. Pretty cool and very powerful.
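
A sketch of the corresponding fragment of a log4j2.xml configuration (logger names taken from the list above; the Console appender is assumed to be defined elsewhere in the file):

<Loggers>
  <!-- Everything under Kitchen inherits INFO... -->
  <Logger name="Kitchen" level="INFO" />
  <!-- ...except the Kitchen.Oven subtree, which logs at DEBUG. -->
  <Logger name="Kitchen.Oven" level="DEBUG" />
  <Root level="WARN">
    <AppenderRef ref="Console" />
  </Root>
</Loggers>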

On the coding side, a common and pragmatic practice is to use one static logger for each class, named with the fully qualified name of the class (meaning the package name is included, like com.example.appliance.oven).

By carefully naming your loggers, you can get precisely defined log events, suitable for all manner of clever filtering, searching, and analysis.

You can already see that with just two concepts, levels and hierarchical loggers, you get a world of well-defined log events at your fingertips, allowing you to analyze and debug your system to your heart’s content, in live applications or post-mortem.

Top Gun Logging with Markers

The next ingredient in our logging soup is the event marker, which adds another level (pardon the pun) of information (again, sorry) to a log event. A Log4j marker has a name, and yes, it too can be hierarchical, but let’s leave that bit for another article. So, what is a marker? The way I like to look at it, is that a marker qualifies the kind of event I am signaling, like a tag on a blog post. This is very handy when many parts of the system, using different loggers, log the same kind of event. For example, every time a user opens and closes a door, whether that be an appliance door, a garage door, a doggy door or a POD (a Plain Old Door). This would sure be handy for security monitoring.

How would you log only open and close door events using loggers and levels? You can assume that door events are logged at… what? Maybe some appliances log door events at the INFO level and others at the DEBUG level. What if doors from a third-party library log this kind of event at TRACE? Another twist!

Noodle on that one for a minute…

Well, tricky, isn’t it? If I have loggers like Kitchen.Refrigerator.Door and Kitchen.Oven.TopOven.Door, the common root is Kitchen, not Door, so setting Kitchen to INFO is not what I want because I will get INFO events from all kitchen loggers! And what about the appliance that logs door events at the DEBUG level? That approach is dead on arrival.

Well, what about the reverse domain name trick? For each logger I could create another logger with the name reversed, one word at a time, so that Kitchen.Refrigerator.Door becomes Door.Refrigerator.Kitchen. Then I could set the Door logger to INFO and get all Door events! That would work for a given level, but it sure is nasty! Two loggers for each appliance? It gets worse: I’d need more reversed names for my Filters, Lights, and so on. Worst of all, this does not account for appliances that log door events at the DEBUG and TRACE levels! Nope, it can’t be done with this reverse hack.

That’s when Markers come to the rescue. The code in all your classes that contains a door changes from:

logger.info("Door opened");
...
logger.info("Door closed");

to:

logger.info(DOOR, "Door opened");
...
logger.info(DOOR, "Door closed");

Where DOOR is a constant that defines a marker. In Log4j, you define it like this:

public static final Marker DOOR = MarkerManager.getMarker("DOOR");

No new loggers needed, just an additional parameter to your log call, regardless of the level API used.

Now, I can configure Log4j to log only events that contain the marker DOOR (if that level is enabled). Truly awesome. In order to do this, we must use filters:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" monitorInterval="5">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT" />
  </Appenders>
  <Loggers>
    <Root level="WARN">
      <AppenderRef ref="Console" />
    </Root>
    <Logger name="com.example.oven" level="DEBUG" />
    <Logger name="com.example.garage" level="INFO" />
  </Loggers>
  <Filters>
    <MarkerFilter marker="DOOR" onMatch="ACCEPT" onMismatch="DENY" />
  </Filters>
</Configuration>

Here, we start by setting the root logger to WARN. Then we set the oven to log at DEBUG because the oven logs door events at the DEBUG level, and the garage to log at INFO because the garage logs door events at the INFO level. Next, we define filters, and here we only have one: a marker filter set to accept only DOOR markers and reject all others.

If instead we wanted to hide DOOR events, we’d say:

<MarkerFilter marker="DOOR" onMatch="DENY" onMismatch="NEUTRAL" />

I’ve used markers for:

  • Different kinds of hex dumps:
    • Compressed buffers
    • Decompressed buffers
    • IO buffers
  • Different kinds of SQL statements:
    • Queries
    • Batches
    • Prepared statements
  • Flow tracing (built-in Log4j)
  • and more

That, my friends, is what I call Top Gun logging!

Summary

Levels tell you how important an event is. Loggers define who does the logging. Markers refine what it is you are logging. Filters let you control what ends up in your logs. If you combine these concepts with care, your applications will create logs that will help you debug, analyze, monitor and audit your applications with ease and grace.

Happy coding,
Gary Gregory

[1] DEFCON: https://en.wikipedia.org/wiki/DEFCON

The Art of Test Driven Development: Per-Test Logging

Any application complex enough has a logging subsystem. When writing tests, it is often useful to enable a specific logging configuration for a given test case. In this article, I will show how to use Apache Log4j and JUnit to achieve this goal.

Log4j is the granddaddy of logging frameworks and, yes, I am partial to it as one of its developers and a member of its Project Management Committee. As we are about to release version 2.4, I find it far superior to all of its cousins. JUnit and the best practice of Test Driven Development should need no introduction; if you use neither, then I welcome you to the 21st century and its ‘extreme’ or ‘agile’ development practices.

When you are running a large build with thousands of tests, or even a small build with dozens, you do not want to set the logging level to debug or trace for the whole build. A small subsystem can end up generating a huge number of log events, leaving you with a task equivalent to finding a needle in a haystack when you try to locate the few logging events relevant to your development session.

What I want is to tell JUnit that a given test case should run with a specific Log4j configuration and then forget this configuration when the test finishes. To accomplish this, we will use a JUnit Rule and Log4j’s LoggerContextRule (this class is located in Log4j’s test jar; Log4j 2.3 named this class InitialContextRule).

Here is our test boilerplate:

import org.apache.logging.log4j.junit.LoggerContextRule;

import org.junit.Rule;
import org.junit.Test;

public class MyTest {

  @Rule
  public LoggerContextRule ctx =
    new LoggerContextRule("resources/MyConfig.xml");

  @Test
  public void testSomething() throws Exception {
    // testing…
  }

}

In JUnit, a @Rule annotates a field that points to a rule or a method that returns a rule. In our case, we store our rule in an instance variable, which must be public and non-static. A rule will run methods annotated with @Before, then @Test methods, and finally @After methods. JUnit throws an exception when any of these fail. The code for the rule itself runs before each @Before method. This means we get a fresh logging configuration for each @Test method.

The LoggerContextRule argument is a URI or path to a configuration file on your classpath.
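
For reference, such a test configuration could be as small as the following sketch (the appender and level are just examples):

<?xml version="1.0" encoding="UTF-8"?>
<!-- resources/MyConfig.xml -->
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT" />
  </Appenders>
  <Loggers>
    <Root level="DEBUG">
      <AppenderRef ref="Console" />
    </Root>
  </Loggers>
</Configuration>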

If you want the rule to run only once for the whole test class, then use a class rule instead:

@ClassRule
public static LoggerContextRule ctx = 
  new LoggerContextRule("resources/MyConfig.xml");

Note that the variable is now static (and still public).

Whether you should use a rule or a class rule depends on how much isolation you want between test methods. In the case of logging, it should not matter, and a class rule should be preferred since your build will be faster.

You can find the Log4j rule class in the Core’s test jar here.

Happy Coding,
Gary Gregory

Understanding JUnit method order execution (updated for version 4.11)

I’m revisiting my post Understanding JUnit method order execution to discuss how to fix tests that depend on method order by using JUnit 4.11.

Let’s start with a rule: You should not create test methods that depend on the order in which they are executed.

If you do have such tests and they fail randomly or rarely, this is why: JUnit relies on Java’s reflection API to discover which test methods to execute, and that API does not define the order of the methods it returns. Your tests may work for a long time and then fail, apparently at random. The thing is, you’ve just been lucky all along, relying on the Java runtime to give you a consistent answer when it makes no such guarantee.

You may even see some very confusing behavior like a superclass’ test methods being mixed in with its subclass. I’ve seen this in Eclipse and Ant for example.

JUnit 4.11 provides a workaround for this behavior with a new class-level annotation: @FixMethodOrder(MethodSorters.NAME_ASCENDING). For example:

 @FixMethodOrder(MethodSorters.NAME_ASCENDING)
 public class MyTestCase { ... }
 

The @FixMethodOrder annotation with MethodSorters.NAME_ASCENDING sorts the test methods in lexicographic method name order, using Method#toString() to break ties.

The recipe is: (1) Use @FixMethodOrder, (2) Fix your tests, (3) Remove @FixMethodOrder.

JUnit is available from GitHub.

JUnit 4.9 to the rescue with @ClassRule

In a previous article titled “JUnit @Rule not initialized before @BeforeClass”, I bemoaned the fact that a @Rule could not be set up to run before an @BeforeClass method.

It appears my wish has been answered with the release of JUnit 4.9 and the new @ClassRule annotation.

The only change required to make our example work is to replace @Rule with @ClassRule:

package test;

import java.io.File;
import java.io.IOException;

import junit.framework.Assert;

import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TemporaryFolderRuleBeforeClass {
    @ClassRule
    public static TemporaryFolder testFolder = new TemporaryFolder();

    @BeforeClass
    public static void testInTempFolder() throws IOException {
        File tempFile = testFolder.newFile("file.txt");
        File tempFolder = testFolder.newFolder("folder");
        System.out.println("Test folder: " + testFolder.getRoot());
        // test
        Assert.assertNotNull(testFolder.getRoot());
    }

    @Test
    public void test() {
        // ...
    }
}

Voila! It just works: the @ClassRule is set up correctly before the @BeforeClass method is called. Thank you, JUnit 4.9!

Java multi-threaded unit testing

With Java 5 and JUnit 4, writing multi-threaded unit tests has never been easier.

Let’s take a brain-dead ID generator as our domain object example:

    /**
     * Generates sequential unique IDs starting with 1, 2, 3, and so on.
     * <p>
     * This class is NOT thread-safe.
     * </p>
     */
    static class BrokenUniqueIdGenerator {
        private long counter = 0;

        public long nextId() {
            return ++counter;
        }
    }

This is how you test this class with thread loads of 1, 2, 4, 8, 16, and 32 threads. We use the java.util.concurrent API to manage the threads that exercise our domain object concurrently, and JUnit @Test methods to run the test and verify the results.

    @Test
    public void test01() throws InterruptedException, ExecutionException {
        test(1);
    }

    @Test
    public void test02() throws InterruptedException, ExecutionException {
        test(2);
    }

    @Test
    public void test04() throws InterruptedException, ExecutionException {
        test(4);
    }

    @Test
    public void test08() throws InterruptedException, ExecutionException {
        test(8);
    }

    @Test
    public void test16() throws InterruptedException, ExecutionException {
        test(16);
    }

    @Test
    public void test32() throws InterruptedException, ExecutionException {
        test(32);
    }

    private void test(final int threadCount) throws InterruptedException, ExecutionException {
        final BrokenUniqueIdGenerator domainObject = new BrokenUniqueIdGenerator();
        Callable<Long> task = new Callable<Long>() {
            @Override
            public Long call() {
                return domainObject.nextId();
            }
        };
        List<Callable<Long>> tasks = Collections.nCopies(threadCount, task);
        ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
        List<Future<Long>> futures = executorService.invokeAll(tasks);
        List<Long> resultList = new ArrayList<Long>(futures.size());
        // Check for exceptions
        for (Future<Long> future : futures) {
            // Throws an exception if an exception was thrown by the task.
            resultList.add(future.get());
        }
        // Validate the IDs
        Assert.assertEquals(threadCount, futures.size());
        List<Long> expectedList = new ArrayList<Long>(threadCount);
        for (long i = 1; i <= threadCount; i++) {
            expectedList.add(i);
        }
        Collections.sort(resultList);
        Assert.assertEquals(expectedList, resultList);
    }

Let’s walk through the test(int threadCount) method. We start by creating our domain object:

final BrokenUniqueIdGenerator domainObject = new BrokenUniqueIdGenerator();

This class has one method, nextId, which we wrap into a task, an instance of Callable:

        Callable<Long> task = new Callable<Long>() {
            @Override
            public Long call() {
                return domainObject.nextId();
            }
        };

This is just a generic way to fit our API call into the Java concurrency API.

We then make copies of this task, one for each thread:

List<Callable<Long>> tasks = Collections.nCopies(threadCount, task);

Next, we create a thread pool sized at least as large as the number of threads we want to test; in this case, we use exactly threadCount threads.

ExecutorService executorService = Executors.newFixedThreadPool(threadCount);

We then ask Java to run all the tasks concurrently using threads from the pool:

List<Future<Long>> futures = executorService.invokeAll(tasks);

The call to invokeAll blocks until all the tasks are done. Each task runs on a thread, which invokes the task’s call method, which in turn calls our domain object API, nextId().

When you run this test case, it will sometimes pass and sometimes fail, because the unsynchronized ++counter is not atomic: two threads can read the same value and hand out duplicate IDs.

That’s multithreaded testing with Java 5 and JUnit 4. Voila!

BTW, the proper implementation is:

    /**
     * Generates sequential unique IDs starting with 1, 2, 3, and so on.
     * <p>
     * This class is thread-safe.
     * </p>
     */
    static class UniqueIdGenerator {
        private final AtomicLong counter = new AtomicLong();

        public long nextId() {
            return counter.incrementAndGet();
        }
    }

The full listing is:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

import org.junit.Assert;
import org.junit.Test;

public class MultiThreadedTestCase {

    /**
     * Generates sequential unique IDs starting with 1, 2, 3, and so on.
     * <p>
     * This class is NOT thread-safe.
     * </p>
     */
    static class BrokenUniqueIdGenerator {
        private long counter = 0;

        public long nextId() {
            return ++counter;
        }
    }

    /**
     * Generates sequential unique IDs starting with 1, 2, 3, and so on.
     * <p>
     * This class is thread-safe.
     * </p>
     */
    static class UniqueIdGenerator {
        private final AtomicLong counter = new AtomicLong();

        public long nextId() {
            return counter.incrementAndGet();
        }
    }

    private void test(final int threadCount) throws InterruptedException, ExecutionException {
        final UniqueIdGenerator domainObject = new UniqueIdGenerator();
        Callable<Long> task = new Callable<Long>() {
            @Override
            public Long call() {
                return domainObject.nextId();
            }
        };
        List<Callable<Long>> tasks = Collections.nCopies(threadCount, task);
        ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
        List<Future<Long>> futures = executorService.invokeAll(tasks);
        List<Long> resultList = new ArrayList<Long>(futures.size());
        // Check for exceptions
        for (Future<Long> future : futures) {
            // Throws an exception if an exception was thrown by the task.
            resultList.add(future.get());
        }
        // Validate the IDs
        Assert.assertEquals(threadCount, futures.size());
        List<Long> expectedList = new ArrayList<Long>(threadCount);
        for (long i = 1; i <= threadCount; i++) {
            expectedList.add(i);
        }
        Collections.sort(resultList);
        Assert.assertEquals(expectedList, resultList);
    }

    @Test
    public void test01() throws InterruptedException, ExecutionException {
        test(1);
    }

    @Test
    public void test02() throws InterruptedException, ExecutionException {
        test(2);
    }

    @Test
    public void test04() throws InterruptedException, ExecutionException {
        test(4);
    }

    @Test
    public void test08() throws InterruptedException, ExecutionException {
        test(8);
    }

    @Test
    public void test16() throws InterruptedException, ExecutionException {
        test(16);
    }

    @Test
    public void test32() throws InterruptedException, ExecutionException {
        test(32);
    }
}

Note: I used Oracle Java 1.6.0_24 (64-bit) on Windows 7 (64-bit).

Conditionally ignoring tests in JUnit

JUnit conveniently lets you annotate a test method with @Ignore.

@Test
@Ignore
public void testAdd() {
   ...
}

This causes JUnit to completely ignore the method.

What if you want to only ignore the test under a condition determined at runtime? This would be nice:

@Test
@Ignore(someCondition())
public void testAdd() {
   ...
}

But Java and JUnit do not work like that. Instead, you need to use the Assume class.

@Test
public void testAdd() {
   Assume.assumeTrue(someCondition());
   ...
}

Voila!