Multi-threading is one of the most powerful yet challenging aspects of Java development. I’ve spent years working with concurrent systems, and I still encounter subtle bugs that take hours to diagnose. In this article, I’ll share the most common Java concurrency anti-patterns I’ve seen in production code and provide concrete solutions to fix them.
Incorrect Synchronization
The most fundamental concurrency mistake is failing to properly synchronize access to shared mutable state. Consider this innocent-looking counter:
public class UnsafeCounter {
    private int count = 0;

    public int increment() {
        return ++count; // Not atomic!
    }
}
The problem? The increment operation isn’t atomic. It involves reading the value, incrementing it, and writing it back. If two threads call this method simultaneously, they might read the same value, resulting in a lost update.
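You don't have to take the lost updates on faith. A quick harness like the one below (it assumes a hypothetical getCount() accessor on UnsafeCounter, which the class above doesn't define) usually reports noticeably fewer than 200,000 increments:
static void demonstrateLostUpdates() throws InterruptedException {
    UnsafeCounter counter = new UnsafeCounter();
    Runnable task = () -> {
        for (int i = 0; i < 100_000; i++) {
            counter.increment();
        }
    };
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    // Typically prints less than 200000 because increments were lost
    System.out.println(counter.getCount()); // hypothetical accessor
}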
Here’s how to fix it:
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private int count = 0;

    public synchronized int increment() {
        return ++count;
    }

    // Alternative using atomic classes
    private final AtomicInteger atomicCount = new AtomicInteger(0);

    public int incrementAtomic() {
        return atomicCount.incrementAndGet();
    }
}
The synchronized version ensures only one thread executes the method at a time. The AtomicInteger approach is often more efficient because it relies on lock-free compare-and-swap (CAS) instructions supported by the CPU rather than acquiring a monitor.
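To make "lock-free" concrete, here is roughly the retry loop that incrementAndGet performs, sketched with the public compareAndSet API (the real JDK implementation uses an intrinsic, so treat this as an illustration, not the actual JDK code):
public int incrementWithCas(AtomicInteger counter) {
    while (true) {
        int current = counter.get();
        int next = current + 1;
        if (counter.compareAndSet(current, next)) {
            return next; // no other thread changed the value in between
        }
        // Another thread won the race; loop and retry with the fresh value
    }
}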
Thread Pool Configuration Issues
Thread pools are essential for managing thread lifecycle, but misconfiguring them can lead to serious problems:
public class ThreadPoolMisuse {
    // Problematic: Unbounded queue with fixed thread pool
    private ExecutorService badPool = Executors.newFixedThreadPool(10);
}
The fixed thread pool uses an unbounded queue by default. If tasks are submitted faster than they can be processed, the queue will grow indefinitely, potentially causing OutOfMemoryError.
A better approach:
private ExecutorService goodPool = new ThreadPoolExecutor(
    5,                                        // Core pool size
    10,                                       // Max pool size
    60, TimeUnit.SECONDS,                     // Thread keep-alive time
    new ArrayBlockingQueue<>(100),            // Bounded queue
    new ThreadPoolExecutor.CallerRunsPolicy() // Rejection policy
);
This configuration limits the queue size and applies a rejection policy (in this case, executing the task in the caller’s thread) to prevent resource exhaustion.
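A useful property of CallerRunsPolicy is built-in backpressure: when the pool and queue are saturated, the submitting thread runs the task itself, which naturally throttles submission. A toy demonstration (the tiny sizes are chosen only to force rejection quickly):
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    1, 1, 0, TimeUnit.SECONDS,
    new ArrayBlockingQueue<>(1),
    new ThreadPoolExecutor.CallerRunsPolicy());

for (int i = 0; i < 4; i++) {
    pool.execute(() -> {
        // With one worker and a one-slot queue, some iterations print
        // the submitter's thread name (e.g. "main"): the caller ran them
        System.out.println(Thread.currentThread().getName());
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}
pool.shutdown();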
ThreadLocal Memory Leaks
ThreadLocal variables are great for thread-confined data, but they can cause memory leaks in applications using thread pools:
public class ThreadLocalLeak {
    private static final ThreadLocal<ExpensiveObject> threadLocal = new ThreadLocal<>();

    public void process() {
        threadLocal.set(new ExpensiveObject());
        doWork();
        // Missing threadLocal.remove() - memory leak!
    }
}
When a thread completes a task and returns to the pool, any ThreadLocal values remain attached to it. Over time, this can accumulate significant memory waste.
The fix is simple but crucial:
public void process() {
    try {
        threadLocal.set(new ExpensiveObject());
        doWork();
    } finally {
        threadLocal.remove(); // Prevent memory leak
    }
}
Always clean up ThreadLocal variables when you’re done with them, especially in server applications using thread pools.
Lock Ordering Deadlocks
Deadlocks often occur when two threads acquire multiple locks in different orders:
public class DeadlockProne {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void methodA() {
        synchronized (lock1) {
            synchronized (lock2) {
                // Operation
            }
        }
    }

    public void methodB() {
        synchronized (lock2) {
            synchronized (lock1) {
                // Operation
            }
        }
    }
}
If thread 1 calls methodA while thread 2 calls methodB, they might deadlock: thread 1 holds lock1 and waits for lock2, while thread 2 holds lock2 and waits for lock1.
The solution is to establish a consistent lock ordering throughout your codebase:
public void methodBFixed() {
    synchronized (lock1) { // Always acquire lock1 first
        synchronized (lock2) {
            // Operation
        }
    }
}
For complex systems, consider using java.util.concurrent.locks.Lock with tryLock() and timeout parameters to recover from potential deadlocks.
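A sketch of that pattern, assuming java.util.concurrent.locks.ReentrantLock (the retry and backoff policy on failure is up to the caller):
private final ReentrantLock lockA = new ReentrantLock();
private final ReentrantLock lockB = new ReentrantLock();

public boolean transferSafely() throws InterruptedException {
    // Try to acquire both locks with a timeout; back off instead of
    // blocking forever and risking deadlock
    if (lockA.tryLock(1, TimeUnit.SECONDS)) {
        try {
            if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    // Operation requiring both locks
                    return true;
                } finally {
                    lockB.unlock();
                }
            }
        } finally {
            lockA.unlock();
        }
    }
    return false; // Could not get both locks; caller should retry or abort
}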
Double-Checked Locking Pitfall
The infamous double-checked locking pattern tries to optimize singleton creation:
public class DoubleCheckedLockingBroken {
    private static Helper instance;

    public static Helper getInstance() {
        if (instance == null) { // First check (no lock)
            synchronized (DoubleCheckedLockingBroken.class) {
                if (instance == null) { // Second check (with lock)
                    instance = new Helper();
                }
            }
        }
        return instance;
    }
}
Prior to Java 5, this pattern was broken because the memory model couldn't guarantee safe publication of the instance. Even in modern Java it's incorrect without a memory barrier: a thread can observe a non-null reference to a Helper whose fields haven't been fully initialized yet.
The fixed version requires the volatile keyword:
private static volatile Helper instance;

public static Helper getInstance() {
    if (instance == null) {
        synchronized (DoubleCheckedLockingBroken.class) {
            if (instance == null) {
                instance = new Helper();
            }
        }
    }
    return instance;
}
However, in most cases, a simpler approach is better:
private static class HelperHolder {
    static final Helper INSTANCE = new Helper();
}

public static Helper getInstance() {
    return HelperHolder.INSTANCE;
}
This initialization-on-demand holder idiom leverages JVM class loading guarantees and avoids explicit synchronization.
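If it fits your design, an enum singleton (shown here as a hypothetical HelperSingleton) gets the same lazy, thread-safe initialization from the JVM's class-loading machinery and is additionally serialization-safe:
public enum HelperSingleton {
    INSTANCE;

    // The JVM creates INSTANCE exactly once, on first use,
    // with safe publication to all threads
    public void doWork() {
        // ...
    }
}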
Ignoring Interrupted Exceptions
I’ve seen this mistake in nearly every large Java codebase:
public void badRun() {
    try {
        while (true) {
            Thread.sleep(1000);
            doWork();
        }
    } catch (InterruptedException e) {
        // Exception swallowed!
    }
}
Interruption is Java’s cooperative cancellation mechanism. When you catch InterruptedException but don’t respond appropriately, you break this mechanism.
The correct approach:
public void run() {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            Thread.sleep(1000);
            doWork();
        }
    } catch (InterruptedException e) {
        // Restore the interrupted status
        Thread.currentThread().interrupt();
    }
}
When catching InterruptedException, either rethrow it or restore the interrupt status so higher-level code can detect the cancellation request.
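When your method's signature allows it, simply propagating the exception is the cleanest option; a minimal sketch:
// Preferred when callers are equipped to handle cancellation themselves
public void runUntilInterrupted() throws InterruptedException {
    while (true) {
        Thread.sleep(1000); // InterruptedException propagates to the caller
        doWork();
    }
}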
Using Non-Thread-Safe Collections
Using regular collections in concurrent contexts is asking for trouble:
public class CollectionMisuse {
    private List<String> unsafeList = new ArrayList<>();
    private Map<String, Integer> unsafeMap = new HashMap<>();
    // Multiple threads accessing these collections will cause problems
}
These collections aren’t thread-safe and can produce unpredictable results or throw ConcurrentModificationException when accessed by multiple threads.
Fix by using proper concurrent collections:
private List<String> safeList = Collections.synchronizedList(new ArrayList<>());
private Map<String, Integer> safeMap = new ConcurrentHashMap<>();
Remember that even with synchronized collections, iteration requires external synchronization:
public void processItemsSafely() {
    synchronized (safeList) {
        for (String item : safeList) {
            process(item);
        }
    }
}
ConcurrentHashMap and other java.util.concurrent collections offer better scalability than synchronized collections and don't require external synchronization for iteration: their iterators are weakly consistent, meaning they never throw ConcurrentModificationException but may or may not reflect updates made during iteration.
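ConcurrentHashMap also offers atomic compound operations that eliminate check-then-act races entirely. For example, a thread-safe word counter (a small sketch; merge applies the remapping function atomically per key):
private final ConcurrentHashMap<String, Integer> wordCounts = new ConcurrentHashMap<>();

public void recordWord(String word) {
    // Atomically inserts 1 or adds 1 to the existing count; no external locking
    wordCounts.merge(word, 1, Integer::sum);
}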
Race Conditions in Lazy Initialization
Lazy initialization can introduce subtle race conditions:
public class LazyInitRace {
    private Map<String, Object> cache;

    public Object getCachedItem(String key) {
        if (cache == null) {
            cache = new HashMap<>(); // Race condition
        }
        Object value = cache.get(key);
        if (value == null) {
            value = createExpensiveObject(key);
            cache.put(key, value); // Another race condition
        }
        return value;
    }
}
Multiple threads might initialize the cache or create duplicate expensive objects.
Here’s a thread-safe implementation:
private volatile Map<String, Object> cache; // volatile is essential for double-checked locking

public Object getCachedItemSafely(String key) {
    Map<String, Object> localCache = cache;
    if (localCache == null) {
        synchronized (this) {
            localCache = cache;
            if (localCache == null) {
                cache = localCache = new ConcurrentHashMap<>();
            }
        }
    }
    return localCache.computeIfAbsent(key, this::createExpensiveObject);
}
This uses double-checked locking for the cache initialization (note the volatile field, which is required for the same visibility reasons discussed earlier) and the computeIfAbsent method to atomically get-or-create values. In practice, eagerly initializing the field as a final ConcurrentHashMap is usually simpler and eliminates the first race entirely.
Parallel Stream Misuse
Parallel streams can dramatically improve performance, but they’re also easy to misuse:
public int sumValuesIncorrectly(List<Integer> values) {
    int[] sum = {0}; // mutable holder so the lambda compiles
    values.parallelStream().forEach(value -> sum[0] += value); // Race condition
    return sum[0];
}
This code has a data race on the shared sum holder, leading to incorrect results. (A plain local int wouldn't even compile here, since lambdas can only capture effectively final locals; the language is hinting that this design is wrong.)
The correct approach uses the stream’s built-in reduction capabilities:
public int sumValuesCorrectly(List<Integer> values) {
    return values.parallelStream().reduce(0, Integer::sum);
}
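If boxing overhead matters for large lists, a primitive stream avoids it; a minimal variant:
public int sumValuesWithoutBoxing(List<Integer> values) {
    // mapToInt unboxes each element once, then sums primitives
    return values.parallelStream().mapToInt(Integer::intValue).sum();
}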
Another common mistake is using parallel streams for IO-bound operations:
files.parallelStream().forEach(this::readAndProcessFile); // Inefficient
Parallel streams are optimized for CPU-bound tasks with minimal contention. For IO-bound operations, CompletableFuture with a dedicated executor is usually more appropriate (without an explicit executor, runAsync falls back to the same common ForkJoinPool that parallel streams use):
ExecutorService ioPool = Executors.newFixedThreadPool(16); // sized for IO wait, not CPU count

List<CompletableFuture<Void>> futures = files.stream()
    .map(file -> CompletableFuture.runAsync(() -> readAndProcessFile(file), ioPool))
    .collect(Collectors.toList());

futures.forEach(CompletableFuture::join);
Executor Service Resource Leaks
Failing to shut down executor services properly is a common source of resource leaks:
public void processTasksBadly(List<Runnable> tasks) {
    ExecutorService executor = Executors.newFixedThreadPool(4);
    tasks.forEach(executor::submit);
    // Missing shutdown, causing resource leak
}
This code creates non-daemon threads that are never reclaimed. Each call leaks another pool, and the lingering threads can even prevent the JVM from exiting cleanly.
Proper shutdown is essential:
public void processTasksProperly(List<Runnable> tasks) {
    ExecutorService executor = Executors.newFixedThreadPool(4);
    try {
        tasks.forEach(executor::submit);
    } finally {
        executor.shutdown();
        try {
            if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
This ensures the executor’s threads are properly terminated, preventing resource leaks. In real applications, consider creating executor services at the application level rather than creating and shutting them down for individual operations.
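A sketch of that application-level approach, using a JVM shutdown hook as a stand-in for whatever lifecycle mechanism your framework provides (all names here are illustrative):
public final class AppExecutors {
    // One shared pool for the whole application
    public static final ExecutorService WORKER_POOL =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    static {
        // Crude but effective for standalone apps; frameworks like Spring
        // offer proper lifecycle callbacks instead
        Runtime.getRuntime().addShutdownHook(new Thread(WORKER_POOL::shutdown));
    }

    private AppExecutors() {}
}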
Final Thoughts
Concurrent programming is inherently complex, and these anti-patterns represent just the tip of the iceberg. The good news is that Java’s concurrency libraries have evolved significantly, providing higher-level abstractions that make correct concurrent code easier to write.
When writing concurrent code, I follow these principles:
- Minimize shared mutable state
- Use existing thread-safe classes instead of low-level synchronization
- Prefer immutability when possible
- Document thread-safety characteristics of your classes
- Test thoroughly with concurrency-focused tools
By avoiding these common anti-patterns and following best practices, you can build robust concurrent applications that effectively leverage modern multi-core processors.
Remember, the simplest solution that works correctly is usually the best one. Don’t prematurely optimize concurrent code—first make it correct, then make it fast if necessary.