10 Java Concurrency Patterns That Stop Multithreaded Bugs Before They Start

Learn 10 Java concurrency patterns that prevent race conditions, deadlocks, and data corruption. Real code examples included — start writing thread-safe programs today.

I remember the first time I tried to write a Java program that did two things at once. I had a simple counter that two threads both incremented. The result was never right: sometimes it was 2, as expected, but often it was only 1 because one increment was lost. It took me hours to understand that each thread could be working on a stale copy of the variable in its CPU cache, and that incrementing is actually three separate steps: read the value, add one, write it back. That is why we need concurrency patterns. They are the guardrails that keep multithreaded code from falling apart.

I will walk you through ten patterns that have saved me hundreds of hours of debugging. Each one addresses a specific problem, and I will show you exactly how to use them, with code you can copy and modify. Start with the simplest, and only move to more complex tools when the simpler ones no longer fit.

1. Immutable Objects – Share without Fear

The easiest way to avoid thread-safety headaches is to create objects that cannot change. If an object’s state never changes after construction, you can pass it to any thread without locks or synchronization. The rule is simple: make all fields final, do not provide setter methods, and if you must store a mutable object like a Date or a List, copy it defensively in the constructor.

public final class User {
    private final String name;
    private final int age;

    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }
}

I use immutable objects for configuration data, event payloads, and any value that represents a snapshot of state. When you need to “update” the object, create a new one with the new values. This pattern eliminates all race conditions because no thread can see a partially modified state.
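When a constructor parameter is itself mutable, the defensive copy mentioned above keeps the class immutable, and an "update" simply builds a new instance. A minimal sketch (the Meeting class and its fields are my own illustration, not from a real API):

```java
import java.util.List;

public final class Meeting {
    private final String title;
    private final List<String> attendees;

    public Meeting(String title, List<String> attendees) {
        this.title = title;
        // Defensive copy: callers cannot mutate our state through their list
        this.attendees = List.copyOf(attendees);
    }

    public String getTitle() { return title; }
    public List<String> getAttendees() { return attendees; } // already unmodifiable

    // "Update" by constructing a new immutable instance
    public Meeting withTitle(String newTitle) {
        return new Meeting(newTitle, attendees);
    }
}
```

List.copyOf (Java 10+) both copies the input and returns an unmodifiable list, so the getter can hand it out directly.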

2. Synchronized Blocks – The Basic Lock

When you must change shared mutable data, the synchronized keyword is your first and most straightforward tool. It guarantees that only one thread can execute the block at a time, and any changes made are visible to other threads after they exit the synchronized block.

public class VisitCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}

I make sure to keep the synchronized block as short as possible—just the operation that must be atomic. If you have multiple methods that need independent locks, synchronize on a private final lock object instead of the whole method.

private final Object lock = new Object();

public void methodA() {
    synchronized(lock) { /* ... */ }
}

This prevents external code from synchronizing on your instance and causing deadlocks.
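As a sketch of that advice, here is a hypothetical Metrics class where two unrelated counters each get their own private lock, so a thread touching one never contends with a thread touching the other:

```java
public class Metrics {
    private final Object hitLock = new Object();
    private final Object missLock = new Object();
    private long hits;
    private long misses;

    // Independent locks: recording a hit never blocks recording a miss
    public void recordHit()  { synchronized (hitLock)  { hits++; } }
    public void recordMiss() { synchronized (missLock) { misses++; } }

    public long hits()   { synchronized (hitLock)  { return hits; } }
    public long misses() { synchronized (missLock) { return misses; } }
}
```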

3. ReentrantLock – When You Need More Flexibility

Sometimes synchronized is too rigid. You might want to try acquiring a lock for a limited time, or you need a lock acquisition that can be interrupted. That is when I reach for ReentrantLock. It works like synchronized but with extra features. Always release the lock in a finally block, so it is freed even when the critical section throws; a lock that is never released blocks every other waiter forever.

private final ReentrantLock lock = new ReentrantLock();

public void process() {
    lock.lock();
    try {
        // critical section
    } finally {
        lock.unlock();
    }
}

If you want to wait up to two seconds for the lock, use the timed tryLock. Note that it throws InterruptedException, so the enclosing method must catch it or declare it.

if (lock.tryLock(2, TimeUnit.SECONDS)) {
    try {
        // do work
    } finally {
        lock.unlock();
    }
} else {
    // could not get lock – do something else
}

I avoid fairness policies (new ReentrantLock(true)) unless I have a proven starvation problem, because fairness reduces throughput.

4. ReadWriteLock – Many Readers, One Writer

When you have data that is read frequently but changed rarely, a ReadWriteLock allows many threads to read at the same time, while still giving exclusive access to writers. This can dramatically speed up read-heavy applications like caches or configuration stores.

private final Map<String, String> config = new HashMap<>();
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

public String get(String key) {
    rwLock.readLock().lock();
    try {
        return config.get(key);
    } finally {
        rwLock.readLock().unlock();
    }
}

public void set(String key, String value) {
    rwLock.writeLock().lock();
    try {
        config.put(key, value);
    } finally {
        rwLock.writeLock().unlock();
    }
}

I keep the write lock held for as short a time as possible. If the update involves complex calculations, I compute the new value outside the lock, then enter the write lock only to replace the reference.
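That rebuild-then-swap idea can be sketched like this (ConfigStore and reload are illustrative names): the expensive work happens outside any lock, and the write lock is held only for a reference assignment.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConfigStore {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private Map<String, String> config = new HashMap<>();

    public String get(String key) {
        rwLock.readLock().lock();
        try {
            return config.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The (potentially expensive) rebuild happens outside any lock;
    // the write lock is held only long enough to swap the reference.
    public void reload(Map<String, String> freshValues) {
        Map<String, String> rebuilt = new HashMap<>(freshValues); // computed outside the lock
        rwLock.writeLock().lock();
        try {
            config = rebuilt;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```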

5. Atomic Variables – Lock-Free for Simple Counters

For a single integer, boolean, or object reference that many threads update, I prefer atomic classes from java.util.concurrent.atomic. They use low-level compare-and-swap CPU instructions to update variables atomically without locks. For simple updates they are typically faster than locking, and because no lock is ever held, a thread that gets descheduled mid-operation can never block the others.

private final AtomicInteger counter = new AtomicInteger(0);

public int next() {
    return counter.incrementAndGet();
}

public boolean compareAndSetExpected(int expected, int newValue) {
    return counter.compareAndSet(expected, newValue);
}

I use AtomicBoolean for flags, AtomicLong for sequence numbers, and AtomicReference for swapping entire objects. They are perfect for statistics counters, ID generators, and state machines.
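As one sketch of the state-machine use, a hypothetical Connection class can use compareAndSet on an AtomicReference so that exactly one thread wins each transition:

```java
import java.util.concurrent.atomic.AtomicReference;

public class Connection {
    enum State { IDLE, CONNECTING, CONNECTED, CLOSED }

    private final AtomicReference<State> state = new AtomicReference<>(State.IDLE);

    // Only one thread wins the IDLE -> CONNECTING transition;
    // every other caller sees false and can back off.
    public boolean tryStartConnecting() {
        return state.compareAndSet(State.IDLE, State.CONNECTING);
    }

    public State current() {
        return state.get();
    }
}
```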

6. CopyOnWrite Collections – Safe Iteration Without Locks

When you have a list that is read far more often than it is written (like a listener list or a small set of rules), CopyOnWriteArrayList and CopyOnWriteArraySet are your friends. Every time you add or remove an element, they create a fresh copy of the underlying array. Iteration is lock-free and never throws ConcurrentModificationException.

private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

public void addListener(Runnable r) {
    listeners.add(r);
}

public void fireEvent() {
    for (Runnable r : listeners) {
        r.run();
    }
}

I only use these collections when writes are rare and the collection is small. If you modify them frequently, the copying cost can ruin performance.

7. ConcurrentHashMap – The King of Concurrent Maps

A regular HashMap is dangerous in multithreaded code: concurrent writes can lose updates, and on older JVMs a racing resize could even leave threads stuck in an infinite loop. ConcurrentHashMap is designed from the ground up for concurrency. It allows multiple threads to read and write without blocking each other, thanks to internal partitioning and lock-free reads.

private final ConcurrentHashMap<String, Task> tasks = new ConcurrentHashMap<>();

public Task getOrCreate(String id, Supplier<Task> factory) {
    return tasks.computeIfAbsent(id, k -> factory.get());
}

public void removeIfCompleted(String id) {
    tasks.computeIfPresent(id, (k, v) -> v.isCompleted() ? null : v);
}

I use computeIfAbsent to atomically initialize a value without extra locking. For bulk operations, I prefer using forEach with parallelism controlled explicitly.
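The bulk forEach mentioned above takes a parallelismThreshold as its first argument. A small sketch (WordStats is a made-up example): Long.MAX_VALUE keeps the traversal sequential, while a small threshold lets the common ForkJoinPool split the work across entries.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class WordStats {
    public static long countFrequent(ConcurrentHashMap<String, Integer> counts) {
        LongAdder total = new LongAdder();
        // Threshold of Long.MAX_VALUE forces sequential traversal;
        // a small value (e.g. 100) allows parallel traversal in the common pool.
        counts.forEach(Long.MAX_VALUE, (word, n) -> {
            if (n > 10) total.increment();
        });
        return total.sum();
    }
}
```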

8. CountDownLatch and CyclicBarrier – Coordinating Threads

Sometimes you need threads to wait for each other. CountDownLatch lets one or more threads wait until a set of operations completes. It is one-shot – once the latch reaches zero, it cannot be reused. CyclicBarrier lets a fixed number of threads wait at a common point, then proceed together. It resets automatically, so you can reuse it.

// CountDownLatch example
CountDownLatch latch = new CountDownLatch(3);
ExecutorService executor = Executors.newFixedThreadPool(3);
for (int i = 0; i < 3; i++) {
    executor.submit(() -> {
        doWork();
        latch.countDown();
    });
}
latch.await(); // wait for all three tasks; throws InterruptedException
System.out.println("All done");

// CyclicBarrier example
CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("Barrier reached"));
for (int i = 0; i < 3; i++) {
    executor.submit(() -> {
        try {
            prepare();
            barrier.await(); // wait for the others
            process();
        } catch (InterruptedException | BrokenBarrierException e) {
            Thread.currentThread().interrupt(); // await() throws checked exceptions
        }
    });
}

I use CountDownLatch for startup gate conditions, and CyclicBarrier for iterative parallel algorithms like matrix multiplication.

9. ExecutorService – Managing Thread Pools

Creating new threads manually is a bad idea—it is expensive and easy to leak. I always use an ExecutorService, which manages a pool of threads and lets me submit tasks. The factory methods (newFixedThreadPool, newCachedThreadPool, newSingleThreadExecutor) cover most needs, but I often build a custom ThreadPoolExecutor for fine-grained control.

ThreadPoolExecutor pool = new ThreadPoolExecutor(
    4,                     // core pool size
    8,                     // max pool size
    60, TimeUnit.SECONDS,  // keep-alive time
    new LinkedBlockingQueue<>(200),
    new ThreadPoolExecutor.CallerRunsPolicy() // backpressure
);

for (int i = 0; i < 1000; i++) {
    pool.submit(() -> process());
}

pool.shutdown();
pool.awaitTermination(1, TimeUnit.MINUTES);

I size the pool based on workload: CPU-bound tasks get availableProcessors() + 1, I/O‑bound tasks get much larger pools. The CallerRunsPolicy prevents silent task loss by running rejected tasks on the submitting thread.
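Those sizing rules can be written down directly. The waitRatio heuristic below is a rough rule of thumb (threads scale with the fraction of time tasks spend waiting), not a law:

```java
public class PoolSizing {
    // CPU-bound: roughly one thread per core, plus one spare
    public static int cpuBoundSize() {
        return Runtime.getRuntime().availableProcessors() + 1;
    }

    // I/O-bound rule of thumb: scale by how long tasks spend waiting.
    // waitRatio = waitTime / computeTime, e.g. 9 when tasks wait 90% of the time.
    public static int ioBoundSize(double waitRatio) {
        return (int) (Runtime.getRuntime().availableProcessors() * (1 + waitRatio));
    }
}
```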

10. Deadlock Avoidance – Consistent Lock Ordering

Deadlocks happen when two threads each hold a lock the other needs. The most reliable way to avoid them is to always acquire locks in a fixed global order. I use System.identityHashCode() to impose a consistent ordering on objects that don't have a natural one (in the rare case of a hash collision, fall back to a single global tie-breaker lock).

Object lockA = ...;
Object lockB = ...;

void transfer() {
    Object first = System.identityHashCode(lockA) < System.identityHashCode(lockB) ? lockA : lockB;
    Object second = (first == lockA) ? lockB : lockA;

    synchronized(first) {
        synchronized(second) {
            // do the transfer
        }
    }
}

I also add a timeout to lock attempts when using ReentrantLock so that if a deadlock does happen (for example, in legacy code), threads don't hang forever. I run stress tests with tools like jcstress to shake out races, and inspect thread dumps to find hidden deadlocks.
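The timeout idea can be sketched as a helper that tries both locks, releases everything on failure, and retries after a randomized backoff. Transfer.withBothLocks is a hypothetical helper, not a library method:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class Transfer {
    // Try both locks with a timeout; on failure, release everything,
    // back off briefly, and retry, so a deadlock cannot hang forever.
    public static boolean withBothLocks(ReentrantLock a, ReentrantLock b, Runnable work)
            throws InterruptedException {
        for (int attempt = 0; attempt < 3; attempt++) {
            if (a.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    if (b.tryLock(1, TimeUnit.SECONDS)) {
                        try {
                            work.run();
                            return true;
                        } finally {
                            b.unlock();
                        }
                    }
                } finally {
                    a.unlock();
                }
            }
            Thread.sleep(ThreadLocalRandom.current().nextInt(10, 50)); // randomized backoff
        }
        return false; // caller decides how to handle persistent contention
    }
}
```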


These ten patterns are the building blocks of safe multithreaded Java. I started with the simplest ones—immutable objects and synchronized—and only moved to ReentrantLock and ConcurrentHashMap when my applications demanded more throughput. The key is to choose the pattern that matches the problem, not the most advanced one. Always test your code under concurrent load. Write small, focused tests that race multiple threads, and use tools like ThreadMXBean to detect deadlocks.

Concurrency is hard, but with these patterns, you can build systems that are correct and fast. I still use them every day. So can you.

