Java’s CompletableFuture fundamentally changed how I approach asynchronous programming. By representing asynchronous tasks as composable building blocks, it allows creating complex workflows without callback hell. Here are practical techniques I regularly use in production systems, with concrete examples from real projects.
Basic Execution
Starting simple: supplyAsync offloads work to the common ForkJoinPool. I use it for independent tasks like fetching configuration. Block with join() only when absolutely necessary, since blocking defeats the non-blocking benefits.
CompletableFuture<Config> configFuture = CompletableFuture.supplyAsync(() -> {
return loadConfigFromRemote(); // Simulate 200ms I/O
});
// Do other work here
Config config = configFuture.join(); // Last resort blocking
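To show what "avoid join() until the edge" looks like in practice, here's a minimal, self-contained sketch. The loadConfigFromRemote method is a hypothetical stand-in returning a fixed string; the point is that the result is consumed inside the pipeline rather than by blocking early:

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingConfig {
    // Hypothetical stand-in for the remote config load
    static String loadConfigFromRemote() { return "timeout=30"; }

    // Consume the result when it arrives instead of blocking for it
    public static CompletableFuture<String> loadAndLog() {
        return CompletableFuture.supplyAsync(NonBlockingConfig::loadConfigFromRemote)
                .thenApply(config -> {
                    System.out.println("Loaded: " + config);
                    return config;
                });
    }

    public static void main(String[] args) {
        loadAndLog().join(); // block only at the program's edge, if at all
    }
}
```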
Chaining Transformations
Chaining via thenApply avoids unnecessary thread hopping: each non-async stage typically runs on the thread that completed the previous one. This pipeline converts CSV to objects and then filters them:
CompletableFuture<List<Product>> products = CompletableFuture
.supplyAsync(() -> readCsv("products.csv"))
.thenApply(csv -> parseProducts(csv))
.thenApply(list -> filterInStock(list));
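The same shape works end to end with simple types. This is a runnable sketch, not the original project's code: readCsv is a hypothetical stand-in returning name,stock pairs, and the parse and filter stages operate on plain strings:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class CsvPipeline {
    // Hypothetical stand-in for readCsv: one "name,stock" pair per line
    static String readCsv(String path) { return "widget,5\ngadget,0"; }

    public static CompletableFuture<List<String>> inStockNames(String path) {
        return CompletableFuture
                .supplyAsync(() -> readCsv(path))
                .thenApply(csv -> List.of(csv.split("\n")))    // parse rows
                .thenApply(rows -> rows.stream()
                        .filter(r -> !r.endsWith(",0"))        // drop zero-stock rows
                        .map(r -> r.split(",")[0])             // keep product name
                        .toList());
    }
}
```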
Combining Results
When merging API calls, thenCombine shines. Below, user data and orders are fetched concurrently; when both complete, we build a unified response:
CompletableFuture<User> userFuture = fetchUserAsync(userId);
CompletableFuture<Order> orderFuture = fetchOrderAsync(orderId);
CompletableFuture<UserOrderComposite> composite =
    userFuture.thenCombine(orderFuture, (user, order) -> {
        return new UserOrderComposite(user, order); // Combine when both ready
    });
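As a self-contained version of the same idea, here is a sketch using plain String and Integer values in place of the User and Order fetches. Note that the combined stage is kept as a reference so its result isn't lost:

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    // Simple stand-ins for fetchUserAsync/fetchOrderAsync
    public static CompletableFuture<String> merged() {
        CompletableFuture<String> userFuture   = CompletableFuture.supplyAsync(() -> "alice");
        CompletableFuture<Integer> orderFuture = CompletableFuture.supplyAsync(() -> 42);

        // The combined stage's value is the merged result of both futures
        return userFuture.thenCombine(orderFuture, (user, order) -> user + "#" + order);
    }
}
```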
Error Recovery
Use exceptionally for fallbacks. In this payment service, failed transactions default to manual review:
CompletableFuture<Receipt> payment = processPaymentAsync(tx)
.exceptionally(ex -> {
log.warn("Payment failed, queuing review: {}", ex.getMessage());
return reviewService.queueManualReview(tx);
});
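A close relative worth knowing is handle(), which observes both outcomes in a single callback. A minimal sketch, where the "MANUAL_REVIEW" string stands in for the queueManualReview fallback above:

```java
import java.util.concurrent.CompletableFuture;

public class HandleDemo {
    // handle() receives (result, exception); exactly one is non-null
    public static CompletableFuture<String> settle(CompletableFuture<String> payment) {
        return payment.handle((receipt, ex) ->
                ex == null ? receipt : "MANUAL_REVIEW");
    }
}
```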
Timeout Handling
Avoid stuck threads with orTimeout (Java 9+). This inventory check fails fast after 500ms:
CompletableFuture<Boolean> stockCheck = checkInventoryAsync(itemId)
.orTimeout(500, TimeUnit.MILLISECONDS)
.exceptionally(ex -> {
    // orTimeout completes the future with a TimeoutException directly,
    // but chained stages may wrap it in CompletionException, so check both
    if (ex instanceof TimeoutException || ex.getCause() instanceof TimeoutException) {
        return false; // Assume out-of-stock on timeout
    }
    throw new CompletionException(ex);
});
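When the timeout should yield a default rather than an exception, completeOnTimeout (also Java 9+) is cleaner than pairing orTimeout with exceptionally. A minimal sketch of the same out-of-stock default:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class TimeoutDefault {
    // completeOnTimeout substitutes a default value instead of failing
    public static CompletableFuture<Boolean> withDefault(CompletableFuture<Boolean> check) {
        return check.completeOnTimeout(false, 50, TimeUnit.MILLISECONDS);
    }
}
```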
Parallel Aggregation
Process 100 images concurrently with allOf. Collect results via join() after completion:
List<CompletableFuture<Thumbnail>> thumbnails = imageIds.stream()
.map(id -> generateThumbnailAsync(id))
.toList();
CompletableFuture<Void> allDone = CompletableFuture.allOf(
thumbnails.toArray(new CompletableFuture[0])
);
allDone.thenRun(() -> {
List<Thumbnail> results = thumbnails.stream()
.map(CompletableFuture::join) // Safe since all completed
.toList();
createZipArchive(results);
});
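The allOf-then-join pattern above generalizes into a small reusable helper, since allOf itself returns only Void. A sketch of such a "sequence" utility, a common companion in codebases that use this pattern heavily:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class Futures {
    // Wait for every future, then collect the results in their original order
    public static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> futures) {
        return CompletableFuture
                .allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(done -> futures.stream()
                        .map(CompletableFuture::join) // safe: allOf guarantees completion
                        .toList());
    }
}
```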
Sequential Dependencies
thenCompose chains dependent async operations. Fetch user, then use their ID to get profile:
CompletableFuture<Profile> profileFuture = getUserAsync(userId)
.thenCompose(user -> getProfileAsync(user.getProfileId()));
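The distinction from thenApply is easy to see in a self-contained sketch (profileFor is a hypothetical async lookup standing in for getProfileAsync): applying an async-returning function with thenApply would nest the futures, while thenCompose flattens them:

```java
import java.util.concurrent.CompletableFuture;

public class ComposeVsApply {
    // Hypothetical async lookup standing in for getProfileAsync
    static CompletableFuture<String> profileFor(int userId) {
        return CompletableFuture.supplyAsync(() -> "profile-" + userId);
    }

    public static CompletableFuture<String> flatProfile() {
        CompletableFuture<Integer> user = CompletableFuture.supplyAsync(() -> 7);
        // thenApply here would yield CompletableFuture<CompletableFuture<String>>;
        // thenCompose flattens the nested future into CompletableFuture<String>
        return user.thenCompose(ComposeVsApply::profileFor);
    }
}
```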
Custom Thread Pools
Avoid resource starvation with dedicated pools. For blocking I/O, I use fixed pools:
ExecutorService dbPool = Executors.newFixedThreadPool(10);
CompletableFuture<List<Record>> dbFuture = CompletableFuture.supplyAsync(() -> {
return jdbcTemplate.query("SELECT * FROM logs"); // Blocking call
}, dbPool); // Isolate from CPU-bound tasks
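One detail the snippet above omits is the pool's lifecycle: a dedicated ExecutorService must be shut down or its threads keep the JVM alive. A runnable sketch, with a List.of call standing in for the blocking jdbcTemplate.query:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DbPoolSketch {
    public static List<String> queryOnDedicatedPool() {
        ExecutorService dbPool = Executors.newFixedThreadPool(10);
        try {
            // Stand-in for the blocking JDBC query, isolated on the dedicated pool
            CompletableFuture<List<String>> rows =
                    CompletableFuture.supplyAsync(() -> List.of("row1", "row2"), dbPool);
            return rows.join();
        } finally {
            dbPool.shutdown(); // release the pool's threads when done
        }
    }
}
```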
Manual Completion
Take control for legacy integrations. Complete futures from callback-based libraries:
CompletableFuture<Response> bridge = new CompletableFuture<>();
legacyApi.sendRequest(request, new Callback() {
@Override
public void onSuccess(Response r) { bridge.complete(r); }
@Override
public void onFailure(Exception e) { bridge.completeExceptionally(e); }
});
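Here's the same bridge as a runnable sketch. The legacySend method is a hypothetical callback-style API (simulated with a plain thread); the future is completed manually from inside the callback:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class CallbackBridge {
    // Hypothetical legacy API reporting success via a callback on another thread
    static void legacySend(String request, Consumer<String> onSuccess) {
        new Thread(() -> onSuccess.accept("echo:" + request)).start();
    }

    public static CompletableFuture<String> send(String request) {
        CompletableFuture<String> bridge = new CompletableFuture<>();
        legacySend(request, bridge::complete); // complete manually from the callback
        return bridge;
    }
}
```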
Reactive Cleanup
Use thenAccept/thenRun for side effects. After saving data, notify audit log and release connection:
saveDataAsync(data)
.thenAccept(savedId -> auditLog.log("Created", savedId))
.thenRun(connectionPool::releaseCurrentConnection)
.exceptionally(ex -> {
connectionPool.releaseFailedConnection();
return null;
});
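When the cleanup is identical on both paths, whenComplete avoids duplicating it across thenRun and exceptionally. A minimal sketch of that variant:

```java
import java.util.concurrent.CompletableFuture;

public class CleanupDemo {
    // whenComplete fires on success and failure alike, which suits resource release
    public static <T> CompletableFuture<T> withCleanup(CompletableFuture<T> op, Runnable release) {
        return op.whenComplete((result, ex) -> release.run());
    }
}
```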
These patterns transformed how I design concurrent systems. By treating futures as Lego bricks, I build pipelines that handle failures, respect timeouts, and maximize throughput. The real power emerges when combining techniques, like using custom pools with chained transformations for CPU-heavy workflows. Start simple, add complexity gradually, and always measure performance under load.