Reactive programming changes how we think about data flow and application responsiveness. It moves us away from traditional blocking calls and toward a model where data streams asynchronously, and systems react to changes as they happen. This approach builds applications that are more resilient under load and make better use of computing resources. At the heart of this shift in the Java ecosystem is Project Reactor, a powerful library that implements the Reactive Streams specification.
When I first started working with reactive code, the mental model felt different. Instead of pulling data or waiting for operations to complete, you define a pipeline of operations that process data as it becomes available. This pipeline is non-blocking, meaning your application can continue doing other work instead of sitting idle.
Let’s start with the basic building blocks. In Project Reactor, you work with two main types of publishers: Mono and Flux. A Mono represents a stream that will emit zero or one item, while a Flux represents a stream of zero to many items. Creating these streams is straightforward.
Mono<String> greeting = Mono.just("Hello, World!");
Flux<Integer> numbers = Flux.range(1, 10);
These publishers don’t do anything until you subscribe to them. This lazy evaluation is key to reactive programming—you’re defining what should happen, not necessarily making it happen immediately.
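A quick way to see this laziness is to attach a side effect and watch when it fires; in this sketch the doOnNext logging is purely for illustration:
Flux<Integer> deferred = Flux.range(1, 3)
    .doOnNext(n -> System.out.println("emitting " + n));

// Nothing has printed yet; the pipeline is only a description.
deferred.subscribe(n -> System.out.println("received " + n)); // now it runs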
Transforming data within these streams often involves the map operator. It lets you synchronously transform each element as it passes through the pipeline. The operation is applied to each item individually as it becomes available.
Flux<String> names = Flux.just("alice", "bob", "charlie")
    .map(name -> name.toUpperCase());
This code takes each name and converts it to uppercase. The map operation is simple and immediate—it doesn’t involve any waiting or asynchronous processing.
When you need to work with asynchronous operations, flatMap becomes essential. I’ve found this to be one of the most powerful operators in reactive programming. It allows you to take each element from a stream and transform it into a new publisher, then flatten all these publishers back into a single stream.
Flux<User> users = Flux.just(1, 2, 3)
    .flatMap(id -> userRepository.findById(id));
Here, each ID triggers a separate call to find a user, perhaps a database query that returns a Mono<User>. flatMap subscribes to each of these inner publishers and merges their results back into a single Flux; because the inner calls complete independently, the merged order is not guaranteed to match the order of the IDs (concatMap preserves order if you need it).
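To make that snippet self-contained, here is a sketch with a stand-in lookup; the User record and findUser method are hypothetical placeholders for whatever reactive repository you actually use:
record User(int id, String name) {}

Mono<User> findUser(int id) {
    // Simulate an asynchronous lookup that completes after a short delay.
    return Mono.just(new User(id, "user-" + id))
        .delayElement(Duration.ofMillis(100));
}

Flux<User> users = Flux.just(1, 2, 3)
    .flatMap(id -> findUser(id));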
Combining data from multiple streams is another common requirement. The zip operator is perfect for this scenario. It waits for all the involved publishers to emit an item, then combines them using a provided function.
Mono<String> firstName = Mono.just("John");
Mono<String> lastName = Mono.just("Doe");
Mono<String> fullName = firstName.zipWith(lastName, (first, last) -> first + " " + last);
This approach ensures that we only process data when all necessary components are available. I often use this when I need to combine results from multiple independent operations.
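When more than two sources are involved, the static Mono.zip variant does the same job. In this sketch the three service calls are hypothetical placeholders for independent reactive lookups:
Mono<String> name = userService.findName(42);    // hypothetical reactive calls
Mono<String> city = geoService.findCity(42);
Mono<Integer> score = scoreService.findScore(42);

Mono<String> summary = Mono.zip(name, city, score)
    .map(tuple -> tuple.getT1() + " from " + tuple.getT2() + " scored " + tuple.getT3());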
Error handling in reactive streams requires a different approach than traditional try-catch blocks. Reactive streams provide operators that allow graceful handling of exceptions without breaking the entire pipeline.
Flux<Integer> processedNumbers = Flux.range(1, 10)
    .map(n -> {
        if (n == 5) throw new RuntimeException("Error at 5");
        return n * 2;
    })
    .onErrorResume(e -> Flux.just(99, 100));
The onErrorResume operator switches to a fallback stream when an error occurs. The original sequence still terminates at the failing element (here, nothing after 5 is processed), but the subscriber receives the fallback values instead of the error, so the application can continue with alternative data rather than failing completely.
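Related operators cover the simpler cases: onErrorReturn substitutes a single fallback value, and retry resubscribes to the source before giving up. In this sketch, callFlakyService is a hypothetical method returning a Mono<String>:
Mono<String> withFallback = callFlakyService()
    .onErrorReturn("default value"); // emit one fallback value instead of the error

Mono<String> withRetries = callFlakyService()
    .retry(3)                        // resubscribe up to three times on failure
    .onErrorResume(e -> Mono.just("default value"));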
Backpressure management is a crucial aspect of reactive programming that many developers initially overlook. It’s the mechanism that allows consumers to signal to producers how much data they can handle.
Flux.range(1, 1000)
    .onBackpressureBuffer(100)
    .subscribe(value -> {
        try {
            Thread.sleep(10); // Simulate slow processing
            System.out.println(value);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
In this example, onBackpressureBuffer holds up to 100 pending items whenever downstream demand lags behind production; it earns its keep with sources that emit on their own schedule, such as Flux.interval, since a demand-aware source like range simply produces less. Without backpressure management, a fast producer could overwhelm a slow consumer, leading to memory issues or degraded performance.
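Demand can also be signaled explicitly by subscribing with a BaseSubscriber and requesting items one at a time; the following is a minimal sketch of that pattern:
Flux.range(1, 1000)
    .onBackpressureBuffer(100)
    .subscribe(new BaseSubscriber<Integer>() {
        @Override
        protected void hookOnSubscribe(Subscription subscription) {
            request(1); // ask for the first item only
        }

        @Override
        protected void hookOnNext(Integer value) {
            System.out.println(value);
            request(1); // pull the next item once this one has been handled
        }
    });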
Thread management in reactive programming is explicit through schedulers. This gives you fine-grained control over which operations run on which threads.
Flux.range(1, 10)
    .publishOn(Schedulers.parallel())
    .map(i -> i * 2)
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe(System.out::println);
The publishOn operator moves everything downstream of it onto the specified scheduler. subscribeOn instead controls where the subscription and the source's emissions happen, and it has that effect wherever it appears in the chain. I typically use boundedElastic for blocking operations and parallel for CPU-intensive work.
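An easy way to see the split is to print the current thread name at each stage; this sketch only adds logging to the earlier pipeline:
Flux.range(1, 3)
    .doOnNext(i -> System.out.println("source on " + Thread.currentThread().getName()))
    .publishOn(Schedulers.parallel())
    .map(i -> {
        System.out.println("map on " + Thread.currentThread().getName());
        return i * 2;
    })
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe(i -> System.out.println("subscriber on " + Thread.currentThread().getName()));
The doOnNext runs on the boundedElastic thread chosen by subscribeOn, while the map and the subscriber run on a parallel thread because of publishOn.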
Understanding the difference between hot and cold publishers is important. Cold publishers create a new data source for each subscriber, while hot publishers share a single data source among all subscribers.
ConnectableFlux<Integer> hotNumbers = Flux.range(1, 5)
    .delayElements(Duration.ofSeconds(1))
    .publish();

hotNumbers.connect(); // Starts emitting immediately

// Subscribers added later will miss earlier emissions
Thread.sleep(3000);
hotNumbers.subscribe(System.out::println);
Hot publishers are useful when you want to broadcast real-time data to multiple subscribers, like stock prices or sensor readings.
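For contrast, a cold publisher replays the whole sequence for every subscriber; in this sketch each subscription triggers a fresh run of the source, so both subscribers see all five values:
Flux<Integer> coldNumbers = Flux.range(1, 5);

coldNumbers.subscribe(n -> System.out.println("first subscriber: " + n));
coldNumbers.subscribe(n -> System.out.println("second subscriber: " + n));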
Testing reactive streams requires special consideration. Project Reactor provides StepVerifier to help test your reactive pipelines thoroughly.
StepVerifier.create(Flux.just(1, 2, 3).map(n -> n * 2))
    .expectNext(2)
    .expectNext(4)
    .expectNext(6)
    .expectComplete()
    .verify();
StepVerifier allows you to verify not just the values, but also the timing and completion of your streams. I use it extensively in my tests to ensure streams behave as expected.
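For time-based pipelines, StepVerifier.withVirtualTime swaps in a virtual clock so tests advance time instead of waiting for it; a short sketch:
StepVerifier.withVirtualTime(() -> Mono.delay(Duration.ofHours(1)))
    .expectSubscription()
    .thenAwait(Duration.ofHours(1)) // advance the virtual clock rather than sleeping
    .expectNext(0L)
    .verifyComplete();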
Integrating blocking code with reactive streams is sometimes necessary, especially when working with legacy systems. The key is to isolate the blocking operations on dedicated threads.
Mono.fromCallable(() -> {
        // This is a blocking call
        return traditionalDatabaseQuery();
    })
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe(result -> processResult(result));
The boundedElastic scheduler provides a pool of threads specifically designed for blocking operations. This prevents blocking calls from occupying the limited threads available for non-blocking operations.
Working with Project Reactor has changed how I design systems. The reactive approach encourages thinking in terms of data flow and transformation rather than sequential execution. It requires a shift in mindset, but the benefits in terms of scalability and resource utilization are significant.
The techniques I’ve discussed form a foundation for building responsive applications. They help create systems that can handle more concurrent requests with fewer resources. While there’s a learning curve, the investment pays off in application performance and maintainability.
Remember that reactive programming isn’t always the right solution. It excels in I/O-bound applications and systems that need to handle many concurrent connections. For CPU-bound tasks or simple applications, the traditional imperative approach might be more appropriate.
As you explore these techniques, start small. Convert individual components to reactive patterns rather than trying to rewrite entire systems at once. Test thoroughly and monitor performance to ensure the reactive approach is providing the benefits you expect.
The reactive landscape in Java continues to evolve, with Project Reactor at the forefront. These techniques provide a solid foundation for building modern, responsive applications that can scale to meet growing demands.