Let’s talk about how to write programs that do many things at once in Rust. You’ve probably written code that runs one line after another. That’s synchronous code. It’s like waiting in a single, slow line at a coffee shop. Async programming is different. It’s like giving your order and then stepping aside to let the next person order while your coffee is being made. The shop can handle many people at once, even with just one barista.
Rust gives us tools to write this kind of code using async and await. When you mark a function with async, you’re saying, “This function might need to wait for something, like reading from a network or a file.” It doesn’t do the work immediately. Instead, it returns a Future. Think of a Future as a ticket with a promise that the result will be ready later. The .await keyword is how you hand over that ticket and say, “I’ll pause here until my coffee (or data) is ready.”
Here’s what that looks like in its simplest form.
async fn fetch_url(url: &str) -> Result<String, reqwest::Error> {
    let response = reqwest::get(url).await?;
    response.text().await
}

#[tokio::main]
async fn main() {
    match fetch_url("https://example.com").await {
        Ok(body) => println!("Fetched {} bytes", body.len()),
        Err(e) => eprintln!("Failed: {}", e),
    }
}
The fetch_url function is async. It uses .await twice: first to wait for the network request to finish, and then to wait for the response body to be read. The #[tokio::main] part sets up the engine, or runtime, that drives all these paused and resumed tasks. Without a runtime, your async functions don’t actually run.
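For the curious, #[tokio::main] is only a convenience. Roughly speaking, it expands into an ordinary main that builds a runtime and blocks on your async code. A sketch of the equivalent, reusing fetch_url from above:
fn main() {
    // Build a Tokio runtime by hand, then drive the async code to completion.
    let runtime = tokio::runtime::Runtime::new().expect("failed to build runtime");
    runtime.block_on(async {
        match fetch_url("https://example.com").await {
            Ok(body) => println!("Fetched {} bytes", body.len()),
            Err(e) => eprintln!("Failed: {}", e),
        }
    });
}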
Now, let’s get into the patterns. The first big problem you’ll hit is waiting. What if you’re waiting for one thing, but something else happens first? Or what if something takes too long? You can’t just sit there forever. You need to listen for several things at once and react to the first one that’s ready.
This is where select comes in. It’s like telling the barista, “I’m waiting for my latte, but I’m also keeping an eye on the door for my friend.” Whichever happens first—the latte arriving or my friend walking in—gets my immediate attention. In code, tokio::select! lets you wait on multiple futures at once and runs the branch for whichever completes first.
use tokio::time::{sleep, timeout, Duration};

async fn unreliable_task() -> &'static str {
    // Random delay of up to 2 seconds, so the 1-second timeout sometimes fires.
    sleep(Duration::from_millis(rand::random::<u64>() % 2000)).await;
    "result"
}

async fn with_timeout() -> Result<&'static str, tokio::time::error::Elapsed> {
    timeout(Duration::from_secs(1), unreliable_task()).await
}

#[tokio::main]
async fn main() {
    tokio::select! {
        result = with_timeout() => {
            match result {
                Ok(s) => println!("Task succeeded: {}", s),
                Err(_) => println!("Task timed out"),
            }
        }
        _ = sleep(Duration::from_secs(2)) => {
            println!("Fallback after 2 seconds");
        }
    }
}
Here, we run a task with a random delay of up to two seconds and give it a 1-second timeout using timeout. The select! macro watches two things: the result of that timed task, and a simple 2-second sleep. Since timeout guarantees the first branch resolves within one second, either with the result or with an Elapsed error, the fallback branch never actually fires here; it’s included to show the shape of a select! with multiple branches. The key is that the future not selected is dropped, which cancels it. This automatic cleanup is very important.
Often, you don’t want to choose just one future; you need to run many of them together and wait for all of them. Imagine you need to fetch data from three different websites before you can build a webpage. You could fetch them one after another, but that’s slow. You want to fetch them all at the same time.
For this, we use join!. It takes several futures and runs them side-by-side, returning a tuple of all their results when they’re all done. There’s a handy variant called try_join! that works with futures that return Result. It has a useful feature: if any of the futures returns an error, try_join! stops waiting for the others and returns that error immediately.
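Here is the macro form with a fixed number of futures; step_a and step_b are hypothetical async functions standing in for real work:
async fn step_a() -> Result<u32, String> {
    Ok(1)
}

async fn step_b() -> Result<u32, String> {
    Ok(2)
}

async fn run_both() -> Result<(u32, u32), String> {
    // Both futures make progress concurrently; the first error, if any,
    // short-circuits and is returned immediately.
    let (a, b) = tokio::try_join!(step_a(), step_b())?;
    Ok((a, b))
}
The macros need each future written out at compile time; for a whole collection of futures built at runtime, there’s try_join_all: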
use futures::future::try_join_all;

async fn fetch_multiple(urls: &[&str]) -> Result<Vec<String>, reqwest::Error> {
    let fetches = urls.iter().map(|&url| reqwest::get(url));
    let responses = try_join_all(fetches).await?;
    let texts = try_join_all(responses.into_iter().map(|r| r.text())).await?;
    Ok(texts)
}
In this example, try_join_all (from the futures crate) is like try_join! but for a variable number of futures in a collection. We first try_join_all the network requests, then try_join_all the reading of each response body. This does all the network calls concurrently, then all the reading concurrently, which is much faster than doing them in sequence.
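Calling it might look like this (the URLs are placeholders):
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let urls = ["https://example.com", "https://www.rust-lang.org"];
    let bodies = fetch_multiple(&urls).await?;
    for (url, body) in urls.iter().zip(&bodies) {
        println!("{}: {} bytes", url, body.len());
    }
    Ok(())
}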
Sometimes data doesn’t come all at once. It trickles in. Think of a chat application where messages arrive one by one, or a sensor sending readings every second. For this, we use a Stream. A Stream is like an asynchronous version of an Iterator. Instead of calling .next(), which runs to completion before returning, you call .next().await, which yields control until the next item is available.
use tokio_stream::{wrappers::IntervalStream, Stream, StreamExt};
use tokio::time::{interval, Duration};

async fn process_stream<S>(mut stream: S)
where
    S: Stream<Item = i32> + Unpin,
{
    while let Some(value) = stream.next().await {
        println!("Received: {}", value);
    }
}

#[tokio::main]
async fn main() {
    // Interval is not itself a Stream; IntervalStream wraps it so the
    // StreamExt adapters (map, take) become available.
    let interval_stream = IntervalStream::new(interval(Duration::from_secs(1)))
        .map(|_| rand::random::<i32>())
        .take(5);
    process_stream(interval_stream).await;
}
We create a stream from a timer that “ticks” every second. On each tick, we map it to a random number. The .take(5) means we only want five of these numbers. The process_stream function uses a while let loop to keep pulling items from the stream until it ends. Streams are powerful for modeling any source of asynchronous events.
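For instance, a stream can also wrap a channel receiver, a common way to model events pushed from another part of your program. This sketch reuses process_stream from above and wraps a tokio mpsc channel in ReceiverStream (from the tokio-stream crate):
use tokio_stream::wrappers::ReceiverStream;

#[tokio::main]
async fn main() {
    let (tx, rx) = tokio::sync::mpsc::channel::<i32>(16);

    // A producer task pushes a few events, then drops the sender,
    // which ends the stream.
    tokio::spawn(async move {
        for i in 0..3 {
            tx.send(i).await.unwrap();
        }
    });

    process_stream(ReceiverStream::new(rx)).await;
}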
A common pitfall in async programming is doing too much at once. If you spawn 10,000 tasks that all need to talk to a database, you might overwhelm it. You need a way to limit how many things happen concurrently. This is where a Semaphore comes in. You can think of it as a club with a limited number of wristbands. Only tasks holding a wristband can enter the club (access the resource). When they leave, they give back the wristband.
use std::sync::Arc;
use tokio::sync::Semaphore;
use tokio::time::Duration;

async fn limited_task(semaphore: Arc<Semaphore>, id: u32) {
    let _permit = semaphore.acquire().await.unwrap();
    println!("Task {} started", id);
    tokio::time::sleep(Duration::from_secs(1)).await;
    println!("Task {} finished", id);
    // Permit is released automatically when dropped
}

#[tokio::main]
async fn main() {
    let semaphore = Arc::new(Semaphore::new(3)); // Allow 3 concurrent tasks
    let mut handles = vec![];
    for i in 0..10 {
        let sem = Arc::clone(&semaphore);
        handles.push(tokio::spawn(limited_task(sem, i)));
    }
    for handle in handles {
        handle.await.unwrap();
    }
}
We create a Semaphore with 3 permits. Ten tasks are spawned, but each one must first acquire a permit. Only three can get one initially. The others will wait at the .await point. When a task finishes and the _permit is dropped, a slot opens up, and the next waiting task can proceed. This pattern is essential for controlling load on shared resources.
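As a side note, when the tasks come from a collection, the futures crate offers a stream-based alternative for capping concurrency. This sketch (the function name limited_with_streams is made up) uses StreamExt::buffer_unordered with a limit of 3 to mirror the semaphore example:
use futures::stream::{self, StreamExt};
use std::time::Duration;

async fn limited_with_streams() {
    stream::iter(0..10u32)
        .map(|id| async move {
            println!("Task {} started", id);
            tokio::time::sleep(Duration::from_secs(1)).await;
            println!("Task {} finished", id);
        })
        .buffer_unordered(3) // At most 3 of these futures run at a time
        .collect::<Vec<()>>()
        .await;
}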
As your code gets more complex, you’ll want to write reusable pieces of logic. Rust lets you write closures that return futures (typically a closure whose body is an async block), which opens the door to higher-order functions: functions that take async operations as arguments. This is great for creating utilities like a retry mechanism.
use std::time::Duration;

async fn retry<F, Fut, T, E>(mut action: F, max_attempts: usize) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
    E: std::fmt::Debug,
{
    for attempt in 1..=max_attempts {
        match action().await {
            Ok(value) => return Ok(value),
            Err(e) if attempt == max_attempts => return Err(e),
            Err(e) => {
                eprintln!("Attempt {} failed: {:?}, retrying...", attempt, e);
                // Exponential backoff: 2, 4, 8... seconds.
                tokio::time::sleep(Duration::from_secs(1 << attempt)).await;
            }
        }
    }
    unreachable!("max_attempts must be at least 1")
}
This retry function takes an action (an async closure that returns a Result) and a maximum number of tries. It calls the action. If it succeeds, it returns the value. If it fails, it logs the error, waits for an exponentially increasing amount of time (1 << attempt means 2, 4, 8 seconds…), and tries again. If all attempts fail, it returns the final error. You can now wrap any fallible async operation with retry(|| async { my_operation().await }, 5). It neatly separates the “what to do” from the “how to handle failures.”
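To make that concrete, here is a hypothetical usage with a deliberately flaky operation that fails twice before succeeding (the counter and error string are made up for illustration):
use std::sync::atomic::{AtomicU32, Ordering};

#[tokio::main]
async fn main() {
    let attempts = AtomicU32::new(0);
    let attempts = &attempts;
    let result = retry(
        move || async move {
            // Fail on the first two calls, succeed on the third.
            if attempts.fetch_add(1, Ordering::SeqCst) < 2 {
                Err("not ready yet")
            } else {
                Ok("done")
            }
        },
        5,
    )
    .await;
    println!("{:?}", result); // Ok("done") after two logged failures
}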
In a world where tasks can be cancelled at any time (like when a select! branch is dropped), you must clean up after yourself. If a task opens a file or a network connection, simply stopping the future might leave that resource open. Rust’s Drop trait is your friend here. You can create a wrapper type that, when dropped, sends a signal to cancel the ongoing work and clean up resources.
use tokio::time::Duration;

struct CancellableTask {
    handle: tokio::task::JoinHandle<()>,
    // oneshot::Sender::send consumes the sender, but Drop only gives us
    // &mut self, so the sender lives in an Option we can take() from.
    cancel_sender: Option<tokio::sync::oneshot::Sender<()>>,
}

impl CancellableTask {
    fn new() -> Self {
        let (cancel_sender, cancel_receiver) = tokio::sync::oneshot::channel();
        let handle = tokio::spawn(async move {
            tokio::select! {
                _ = async {
                    // Simulate long-running work
                    tokio::time::sleep(Duration::from_secs(10)).await;
                    println!("Work completed successfully.");
                } => {},
                _ = cancel_receiver => {
                    println!("Task was cancelled, cleaning up...");
                }
            }
        });
        Self { handle, cancel_sender: Some(cancel_sender) }
    }
}

impl Drop for CancellableTask {
    fn drop(&mut self) {
        if let Some(sender) = self.cancel_sender.take() {
            // Ignore the result: the task may have already completed.
            let _ = sender.send(());
        }
    }
}
When you create a CancellableTask, it spawns a new task that does some long work. Inside that task, a select! waits either for the work to finish, or for a cancellation signal. The CancellableTask struct holds the handle to that spawned task and a sender for the cancellation signal (wrapped in an Option, since sending consumes the sender and Drop only has &mut self). The magic is in the Drop implementation: when the CancellableTask struct is dropped (because it went out of scope, or was part of a cancelled future), it automatically sends the cancellation signal. The spawned task then immediately branches to the cleanup code. This gives you graceful, automatic cancellation.
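One more thing worth knowing: the tokio-util crate ships a ready-made primitive for exactly this, CancellationToken. A minimal sketch, assuming tokio-util is among your dependencies (the timings are arbitrary):
use tokio_util::sync::CancellationToken;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let child = token.child_token();

    let task = tokio::spawn(async move {
        tokio::select! {
            _ = child.cancelled() => println!("Cancelled, cleaning up..."),
            _ = sleep(Duration::from_secs(10)) => println!("Work completed."),
        }
    });

    sleep(Duration::from_millis(100)).await;
    token.cancel(); // The spawned task takes the cleanup branch.
    task.await.unwrap();
}
Tokens form a hierarchy: cancelling a parent cancels its child tokens, so one cancel() can fan out across a whole tree of tasks.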
Finally, not everything in the world is async. Sometimes you have to use a library that blocks, or you have a chunk of heavy computation. If you run that directly in an async task, you’ll block the entire runtime thread, stopping all other tasks. The solution is to move that work to a separate thread pool dedicated to blocking operations. In Tokio, you do this with spawn_blocking.
use std::time::Duration;

async fn compute_heavy_async(input: Vec<u8>) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let result = tokio::task::spawn_blocking(move || {
        // Expensive, synchronous computation
        println!("Starting heavy computation on a blocking thread...");
        std::thread::sleep(Duration::from_secs(2)); // Simulating work
        input.into_iter().map(|b| b.wrapping_add(1)).collect::<Vec<u8>>()
    }).await?;
    Ok(result)
}
The closure inside spawn_blocking is moved to a background thread where it can block without issue. The spawn_blocking call itself returns a future that you can .await. This future completes when the blocking work is done on that other thread. It’s a bridge between the async and synchronous worlds.
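From the caller’s side it looks like any other async function; the blocking detail stays hidden:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let output = compute_heavy_async(vec![1, 2, 3]).await?;
    println!("{:?}", output); // [2, 3, 4]
    Ok(())
}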
Putting it all together, these patterns form a toolkit. You start with async/await for the basic pause-and-resume model. You use select! to race futures and handle timeouts. You use join! and try_join! for running several futures concurrently. You model ongoing events with Streams. You protect limited resources with Semaphores. You write reusable logic with future-returning closures and higher-order functions. You ensure resources are freed with careful Drop implementations. And you offload blocking work with spawn_blocking.
The goal isn’t to use every pattern in every project. It’s to know they exist. When you face a problem—like “I need to limit calls to this API”—you’ll remember the semaphore pattern. When you need a cleanup action, you’ll think of the Drop trait. These patterns help you structure code that is not only fast and concurrent but also clear and reliable.