How to Build a Rust Web API: 8 Approaches From Bare Metal to Full Frameworks

Learn how to build a Rust web API using 8 proven approaches—from hyper to Axum and Rocket. Find the right method for your project and start building today.

Building a web API can feel like a big task, especially when you’re picking up a new language. Rust, with its focus on speed and safety, is a fantastic choice for this. I remember starting out and feeling overwhelmed by the options. Should I build everything from scratch or use a big framework? Let’s walk through the different ways you can do it, from the ground up to full-featured systems. I’ll show you the code and explain the thinking behind each choice.

We can start at the very beginning, with no framework at all. This means using the hyper library directly. It’s the low-level HTTP library that powers many other tools in Rust. When you use hyper yourself, you handle every part of the request and response cycle. You check the URL path and the method, like GET or POST, and then you manually build the reply.

Here is a simple server that listens on your local machine and responds to two different paths.

// Uses the hyper 0.14 API (Server, make_service_fn); hyper 1.x splits these pieces differently.
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server, StatusCode};

async fn handle_request(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    // Route manually by matching on the method and path.
    let response = match (req.method(), req.uri().path()) {
        (&hyper::Method::GET, "/") => Response::new(Body::from("Home")),
        (&hyper::Method::GET, "/api/data") => Response::builder()
            .header("Content-Type", "application/json")
            .body(Body::from(r#"{"status":"ok"}"#))
            .unwrap(),
        _ => Response::builder()
            .status(StatusCode::NOT_FOUND)
            .body(Body::from("Not Found"))
            .unwrap(),
    };
    Ok(response)
}

#[tokio::main]
async fn main() {
    let addr = ([127, 0, 0, 1], 3000).into();
    // Create one service per connection, each using the same handler function.
    let make_svc = make_service_fn(|_conn| async { Ok::<_, hyper::Error>(service_fn(handle_request)) });
    let server = Server::bind(&addr).serve(make_svc);
    if let Err(e) = server.await { eprintln!("server error: {}", e); }
}

This approach is great for learning. You see exactly how an HTTP server works. It’s also useful if you need very specific control over how connections are managed for top performance. The trade-off is you write more code for common tasks like routing or parsing JSON.

Most of the time, you want a bit more structure without too much overhead. That’s where a framework like axum comes in. It sits on top of hyper and gives you tools to organize your code neatly. You define routes clearly, and it can automatically pull data out of requests for you, a feature called “extractors.” I find this balance to be perfect for most projects.

Setting up a basic API with two routes is straightforward.

use axum::{routing::get, Router, Json};
use serde_json::{json, Value};

async fn root() -> &'static str { "Hello" }
async fn api_data() -> Json<Value> { Json(json!({ "data": [1, 2, 3] })) }

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/", get(root))
        .route("/api/data", get(api_data));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

The router makes it clear what function handles which URL. The Json extractor and response type handle converting Rust data to JSON automatically. It removes a lot of repetitive code while staying fast and predictable.

If your main goal is to build something quickly, a framework with stronger opinions can help. Rocket is a popular choice here. It uses Rust’s macros to make route declaration feel very natural. It has built-in solutions for forms, JSON, and even web page templates. You trade some low-level flexibility for a faster development pace, which is ideal for prototypes.

A simple Rocket application looks like this.

#[macro_use] extern crate rocket;

#[get("/")]
fn index() -> &'static str { "Hello, world!" }

#[get("/user/<id>")]
fn user(id: usize) -> String { format!("User ID: {}", id) }

#[launch]
fn rocket() -> _ {
    rocket::build().mount("/", routes![index, user])
}

The #[get] macro tells Rocket this function is for GET requests. The <id> in the path is a dynamic segment that gets passed to your function. The #[launch] macro creates the server. It feels intuitive and lets you focus on your application’s logic rather than setup.

For teams building an API that others will use, documentation and a clear contract are vital. The poem-openapi library is built for this. You define your API as a Rust trait, and it generates both the server code and a complete OpenAPI specification. This spec can power interactive documentation sites like Swagger UI. It ensures your code and your documentation can never get out of sync.

Here’s how you define a simple endpoint.

use poem::listener::TcpListener;
use poem_openapi::{payload::Json, OpenApi};
use serde_json::Value;

struct Api;

#[OpenApi]
impl Api {
    #[oai(path = "/data", method = "get")]
    async fn get_data(&self) -> Json<Value> {
        Json(serde_json::json!({ "items": [1, 2, 3] }))
    }
}

#[tokio::main]
async fn main() {
    let api_service = poem_openapi::OpenApiService::new(Api, "Demo", "1.0")
        .server("http://localhost:3000");
    let ui = api_service.swagger_ui();
    let app = poem::Route::new().nest("/", api_service).nest("/docs", ui);
    poem::Server::new(TcpListener::bind("127.0.0.1:3000"))
        .run(app)
        .await
        .unwrap();
}

You annotate your methods with #[oai] to define the path and HTTP method. The library takes care of the rest, even hosting the docs at /docs. This method is excellent for public APIs where clarity and stability are as important as functionality.

As your application grows, keeping your code organized becomes critical. A clean architecture pattern helps a lot. The idea is to separate your core business logic from the web layer. Your web framework (like Axum or Rocket) becomes a thin shell. Its only job is to take HTTP requests, call functions in your application’s core, and then package the results back into HTTP responses.

You might have a service layer that knows nothing about the web.

// In a domain module; User derives Serialize so the web layer can return it as JSON
#[derive(serde::Serialize)]
pub struct User { /* ... */ }

pub struct UserService { /* ... */ }
impl UserService {
    pub fn get_user(&self, id: u64) -> Option<User> { /* ... */ }
}

// In a web adapter module using axum
use axum::{extract::Path, http::StatusCode, Extension, Json};
use std::sync::Arc;

async fn get_user_handler(
    Path(user_id): Path<u64>,
    Extension(service): Extension<Arc<UserService>>,
) -> Result<Json<User>, StatusCode> {
    match service.get_user(user_id) {
        Some(user) => Ok(Json(user)),
        None => Err(StatusCode::NOT_FOUND),
    }
}

The handler is just an adapter. It gets the user ID from the request path, asks the UserService for data, and translates the result to an HTTP response. This separation means you could replace Axum with a different web framework later, and your core application logic wouldn’t need to change at all.
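A big payoff of this split is testability: the core compiles and runs with no web stack at all. Here is a self-contained, stdlib-only sketch of that idea; the in-memory HashMap storage is my assumption for illustration, standing in for whatever persistence you actually use.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
pub struct User { pub id: u64, pub name: String }

// Core service: knows nothing about HTTP, axum, or JSON.
pub struct UserService { users: HashMap<u64, User> }

impl UserService {
    pub fn new() -> Self { Self { users: HashMap::new() } }
    pub fn add_user(&mut self, user: User) { self.users.insert(user.id, user); }
    pub fn get_user(&self, id: u64) -> Option<User> { self.users.get(&id).cloned() }
}

fn main() {
    let mut service = UserService::new();
    service.add_user(User { id: 1, name: "Ada".into() });
    // The web adapter would translate Some/None into 200/404;
    // here we exercise the core directly, no server required.
    assert!(service.get_user(1).is_some());
    assert!(service.get_user(2).is_none());
    println!("core logic verified");
}
```

Swapping the HashMap for a real database later changes only the service internals, not the handler's shape.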

Most real applications need to talk to a database. In Rust, sqlx is a powerful library for this. It works asynchronously, which fits perfectly with async web servers. Its standout feature is compile-time checked queries. You write a SQL string, and sqlx will check it against your real database schema when you compile your program. It catches typos in table or column names early.

Here’s how you might fetch a list of posts.

use sqlx::PgPool;
use serde::Serialize;

#[derive(sqlx::FromRow, Serialize)]
struct Post { id: i32, title: String }

async fn get_posts(pool: &PgPool) -> Result<Vec<Post>, sqlx::Error> {
    sqlx::query_as::<_, Post>("SELECT id, title FROM posts LIMIT 10")
        .fetch_all(pool)
        .await
}

// In an Axum route handler
use axum::{Extension, Json};

async fn posts_handler(Extension(pool): Extension<PgPool>) -> Json<Vec<Post>> {
    let posts = get_posts(&pool).await.unwrap_or_default();
    Json(posts)
}

The query_as function maps the database rows directly to your Post struct. The Extension mechanism in Axum is a common way to share resources, like your database connection pool, across all your route handlers.
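Note that get_posts above uses the runtime-checked query_as function; the compile-time checking described earlier comes from sqlx's query! family of macros, which need a DATABASE_URL (or cached offline metadata) available while compiling. A sketch, assuming the same posts table with non-null id and title columns:

```rust
use sqlx::PgPool;

#[derive(serde::Serialize)]
struct Post { id: i32, title: String }

// The macro checks the SQL text and the column-to-field mapping against the
// actual schema at build time; misspelling a column name fails compilation.
async fn get_posts_checked(pool: &PgPool) -> Result<Vec<Post>, sqlx::Error> {
    sqlx::query_as!(Post, "SELECT id, title FROM posts LIMIT 10")
        .fetch_all(pool)
        .await
}
```

The trade-off is a database (or sqlx's offline metadata) in your build environment, in exchange for catching SQL errors before the program ever runs.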

You’ll often need functionality that applies to every request, like logging, authentication, or adding CORS headers. This is called middleware. Most frameworks provide a way to add layers of middleware around your routes. You can write your own or use existing ones from the community.

Let’s write a simple middleware that logs each request and how long it took to process.

use axum::extract::Request;
use axum::middleware::{self, Next};
use axum::response::Response;

async fn log_middleware(req: Request, next: Next) -> Response {
    println!("Request: {} {}", req.method(), req.uri().path());
    let start = std::time::Instant::now();
    // Pass the request down the chain, then time the full round trip.
    let response = next.run(req).await;
    println!("Response took: {:?}", start.elapsed());
    response
}

// Apply it to your router
let app = Router::new()
    .route("/", get(|| async { "Hi" }))
    .layer(middleware::from_fn(log_middleware));

The middleware function receives the request, can do work (like logging), then calls next.run(req) to pass the request down the chain to your actual route handler. It gets the response back, can do more work (like timing), and then returns it. This pattern is powerful for keeping cross-cutting concerns separate from your business logic.
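For the CORS case mentioned at the start of this section, you usually reach for an existing layer rather than writing your own. Here is a sketch assuming the tower-http crate (with its cors feature enabled) alongside axum; the permissive Any settings are for illustration only.

```rust
use axum::{routing::get, Router};
use tower_http::cors::{Any, CorsLayer};

#[tokio::main]
async fn main() {
    // Permissive CORS for illustration; restrict origins and methods in production.
    let cors = CorsLayer::new().allow_origin(Any).allow_methods(Any);

    let app = Router::new()
        .route("/api/data", get(|| async { "ok" }))
        .layer(cors);

    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

Because middleware composes as layers, the logging function above and this CORS layer can be stacked on the same router.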

Finally, you need to get your API running somewhere other than your laptop. Rust has a huge advantage here: it compiles to a single binary, and if you target musl, that binary is fully static, with no external dependencies, so you can run it on a bare-minimum server. The deployment process is very simple.

You can build your project for a specific target system.

# Add the musl target once, then build a fully static release binary for Linux
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl

# The resulting binary is self-contained and ready to run
./target/x86_64-unknown-linux-musl/release/my_api

For consistency, especially with a team, packaging the binary in a Docker container is common. The Dockerfile can be extremely small because you don’t need to install any system libraries or a runtime.

# Start from a tiny base image
FROM alpine:latest

# Copy the statically linked musl binary built above
# (a default glibc build would not run on Alpine)
COPY target/x86_64-unknown-linux-musl/release/my_api /usr/local/bin/

# Tell Docker what command to run
CMD ["/usr/local/bin/my_api"]

You build the Docker image, push it to a registry, and then can run it anywhere Docker is installed. This workflow is reliable and easy to automate.
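If you would rather not depend on a binary built on the host, a multi-stage Dockerfile does the build inside the image. This is a sketch assuming the crate's binary is named my_api, matching the commands above; on the Alpine-based Rust image, cargo targets musl by default.

```dockerfile
# Stage 1: build inside a Rust image so the host needs no toolchain
FROM rust:alpine AS builder
RUN apk add --no-cache musl-dev
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: copy only the finished binary into a tiny runtime image
FROM alpine:latest
COPY --from=builder /app/target/release/my_api /usr/local/bin/my_api
CMD ["/usr/local/bin/my_api"]
```

The final image contains just the binary and the base OS, so it stays small and reproducible regardless of what is installed on the machine that ran docker build.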

Each of these eight methods exists for a reason. The right one for you depends on what you’re building. If you need ultimate control and are willing to manage the details, start with hyper. For a great blend of performance and ease, axum is a strong default. If speed of development is your main concern, try Rocket. For API-first projects with strict contracts, poem-openapi is a powerful tool.

No matter which you pick, patterns like clean architecture, safe database access with sqlx, and strategic use of middleware will help your project stay organized and robust. And when you’re done, Rust’s deployment story is refreshingly simple. You have a spectrum of choices, from the bare metal to the fully featured, all within the same safe and fast language.
