Strategic Crate Partitioning
Large crates force the compiler to treat them as a single compilation unit, so small changes trigger disproportionately large rebuilds. I split projects into focused crates. When working on a data processing tool, separating the core logic from the CLI and GUI interfaces cut build times by 40%. Cargo compiles independent crates in parallel.
// Before (monolithic structure):
// my_app/
//   src/
//     lib.rs          (200 modules)
//
// After partitioning:
// core_logic/
//   src/lib.rs        (shared functions)
// cli_tool/
//   src/main.rs       (command handling)
// web_api/
//   src/main.rs       (HTTP endpoints)
Keep crates cohesive but minimal. I limit crate sizes to under 5,000 lines where possible. Dependency graphs shrink dramatically when crates have single responsibilities.
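As a minimal sketch of how the partitioned layout wires together (the crate names mirror the tree above; the manifests are otherwise assumptions), each front end pulls the shared crate in by path:

# Workspace root Cargo.toml
[workspace]
members = ["core_logic", "cli_tool", "web_api"]

# cli_tool/Cargo.toml
[dependencies]
core_logic = { path = "../core_logic" }

Because cli_tool and web_api depend only on core_logic, Cargo can build both front ends in parallel once the shared crate is compiled, and edits to command handling no longer touch the HTTP endpoints.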
Dependency Pruning
Unused dependencies silently bloat builds. I run cargo tree --depth 1 weekly to audit imports. One project had 30 unused transitive dependencies costing 90 seconds per build.
# Before:
[dependencies]
reqwest = { version = "0.11", features = ["json", "stream"] }

# After pruning:
[dependencies]
reqwest = { version = "0.11", default-features = false, features = ["json"] }
Disable default features aggressively. In my network monitor tool, disabling tokio’s full feature set saved 23% compilation time; a sketch of that kind of pruning follows. Use cargo udeps (it runs on the nightly toolchain) to detect hidden unused dependencies.
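A rough sketch of the tokio case; the exact feature list is an assumption, and the right set is whatever your binary actually uses instead of the catch-all full feature:

[dependencies]
# Before: tokio = { version = "1", features = ["full"] }
tokio = { version = "1", features = ["rt-multi-thread", "net", "time", "macros"] }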
Incremental Build Configuration
Rust’s incremental compilation caches intermediate artifacts. I configure it globally in ~/.cargo/config.toml:
[build]
incremental = true
rustc-wrapper = "sccache"
Combine with sccache for distributed caching. My team shares compilation caches across CI and development machines. After setting up sccache with AWS S3, initial builds dropped from 15 minutes to 3 minutes. Remember to exclude large generated files from caches.
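As a sketch of the S3 wiring (assuming sccache’s S3 backend; SCCACHE_BUCKET and SCCACHE_REGION are its documented settings, and the bucket name here is made up), the variables can be exported in CI shells or set through Cargo’s [env] table:

# In ~/.cargo/config.toml, alongside the [build] section above:
[env]
SCCACHE_BUCKET = "team-build-cache"   # hypothetical bucket name
SCCACHE_REGION = "us-east-1"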
Workspace Build Order Control
Cargo derives build order from the dependency graph between workspace members, not from the order of the members list, and it parallelizes wherever that graph allows. I keep the graph shallow and document the layering in the workspace manifest:
[workspace]
members = [
    "base_types",   # Fundamental structs
    "data_parsers", # Depends on base_types
    "api_server"    # Depends on both
]
Keeping the graph shallow keeps the parallel compilation pipeline full. For a compiler project, restructuring crates by dependency depth reduced build spikes by 35%. Use cargo build --timings to visualize the critical path.
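The ordering itself comes from each member’s declared dependencies; a sketch for the two downstream crates above (paths assumed from the layout):

# data_parsers/Cargo.toml
[dependencies]
base_types = { path = "../base_types" }

# api_server/Cargo.toml
[dependencies]
base_types = { path = "../base_types" }
data_parsers = { path = "../data_parsers" }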
Feature Flag Isolation
Heavy features should be opt-in. I gate resource-intensive modules:
// In audio_engine/src/lib.rs:
#[cfg(feature = "spatial_audio")]
pub mod binaural_processor {
    // 3D audio DSP algorithms
}

# In audio_engine/Cargo.toml:
[features]
spatial_audio = []
In my game engine, conditional compilation of physics simulations saved 18 seconds per debug build. Test feature-gated code separately with cargo test --features spatial_audio.
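Features pay off most when they also gate heavy dependencies. A sketch, assuming a hypothetical hrtf_dsp crate that only the spatial path needs (the dep: syntax requires a reasonably recent Cargo):

[dependencies]
hrtf_dsp = { version = "0.3", optional = true }  # hypothetical heavy DSP crate

[features]
spatial_audio = ["dep:hrtf_dsp"]

A plain cargo build then skips compiling the DSP crate entirely; cargo build --features spatial_audio pulls it back in.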
Build Script Optimization
Complex build scripts trigger unnecessary recompilation. I cache expensive operations:
// build.rs
fn main() {
    let data_path = "processed/assets.bin";
    // Expensive conversion runs only when the cached output is missing; comparing
    // modification times instead would also catch edits to existing raw assets.
    if !std::path::Path::new(data_path).exists() {
        convert_assets(); // Project-specific conversion step, defined elsewhere in build.rs
    }
    // Rerun this script only when the raw asset inputs change.
    println!("cargo:rerun-if-changed=assets/raw");
}
For a graphics project, this reduced asset processing from 47 seconds to 0.3 seconds after the initial build. Always specify rerun-if-changed precisely; pointing it at overly broad directories causes overbuilding.
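A sketch of the precise form, with hypothetical file names standing in for the script’s real inputs:

// In build.rs: list the exact files the script reads rather than a whole directory tree.
fn main() {
    println!("cargo:rerun-if-changed=assets/raw/palette.png");
    println!("cargo:rerun-if-changed=assets/raw/mesh_table.csv");
}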
Link-Time Optimization Tuning
LTO improves runtime performance but harms build speed. I configure profiles separately:
[profile.dev]
opt-level = 0
lto = "off"
codegen-units = 16

[profile.release]
opt-level = 3
lto = "thin"
During active development, I disable LTO completely. For final builds, thin LTO provides 80% of fat LTO’s gains with 50% less compile time. Measure tradeoffs with perf on critical paths.
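When a true final build warrants fat LTO, a custom profile keeps it out of the everyday release path; a sketch (the profile name dist is arbitrary, and custom profiles need a reasonably recent Cargo):

[profile.dist]
inherits = "release"
lto = "fat"
codegen-units = 1

Build it only when cutting a release with cargo build --profile dist, and leave day-to-day release builds on thin LTO.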
Macro Usage Discipline
Procedural macros are expensive: the macro crate and its dependencies (typically syn and quote) must compile before anything that uses them, and every expansion adds work to each build. I use declarative macros for simple boilerplate:
// Instead of a proc macro:
// #[generate_getters]

// Declarative alternative:
macro_rules! generate_getters {
    ($struct:ident { $($field:ident: $ty:ty),* }) => {
        impl $struct {
            $(pub fn $field(&self) -> &$ty { &self.$field })*
        }
    };
}

// The struct itself is still written by hand:
struct User {
    name: String,
    id: u64,
}

generate_getters!(User {
    name: String,
    id: u64
});
After refactoring a derive-heavy configuration crate, compile times improved by 28%. Reserve procedural macros for complex code generation that can’t be expressed otherwise.
Applying these techniques cumulatively transformed my workflow. A medium-sized project (~20k LOC) now builds in 1.2 minutes instead of 6.5 minutes. Start with dependency audits and crate partitioning; they yield the most immediate gains. Remember that optimizations compound: each 10% reduction accelerates the entire development loop. Profile builds with cargo build --timings to identify your specific bottlenecks. What took hours now finishes during coffee breaks, letting us focus on solving problems instead of waiting.