Getting a Java application from code on your machine to a running service for users involves many steps. Compile the code, run tests, package dependencies, and prepare it for different environments. Doing this manually every time is slow and prone to mistakes. That’s where build automation comes in. It’s like writing a recipe that your computer can follow to prepare your software, the same way, every single time.
When you combine this automated recipe with a continuous integration and continuous delivery (CI/CD) pipeline, the process becomes seamless. Every change you make can be automatically built, tested, and even deployed. This isn’t just about convenience; it’s about building software with confidence. I want to share some practical methods that have helped me create builds that are fast, reliable, and ready for production.
Let’s start with the structure of a project. Large applications are rarely a single block of code. They’re divided into modules—like a core library, a service layer, and a web interface. Building everything from scratch each time you change one line in the service module is a waste of time. You need a smarter build.
With Maven, you can build only what you need. Imagine your project has a parent file and several modules listed inside it. You can instruct Maven to build just the service module and any other module it depends on.
<!-- This is in the main pom.xml file. It lists the children. -->
<modules>
  <module>core</module>
  <module>service</module>
  <module>web</module>
</modules>
From the command line, you don’t always run mvn clean install for the whole project. Instead, you can be precise.
# Builds only the 'service' module and its dependencies ('core')
mvn clean install -pl service -am
The -pl flag tells Maven which project (module) to build. The -am flag means “also make” its dependencies. This means if you’re working on the service logic, you can compile and test it in seconds, not minutes. Your development loop becomes much tighter and more efficient.
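Two related, standard Maven flags round out this workflow: -amd ("also make dependents") builds the modules that depend on the one you name, and -rf resumes a partially failed multi-module build from a given module instead of starting over.

```shell
# Rebuild 'core' plus everything that depends on it (service, web)
mvn clean install -pl core -amd

# Resume a failed reactor build starting from the 'service' module
mvn clean install -rf :service
```

The -amd flag is the mirror image of -am: useful after changing a low-level module, when you want to verify that nothing downstream broke.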
As your build logic grows, you might find yourself repeating the same tasks in multiple projects. Copying and pasting configuration is a recipe for drift and errors. A better approach is to create your own build plugin. This packages your common tasks into a single, reusable component.
In Gradle, you can do this easily by creating a buildSrc directory. This folder is special. Gradle automatically compiles any code in here and makes it available to your main build script.
Let’s say every Java application you build needs to generate a simple text file with the current version number. Instead of writing this task in every project, you create a plugin.
// This file is: buildSrc/src/main/groovy/com/mycompany/MyJavaPlugin.groovy
package com.mycompany

import org.gradle.api.Plugin
import org.gradle.api.Project

class MyJavaPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.with {
            apply plugin: 'java'
            def versionTask = tasks.register('generateVersionFile') {
                doLast {
                    // This creates a file in the build output
                    def outFile = new File(buildDir, 'version.txt')
                    outFile.parentFile.mkdirs()
                    outFile.text = "Version: ${project.version}"
                }
            }
            // Ensure this runs before the JAR is created
            tasks.named('jar') { dependsOn versionTask }
        }
    }
}
Now, in your actual project’s build.gradle, applying your custom work is one line.
apply plugin: com.mycompany.MyJavaPlugin
Suddenly, all your projects have the same standard behavior. You can use this for custom code formatting checks, adding license headers to files, or processing configuration templates. It keeps your main build files clean and consistent.
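If you prefer applying the plugin by id rather than by class name, Gradle's standard plugin-descriptor mechanism works inside buildSrc too: add a properties file whose filename becomes the plugin id (the id shown here is just an example, matching the package above).

```properties
# buildSrc/src/main/resources/META-INF/gradle-plugins/com.mycompany.java-conventions.properties
implementation-class=com.mycompany.MyJavaPlugin
```

With that descriptor in place, a project can write `plugins { id 'com.mycompany.java-conventions' }`, which reads more like applying any other off-the-shelf plugin.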
Writing code is one thing; writing good, maintainable code is another. Tools like Checkstyle, PMD, and SpotBugs exist to scan your code for potential problems, style violations, or bug patterns. The trick is to make these checks automatic, so they can’t be forgotten.
You integrate them directly into your build lifecycle. If a check finds serious problems, the build fails. This acts as a gate, preventing problematic code from moving forward. In Maven, you bind the tool to a phase such as verify, which runs after the package phase but before install, so a violation stops the artifact before it ever reaches a repository.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>3.2.2</version>
  <executions>
    <execution>
      <!-- Run during the 'verify' phase -->
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- Which rule set to use -->
    <configLocation>google_checks.xml</configLocation>
    <!-- Stop the build if there are violations -->
    <failOnViolation>true</failOnViolation>
  </configuration>
</plugin>
Now, when a developer runs mvn verify locally or when the CI server runs it, the code is automatically inspected. This moves code quality from a manual, periodic review to a continuous, enforced standard.
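For the rare occasion when you need to bypass the gate locally, say on a throwaway spike branch, the Checkstyle plugin honors a skip property. Use it sparingly; the CI server will still enforce the rules on push.

```shell
# Skip Checkstyle for one local run only; CI still runs the full check
mvn verify -Dcheckstyle.skip=true
```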
Your application behaves differently in development, testing, and production. It connects to different databases, uses different API keys, and might have different features turned on. Hard-coding these settings is not an option. You need a way to inject the right configuration based on the environment.
Maven profiles solve this. A profile is a set of configuration that is only activated under certain conditions. You can define properties, resources, and even dependencies that are specific to, say, a production environment.
<profiles>
  <!-- This profile is active by default, for local development -->
  <profile>
    <id>development</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <database.url>jdbc:h2:mem:testdb</database.url>
    </properties>
  </profile>
  <!-- This profile is activated with the '-P production' flag -->
  <profile>
    <id>production</id>
    <properties>
      <database.url>jdbc:postgresql://prod-db:5432/myapp</database.url>
    </properties>
    <build>
      <!-- It can even use a completely different set of config files -->
      <resources>
        <resource>
          <directory>src/main/resources-prod</directory>
        </resource>
      </resources>
    </build>
  </profile>
</profiles>
To build a production package, you simply add -P production to your Maven command. The build tool swaps in the correct settings. This keeps environment-specific details out of your main code and under the control of the build process.
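To get those properties into the application itself, one common companion technique is Maven resource filtering: enable filtering on the resource directory, and Maven replaces ${...} placeholders in your config files at build time (the file name and property below are illustrative).

```xml
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <!-- Replace ${...} placeholders in these files during the build -->
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
```

With a line like `database.url=${database.url}` in src/main/resources/application.properties, running `mvn package -P production` bakes the PostgreSQL URL into the packaged file.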
Waiting for a build to finish is frustrating, especially when you’ve only changed one small file. Modern build tools understand this and support incremental builds. The core idea is simple: each task declares what it needs (its inputs) and what it produces (its outputs). If neither has changed since the last run, the task is skipped.
Gradle excels at this. You must design your custom tasks to declare these inputs and outputs properly.
tasks.register('processTemplates') {
    // Input 1: The project version property
    inputs.property("version", project.version)
    // Input 2: A directory of template files
    inputs.dir(file("src/main/templates"))
    // Output: The directory where generated files go
    outputs.dir(layout.buildDirectory.dir("generated/resources"))
    doLast {
        // Your logic to read templates, inject the version, and write files
        println "Processing templates for version ${project.version}"
    }
}
The first time you run ./gradlew processTemplates, it executes the doLast block. If you run it again without changing the version or any file in the src/main/templates directory, Gradle will report the task as “UP-TO-DATE” and skip it entirely. This makes development feedback loops incredibly fast.
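The bookkeeping behind that skip can be sketched in plain Java. This is an illustration of the idea, not Gradle's actual implementation: fingerprint the declared inputs, and run the action only when the fingerprint differs from the one recorded on the previous run.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative up-to-date check: a "task" runs only when the hash of its
// declared inputs differs from the hash recorded by the previous run.
public class UpToDateCheck {

    // Fingerprint the inputs: here, a version string plus template text.
    static String fingerprint(String version, String templateText) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(version.getBytes());
        md.update(templateText.getBytes());
        return HexFormat.of().formatHex(md.digest());
    }

    // Returns "ran" if the task executed, "UP-TO-DATE" if it was skipped.
    static String runIfChanged(Path marker, String version, String templateText) throws Exception {
        String current = fingerprint(version, templateText);
        if (Files.exists(marker) && Files.readString(marker).equals(current)) {
            return "UP-TO-DATE";            // nothing changed since last run
        }
        // ... real work would happen here (process templates, write outputs) ...
        Files.writeString(marker, current); // record fingerprint for next run
        return "ran";
    }

    // Three consecutive "builds": first run, unchanged re-run, changed input.
    public static String demo() throws Exception {
        Path marker = Files.createTempFile("uptodate", ".sha256");
        Files.delete(marker); // start with no recorded fingerprint
        String first = runIfChanged(marker, "1.0", "Version: ${version}");
        String second = runIfChanged(marker, "1.0", "Version: ${version}");
        String third = runIfChanged(marker, "1.1", "Version: ${version}");
        return first + "," + second + "," + third;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints: ran,UP-TO-DATE,ran
    }
}
```

Gradle's real machinery is far more sophisticated (it hashes file contents, tracks outputs, and caches across machines), but the core contract is the same: declare inputs and outputs, and the tool can prove when work is unnecessary.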
Keeping your project’s dependencies up-to-date is a chore, but it’s critical for security and access to new features. Manually scouring websites for new versions is not a good use of time. Your build tool can help.
In Gradle, a plugin like refreshVersions can manage this for you. You declare your dependencies with a placeholder version, and the plugin maintains a separate file with the actual version numbers.
First, you apply the plugin in your settings.gradle.kts file.
plugins {
    id("de.fayard.refreshVersions") version "0.60.3"
}
Then, in your build.gradle.kts, you use an underscore as the version placeholder.
dependencies {
    implementation("com.squareup.okhttp3:okhttp:_")
}
The plugin keeps a versions.properties file. When you want to check for updates, you run a simple command.
./gradlew refreshVersions
The plugin will check for newer versions and update the properties file. You can review the changes, test them, and commit. It turns a manual, error-prone task into a quick, automated check.
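For a sense of what the plugin maintains, a versions.properties entry looks roughly like this (the exact key format is generated by the plugin's own naming rules, and the version shown is illustrative):

```properties
# versions.properties — maintained by the refreshVersions plugin
version.com.squareup.okhttp3..okhttp=4.12.0
```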
A build produces more than just a JAR or WAR file. It generates valuable information: test results, code coverage reports, dependency licenses, and API documentation (Javadoc). This information should be created automatically and made available to the team.
With Maven, the mvn site command can generate a whole project website. You configure plugins in the reporting section of your pom.xml.
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-project-info-reports-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
Running mvn site generates a target/site directory full of HTML reports. The real power comes when your CI/CD pipeline runs this command and then publishes the resulting site to an internal web server. Suddenly, you have a live, always-updated dashboard showing the health of your project, available to every developer and manager.
If you work in an organization that produces many related Java libraries, version management becomes a challenge. You need to ensure that Project A and Project B are using compatible versions of your internal common-utils library. A Bill of Materials (BOM) is the solution.
A BOM is a special kind of project that doesn’t contain code. Instead, it contains a list of dependencies with their recommended versions. Other projects import the BOM and can then depend on those libraries without specifying a version.
Here’s a simplified BOM pom.xml:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany</groupId>
  <artifactId>platform-bom</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.mycompany.internal</groupId>
        <artifactId>common-utils</artifactId>
        <version>2.5.0</version>
      </dependency>
      <dependency>
        <groupId>com.mycompany.internal</groupId>
        <artifactId>data-client</artifactId>
        <version>1.8.0</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
A consumer project imports this BOM in its dependencyManagement section.
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.mycompany</groupId>
      <artifactId>platform-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- No version needed! It's taken from the BOM. -->
  <dependency>
    <groupId>com.mycompany.internal</groupId>
    <artifactId>common-utils</artifactId>
  </dependency>
</dependencies>
This creates a single source of truth for versions, making your ecosystem much easier to manage and upgrade.
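Gradle projects can consume the same BOM through the standard platform() dependency (shown here in the Kotlin DSL):

```kotlin
dependencies {
    // Import the BOM: versions for the listed libraries come from it
    implementation(platform("com.mycompany:platform-bom:1.0.0"))

    // No version needed; it is supplied by the BOM
    implementation("com.mycompany.internal:common-utils")
}
```

This means a single BOM can coordinate versions across teams regardless of which build tool each project uses.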
The pipeline that builds, tests, and deploys your code is just as important as the code itself. It shouldn’t be a mysterious series of clicks in a web UI. It should be defined as code, stored in your repository right next to your source files.
This is often called “Pipeline-as-Code.” GitHub Actions, GitLab CI, and Jenkinsfile are all examples. You write the pipeline steps in a YAML or Groovy file.
Here’s a basic GitHub Actions workflow for a Java project:
name: Java Build and Test
on: [push, pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4
      - name: Setup Java 21
        uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'
      - name: Cache Maven Dependencies
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          key: maven-${{ hashFiles('**/pom.xml') }}
      - name: Run Maven Verify
        run: mvn -B clean verify
      - name: Upload Test Reports on Failure
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: test-reports
          path: target/surefire-reports
This file, placed in .github/workflows/, means every push and pull request triggers an identical build process. Developers can see the pipeline definition, propose changes to it, and understand exactly why a build failed. It brings the same collaboration and version control benefits to your infrastructure as to your application code.
The final artifact of a modern Java application is often a Docker image. You could build your JAR and then have a separate Dockerfile to copy it in, but that splits your process. A more integrated approach is to have your build tool create the Docker image directly.
Tools like the Jib plugin for Maven and Gradle do this beautifully. Jib builds optimized Docker images without requiring you to write a Dockerfile or even have Docker installed on your machine.
Here’s how you configure it in Maven:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>my-registry.company.com/my-team/app</image>
      <tags>
        <tag>${project.version}</tag>
        <tag>latest</tag>
      </tags>
    </to>
    <container>
      <ports>
        <port>8080</port>
      </ports>
    </container>
  </configuration>
</plugin>
To build and push the image to a registry, you run a Maven goal.
mvn compile jib:build
Jib uses the layers of your application (dependencies, resources, classes) to create a Docker image that is efficient and fast to push. Because it’s part of the build, the image is always built from the exact JAR that was just tested. This completes the automated path from a developer’s commit to a ready-to-run container in a registry.
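When you want to smoke-test the image locally before pushing it anywhere, Jib can also build straight into a local Docker daemon instead of a remote registry:

```shell
# Build the image into the local Docker daemon (no push), then run it
mvn compile jib:dockerBuild
docker run --rm -p 8080:8080 my-registry.company.com/my-team/app:latest
```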
These methods are not just isolated tricks. They connect to form a robust, automated pathway for your software. It starts with a fast, modular build on a developer’s machine, enforced with quality gates. It moves through a standardized, versioned pipeline in the cloud. It ends with a packaged, deployable artifact that is consistent and traceable. The result is a process that lets you focus on writing code, confident that the steps to ship it are reliable, repeatable, and fast.