Have you ever run a load test only to be flooded with “Connection is not available, request timed out” errors? In most cases, the root cause is shipping HikariCP’s default configuration straight to production.
Spring Boot uses HikariCP by default, but those defaults are conservative settings that prioritize “just working.” Production traffic will always require tuning. In this article, we’ll walk through the key parameters, what they mean, and how to arrive at appropriate values.
Why HikariCP
Before Spring Boot 2.0, the default connection pool was Tomcat JDBC. The switch to HikariCP in 2.0 was driven primarily by its overwhelming benchmark performance and lightweight codebase. Its internal design minimizes locking, allowing it to maintain low latency even under high-throughput conditions.
That said, “fast” does not mean “zero configuration required.” The right mindset is to treat the defaults as not production-ready from the start.
How Connection Pool Exhaustion Happens
Creating a database connection is surprisingly expensive. Establishing a TCP connection, authenticating, and initializing a session can take longer than executing the SQL itself. A connection pool pre-creates these connections and lends them out as requests arrive.
The problem occurs when borrowed connections are slow to return — due to slow queries or connection leaks — causing the pool to run dry. New requests then wait up to connectionTimeout before receiving a timeout exception. This is “DB connection exhaustion.”
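Leaks in particular are worth surfacing early. HikariCP has a leakDetectionThreshold property for exactly this (it is separate from the six core parameters covered below); a minimal sketch, with an illustrative threshold:

```yaml
# Illustrative: log a warning, with the stack trace of the borrowing code,
# when a connection is held for more than 60 seconds without being returned.
# 0 disables the check; HikariCP enforces a minimum of 2000ms when enabled.
spring:
  datasource:
    hikari:
      leak-detection-threshold: 60000
```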
Understanding the Key Parameters
There are many configurable parameters, but understanding these six will cover the vast majority of cases.
| Parameter | Default | Role |
|---|---|---|
| maximumPoolSize | 10 | Maximum number of connections the pool maintains |
| minimumIdle | Same as maximumPoolSize | Minimum number of idle connections to keep alive |
| connectionTimeout | 30000ms | Maximum wait time to acquire a connection |
| idleTimeout | 600000ms | Time before an idle connection is discarded |
| maxLifetime | 1800000ms | Maximum lifetime of a connection |
| keepaliveTime | 0 (disabled) | Interval for connection liveness checks |
The HikariCP documentation explicitly recommends setting minimumIdle equal to maximumPoolSize to operate as a fixed-size pool. Allowing the idle count to fluctuate adds management complexity.
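In Spring Boot terms, a fixed-size pool is simply the two properties set to the same value (the size itself is illustrative):

```yaml
# Fixed-size pool: idle and maximum are identical, so the pool never shrinks
spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 20
```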
How to Calculate maximumPoolSize Correctly
It’s tempting to think “bigger is safer,” but this can actually backfire. The HikariCP official wiki “About Pool Sizing” provides the following formula:
```
connections = (core_count * 2) + effective_spindle_count
```
effective_spindle_count refers to the number of spinning disks; for SSDs or RDS, a value of 1 is generally used. Note that core_count in this formula is the database server’s core count, not the application server’s. For a 4-core database server: 4 * 2 + 1 = 9, meaning around 10 is a reasonable starting point.
This is a guideline, however. In practice, base the value on your application’s actual concurrent thread count. Tomcat’s default maximum thread count is 200, but not all threads are hitting the database simultaneously. The realistic approach is to measure and adjust based on observed pool utilization (hikaricp.connections.active).
One more thing not to overlook: the DB-side max_connections. When running multiple application server instances, if instance_count × maximumPoolSize exceeds the database limit, the DB will start rejecting connections. Design with the total connection count in mind.
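For MySQL, the DB-side check is a one-liner plus some arithmetic; the instance counts and limits below are illustrative:

```sql
-- Check the DB-side limit (MySQL)
SHOW VARIABLES LIKE 'max_connections';

-- Example budget (illustrative numbers):
--   5 instances × maximumPoolSize 20 = 100 ≤ max_connections 150 → OK, with headroom
--   5 instances × maximumPoolSize 40 = 200 > 150 → the DB starts rejecting connections
```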
Failures Caused by Misconfiguring connectionTimeout and idleTimeout
If connectionTimeout is too short, legitimate requests will time out during high load. If it’s too long, threads block for extended periods and the entire server backs up. The default of 30 seconds is too long for most cases — tightening it to around 3–5 seconds is recommended.
For idleTimeout and maxLifetime, pay attention to the DB-side connection timeout (MySQL’s wait_timeout, which defaults to 8 hours). If maxLifetime is set longer than the DB’s wait_timeout, HikariCP may hold a connection that the DB has already closed. The next time that connection is used, a SQLException will be thrown.
As a rule of thumb, set maxLifetime to several tens of seconds shorter than the DB’s wait_timeout. For RDS, where wait_timeout is 8 hours, the default 30-minute maxLifetime is usually fine — but in environments with short TCP timeouts at the firewall level, also enable keepaliveTime.
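The DB-side value is easy to confirm before choosing maxLifetime; a MySQL sketch (values illustrative):

```sql
-- wait_timeout is reported in seconds (e.g., 28800 = 8 hours)
SHOW VARIABLES LIKE 'wait_timeout';

-- 28800 s = 28,800,000 ms, so the default max-lifetime of 1800000 ms (30 min) is safe.
-- If wait_timeout were 600 s, max-lifetime would need to stay well under 600000 ms.
```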
Configuration Example for application.yml
Getting started is straightforward: at a minimum, revisit maximumPoolSize and connectionTimeout.
```yaml
# Minimal configuration
spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      connection-timeout: 3000
```
Here is the recommended configuration for production. Making values overridable via environment variables is a good practice.
```yaml
# Recommended configuration
spring:
  datasource:
    url: ${DB_URL}
    username: ${DB_USER}
    password: ${DB_PASSWORD}
    hikari:
      # Calculated based on core count, thread count, and DB max_connections
      maximum-pool-size: ${HIKARI_MAX_POOL_SIZE:20}
      # Same value as maximum-pool-size, so the pool runs at a fixed size
      minimum-idle: ${HIKARI_MAX_POOL_SIZE:20}
      # Fail fast under high load instead of blocking threads for the 30s default
      connection-timeout: 3000
      # Set shorter than DB wait_timeout (e.g., 28800s)
      max-lifetime: 1800000
      # Discard idle connections after 10 minutes
      # (no effect while minimum-idle equals maximum-pool-size, but harmless)
      idle-timeout: 600000
      # Enable for disconnection detection behind firewalls
      keepalive-time: 60000
      pool-name: MyAppPool
```
The configuration key prefix spring.datasource.hikari.* is the same for both Spring Boot 2.x and 3.x.
Before and After Comparison
The following illustrates results from a load scenario with 100 concurrent client requests, 20% of which trigger slow queries (averaging 2 seconds).
| Configuration | maximumPoolSize | connectionTimeout | Timeout Error Rate |
|---|---|---|---|
| Default | 10 | 30000ms | ~40% |
| Tuned | 20 | 3000ms | ~3% |
| Over-provisioned | 100 | 3000ms | ~8% (throughput degraded) |
Over-provisioning increases thread contention and context switching on the DB side, actually reducing throughput. “Bigger is safer” is precisely what the HikariCP documentation argues against.
Monitoring Pool Usage with Actuator
After tuning, verify how the pool is actually behaving. Adding the spring-boot-starter-actuator dependency (which pulls in Micrometer transitively) exposes HikariCP metrics.
```xml
<!-- pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```
```yaml
# Enable the metrics endpoint
management:
  endpoints:
    web:
      exposure:
        include: metrics
```
/actuator/metrics/hikaricp.connections.active shows the number of connections currently in use, and hikaricp.connections.pending shows the number of threads waiting for one. If the pending count is consistently above zero, it’s a signal to increase maximumPoolSize. For detailed visualization using Prometheus and Grafana, refer to Spring Boot Observability Setup (Micrometer + Prometheus + Grafana).
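Before reaching for dashboards, querying the endpoint directly is enough for a quick check; a curl sketch, assuming the app runs on localhost:8080 (output abridged, values illustrative):

```bash
# How many threads are currently waiting for a connection?
curl -s http://localhost:8080/actuator/metrics/hikaricp.connections.pending
# {"name":"hikaricp.connections.pending",
#  "measurements":[{"statistic":"VALUE","value":0.0}], ...}
```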
Configuration Review Checklist
When auditing your project’s configuration, work through the following:
- Is maximumPoolSize still at the default (10) in production?
- Is maxLifetime longer than the DB’s wait_timeout?
- Is connectionTimeout still at the default 30000ms?
- Is there a reasonable balance between Tomcat’s max-threads and maximumPoolSize?
- In a multi-instance setup, does instance_count × poolSize exceed the DB’s max_connections?
Conclusion
HikariCP’s default configuration is sufficient for development, but production traffic always warrants a review. The safe approach is to start by adjusting maximumPoolSize and connectionTimeout to reflect actual conditions, then refine incrementally while monitoring metrics via Actuator.
For JPA query optimization, see Spring Data JPA Performance Optimization. For using HikariCP alongside Virtual Threads, see Java 21 Virtual Threads with Spring Boot. Once your connection pool configuration is stable, cleaning up your transaction management will make incident diagnosis significantly easier.