1. Configuration
1.1. test/resources/

1.1.1. application-test.yml
```yaml
spring:
  jpa:
    hibernate:
      ddl-auto: none # Let the generated schema.sql do this (from Prisma)
    show-sql: true
    properties:
      hibernate:
        format_sql: true
        dialect: org.hibernate.dialect.PostgreSQLDialect
        jdbc:
          batch_size: 20
        use_sql_comments: false # Don't add comments to show what Hibernate is doing
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 2
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
      auto-commit: false # Disable auto-commit; transactions are managed explicitly (see below)

logging:
  level:
    org.hibernate.SQL: DEBUG
    org.hibernate.type.descriptor.sql.BasicBinder: TRACE
    org.hibernate.engine.transaction: DEBUG # Log transaction details
    org.springframework.transaction: DEBUG # Log Spring transaction management
    org.springframework.orm.jpa: DEBUG # Log JPA operations
    org.testcontainers: INFO
    com.scriptmanager: DEBUG
    com.zaxxer.hikari: DEBUG # Log connection pool issues
```
`datasource.hikari.auto-commit` is set to false on purpose, because our `commandInvoker` already executes within transactions managed by `TransactionTemplate` (which we have defined in Part 1).
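As a reminder of why this matters, the invoker from Part 1 is programmatically transactional, roughly like the following sketch (the `Command` interface and the `dispatch` body are placeholders for the real definitions from Part 1; only the `TransactionTemplate` usage is the point here):

```kotlin
import org.springframework.stereotype.Component
import org.springframework.transaction.PlatformTransactionManager
import org.springframework.transaction.support.TransactionTemplate

// Sketch only: Command<R> and dispatch() stand in for the real definitions in Part 1.
interface Command<R>

@Component
class CommandInvoker(transactionManager: PlatformTransactionManager) {
    private val transactionTemplate = TransactionTemplate(transactionManager)

    // Every command runs inside an explicit transaction, which is why
    // HikariCP's auto-commit must stay disabled in the test profile.
    fun <R : Any> invoke(command: Command<R>): R =
        transactionTemplate.execute { dispatch(command) }!!

    private fun <R> dispatch(command: Command<R>): R =
        TODO("route the command to its handler (defined in Part 1)")
}
```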
1.1.2. junit-platform.properties
```properties
spring.test.constructor.autowire.mode=all
```
This enables constructor injection in tests; without it, only `@Autowired` field injection can achieve dependency injection in a `@SpringBootTest`.
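For example, with the property set the following compiles and injects without any `@Autowired` (the class name is illustrative):

```kotlin
import com.scriptmanager.repository.EventRepository
import org.springframework.boot.test.context.SpringBootTest

// Illustrative class name; EventRepository is the project's own repository.
@SpringBootTest
class ConstructorInjectionDemoTest(
    private val eventRepository: EventRepository // injected via constructor, no @Autowired needed
) {
    // Without junit-platform.properties the same dependency would require:
    // @Autowired private lateinit var eventRepository: EventRepository
}
```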
1.1.3. schema.sql
`schema.sql` is used to instantiate all the tables when our test container is launched. It is basically a SQL script consisting of `CREATE ... IF NOT EXISTS` statements.
Since I have been using Prisma, we introduce how to produce such a SQL script from Prisma in 〈2. Convert schema.prisma into a schema.sql to Init all Tables〉.
1.2. Gradle Dependencies
```kotlin
dependencies {
    // Testcontainers
    testImplementation("org.testcontainers:testcontainers:1.19.3")
    testImplementation("org.testcontainers:postgresql:1.19.3")
    testImplementation("org.springframework.boot:spring-boot-testcontainers")

    // Jackson for JSON
    testImplementation("com.fasterxml.jackson.module:jackson-module-kotlin")

    // JUnit 5
    testImplementation("org.springframework.boot:spring-boot-starter-test")
}
```
1.3. ~/.testcontainers.properties
This config file lives at the user level instead of the project level, as it directly influences how Testcontainers on this machine (macOS in my case) interacts with the Docker engine.
```properties
testcontainers.reuse.enable=true
docker.client.strategy=org.testcontainers.dockerclient.UnixSocketClientProviderStrategy
```
1.3.1. testcontainers.reuse.enable=true
- Technically this is a hard requirement: Testcontainers makes reuse an explicit user-level opt-in to prevent accidental container accumulation.
- Without it, Testcontainers will ignore the `withReuse(true)` in our `TestcontainersConfiguration`.
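For context, this is the flag being referred to (the full class appears in 〈3. The TestcontainersConfiguration Class〉):

```kotlin
import org.testcontainers.containers.PostgreSQLContainer
import org.testcontainers.utility.DockerImageName

// withReuse(true) is only honored when testcontainers.reuse.enable=true is set
// in ~/.testcontainers.properties; otherwise the container is discarded per run.
val container = PostgreSQLContainer(DockerImageName.parse("postgres:15-alpine"))
    .withReuse(true)
```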
1.3.2. docker.client.strategy=org.testcontainers.dockerclient.UnixSocketClientProviderStrategy
1.3.2.1. Problem
Testcontainers needs to communicate with Docker, but there are multiple ways:
- Unix socket (macOS/Linux): docker.sock
- Named pipes (Windows)
- TCP connection (remote Docker)
- Docker Desktop on macOS with specific socket locations
docker.client.strategy tells Testcontainers which method to use. Without this setting, Testcontainers tries multiple strategies in order, which can:
- Add 5-10 seconds of delay on startup
- Fail if auto-detection picks the wrong strategy
1.3.2.2. Solution
Setting `docker.client.strategy` as above forces Testcontainers to use the Unix socket strategy, which:
- connects directly to docker.sock (or Docker Desktop's socket);
- is the most reliable option on macOS with Docker Desktop;
- avoids the auto-detection issues that can cause delays or failures.
Other common strategies:
- `UnixSocketClientProviderStrategy`: Unix socket (macOS/Linux)
- `DockerMachineClientProviderStrategy`: Docker Machine (legacy)
- `EnvironmentAndSystemPropertyClientProviderStrategy`: uses the `DOCKER_HOST` env var
- `NpipeSocketClientProviderStrategy`: named pipes (Windows)
Conclusion. It speeds up test startup and ensures a reliable Docker connection by skipping auto-detection.
2. Convert schema.prisma into a schema.sql to Init all Tables
This script will:
- Read our Prisma file and execute `npx prisma migrate diff` to produce a `.sql` file;
- Translate SQLite-specific default expressions into their PostgreSQL equivalents (see `convert_to_postgresql`); this step can be ignored when we are already on PostgreSQL (the conversion script will simply str-substitute nothing);
- Rearrange the order of the table creations to prevent resources being created in an incorrect sequence (e.g., an index created before its table exists).
```bash
#!/bin/bash

# Prisma Schema Converter: SQLite to PostgreSQL
# Converts and reorders models based on dependencies

# Project root that contains `prisma/` directory
PRISMA_PROJECT_ROOT="/Users/chingcheonglee/Repos/rust/2025-10-27-shell-script-manager-tauri/src-tauri"
# SQL file to be copied into the test resource directory of the Spring Boot project
SQL_FILE_DESTINATION="/Users/chingcheonglee/Repos/rust/2025-10-27-shell-script-manager-tauri/backend-spring/src/test/resources/schema.sql"

INPUT_FILE="$PRISMA_PROJECT_ROOT/prisma/schema.prisma"
OUTPUT_FILE="$PRISMA_PROJECT_ROOT/prisma/schema_postgresql.prisma"

echo "Converting Prisma schema from SQLite to PostgreSQL..."
echo "Input: $INPUT_FILE"
echo "Output: $OUTPUT_FILE"
echo ""

python3 - "$INPUT_FILE" "$OUTPUT_FILE" << 'END_PYTHON'
import re
import sys
from collections import defaultdict, deque
from typing import Dict, List, Set, Tuple

def parse_prisma_schema(content: str) -> Tuple[str, List[Dict]]:
    """Parse Prisma schema and extract models"""

    # Extract header (generator, datasource)
    header_match = re.search(r'^(.*?)(?=model\s+\w+)', content, re.DOTALL)
    header = header_match.group(1) if header_match else ""

    # Find all models
    model_pattern = r'(model\s+(\w+)\s*\{[^}]*\})'
    models = []

    for match in re.finditer(model_pattern, content, re.DOTALL):
        model_text = match.group(1)
        model_name = match.group(2)
        models.append({
            'name': model_name,
            'text': model_text,
            'dependencies': set()
        })

    return header, models

def extract_dependencies(model: Dict) -> Set[str]:
    """Extract foreign key dependencies from a model"""
    dependencies = set()

    # Find all relation lines
    for line in model['text'].split('\n'):
        if '@relation' in line:
            # Extract the referenced model type
            # Pattern: model_name @relation(...)
            type_match = re.search(r'(\w+)\s+@relation', line)
            if type_match:
                ref_model = type_match.group(1)
                dependencies.add(ref_model)

    return dependencies

def topological_sort(models: List[Dict]) -> List[Dict]:
    """Sort models based on their dependencies using topological sort"""

    # Build dependency graph
    for model in models:
        model['dependencies'] = extract_dependencies(model)

    # Create a mapping of model names to model objects
    model_map = {m['name']: m for m in models}

    # Calculate in-degrees
    in_degree = {m['name']: 0 for m in models}
    for model in models:
        for dep in model['dependencies']:
            if dep in in_degree:
                in_degree[model['name']] += 1

    # Find all nodes with no incoming edges
    queue = deque([name for name, degree in in_degree.items() if degree == 0])
    sorted_models = []

    while queue:
        model_name = queue.popleft()
        sorted_models.append(model_map[model_name])

        # Reduce in-degree for dependent models
        for other_model in models:
            if model_name in other_model['dependencies']:
                in_degree[other_model['name']] -= 1
                if in_degree[other_model['name']] == 0:
                    queue.append(other_model['name'])

    # Check for cycles
    if len(sorted_models) != len(models):
        print("⚠️ Warning: Circular dependencies detected!")
        sorted_names = {m['name'] for m in sorted_models}
        for model in models:
            if model['name'] not in sorted_names:
                sorted_models.append(model)

    return sorted_models

def convert_to_postgresql(model_text: str) -> str:
    """Convert SQLite-specific syntax to PostgreSQL"""

    # Replace SQLite julianday with PostgreSQL epoch
    model_text = re.sub(
        r'@default\(dbgenerated\("?\(CAST\(\(julianday\(\'now\'\)\s*-\s*2440587\.5\)\s*\*\s*86400000\.0\s+AS\s+REAL\)\)"?\)\)',
        '@default(dbgenerated("ROUND(extract(epoch from NOW()::TIMESTAMPTZ) * 1000, 0)::float"))',
        model_text
    )

    # Replace SQLite strftime with PostgreSQL TO_CHAR
    model_text = re.sub(
        r'@default\(dbgenerated\("?\(strftime\(\'%Y-%m-%d %H:%M:%S\',\s*datetime\(\'now\',\s*\'\+8 hours\'\)\)\)"?\)\)',
        '@default(dbgenerated("TO_CHAR((NOW()::TIMESTAMPTZ AT TIME ZONE \'UTC\' AT TIME ZONE \'GMT+8\'), \'YYYY-MM-DD HH24:MI:SS\')"))',
        model_text
    )

    return model_text

def convert_datasource(header: str) -> str:
    """Convert datasource from SQLite to PostgreSQL"""

    datasource_pattern = r'datasource\s+db\s*\{[^}]*\}'

    new_datasource = '''datasource db {
  provider = "postgresql"
  url = env("DATABASE_URL")
}'''

    header = re.sub(datasource_pattern, new_datasource, header, flags=re.DOTALL)

    return header

def main():
    if len(sys.argv) < 3:
        print("Usage: script.sh <input_file> <output_file>")
        sys.exit(1)

    input_file = sys.argv[1]
    output_file = sys.argv[2]

    print(f"Reading Prisma schema from: {input_file}")

    try:
        with open(input_file, 'r') as f:
            content = f.read()
    except FileNotFoundError:
        print(f"❌ Error: File '{input_file}' not found!")
        sys.exit(1)

    # Parse schema
    print("Parsing Prisma schema...")
    header, models = parse_prisma_schema(content)
    print(f"Found {len(models)} models")

    # Convert datasource
    print("Converting datasource to PostgreSQL...")
    header = convert_datasource(header)

    # Convert each model to PostgreSQL
    print("Converting SQLite syntax to PostgreSQL...")
    for model in models:
        model['text'] = convert_to_postgresql(model['text'])

    # Sort models by dependencies
    print("Reordering models based on dependencies...")
    sorted_models = topological_sort(models)

    # Print dependency order
    print("\nModel creation order:")
    for i, model in enumerate(sorted_models, 1):
        deps = model['dependencies']
        deps_str = f" → depends on: {', '.join(sorted(deps))}" if deps else ""
        print(f" {i:2d}. {model['name']}{deps_str}")

    # Build output
    output_lines = [header.rstrip()]
    output_lines.append("")
    output_lines.append("// " + "=" * 76)
    output_lines.append("// Models ordered by dependencies (base models first)")
    output_lines.append("// " + "=" * 76)
    output_lines.append("")

    for model in sorted_models:
        output_lines.append(model['text'])
        output_lines.append("")

    output_content = '\n'.join(output_lines)

    # Write output file
    print(f"\nWriting converted schema to: {output_file}")
    with open(output_file, 'w') as f:
        f.write(output_content)

    print("✅ Conversion complete!")

if __name__ == "__main__":
    main()

END_PYTHON

echo ""
echo "Generating SQL migration from PostgreSQL schema..."
cd "$PRISMA_PROJECT_ROOT"

# Generate SQL from the PostgreSQL schema
npx prisma migrate diff \
    --from-empty \
    --to-schema-datamodel prisma/schema_postgresql.prisma \
    --script > temp_schema.sql

echo "Converting to final SQL format..."

# Do final SQL conversions
python3 << 'PYTHON_SCRIPT'
import re

with open('temp_schema.sql', 'r') as f:
    sql_content = f.read()

# SQLite to PostgreSQL conversions
sql_content = re.sub(
    r'INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT',
    'SERIAL PRIMARY KEY',
    sql_content
)

sql_content = re.sub(
    r'BIGINT NOT NULL PRIMARY KEY AUTOINCREMENT',
    'BIGSERIAL PRIMARY KEY',
    sql_content
)

sql_content = re.sub(r'\bREAL\b', 'DOUBLE PRECISION', sql_content)

sql_content = re.sub(
    r"\(CAST\(\(julianday\('now'\) - 2440587\.5\) \* 86400000\.0 AS REAL\)\)",
    "ROUND(extract(epoch from NOW()::TIMESTAMPTZ) * 1000, 0)::float",
    sql_content
)

sql_content = re.sub(
    r"\(CAST\(\(julianday\('now'\) - 2440587\.5\) \* 86400000\.0 AS DOUBLE PRECISION\)\)",
    "ROUND(extract(epoch from NOW()::TIMESTAMPTZ) * 1000, 0)::float",
    sql_content
)

sql_content = re.sub(
    r"\(strftime\('%Y-%m-%d %H:%M:%S', datetime\('now', '\+8 hours'\)\)\)",
    "TO_CHAR((NOW()::TIMESTAMPTZ AT TIME ZONE 'UTC' AT TIME ZONE 'GMT+8'), 'YYYY-MM-DD HH24:MI:SS')",
    sql_content
)

with open('temp_schema.sql', 'w') as f:
    f.write(sql_content)

PYTHON_SCRIPT

echo ""
echo "Moving SQL file to Spring test resources..."
mv temp_schema.sql "$SQL_FILE_DESTINATION"

echo ""
echo "✅ Complete!"
echo "  - PostgreSQL Prisma schema: $OUTPUT_FILE"
echo "  - SQL migration file: $SQL_FILE_DESTINATION"
echo ""
echo "Next steps:"
echo "  1. Review the PostgreSQL schema at $OUTPUT_FILE"
echo "  2. Review the SQL migration at $SQL_FILE_DESTINATION"
echo "  3. Set DATABASE_URL environment variable"
echo "  4. Run: cd $PRISMA_PROJECT_ROOT && npx prisma migrate dev --name init"
```
3. The TestcontainersConfiguration Class
3.1. Implementation
From lines 39-50 we check whether a test container is being reused.
- If a schema already exists (the test container is being reused), we simply truncate all tables, wiping existing data to restore the database to a fresh state.
- Otherwise, we apply the `schema.sql` file to create all tables.
```
  1  // src/test/kotlin/com/scriptmanager/config/TestcontainersConfiguration.kt
  2  package com.scriptmanager.config
  3
  4  import org.springframework.boot.test.context.TestConfiguration
  5  import org.springframework.boot.testcontainers.service.connection.ServiceConnection
  6  import org.springframework.context.annotation.Bean
  7  import org.springframework.core.io.ClassPathResource
  8  import org.testcontainers.containers.PostgreSQLContainer
  9  import org.testcontainers.utility.DockerImageName
 10  import java.sql.DriverManager
 11  import java.time.Duration
 12
 13
 14  @TestConfiguration(proxyBeanMethods = false)
 15  class TestcontainersConfiguration {
 16
 17      /**
 18       * PostgreSQL container that will be shared across all tests.
 19       * Using singleton pattern to avoid spinning up multiple containers.
 20       *
 21       * Applies Prisma schema from src-tauri/prisma/schema.prisma if available.
 22       */
 23      @Bean
 24      @ServiceConnection
 25      fun postgresContainer(): PostgreSQLContainer<*> {
 26          val container = PostgreSQLContainer(DockerImageName.parse("postgres:15-alpine"))
 27              .withDatabaseName("testdb")
 28              .withUsername("test")
 29              .withPassword("test")
 30              .withStartupTimeout(Duration.ofMinutes(2))
 31              .withReuse(true) // Reuse container across test runs for faster execution
 32
 33          container.start()
 34          printConnectionInfo(container)
 35
 36          println()
 37
 38          // Check if schema already exists (for container reuse)
 39          if (schemaExists(container)) {
 40              println("✓ Schema already exists - skipping migration (container reuse)")
 41              println(" Truncating all tables to clear test data...")
 42              truncateAllTables(container)
 43              verifySchema(container)
 44          } else {
 45              println(" Applying schema from schema.sql file (first time)...")
 46              applySchemaFromFile(container)
 47              println("✓ Schema applied successfully!")
 48              println()
 49              verifySchema(container)
 50          }
 51
 52          println()
 53
 54          return container
 55      }
 56
 57      /**
 58       * Truncates all tables to clear data while preserving schema.
 59       * This is called before each Spring context creation to ensure test isolation.
 60       */
 61      private fun truncateAllTables(container: PostgreSQLContainer<*>) {
 62          try {
 63              DriverManager.getConnection(
 64                  container.jdbcUrl,
 65                  container.username,
 66                  container.password
 67              ).use { connection ->
 68                  connection.autoCommit = false
 69                  try {
 70                      connection.createStatement().use { statement ->
 71                          // Get all table names
 72                          val tables = mutableListOf<String>()
 73                          val rs = statement.executeQuery(
 74                              "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"
 75                          )
 76                          while (rs.next()) {
 77                              tables.add(rs.getString("tablename"))
 78                          }
 79                          rs.close()
 80
 81                          if (tables.isNotEmpty()) {
 82                              // TRUNCATE ... CASCADE also empties tables referencing these via foreign keys
 83                              val tableList = tables.joinToString(", ") { "\"$it\"" }
 84                              statement.execute("TRUNCATE TABLE $tableList RESTART IDENTITY CASCADE")
 85                              connection.commit()
 86                              println(" ✓ Truncated ${tables.size} table(s): ${tables.joinToString(", ")}")
 87                          } else {
 88                              println(" ℹ️ No tables to truncate")
 89                          }
 90                      }
 91                  } catch (e: Exception) {
 92                      connection.rollback()
 93                      throw e
 94                  }
 95              }
 96          } catch (e: Exception) {
 97              println(" [!!] Could not truncate tables: ${e.message}")
 98              e.printStackTrace()
 99              throw RuntimeException("Failed to truncate tables", e)
100          }
101      }
102
103      /**
104       * Checks if the schema has already been applied by looking for key tables.
105       * Returns true if tables exist, false otherwise.
106       */
107      private fun schemaExists(container: PostgreSQLContainer<*>): Boolean {
108          try {
109              DriverManager.getConnection(
110                  container.jdbcUrl,
111                  container.username,
112                  container.password
113              ).use { connection ->
114                  val statement = connection.createStatement()
115                  val resultSet = statement.executeQuery(
116                      "SELECT COUNT(*) as count FROM information_schema.tables WHERE table_schema = 'public'"
117                  )
118
119                  if (resultSet.next()) {
120                      val tableCount = resultSet.getInt("count")
121                      if (tableCount > 0) {
122                          println(" ℹ️ Found $tableCount existing table(s) in database")
123                          return true
124                      }
125                  }
126                  return false
127              }
128          } catch (e: Exception) {
129              println(" ⚠️ Could not check if schema exists: ${e.message}")
130              return false
131          }
132      }
133
134
135      /**
136       * Applies the schema.sql file to the PostgreSQL container.
137       * This reads the schema file from test resources and executes it.
138       * SQL statements are split and executed individually.
139       */
140      private fun applySchemaFromFile(container: PostgreSQLContainer<*>) {
141          try {
142              val schemaResource = ClassPathResource("schema.sql")
143              if (!schemaResource.exists()) {
144                  println(" ⚠️ WARNING: schema.sql file not found in test resources!")
145                  return
146              }
147
148              val schemaContent = schemaResource.inputStream.bufferedReader().use { it.readText() }
149              println(" 📖 Schema file size: ${schemaContent.length} bytes")
150
151              // More robust SQL statement splitting
152              val sqlStatements = splitSqlStatements(schemaContent)
153              println(" 📝 Found ${sqlStatements.size} SQL statements to execute")
154
155              // Connect to the database
156              DriverManager.getConnection(
157                  container.jdbcUrl,
158                  container.username,
159                  container.password
160              ).use { connection ->
161                  connection.autoCommit = false
162
163                  try {
164                      connection.createStatement().use { statement ->
165                          var successCount = 0
166                          sqlStatements.forEachIndexed { index, sql ->
167                              try {
168                                  val trimmedSql = sql.trim()
169                                  if (trimmedSql.isNotEmpty()) {
170                                      println(" [${index + 1}/${sqlStatements.size}] Executing: ${trimmedSql.take(60)}...")
171                                      statement.execute(trimmedSql)
172                                      successCount++
173                                  }
174                              } catch (e: Exception) {
175                                  println(" ✗ Failed to execute statement ${index + 1}:")
176                                  println(" ${sql.take(200)}...")
177                                  println(" Error: ${e.message}")
178                                  connection.rollback()
179                                  throw e
180                              }
181                          }
182                          connection.commit()
183                          println(" ✓ Successfully executed $successCount SQL statements")
184                      }
185                  } catch (e: Exception) {
186                      connection.rollback()
187                      throw e
188                  }
189              }
190          } catch (e: Exception) {
191              println(" ✗ Error applying schema: ${e.message}")
192              e.printStackTrace()
193              throw RuntimeException("Failed to apply schema.sql", e)
194          }
195      }
196
197      /**
198       * More robust SQL statement splitting that handles:
199       * - Multi-line statements
200       * - Comments (-- and /* */)
201       * - Semicolons within strings
202       */
203      private fun splitSqlStatements(sql: String): List<String> {
204          val statements = mutableListOf<String>()
205          val currentStatement = StringBuilder()
206          var inSingleLineComment = false
207          var inMultiLineComment = false
208          var inString = false
209          var stringChar = '\u0000'
210
211          val lines = sql.lines()
212          for (line in lines) {
213              var i = 0
214              while (i < line.length) {
215                  val char = line[i]
216                  val nextChar = if (i + 1 < line.length) line[i + 1] else '\u0000'
217
218                  // Handle single-line comments
219                  if (!inString && !inMultiLineComment && char == '-' && nextChar == '-') {
220                      inSingleLineComment = true
221                      i++
222                      continue
223                  }
224
225                  // Handle multi-line comments
226                  if (!inString && !inSingleLineComment && char == '/' && nextChar == '*') {
227                      inMultiLineComment = true
228                      i += 2
229                      continue
230                  }
231
232                  if (inMultiLineComment && char == '*' && nextChar == '/') {
233                      inMultiLineComment = false
234                      i += 2
235                      continue
236                  }
237
238                  // Skip if in comment
239                  if (inSingleLineComment || inMultiLineComment) {
240                      i++
241                      continue
242                  }
243
244                  // Handle strings
245                  if ((char == '\'' || char == '"') && !inString) {
246                      inString = true
247                      stringChar = char
248                      currentStatement.append(char)
249                  } else if (inString && char == stringChar) {
250                      // Check for escaped quotes
251                      if (nextChar == stringChar) {
252                          currentStatement.append(char).append(nextChar)
253                          i += 2
254                          continue
255                      } else {
256                          inString = false
257                          currentStatement.append(char)
258                      }
259                  } else if (!inString && char == ';') {
260                      // Statement terminator found
261                      val statement = currentStatement.toString().trim()
262                      if (statement.isNotEmpty()) {
263                          statements.add(statement)
264                      }
265                      currentStatement.clear()
266                  } else {
267                      currentStatement.append(char)
268                  }
269
270                  i++
271              }
272
273              // Reset single-line comment flag at end of line
274              inSingleLineComment = false
275              currentStatement.append('\n')
276          }
277
278          // Add any remaining statement
279          val lastStatement = currentStatement.toString().trim()
280          if (lastStatement.isNotEmpty()) {
281              statements.add(lastStatement)
282          }
283
284          return statements
285      }
286
287      /**
288       * Verifies that the schema was applied by listing all tables
289       */
290      private fun verifySchema(container: PostgreSQLContainer<*>) {
291          try {
292              DriverManager.getConnection(
293                  container.jdbcUrl,
294                  container.username,
295                  container.password
296              ).use { connection ->
297                  val statement = connection.createStatement()
298                  val resultSet = statement.executeQuery(
299                      "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
300                  )
301
302                  val tables = mutableListOf<String>()
303                  while (resultSet.next()) {
304                      tables.add(resultSet.getString("table_name"))
305                  }
306
307                  if (tables.isEmpty()) {
308                      println(" ⚠️ WARNING: No tables found in the database!")
309                  } else {
310                      println(" ✓ Verified ${tables.size} table(s) created:")
311                      tables.sorted().forEach { tableName ->
312                          println(" • $tableName")
313                      }
314                  }
315              }
316          } catch (e: Exception) {
317              println(" ⚠️ Could not verify schema: ${e.message}")
318          }
319      }
320
321      /**
322       * Prints connection information for connecting with GUI tools (DataGrip, DBeaver, etc.)
323       */
324      private fun printConnectionInfo(container: PostgreSQLContainer<*>) {
325          val host = container.host
326          val port = container.getMappedPort(5432)
327          val database = container.databaseName
328          val username = container.username
329          val password = container.password
330          val jdbcUrl = container.jdbcUrl
331
332          println("=".repeat(80))
333          println("🔗 TESTCONTAINERS DATABASE CONNECTION INFO")
334          println("=".repeat(80))
335          println("Host: $host")
336          println("Port: $port")
337          println("Database: $database")
338          println("Username: $username")
339          println("Password: $password")
340          println("JDBC URL: $jdbcUrl")
341          println()
342          println(" GUI Tool Connection (DataGrip, DBeaver, TablePlus, etc.):")
343          println(" Host: $host")
344          println(" Port: $port")
345          println(" Database: $database")
346          println(" User: $username")
347          println(" Password: $password")
348          println()
349          println(" Container will stay alive with reuse=true")
350          println(" To find it: docker ps | grep postgres")
351          println("=".repeat(80))
352      }
353  }
```
3.2. What Happens on Startup
```
🔧 PostgreSQL Test Container Configuration
   Container: postgres:15-alpine
   Database:  testdb (port: 52106)  <-- this is random, but persistent until we stop/restart our docker process
   Username:  test
   Password:  test
   JDBC URL:  jdbc:postgresql://localhost:52106/testdb

✔ Schema already exists - skipping migration (container reuse)
  Truncating all tables to clear test data...
  ✔ Truncated 18 table(s)
✔ Verified 18 table(s) created
```
4. Testing
4.1. Test Files Structure

- We separate our commands by resource (a natural separation, just as with controllers).
  Remark. If we only separated commands at the aggregate level, that separation would usually be too bulky (a god test file).
- We naturally test our commands one by one at the resource level.
- Since each endpoint in a controller calls exactly one command, the tests cover all basic functionality, but that alone is not enough.
- Some methods involve authentication and authorization; for those we should also add a controller test (see the sketch below).
  But since concerns are clearly separated, we only care whether the `AuthorizationException` was thrown; asserting that is enough.
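A minimal sketch of such a controller test, assuming a security setup where the `AuthorizationException` surfaces as an HTTP 403 (the endpoint and status code are assumptions about our app, not a prescription):

```kotlin
import org.junit.jupiter.api.Test
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.web.servlet.MockMvc
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post
import org.springframework.test.web.servlet.result.MockMvcResultMatchers.status

@SpringBootTest
@AutoConfigureMockMvc
class FolderAuthorizationTest(private val mockMvc: MockMvc) {

    // We assert only the authorization outcome; the command itself is
    // already covered by the integration tests below.
    @Test
    fun `should reject folder creation without a token`() {
        mockMvc.perform(post("/folders"))
            .andExpect(status().isForbidden)
    }
}
```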
4.2. Which Kind of Tests Are We Doing?
| Test Type | What It Tests | Our Current Test |
|---|---|---|
| Unit Test | Single class in isolation (mocked dependencies) | ❌ No external dependencies are mocked |
| Integration Test | Multiple layers working together | ✅ We are here |
| Controller Test | HTTP layer (MockMvc) | ❌ We have no Authentication |
| E2E Test | Full system via UI/API | ❌ We don't have software to test Tauri App UI-wise |
4.3. The BaseTest Class and @BeforeEach
Purpose of the Class. This defines the basic operations that all tests will execute.
In our case, we truncate/clean up the event table before each test, so that the events dispatched from our commands are easily testable.
```kotlin
// src/test/kotlin/com/scriptmanager/integration/BaseTest.kt
@SpringBootTest
@ActiveProfiles("test")
@Import(TestcontainersConfiguration::class)
abstract class BaseTest(
    private val eventRepository: EventRepository
) {
    @BeforeEach
    fun truncateEventsBeforeEachTest() {
        println("[BaseTest] Truncating events table...")
        eventRepository.deleteAll()
        println(" ✔ Events table cleared")
    }
}
```
Depending on the nature of the tests, we may add further cleanup processes via @BeforeEach (as in the sketch below) to ensure all tests are completely isolated.
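For example, a test class that also mutates folders might clear them as well (a hypothetical subclass; JUnit runs the `@BeforeEach` inherited from `BaseTest` before this one):

```kotlin
// Hypothetical example of a class-specific cleanup on top of BaseTest's.
@SpringBootTest
class FolderHeavyTest(
    eventRepository: EventRepository,
    private val folderRepository: ScriptsFolderRepository
) : BaseTest(eventRepository) {

    @BeforeEach
    fun truncateFoldersBeforeEachTest() {
        folderRepository.deleteAll() // runs after BaseTest.truncateEventsBeforeEachTest()
    }
}
```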
4.4. Two Levels of Cleanup
4.4.1. Context Level (in TestcontainersConfiguration)
- When: Spring context is created
- What: All tables are truncated
- How: `TRUNCATE TABLE ... RESTART IDENTITY CASCADE`
```kotlin
// Simplified sketch of truncateAllTables (full version in §3.1)
private fun truncateAllTables(container: PostgreSQLContainer<*>) {
    val tables = getAllTableNames()
    statement.execute("TRUNCATE TABLE ${tables.joinToString(", ")} RESTART IDENTITY CASCADE")
}
```
4.4.2. Test Level (BaseTest)
- When: Before each test method
- What: Only the `event` table is cleared
- How: `eventRepository.deleteAll()`
```kotlin
@BeforeEach
fun truncateEventsBeforeEachTest() {
    eventRepository.deleteAll()
}
```
4.4.3. Why Two Levels?
| Level | Frequency | Scope | Use Case |
|---|---|---|---|
| Context | Once per context | All tables | Fresh start for test class |
| Test | Before each test | Events only | Isolate event assertions |
4.5. Test Suite
We can group a list of test classes and launch them all at once.
Since we can control the execution order in `@SelectClasses`, it is possible to run a "resource initialization step" first and let the remaining tests reuse those resources.
```kotlin
import org.junit.platform.suite.api.SelectClasses
import org.junit.platform.suite.api.Suite
import org.junit.platform.suite.api.SuiteDisplayName

@Suite
@SuiteDisplayName("All Tests Suite")
@SelectClasses(
    InitializeResourcesTest::class, // order 0
    DataBaseTest::class,            // order 1
    EventPersistenceTest::class,    // order 2
    CommandInvokerTest::class       // order 3
)
class AllTestsSuite
```
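The first class in the suite can then be a plain test class whose only job is to create shared data; a sketch (the command and assertion are illustrative, and this relies on the suite classes sharing one Spring context so that the domain data survives across classes):

```kotlin
// Sketch of a "resource initialization step" for the suite above.
@SpringBootTest
class InitializeResourcesTest(
    eventRepository: EventRepository,
    private val commandInvoker: CommandInvoker
) : BaseTest(eventRepository) {

    @Test
    fun `create the workspace shared by the remaining suite classes`() {
        val workspace = commandInvoker.invoke(CreateWorkspaceCommand("SharedWorkspace"))
        assertNotNull(workspace.id)
    }
}
```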
4.6. Assertions That We Can Use
With `import org.junit.jupiter.api.Assertions.*` we have:
- ID Validation: `assertNotNull(entity.id)`
- Property Matching: `assertEquals(expected, actual.property)`
- Exception Verification:

```kotlin
val exception = assertThrows(IllegalArgumentException::class.java) {
    // some transaction ...
}
// we can even test the rolled-back state here
assertTrue(exception.message!!.contains("..."))
```
4.7. Examples
4.7.1. Integration Test
4.7.1.1. Simple Arrange, Act and Assert (AAA)
Straightforward tests can be defined easily via annotated methods:
```kotlin
package com.scriptmanager.integration.shellscriptmanager

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue
import com.scriptmanager.domain.infrastructure.CommandInvoker
import com.scriptmanager.domain.scriptmanager.command.CreateWorkspaceCommand
import com.scriptmanager.domain.scriptmanager.event.WorkspaceCreatedEvent
import com.scriptmanager.integration.BaseTest
import com.scriptmanager.repository.EventRepository
import org.junit.jupiter.api.Assertions.*
import org.junit.jupiter.api.Test
import org.springframework.boot.test.context.SpringBootTest

@SpringBootTest
class EventPersistenceTest(
    private val eventRepository: EventRepository,
    private val commandInvoker: CommandInvoker,
    private val objectMapper: ObjectMapper
) : BaseTest(eventRepository) {

    @Test
    fun `should emit WorkspaceCreatedEvent when creating workspace`() {
        // Arrange
        val workspaceName = "TestWorkspace_${System.currentTimeMillis()}"

        // Act
        val result = commandInvoker.invoke(CreateWorkspaceCommand(workspaceName))

        // Assert - Find the event
        val events = eventRepository.findAll()
            .filter { it.eventType == "WorkspaceCreatedEvent" }
            .filter { event ->
                val payload = objectMapper.readValue<WorkspaceCreatedEvent>(event.payload)
                payload.workspace.name == workspaceName
            }

        assertEquals(1, events.size, "Should have exactly 1 WorkspaceCreatedEvent")

        val event = events.first()
        assertTrue(event.success)

        val payload = objectMapper.readValue<WorkspaceCreatedEvent>(event.payload)
        assertEquals(workspaceName, payload.workspace.name)
        assertEquals(result.id, payload.workspace.id)
    }
}
```
4.7.1.2. Complicated AAA Using Nested Inner Class
- For complicated tests we may need to separate the arrange and the act-assert phases.
- Sometimes multiple act-asserts can share the same arrange logic; we then group similar tests together:
```kotlin
@SpringBootTest
class FolderTest(
    private val eventRepository: EventRepository,
    private val folderRepository: ScriptsFolderRepository,
    private val shellScriptRepository: ShellScriptRepository, // needed for the script assertions below
    private val commandInvoker: CommandInvoker,
    private val objectMapper: ObjectMapper,
    private val entityManager: EntityManager
) : BaseTest(eventRepository) {

    @Nested
    @DisplayName("Should delete folder and its children")
    open inner class ShouldDeleteFolderAndChildren {
        private lateinit var parentFolder: ScriptsFolder
        private lateinit var subfolder: ScriptsFolder
        private lateinit var scriptInSubfolder: ShellScriptResponse
        private lateinit var scriptInfolder: ShellScriptResponse

        @BeforeEach
        @Transactional
        open fun arrange() {
            parentFolder = commandInvoker.invoke(
                CreateFolderCommand("Parent_${System.currentTimeMillis()}")
            )
            subfolder = commandInvoker.invoke(
                AddSubfolderCommand(
                    parentFolderId = parentFolder.id!!,
                    name = "Subfolder_${System.currentTimeMillis()}"
                )
            )
            this@FolderTest.entityManager.flush()
            scriptInfolder = commandInvoker.invoke(
                CreateScriptCommand(
                    folderId = parentFolder.id!!,
                    name = "Script_${System.currentTimeMillis()}_in_folder",
                    content = "echo 'Hello, World!'"
                )
            )
            scriptInSubfolder = commandInvoker.invoke(
                CreateScriptCommand(
                    folderId = subfolder.id!!,
                    name = "Script_${System.currentTimeMillis()}_in_subfolder",
                    content = "echo 'Hello, World!'"
                )
            )
        }

        @Test
        @Transactional
        open fun `should delete folder, subfolders and all scripts inside`() {
            // Act
            commandInvoker.invoke(DeleteFolderCommand(parentFolder.id!!))
            this@FolderTest.entityManager.flush()

            // Assert - All entities deleted
            assertNull(
                folderRepository.findByIdOrNull(parentFolder.id!!),
                "Parent folder should be deleted"
            )
            assertNull(
                folderRepository.findByIdOrNull(subfolder.id!!),
                "Subfolder should be deleted"
            )
            assertNull(
                shellScriptRepository.findByIdOrNull(scriptInfolder.id!!),
                "Script should be deleted"
            )
            assertNull(
                shellScriptRepository.findByIdOrNull(scriptInSubfolder.id!!),
                "Script should be deleted"
            )

            // Assert - Events emitted
            val events = eventRepository.findAll()
            val folderCreatedEvents = events.filter { it.eventType == "FolderCreatedEvent" }
            val subfolderCreatedEvents = events.filter { it.eventType == "SubfolderAddedEvent" }
            val scriptCreatedEvents = events.filter { it.eventType == "ScriptCreatedEvent" }
            val folderDeletedEvents = events.filter { it.eventType == "FolderDeletedEvent" }
            val scriptDeletedEvents = events.filter { it.eventType == "ScriptDeletedEvent" }

            assertEquals(1, folderCreatedEvents.size, "Should have 1 FolderCreatedEvent from setup")
            assertEquals(1, subfolderCreatedEvents.size, "Should have 1 SubfolderAddedEvent from setup")
            assertEquals(2, scriptCreatedEvents.size, "Should have 2 ScriptCreatedEvents from setup")
            assertEquals(2, folderDeletedEvents.size, "Should emit 2 FolderDeletedEvents")
            assertEquals(2, scriptDeletedEvents.size, "Should emit 2 ScriptDeletedEvents")
        }

        @Test
        @Transactional
        open fun `should emit events with correct folder IDs`() {
            // Act
            commandInvoker.invoke(DeleteFolderCommand(parentFolder.id!!))
            this@FolderTest.entityManager.flush()

            // Assert
            val folderDeleteEvents = eventRepository.findAll()
                .filter { it.eventType == "FolderDeletedEvent" }
                .map { objectMapper.readValue<FolderDeletedEvent>(it.payload) }

            assertTrue(
                folderDeleteEvents.any { it.folderId == parentFolder.id!! },
                "Should emit event for parent folder"
            )
            assertTrue(
                folderDeleteEvents.any { it.folderId == subfolder.id!! },
                "Should emit event for subfolder"
            )
        }
    }
}
```
4.7.2. Controller Test
```kotlin
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc
import org.springframework.http.MediaType
import org.springframework.test.web.servlet.MockMvc
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post
import org.springframework.test.web.servlet.result.MockMvcResultMatchers.*
// ...other imports as in the previous examples

@SpringBootTest
@AutoConfigureMockMvc
@ActiveProfiles("test")
@Import(TestcontainersConfiguration::class)
class FolderControllerTest(
    private val mockMvc: MockMvc,
    private val objectMapper: ObjectMapper,
    private val eventRepository: EventRepository,
    private val folderRepository: ScriptsFolderRepository
) {
    @BeforeEach
    fun setUp() {
        eventRepository.deleteAll()
    }

    @Test
    fun `should create folder with required headers`() {
        // Arrange
        val folderName = "TestFolder_${System.currentTimeMillis()}"
        val request = CreateFolderRequest(name = folderName)
        val requestJson = objectMapper.writeValueAsString(request)

        // Act & Assert
        mockMvc.perform(
            post("/folders")
                .contentType(MediaType.APPLICATION_JSON)
                .content(requestJson)
                .header("Authorization", "Bearer mock-jwt-token")
        )
            .andExpect(status().isOk)
            .andExpect(jsonPath("$.success").value(true))
            .andExpect(jsonPath("$.result.name").value(folderName))

        val folders = folderRepository.findAll()
        assert(folders.any { it.name == folderName })
    }
}
```
5. IDE Freezes During Tests
Problem. IntelliJ IDEA tries to index the build/ directory after each test run, causing the IDE to freeze.
Solution. Mark build/ as Excluded via:
- Right-click the `build/` folder in the Project view
- Select "Mark Directory as" → "Excluded"
- The folder turns orange/red and won't be indexed

When we exclude `build/`:
- No indexing of generated files
- Faster IDE performance
- No autocomplete from generated code
- Test reports still accessible
Next we also uncheck the following to avoid accidentally indexing a large file:

6. Further Questions
6.1. Can I inspect the database during tests?
Yes! Use the connection info printed at startup:
```
JDBC URL: jdbc:postgresql://localhost:52106/testdb
Username: test
Password: test
```
We can connect to the database via any GUI application for inspection.
6.2. Are tables dropped after each test?
No. The schema persists; data is truncated only before new tests are launched.
6.3. What about transaction rollback?
Not needed. The event table is cleared before each test via @BeforeEach, and the other tables are truncated between contexts.
6.4. How do I test transaction rollback?
Use the database-events approach (no test queue) and force a rollback:
```kotlin
@Test
fun `should rollback on error`() {
    assertThrows<RuntimeException> {
        commandInvoker.invoke(CommandThatFails())
    }

    // Verify nothing was saved
    assertEquals(0, eventRepository.findAll().size)
}
```
6.5. Can I run tests without Docker?
No. Testcontainers requires Docker. Consider H2 database if Docker isn't available, but you'll lose PostgreSQL-specific testing.
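A hedged sketch of that fallback (property values are illustrative; H2's `MODE=PostgreSQL` only approximates real PostgreSQL, so `dbgenerated` defaults and `TRUNCATE ... CASCADE` behavior may differ):

```kotlin
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.context.TestPropertySource

// Illustrative only: in-memory H2 in PostgreSQL compatibility mode for
// environments without Docker. Not part of the setup described above.
@SpringBootTest
@TestPropertySource(properties = [
    "spring.datasource.url=jdbc:h2:mem:testdb;MODE=PostgreSQL;DB_CLOSE_DELAY=-1",
    "spring.datasource.driver-class-name=org.h2.Driver"
])
class H2FallbackSmokeTest
```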