spring:
  jpa:
    hibernate:
      ddl-auto: none # Let the generated schema.sql do this (from Prisma)
    show-sql: true
    properties:
      hibernate:
        format_sql: true
        dialect: org.hibernate.dialect.PostgreSQLDialect
        jdbc:
          batch_size: 20
        use_sql_comments: false # Don't add comments to show what Hibernate is doing

  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 2
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
      auto-commit: false # Disable auto-commit; transactions are managed explicitly (see below)

logging:
  level:
    org.hibernate.SQL: DEBUG
    org.hibernate.type.descriptor.sql.BasicBinder: TRACE
    org.hibernate.engine.transaction: DEBUG # Log transaction details
    org.springframework.transaction: DEBUG # Log Spring transaction management
    org.springframework.orm.jpa: DEBUG # Log JPA operations
    org.testcontainers: INFO
    com.scriptmanager: DEBUG
    com.zaxxer.hikari: DEBUG # Log connection pool issues
datasource.hikari.auto-commit is set to false on purpose because our commandInvoker already executes inside transactions managed by TransactionTemplate (which we defined in Part 1).
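As a reminder of why, here is a minimal sketch (CommandInvoker and Command stand in for the types from Part 1; they are placeholders, not the real signatures): every command already runs inside one explicit transaction, so letting HikariCP auto-commit each statement would break those semantics.

import org.springframework.transaction.support.TransactionTemplate

// Sketch only: CommandInvoker and Command are placeholders for the Part 1 types.
class TransactionalCommandRunner(
    private val transactionTemplate: TransactionTemplate,
    private val commandInvoker: CommandInvoker
) {
    fun <R> run(command: Command<R>): R? =
        // The whole command executes inside one explicit transaction; with
        // auto-commit on, each statement would commit independently and the
        // surrounding transaction semantics would be lost.
        transactionTemplate.execute { commandInvoker.invoke(command) }
}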
1.1.2. junit-platform.properties
spring.test.constructor.autowire.mode=all
This enables constructor injection in tests; otherwise, dependency injection in a @SpringBootTest works only through @Autowired fields.
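For example, with this property set, a test class can receive its dependencies directly through the constructor (MyRepository is a hypothetical bean used for illustration):

import org.springframework.boot.test.context.SpringBootTest

// With spring.test.constructor.autowire.mode=all, the constructor parameter
// is injected without any @Autowired annotation. MyRepository is hypothetical.
@SpringBootTest
class MyRepositoryTest(private val myRepository: MyRepository) {
    // ... tests using myRepository
}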
1.1.3. schema.sql
schema.sql is used to instantiate all the tables when our test container is launched. It is essentially a SQL script consisting of CREATE TABLE IF NOT EXISTS statements.
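For illustration, the generated file contains statements of roughly this shape (table and column names are hypothetical; the real ones come from schema.prisma). Note how the second table references the first, which is why creation order matters (see Section 2):

-- Illustrative shape only; real table/column names come from schema.prisma.
CREATE TABLE IF NOT EXISTS "folder" (
    "id"   SERIAL PRIMARY KEY,
    "name" TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS "script" (
    "id"        SERIAL PRIMARY KEY,
    "name"      TEXT NOT NULL,
    "folder_id" INTEGER NOT NULL REFERENCES "folder"("id")
);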
1.2. build.gradle.kts

dependencies {
    // Testcontainers
    testImplementation("org.testcontainers:testcontainers:1.19.3")
    testImplementation("org.testcontainers:postgresql:1.19.3")
    testImplementation("org.springframework.boot:spring-boot-testcontainers")

    // Jackson for JSON
    testImplementation("com.fasterxml.jackson.module:jackson-module-kotlin")

    // JUnit 5
    testImplementation("org.springframework.boot:spring-boot-starter-test")
}
1.3. ~/.testcontainers.properties
This config file lives at the user level rather than the project level, as it directly influences how macOS interacts with the Docker engine. The two most relevant client provider strategies are:
EnvironmentAndSystemPropertyClientProviderStrategy - use the DOCKER_HOST env var
NpipeSocketClientProviderStrategy - named pipes (Windows)
Conclusion. Pinning the strategy speeds up test startup and ensures a reliable Docker connection by skipping auto-detection.
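For example, a typical version of this file pins the strategy and enables container reuse (testcontainers.reuse.enable=true is also required here for the withReuse(true) call in Section 3 to take effect):

docker.client.strategy=org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy
testcontainers.reuse.enable=true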
2. Convert schema.prisma into a schema.sql to Initialize All Tables
This script will:
Read our Prisma file and execute npx prisma migrate diff to produce a .sql file;
Translate SQLite-specific default expressions into their PostgreSQL equivalents (see convert_to_postgresql); this step is a no-op when the schema already targets PostgreSQL (the conversion script then has nothing to substitute);
Rearrange the order of table creation so that resources are never created in an invalid sequence (e.g., an index created before its table exists).
#!/bin/bash

# Prisma Schema Converter: SQLite to PostgreSQL
# Converts and reorders models based on dependencies

# Project root that contains `prisma/` directory
PRISMA_PROJECT_ROOT="/Users/chingcheonglee/Repos/rust/2025-10-27-shell-script-manager-tauri/src-tauri"
# Sql file to be copied into test resource directory for spring boot project
SQL_FILE_DESTINATION="/Users/chingcheonglee/Repos/rust/2025-10-27-shell-script-manager-tauri/backend-spring/src/test/resources/schema.sql"

INPUT_FILE="$PRISMA_PROJECT_ROOT/prisma/schema.prisma"
OUTPUT_FILE="$PRISMA_PROJECT_ROOT/prisma/schema_postgresql.prisma"

echo "Converting Prisma schema from SQLite to PostgreSQL..."
echo "Input:  $INPUT_FILE"
echo "Output: $OUTPUT_FILE"
echo ""

python3 - "$INPUT_FILE" "$OUTPUT_FILE" <<'END_PYTHON'
import re
import sys
from collections import defaultdict, deque
from typing import Dict, List, Set, Tuple

def parse_prisma_schema(content: str) -> Tuple[str, List[Dict]]:
    """Parse Prisma schema and extract models"""

    # Extract header (generator, datasource)
    header_match = re.search(r'^(.*?)(?=model\s+\w+)', content, re.DOTALL)
    header = header_match.group(1) if header_match else ""

    # Find all models
    model_pattern = r'(model\s+(\w+)\s*\{[^}]*\})'
    models = []

    for match in re.finditer(model_pattern, content, re.DOTALL):
        model_text = match.group(1)
        model_name = match.group(2)
        models.append({
            'name': model_name,
            'text': model_text,
            'dependencies': set()
        })

    return header, models

def extract_dependencies(model: Dict) -> Set[str]:
    """Extract foreign key dependencies from a model"""
    dependencies = set()

    # Find all relation lines
    for line in model['text'].split('\n'):
        if '@relation' in line:
            # Extract the referenced model type
            # Pattern: model_name @relation(...)
            type_match = re.search(r'(\w+)\s+@relation', line)
            if type_match:
                ref_model = type_match.group(1)
                dependencies.add(ref_model)

    return dependencies

def topological_sort(models: List[Dict]) -> List[Dict]:
    """Sort models based on their dependencies using topological sort"""

    # Build dependency graph
    for model in models:
        model['dependencies'] = extract_dependencies(model)

    # Create a mapping of model names to model objects
    model_map = {m['name']: m for m in models}

    # Calculate in-degrees
    in_degree = {m['name']: 0 for m in models}
    for model in models:
        for dep in model['dependencies']:
            if dep in in_degree:
                in_degree[model['name']] += 1

    # Find all nodes with no incoming edges
    queue = deque([name for name, degree in in_degree.items() if degree == 0])
    sorted_models = []

    while queue:
        model_name = queue.popleft()
        sorted_models.append(model_map[model_name])

        # Reduce in-degree for dependent models
        for other_model in models:
            if model_name in other_model['dependencies']:
                in_degree[other_model['name']] -= 1
                if in_degree[other_model['name']] == 0:
                    queue.append(other_model['name'])

    # Check for cycles
    if len(sorted_models) != len(models):
        print("⚠️ Warning: Circular dependencies detected!")
        sorted_names = {m['name'] for m in sorted_models}
        for model in models:
            if model['name'] not in sorted_names:
                sorted_models.append(model)

    return sorted_models

def convert_to_postgresql(model_text: str) -> str:
    """Convert SQLite-specific syntax to PostgreSQL"""

    # Replace SQLite julianday with PostgreSQL epoch
    model_text = re.sub(
        r'@default\(dbgenerated\("?\(CAST\(\(julianday\(\'now\'\)\s*-\s*2440587\.5\)\s*\*\s*86400000\.0\s+AS\s+REAL\)\)"?\)\)',
        '@default(dbgenerated("ROUND(extract(epoch from NOW()::TIMESTAMPTZ) * 1000, 0)::float"))',
        model_text
    )

    # Replace SQLite strftime with PostgreSQL TO_CHAR
    model_text = re.sub(
        r'@default\(dbgenerated\("?\(strftime\(\'%Y-%m-%d %H:%M:%S\',\s*datetime\(\'now\',\s*\'\+8 hours\'\)\)\)"?\)\)',
        '@default(dbgenerated("TO_CHAR((NOW()::TIMESTAMPTZ AT TIME ZONE \'UTC\' AT TIME ZONE \'GMT+8\'), \'YYYY-MM-DD HH24:MI:SS\')"))',
        model_text
    )

    return model_text

def convert_datasource(header: str) -> str:
    """Convert datasource from SQLite to PostgreSQL"""

    datasource_pattern = r'datasource\s+db\s*\{[^}]*\}'

    new_datasource = '''datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}'''

    header = re.sub(datasource_pattern, new_datasource, header, flags=re.DOTALL)

    return header

def main():
    if len(sys.argv) < 3:
        print("Usage: script.sh <input_file> <output_file>")
        sys.exit(1)

    input_file = sys.argv[1]
    output_file = sys.argv[2]

    print(f"Reading Prisma schema from: {input_file}")

    try:
        with open(input_file, 'r') as f:
            content = f.read()
    except FileNotFoundError:
        print(f"❌ Error: File '{input_file}' not found!")
        sys.exit(1)

    # Parse schema
    print("Parsing Prisma schema...")
    header, models = parse_prisma_schema(content)
    print(f"Found {len(models)} models")

    # Convert datasource
    print("Converting datasource to PostgreSQL...")
    header = convert_datasource(header)

    # Convert each model to PostgreSQL
    print("Converting SQLite syntax to PostgreSQL...")
    for model in models:
        model['text'] = convert_to_postgresql(model['text'])

    # Sort models by dependencies
    print("Reordering models based on dependencies...")
    sorted_models = topological_sort(models)

    # Print dependency order
    print("\nModel creation order:")
    for i, model in enumerate(sorted_models, 1):
        deps = model['dependencies']
        deps_str = f" → depends on: {', '.join(sorted(deps))}" if deps else ""
        print(f"  {i:2d}. {model['name']}{deps_str}")

    # Build output
    output_lines = [header.rstrip()]
    output_lines.append("")
    output_lines.append("// " + "=" * 76)
    output_lines.append("// Models ordered by dependencies (base models first)")
    output_lines.append("// " + "=" * 76)
    output_lines.append("")

    for model in sorted_models:
        output_lines.append(model['text'])
        output_lines.append("")

    output_content = '\n'.join(output_lines)

    # Write output file
    print(f"\nWriting converted schema to: {output_file}")
    with open(output_file, 'w') as f:
        f.write(output_content)

    print("✅ Conversion complete!")

if __name__ == "__main__":
    main()
END_PYTHON

echo ""
echo "Generating SQL migration from PostgreSQL schema..."
cd "$PRISMA_PROJECT_ROOT"

# Generate SQL from the PostgreSQL schema
npx prisma migrate diff \
  --from-empty \
  --to-schema-datamodel prisma/schema_postgresql.prisma \
  --script > temp_schema.sql

echo "Converting to final SQL format..."

# Do final SQL conversions
python3 <<'PYTHON_SCRIPT'
import re

with open('temp_schema.sql', 'r') as f:
    sql_content = f.read()

# SQLite to PostgreSQL conversions
sql_content = re.sub(
    r'INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT',
    'SERIAL PRIMARY KEY',
    sql_content
)

sql_content = re.sub(
    r'BIGINT NOT NULL PRIMARY KEY AUTOINCREMENT',
    'BIGSERIAL PRIMARY KEY',
    sql_content
)

sql_content = re.sub(r'\bREAL\b', 'DOUBLE PRECISION', sql_content)

sql_content = re.sub(
    r"\(CAST\(\(julianday\('now'\) - 2440587\.5\) \* 86400000\.0 AS REAL\)\)",
    "ROUND(extract(epoch from NOW()::TIMESTAMPTZ) * 1000, 0)::float",
    sql_content
)

sql_content = re.sub(
    r"\(CAST\(\(julianday\('now'\) - 2440587\.5\) \* 86400000\.0 AS DOUBLE PRECISION\)\)",
    "ROUND(extract(epoch from NOW()::TIMESTAMPTZ) * 1000, 0)::float",
    sql_content
)

sql_content = re.sub(
    r"\(strftime\('%Y-%m-%d %H:%M:%S', datetime\('now', '\+8 hours'\)\)\)",
    "TO_CHAR((NOW()::TIMESTAMPTZ AT TIME ZONE 'UTC' AT TIME ZONE 'GMT+8'), 'YYYY-MM-DD HH24:MI:SS')",
    sql_content
)

with open('temp_schema.sql', 'w') as f:
    f.write(sql_content)
PYTHON_SCRIPT

echo ""
echo "Moving SQL file to Spring test resources..."
mv temp_schema.sql "$SQL_FILE_DESTINATION"

echo ""
echo "✅ Complete!"
echo "   - PostgreSQL Prisma schema: $OUTPUT_FILE"
echo "   - SQL migration file: $SQL_FILE_DESTINATION"
echo ""
echo "Next steps:"
echo "  1. Review the PostgreSQL schema at $OUTPUT_FILE"
echo "  2. Review the SQL migration at $SQL_FILE_DESTINATION"
echo "  3. Set DATABASE_URL environment variable"
echo "  4. Run: cd $PRISMA_PROJECT_ROOT && npx prisma migrate dev --name init"
3. The TestcontainersConfiguration Class
3.1. Implementation
In the postgresContainer() bean below, we check whether a test container is being reused.
If the schema already exists (the container is being reused), we simply truncate all tables, emptying any existing data to restore the database to a fresh state.
Otherwise, we apply the schema.sql file to create all tables.
// src/test/kotlin/com/scriptmanager/config/TestcontainersConfiguration.kt
package com.scriptmanager.config

import org.springframework.boot.test.context.TestConfiguration
import org.springframework.boot.testcontainers.service.connection.ServiceConnection
import org.springframework.context.annotation.Bean
import org.springframework.core.io.ClassPathResource
import org.testcontainers.containers.PostgreSQLContainer
import org.testcontainers.utility.DockerImageName
import java.sql.DriverManager
import java.time.Duration

@TestConfiguration(proxyBeanMethods = false)
class TestcontainersConfiguration {

    /**
     * PostgreSQL container that will be shared across all tests.
     * Using singleton pattern to avoid spinning up multiple containers.
     *
     * Applies Prisma schema from src-tauri/prisma/schema.prisma if available.
     */
    @Bean
    @ServiceConnection
    fun postgresContainer(): PostgreSQLContainer<*> {
        val container = PostgreSQLContainer(DockerImageName.parse("postgres:15-alpine"))
            .withDatabaseName("testdb")
            .withUsername("test")
            .withPassword("test")
            .withStartupTimeout(Duration.ofMinutes(2))
            .withReuse(true) // Reuse container across test runs for faster execution

        container.start()
        printConnectionInfo(container)

        println()

        // Check if schema already exists (for container reuse)
        if (schemaExists(container)) {
            println("✓ Schema already exists - skipping migration (container reuse)")
            println("  Truncating all tables to clear test data...")
            truncateAllTables(container)
            verifySchema(container)
        } else {
            println("  Applying schema from schema.sql file (first time)...")
            applySchemaFromFile(container)
            println("✓ Schema applied successfully!")
            println()
            verifySchema(container)
        }

        println()

        return container
    }

    /**
     * Truncates all tables to clear data while preserving schema.
     * This is called before each Spring context creation to ensure test isolation.
     */
    private fun truncateAllTables(container: PostgreSQLContainer<*>) {
        try {
            DriverManager.getConnection(
                container.jdbcUrl,
                container.username,
                container.password
            ).use { connection ->
                connection.autoCommit = false
                try {
                    connection.createStatement().use { statement ->
                        // Get all table names
                        val tables = mutableListOf<String>()
                        val rs = statement.executeQuery(
                            "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"
                        )
                        while (rs.next()) {
                            tables.add(rs.getString("tablename"))
                        }
                        rs.close()

                        if (tables.isNotEmpty()) {
                            // Truncate all tables in one statement; CASCADE takes
                            // care of foreign key references between them
                            val tableList = tables.joinToString(", ") { "\"$it\"" }
                            statement.execute("TRUNCATE TABLE $tableList RESTART IDENTITY CASCADE")
                            connection.commit()
                            println("  ✓ Truncated ${tables.size} table(s): ${tables.joinToString(", ")}")
                        } else {
                            println("  ℹ️ No tables to truncate")
                        }
                    }
                } catch (e: Exception) {
                    connection.rollback()
                    throw e
                }
            }
        } catch (e: Exception) {
            println("  [!!] Could not truncate tables: ${e.message}")
            e.printStackTrace()
            throw RuntimeException("Failed to truncate tables", e)
        }
    }

    /**
     * Checks if the schema has already been applied by looking for key tables.
     * Returns true if tables exist, false otherwise.
     */
    private fun schemaExists(container: PostgreSQLContainer<*>): Boolean {
        try {
            DriverManager.getConnection(
                container.jdbcUrl,
                container.username,
                container.password
            ).use { connection ->
                val statement = connection.createStatement()
                val resultSet = statement.executeQuery(
                    "SELECT COUNT(*) as count FROM information_schema.tables WHERE table_schema = 'public'"
                )

                if (resultSet.next()) {
                    val tableCount = resultSet.getInt("count")
                    if (tableCount > 0) {
                        println("  ℹ️ Found $tableCount existing table(s) in database")
                        return true
                    }
                }
                return false
            }
        } catch (e: Exception) {
            println("  ⚠️ Could not check if schema exists: ${e.message}")
            return false
        }
    }

    /**
     * Applies the schema.sql file to the PostgreSQL container.
     * This reads the schema file from test resources and executes it.
     * SQL statements are split and executed individually.
     */
    private fun applySchemaFromFile(container: PostgreSQLContainer<*>) {
        try {
            val schemaResource = ClassPathResource("schema.sql")
            if (!schemaResource.exists()) {
                println("  ⚠️ WARNING: schema.sql file not found in test resources!")
                return
            }

            val schemaContent = schemaResource.inputStream.bufferedReader().use { it.readText() }
            println("  📖 Schema file size: ${schemaContent.length} bytes")

            // More robust SQL statement splitting
            val sqlStatements = splitSqlStatements(schemaContent)
            println("  📝 Found ${sqlStatements.size} SQL statements to execute")

            // Connect to the database
            DriverManager.getConnection(
                container.jdbcUrl,
                container.username,
                container.password
            ).use { connection ->
                connection.autoCommit = false

                try {
                    connection.createStatement().use { statement ->
                        var successCount = 0
                        sqlStatements.forEachIndexed { index, sql ->
                            try {
                                val trimmedSql = sql.trim()
                                if (trimmedSql.isNotEmpty()) {
                                    println("  [${index + 1}/${sqlStatements.size}] Executing: ${trimmedSql.take(60)}...")
                                    statement.execute(trimmedSql)
                                    successCount++
                                }
                            } catch (e: Exception) {
                                println("  ✗ Failed to execute statement ${index + 1}:")
                                println("    ${sql.take(200)}...")
                                println("    Error: ${e.message}")
                                connection.rollback()
                                throw e
                            }
                        }
                        connection.commit()
                        println("  ✓ Successfully executed $successCount SQL statements")
                    }
                } catch (e: Exception) {
                    connection.rollback()
                    throw e
                }
            }
        } catch (e: Exception) {
            println("  ✗ Error applying schema: ${e.message}")
            e.printStackTrace()
            throw RuntimeException("Failed to apply schema.sql", e)
        }
    }

    /**
     * More robust SQL statement splitting that handles:
     * - Multi-line statements
     * - Comments (single-line "--" and multi-line block comments)
     * - Semicolons within strings
     */
    private fun splitSqlStatements(sql: String): List<String> {
        val statements = mutableListOf<String>()
        val currentStatement = StringBuilder()
        var inSingleLineComment = false
        var inMultiLineComment = false
        var inString = false
        var stringChar = '\u0000'

        val lines = sql.lines()
        for (line in lines) {
            var i = 0
            while (i < line.length) {
                val char = line[i]
                val nextChar = if (i + 1 < line.length) line[i + 1] else '\u0000'

                // Handle single-line comments
                if (!inString && !inMultiLineComment && char == '-' && nextChar == '-') {
                    inSingleLineComment = true
                    i++
                    continue
                }

                // Handle multi-line comments
                if (!inString && !inSingleLineComment && char == '/' && nextChar == '*') {
                    inMultiLineComment = true
                    i += 2
                    continue
                }

                if (inMultiLineComment && char == '*' && nextChar == '/') {
                    inMultiLineComment = false
                    i += 2
                    continue
                }

                // Skip if in comment
                if (inSingleLineComment || inMultiLineComment) {
                    i++
                    continue
                }

                // Handle strings
                if ((char == '\'' || char == '"') && !inString) {
                    inString = true
                    stringChar = char
                    currentStatement.append(char)
                } else if (inString && char == stringChar) {
                    // Check for escaped quotes
                    if (nextChar == stringChar) {
                        currentStatement.append(char).append(nextChar)
                        i += 2
                        continue
                    } else {
                        inString = false
                        currentStatement.append(char)
                    }
                } else if (!inString && char == ';') {
                    // Statement terminator found
                    val statement = currentStatement.toString().trim()
                    if (statement.isNotEmpty()) {
                        statements.add(statement)
                    }
                    currentStatement.clear()
                } else {
                    currentStatement.append(char)
                }

                i++
            }

            // Reset single-line comment flag at end of line
            inSingleLineComment = false
            currentStatement.append('\n')
        }

        // Add any remaining statement
        val lastStatement = currentStatement.toString().trim()
        if (lastStatement.isNotEmpty()) {
            statements.add(lastStatement)
        }

        return statements
    }

    /**
     * Verifies that the schema was applied by listing all tables.
     */
    private fun verifySchema(container: PostgreSQLContainer<*>) {
        try {
            DriverManager.getConnection(
                container.jdbcUrl,
                container.username,
                container.password
            ).use { connection ->
                val statement = connection.createStatement()
                val resultSet = statement.executeQuery(
                    "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
                )

                val tables = mutableListOf<String>()
                while (resultSet.next()) {
                    tables.add(resultSet.getString("table_name"))
                }

                if (tables.isEmpty()) {
                    println("  ⚠️ WARNING: No tables found in the database!")
                } else {
                    println("  ✓ Verified ${tables.size} table(s) created:")
                    tables.sorted().forEach { tableName ->
                        println("    • $tableName")
                    }
                }
            }
        } catch (e: Exception) {
            println("  ⚠️ Could not verify schema: ${e.message}")
        }
    }

    /**
     * Prints connection information for connecting with GUI tools (DataGrip, DBeaver, etc.)
     */
    private fun printConnectionInfo(container: PostgreSQLContainer<*>) {
        val host = container.host
        val port = container.getMappedPort(5432)
        val database = container.databaseName
        val username = container.username
        val password = container.password
        val jdbcUrl = container.jdbcUrl

        println("=".repeat(80))
        println("🔗 TESTCONTAINERS DATABASE CONNECTION INFO")
        println("=".repeat(80))
        println("Host:     $host")
        println("Port:     $port")
        println("Database: $database")
        println("Username: $username")
        println("Password: $password")
        println("JDBC URL: $jdbcUrl")
        println()
        println("  GUI Tool Connection (DataGrip, DBeaver, TablePlus, etc.):")
        println("    Host: $host")
        println("    Port: $port")
        println("    Database: $database")
        println("    User: $username")
        println("    Password: $password")
        println()
        println("  Container will stay alive with reuse=true")
        println("  To find it: docker ps | grep postgres")
        println("=".repeat(80))
    }
}
3.2. What Happens on Startup
🔧 PostgreSQL Test Container Configuration
Container: postgres:15-alpine
Database:  testdb (port: 52106)   <-- this is random, but persists until we stop/restart our Docker process
Username:  test
Password:  test
JDBC URL:  jdbc:postgresql://localhost:52106/testdb

✔ Schema already exists - skipping migration (container reuse)
  Truncating all tables to clear test data...
  ✔ Truncated 18 table(s)
  ✔ Verified 18 table(s) created
4. Testing
4.1. Test Files Structure
We separate our commands by resource (a natural separation, just as the controllers are separated).
Remark. If we separated commands only at the aggregate level, the resulting test files would be too bulky (god test files).
We therefore test our commands one by one at the resource level.
Since each controller endpoint calls exactly one command, these tests cover all basic functionality, but that alone is not enough.
Some methods involve authentication and authorization; for those methods we should also add a controller test.
But since the concerns are clearly separated, it is enough to assert that the AuthorizationException is thrown, as sketched below.
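A minimal sketch of such a controller test, assuming a hypothetical GET /scripts/{id} endpoint and an exception handler that maps AuthorizationException to HTTP 403 (the endpoint, path, and status mapping are illustrative, not the project's actual API):

import org.junit.jupiter.api.Test
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.web.servlet.MockMvc
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get
import org.springframework.test.web.servlet.result.MockMvcResultMatchers.status

@SpringBootTest
@AutoConfigureMockMvc
class ScriptControllerAuthTest(@Autowired val mockMvc: MockMvc) {

    @Test
    fun `unauthorized request is rejected`() {
        // We only care that the authorization concern kicks in;
        // the command itself is already covered by the integration tests.
        mockMvc.perform(get("/scripts/1"))
            .andExpect(status().isForbidden)
    }
}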
4.2. Which Kind of Tests Are We Doing?
| Test Type | What It Tests | Our Current Test |
| --- | --- | --- |
| Unit Test | Single class in isolation (mocked dependencies) | ❌ No external dependencies are mocked |
| Integration Test | Multiple layers working together | ✅ We are here |
| Controller Test | HTTP layer (MockMvc) | ❌ We have no authentication |
| E2E Test | Full system via UI/API | ❌ We don't have software to test the Tauri app UI-wise |
4.3. The BaseTest Class and @BeforeEach
Purpose of the Class. This defines the basic operations that all tests execute.
In our case, we truncate the event table before each test so that the events dispatched by our commands are easy to assert on.
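A minimal sketch of such a base class (the event table name "event" and the use of JdbcTemplate are assumptions for illustration, not the project's actual definitions):

import org.junit.jupiter.api.BeforeEach
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.context.annotation.Import
import org.springframework.jdbc.core.JdbcTemplate

// Sketch only; adapt the table name and cleanup strategy to the real schema.
@SpringBootTest
@Import(TestcontainersConfiguration::class)
abstract class BaseTest {

    @Autowired
    protected lateinit var jdbcTemplate: JdbcTemplate

    @BeforeEach
    fun cleanEventTable() {
        // Start each test with an empty event table so the events dispatched
        // by the command under test are the only rows present.
        jdbcTemplate.execute("TRUNCATE TABLE \"event\" RESTART IDENTITY CASCADE")
    }
}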
We can also group a list of test classes into a suite and launch them together, as shown below.
Since @SelectClasses controls the execution order, it is possible to run a "resource initialization step" first and let the remaining tests reuse those resources.
import org.junit.platform.suite.api.SelectClasses
import org.junit.platform.suite.api.Suite
import org.junit.platform.suite.api.SuiteDisplayName

@Suite
@SuiteDisplayName("All Tests Suite")
@SelectClasses(
    InitializeResourcesTest::class,   // order 0
    DataBaseTest::class,              // order 1
    EventPersistenceTest::class,      // order 2
    CommandInvokerTest::class         // order 3
)
class AllTestsSuite
4.6. Assertions That We Can Use
With import org.junit.jupiter.api.Assertions.* we have:
val exception = assertThrows(IllegalArgumentException::class.java) {
    // some transaction
    ...
}
// we can even test the rolled-back state here
assertTrue(exception.message!!.contains("..."))
4.7. Examples
4.7.1. Integration Test
4.7.1.1. Simple Arrange, Act and Assert (AAA)
Straightforward tests can be defined easily via annotated methods:
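A minimal sketch of such a test (CreateScriptCommand, ScriptRepository, and the request/entity fields are hypothetical placeholders for the project's real types; constructor injection works thanks to the junit-platform.properties setting from Section 1.1.2):

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Sketch only: all domain types here are illustrative placeholders.
class CreateScriptCommandTest(
    private val createScriptCommand: CreateScriptCommand,
    private val scriptRepository: ScriptRepository
) : BaseTest() {

    @Test
    fun `creating a script persists it`() {
        // Arrange
        val request = CreateScriptRequest(name = "hello", content = "echo hello")

        // Act
        val created = createScriptCommand.execute(request)

        // Assert
        val persisted = scriptRepository.findById(created.id).orElseThrow()
        assertEquals("hello", persisted.name)
    }
}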