1. Motivation
After watching the video from Milan Jovanović, I realized that I need a way to trace my database queries to identify performance bottlenecks, i.e. to answer questions like:
- Am I missing an index?
- What is the execution plan that is being used?
- Or is the created index actually being used?
and more importantly:
- Is adding an index making the query worse? (It can in some cases, such as paging with OFFSET and LIMIT, which is why people invented cursor pagination.)
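To make that last point concrete, here is a minimal sketch in Python using the stdlib sqlite3 module (SQLite, the database traced below); the `script` table is made up for the illustration. It compares OFFSET paging with cursor (keyset) paging and shows how their plans differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE script (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO script(name) VALUES (?)",
                 [(f"s{i}",) for i in range(1000)])

# OFFSET paging: the engine still has to step over the 500 skipped rows,
# so cost grows with the page number even with an index on the sort key.
offset_page = conn.execute(
    "SELECT id, name FROM script ORDER BY id LIMIT 10 OFFSET 500"
).fetchall()

# Cursor (keyset) paging: seek past the last-seen id via the primary key,
# so cost stays proportional to the page size.
cursor_page = conn.execute(
    "SELECT id, name FROM script WHERE id > ? ORDER BY id LIMIT 10", (500,)
).fetchall()

print(offset_page == cursor_page)  # True: same page, very different plans

# EXPLAIN QUERY PLAN shows the difference (detail is column 4 of each row).
for sql, params in [
    ("SELECT id FROM script ORDER BY id LIMIT 10 OFFSET 500", ()),
    ("SELECT id FROM script WHERE id > ? ORDER BY id LIMIT 10", (500,)),
]:
    print(conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()[0][3])
```

The second plan is a SEARCH on the integer primary key instead of a SCAN, which is exactly the kind of difference the tracing setup below makes visible.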
In Spring, we don't have an equivalent of Aspire from the .NET world as an all-in-one observability tool, but we do have tight integration of datasource-micrometer with OpenTelemetry:
Terminology. A span is a single unit of work.

```
HTTP request comes in
        ↓
[Micrometer Tracing] creates root span
        ↓
[OpenTelemetry] instruments the request
        ↓
Database query executes
        ↓
[datasource-micrometer] intercepts JDBC calls
        ↓
    Creates child spans:
    - connection
    - query execution
    - result-set processing
        ↓
[OpenTelemetry] collects all spans
        ↓
[Your TreeSpanExporter] formats and prints them
        ↓
[OTLP Exporter] sends to Jaeger (if endpoint configured)
```
2. Logging Example
2.1. Logging Which Indicates the N+1 Problem
```
════════════════════════════════════════════════════════════════════════════════════════════════════
📊 TRACE: 7ef5c9d8681e2093e8d9c5ebe8179f43
════════════════════════════════════════════════════════════════════════════════════════════════════
🔴 🌐 http get /scripts/history (118ms)
🔴   🔌 connection (103ms)
🟢     📝 query (6ms)
🟡     📊 result-set (16ms)
🟢     📝 query (0ms)
🟢     📊 result-set (3ms)
🟢     📝 query (0ms)
🟢     📊 result-set (0ms)
────────────────────────────────────────────────────────────────────────────────────────────────────
📝 QUERIES & EXECUTION PLANS:

┌─ Query #1 (6ms) ─────────────────────────────────────────────────────────────────
│
│  📝 SQL:
│  select hss1_0.id,hss1_0.created_at,hss1_0.created_at_hk,hss1_0.execution_time,ss1_0.id,ss1_0.command,ss1_0.created_at,ss1_0.created_at_hk,ss1_0.is_markdown,ss1_0.locked,ss1_0.name,ss1_0.ordering,ss1_0.show_shell,ss1_2.script_ai_config_id,ss1_1.scripts_folder_id,pf1_0.id,pf1_0.created_at,pf1_0.created_at_hk,pf1_0.name,pf1_0.ordering,pf1_1.parent_folder_id,pf2_0.id,pf2_0.created_at,pf2_0.created_at_hk,pf2_0.name,pf2_0.ordering,pf2_1.parent_folder_id,pf2_2.workspace_id,pf1_2.workspace_id,hss1_0.shell_script_id
│    FROM historical_shell_script hss1_0
│    LEFT JOIN (shell_script ss1_0
│    LEFT JOIN rel_scriptsfolder_shellscript ss1_1 on ss1_0.id=ss1_1.shell_script_id
│    LEFT JOIN rel_shellscript_aiconfig ss1_2 on ss1_0.id=ss1_2.shell_script_id) on ss1_0.id=hss1_0.shell_script_id
│    LEFT JOIN (scripts_folder pf1_0
│    LEFT JOIN rel_folder_folder pf1_1 on pf1_0.id=pf1_1.child_folder_id
│    LEFT JOIN rel_workspace_folder pf1_2 on pf1_0.id=pf1_2.folder_id) on pf1_0.id=ss1_1.scripts_folder_id
│    LEFT JOIN (scripts_folder pf2_0
│    LEFT JOIN rel_folder_folder pf2_1 on pf2_0.id=pf2_1.child_folder_id
│    LEFT JOIN rel_workspace_folder pf2_2 on pf2_0.id=pf2_2.folder_id) on pf2_0.id=pf1_1.parent_folder_id
│    ORDER BY hss1_0.execution_time desc
│    LIMIT 10
│
│  📋 Execution Plan:
│    MATERIALIZE (join-1)
│    SCAN ss1_0
│    SEARCH ss1_1 USING INDEX rel_scriptsfolder_shellscript_shell_script_id_idx (shell_script_id=?) LEFT-JOIN
│    BLOOM FILTER ON ss1_2 (shell_script_id=?)
│    SEARCH ss1_2 USING AUTOMATIC COVERING INDEX (shell_script_id=?) LEFT-JOIN
│    MATERIALIZE (join-2)
│    SCAN pf1_0
│    SEARCH pf1_1 USING INDEX rel_folder_folder_child_folder_id_idx (child_folder_id=?) LEFT-JOIN
│    SEARCH pf1_2 USING INDEX rel_workspace_folder_folder_id_idx (folder_id=?) LEFT-JOIN
│    MATERIALIZE (join-3)
│    SCAN pf2_0
│    SEARCH pf2_1 USING INDEX rel_folder_folder_child_folder_id_idx (child_folder_id=?) LEFT-JOIN
│    SEARCH pf2_2 USING INDEX rel_workspace_folder_folder_id_idx (folder_id=?) LEFT-JOIN
│    SCAN hss1_0
│    BLOOM FILTER ON (join-1) (id=?)
│    SEARCH (join-1) USING AUTOMATIC COVERING INDEX (id=?) LEFT-JOIN
│    BLOOM FILTER ON (join-2) (id=?)
│    SEARCH (join-2) USING AUTOMATIC COVERING INDEX (id=?) LEFT-JOIN
│    BLOOM FILTER ON (join-3) (id=?)
│    SEARCH (join-3) USING AUTOMATIC COVERING INDEX (id=?) LEFT-JOIN
│    USE TEMP B-TREE FOR ORDER BY
│
│
│  ⚠️ 📦 Temporary B-tree created. Query might benefit from an index.
│  ⚠️ 📊 Sorting without index. Consider adding index on ORDER BY column.
└───────────────────────────────────────────────────────────────────────────────────────────────────

┌─ Query #2 (0ms) ─────────────────────────────────────────────────────────────────
│
│  📝 SQL:
│  select w1_0.id,w1_0.created_at,w1_0.created_at_hk,w1_0.name,w1_0.ordering
│    FROM workspace w1_0
│    WHERE w1_0.id=?
│
│  📋 Execution Plan:
│    SEARCH w1_0 USING INTEGER PRIMARY KEY (rowid=?)
│
└───────────────────────────────────────────────────────────────────────────────────────────────────

┌─ Query #3 (0ms) ─────────────────────────────────────────────────────────────────
│
│  📝 SQL:
│  select w1_0.id,w1_0.created_at,w1_0.created_at_hk,w1_0.name,w1_0.ordering
│    FROM workspace w1_0
│    WHERE w1_0.id=?
│
│  📋 Execution Plan:
│    SEARCH w1_0 USING INTEGER PRIMARY KEY (rowid=?)
│
└───────────────────────────────────────────────────────────────────────────────────────────────────

────────────────────────────────────────────────────────────────────────────────────────────────────
⚠️ N+1 QUERY PROBLEM DETECTED!

   🔴 Query executed 2 times:
   select w1_0.id,w1_0.created_at,w1_0.created_at_hk,w1_0.name,w1_0.ordering
     FROM workspace w1_0
     WHERE w1_0.id=?

   📋 Execution Plan for this repeated query:
     SEARCH w1_0 USING INTEGER PRIMARY KEY (rowid=?)


   💡 Tip: Use JOIN FETCH or @EntityGraph to load related data in one query
────────────────────────────────────────────────────────────────────────────────────────────────────
📈 Summary: Total 8 spans, 118ms total time
🐌 Slow queries (>1ms): 1 queries
   → 6ms: select hss1_0.id,hss1_0.created_at,hss1_0.created_at_hk,hss1_0.execution_time,ss...
💾 Query plan cache: 2 unique queries cached
════════════════════════════════════════════════════════════════════════════════════════════════════
```
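The tip in the log above can be made concrete. Here is a minimal, language-agnostic sketch (Python with the stdlib sqlite3 module; the workspace/script schema is simplified from the traces above) of what the N+1 pattern costs versus a single JOIN, which is roughly what JOIN FETCH generates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE workspace (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE script (id INTEGER PRIMARY KEY, name TEXT,
                     workspace_id INTEGER REFERENCES workspace(id));
INSERT INTO workspace(name) VALUES ('w1'), ('w2');
INSERT INTO script(name, workspace_id) VALUES ('a', 1), ('b', 2), ('c', 1);
""")

# N+1 pattern: one query for the scripts, then one extra query per row
# to fetch its workspace -- 1 + 3 = 4 round trips here.
scripts = conn.execute("SELECT id, name, workspace_id FROM script").fetchall()
n_plus_one = [
    (name,
     conn.execute("SELECT name FROM workspace WHERE id = ?",
                  (ws_id,)).fetchone()[0])
    for _, name, ws_id in scripts
]

# Single JOIN: the same data in one round trip.
joined = conn.execute("""
    SELECT s.name, w.name FROM script s
    JOIN workspace w ON w.id = s.workspace_id
""").fetchall()

print(sorted(n_plus_one) == sorted(joined))  # True: same rows, 4x fewer queries
```

The exporter above flags the repeated `SELECT ... FROM workspace WHERE id=?` precisely because it matches the per-row half of this pattern.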
2.2. After N+1 Problem is Resolved
Remark. The connection time is reduced simply because this is a second request, so a warmed-up connection already exists in the connection pool.
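If you want the pool warmed up before the first request rather than relying on a prior one, HikariCP's minimum-idle setting is the usual knob. A sketch (the values here are illustrative, not from my actual setup):

```yaml
spring:
  datasource:
    hikari:
      minimum-idle: 2            # keep a couple of connections open even when idle
      initialization-fail-timeout: 1
```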
```
════════════════════════════════════════════════════════════════════════════════════════════════════
📊 TRACE: da3252cf2bb102a912bad8bca18570b8
════════════════════════════════════════════════════════════════════════════════════════════════════
🟡 🌐 http get /scripts/history (19ms)
🟡   🔌 connection (15ms)
🟢     📝 query (1ms)
🟢     📊 result-set (6ms)
────────────────────────────────────────────────────────────────────────────────────────────────────
📝 QUERIES & EXECUTION PLANS:

┌─ Query #1 (1ms) ─────────────────────────────────────────────────────────────────
│
│  📝 SQL:
│  select hss1_0.id,hss1_0.created_at,hss1_0.created_at_hk,hss1_0.execution_time,ss1_0.id,ss1_0.command,ss1_0.created_at,ss1_0.created_at_hk,ss1_0.is_markdown,ss1_0.locked,ss1_0.name,ss1_0.ordering,ss1_0.show_shell,ss1_2.script_ai_config_id,ss1_1.scripts_folder_id,pf1_0.id,pf1_0.created_at,pf1_0.created_at_hk,pf1_0.name,pf1_0.ordering,pf1_2.parent_folder_id,pf2_0.id,pf2_0.created_at,pf2_0.created_at_hk,pf2_0.name,pf2_0.ordering,pf2_2.parent_folder_id,pf2_1.workspace_id,pw2_0.id,pw2_0.created_at,pw2_0.created_at_hk,pw2_0.name,pw2_0.ordering,pf1_1.workspace_id,pw1_0.id,pw1_0.created_at,pw1_0.created_at_hk,pw1_0.name,pw1_0.ordering,hss1_0.shell_script_id
│    FROM historical_shell_script hss1_0
│    LEFT JOIN (shell_script ss1_0
│    LEFT JOIN rel_scriptsfolder_shellscript ss1_1 on ss1_0.id=ss1_1.shell_script_id
│    LEFT JOIN rel_shellscript_aiconfig ss1_2 on ss1_0.id=ss1_2.shell_script_id) on ss1_0.id=hss1_0.shell_script_id
│    LEFT JOIN (scripts_folder pf1_0
│    LEFT JOIN rel_workspace_folder pf1_1 on pf1_0.id=pf1_1.folder_id
│    LEFT JOIN rel_folder_folder pf1_2 on pf1_0.id=pf1_2.child_folder_id) on pf1_0.id=ss1_1.scripts_folder_id
│    LEFT JOIN workspace pw1_0 on pw1_0.id=pf1_1.workspace_id
│    LEFT JOIN (scripts_folder pf2_0
│    LEFT JOIN rel_workspace_folder pf2_1 on pf2_0.id=pf2_1.folder_id
│    LEFT JOIN rel_folder_folder pf2_2 on pf2_0.id=pf2_2.child_folder_id) on pf2_0.id=pf1_2.parent_folder_id
│    LEFT JOIN workspace pw2_0 on pw2_0.id=pf2_1.workspace_id
│    ORDER BY hss1_0.execution_time desc
│    LIMIT 10
│
│  📋 Execution Plan:
│    MATERIALIZE (join-1)
│    SCAN ss1_0
│    SEARCH ss1_1 USING INDEX rel_scriptsfolder_shellscript_shell_script_id_idx (shell_script_id=?) LEFT-JOIN
│    BLOOM FILTER ON ss1_2 (shell_script_id=?)
│    SEARCH ss1_2 USING AUTOMATIC COVERING INDEX (shell_script_id=?) LEFT-JOIN
│    MATERIALIZE (join-2)
│    SCAN pf1_0
│    SEARCH pf1_1 USING INDEX rel_workspace_folder_folder_id_idx (folder_id=?) LEFT-JOIN
│    SEARCH pf1_2 USING INDEX rel_folder_folder_child_folder_id_idx (child_folder_id=?) LEFT-JOIN
│    MATERIALIZE (join-3)
│    SCAN pf2_0
│    SEARCH pf2_1 USING INDEX rel_workspace_folder_folder_id_idx (folder_id=?) LEFT-JOIN
│    SEARCH pf2_2 USING INDEX rel_folder_folder_child_folder_id_idx (child_folder_id=?) LEFT-JOIN
│    SCAN hss1_0
│    BLOOM FILTER ON (join-1) (id=?)
│    SEARCH (join-1) USING AUTOMATIC COVERING INDEX (id=?) LEFT-JOIN
│    BLOOM FILTER ON (join-2) (id=?)
│    SEARCH (join-2) USING AUTOMATIC COVERING INDEX (id=?) LEFT-JOIN
│    SEARCH pw1_0 USING INTEGER PRIMARY KEY (rowid=?) LEFT-JOIN
│    BLOOM FILTER ON (join-3) (id=?)
│    SEARCH (join-3) USING AUTOMATIC COVERING INDEX (id=?) LEFT-JOIN
│    SEARCH pw2_0 USING INTEGER PRIMARY KEY (rowid=?) LEFT-JOIN
│    USE TEMP B-TREE FOR ORDER BY
│
│
│  ⚠️ 📦 Temporary B-tree created. Query might benefit from an index.
│  ⚠️ 📊 Sorting without index. Consider adding index on ORDER BY column.
└───────────────────────────────────────────────────────────────────────────────────────────────────

────────────────────────────────────────────────────────────────────────────────────────────────────
📈 Summary: Total 4 spans, 19ms total time
💾 Query plan cache: 1 unique queries cached
════════════════════════════════════════════════════════════════════════════════════════════════════
```
3. Configuration
3.1. Gradle Dependencies
```kotlin
// Core dependency
implementation("io.micrometer:micrometer-core")

// Micrometer Tracing with OpenTelemetry
implementation("io.micrometer:micrometer-tracing-bridge-otel")

// For logging traces to console (development)
implementation("io.opentelemetry:opentelemetry-sdk:1.33.0")
implementation("io.opentelemetry:opentelemetry-exporter-logging:1.33.0")

// For database query tracing
implementation("net.ttddyy.observation:datasource-micrometer-spring-boot:1.0.3")
```
3.2. application.yml
3.2.1. Logging in Console
```yaml
logging:
  pattern:
    console: '%d{HH:mm:ss.SSS} %-5level %logger{35} - %msg%n'
  level:
    com.scriptmanager.common.config.BetterTreeSpanExporter: INFO

management:
  tracing:
    sampling:
      probability: 1.0
  otlp:
    tracing:
      endpoint: ""
```
3.2.2. Logging in External Service
Note that management.otlp.tracing.endpoint in 〈3.2.1. Logging in Console〉 is an empty string, which means everything is logged to the console.
To send traces to an external service instead, we need to set a non-empty OTLP endpoint pointing at:
- Jaeger
- Zipkin
- or a vendor like Datadog/New Relic
Let's take Jaeger as an example. In build.gradle.kts we add
```kotlin
implementation("io.opentelemetry:opentelemetry-exporter-otlp:1.42.1")
```
and update application.yml to:
```yaml
  otlp:
    tracing:
      endpoint: "http://localhost:4318/v1/traces"
```
Now spin up Jaeger, which receives OTLP traces on port 4318:
```yaml
version: '3.8'

services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: jaeger
    restart: unless-stopped
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - "16686:16686" # Jaeger UI
      - "4317:4317"   # OTLP gRPC receiver
      - "4318:4318"   # OTLP HTTP receiver
    networks:
      - observability

networks:
  observability:
    driver: bridge
```
3.3. The TracingConfig Class
The following was vibe-coded via trial-and-error until I got a satisfactory logging result:
```kotlin
package com.scriptmanager.common.config

import io.opentelemetry.api.common.AttributeKey
import io.opentelemetry.sdk.common.CompletableResultCode
import io.opentelemetry.sdk.trace.data.SpanData
import io.opentelemetry.sdk.trace.export.SpanExporter
import org.slf4j.LoggerFactory
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.TimeUnit
import javax.sql.DataSource


@Configuration
class TracingConfig(
    private val dataSource: DataSource
) {

    @Bean
    fun betterTreeSpanExporter(): SpanExporter {
        return BetterTreeSpanExporter(dataSource)
    }
}


class BetterTreeSpanExporter(
    private val dataSource: DataSource
) : SpanExporter {
    private val logger = LoggerFactory.getLogger(BetterTreeSpanExporter::class.java)
    private val traceBuffer = ConcurrentHashMap<String, MutableList<SpanData>>()
    private val queryPlanCache = ConcurrentHashMap<String, QueryPlanInfo>()

    data class QueryPlanInfo(
        val plan: String,
        val warnings: List<String>
    )

    override fun export(spans: Collection<SpanData>): CompletableResultCode {
        spans.forEach { span ->
            traceBuffer.getOrPut(span.traceId) { mutableListOf() }.add(span)

            if (span.parentSpanId == "0000000000000000" && span.kind.name == "SERVER") {
                printCompleteTrace(span.traceId)
                traceBuffer.remove(span.traceId)
            }
        }

        return CompletableResultCode.ofSuccess()
    }

    private fun printCompleteTrace(traceId: String) {
        val allSpans = traceBuffer[traceId] ?: return

        logger.info("\n\n" + "═".repeat(100))
        logger.info("📊 TRACE: $traceId")
        logger.info("═".repeat(100))

        val spanMap = allSpans.associateBy { it.spanId }
        val rootSpan = allSpans.find {
            it.parentSpanId == "0000000000000000" && it.kind.name == "SERVER"
        }

        if (rootSpan != null) {
            printSpanTree(rootSpan, spanMap, 0)

            // Show all queries WITH execution plans grouped together
            logger.info("─".repeat(100))
            logger.info("📝 QUERIES & EXECUTION PLANS:")
            logger.info("")

            allSpans.filter { it.name == "query" }.forEachIndexed { index, span ->
                val duration = TimeUnit.NANOSECONDS.toMillis(span.endEpochNanos - span.startEpochNanos)
                val query = span.attributes.get(AttributeKey.stringKey("jdbc.query[0]")) ?: "unknown"
                val rowsAffected = span.attributes.get(AttributeKey.longKey("jdbc.row-affected"))

                // Format SQL nicely
                val formattedQuery = query
                    .replace(Regex("\\s+"), " ")
                    .replace(" from ", "\n  FROM ")
                    .replace(" left join ", "\n  LEFT JOIN ")
                    .replace(" inner join ", "\n  INNER JOIN ")
                    .replace(" where ", "\n  WHERE ")
                    .replace(" order by ", "\n  ORDER BY ")
                    .replace(" limit ", "\n  LIMIT ")
                    .replace(" group by ", "\n  GROUP BY ")

                // Get execution plan
                val planInfo = getQueryPlanInfo(query)

                // Print everything together in a box
                logger.info("┌─ Query #${index + 1} (${duration}ms) " + "─".repeat(80 - "Query #${index + 1} (${duration}ms) ".length))
                logger.info("│")
                logger.info("│  📝 SQL:")
                formattedQuery.lines().forEach { line ->
                    logger.info("│  $line")
                }

                if (rowsAffected != null) {
                    logger.info("│  → Affected: $rowsAffected rows")
                }

                logger.info("│")
                logger.info("│  📋 Execution Plan:")
                if (planInfo.plan.isNotBlank()) {
                    planInfo.plan.lines().forEach { line ->
                        logger.info("│  $line")
                    }

                    // Show warnings immediately after the plan
                    if (planInfo.warnings.isNotEmpty()) {
                        logger.info("│")
                        planInfo.warnings.forEach { warning ->
                            logger.warn("│  ⚠️ $warning")
                        }
                    }
                } else {
                    logger.info("│  (No plan available)")
                }

                logger.info("└" + "─".repeat(99))
                logger.info("")
            }

            // Detect N+1
            detectNPlusOne(allSpans)

            val totalTime = TimeUnit.NANOSECONDS.toMillis(
                rootSpan.endEpochNanos - rootSpan.startEpochNanos
            )

            logger.info("─".repeat(100))
            logger.info("📈 Summary: Total ${allSpans.size} spans, ${totalTime}ms total time")

            val slowQueries = allSpans
                .filter { it.name == "query" }
                .map {
                    val duration = TimeUnit.NANOSECONDS.toMillis(it.endEpochNanos - it.startEpochNanos)
                    Pair(it, duration)
                }
                .filter { it.second > 1 }
                .sortedByDescending { it.second }

            if (slowQueries.isNotEmpty()) {
                logger.info("🐌 Slow queries (>1ms): ${slowQueries.size} queries")
                slowQueries.take(3).forEach { (span, duration) ->
                    val query = span.attributes.get(AttributeKey.stringKey("jdbc.query[0]"))
                        ?.take(80) ?: "unknown"
                    logger.info("   → ${duration}ms: $query...")
                }
            }

            // Show query plan cache stats
            logger.info("💾 Query plan cache: ${queryPlanCache.size} unique queries cached")
        }

        logger.info("═".repeat(100) + "\n")
    }

    private fun getQueryPlanInfo(sql: String): QueryPlanInfo {
        if (sql == "unknown" || sql.startsWith("EXPLAIN", ignoreCase = true)) {
            return QueryPlanInfo("", emptyList())
        }

        val normalizedSql = normalizeSql(sql)

        return queryPlanCache.computeIfAbsent(normalizedSql) {
            try {
                val plan = executeExplainQuery(sql)
                val warnings = analyzeQueryPlan(plan)
                QueryPlanInfo(plan, warnings)
            } catch (e: Exception) {
                logger.debug("Could not get query plan for query: ${e.message}")
                QueryPlanInfo("   Unable to get query plan: ${e.message}", emptyList())
            }
        }
    }

    private fun executeExplainQuery(sql: String): String {
        return dataSource.connection.use { conn ->
            conn.createStatement().use { stmt ->
                // SQLite uses "EXPLAIN QUERY PLAN"
                val explainSql = "EXPLAIN QUERY PLAN $sql"
                val rs = stmt.executeQuery(explainSql)

                buildString {
                    while (rs.next()) {
                        // SQLite EXPLAIN QUERY PLAN returns: id, parent, notused, detail
                        val detail = rs.getString("detail")
                        appendLine("   $detail")
                    }
                }
            }
        }
    }

    private fun normalizeSql(sql: String): String {
        // Remove parameter values and extra whitespace to create cache key
        return sql
            .replace(Regex("=\\s*'[^']*'"), "= ?") // Remove string literals
            .replace(Regex("=\\s*\\d+"), "= ?")    // Remove numbers
            .replace(Regex("=\\s*\\?"), "= ?")     // Normalize placeholders
            .replace(Regex("\\s+"), " ")           // Normalize whitespace
            .trim()
    }

    private fun analyzeQueryPlan(plan: String): List<String> {
        val warnings = mutableListOf<String>()

        // Check for full table scan
        if (plan.contains("SCAN TABLE", ignoreCase = true) &&
            !plan.contains("USING INDEX", ignoreCase = true)
        ) {
            warnings.add("🐌 Full table scan detected! Consider adding an index.")
        }

        // Check for temporary B-tree
        if (plan.contains("TEMP B-TREE", ignoreCase = true)) {
            warnings.add("📦 Temporary B-tree created. Query might benefit from an index.")
        }

        // Check for sorting without index
        if (plan.contains("USE TEMP B-TREE FOR ORDER BY", ignoreCase = true)) {
            warnings.add("📊 Sorting without index. Consider adding index on ORDER BY column.")
        }

        // Check for grouping without index
        if (plan.contains("USE TEMP B-TREE FOR GROUP BY", ignoreCase = true)) {
            warnings.add("📊 Grouping without index. Consider adding index on GROUP BY column.")
        }

        // Check for multiple table scans
        val scanCount = Regex("SCAN TABLE", RegexOption.IGNORE_CASE)
            .findAll(plan)
            .count()
        if (scanCount > 2) {
            warnings.add("⚠️ Multiple table scans ($scanCount) detected. Performance may be poor.")
        }

        return warnings
    }

    private fun detectNPlusOne(spans: List<SpanData>) {
        val queries = spans.filter { it.name == "query" }

        val queryGroups = queries.groupBy { span ->
            span.attributes.get(AttributeKey.stringKey("jdbc.query[0]"))
                ?.replace(Regex("\\s+"), " ")
        }

        val repeatedQueries = queryGroups.filter { it.value.size > 1 }

        if (repeatedQueries.isNotEmpty()) {
            logger.info("─".repeat(100))
            logger.info("⚠️ N+1 QUERY PROBLEM DETECTED!")
            repeatedQueries.forEach { (query, spanList) ->
                logger.info("")
                logger.info("   🔴 Query executed ${spanList.size} times:")

                val formatted = query?.replace(Regex("\\s+"), " ")
                    ?.replace(" from ", "\n     FROM ")
                    ?.replace(" where ", "\n     WHERE ") ?: "unknown"

                logger.info("   $formatted")

                // Show execution plan for repeated query
                val planInfo = getQueryPlanInfo(query ?: "")
                if (planInfo.plan.isNotBlank()) {
                    logger.info("")
                    logger.info("   📋 Execution Plan for this repeated query:")
                    planInfo.plan.lines().forEach { line ->
                        logger.info("   $line")
                    }
                }

                logger.info("")
                logger.info("   💡 Tip: Use JOIN FETCH or @EntityGraph to load related data in one query")
            }
        }
    }

    private fun printSpanTree(
        span: SpanData,
        spanMap: Map<String, SpanData>,
        level: Int
    ) {
        val durationMs = TimeUnit.NANOSECONDS.toMillis(
            span.endEpochNanos - span.startEpochNanos
        )

        val indent = "  ".repeat(level)
        val emoji = when {
            span.kind.name == "SERVER" -> "🌐"
            span.name == "connection" -> "🔌"
            span.name == "query" -> "📝"
            span.name == "result-set" -> "📊"
            else -> " "
        }

        val rowCount = span.attributes.get(AttributeKey.longKey("jdbc.row-count"))

        val colorCode = when {
            durationMs > 50 -> "🔴"
            durationMs > 10 -> "🟡"
            else -> "🟢"
        }

        val info = buildString {
            append("$colorCode $indent$emoji ${span.name}")
            append(" (${durationMs}ms)")

            if (rowCount != null) {
                append(" → $rowCount rows")
            }
        }

        logger.info(info)

        val children = spanMap.values
            .filter { it.parentSpanId == span.spanId }
            .sortedBy { it.startEpochNanos }

        children.forEach { child ->
            printSpanTree(child, spanMap, level + 1)
        }
    }

    override fun flush(): CompletableResultCode = CompletableResultCode.ofSuccess()
    override fun shutdown(): CompletableResultCode = CompletableResultCode.ofSuccess()
}
```
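To see what the normalizeSql cache key actually does, here is a quick sketch of the same three regex replacements in Python (a mirror of the Kotlin above for illustration, not part of the application):

```python
import re

def normalize_sql(sql: str) -> str:
    """Strip literal values so that the same statement with different
    parameters maps to a single query-plan cache entry."""
    sql = re.sub(r"=\s*'[^']*'", "= ?", sql)  # remove string literals
    sql = re.sub(r"=\s*\d+", "= ?", sql)      # remove numeric literals
    sql = re.sub(r"=\s*\?", "= ?", sql)       # normalize existing placeholders
    return re.sub(r"\s+", " ", sql).strip()   # collapse whitespace

a = normalize_sql("SELECT * FROM workspace  WHERE id = 42")
b = normalize_sql("SELECT * FROM workspace WHERE id = 7")
print(a == b)  # True: both normalize to the same cache key
```

This is why the summary line reports "2 unique queries cached" even when the workspace lookup ran with different ids.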













