1. Video Results
1.1. Version 1: Launch Application via OpenJDK and java -jar
This video also demonstrates a Spring Boot backend being launched at a random port via the `java -jar` command when the application starts.
The backend took 5s to complete the startup process.
This also requires the user to have OpenJDK (or any other JDK) pre-installed, which is not reasonable.
1.2. Version 2: Improvement, Launch Application via Executable by GraalVM
We go a step further and build an executable with GraalVM to launch the backend, resulting in a much faster startup (0.3 s).
2. Project Repository
3. How to get Started with Tauri
Detailed documentation can be found in the official Tauri documentation.
But for us it is enough to know:
- how to instantiate the project and
- how to communicate with the Tauri backend.
Most of the documentation is about how Tauri works under the hood, which is not of interest if we just want to quickly build an app with this framework.
We start by executing
```bash
yarn create tauri-app
```
then we can follow the CLI prompts to create a project using React with TypeScript.
4. About the Tauri Application
4.1. Class and Entity Relational Diagram (Combined)
4.2. Project Structure
4.2.1. The frontend structure
```
shell-script-manager-tauri/
├── src/                      # React frontend
│   ├── app-component/        # Main UI components
│   │   ├── FolderColumn/     # Folder list & management
│   │   └── ScriptsColumn/    # Script list & execution
│   ├── components/           # Reusable UI components
│   ├── store/                # Redux store & API slices
│   │   ├── api/              # RTK Query endpoints
│   │   └── slices/           # Redux state slices
│   └── hooks/                # Custom React hooks
```
- We use `redux-toolkit`/`rtk-query` to manage our server (backend) state, and slices in `redux-toolkit` to manage our app state (the selected folder, booleans that trigger UI animations, etc.).
- We also bring `shadcn` into the application, as it provides customizable, fancy components.
This application has two backends; we introduce them in 〈4.2.2. The Tauri backend structure〉 and 〈4.2.3. The spring boot backend structure〉 respectively.
4.2.2. The Tauri backend structure
```
├── src-tauri/                 # Rust native layer
│   ├── src/lib.rs             # Core Tauri application logic
│   ├── prisma/schema.prisma   # Database schema definition
│   └── Cargo.toml             # Rust dependencies
```
- This backend is in charge of OS-level interaction between our desktop application and the system.
- For example, the menu bar, the tray icons, and even the permission to drag our custom title bar are configured in the Tauri backend.
- It also handles commands sent from the frontend whenever there is a system-level request (e.g., "execute the shell script displayed in the frontend").
4.2.3. The spring boot backend structure
```
├── backend-spring/                  # Spring Boot backend
│   ├── src/main/kotlin/
│   │   └── com/scriptmanager/
│   │       ├── controller/          # REST API endpoints
│   │       ├── common/
│   │       │   ├── entity/          # JPA entities
│   │       │   └── dto/             # Data transfer objects
│   │       └── repository/          # Spring Data repositories
│   └── build.gradle.kts             # Gradle build configuration
```
4.2.3.1. Tedious association table manipulation in Rust
This Spring Boot layer was previously a basic CRUD repository layer in the Tauri backend. However, doing CRUD without a good ORM in Rust is very tedious; even with a query builder, manipulating the association table eventually becomes verbose.
Handling domain models is not the strength of Rust; instead, our good old friend JPA in Spring Boot shines in this area.
4.2.3.2. No more association manipulation in JPA with DDD
Therefore we add a new layer to handle state-related domain logic. We don't even need to write queries when our @OneToMany and @ManyToOne mappings are properly written:
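For instance, here is a minimal sketch of what the folder side of the relation could look like (trimmed to the relevant fields; the cascade and fetch settings are illustrative rather than the project's exact configuration, and Jakarta Persistence / Spring Boot 3 is assumed):

```kotlin
import jakarta.persistence.*

@Entity
@Table(name = "scripts_folder")
data class ScriptsFolder(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Int? = null,

    @Column(name = "name", nullable = false)
    var name: String = "",
) {
    // Inverse side of ShellScript.scriptsFolder (see section 5.3). Hibernate
    // writes the rel_scriptsfolder_shellscript rows through the owning side,
    // so we never touch the association table by hand.
    @OneToMany(mappedBy = "scriptsFolder", cascade = [CascadeType.ALL], orphanRemoval = true)
    var shellScripts: MutableList<ShellScript> = mutableListOf()
}
```

Setting `script.scriptsFolder = folder` and saving the script is then enough for Hibernate to insert the corresponding association row.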
Each request to a controller should ideally be handled by an application service layer (some may call it a use case in the .NET community). For now, since our application is at the POC stage, the ugly pattern here will be refactored as the application grows.
Thanks to Spring Boot, we can now bring Domain Models and Value Objects into the application, which is beneficial for maintaining the code base in the long run.
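As a small, hypothetical illustration of what a Value Object could look like here (the `ScriptCommand` type below is not part of the current code base):

```kotlin
// Hypothetical value object: wraps the raw command string so its
// invariants live in one place instead of being checked ad hoc.
@JvmInline
value class ScriptCommand(val value: String) {
    init {
        require(value.isNotBlank()) { "A shell command must not be blank" }
    }
}
```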
4.3. Communication between React Frontend and Tauri Backend
4.3.1. Dispatch command from React frontend
Suppose we want to execute a command displayed in the frontend; we call:
```typescript
// Tauri 2: "@tauri-apps/api/core"; on Tauri 1 the path is "@tauri-apps/api/tauri"
import { invoke } from "@tauri-apps/api/core";

const handleRun = async () => {
  try {
    // Opens terminal and executes script
    await invoke("run_script", { command: script.command });
  } catch (error) {
    console.error("Failed to run script:", error);
  }
};
```
Next we handle this command in the Tauri backend:
4.3.2. Receive command in Tauri backend
In the Tauri backend we define a command handler:
```rust
#[tauri::command]
async fn run_script(command: String) {
    println!("Running script: {}", command);
    open_terminal_with_command(command);
}
```
and register it globally:
```rust
pub fn run() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![
            run_script,
            ...
        ])
        .setup(|app| {
            // ... initialization logic
            Ok(())
        })
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```
4.3.3. Tricky naming convention when backend's parameter name has an "_"
When we have the following command handler:
```rust
#[tauri::command]
async fn reorder_folders(
    from_index: usize,
    to_index: usize
) -> Result<(), String> {
    let repo = FolderRepository::new();
    repo.reorder_folders(from_index, to_index)
        .await
        .map_err(|e| format!("Failed to reorder folders: {}", e))?;
    Ok(())
}
```
In the frontend we need to write:
```typescript
await invoke(
  'reorder_folders',
  { fromIndex, toIndex }
);
```
This is because serde, the popular serialization/deserialization crate in Rust that Tauri uses for command arguments, expects the inputs in camelCase on the JavaScript side and automatically translates them into the snake_case parameter names.
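If we prefer to keep snake_case argument names on the frontend as well, Tauri's command macro lets us opt out of this conversion via `rename_all` (per the Tauri command documentation):

```rust
// With rename_all = "snake_case", the frontend passes { from_index, to_index } as-is.
#[tauri::command(rename_all = "snake_case")]
async fn reorder_folders(from_index: usize, to_index: usize) -> Result<(), String> {
    // ... same body as above
    Ok(())
}
```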
5. Schema Management and LLM Tooling
5.1. Schema Definition
5.1.1. What LLM can do
For existing schema migration tools in the Spring Boot ecosystem we mainly have:
- Flyway
- Liquibase
Both require manually scripting any change to the database schema and making the corresponding code changes in the entity model.
But with Prisma we can focus on schema design. We benefit from this approach by now being able to:
- feed the LLM our clear schema definition;
- let the LLM generate/modify our entity model in Spring Boot; and
- let `prisma` generate the database migration script for incremental updates of the schema.
5.1.2. Define schema and embed it into Rust script
Now our schema.prisma serves as good documentation of all of our tables for the LLM:
```prisma
1 generator client {
2   provider = "cargo prisma"
3   output   = "../src/prisma.rs"
4 }
```
As long as we understand what the auto-generated SQL migration script is doing, there is no harm in letting the framework generate it. We can even refine the SQL to match what we need.
Let's translate the diagram drawn in section 〈4.1. Class and Entity Relational Diagram (Combined)〉 into a schema definition:
```prisma
 5 datasource db {
 6   provider = "sqlite"
 7   url      = "file:../database.db"
 8 }
 9
10 model application_state {
11   id                    Int     @id @default(autoincrement())
12   last_opened_folder_id Int?
13   dark_mode             Boolean @default(false)
14   created_at            Float   @default(dbgenerated("(CAST((julianday('now') - 2440587.5) * 86400000.0 AS REAL))"))
15   created_at_hk         String  @default(dbgenerated("(strftime('%Y-%m-%d %H:%M:%S', datetime('now', '+8 hours')))"))
16 }
17
18 model scripts_folder {
19   id                            Int    @id @default(autoincrement())
20   name                          String
21   ordering                      Int
22   created_at                    Float  @default(dbgenerated("(CAST((julianday('now') - 2440587.5) * 86400000.0 AS REAL))"))
23   created_at_hk                 String @default(dbgenerated("(strftime('%Y-%m-%d %H:%M:%S', datetime('now', '+8 hours')))"))
24   rel_scriptsfolder_shellscript rel_scriptsfolder_shellscript[]
25
26   @@index([id])
27 }
28
29 model rel_scriptsfolder_shellscript {
30   id                Int    @id @default(autoincrement())
31   scripts_folder_id Int
32   shell_script_id   Int
33   created_at        Float  @default(dbgenerated("(CAST((julianday('now') - 2440587.5) * 86400000.0 AS REAL))"))
34   created_at_hk     String @default(dbgenerated("(strftime('%Y-%m-%d %H:%M:%S', datetime('now', '+8 hours')))"))
35   shell_script      shell_script   @relation(fields: [shell_script_id], references: [id])
36   scripts_folder    scripts_folder @relation(fields: [scripts_folder_id], references: [id])
37
38   @@index([scripts_folder_id])
39   @@index([shell_script_id])
40 }
41
42 model shell_script {
43   id                            Int    @id @default(autoincrement())
44   name                          String
45   command                       String
46   ordering                      Int
47   created_at                    Float  @default(dbgenerated("(CAST((julianday('now') - 2440587.5) * 86400000.0 AS REAL))"))
48   created_at_hk                 String @default(dbgenerated("(strftime('%Y-%m-%d %H:%M:%S', datetime('now', '+8 hours')))"))
49   rel_scriptsfolder_shellscript rel_scriptsfolder_shellscript[]
50
51   @@index([id])
52 }
```
5.2. Embed Schema Migration via prisma.rs
Also note that we require `cargo prisma` in lines 2-3 of section 〈5.1.2. Define schema and embed it into Rust script〉 to generate the schema-related definitions in `../src/prisma.rs`.
This will create an embedded SQL migration method in the prisma.rs file, and we can execute it to instantiate/update the database (see init_db below) in the startup script of our Tauri backend:
```rust
mod prisma;

pub fn init_db(app_handle: &tauri::AppHandle) -> Result<(), String> {
    let db_path = get_database_path(app_handle)?;
    let database_url = format!("file:{}", db_path);
    std::env::set_var("DATABASE_URL", &database_url);

    let rt_handle = RT_HANDLE
        .get()
        .ok_or_else(|| "Runtime not initialized".to_string())?;

    rt_handle.block_on(async move {
        let client = prisma::new_client_with_url(&database_url)
            .await
            .expect("Failed to create Prisma client");

        println!("Syncing database schema...");
        client
            ._db_push()
            .accept_data_loss()
            .await
            .expect("Failed to sync database schema");
        ...
```
5.3. Let LLM Generate Entity Classes from schema.prisma
Now simply ask our agent to generate the entity classes. For example:
```kotlin
@Entity
@GenerateDTO
@DynamicInsert
@Table(name = "shell_script", indexes = [Index(columnList = "id")])
data class ShellScript(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Int? = null,

    @Column(name = "name", nullable = false)
    var name: String = "",

    @Column(name = "command", nullable = false)
    var command: String = "",

    @Column(name = "ordering", nullable = false)
    var ordering: Int = 0,

    @Column(name = "created_at")
    val createdAt: Double? = null,

    @Column(name = "created_at_hk")
    val createdAtHk: String? = null,
) {
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinTable(
        name = "rel_scriptsfolder_shellscript",
        joinColumns = [JoinColumn(name = "shell_script_id", referencedColumnName = "id")],
        inverseJoinColumns = [JoinColumn(name = "scripts_folder_id", referencedColumnName = "id")]
    )
    var scriptsFolder: ScriptsFolder? = null
}
```
Here we manually add the @ManyToOne annotation as well as the aggregate relations, as the LLM cannot easily infer them without knowing the class diagram (which we drew in section 〈4.1. Class and Entity Relational Diagram (Combined)〉).
6. Bundling of the Application with Spring Boot Integration
6.1. Overview of Build Steps
```
┌───────────────────────────────────────────────────┐
│ 1. Write Kotlin Code (Spring Boot Backend)        │
└────────────────┬──────────────────────────────────┘
                 │
┌────────────────▼──────────────────────────────────┐
│ 2. Gradle Plugin: org.graalvm.buildtools.native   │
└────────────────┬──────────────────────────────────┘
                 │
┌────────────────▼──────────────────────────────────┐
│ 3. Run: ./gradlew nativeCompile                   │
│    - Analyzes all reachable code                  │
│    - Resolves reflection/resources                │
│    - Compiles to native machine code              │
└────────────────┬──────────────────────────────────┘
                 │
┌────────────────▼──────────────────────────────────┐
│ 4. Output: backend-native (executable)            │
│    Size: ~100MB                                   │
│    Location: build/native/nativeCompile/          │
└────────────────┬──────────────────────────────────┘
                 │
┌────────────────▼──────────────────────────────────┐
│ 5. Copy to Tauri Resources                        │
│    → src-tauri/resources/backend-spring/          │
└────────────────┬──────────────────────────────────┘
                 │
┌────────────────▼──────────────────────────────────┐
│ 6. Bundle with Tauri App                          │
│    → Final .app includes native binary            │
└────────────────┬──────────────────────────────────┘
                 │
┌────────────────▼──────────────────────────────────┐
│ 7. Run in Production                              │
│    Rust executes: ./backend-native --server.port=X│
│    No Java required!                              │
└───────────────────────────────────────────────────┘
```
6.2. The Build (Bundling) Script
6.3. GraalVM for Building Spring Boot as an Executable
6.3.1. The gradle task
```kotlin
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    ...
    id("org.graalvm.buildtools.native") version "0.10.1"
}

tasks.withType<KotlinCompile> {
    kotlinOptions {
        freeCompilerArgs += "-Xjsr305=strict"
        jvmTarget = "17"
    }
}

// GraalVM Native Image configuration
graalvmNative {
    binaries {
        named("main") {
            imageName.set("backend-native")
            mainClass.set("com.scriptmanager.ApplicationKt")
            buildArgs.add("--verbose")
            buildArgs.add("-H:+ReportExceptionStackTraces")
            buildArgs.add("--initialize-at-build-time=org.slf4j")
            buildArgs.add("--initialize-at-run-time=io.netty.handler.ssl")
            buildArgs.add("-H:+AddAllCharsets")
            buildArgs.add("-H:EnableURLProtocols=http,https")
        }
    }
}
```
Including this Gradle plugin creates a Gradle task for us:
```bash
export JAVA_HOME="/Library/Java/JavaVirtualMachines/graalvm-jdk-17/Contents/Home" && \
./gradlew clean nativeCompile
```
Note that we must have GraalVM installed; on macOS, we can run `brew install --cask graalvm-jdk`, as stated on the Homebrew page.
6.3.2. Registration for Class Reflection
6.3.2.1. What to include manually
In src/main/resources/META-INF/native-image/reflect-config.json we add:
```json
[
  {
    "name": "org.sqlite.JDBC",
    "allDeclaredConstructors": true,
    "allPublicConstructors": true,
    "allDeclaredMethods": true,
    "allPublicMethods": true
  },
  {
    "name": "org.sqlite.SQLiteConnection",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  },
  {
    "name": "org.hibernate.community.dialect.SQLiteDialect",
    "allDeclaredConstructors": true,
    "allPublicConstructors": true,
    "allDeclaredMethods": true
  }
]
```
These are external library classes that Spring's AOT (Ahead-of-Time) processing cannot see at compile time. By adding these class names to reflect-config.json, we are telling GraalVM:
"Please include the actual compiled machine code for this class AND keep all the metadata needed for reflection"
6.3.2.2. What is already included in the native version of reflect-config.json?
Spring Boot's AOT (Ahead-of-Time) processing sees the annotations and generates the reflection configuration automatically:
```kotlin
@RestController                     // <--- Spring sees this annotation
@RequestMapping("/scripts")
class ScriptController(             // <--- Spring sees this class name directly!
    private val scriptRepository: ShellScriptRepository,   // ← Direct reference!
    private val folderRepository: ScriptsFolderRepository  // ← Direct reference!
) {
    @GetMapping                     // <--- Spring sees this
    fun getAllScripts(): ApiResponse<List<ShellScriptDTO>> {  // ← Direct return type!
        val list = scriptRepository.findAllByOrderByOrderingAsc().map { it.toDTO() }
        return ApiResponse(list)    // ← Direct class usage!
    }
}
```
GraalVM can see this Spring-managed reflect-config.json at compile time. Combining the two reflect-config.json files, we end up with the following registered:
```
backend-native (native image)
├── Your application code
│   ├── Application.kt       → machine code ✅
│   ├── ScriptController.kt  → machine code ✅
│   └── ShellScript.kt       → machine code ✅
│
├── Spring Boot
│   └── Core framework       → machine code ✅
│
└── SQLiteDialect            → ✅ NOW INCLUDED!
    ├── Class bytecode compiled to native machine code ✅
    ├── Constructor signatures (metadata) ✅
    ├── Method signatures (metadata) ✅
    ├── Field information (metadata) ✅
    └── Reflection registry entry ✅
```
6.4. Dynamic Port and Path for DataSource in Spring Boot
Our frontend accesses the Spring Boot backend for state management differently in DEBUG and RELEASE mode:
```typescript
function getBackendUrl(getState: () => unknown): string {
  const state = getState() as RootState;
  const port = state.config.backendPort;
  return `http://localhost:${port}`;
}

export const httpBaseQuery = (): BaseQueryFn<
  HttpQueryArgs,
  unknown,
  HttpQueryError
> => {
  return async ({ url, method = 'GET', body, params }, api) => {
    try {
      const backendUrl = getBackendUrl(api.getState);
      const fullUrl = new URL(`${backendUrl}${url}`);
      ...
```
where in RELEASE mode the `backendPort` is obtained from the Tauri backend's lib.rs, which is responsible for:
- searching for an available port for our Spring Boot backend, and
- emitting the available port to the frontend via the IPC event system so that the frontend can update its Redux store; the frontend listens via the API
```typescript
import { listen } from "@tauri-apps/api/event";
```
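As a minimal sketch of the listening side (the event name `backend-port`, the store location, and the `setBackendPort` action are assumptions, not the project's exact names):

```typescript
import { listen } from "@tauri-apps/api/event";
import { store } from "@/store";                               // assumed store export
import { setBackendPort } from "@/store/slices/configSlice";   // assumed config slice action

// Update the Redux store whenever the Tauri backend announces the port.
export async function listenForBackendPort(): Promise<void> {
  await listen<number>("backend-port", (event) => {
    store.dispatch(setBackendPort(event.payload));
  });
}
```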
Moreover in our frontend redux store:
```typescript
const initialState: ConfigState = {
  backendPort: import.meta.env.DEV ? 7070 : 0,
};
```
Therefore:
- In `DEBUG` mode we always have a fixed port `7070`.
- In `RELEASE` mode the port will vary (a sketch of how the Tauri backend might find and announce it follows below).
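A minimal sketch of that port search and announcement in lib.rs (the helper names and the `backend-port` event are illustrative, and Tauri 2's `Emitter::emit` is assumed; on Tauri 1 the equivalent call is `emit_all`):

```rust
use std::net::TcpListener;
use tauri::{AppHandle, Emitter};

// Ask the OS for any free port by binding to port 0, then release it.
// (There is a small race window before Spring Boot binds the port.)
fn find_free_port() -> Result<u16, String> {
    let listener = TcpListener::bind("127.0.0.1:0").map_err(|e| e.to_string())?;
    let port = listener.local_addr().map_err(|e| e.to_string())?.port();
    Ok(port)
}

// Tell the React frontend which port the Spring Boot backend will use.
fn announce_backend_port(app: &AppHandle, port: u16) -> Result<(), String> {
    app.emit("backend-port", port).map_err(|e| e.to_string())
}
```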
Also, in the Tauri backend we start our Spring Boot application via:
```rust
let child = Command::new(&native_binary)
    .arg(format!("--server.port={}", port))
    .arg(format!("--spring.datasource.url=jdbc:sqlite:{}", db_path))
    .spawn()
    .map_err(|e| format!("Failed to start Spring Boot backend: {}", e))?;
```
and eventually our Spring Boot application picks up this server.port and datasource path during its launch.
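Because these arrive as ordinary Spring command-line properties, nothing special is needed on the Kotlin side; if we ever want to inspect the resolved values, a component like the following (hypothetical, not part of the project) would do:

```kotlin
import org.springframework.beans.factory.annotation.Value
import org.springframework.boot.context.event.ApplicationReadyEvent
import org.springframework.context.event.EventListener
import org.springframework.stereotype.Component

@Component
class StartupLogger(
    @Value("\${server.port}") private val port: Int,
    @Value("\${spring.datasource.url}") private val datasourceUrl: String,
) {
    // Logs the values resolved from the --server.port / --spring.datasource.url args.
    @EventListener(ApplicationReadyEvent::class)
    fun logConfig() {
        println("Spring Boot backend on port $port, datasource: $datasourceUrl")
    }
}
```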
6.5. Start Bundling
Let's
```bash
yarn bundle
```
7. Appendix
7.1. On Various Sizes of the Application
7.1.1. File Size
- Without Spring Boot: the application is roughly 20 MB.
- With Spring Boot: the application grows to about 200 MB.
7.1.2. Memory Consumption
7.2. On Getting App Icon
7.2.1. My suggestion
I got my icon from https://icons8.com/
7.2.2. Trick to get the icon of various sizes
Once you have spotted your favourite icon, click on Download:
You will find many download restrictions, but you can choose Link (CDN):
You can find the link https://img.icons8.com/keek/100/documents-folder.png
Now you can adjust the value from 100 to 1000 😂:
https://img.icons8.com/keek/1000/documents-folder.png