Database Migrations in Rust: A Tauri Desktop Application Approach
Introduction
When building desktop applications with local data persistence, implementing a solid database migration system is crucial. In this post, I'll explore how we approached database migrations in Syndeos, our Tauri-based VPS server management application.
The Problem
Desktop applications face a unique challenge when it comes to database management: there's no central server handling migrations, and users might skip multiple versions between updates. Our solution needed to:
- Track the current database version
- Apply only the migrations that haven't been executed yet
- Handle them in the correct order
- Be resilient to failures
Our Migration Architecture
The Database Structure
At the core of our migration system is a db_version table that tracks the current schema version:
CREATE TABLE IF NOT EXISTS db_version (
    id INTEGER PRIMARY KEY CHECK (id = 1),
    version INTEGER NOT NULL
)
We use a CHECK constraint to ensure there's only ever one row in this table, making version lookups simple and fast.
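Because that constraint guarantees a single row with id = 1, reading the version is a one-row query. As a hedged sketch (the exact helper isn't shown in this post, so the name get_db_version is hypothetical):

use rusqlite::{Result as SqliteResult, Transaction};

// Hypothetical helper: read the single row guaranteed by the CHECK constraint.
fn get_db_version(tx: &Transaction) -> SqliteResult<i32> {
    tx.query_row(
        "SELECT version FROM db_version WHERE id = 1",
        [],
        |row| row.get(0),
    )
}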
Migration Management
Let's look at the key components of our migration system:
1. Version Detection
We first check if the version table exists, and if not, we need to apply our initial migration:
fn version_table_exists(tx: &Transaction) -> SqliteResult<bool> {
    let count: i32 = tx.query_row(
        "SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='db_version'",
        [],
        |row| row.get(0),
    )?;

    Ok(count > 0)
}
2. Migration Registry
We use a HashMap to register all migrations:
fn get_migrations() -> HashMap<i32, MigrationFn> {
    let mut migrations: HashMap<i32, MigrationFn> = HashMap::new();
    migrations.insert(1, migrate_to_v1);
    // Additional migrations can be added here
    migrations
}
This approach makes it easy to add new migrations as the application evolves.
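Neither the MigrationFn alias nor the CURRENT_DB_VERSION constant used later appears in the excerpts above; judging from the migration functions' signatures, plausible definitions would be:

use rusqlite::{Result as SqliteResult, Transaction};

// Assumed definitions (not shown in the excerpts): a migration is a plain
// function that receives the open transaction, and the constant records the
// schema version this build of the application expects.
type MigrationFn = fn(&Transaction) -> SqliteResult<()>;

const CURRENT_DB_VERSION: i32 = 1;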
3. Migration Functions
Each migration is encapsulated in a function that receives a database transaction:
fn migrate_to_v1(tx: &Transaction) -> SqliteResult<()> {
    // Create servers table
    tx.execute(
        "CREATE TABLE IF NOT EXISTS servers (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            hostname TEXT NOT NULL,
            ip_address TEXT NOT NULL,
            port INTEGER NOT NULL DEFAULT 22,
            username TEXT NOT NULL,
            ssh_key_id INTEGER,
            notes TEXT,
            settings TEXT NOT NULL DEFAULT '{}',
            created_at TEXT NOT NULL,
            updated_at TEXT NOT NULL,
            FOREIGN KEY (ssh_key_id) REFERENCES ssh_keys (id)
        )",
        [],
    )?;

    // Create settings table
    tx.execute(
        "CREATE TABLE IF NOT EXISTS settings (
            id INTEGER PRIMARY KEY,
            key TEXT NOT NULL UNIQUE,
            value TEXT NOT NULL
        )",
        [],
    )?;

    // Create ssh_keys table
    tx.execute(
        "CREATE TABLE IF NOT EXISTS ssh_keys (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE,
            path TEXT NOT NULL,
            password TEXT,
            is_default BOOLEAN NOT NULL DEFAULT 0,
            created_at TEXT NOT NULL,
            updated_at TEXT NOT NULL
        )",
        [],
    )?;

    // Create version tracking table
    tx.execute(
        "CREATE TABLE IF NOT EXISTS db_version (
            id INTEGER PRIMARY KEY CHECK (id = 1),
            version INTEGER NOT NULL
        )",
        [],
    )?;

    // Initialize version
    tx.execute(
        "INSERT INTO db_version (id, version) VALUES (1, ?1)",
        [1],
    )?;

    Ok(())
}
This function handles our initial migration, creating all the tables needed for our application.
4. The Migration Runner
The main migration function ties everything together:
pub fn check_and_apply_migrations(conn: &mut Connection) -> Result<(), String> {
    let tx = conn.transaction().map_err(|e| e.to_string())?;

    let has_version_table = version_table_exists(&tx).unwrap_or(false);

    if !has_version_table {
        let migrations = get_migrations();
        if let Some(migration_fn) = migrations.get(&1) {
            migration_fn(&tx).map_err(|e| e.to_string())?;
            update_db_version(&tx, 1).map_err(|e| e.to_string())?;
        } else {
            tx.rollback().map_err(|e| e.to_string())?;
            return Err("Missing migration for version 1".to_string());
        }
    }

    let migrations = get_migrations();

    for version in 1..=CURRENT_DB_VERSION {
        if migration_executed(&tx, version).map_err(|e| e.to_string())? {
            continue;
        }

        if let Some(migration_fn) = migrations.get(&version) {
            migration_fn(&tx).map_err(|e| e.to_string())?;
            update_db_version(&tx, version).map_err(|e| e.to_string())?;
        } else {
            tx.rollback().map_err(|e| e.to_string())?;
            return Err(format!("Missing migration for version {}", version));
        }
    }

    tx.commit().map_err(|e| e.to_string())?;

    Ok(())
}
Here's what this function does:
- Starts a database transaction to ensure atomicity
- Checks if the version table exists and, if not, bootstraps the schema by running the initial migration
- Iterates through all migrations from version 1 to the current version
- Skips migrations that have already been applied
- Applies each pending migration in order
- Updates the database version after each successful migration
- Commits the transaction if all migrations succeed, or rolls back if any fail
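Two helpers appear in the runner but aren't shown above: migration_executed and update_db_version. Their real implementations aren't in this post; given the db_version table, hedged sketches might look like this:

// Hedged sketches of the helpers used by the runner; the actual Syndeos
// implementations may differ.
fn migration_executed(tx: &Transaction, version: i32) -> SqliteResult<bool> {
    // A migration counts as executed once the stored version has reached it.
    let current: i32 = tx.query_row(
        "SELECT version FROM db_version WHERE id = 1",
        [],
        |row| row.get(0),
    )?;
    Ok(current >= version)
}

fn update_db_version(tx: &Transaction, version: i32) -> SqliteResult<()> {
    tx.execute(
        "UPDATE db_version SET version = ?1 WHERE id = 1",
        [version],
    )?;
    Ok(())
}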
Advantages of this Approach
1. Transactional Safety
By wrapping migrations in a transaction, we ensure that if any part of a migration fails, the entire migration is rolled back, preventing partial updates that could corrupt the database.
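In rusqlite this comes largely for free: a Transaction that goes out of scope without being committed rolls back by default. A minimal illustration of the pattern, with a throwaway table name used purely for demonstration:

use rusqlite::{Connection, Result as SqliteResult};

fn run_in_transaction(conn: &mut Connection) -> SqliteResult<()> {
    let tx = conn.transaction()?;
    tx.execute("CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY)", [])?;
    // If an error propagates via `?` before this point, `tx` is dropped
    // uncommitted and the transaction rolls back automatically.
    tx.commit()
}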
2. Incremental Updates
Our system applies only the migrations that haven't been executed yet, making it efficient for users who update frequently.
3. Version Skipping
Users who skip several versions will have all missed migrations applied in the correct order, ensuring data consistency regardless of update patterns.
4. Maintainability
Adding new migrations is as simple as:
- Incrementing the CURRENT_DB_VERSION constant
- Adding a new migration function
- Registering it in the migrations HashMap
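To make those steps concrete, here is a hedged sketch of a hypothetical second migration; migrate_to_v2 and the tags column are invented for illustration and aren't part of Syndeos:

// Step 1: bump the constant (illustrative value).
const CURRENT_DB_VERSION: i32 = 2;

// Step 2: add the new migration function.
fn migrate_to_v2(tx: &Transaction) -> SqliteResult<()> {
    tx.execute("ALTER TABLE servers ADD COLUMN tags TEXT", [])?;
    Ok(())
}

// Step 3: register it in the HashMap.
fn get_migrations() -> HashMap<i32, MigrationFn> {
    let mut migrations: HashMap<i32, MigrationFn> = HashMap::new();
    migrations.insert(1, migrate_to_v1);
    migrations.insert(2, migrate_to_v2);
    migrations
}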
Integrating with Tauri
This migration system is called during application initialization:
#[tauri::command]
pub fn initialize(app_handle: AppHandle) -> Result<String, String> {
    let app_dir = app_handle
        .path()
        .app_data_dir()
        .map_err(|e| e.to_string())?;

    if !app_dir.exists() {
        fs::create_dir_all(&app_dir).map_err(|e| e.to_string())?;
    }

    let db_path = app_dir.join("syndeos.db");
    let mut conn = Connection::open(&db_path).map_err(|e| e.to_string())?;

    migrations::check_and_apply_migrations(&mut conn)?;

    Ok("Database initialized successfully".to_string())
}
This ensures that database migrations run automatically when the application starts.
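For completeness, the command still needs to be registered with the Tauri builder. A minimal sketch of that wiring (assumed setup, not taken from the Syndeos source):

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![initialize])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}

The frontend can then invoke initialize once at startup; alternatively, the same logic could run inside the builder's setup hook so migrations complete before any window is shown.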
Future Improvements
While our current system works well, there are several potential improvements:
- Migration Logging: Add detailed logging of each migration step
- Validation: Add pre and post-migration validation steps
- Backup/Restore: Create a database backup before applying migrations
- Migration UI: Show migration progress to users during application updates
- Data Transformations: Support for more complex data migrations, not just schema changes
Conclusion
Building a robust migration system for desktop applications requires careful consideration of the user update experience. By using transactions, version tracking, and an ordered migration system, we've created a reliable way to evolve our database schema as our application grows.
Our approach leverages Rust's strong type system and error handling, along with SQLite's transactional capabilities, to create a migration system that can handle the distributed nature of desktop application updates.
While this system is specifically built for Syndeos, the patterns and approaches can be applied to any Tauri application or desktop app that needs local data persistence with reliable schema evolution.