Scalable Objects Persistence
sop4rs provides safe, idiomatic Rust bindings for the Scalable Objects Persistence (SOP) engine. It allows Rust applications to leverage SOP’s high-performance B-Tree storage, ACID transactions, and Swarm Computing capabilities.
SOP includes a powerful Data Management Suite that provides full CRUD capabilities for your B-Tree stores. It goes beyond simple viewing, offering a complete GUI for inspecting, searching, and managing your data at scale.
To launch the Data Manager, download the all-in-one single-file installer from SOP Releases. Alternatively, you can use the Go toolchain:
```shell
# From the root of the repository
go run ./tools/httpserver
```
The SOP AI Kit transforms SOP from a storage engine into a complete AI data platform.
See ai/README.md for a deep dive into the AI capabilities.
SOP Scripts allow you to execute complex workflows on the server side, similar to Stored Procedures. Currently, scripts are executed via the SOP HTTP API Server.
Example using reqwest:
```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let args = json!({
        "user_id": 999
    });
    let body = json!({
        "name": "user_audit",
        "category": "general",
        "args": args
    });
    let res = client
        .post("http://localhost:8080/api/scripts/execute")
        .json(&body)
        .send()
        .await?;
    let text = res.text().await?;
    println!("Response: {}", text);
    Ok(())
}
```
The bindings require the SOP shared library: libsop.so / libsop.dylib / libsop.dll (built from bindings/main).

Add this to your Cargo.toml:

```toml
[dependencies]
sop = { path = "path/to/sop/bindings/rust" }
```
```rust
use sop::{Context, Database, DatabaseOptions, Item};

fn main() {
    let ctx = Context::new();

    // Open a database (Standalone - local disk or shared network drive, or Clustered)
    let mut options = DatabaseOptions::default();
    options.stores_folders = Some(vec!["./data".to_string()]);
    let db = Database::new(&ctx, options).unwrap();

    // Start a transaction
    let trans = db.begin_transaction(&ctx).unwrap();

    // Create a B-Tree
    let btree = db.new_btree::<String, String>(&ctx, "sys_config", &trans, None).unwrap();

    // Add data (clean, idiomatic API)
    btree.add(&ctx, "max_connections".to_string(), "10000".to_string()).unwrap();
    btree.add(&ctx, "timeout_ms".to_string(), "500".to_string()).unwrap();

    // Commit
    trans.commit(&ctx).unwrap();
    println!("System configuration persisted safely.");
}
```
To launch the Data Manager with the standalone binary, run:
```shell
sop-httpserver
```
Usage: by default, the UI opens at http://localhost:8080.
Arguments: you can pass standard flags, e.g., `sop-httpserver -port 9090 -database ./my_data`.
For managing multiple environments (e.g., Dev, Staging, Prod), create a config.json:
```json
{
  "port": 8080,
  "databases": [
    {
      "name": "Local Development",
      "path": "./data/dev_db",
      "mode": "standalone"
    },
    {
      "name": "Production Cluster",
      "path": "/mnt/data/prod",
      "mode": "clustered",
      "redis": "redis-prod:6379"
    }
  ],
  "system_db": {
    "name": "system",
    "path": "./data/sop_system",
    "mode": "standalone"
  }
}
```
Note: This example shows the structure of system_db, but it is best to let the Data Manager Setup Wizard create and populate it automatically on first launch. The Wizard ensures that essential stores (like `Script` and `llm_knowledge`) are correctly initialized for the AI Copilot.
Run with: `sop-httpserver -config config.json`
The SOP Data Manager includes a built-in AI Copilot that allows you to interact with your data using natural language and automate workflows using Scripts.
Start the server:
```shell
sop-httpserver
```
Open your browser to http://localhost:8080 and click the AI Copilot floating widget.
You can ask the assistant to perform tasks or query data in plain English.
Scripts allow you to record a sequence of actions and replay them later. This is a “Natural Language Programming” system where the LLM compiles your intent into a high-performance script.
Step 1: Record. Type /script new <name> in the chat:

/script new daily_check

Step 2: Perform Actions. Interact with the AI naturally:

Check the 'logs' store for errors.
Count the number of active users.

Step 3: Stop. Save the script:

/script stop

Step 4: Replay. Execute the script instantly; the system runs the compiled steps without invoking the LLM again:

/script run daily_check
You can make scripts dynamic by using parameters:

/script run user_audit user_id=456
You can trigger these scripts from your Rust code via the REST API:
```rust
use reqwest::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let res = client
        .post("http://localhost:8080/api/ai/chat")
        .json(&json!({
            "message": "/play user_audit user_id=999",
            "agent": "sql_admin"
        }))
        .send()
        .await?;
    println!("Response: {}", res.text().await?);
    Ok(())
}
```
If any database is configured in standalone mode, ensure the HTTP server is the only process managing it. Alternatively, embed its HTTP REST endpoint in your standalone app so the app can keep doing its own work while serving HTTP pages at the same time.
In clustered mode this is not a concern: SOP handles Redis-based coordination with other apps and/or SOP HTTP servers managing the same databases.
The build.rs script expects to find the jsondb library in ../main. Ensure you have built the Go shared library first:
```shell
cd ../main
go build -buildmode=c-shared -o libjsondb.dylib jsondb.main.go
```
To run the basic B-Tree example:

```shell
cargo run --example btree_basic
```

To run the Vector Search AI example:

```shell
cargo run --example vector_search_ai
```

To run the Concurrent Transactions demo:

```shell
cargo run --example concurrent_demo
```

To run the Clustered Concurrent Transactions demo (requires Redis):

```shell
cargo run --example concurrent_demo_clustered
```

To run the B-Tree Paging example:

```shell
cargo run --example btree_paging
```