
Scalable Objects Persistence


Project maintained by SharedCode

SOP Platform Tools: Accelerating Productivity on a Proven Foundation

Date: January 23, 2026 Status: Official Release v2.2.7 (Now Available for Download)


The SOP Storage Engine & Framework has long been Generally Available (GA)—iterated, hardened, and running in production environments. It is the reliable bedrock of our data infrastructure.

Today, we are proud to announce the official release of the SOP Platform Tools—a productivity layer built directly atop this robust engine, now available as a downloadable suite for Go, C#, Java, Python, and Rust.

Crucially, we are not labeling these tools as “Beta” or “Trial” because they rest on the solid foundation of the SOP Framework. This is a Major Release focused entirely on enabling Productivity: visualizing data, integrating AI agents, and scripting complex workflows. While the interface and tooling are new, the engine driving every transaction is time-tested and bulletproof.

Unified Ecosystem & Multi-Language Support

This release brings full parity and verification across our primary ecosystems, synchronized under a Unified Versioning strategy. Whether you are building high-performance backend services or data-intensive analytics pipelines, the tools are native to your workflow and share a consistent API surface across Go, C#, Java, Python, and Rust.

The promise of Artificial Intelligence has always been stuck behind a specific barrier: Reliability.

We have Chatbots that can pass the Bar Exam but can’t be trusted to update a production database record without supervision. We have LLMs that can write poetry but “hallucinate” non-existent facts when asked for financial data.

For the enterprise, this is a showstopper. You cannot build critical infrastructure on “maybe.” You cannot scale automation if every step requires human verification.

With the release of the SOP Platform Tools, we are deploying a new paradigm that bridges this gap. We are moving beyond “Chatting with Data” to “Compiling Intelligence.”

These tools specifically complement the SOP Storage Engine—providing advanced visualization and AI capabilities without compromising the engine’s inherent stability.

Importantly, these tools are architected such that they cannot cause harm to the underlying SOP Storage Engine. Your data resides in the proven, hardened storage core. The platform tools are merely visual constructs written atop the framework—layering AI LLM integration and scripting—while the SOP Framework handles the complexity of all storage, transactionality, and distributed coordination.

The Core Innovation: Freezing Reasoning into Code

The most significant “moat” in the SOP Platform Tools is the introduction of the Self-Correcting Scripting Engine.

In traditional RAG (Retrieval-Augmented Generation) systems, the AI answers a question by guessing the next word. It is probabilistic. If you ask it the same question tomorrow, it might give you a slightly different answer.

The platform takes a radically different approach. We use the AI not to perform the task, but to write the program that performs the task.

  1. Reasoning: The AI analyzes your intent (e.g., “Find all users in Tokyo who haven’t logged in for 30 days and flag them”).
  2. Compilation: Instead of just doing it, the AI drafts a Deterministic Script—a sequence of explicit, compiled steps (Scan, Filter, Update) using the platform’s Swarm Engine.
  3. Verification: The system uses a “Nurse/Doctor” agent architecture to validate the script against the schema and safety rules.
  4. Execution: The script runs. It is fast, atomic, and ACID-compliant.
  5. The Moat: We save the Script.

The next time this task is needed, we don’t ask the AI to “think” again. We simply run the compiled, verified script. We have effectively frozen probabilistic reasoning into deterministic software.
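The five-step loop above can be sketched in a few lines of Python. Every name here (draft_script, verify, SCRIPT_CACHE, run_task) is a hypothetical stand-in for illustration, not SOP's actual API:

```python
import hashlib

# Illustrative in-memory cache; the real platform persists verified scripts.
SCRIPT_CACHE = {}

def draft_script(intent):
    """Stand-in for the AI 'Compilation' step: translate intent into
    explicit, deterministic engine operations."""
    return (
        {"op": "scan",   "args": {"store": "users"}},
        {"op": "filter", "args": {"where": "city == 'Tokyo' and days_inactive > 30"}},
        {"op": "update", "args": {"set": {"flagged": True}}},
    )

def verify(script):
    """Stand-in for the Nurse/Doctor validation: allow-list the operations."""
    allowed = {"scan", "filter", "update", "select"}
    return all(step["op"] in allowed for step in script)

def run_task(intent):
    key = hashlib.sha256(intent.encode()).hexdigest()
    if key not in SCRIPT_CACHE:              # reason and compile only on first use
        script = draft_script(intent)
        if not verify(script):
            raise ValueError("script failed validation")
        SCRIPT_CACHE[key] = script           # "The Moat": save the script
    return SCRIPT_CACHE[key]                 # deterministic replay thereafter
```

The point of the sketch is the control flow, not the operations: the probabilistic step (drafting) happens once, behind a verification gate, and every subsequent call is a cache hit.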

Why this is a Game Changer

This removes the “Hallucination Risk” from the runtime loop. Once a workflow is established, it runs with the precision of a compiled binary. This unlocks Exponential Automation—allowing regulated industries (Finance, Healthcare) to finally adopt AI for complex, multi-step write operations, not just read-only summaries.

The Infrastructure Innovation: Swarm Computing

The SOP Data & Compute Platform is not just a database; it is a Distributed Coordination Framework.

Most modern stacks are fragmented: a database here (Postgres), a cache there (Redis), a vector store over there (Pinecone), and an orchestration layer (Kubernetes) trying to glue them together.

The platform collapses this complexity into a Unified Native Core.

The Data Innovation: Metadata-Carrying Keys

You cannot have High-Performance Computing without High-Performance I/O.

The platform introduces Rich Key Structures. Unlike traditional Key-Value stores that treat keys as dumb strings, the platform allows complex structs to serve as keys. Critical state—like Version, Deleted flags, or vector CentroidID—is stored directly in the B-Tree node, “riding along” with the key.

This allows the system to scan billions of records per second to answer questions like “Count active users” without ever fetching the heavy data payloads from disk. It effectively eliminates the I/O bottleneck that plagues Big Data analytics.
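A toy model of the idea, with a hypothetical UserKey struct and a sorted list standing in for a B-Tree node's key array (none of these names are SOP's real types):

```python
import bisect
from dataclasses import dataclass

# Hypothetical rich key: critical state (version, deleted flag) rides
# along with the key, so scans never touch the heavy value payload.
@dataclass(frozen=True, order=True)
class UserKey:
    user_id: int
    version: int = 0
    deleted: bool = False

class KeyOnlyIndex:
    """Toy sorted key array standing in for a B-Tree node."""
    def __init__(self):
        self.keys = []

    def insert(self, key):
        bisect.insort(self.keys, key)

    def count_active(self):
        # "Count active users" answered purely from key metadata: no payload I/O.
        return sum(1 for k in self.keys if not k.deleted)

idx = KeyOnlyIndex()
idx.insert(UserKey(1))
idx.insert(UserKey(2, version=3))
idx.insert(UserKey(3, deleted=True))
```

Because the answer lives entirely in the key structures, the scan never dereferences a value record, which is what removes the payload I/O from the hot path.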

The Management Innovation: Visual Ecosystem

To support this advanced architecture, complexity must be managed. We are introducing new visual tools to simplify the ecosystem:

The Environment Wizard

Data Manager & RESTful API HTTP Server

The Data Manager is a complete data management suite that empowers developers and administrators to control their data infrastructure with unprecedented ease. It combines a beautiful, modern Graphical User Interface (GUI) for managing diverse databases and B-Tree stores with a powerful SOP AI Copilot that bridges the gap between intent and action.

The API Innovation: Rich Storage Layer

Under the hood, we’ve exposed powerful new API capabilities that leverage the storage engine directly:

Example: Optimization with Temporary Stores

This pattern is useful when you need to perform multiple heavy aggregations (e.g., SUM, COUNT) on a dataset and then join those results with another table. Instead of running the aggregation twice or doing a nested loop, you materialize it once.

[
  // 1. Create a Transient B-Tree (Automatically destroyed after script ends)
  // We index by 'user_id' to enable fast indexed lookups in the subsequent join.
  { 
    "op": "open_store", 
    "args": { 
      "name": "temp_high_spenders", 
      "create": true, 
      "transient": true, 
      "key": "user_id", 
      "value": "total_amount" 
    } 
  },

  // 2. Select & Aggregate into the Temp Store
  // We scan 'orders', aggregate by user, and filter.
  // The 'save_to' argument directs the results into our B-Tree instead of returning them.
  { 
    "op": "select", 
    "args": { 
      "store": "orders", 
      "fields": ["user_id", "sum(amount) as total_amount"], 
      "group_by": "user_id",
      "having": "total_amount > 1000",
      "save_to": "temp_high_spenders"
    } 
  },

  // 3. High-Performance Join
  // Now we join 'users' with our small, indexed temp store.
  // This uses an 'Index Join' strategy (O(N*logM)) which is significantly faster 
  // than a memory-based Hash Join for large datasets.
  { 
    "op": "join", 
    "args": { 
      "left_store": "users",
      "right_store": "temp_high_spenders", 
      "on": {"id": "user_id"},
      "fields": ["users.name", "users.email", "temp_high_spenders.total_amount"]
    } 
  }
]
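For intuition, the three-step script above is equivalent to this plain-Python sketch; the sample data and variable names are illustrative only:

```python
from collections import defaultdict

orders = [
    {"user_id": 1, "amount": 800}, {"user_id": 1, "amount": 400},
    {"user_id": 2, "amount": 50},
]
users = [{"id": 1, "name": "Ada", "email": "ada@example.com"},
         {"id": 2, "name": "Bob", "email": "bob@example.com"}]

# Step 2: aggregate once into a small keyed "temp store".
totals = defaultdict(float)
for o in orders:
    totals[o["user_id"]] += o["amount"]
temp_high_spenders = {uid: t for uid, t in totals.items() if t > 1000}

# Step 3: index join -- probe the small keyed store once per user row,
# instead of re-running the aggregation or nesting full scans.
result = [
    {"name": u["name"], "email": u["email"],
     "total_amount": temp_high_spenders[u["id"]]}
    for u in users if u["id"] in temp_high_spenders
]
```

The dict here plays the role of the transient B-Tree: the aggregation is materialized once, then probed cheaply during the join.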

Automatic Cleanup

When using transient: true, the SOP Script Engine automatically tracks the lifecycle of the store. When the script completes, whether it succeeds or fails:

  1. The B-Tree is closed.
  2. The backing storage files (if any) are unlinked/deleted.
  3. The store entry is removed from the Registry.

This ensures that “scratchpad” data never pollutes the long-term storage or leaks capacity over time.
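The lifecycle can be sketched as a context manager; the file handling and REGISTRY dict are illustrative stand-ins, not SOP's actual internals:

```python
import os
import tempfile

REGISTRY = {}   # hypothetical store-name -> backing-file registry

class TransientStore:
    """Sketch of a transient store: exiting the block unlinks the backing
    file and removes the registry entry, on success or failure alike."""
    def __init__(self, name):
        self.name = name
        fd, self.path = tempfile.mkstemp(prefix=name)
        os.close(fd)                       # 1. close the (toy) B-Tree handle
        REGISTRY[name] = self.path

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        os.unlink(self.path)               # 2. unlink/delete backing storage
        REGISTRY.pop(self.name, None)      # 3. remove the entry from the Registry
        return False                       # propagate any script error

with TransientStore("temp_high_spenders") as store:
    scratch_path = store.path              # scratchpad work happens here
```

Because cleanup runs in __exit__, it fires even when the body raises, which is the property that keeps scratchpad data from leaking capacity.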

The Engine Evolution: Relational Intelligence

SOP is not just a NoSQL store; it is a progression from the RDBMS.

For decades, the “Relational Model” has been synonymous with the “SQL Table”—a rigid structure that bundles data storage, indexing, strict Foreign Key enforcement, and Triggers into a single black box. While convenient, these features often contribute to massive slowdowns in modern, high-throughput applications.

SOP “cuts” the relational model to its core essence, removing the “Table” abstraction to expose the raw power of the B-Tree. By decoupling the data structure from the heavy constraints of a traditional RDBMS, we unlock innovations that were previously impossible.

1. Removing the “Table” Bottleneck

In an RDBMS, the B-Tree is an implementation detail hidden behind the table. In SOP, the B-Tree is the first-class citizen, which allows specific tuning that RDBMS engines hide from you.

2. Built-in “Software RAID” & Chunking

Building on top of these structural optimizations, SOP introduces a storage layer designed for massive scale.

This architecture is designed specifically to innovate beyond the mechanical limitations of today’s RDBMS and first-generation NoSQL engines.

A Hint for Database Vendors: SOP is built to be the high-performance engine under your hood. We invite RDBMS and NoSQL vendors to adopt SOP as their pluggable storage layer. By building on top of SOP, you instantly gain Swarm Intelligence, Erasure Coding, and robust ACID transactions—allowing you to focus on your unique query languages and API features without reinventing the storage wheel.

Deployment Made Simple

The Road Ahead

The SOP Data & Compute Platform is a greenfield. What we are shipping is just the surface of what is possible.

We are not just building a faster database. We are building a Super Computing Enabler. By tightly coupling storage, network, and self-correcting AI compute, we are creating a platform where the friction of distributed systems disappears, leaving only the pure performance of the cluster.

Welcome to the era of Deterministic AI.


SOP “Self-Correcting” AI Copilot Reaches Beta

January 20, 2026 — We are proud to announce a major milestone for the SOP platform. Today, we are officially moving the SOP “Self-Correcting” AI Copilot to Beta status.

This release is not just a stabilization of existing features—it introduces a fundamental shift in how developers interact with data. We are rebranding our “AI Assistant” to SOP AI Copilot, reflecting a leap in capability from a simple helper to a proactive, intelligent partner in your development workflow.

Here is what defines this new Beta release:

1. The “Self-Correcting” Intelligence

The headline feature of this release is the introduction of a cognitive architecture that mimics human memory. Previously, the AI Copilot struggled with the “Goldfish Memory” problem—forgetting context as soon as a window closed. We have solved this with a dual-layer memory system:

Short-Term Memory (Contextual Awareness)

The Copilot now maintains a robust “Short-Term Memory” within the run-loop of the agent. It tracks the immediate history of your current session, understanding references to previous queries (“filter that list by date”) and maintaining the state of your current investigation without needing constant restatement of facts.
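Conceptually, the run-loop keeps a per-session history so a follow-up can refer back to "that list." A minimal sketch (the class and its methods are hypothetical, not the Copilot's real interface):

```python
class ShortTermMemory:
    """Per-session context: recent (query, result) pairs let follow-ups
    like 'filter that list' resolve without restating the facts."""
    def __init__(self):
        self.history = []

    def remember(self, query, result):
        self.history.append((query, result))

    def last_result(self):
        return self.history[-1][1] if self.history else None

stm = ShortTermMemory()
stm.remember("users in Tokyo", ["alice", "bob"])
# A follow-up query operates on "that list" via the session memory:
followup = [u for u in stm.last_result() if u.startswith("a")]
```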

Long-Term Memory (System Knowledge)

This is the game-changer for Enterprise teams. When you correct the AI Copilot—for example, teaching it that a status field uses integers (1) instead of strings ("Active")—it doesn’t just learn for now; it learns forever.
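The "learns forever" behavior amounts to persisting corrections so a brand-new session starts with them. A sketch under assumed names (LongTermMemory, the JSON file layout, and the file path are all illustrative, not SOP's actual storage format):

```python
import json
import os
import tempfile

class LongTermMemory:
    """Sketch of persisted corrections: a fact taught once is written to
    disk and reloaded in every later session."""
    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.facts = json.load(f)
        else:
            self.facts = {}

    def correct(self, field, fact):
        self.facts[field] = fact
        with open(self.path, "w") as f:
            json.dump(self.facts, f)        # survives the session

    def recall(self, field):
        return self.facts.get(field)

memory_file = os.path.join(tempfile.mkdtemp(), "copilot_memory.json")
session1 = LongTermMemory(memory_file)
session1.correct("status", "uses integer codes: 1 = Active")
session2 = LongTermMemory(memory_file)    # a brand-new session still knows it
```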

2. From “Assistant” to “Copilot”

Why the rebrand? “Assistant” implies a reactive tool. “Copilot” implies a proactive partner. With the stability of Beta, the introduction of self-correction, and the deep integration into the storage engine, SOP AI is now capable of navigating your data alongside you, catching errors before they happen, and learning from your domain expertise to become smarter every day.


Gerardo Recinto
Creator, Scalable Objects Persistence (SOP)