Scalable Objects Persistence
In the rush to adopt AI, most enterprises are hitting a wall: The “Goldfish Memory” Problem.
You spend hours crafting the perfect prompt to teach your AI assistant how to query your specific “Users” schema. It finally works! You close the window. The next day, your colleague asks the same question, and the AI makes the same mistake.
You haven’t built an asset; you’ve just had a conversation.
At SOP, we believe that every interaction with an AI should upgrade the system permanently. We are proud to introduce our latest architecture: The Self-Correction Loop, a feature that turns your database administration tool into a learning organism.
Most AI Agents effectively have “Read-Only” access to their own operating instructions. They follow the prompt hardcoded by the developer. If that prompt is wrong or incomplete, the Agent fails—forever—until a developer redeploys the binary.
We flipped this model. In SOP, the Agent’s instructions are not code; they are data.
Instead of hardcoding tool definitions (like execute_script) into the Go binary, the SOP Agent now fetches its operating manual from a dedicated B-Tree store called llm_instructions located in the SystemDB.
When the Agent initializes, it performs a millisecond-latency lookup:
“I need to run a script. What are the current best practices for joining tables in this specific environment?”
This allows the instructions to change dynamically without restarting the server.
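To make the idea concrete, here is a minimal Go sketch of instructions-as-data: a small store stands in for the llm_instructions B-Tree, and the system prompt is assembled from it at initialization, so editing the store changes behavior without a redeploy. The type and function names here are illustrative assumptions, not SOP's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// InstructionStore stands in for the llm_instructions B-Tree in the
// SystemDB. The method names are illustrative, not SOP's real API.
type InstructionStore struct {
	mu    sync.RWMutex
	items map[string]string
}

func NewInstructionStore() *InstructionStore {
	return &InstructionStore{items: map[string]string{}}
}

func (s *InstructionStore) Put(key, text string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[key] = text
}

// Get is the cheap point lookup the Agent performs at initialization.
func (s *InstructionStore) Get(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.items[key]
	return v, ok
}

// BuildSystemPrompt assembles the Agent's operating manual from the
// store, so edits to the store take effect on the next initialization
// without restarting the server or redeploying a binary.
func BuildSystemPrompt(store *InstructionStore, keys []string) string {
	prompt := "You are the SOP data-admin agent.\n"
	for _, k := range keys {
		if text, ok := store.Get(k); ok {
			prompt += fmt.Sprintf("## %s\n%s\n", k, text)
		}
	}
	return prompt
}

func main() {
	store := NewInstructionStore()
	store.Put("joins", "Always join on indexed keys; verify field names first.")
	fmt.Println(BuildSystemPrompt(store, []string{"joins"}))
}
```

The point of the indirection is that the prompt is a read path over data, not a constant compiled into the binary.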
update_instruction Tool: Giving the AI a Pen

We gave our Agent a new tool: update_instruction. This allows the LLM to rewrite its own prompt.
The Workflow:
1. A user asks for orders where status = "active".
2. The query fails: in this schema, status is stored as an integer, so the correct filter is status = 1.
3. Once corrected, the Agent calls update_instruction to record the fix.
4. The correction is persisted to the llm_instructions B-Tree.

By setting a simple environment variable (SYSTEM_DB_PATH), multiple instances of SOP—running on different developer machines or production servers—can share this single SystemDB.
This feature is difficult to replicate with standard “Chat with PDF” RAG solutions because it requires deep integration between the Reasoning Engine (LLM) and the Storage Engine (SOP).
This self-correcting brain is the perfect partner to our JIT Compiled Scripting Engine.
While the LLM is used to understand the intent and navigate the schema (using its refined instructions), the final output is not a vague chat response. It is a precise, deterministic SOP Script.
The resulting script can be saved and reused as a named asset (e.g., calculate_churn_v2).

This combination—Adaptive Knowledge for the Architect and Deterministic Execution for the Builder—creates a platform that feels like magic but runs like engineering.
We are moving beyond “Chatbots” to “Adaptive Systems.”
With SOP’s new Self-Correcting Intelligence, your documentation is no longer a static wiki that goes out of date. It is a living database, curated by the AI itself, growing smarter with every query, every error, and every correction.
Your database shouldn’t just store your data. It should store the knowledge of how to use it.
We realized that “Goldfish Memory” wasn’t the only problem. The other problem was “The Encyclopedia Problem”. If you give an AI all the knowledge at once, it gets confused (and the context window explodes). If you give it nothing, it makes things up (hallucinations).
We’ve solved this with three new mechanisms introduced in the DataAdminAgent architecture.
Previously, the AI had to guess field names (e.g., hallucinating total instead of total_amount).
Now, before the conversation starts, the Agent performs a millisecond “Peek” operation (storeAccessor.First()). It grabs a real sample record, infers the schema (e.g., id: string, active: boolean), and injects this “Ground Truth” directly into the system prompt.
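The "Peek" step can be sketched as a small type-inference pass over one sample record. The generic map record and the rendered prompt fragment below are assumptions for the sake of a runnable example; SOP's storeAccessor.First() presumably returns its own record type.

```go
package main

import (
	"fmt"
	"sort"
)

// InferSchema derives field-name -> type from one sample record, the way
// the Agent's "Peek" grounds the prompt in real data instead of guesses.
func InferSchema(sample map[string]any) string {
	keys := make([]string, 0, len(sample))
	for k := range sample {
		keys = append(keys, k)
	}
	sort.Strings(keys) // stable order so the prompt is deterministic
	out := "Ground truth schema (from a live sample record):\n"
	for _, k := range keys {
		var t string
		switch sample[k].(type) {
		case string:
			t = "string"
		case bool:
			t = "boolean"
		case int, int64, float64:
			t = "number"
		default:
			t = "unknown"
		}
		out += fmt.Sprintf("  %s: %s\n", k, t)
	}
	return out
}

func main() {
	// One real record is enough to stop the model from hallucinating
	// "total" when the field is actually "total_amount".
	sample := map[string]any{"id": "u-1", "active": true, "total_amount": 42.5}
	fmt.Print(InferSchema(sample))
}
```

Injecting this fragment into the system prompt gives the model verified field names before it writes its first query.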
We cannot load every rule into the prompt. Instead, we now inject a dynamic list of available knowledge categories (namespaces).
The Agent sees only the category names, e.g. [finance, sales_targets, hr_policies], in its context. When asked about “Q3 targets”, it sees the sales_targets category and proactively decides: “I should read the ‘sales_targets’ chapter before answering.”

The AI has been taught a strict flowchart:
It calls manage_knowledge(action='list', namespace='finance') completely autonomously to fetch the rules it needs.

This turns the AI from a passive responder into a proactive researcher, capable of navigating gigabytes of institutional knowledge without overloading the context window.
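The routing flowchart can be sketched as: show the Agent only the namespace index, match the question against it, then fetch exactly one chapter. The in-memory map, the relevance check, and the ManageKnowledge signature are simplifications assumed for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// knowledge stands in for the namespaced knowledge store. Contents here
// are made up for illustration.
var knowledge = map[string][]string{
	"finance":       {"Fiscal year starts in February."},
	"sales_targets": {"Q3 target: 1.2M ARR.", "Targets are per region."},
	"hr_policies":   {"PTO requests need 2 weeks notice."},
}

// ListNamespaces is the cheap index injected up front: category names
// only, never the full rule text, so the context window stays small.
func ListNamespaces() []string {
	names := make([]string, 0, len(knowledge))
	for n := range knowledge {
		names = append(names, n)
	}
	return names
}

// ManageKnowledge mimics manage_knowledge(action='list', namespace=...).
func ManageKnowledge(action, namespace string) []string {
	if action != "list" {
		return nil
	}
	return knowledge[namespace]
}

// Route applies the flowchart: scan the index for a relevant namespace,
// then fetch that chapter before answering.
func Route(question string) []string {
	q := strings.ToLower(question)
	for _, ns := range ListNamespaces() {
		// Crude substring relevance; the real Agent reasons over names.
		if strings.Contains(q, strings.Split(ns, "_")[0]) {
			return ManageKnowledge("list", ns)
		}
	}
	return nil // nothing relevant: answer from base instructions
}

func main() {
	fmt.Println(Route("What are the Q3 sales targets?"))
}
```

The design point is that the expensive content is behind a tool call, while the index that triggers the call costs only a few tokens.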
Beyond just field names (“schema”), the Agent now perceives the relationships between stores.
- orders {id: string...} (Relations: [user_id] -> users([id]))

With that relation in view, the Agent knows to use user_id when joining users with orders.
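The relation metadata above can be modeled as a tiny struct that renders a join hint for the prompt. The struct and its fields are illustrative, not SOP's internal representation.

```go
package main

import "fmt"

// Relation captures foreign-key metadata the Agent now perceives,
// e.g. orders.[user_id] -> users([id]). Field names are assumptions.
type Relation struct {
	FromStore, FromField string
	ToStore, ToField     string
}

// JoinHint renders the relation as a prompt fragment so the LLM proposes
// correct join keys instead of guessing column names.
func (r Relation) JoinHint() string {
	return fmt.Sprintf("join %s.%s = %s.%s",
		r.FromStore, r.FromField, r.ToStore, r.ToField)
}

func main() {
	rel := Relation{FromStore: "orders", FromField: "user_id", ToStore: "users", ToField: "id"}
	fmt.Println(rel.JoinHint()) // prints "join orders.user_id = users.id"
}
```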