Building retrieval-augmented generation (RAG) systems for AI agents typically involves using multiple layers and technologies for structured data, vectors and graph information. In recent months it has also become increasingly clear that agentic AI systems need memory, sometimes known as contextual memory, to operate effectively.
The complexity of keeping different data layers synchronized to enable context can lead to performance and accuracy issues. It's a challenge that SurrealDB is looking to solve.
SurrealDB on Tuesday released version 3.0 of its namesake database alongside a $23 million Series A extension, bringing total funding to $44 million. The company has taken a different architectural approach than relational databases like PostgreSQL, native vector databases like Pinecone or graph databases like Neo4j. The OpenAI engineering team recently detailed how it scaled Postgres to 800 million users using read replicas, an approach that works for read-heavy workloads. SurrealDB takes another path: store agent memory, business logic and multi-modal data directly inside the database. Instead of synchronizing across multiple systems, vector search, graph traversal and relational queries all run transactionally in a single Rust-native engine that maintains consistency.
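As a rough illustration of that multi-model idea, the SurrealQL sketch below defines hypothetical customer and incident tables that hold structured fields and a vector embedding side by side, with a vector index for similarity search. The table names, embedding dimension and index options are assumptions for illustration, not details from SurrealDB's announcement.

-- Illustrative schema only: structured fields and a vector embedding in one engine.
DEFINE TABLE customer SCHEMAFULL;
DEFINE FIELD name ON customer TYPE string;
DEFINE FIELD tier ON customer TYPE string;

DEFINE TABLE incident SCHEMAFULL;
DEFINE FIELD summary ON incident TYPE string;
DEFINE FIELD embedding ON incident TYPE array<float>;

-- Vector index for similarity search; dimension and distance metric are placeholders.
DEFINE INDEX incident_embedding ON incident FIELDS embedding MTREE DIMENSION 384 DIST COSINE;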
"Persons are operating DuckDB, Postgres, Snowflake, Neo4j, Quadrant or Pinecone all collectively, after which they're questioning why they’ll't get good accuracy of their brokers," CEO and co-founder Tobie Morgan Hitchcock advised VentureBeat. "It's as a result of they're having to ship 5 completely different queries to 5 completely different databases which solely have the data or the context that they cope with."
The structure has resonated with builders, with 2.3 million downloads and 31,000 GitHub stars to this point for the database. Present deployments span edge gadgets in automobiles and protection techniques, product suggestion engines for main New York retailers, and Android advert serving applied sciences, in accordance with Hitchcock.
Agentic AI memory baked into the database
SurrealDB stores agent memory as graph relationships and semantic metadata directly in the database, not in application code or external caching layers.
The Surrealism plugin system in SurrealDB 3.0 lets developers define how agents build and query this memory; the logic runs inside the database with transactional guarantees rather than in middleware.
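The article does not show Surrealism plugin code, but as a rough analogue of logic living in the database rather than in middleware, SurrealQL supports stored functions. The function, table and edge names below are hypothetical, and this is not the Surrealism plugin API itself.

-- Rough analogue only: a stored function that traverses a hypothetical
-- 'reported' edge to fetch the incidents linked to a given customer record.
DEFINE FUNCTION fn::related_incidents($cust: record) {
    RETURN SELECT ->reported->incident.* AS incidents FROM $cust;
};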
Here's what that means in practice: when an agent interacts with data, it creates context graphs that link entities, decisions and domain knowledge as database records. These relationships are queryable through the same SurrealQL interface used for vector search and structured data. An agent asking about a customer issue can traverse graph connections to related past incidents, pull vector embeddings of similar cases and join with structured customer data, all in one transactional query.
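A minimal sketch of what such a context graph could look like in SurrealQL, assuming hypothetical customer, incident and decision records and a $summary_embedding parameter bound by the calling application:

-- The agent writes entities, decisions and their links as records and graph edges.
CREATE customer:acme SET name = 'Acme Corp', tier = 'enterprise';
CREATE incident:1042 SET summary = 'Checkout latency spike', embedding = $summary_embedding;
RELATE customer:acme->reported->incident:1042 SET at = time::now();
RELATE incident:1042->resolved_by->decision:rollback SET confidence = 0.92;

-- Later, the agent traverses that memory in a single query.
SELECT name, tier, ->reported->incident->resolved_by->decision.* AS past_decisions
FROM customer:acme;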
"Folks don't wish to retailer simply the most recent knowledge anymore," Hitchcock mentioned. "They wish to retailer all that knowledge. They wish to analyze and have the AI perceive and run by all the information of a company over the past yr or two, as a result of that informs their mannequin, their AI agent about context, about historical past, and that may due to this fact ship higher outcomes."
How SurrealDB's structure differs from conventional RAG stacks
Conventional RAG techniques question databases primarily based on knowledge sorts. Builders write separate queries for vector similarity search, graph traversal, and relational joins, then merge ends in utility code. This creates synchronization delays as queries round-trip between techniques.
In contrast, Hitchcock explained, SurrealDB stores data as binary-encoded documents with graph relationships embedded directly alongside them. A single query through SurrealQL can traverse graph relationships, perform vector similarity searches and join structured records without leaving the database.
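A hedged sketch of what such a combined query could look like, assuming the hypothetical incident and customer records above, a vector index on the embedding field and a $query_embedding bound by the calling application; exact operators and index requirements vary by SurrealDB version.

-- One query: vector similarity over incidents, plus a graph hop back to the
-- structured customer records that reported them.
SELECT summary,
    vector::similarity::cosine(embedding, $query_embedding) AS score,
    <-reported<-customer.* AS reported_by
FROM incident
WHERE embedding <|5|> $query_embedding
ORDER BY score DESC;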
That architecture also shapes how consistency works at scale: every node maintains transactional consistency, even at 50+ nodes, Hitchcock said. When an agent writes new context to node A, a query on node B immediately sees that update. No caching, no read replicas.
"A lot of our use cases, a lot of our deployments, are where data is constantly updated and the relationships, the context, the semantic understanding, or the graph connections between that data need to be constantly refreshed," he said. "So no caching. There's no read replicas. In SurrealDB, every single thing is transactional."
What this means for enterprise IT
"It's important to say SurrealDB isn't the best database for every task. I'd love to say we are, but it's not. And you can't be," Hitchcock said. "If you only need analysis over petabytes of data and you're never really updating that data, then you're going to be best going with object storage or a columnar database. If you're just dealing with vector search, then you can go with a vector database like Qdrant or Pinecone, and that's going to suffice."
The inflection point comes when you need multiple data types together. The practical benefit shows up in development timelines: what used to take months to build with multi-database orchestration can now launch in days, Hitchcock said.

