@supergrain/silo
A reactive document cache for React — Suspense-compatible, request-batched, zero ceremony.
- Suspense-native — every handle exposes a stable `promise` for React 19's `use()`. No query keys, no options bags, no `invalidateQueries`.
- Request batching — N `useDocument` calls in a render collapse into one `adapter.find(ids)`. No waterfalls.
- Reactive handles — `store.find(type, id)` returns a stable object; its fields mutate in place when data lands, when sockets push, when you `insertDocument` locally.
- Transport-agnostic — bring your own fetch. Bulk endpoints, fan-out `GET /:id`, websockets, JSON-API envelopes — all work against the same store.
- Typed by model — a single `TypeToModel` map drives inference end-to-end; `store.find("user", id)` returns `DocumentHandle<User>` with no casts.
Install
pnpm add @supergrain/silo @supergrain/kernel

React bindings are optional — `@supergrain/silo/react` requires `react >= 18.2`.
Quick start
1. Define your models and adapters
// services/store.ts
import { type DocumentAdapter, type DocumentStore } from "@supergrain/silo";
import { createDocumentStoreContext } from "@supergrain/silo/react";
export interface User {
id: string;
attributes: { firstName: string; lastName: string };
}
export interface Post {
id: string;
attributes: { title: string; body: string };
}
export type TypeToModel = { user: User; post: Post };
const userAdapter: DocumentAdapter = {
async find(ids) {
return Promise.all(ids.map((id) => fetch(`/api/users/${id}`).then((r) => r.json())));
},
};
const postAdapter: DocumentAdapter = {
async find(ids) {
return Promise.all(ids.map((id) => fetch(`/api/posts/${id}`).then((r) => r.json())));
},
};
export const { Provider, useDocumentStore, useDocument } =
createDocumentStoreContext<DocumentStore<TypeToModel>>();
export const config = {
models: {
user: { adapter: userAdapter },
post: { adapter: postAdapter },
},
};

The adapters above are fan-out style — N parallel `GET /:id` requests, merged. The library doesn't care how you fetch; it just hands the adapter a list of ids and takes back a raw response. If your API exposes a bulk endpoint, one GET with all the ids works just as well:
const userAdapter: DocumentAdapter = {
async find(ids) {
const qs = new URLSearchParams(ids.map((id) => ["id", id])).toString();
const res = await fetch(`/api/users?${qs}`);
return res.json();
},
};

2. Mount the Provider
// main.tsx
import { Provider, config } from "./services/store";
<Provider config={config}>
<App />
</Provider>;

The Provider wraps config in createDocumentStore() exactly once per mount, so every SSR request, every test, and every React tree gets an isolated store by construction. You can't accidentally share a store across requests.
For hydration or other one-time setup, pass onMount:
<Provider
config={config}
onMount={(store) => {
for (const user of window.__HYDRATION__.users) {
store.insertDocument("user", user);
}
}}
>
<App />
</Provider>

onMount runs synchronously once per mount, before children render, so seeded data is visible on the initial paint.
3. Read documents
// UserCard.tsx
import { useDocument } from "./services/store";
export function UserCard({ id }: { id: string }) {
const user = useDocument("user", id);
if (user.isPending) return <Skeleton />;
if (user.error) return <ErrorState error={user.error} />;
return <div>{user.data?.attributes.firstName}</div>;
}

useDocument returns a reactive DocumentHandle<User>. The same (type, id) always returns the same handle object across renders — fields update in place.
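The stable-identity guarantee can be modeled as a registry keyed by `type:id`. This is an illustrative sketch of the idea, not the library's internals — the `Handle` shape and `find` function here are hypothetical:

```typescript
// Hypothetical model of the handle registry behind the stable-identity
// guarantee: one object per (type, id), created lazily, returned forever.
type Status = "IDLE" | "PENDING" | "SUCCESS" | "ERROR";

interface Handle<T> {
  status: Status;
  data: T | undefined;
}

const registry = new Map<string, Handle<unknown>>();

function find<T>(type: string, id: string): Handle<T> {
  const key = `${type}:${id}`;
  let handle = registry.get(key);
  if (!handle) {
    // First lookup creates the handle; later data lands by mutating it,
    // so every component holding this object sees the update.
    handle = { status: "IDLE", data: undefined };
    registry.set(key, handle);
  }
  return handle as Handle<T>;
}
```

Because the object identity never changes, React components can hold the handle across renders and read whichever fields they need.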
4. Or suspend, if you prefer
// UserCard.tsx
import { use } from "react";
import { useDocument } from "./services/store";
export function UserCard({ id }: { id: string }) {
const user = useDocument("user", id);
use(user.promise); // suspends on first load; never re-suspends on refetch
return <div>{user.data!.attributes.firstName}</div>;
}

Wrap the component in a <Suspense> boundary. That's it. One line to opt in, nothing to configure, no { suspense: true } flag.
Why this instead of TanStack Query / SWR?
Short version: the same architecture both libraries wish they had started with.
- No parallel cache. Documents live in the same reactive graph as the rest of your state. You read them with the same primitives you use for local state.
- No query keys. `(type, id)` is the key. Stable, typed, inferred.
- Request batching as a primitive. The thing that makes Suspense actually scale isn't the `use()` hook — it's the batch window that collapses 50 component-level `useDocument` calls into one network request. TQ doesn't do this automatically. Here it's the default.
- No refetch-on-focus / stale-time matrix. Deliberately — see non-goals. If you need that complexity, reach for TQ. If you don't, don't pay for it.
For a full capability-by-capability breakdown, trade-offs, and migration guidance, see Comparison to TanStack Query further down.
API
createDocumentStore<M, Q = Record<string, never>>(config)
The plain, non-React primitive. It takes config and returns the store object.
const store = createDocumentStore<TypeToModel>({
models: {
user: { adapter: userAdapter },
post: { adapter: postAdapter },
},
batchWindowMs: 15, // default — collapse calls within this window
batchSize: 60, // default — chunk size per adapter.find() call
});

Each model can also take a processor to normalize the adapter's raw response — see Processors below. Omit it and the default processor assumes the adapter returns a doc or an array of docs.
Methods:
- `find(type, id)` → `DocumentHandle<T>`
- `findInMemory(type, id)` → `T | undefined`
- `insertDocument(type, doc)` → `void`
- `clearMemory()` → `void`
- `findQuery(type, params)` → `QueryHandle<T>`
- `findQueryInMemory(type, params)` → `T | undefined`
- `insertQueryResult(type, params, result)` → `void`
createDocumentStoreContext<S extends DocumentStore<any, any>>()
The React context wrapper. Mirrors createStoreContext<T>() from @supergrain/kernel/react: the type parameter S is the store type; the Provider takes the same config you'd pass to createDocumentStore() and constructs the store internally once per mount.
type DocStore = DocumentStore<TypeToModel, TypeToQuery>;
const { Provider, useDocumentStore, useDocument, useQuery } =
createDocumentStoreContext<DocStore>();
<Provider config={{ models, queries }} onMount={(store) => seed(store)}>
<App />
</Provider>

For non-React use, import createDocumentStore directly from @supergrain/silo.
DocumentHandle<T>
A reactive state machine for a single document. All fields are signals — reading them inside a tracked() scope subscribes to changes.
interface DocumentHandle<T> {
readonly status: "IDLE" | "PENDING" | "SUCCESS" | "ERROR";
readonly data: T | undefined;
readonly error: Error | undefined;
readonly isPending: boolean; // true before first successful load
readonly isFetching: boolean; // true while a fetch is in flight for this handle
readonly hasData: boolean;
readonly fetchedAt: Date | undefined;
readonly promise: Promise<T> | undefined; // stable; pass to use()
}

Lifecycle:
IDLE ──(non-null id, cache miss)──► PENDING ──► SUCCESS
IDLE ──(non-null id, cache hit) ──► SUCCESS
PENDING ──(fetch rejects)──► ERROR
ERROR ──(new data inserted)──► SUCCESS (with a fresh promise object)

IDLE is one-way — once a handle leaves IDLE for a given (type, id), it never goes back. The only exception is clearMemory(), which drops handles to IDLE when no fetch is in flight.
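The transition diagram above can be written as a small function — a sketch to make the legal moves explicit, not the shipped state machine (the event names here are hypothetical):

```typescript
type Status = "IDLE" | "PENDING" | "SUCCESS" | "ERROR";
// Hypothetical event names for the transitions in the diagram above.
type HandleEvent = "CACHE_MISS" | "CACHE_HIT" | "RESOLVED" | "REJECTED" | "INSERTED";

// One-way out of IDLE; ERROR recovers to SUCCESS when fresh data is inserted.
// Unlisted (status, event) pairs are no-ops.
function transition(status: Status, event: HandleEvent): Status {
  switch (status) {
    case "IDLE":
      if (event === "CACHE_MISS") return "PENDING";
      if (event === "CACHE_HIT") return "SUCCESS";
      return status;
    case "PENDING":
      if (event === "RESOLVED") return "SUCCESS";
      if (event === "REJECTED") return "ERROR";
      return status;
    case "ERROR":
      return event === "INSERTED" ? "SUCCESS" : status;
    default:
      return status;
  }
}
```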
React hooks
From @supergrain/silo/react:
All returned from createDocumentStoreContext<S>(); destructure and re-export from your store module.
- `Provider({ config, initial?, onMount?, children })` — wraps `config` in `createDocumentStore()` exactly once per mount. Optional `initial` seeds documents/query results before the first render; optional `onMount` runs synchronously after seeding for imperative setup.
- `useDocument(type, id | null | undefined)` → `DocumentHandle<T>`. A `null`/`undefined` id returns an idle handle (useful for conditional fetching — `useDocument("user", isLoggedIn ? myId : null)`).
- `useDocumentStore()` → store API. Escape hatch for imperative ops (`insertDocument`, `clearMemory`, query methods).
- `useQuery(type, params | null | undefined)` → `QueryHandle<Result>`. Same null-handling as `useDocument`.
- For lists, call `useDocumentStore().find(type, id)` for each id. Batching still happens under the hood; the public primitive stays one resource → one handle.
Factory for isolated stores
Most apps create one document-store context in their app wiring. Libraries shipping their own document store, micro-frontends, or test harnesses that need isolated instances create their own separate context call:
import { type DocumentStore } from "@supergrain/silo";
import { createDocumentStoreContext } from "@supergrain/silo/react";
const libStore = createDocumentStoreContext<DocumentStore<LibTypes>>();
export const { Provider, useDocument, useDocumentStore } = libStore;
export const libConfig = {
models: {
/* ... */
},
};

The returned Provider/hooks are bound to that specific context factory call. Each <Provider config={libConfig}> mount constructs its own store, so two trees mounted side-by-side don't share memory.
Batching, in detail
The Finder is internal — you never construct or import it — but it's what makes the whole thing feel native. Given this tree:
<Suspense fallback={<Loading />}>
{userIds.map((id) => (
<UserCard key={id} id={id} />
))}
</Suspense>

…where each UserCard calls useDocument("user", id) and use(user.promise), here's what happens:
- Every hook call lands in a pending queue keyed by `(type, id)`.
- A 15ms timer starts on the first call; further calls in that window join the queue.
- When the timer fires, ids are deduped, chunked at `batchSize`, and handed to `adapter.find(ids)` — one call per chunk, per type.
- The processor inserts each returned doc. Every handle waiting on a `(type, id)` whose doc arrived resolves; its Suspense boundary unblocks.
50 <UserCard>s → one /api/users?id=1&id=2&…&id=50 call. Same mechanism works for JSON-API sideloads — if the user response includes a related organization, the processor inserts both, and any useDocument("organization", …) already in flight resolves for free.
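The queue/timer mechanics above can be sketched in a few lines. This is an illustrative model, not the internal Finder; `createBatcher` and its injectable `schedule` parameter are hypothetical names:

```typescript
// Illustrative batch window: collect ids until the window closes, then
// dedupe, chunk at batchSize, and issue one find() per chunk.
function createBatcher(
  find: (ids: string[]) => void,
  batchSize = 60,
  // Injectable scheduler; the real thing would use setTimeout(flush, 15).
  schedule: (flush: () => void) => void = (flush) => setTimeout(flush, 15),
) {
  const pending = new Set<string>();
  let scheduled = false;

  return function request(id: string) {
    pending.add(id); // dedupe: a Set collapses repeat ids for free
    if (scheduled) return; // window already open — just join the queue
    scheduled = true;
    schedule(() => {
      scheduled = false;
      const ids = [...pending];
      pending.clear();
      // chunk at batchSize — one adapter call per chunk
      for (let i = 0; i < ids.length; i += batchSize) {
        find(ids.slice(i, i + batchSize));
      }
    });
  };
}
```

Fifty synchronous `request()` calls inside one render pass land in the same window, so the adapter sees one deduped id list instead of fifty requests.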
Processors
The adapter returns whatever its fetch chain returns — typically already-parsed JSON. The processor takes that parsed response and calls store.insertDocument(type, doc) for every document worth caching.
Processors are keyed by envelope shape, not by model. One processor normally serves many adapters: every REST endpoint that returns {id, ...} or [{id, ...}, ...] shares defaultProcessor; every JSON-API endpoint in your app shares jsonApiProcessor; a custom graphqlProcessor would serve every GraphQL-returning adapter. The per-model processor field isn't one-per-adapter — it's "which envelope parser does this adapter's response need."
Default
If no processor is configured, the library uses defaultProcessor — fits any REST endpoint that returns a doc or an array of docs with no wrapping envelope.
You call:
const post = useDocument("post", "1");

Internally, the store's finder calls your adapter with the queued ids, expecting a shape the default processor understands:
// finder calls adapter.find(ids); expects either a single doc or an array
await adapter.find(["1", "2"]);
// → [{ id: "1", ... }, { id: "2", ... }]
// or for a single id:
await adapter.find(["1"]);
// → { id: "1", ... }

defaultProcessor then inserts each doc under the caller's type using the doc's own id. No envelope, no sideloading, no type-on-doc requirement.
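That contract is small enough to sketch whole. A minimal model of the default processor's behavior, assuming a store exposing `insertDocument` (the `StoreLike` shape here is an assumption for illustration):

```typescript
interface Doc {
  id: string;
}

interface StoreLike {
  insertDocument(type: string, doc: Doc): void;
}

// Sketch of the default processor: accept a single doc or an array of
// docs, insert each under the caller's type using the doc's own id.
// No envelope, no sideloads, no type field required on the doc.
function defaultProcessorSketch(raw: unknown, store: StoreLike, type: string): void {
  const docs = Array.isArray(raw) ? raw : [raw];
  for (const doc of docs as Doc[]) {
    store.insertDocument(type, doc);
  }
}
```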
JSON-API
For consumers whose API speaks JSON-API. Opt in per-model:
import { jsonApiProcessor } from "@supergrain/silo/processors/json-api";
createDocumentStore<M>({
models: {
"card-stack": { adapter: cardStackAdapter, processor: jsonApiProcessor },
},
});You call:
const cardStack = useDocument("card-stack", "42");

Internally, the finder calls your adapter, expecting a JSON-API envelope:
await adapter.find(["42"]);
// → {
// data: [
// { type: "card-stack", id: "42", attributes: { ... },
// relationships: { planbook: { data: { type: "planbook", id: "7" } } } },
// ],
// included: [
// { type: "planbook", id: "7", attributes: { ... } },
// ],
// }

jsonApiProcessor inserts every document in data + included, keyed by each doc's own type field from the envelope (JSON-API requires every resource object to carry one). Sideloaded documents drop into their respective caches automatically — so in the example above, useDocument("planbook", "7") elsewhere in the tree resolves for free, no extra fetch.
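The scatter step can be modeled as: walk `data` plus `included`, insert each resource under its own `type`. An illustrative sketch of the behavior (not the shipped `jsonApiProcessor`; the `StoreLike` shape is an assumption):

```typescript
// Minimal JSON-API resource object and envelope shapes.
interface Resource {
  type: string;
  id: string;
  attributes?: Record<string, unknown>;
}

interface JsonApiEnvelope {
  data: Resource | Resource[];
  included?: Resource[];
}

interface StoreLike {
  insertDocument(type: string, doc: Resource): void;
}

// Every JSON-API resource object carries its own `type`, so primary data
// and sideloads alike land in the right per-type cache slot.
function jsonApiScatter(envelope: JsonApiEnvelope, store: StoreLike): void {
  const primary = Array.isArray(envelope.data) ? envelope.data : [envelope.data];
  for (const doc of [...primary, ...(envelope.included ?? [])]) {
    store.insertDocument(doc.type, doc);
  }
}
```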
JSON-API relationship hooks live in a separate subpath:
import { useBelongsTo, useHasMany } from "@supergrain/silo/react/json-api";
const planbook = useBelongsTo(cardStack, "planbook");
const cards = useHasMany(planbook.data, "cards");

- `useBelongsTo(model, relationName)` → `DocumentHandle<Related>`. Reads `model.relationships[relationName].data` (a `{ type, id }`), then delegates to `useDocument`.
- `useHasMany(model, relationName)` → `ReadonlyArray<DocumentHandle<Related>>`. One handle per related doc; fetching is still batched into one adapter call.
- `useHasManyIndividually(model, relationName)` → `ReadonlyArray<DocumentHandle<Related>>`. Same per-doc shape with a name that makes the item-by-item semantics explicit.
Custom
Any function matching (raw, store, type) => void works. If you need GraphQL, a REST envelope, or a bespoke wire format — write one. Processors are synchronous; for async normalization, do it in the adapter.
Queries
Documents are one surface. The store has a second, additive surface: queries — results keyed by structured params objects instead of id: string. Use them for endpoints whose response is only meaningful with its query params: dashboards, search results, filtered lists, pagination cursors.
The config surface forks at the top level — models for documents, queries for params-keyed results. One store, one memory, one finder.
import { createDocumentStore, type QueryAdapter } from "@supergrain/silo";
type TypeToModel = { user: User; post: Post };
type TypeToQuery = {
dashboard: { params: { workspaceId: number }; result: Dashboard };
};
const dashboardAdapter: QueryAdapter<{ workspaceId: number }> = {
async find(paramsList) {
return Promise.all(
paramsList.map((p) => fetch(`/api/dashboard?ws=${p.workspaceId}`).then((r) => r.json())),
);
},
};
const store = createDocumentStore<TypeToModel, TypeToQuery>({
models: {
user: { adapter: userAdapter },
post: { adapter: postAdapter },
},
queries: {
dashboard: { adapter: dashboardAdapter },
},
});

Consumers with only documents pass one generic and omit queries. The second generic defaults to an empty query map.
Reading queries
Parallel to useDocument/useDocumentStore.find:
import { useQuery } from "@supergrain/silo/react";
function DashboardView({ workspaceId }: { workspaceId: number }) {
const handle = useQuery("dashboard", { workspaceId });
if (handle.isPending) return <Skeleton />;
if (handle.error) return <ErrorState error={handle.error} />;
return <Dashboard data={handle.data!} />;
}

Same QueryHandle<T> shape as DocumentHandle<T> — status/data/error/isPending/isFetching/promise. Same Suspense opt-in via use(handle.promise). Same stable handle identity, so two components requesting { workspaceId: 7 } get the same reactive object.
Object key identity is deep-equal: { a: 1, b: 2 } and { b: 2, a: 1 } hit the same slot. The library stable-stringifies for cache lookup; adapters see the raw objects.
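Deep-equal key identity falls out of a stable stringify: sort object keys recursively before serializing, so key order can't change the cache slot. A minimal sketch (the library's actual serializer may differ):

```typescript
// Stable stringify: objects serialize with sorted keys at every depth,
// so { a: 1, b: 2 } and { b: 2, a: 1 } produce the same cache key.
function stableKey(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableKey).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => (a < b ? -1 : 1))
      .map(([k, v]) => `${JSON.stringify(k)}:${stableKey(v)}`);
    return `{${entries.join(",")}}`;
  }
  // Primitives: defer to JSON semantics.
  return JSON.stringify(value);
}
```

The string is only used for cache lookup; adapters still receive the original params objects untouched.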
Method mirror
| Documents | Queries |
|---|---|
| `store.find(type, id)` | `store.findQuery(type, params)` |
| `store.findInMemory(type, id)` | `store.findQueryInMemory(type, params)` |
| `store.insertDocument(type, doc)` | `store.insertQueryResult(type, params, result)` |
| `useDocument(type, id)` | `useQuery(type, params)` |
Two ways to handle a list query
Take GET /api/users?role=admin. It's a query (keyed by params), and it returns a list of users. There are two reasonable ways to cache it — pick based on whether you need normalization.
Option A — plain: store the response as-is (default processor)
Declare the query result as the whole user list. No custom processor needed.
type TypeToQuery = {
usersByRole: { params: { role: string }; result: User[] };
};
const usersByRoleAdapter: QueryAdapter<{ role: string }> = {
async find(paramsList) {
return Promise.all(
paramsList.map((p) => fetch(`/api/users?role=${p.role}`).then((r) => r.json())),
);
},
};
createDocumentStore<TypeToModel, TypeToQuery>({
models: { user: { adapter: userAdapter } },
queries: {
usersByRole: { adapter: usersByRoleAdapter }, // defaultQueryProcessor
},
});

Usage:
const query = useQuery("usersByRole", { role: "admin" });
return query.data?.map((u) => <UserRow key={u.id} user={u} />);

What this gives you: 10 lines of config, works immediately, automatic batching of concurrent queries, Suspense-compatible.
What you give up: no normalization. The users cached under this query slot are a separate copy from users cached as documents. If someone else calls store.insertDocument("user", updated42) — from a detail page load, a socket push, a mutation response — this list keeps showing the old copy of user #42 until the query is re-fetched. Same user, multiple copies, drift.
This is fine when:
- The query result is short-lived or one-shot
- Users showing in this list aren't shown anywhere else
- You're okay re-fetching to see fresh data
Option B — normalized: extract documents, store an id-list
Write a custom processor that pulls each user out into the documents cache and stores only a list of ids under the query slot.
type TypeToQuery = {
usersByRole: { params: { role: string }; result: { userIds: string[] } };
};
const usersByRoleProcessor: QueryProcessor<TypeToModel, TypeToQuery, "usersByRole"> = (
raw,
store,
type,
paramsList,
) => {
const results = raw as Array<User[]>; // adapter returns one User[] per params
for (let i = 0; i < paramsList.length; i++) {
const users = results[i];
// Normalize: insert each user into the documents cache
for (const u of users) store.insertDocument("user", u);
// Store only the id-list as the query result
store.insertQueryResult(type, paramsList[i], { userIds: users.map((u) => u.id) });
}
};
createDocumentStore<TypeToModel, TypeToQuery>({
models: { user: { adapter: userAdapter } },
queries: {
usersByRole: { adapter: usersByRoleAdapter, processor: usersByRoleProcessor },
},
});

Usage:
const query = useQuery("usersByRole", { role: "admin" });
// Dereference each id — each row gets its own reactive handle
return query.data?.userIds.map((id) => <UserRow key={id} id={id} />);

What this gives you:
- One cache entry per user. Same user shows up in the admin query, the detail page, a sidebar — one reactive object, no drift.
- Mutations radiate. `store.insertDocument("user", updated42)` re-renders every view referencing user #42, including this list. No query-cache invalidation, no network call.
- Small query slot. Just `{ userIds: [...] }`, not fat user payloads.
- Cross-query sync. Another query that returned the same user reads from the same slot; edits propagate everywhere.
What you give up: ~15 extra lines per list-query type for the processor.
This is what Relay and Apollo do with GraphQL schemas; queries here express the same pattern without the schema machinery.
Picking between them
Plain → normalized is a local change (swap the result type, add a processor, deref ids instead of users). Start plain, move to normalized when the duplication bites. You don't have to get it right on day one.
Default query processor
If QueryConfig.processor is omitted, the library uses defaultQueryProcessor — assumes the adapter returns an array of results aligned 1:1 with the input params, and pairs them by position:
// adapter returns: [resultForParams0, resultForParams1, ...]
// → insertQueryResult(type, paramsList[i], results[i]) for each

No normalization (nested entities stay inside the query result). For normalization, write a custom processor as shown above.
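Positional pairing is the whole contract, so the default query processor can be sketched in a few lines — an illustrative model under an assumed `StoreLike` shape, not the shipped implementation:

```typescript
interface StoreLike {
  insertQueryResult(type: string, params: unknown, result: unknown): void;
}

// Sketch of the default query processor: the adapter's results array is
// assumed to align 1:1 with the input params — results[i] belongs to
// paramsList[i]. Nothing is normalized; each result is stored whole.
function defaultQueryProcessorSketch(
  raw: unknown,
  store: StoreLike,
  type: string,
  paramsList: unknown[],
): void {
  const results = raw as unknown[];
  for (let i = 0; i < paramsList.length; i++) {
    store.insertQueryResult(type, paramsList[i], results[i]);
  }
}
```

This is why batched query adapters must return results in the same order they received params — the pairing is purely positional.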
When to use which
- Documents when the data has identity across contexts: entities looked up by id and shared across views. User #42 is the same user whether fetched directly or in a list.
- Queries when the data only makes sense with its params: dashboards, search results, pagination cursors, filtered lists. The params are the identity.
Rule of thumb: "Would I ever want useDocument(type, id) to read from this cache slot?" If yes → document. If no → query.
Comparison to TanStack Query
TanStack Query (TQ) and SWR make similar bets; TQ is more feature-complete, so this comparison uses it as the stand-in.
The architectural difference
Both libraries cache async data, but they make opposite choices about what the cache is.
TQ: opaque caching. The cache is keyed by an arbitrary queryKey array. The library has no idea what's inside a response — a user in the list query and the same user in the detail query are separate cache entries. Simple mental model; no schema needed. Cost: data duplicates across queries, cross-query sync requires manual setQueryData, invalidation is pattern-matching across keys.
document-store: normalized caching. Responses aren't opaque — the processor knows what types/ids live inside and scatters them into per-(type, id) slots. User #42 lives in one place; every view that references it reads the same reactive object.
These aren't "same library, different maturity" — they're genuinely different bets. TQ refuses to normalize because it complicates the mental model. document-store embraces it for the payoff: automatic cross-query sync without explicit invalidation.
Capability comparison
| Capability | TQ (today) | document-store (today) | document-store (ceiling) |
|---|---|---|---|
| Fetch by id | ✓ | ✓ | — |
| Fetch by arbitrary query | ✓ | ✗ | generalize adapter keys |
| Request dedup | ✓ | ✓ | — |
| Multi-key batching into one request | ✗ | ✓ | — |
| Stable-id normalization | ✗ | ✓ | — |
| Cross-query sync (edit user → every view updates) | ✗ (manual) | ✓ | — |
| Stable reactive handles (fine-grained field reads) | ✗ | ✓ | — |
| Suspense via `use()` | ✓ (opt-in) | ✓ (opt-in) | — |
| Invalidation | ✓ | ✗ | add invalidate / invalidateType |
| Stale-time / gc-time | ✓ | ✗ | add staleMs; compare against fetchedAt |
| Refetch on focus / reconnect / interval | ✓ | ✗ | add opt-in hooks |
| Retry with backoff | ✓ | ✗ | add to Finder |
| Cancellation | ✓ | ✗ | thread AbortSignal through adapter |
| Pagination / infinite queries | ✓ | ✗ | wrapper hook that extends an id-list |
| Mutations + optimistic + rollback | ✓ | ✗ | next-PR write layer built on insertDocument |
| SSR / hydration | ✓ | ✗ | serialize the store's reactive tree, rehydrate |
| Persistence (localStorage / IDB) | ✓ | ✗ | serialize map on write, restore on init |
| Devtools | ✓ | ✗ | expose cache map + event stream |
| Ecosystem / community / docs | Large | Small | — |
The architectural rows — multi-key batching, stable-id normalization, cross-query sync, and stable reactive handles — live in the primitive and can't be retrofitted without a rewrite. Everything else is additive: bolt-on features that land without touching core design. The "ceiling" column is the planned-additive path; none of it requires architectural change.
What we give up vs TQ
- Shipped feature count. TQ has years of polish; we're shipping a read layer. If you need stale-time, refetch-on-focus, mutations, or pagination today, TQ wins.
- Partial-key pattern invalidation. `invalidateQueries({ queryKey: ['users'] })` in TQ matches every key starting with `['users']`. Our planned `invalidateType('users-by-role')` is blunter — it drops everything under a type in one call. Predicate invalidation (`invalidateWhere`) handles the precise cases. Net: ~5% of real-world invalidation needs are less ergonomic.
- Zero-discipline fetching. In TQ you write `queryFn` and you're done. Here you write adapter + processor + provider wiring. More up-front work; pays off if normalization matters to you.
- Mature ecosystem. Persisters, devtools, SSR integrations, community plugins.
What TQ gives up vs document-store
- Cross-query sync without manual wiring. Edit user #42 with `insertDocument("user", updated42)` — every list, detail view, and relationship re-renders instantly, no network call. TQ needs pattern invalidation precisely because it doesn't normalize; each query has its own copy of the data that drifts.
- Request batching. 50 `<UserCard id={x} />` components collapse into one network request. TQ has no equivalent built into the primitive.
- One cache, not two. TQ almost always sits beside Zustand/Redux/etc. — two caches to reconcile. Our store is both.
- Fine-grained reactivity. Reading `handle.data.name` re-renders only when `name` changes. TQ returns a new `{ data, isLoading, error }` object every render — whole-handle subscription only, no field-level reads.
- Simpler invalidation model. Normalization + reactive propagation handles most of what pattern invalidation exists to solve. You don't need an invalidation graph if mutations just write to the store.
When to pick which
- Pick TQ today if you need stale-time, refetch, mutations, or pagination shipping now. If "opaque cache, refetch on events" fits your mental model and you don't want to think about normalization. If your queries don't overlap enough for cross-query sync to matter.
- Pick document-store if you want fine-grained reactive state as your primary model and documents should be part of that. If cross-query sync would meaningfully simplify your app (entity updates radiating without keys). If you're okay being on a library with a smaller feature surface today, trusting that the additive features will land.
- Use both in one app during migration. Totally viable. TQ for search/list/cursor queries where opacity is fine; document-store for entity reads that benefit from normalization. They don't step on each other.
Honest caveat
Much of the comparison above contrasts TQ-as-shipped with document-store-as-designed. Some rows (generalized keys, invalidation, mutations) are planned-additive and not in this PR. The foundation is the hard part; those features are ~10-50 LOC each on top. If you're evaluating for a production migration today, weight the "today" column, not the "ceiling" column.
Non-goals (in this version)
These are deliberate — every one was considered and left out for this read layer. Most will land in subsequent packages / PRs.
- Writes, dispatch, optimistic updates — a separate write layer will build on this.
- Stale-time / refetch-on-focus / background revalidation.
- Imperative `handle.refetch()` — observe fresh data by calling `insertDocument` from a socket handler or a mutation response.
- Retry with backoff.
- Server-push invalidation.
- Cancellation of in-flight fetches.
- Auto-suspending hooks. `useDocument` returns a handle; it never throws to Suspense on its own. Suspense is a one-line opt-in (`use(handle.promise)`). The reverse — recovering a handle from an auto-suspending hook — isn't possible, so the primitive stays non-suspending and auto-suspend is a trivial wrapper anyone can write.
License
MIT