How to Integrate SearchStax with Your Headless CMS
Connect structured content to SearchStax so every publish, update, and delete can refresh your site search index in near real time.
What is SearchStax?
SearchStax provides managed Apache Solr, site search tooling, crawlers, analytics, and relevance controls for teams that need production search without running Solr infrastructure themselves. It's used for website search, ecommerce discovery, knowledge bases, portals, and public-sector search experiences where teams need control over schema, ranking, synonyms, facets, and query behavior. In the search market, SearchStax is a hosted Solr and site search option for teams that want Solr's flexibility with managed operations.
Why integrate SearchStax with a headless CMS?
Search quality usually breaks down when your content system and search index drift apart. A product page gets renamed, a support article is unpublished, or a location changes its hours, but the search index still shows the old result for hours or days. That creates stale results, broken links, and support tickets that shouldn't exist.
Connecting SearchStax to a headless CMS publishing workflow solves that by treating search indexing as part of publishing. When editors publish content, a webhook can send the changed document ID to a sync process. That process fetches exactly the fields SearchStax needs, such as title, slug, body text, category, tags, publish date, and locale, then pushes a Solr document to the SearchStax update endpoint. Deletes can remove the document from the index instead of leaving orphaned results behind.
With Sanity, the Content Lake holds typed JSON, so you don't need to scrape rendered HTML or parse page blobs before indexing. GROQ selects the fields for each document type, including referenced data like category names or author bios. Webhooks and Functions can trigger the sync the moment content changes. The trade-off is that you'll still need to design your Solr schema, field naming, boosts, facets, and commit behavior carefully. Search relevance isn't automatic, but the data pipeline becomes clear and repeatable.
Architecture overview
A typical Sanity and SearchStax integration starts when an editor publishes, updates, or deletes content in Sanity Studio. A Sanity webhook fires on that mutation and sends the document ID, document type, and transition to either a Sanity Function or your own webhook route.

The handler uses @sanity/client and GROQ to fetch the published document from the Content Lake, projecting only the fields SearchStax should index. For a blog post, the GROQ query might return _id, _type, title, slug, body text from Portable Text, category title from a reference, tag titles, language, and _updatedAt. The handler maps those values to Solr fields, such as id, title_s, body_txt, category_s, tags_ss, language_s, url_s, and updated_at_dt.

It then calls the SearchStax Solr update API at a URL like https://your-deployment.searchstax.com/solr/your_collection/update?commitWithin=1000 using Basic Auth credentials from SearchStax Cloud. For delete events, it posts a delete command with the document ID.

SearchStax then updates the Solr collection and makes the document available to search queries, facets, synonyms, and relevance rules. On the frontend, users search through SearchStax Site Search tooling, SearchStax SearchStudio, or a server-side search endpoint that calls Solr's /select API without exposing Solr credentials in the browser.
Common use cases
Website search with fresh published content
Index pages, articles, landing pages, and resource content in SearchStax as soon as editors publish in Sanity Studio.
Product discovery with facets
Send product copy, categories, tags, availability labels, and localized descriptions to SearchStax for filtered search experiences.
Knowledge base and support search
Sync help articles, troubleshooting steps, topics, and related products so customers can find answers before opening a ticket.
Multilingual search indexes
Route Sanity locales into SearchStax language fields or separate Solr collections for region-specific search behavior.
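As a sketch of that routing, assuming BCP 47 style locale values on Sanity documents and the hypothetical per-language collection naming below (the `language_s` field and `content_<lang>` names are illustrative conventions, not SearchStax defaults):

```typescript
// Hypothetical locale routing — the language_s field name and the
// content_<lang> collection naming are assumptions for this sketch.
interface LocaleTarget {
  language_s: string; // value for a Solr language field
  collection: string; // per-language Solr collection name
}

function targetForLocale(locale: string): LocaleTarget {
  // "en-US" -> "en"; fall back to "en" if the locale string is empty
  const lang = locale.split('-')[0]?.toLowerCase() || 'en';
  return {language_s: lang, collection: `content_${lang}`};
}
```

Whether you route into one collection with a language field or into separate collections depends on how different your per-language analyzers and relevance rules need to be.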
Step-by-step integration
1. Create your SearchStax search target
Create a SearchStax Cloud deployment and Solr collection, or set up a SearchStax Site Search app if you're using SearchStudio. For direct indexing, copy the Solr endpoint, collection name, username, and password from SearchStax Cloud. Keep those values server-side only.
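Those values can live in environment variables on your sync service. The variable names below match the code example later in this guide; they are conventions for this sketch, not SearchStax requirements:

```shell
# .env — example values; replace with your deployment's details
SEARCHSTAX_SOLR_URL=https://your-deployment.searchstax.com
SEARCHSTAX_COLLECTION=your_collection
SEARCHSTAX_SOLR_USER=indexer
SEARCHSTAX_SOLR_PASSWORD=replace-me
SANITY_PROJECT_ID=abc123
SANITY_DATASET=production
SANITY_READ_TOKEN=replace-me
```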
2. Install the Sanity client in your sync service
SearchStax direct indexing uses the Solr HTTP API, so you don't need a special JavaScript SDK for the indexing call. Install @sanity/client in a Sanity Function, Next.js route, Express service, or other server-side runtime that will receive the webhook.
3. Model searchable fields in Sanity Studio
Add fields that map cleanly to search behavior: title, slug, excerpt, Portable Text body, category references, tag references, locale, publish date, and noIndex or searchHidden flags. Use references for reusable data, such as categories, so GROQ can join the display value into the document you send to SearchStax.
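A minimal sketch of such a document type, written as a plain schema object (the field names here are illustrative choices that map onto the Solr fields used elsewhere in this guide; adapt them to your content model):

```typescript
// Illustrative Sanity schema sketch — field names are assumptions chosen
// to map cleanly onto Solr fields like title_s, body_txt, and tags_ss.
const post = {
  name: 'post',
  title: 'Post',
  type: 'document',
  fields: [
    {name: 'title', type: 'string', title: 'Title'},
    {name: 'slug', type: 'slug', title: 'Slug', options: {source: 'title'}},
    {name: 'excerpt', type: 'text', title: 'Excerpt'},
    {name: 'body', type: 'array', of: [{type: 'block'}], title: 'Body'},
    {name: 'category', type: 'reference', to: [{type: 'category'}]},
    {name: 'tags', type: 'array', of: [{type: 'reference', to: [{type: 'tag'}]}]},
    {name: 'language', type: 'string', title: 'Language'},
    {name: 'publishedAt', type: 'datetime', title: 'Publish date'},
    {name: 'searchHidden', type: 'boolean', title: 'Hide from search'},
  ],
};
```

The `searchHidden` flag is what your sync query later checks to keep a document out of the index without unpublishing it.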
4. Create the publish, update, and delete trigger
Configure a Sanity webhook to fire on create, update, and delete mutations for the document types you want indexed. Send at least _id, _type, and transition. In production, add a webhook secret and verify the signature before calling SearchStax.
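A sketch of that signature check, assuming a timestamp-plus-HMAC header format ("t=&lt;timestamp&gt;,v1=&lt;base64url digest&gt;") in the style of Sanity's GROQ-powered webhook signatures. In production, prefer the official @sanity/webhook package rather than hand-rolling this:

```typescript
import {createHmac, timingSafeEqual} from 'node:crypto';

// Verifies an HMAC-SHA256 webhook signature over "<timestamp>.<rawBody>".
// The "t=...,v1=..." header format is an assumption for this sketch.
function isValidSignature(rawBody: string, header: string, secret: string): boolean {
  const parts: Record<string, string> = {};
  for (const piece of header.split(',')) {
    const i = piece.indexOf('=');
    if (i > 0) parts[piece.slice(0, i).trim()] = piece.slice(i + 1);
  }
  const {t, v1} = parts;
  if (!t || !v1) return false;
  const expected = createHmac('sha256', secret)
    .update(`${t}.${rawBody}`)
    .digest('base64url');
  const a = Buffer.from(expected);
  const b = Buffer.from(v1);
  // Constant-time comparison to avoid leaking digest prefixes
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note that the check must run against the raw request body, before any JSON parsing, or the digest will not match.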
5. Fetch with GROQ and push to SearchStax
In the handler, use GROQ to fetch the published document from the Content Lake, flatten Portable Text to plain text, join referenced fields, and map values to Solr field names. Post adds and updates to /update?commitWithin=1000. Post deletes to the same endpoint using Solr's JSON delete command.
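GROQ's pt::text() can do the Portable Text flattening server-side, as the code example below shows. If you need to flatten in the handler instead, a minimal sketch (standard blocks only; custom block types are skipped):

```typescript
// Minimal Portable Text flattener: keeps text from standard blocks,
// joins paragraphs with blank lines, and skips non-block types (images).
interface PortableTextSpan {
  _type: string;
  text?: string;
}
interface PortableTextBlock {
  _type: string;
  children?: PortableTextSpan[];
}

function toPlainText(blocks: PortableTextBlock[] = []): string {
  return blocks
    .filter(block => block._type === 'block' && Array.isArray(block.children))
    .map(block => (block.children ?? []).map(child => child.text ?? '').join(''))
    .join('\n\n');
}
```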
6. Test relevance and build the search UI
Publish a test article, confirm the Solr document appears in SearchStax, run a query through the SearchStax dashboard or Solr /select endpoint, then build your frontend with SearchStax Site Search tooling or a server-side search route. Test facets, synonyms, boosts, empty states, and unpublished content removal.
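For the server-side search route, a sketch of building the /select query URL. The edismax parser and the qf boosts below are illustrative defaults, not SearchStax-mandated settings; tune them against your own content:

```typescript
// Builds a Solr /select URL for a server-side search route, keeping
// Solr credentials off the browser. Field names and boosts are examples.
function buildSelectUrl(baseUrl: string, collection: string, q: string): string {
  const params = new URLSearchParams({
    q,
    defType: 'edismax',
    qf: 'title_s^3 body_txt', // boost title matches over body matches
    fl: 'id,title_s,url_s',   // fields to return to the frontend
    rows: '10',
    wt: 'json',
  });
  return `${baseUrl}/solr/${collection}/select?${params.toString()}`;
}
```

Your route would fetch this URL with the same Basic Auth credentials used for indexing and return only the response fields the frontend needs.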
Code example
```typescript
import {createClient} from '@sanity/client';

const sanity = createClient({
  projectId: process.env.SANITY_PROJECT_ID!,
  dataset: process.env.SANITY_DATASET!,
  apiVersion: '2025-02-19',
  token: process.env.SANITY_READ_TOKEN,
  useCdn: false, // read fresh documents, not CDN-cached ones
});

// Project only the fields SearchStax should index, joining references
// and flattening Portable Text to plain text with pt::text().
const query = `*[_id == $id][0]{
  _id,
  _type,
  title,
  slug,
  "body": pt::text(body),
  "category": category->title,
  "tags": tags[]->title,
  _updatedAt
}`;

export async function POST(req: Request) {
  const event = await req.json();
  // Normalize draft IDs so drafts and published docs share one index entry
  const id = event._id?.replace(/^drafts\./, '');
  if (!id) return Response.json({ok: false}, {status: 400});

  if (event.transition === 'delete') {
    // Solr JSON delete command removes the document from the index
    await pushToSearchStax({delete: {id}});
    return Response.json({ok: true, deleted: id});
  }

  const doc = await sanity.fetch(query, {id});
  if (!doc) return Response.json({ok: true, skipped: id});

  // Map CMS fields to Solr dynamic-field names (_s, _txt, _ss, _dt)
  await pushToSearchStax([
    {
      id: doc._id,
      type_s: doc._type,
      title_s: doc.title,
      url_s: `/${doc.slug?.current ?? ''}`,
      body_txt: doc.body,
      category_s: doc.category,
      tags_ss: doc.tags ?? [],
      updated_at_dt: doc._updatedAt,
    },
  ]);
  return Response.json({ok: true, indexed: id});
}

async function pushToSearchStax(body: unknown) {
  const auth = Buffer.from(
    `${process.env.SEARCHSTAX_SOLR_USER}:${process.env.SEARCHSTAX_SOLR_PASSWORD}`
  ).toString('base64');
  const url = `${process.env.SEARCHSTAX_SOLR_URL}/solr/${process.env.SEARCHSTAX_COLLECTION}/update?commitWithin=1000`;
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: `Basic ${auth}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`SearchStax update failed: ${await res.text()}`);
}
```

How Sanity + SearchStax works
Build your SearchStax integration on Sanity
Sanity gives you the structured content foundation, real-time event system, and flexible APIs to keep SearchStax indexes aligned with every content change.
CMS approaches to SearchStax
| Capability | Traditional CMS | Sanity |
|---|---|---|
| Structured data for SearchStax indexing | Search indexes often depend on rendered pages, plugins, or crawlers, which can mix navigation, footer text, and body content. | The Content Lake returns typed JSON, and GROQ can project a ready-to-index document with joined references in one request. |
| Real-time sync on publish and delete | Search updates often run on scheduled crawls or plugin hooks, so stale results can remain until the next crawl finishes. | Webhooks and Functions can trigger SearchStax adds, updates, and deletes as content changes, without polling the source. |
| Field control for relevance and facets | Search fields may mirror page output, making it harder to separate title boosts, body text, category facets, and hidden metadata. | GROQ lets you select, rename, flatten, and join fields for Solr schema design, including categories, tags, locales, and dates. |
| Editorial control over search visibility | Editors may depend on plugin settings or page-level flags that don't map cleanly to every index. | You can add schema fields like noIndex, searchTitle, or searchDescription in Sanity Studio, then enforce them in the SearchStax sync query. |
| Multichannel content reuse | Search, web pages, and apps often pull from different representations of the same content. | One structured back end can feed SearchStax, websites, mobile apps, and AI agents through APIs and Agent Context. |
Keep building
Explore related integrations to complete your content stack.
Sanity + Algolia
Send structured Sanity content to Algolia for typo-tolerant search, instant results, and facet-heavy discovery experiences.
Sanity + Elasticsearch
Index Sanity content in Elasticsearch for custom search APIs, analytics-heavy search logs, and complex query patterns.
Sanity + Coveo
Connect Sanity content to Coveo for enterprise search experiences that combine site content, knowledge bases, and relevance tuning.