How SearchAF Works
From raw data to instant AI answers. A complete pipeline in minutes.
The answer engine pipeline
SearchAF handles the complexity so you can focus on your product.
Connect
One-click integrations
Connect your data sources in a single click: Shopify, GitHub, WordPress, S3, and more.
```javascript
import { SearchAF } from '@searchaf/sdk';

const client = new SearchAF({
  apiKey: process.env.SEARCHAF_KEY
});

// Connect your Shopify store
await client.connect('shopify', {
  shop: 'mystore.myshopify.com'
});
```

```python
from searchaf import SearchAF
import os

client = SearchAF(api_key=os.environ["SEARCHAF_KEY"])

# Connect your Shopify store
client.connect("shopify", shop="mystore.myshopify.com")
```

```shell
curl -X POST https://api.searchaf.com/v1/connect/shopify \
  -H "Authorization: Bearer $SEARCHAF_KEY" \
  -H "Content-Type: application/json" \
  -d '{"shop": "mystore.myshopify.com"}'
```

Ingest
Automatic sync
Background workers sync your content automatically. Real-time webhooks and incremental updates.
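Webhook deliveries like the ones mentioned above are typically authenticated with a signature over the raw request body. A minimal sketch of verifying such a delivery in Python, assuming an HMAC-SHA256 hex signature; the secret, event names, and payload shape here are illustrative, not SearchAF's documented webhook format:

```python
import hashlib
import hmac
import json

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Check an HMAC-SHA256 hex signature against the raw request body."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking timing information
    return hmac.compare_digest(expected, signature)

# Hypothetical "document.updated" event, signed the same way the sender would
secret = "whsec_example"
body = json.dumps({"event": "document.updated", "id": "doc-1"}).encode()
signature = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

if verify_webhook(body, signature, secret):
    event = json.loads(body)
    # An incremental update re-syncs just this document, not the whole source
    print(f"Re-sync {event['id']}")
```

Verifying before acting is what lets incremental updates be trusted: only authenticated events trigger a re-sync of the affected document.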
```javascript
// Batch upload documents
await client.documents.batch([
  {
    id: 'doc-1',
    title: 'Product Guide',
    content: 'Complete guide to our products...'
  },
  {
    id: 'doc-2',
    title: 'FAQ',
    content: 'Frequently asked questions...'
  }
]);
```

```python
# Batch upload documents
client.documents.batch([
    {
        "id": "doc-1",
        "title": "Product Guide",
        "content": "Complete guide to our products..."
    },
    {
        "id": "doc-2",
        "title": "FAQ",
        "content": "Frequently asked questions..."
    }
])
```

```shell
curl -X POST https://api.searchaf.com/v1/documents/batch \
  -H "Authorization: Bearer $SEARCHAF_KEY" \
  -H "Content-Type: application/json" \
  -d '[{"id":"doc-1","title":"Product Guide","content":"..."}]'
```

Process
Intelligent processing
Automatic chunking, entity extraction, and embedding generation. No ML expertise required.
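To make "automatic chunking" concrete, here is a toy version of what a max-token splitter does. Whitespace tokens stand in for real tokenizer tokens, and SearchAF's semantic strategy is presumably smarter about where it cuts; this only illustrates the token-budget idea behind a setting like maxTokens: 512:

```python
def chunk_text(text: str, max_tokens: int = 512) -> list[str]:
    """Greedily pack whitespace-separated tokens into chunks of at most max_tokens."""
    words = text.split()
    chunks = []
    for i in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[i:i + max_tokens]))
    return chunks

doc = "word " * 1200  # a 1200-token toy document
pieces = chunk_text(doc, max_tokens=512)
print(len(pieces))  # -> 3 (chunks of 512, 512, and 176 tokens)
```

A semantic strategy would additionally prefer cut points at sentence or topic boundaries rather than fixed offsets, which is why it is worth configuring rather than hand-rolling.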
```javascript
// Processing happens automatically on ingest
// Optionally configure chunking strategy
await client.configure({
  chunking: {
    strategy: 'semantic',
    maxTokens: 512
  },
  embeddings: {
    model: 'text-embedding-3-small'
  }
});
```

```python
# Processing happens automatically on ingest
# Optionally configure chunking strategy
client.configure(
    chunking={
        "strategy": "semantic",
        "max_tokens": 512
    },
    embeddings={
        "model": "text-embedding-3-small"
    }
)
```

```shell
curl -X PATCH https://api.searchaf.com/v1/config \
  -H "Authorization: Bearer $SEARCHAF_KEY" \
  -H "Content-Type: application/json" \
  -d '{"chunking":{"strategy":"semantic","maxTokens":512}}'
```

Store
Distributed storage
Hybrid vector + keyword index for fast retrieval. Powered by AntflyDB.
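The page doesn't specify how the hybrid index merges its keyword and vector rankings, but a common technique in hybrid engines is reciprocal rank fusion (RRF): each document's fused score is the sum of 1/(k + rank) across the result lists it appears in. A sketch of that idea, with invented document IDs:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: combine several ranked lists into one."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Documents near the top of any list get the largest boost
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc-2", "doc-7", "doc-1"]  # BM25-style keyword ranking
vector_hits = ["doc-1", "doc-2", "doc-9"]   # embedding-similarity ranking
print(rrf_fuse([keyword_hits, vector_hits]))
# -> ['doc-2', 'doc-1', 'doc-7', 'doc-9']
```

The appeal of rank-based fusion is that keyword and vector scores live on incompatible scales, while ranks are always comparable; whether AntflyDB uses RRF or a weighted score blend is not stated here.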
```javascript
// Data is indexed automatically after processing
// Check indexing status
const status = await client.index.status();
console.log(`Documents: ${status.documentCount}`);
console.log(`Indexed: ${status.indexedCount}`);
console.log(`Status: ${status.state}`);
```

```python
# Data is indexed automatically after processing
# Check indexing status
status = client.index.status()
print(f"Documents: {status.document_count}")
print(f"Indexed: {status.indexed_count}")
print(f"Status: {status.state}")
```

```shell
curl https://api.searchaf.com/v1/index/status \
  -H "Authorization: Bearer $SEARCHAF_KEY"

# Response:
# {"documentCount": 1250, "indexedCount": 1250, "state": "ready"}
```

Query
Lightning fast search
Hybrid search API combining keyword + semantic matching. Sub-50ms latency.
```javascript
const results = await client.search({
  query: 'wireless headphones under $100',
  limit: 10,
  filters: {
    category: 'electronics',
    inStock: true
  }
});

results.hits.forEach(hit => {
  console.log(hit.title, hit.score);
});
```

```python
results = client.search(
    query="wireless headphones under $100",
    limit=10,
    filters={
        "category": "electronics",
        "in_stock": True
    }
)

for hit in results.hits:
    print(hit.title, hit.score)
```

```shell
curl -X POST https://api.searchaf.com/v1/search \
  -H "Authorization: Bearer $SEARCHAF_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "wireless headphones under $100",
    "limit": 10,
    "filters": {"category": "electronics"}
  }'
```

Answer
Grounded AI answers
AI-generated answers grounded in your data. RAG pipeline with citation support.
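Under the hood, a citation-supporting RAG pipeline typically retrieves the top chunks and numbers them in the prompt so the model can cite [1], [2], and so on. A rough sketch of that assembly step; the retrieved hits and prompt wording are invented for illustration and are not SearchAF's internal prompt:

```python
def build_grounded_prompt(question: str, hits: list[dict]) -> str:
    """Number retrieved chunks so the model can cite them as [1], [2], ..."""
    sources = "\n".join(
        f"[{i}] {hit['title']}: {hit['content']}"
        for i, hit in enumerate(hits, start=1)
    )
    return (
        "Answer using only the sources below, citing them like [1].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

hits = [
    {"title": "Returns FAQ", "content": "Items may be returned within 30 days."},
    {"title": "Shipping", "content": "Orders ship within 2 business days."},
]
prompt = build_grounded_prompt("What's your return policy?", hits)
print(prompt)
```

Because the chunks are numbered before generation, citation markers in the answer can be mapped back to the exact source documents, which is what makes the sources list in the API responses below possible.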
```javascript
const answer = await client.answer({
  question: "What's your return policy?",
  stream: true,
  includeSources: true
});

// Stream the response
for await (const chunk of answer) {
  process.stdout.write(chunk.text);
}

// Access cited sources
console.log('Sources:', answer.sources);
```

```python
answer = client.answer(
    question="What's your return policy?",
    stream=True,
    include_sources=True
)

# Stream the response
for chunk in answer:
    print(chunk.text, end="")

# Access cited sources
print("Sources:", answer.sources)
```

```shell
curl -X POST https://api.searchaf.com/v1/answer \
  -H "Authorization: Bearer $SEARCHAF_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What is your return policy?",
    "stream": false,
    "includeSources": true
  }'
```

One-click setup for your platform
Connect your favorite tools and start building in minutes.
Ready to build your answer engine?
Get started for free. No credit card required.