LlamaIndex vs Haystack — RAG Framework Comparison (2026)

This LlamaIndex vs Haystack comparison cuts through the noise: when each framework excels, how they approach RAG differently at the architectural level, and a concrete decision matrix for production teams.

Updated March 2026 — Covers LlamaIndex 0.12+ and Haystack 2.x (the pipeline-first rewrite from deepset).

Who this is for:

  • Engineers evaluating RAG frameworks for a new project and not sure which to invest in learning
  • Teams already using one of these frameworks who want to know what they are missing
  • GenAI engineers preparing for system design interviews where framework selection is tested

LlamaIndex and Haystack represent two fundamentally different bets on what makes RAG systems succeed in production. The frameworks overlap in capability — both handle document ingestion, embedding, retrieval, and generation — but they prioritize different concerns.

LlamaIndex bets that developers want the shortest path from raw documents to a working query engine. Its abstractions hide complexity: pass a folder of PDFs and get a queryable index in five lines. The framework manages chunking, embedding generation, vector store persistence, and synthesis for you.

Haystack (by deepset) bets that production RAG systems need explicit, auditable pipelines. Every component — file converter, preprocessor, embedder, retriever, prompt builder, generator — is a node in a directed graph. You wire them together deliberately. Nothing is hidden.

Choosing the wrong framework is not just a learning tax — it creates architectural debt:

| Wrong choice | What breaks |
|---|---|
| LlamaIndex for a complex multi-stage indexing pipeline | You fight against high-level abstractions trying to inject custom preprocessing steps |
| Haystack for a simple document Q&A prototype | 3x the boilerplate of LlamaIndex; slower iteration |
| Either framework without understanding pipeline transparency | Debugging in production becomes painful when retrieval quality degrades |

Both frameworks released major architecture changes in the last 18 months. Understanding the current versions is important — many blog posts compare outdated APIs.

| Feature | LlamaIndex 0.12+ (2026) | Haystack 2.x (2026) |
|---|---|---|
| Core abstraction | VectorStoreIndex, QueryEngine | Pipeline graph with typed components |
| Async support | Native async query and ingestion pipelines | Async pipeline execution (AsyncPipeline) |
| Agent capabilities | ReAct agents, FunctionCallingAgent, multi-agent workflows | Agent with tool calling via AgentRunner |
| Observability | LlamaTrace, Arize Phoenix integration | Hayhooks (REST API), Datadog integration |
| Multi-modal | Multi-modal index (text + image retrieval) | Multi-modal pipeline components |
| Hosted option | LlamaCloud (managed ingestion + retrieval) | deepset Cloud (managed Haystack pipelines) |
| Custom components | Custom node parsers, retrievers, synthesizers | Custom component class with @component decorator |
| Evaluation | Built-in RAGAs integration, FaithfulnessEvaluator | Evaluation harness with custom metrics |

The right framework depends on the specific RAG problem you are solving, not on which tool has better documentation. The right question is not “which is better” — it is “which problem am I solving?”

| Scenario | Wrong choice | Right choice | Why |
|---|---|---|---|
| Upload 500 PDFs, build Q&A in a day | Haystack | LlamaIndex | 5-line VectorStoreIndex vs 40-line pipeline |
| Enterprise RAG with PII redaction, audit logging, custom chunking | LlamaIndex | Haystack | Haystack’s explicit pipeline makes each step visible and replaceable |
| Prototype → production with a team of 5+ engineers | LlamaIndex alone | Haystack | Pipeline definition files (YAML) enable version control and review |
| Academic/research RAG with novel retrieval algorithms | LlamaIndex | Haystack | Haystack’s component interface makes it easier to swap retrievers |
| Combine dense + sparse retrieval (hybrid search) | LlamaIndex alone | Haystack | Haystack’s JoinDocuments + hybrid pipelines are first-class |
| Multi-hop reasoning across a knowledge graph | Haystack | LlamaIndex | LlamaIndex’s KnowledgeGraphIndex is purpose-built for this |

Both frameworks handle document ingestion, embedding, retrieval, and generation — but they differ fundamentally in where complexity lives.

LlamaIndex’s central insight is that the hardest part of RAG is not the generation step — it is getting your data into a queryable form. The framework is built around two primitives:

Nodes: Chunks of text (or structured data) with metadata and relationships. LlamaIndex’s node parsers handle chunking with semantic awareness — sentence boundaries, section headers, and configurable overlap.

Indexes: Data structures that organize nodes for retrieval. The VectorStoreIndex is the default (cosine similarity over embeddings). Alternatives include KeywordTableIndex (BM25), KnowledgeGraphIndex (entity-relation triples), and TreeIndex (hierarchical summarization).

The query pipeline is implicit: call index.as_query_engine() and LlamaIndex handles retrieval → context assembly → synthesis. You can override each step, but you do not have to.

Documents → Node Parser → Nodes → VectorStoreIndex
Query → Retriever → Nodes → Response Synthesizer → Answer
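The overlap-aware chunking that node parsers perform can be sketched in plain Python. This toy version is not LlamaIndex's actual SentenceSplitter; it only illustrates the idea of fixed-size sentence windows with a configurable overlap:

```python
def chunk_with_overlap(sentences, chunk_size=5, overlap=1):
    """Group sentences into windows of `chunk_size`, repeating the last
    `overlap` sentences at the start of the next chunk so that context
    is not cut off at chunk boundaries."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(sentences), step):
        window = sentences[start:start + chunk_size]
        chunks.append(" ".join(window))
        if start + chunk_size >= len(sentences):
            break
    return chunks

sentences = [f"Sentence {i}." for i in range(1, 11)]  # 10 toy sentences
chunks = chunk_with_overlap(sentences, chunk_size=4, overlap=1)
# Each chunk begins with the final sentence of the previous chunk.
```

Real node parsers also track metadata and node relationships; the overlap mechanic, though, is essentially this.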

Haystack 2.x reframes RAG as a pipeline graph problem. A pipeline is a directed acyclic graph (DAG) of components. Each component has typed inputs and outputs. You connect them explicitly:

FileTypeRouter → PyPDFToDocument → DocumentSplitter
DocumentEmbedder → InMemoryDocumentStore

For retrieval:

Query → TextEmbedder → InMemoryEmbeddingRetriever → PromptBuilder → OpenAIGenerator → Answer

The key difference: in LlamaIndex, the pipeline is implicit and managed by the framework. In Haystack, the pipeline is explicit and managed by you. Haystack pipelines can be serialized to YAML, committed to git, and loaded at runtime — treating your RAG architecture as configuration rather than code.
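The explicit-wiring idea can be made concrete with a toy pure-Python version (a sketch of the pattern, not Haystack's actual Pipeline class, which is a general DAG with typed input/output sockets):

```python
class ToyPipeline:
    """Minimal linear pipeline: each named step is a callable that takes
    the previous step's output. Recording every intermediate output is
    what makes an explicit pipeline auditable."""

    def __init__(self):
        self.steps = []  # ordered list of (name, callable)

    def add_component(self, name, fn):
        self.steps.append((name, fn))

    def run(self, data):
        trace = {}
        for name, fn in self.steps:
            data = fn(data)
            trace[name] = data  # inspectable after the run
        return data, trace

pipe = ToyPipeline()
pipe.add_component("cleaner", lambda text: text.strip())
pipe.add_component("splitter", lambda text: text.split(". "))
pipe.add_component("counter", lambda chunks: len(chunks))

result, trace = pipe.run("  First sentence. Second sentence  ")
# When output quality drops, `trace` shows exactly which step changed the data.
```

The design choice this illustrates: because every step is named and its output captured, debugging means inspecting `trace` rather than stepping through framework internals.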

LlamaIndex vs Haystack — Pipeline Architecture

LlamaIndex hides the pipeline. Haystack makes it explicit.

LlamaIndex (implicit pipeline): high-level, the framework manages the steps

  • SimpleDirectoryReader
  • VectorStoreIndex (auto-chunks, auto-embeds)
  • QueryEngine (retrieves + synthesizes)
  • Response (with source nodes)

Haystack (explicit pipeline): low-level, you wire every component

  • FileTypeRouter → PDFConverter
  • DocumentSplitter → DocumentEmbedder
  • DocumentStore (Pinecone / Weaviate / etc.)
  • Retriever → PromptBuilder → Generator

Both examples build the same RAG pipeline: ingest a folder of PDF documents, embed them, and answer questions via semantic retrieval.

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load all PDFs from a folder
documents = SimpleDirectoryReader("docs/").load_data()

# Index automatically handles: chunking, embedding, vector store
index = VectorStoreIndex.from_documents(documents)

# Query — retrieval + synthesis handled internally
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What are the key contract termination clauses?")
print(response)
print("Sources:", [n.metadata["file_name"] for n in response.source_nodes])
```

What LlamaIndex handles for you: chunking strategy (sentence splitter by default), embedding model (OpenAI text-embedding-3-small if OPENAI_API_KEY is set), in-memory vector store, context assembly, LLM synthesis, and source node attribution. Total: ~8 lines.

```python
import glob

from haystack import Pipeline
from haystack.components.converters import PyPDFToDocument
from haystack.components.preprocessors import DocumentSplitter, DocumentCleaner
from haystack.components.embedders import OpenAIDocumentEmbedder, OpenAITextEmbedder
from haystack.components.retrievers import InMemoryEmbeddingRetriever
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()

# --- Indexing pipeline ---
indexing = Pipeline()
indexing.add_component("converter", PyPDFToDocument())
indexing.add_component("cleaner", DocumentCleaner())
indexing.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=5))
indexing.add_component("embedder", OpenAIDocumentEmbedder())
indexing.add_component("writer", DocumentWriter(document_store=document_store))
indexing.connect("converter", "cleaner")
indexing.connect("cleaner", "splitter")
indexing.connect("splitter", "embedder")
indexing.connect("embedder", "writer")

# Run indexing
pdf_files = glob.glob("docs/*.pdf")
indexing.run({"converter": {"sources": pdf_files}})

# --- Query pipeline ---
template = """
Given the following documents, answer the question.
Documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:
"""

querying = Pipeline()
querying.add_component("embedder", OpenAITextEmbedder())
querying.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store, top_k=5))
querying.add_component("prompt_builder", PromptBuilder(template=template))
querying.add_component("llm", OpenAIGenerator())
querying.connect("embedder.embedding", "retriever.query_embedding")
querying.connect("retriever", "prompt_builder.documents")
querying.connect("prompt_builder", "llm")

question = "What are the key contract termination clauses?"
result = querying.run({
    "embedder": {"text": question},
    "prompt_builder": {"question": question},
})
print(result["llm"]["replies"][0])
```

What Haystack requires you to specify: every component, every connection, the prompt template, component configuration. Total: ~45 lines.

LlamaIndex’s 8-line version is faster to write but harder to customize. Where exactly does LlamaIndex chunk? What overlap does it use? What happens when the PDF has tables? You can configure these — but finding the right parameters requires reading the documentation.

Haystack’s 45-line version makes all of this explicit. You see exactly where documents are cleaned, how they are split, what the prompt looks like. When retrieval quality drops in production, you know exactly which component to investigate.


The differences between the two frameworks are most visible when comparing boilerplate required, pipeline visibility, and hybrid retrieval support.

LlamaIndex vs Haystack — Which RAG Framework?

LlamaIndex: data-first indexing with minimal boilerplate

Strengths:
  • Fastest path to a working RAG system — 5-8 lines of code
  • Rich index types: vector, keyword, knowledge graph, tree
  • Multi-modal support — text and image retrieval in the same index
  • LlamaCloud for managed, production-grade ingestion pipelines
  • Strong data connector ecosystem (100+ loaders: Notion, Slack, S3, etc.)

Trade-offs:
  • Implicit pipeline makes debugging harder when retrieval quality degrades
  • Custom preprocessing steps require overriding internal abstractions
  • Pipeline not serializable to YAML — harder to version-control architecture

Haystack: explicit pipeline orchestration for production RAG

Strengths:
  • Explicit pipeline DAG — every step visible, auditable, and replaceable
  • YAML serialization — version control your pipeline architecture
  • First-class hybrid retrieval: dense + sparse with JoinDocuments
  • Hayhooks REST API — expose any pipeline as an HTTP endpoint
  • Custom @component decorator — plug in any Python function as a node

Trade-offs:
  • More boilerplate — 4-5x more code than LlamaIndex for the same basic RAG
  • Steeper learning curve — must understand pipelines, component typing, and connections
  • Slower prototyping — not ideal for day-one exploratory work

Verdict: Use LlamaIndex when speed-to-working-prototype matters. Use Haystack when pipeline auditability and custom preprocessing are non-negotiable.

Use LlamaIndex for: startups, prototypes, data-heavy apps, knowledge graphs, multi-modal RAG.
Use Haystack for: enterprise RAG, compliance-sensitive pipelines, teams of 3+ engineers, hybrid retrieval.
| Capability | LlamaIndex | Haystack |
|---|---|---|
| Lines of code (basic RAG) | ~8 | ~45 |
| Pipeline visibility | Implicit | Explicit DAG |
| YAML serialization | No | Yes |
| Hybrid retrieval | Via custom retriever | First-class (JoinDocuments) |
| Knowledge graph index | Yes (built-in) | No (requires custom) |
| Multi-modal | Yes (text + image) | Partial (text-focused) |
| Custom components | Override base classes | @component decorator |
| REST API serving | LlamaCloud | Hayhooks (open-source) |
| Evaluation framework | RAGAs integration, built-in evaluators | Evaluation harness |
| Managed cloud | LlamaCloud | deepset Cloud |
| Primary backing | Jerry Liu / VC-funded | deepset (Series B) |
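Hybrid retrieval ultimately comes down to merging two ranked result lists, one from dense (embedding) search and one from sparse (keyword) search. A common merge strategy is reciprocal rank fusion, sketched here in plain Python; this is illustrative, not Haystack's exact implementation, and k=60 is the constant from the original RRF paper:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge ranked lists of doc IDs. Each document scores
    sum(1 / (k + rank)) over every list it appears in; higher is better,
    and appearing near the top of multiple lists compounds."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc_a", "doc_b", "doc_c"]   # semantic-similarity order
sparse = ["doc_c", "doc_a", "doc_d"]  # BM25 keyword order
fused = reciprocal_rank_fusion([dense, sparse])
# doc_a and doc_c, ranked well by both retrievers, rise to the top;
# doc_d, found only by keyword search, still survives the merge.
```

The point of making this a first-class pipeline component, as Haystack does, is that the fusion step is visible and swappable rather than buried inside a retriever.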

Both frameworks are production-ready, but their evaluation integration, deployment patterns, and scaling approaches differ in important ways.

Both frameworks support RAG evaluation, but the integration patterns differ.

LlamaIndex evaluation:

```python
from llama_index.core.evaluation import (
    FaithfulnessEvaluator,
    RelevancyEvaluator,
    CorrectnessEvaluator,
)

# Evaluate a single response (inside an async function)
faithfulness_evaluator = FaithfulnessEvaluator()
result = await faithfulness_evaluator.aevaluate_response(
    query="What is attention?",
    response=response,
)
print(f"Faithfulness: {result.score} — {result.feedback}")
```

Haystack evaluation:

```python
from haystack.evaluation import EvaluationRunResult

# Run evaluation harness
result = EvaluationRunResult(
    run_name="rag-eval-v1",
    inputs={"questions": questions, "ground_truths": ground_truths},
    results={"answers": pipeline_answers, "contexts": retrieved_docs},
)

# Compute metrics
metrics = result.calculate_metrics(["faithfulness", "context_precision"])
result.score_report()  # prints a formatted summary
```
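For intuition, here is what a context-precision style metric measures, in a deliberately simplified pure-Python sketch (not Haystack's or RAGAs' implementation, which typically weight by rank position): the fraction of retrieved chunks that are actually relevant to the question.

```python
def context_precision(retrieved, relevant):
    """Fraction of retrieved chunk IDs that appear in the relevant set.
    Simplified: real implementations usually weight hits by rank."""
    if not retrieved:
        return 0.0
    hits = sum(1 for doc_id in retrieved if doc_id in relevant)
    return hits / len(retrieved)

# One hypothetical eval example: 5 chunks retrieved, 3 actually relevant
score = context_precision(
    retrieved=["c1", "c2", "c3", "c4", "c5"],
    relevant={"c1", "c3", "c5"},
)
# score == 0.6
```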

LlamaIndex (LlamaCloud):

LlamaCloud provides managed ingestion pipelines and a hosted index. You push documents to the API, LlamaCloud handles chunking, embedding, and vector store management. Retrieval is via REST API. Useful when you want to eliminate infrastructure entirely.

```python
import os

from llama_cloud import LlamaCloud
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

client = LlamaCloud(token=os.environ["LLAMA_CLOUD_API_KEY"])
pipeline = client.pipelines.upsert_pipeline(request={"name": "prod-docs"})
pipeline.upload_file("docs/manual.pdf")

# Query via managed index
index = LlamaCloudIndex("prod-docs", token=os.environ["LLAMA_CLOUD_API_KEY"])
engine = index.as_query_engine()
print(engine.query("How do I configure authentication?"))
```

Haystack (Hayhooks):

Hayhooks converts any Haystack pipeline into a FastAPI REST endpoint. You define the pipeline, Hayhooks handles HTTP routing, request validation, and async execution.

```shell
# Start the Hayhooks server, pointing it at a directory of pipeline definitions
hayhooks run --pipelines-dir ./pipelines/
# POST /rag-query → runs the pipeline
```

Both frameworks are framework-layer code that sits above your vector database and LLM. Scaling concerns are mostly in those layers. However:

LlamaIndex scaling patterns:

  • Use IngestionPipeline with async batch processing for large document sets
  • VectorStoreIndex with a production vector store (Pinecone, Weaviate, Qdrant) instead of in-memory
  • For very large corpora, LlamaCloud removes the need to manage the ingestion infrastructure

Haystack scaling patterns:

  • AsyncPipeline for concurrent document processing during ingestion
  • Stateless pipeline design — each request creates a new pipeline run, enabling horizontal scaling
  • DocumentStore swap from InMemoryDocumentStore to production stores is a one-line change

For vector database selection, see the full vector DB comparison and Pinecone vs Weaviate deep dive.


Use this framework to match your specific project constraints to the right tool — not to determine which framework is generically “better.”

Choose LlamaIndex if:

  • You are building the first version of a RAG system and want to validate the concept quickly
  • Your primary challenge is data ingestion (many sources, complex file formats, structured + unstructured data)
  • You need multi-modal retrieval (text and images in the same index)
  • You want knowledge graph-based retrieval for entities and relationships
  • Your team is <3 engineers and you want to minimize framework surface area
  • You are evaluating RAG feasibility before committing to an architecture

Choose Haystack if:

  • Your pipeline needs custom preprocessing steps that must be auditable (PII redaction, domain-specific cleaning)
  • You want to version-control your RAG architecture alongside your application code (YAML pipelines in git)
  • You need hybrid retrieval (dense + sparse) as a first-class feature
  • Your team wants to expose RAG pipelines as REST APIs without writing FastAPI boilerplate
  • You are building in a regulated domain (healthcare, finance, legal) where auditability is required
  • You have 3+ engineers working on the RAG system and need clear component ownership
| Requirement | LlamaIndex | Haystack |
|---|---|---|
| Fastest prototype (<1 day) | Yes | No |
| YAML pipeline versioning | No | Yes |
| Knowledge graph RAG | Yes | No |
| Hybrid dense + sparse retrieval | Partial | Yes |
| Multi-modal (text + image) | Yes | Partial |
| Audit logging per component | No | Yes |
| Managed cloud option | LlamaCloud | deepset Cloud |
| Custom preprocessing (e.g. PII) | Possible (verbose) | Yes (clean) |
| Team of 1-2 engineers | Yes | No |
| Team of 5+ engineers | Possible | Yes |
| Regulated industry (HIPAA, SOC2) | No | Yes |

Some teams use both: LlamaIndex for rapid prototyping and knowledge graph features, Haystack for the production pipeline that replaced the prototype. The interfaces are different enough that this is a rewrite, not a gradual migration — plan accordingly.

A cleaner hybrid is to use LlamaIndex as the retrieval component inside a Haystack pipeline by wrapping a LlamaIndex query engine as a custom Haystack component. This lets you use LlamaIndex’s superior indexing while keeping Haystack’s pipeline auditability.

```python
from haystack import component, Document
from llama_index.core import VectorStoreIndex


@component
class LlamaIndexRetriever:
    def __init__(self, index: VectorStoreIndex, top_k: int = 5):
        # LlamaIndex retriever reused inside a Haystack pipeline
        self.retriever = index.as_retriever(similarity_top_k=top_k)

    @component.output_types(documents=list[Document])
    def run(self, query: str):
        nodes = self.retriever.retrieve(query)
        return {
            "documents": [
                Document(content=n.text, meta=n.metadata) for n in nodes
            ]
        }
```

Framework selection is a common GenAI system design interview topic. Interviewers test whether you can reason about trade-offs, not recite feature lists.

Q: You are designing a RAG system for a healthcare company. They need HIPAA compliance and an audit trail of every retrieval decision. Which RAG framework do you choose?

Weak answer: “LlamaIndex because it’s easier to use and has good documentation.”

Strong answer: “The HIPAA and audit trail requirements point directly to Haystack. Healthcare compliance means every processing step — PII detection, document filtering, retrieval — needs to be logged and auditable. Haystack’s explicit pipeline DAG makes this natural: I can insert a PHIRedactionComponent between the document converter and the splitter, log component inputs and outputs, and serialize the pipeline to YAML for compliance review. LlamaIndex’s implicit pipeline would require overriding internal hooks to achieve the same auditability — it can be done, but Haystack is designed for it.”


Q: How would you evaluate the retrieval quality of a LlamaIndex RAG system in production?

Strong answer: “I would instrument the query engine with LlamaIndex’s built-in evaluators: FaithfulnessEvaluator to check whether answers are grounded in retrieved documents, RelevancyEvaluator to check whether retrieved nodes actually contain relevant information, and CorrectnessEvaluator against a golden dataset. For the golden dataset, I would sample 50-100 real queries from production logs and have domain experts annotate the correct answers. I would run evaluation on a weekly cadence, tracking faithfulness score over time. A drop in faithfulness score signals that retrieval quality degraded — possibly because the document corpus changed without re-indexing. See the evaluation guide for the full methodology.”
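The weekly-tracking step in that answer can be sketched as a simple baseline comparison: alert when the latest score falls meaningfully below the trailing average. This is a toy heuristic with hypothetical scores, not a library feature:

```python
def faithfulness_drop(scores, window=4, threshold=0.05):
    """Return True if the latest weekly score falls more than `threshold`
    below the mean of the previous `window` weeks."""
    if len(scores) <= window:
        return False  # not enough history for a baseline
    baseline = sum(scores[-window - 1:-1]) / window
    return scores[-1] < baseline - threshold

weekly = [0.91, 0.92, 0.90, 0.93, 0.91, 0.82]  # hypothetical weekly scores
alert = faithfulness_drop(weekly)  # latest week is well below the baseline
```

In practice the alert would trigger a look at what changed: corpus updates without re-indexing, embedding model changes, or retriever configuration drift.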


Q: A team is debating between LlamaIndex and Haystack for a new RAG project. What questions would you ask before making a recommendation?

Strong answer: “Five questions: First, how large is the team — a solo engineer moves faster with LlamaIndex; a team of five needs Haystack’s component boundaries. Second, what preprocessing does the data need — if documents need custom cleaning beyond standard PDF extraction, Haystack’s explicit components make this cleaner. Third, is there a compliance requirement — if yes, Haystack. Fourth, does the use case require hybrid retrieval — if keyword matching matters alongside semantic search, Haystack’s JoinDocuments component handles this better. Fifth, what is the timeline — if you need a working demo in two days, LlamaIndex. If you are building for six months of production life, the upfront cost of Haystack’s pipeline design pays off in maintainability.”


Q: LlamaIndex and Haystack both support pluggable vector databases. What is the difference in how they implement this?

Strong answer: “LlamaIndex uses a VectorStore abstraction — you pass a vector store client to VectorStoreIndex, and the index delegates storage and retrieval to it. The vector store choice is a constructor parameter. Haystack uses a DocumentStore abstraction — you instantiate a document store (e.g. WeaviateDocumentStore, PineconeDocumentStore) and pass it to retriever components. Both support Pinecone, Weaviate, Qdrant, and Chroma. The difference is that Haystack’s DocumentStore is a pipeline component with typed inputs and outputs, while LlamaIndex’s VectorStore is an injected dependency. Neither approach is strictly better — Haystack’s is more explicit, LlamaIndex’s is less verbose. For more on vector database selection, see LangChain vs LlamaIndex and the vector DB comparison.”


LlamaIndex and Haystack are both mature, production-ready RAG frameworks. The choice depends on your constraints, not on which framework is “better.”

Pick LlamaIndex when:

  • You need the fastest path to a working prototype
  • Your data challenge is diverse sources, multi-modal content, or knowledge graphs
  • Your team is small and you want to minimize framework complexity
  • You value a data-centric API over pipeline explicitness

Pick Haystack when:

  • Your pipeline needs to be auditable, versioned, and modular
  • You are building in a regulated domain or for enterprise compliance
  • You need first-class hybrid retrieval (dense + sparse)
  • You want to expose pipelines as REST APIs without extra infrastructure
  • Your team is 3+ engineers who need clear component ownership

The deeper lesson: Both frameworks teach you something important about RAG architecture. LlamaIndex shows you that high-level abstractions cover 80% of use cases. Haystack shows you that the remaining 20% is where production systems live — and that making the pipeline explicit is worth the upfront cost.


Last updated: March 2026. LlamaIndex and Haystack are under active development — verify current API signatures against official documentation before using in production.

Frequently Asked Questions

What is the difference between LlamaIndex and Haystack?

LlamaIndex is a data framework focused on connecting LLMs to your data — it excels at indexing, chunking, and retrieval with minimal code. Haystack is a pipeline orchestration framework by deepset that builds end-to-end NLP and RAG applications with composable components. LlamaIndex gives you the fastest path to a working RAG system while Haystack gives you more control over the entire pipeline architecture.

Which is better for production RAG: LlamaIndex or Haystack?

Both are production-ready but shine in different scenarios. Haystack's pipeline architecture gives you explicit control over component ordering, error handling, and data flow — preferred by teams that need auditability and custom processing steps. LlamaIndex's high-level abstractions let you ship faster but can make debugging harder when things go wrong. For complex enterprise RAG with custom preprocessing, choose Haystack. For rapid prototyping and data-heavy applications, choose LlamaIndex.

Can LlamaIndex and Haystack work with the same vector databases?

Yes, both support all major vector databases including Pinecone, Weaviate, Qdrant, Chroma, Milvus, and pgvector. LlamaIndex uses vector store abstractions through its VectorStoreIndex class, while Haystack uses DocumentStore components that plug into pipelines. Switching vector databases requires minimal code changes in both frameworks.

Is LlamaIndex easier to learn than Haystack?

Yes, for basic RAG. LlamaIndex lets you build a working RAG system in 5 lines of code using its high-level VectorStoreIndex. Haystack requires you to understand its pipeline concept and manually connect components (DocumentStore, Retriever, PromptBuilder, Generator). However, Haystack's explicitness becomes an advantage as your system grows — you always know exactly what each component does and in what order.

Does Haystack support hybrid retrieval with dense and sparse search?

Yes, Haystack has first-class hybrid retrieval support through its JoinDocuments component, which merges results from dense (embedding) and sparse (BM25/keyword) retrievers in a single pipeline. LlamaIndex can achieve hybrid retrieval through custom retriever implementations, but it requires more manual wiring. If keyword matching alongside semantic search is important for your use case, Haystack handles it more cleanly.

Can I use LlamaIndex and Haystack together in the same project?

Yes, you can wrap a LlamaIndex query engine as a custom Haystack component using Haystack's @component decorator. This lets you use LlamaIndex's superior indexing and knowledge graph features while keeping Haystack's explicit pipeline auditability for the overall workflow. See LangChain vs LlamaIndex for more on how LlamaIndex compares within the broader ecosystem.

Which framework is better for a team of 5+ engineers?

Haystack is generally the better choice for larger teams. Its explicit pipeline DAG with typed components creates clear component boundaries and ownership, and its YAML serialization lets you version-control pipeline architecture alongside application code. LlamaIndex's implicit pipeline can make it harder for multiple engineers to collaborate when the system grows complex.

How do LlamaIndex and Haystack handle RAG evaluation?

LlamaIndex provides built-in evaluators including FaithfulnessEvaluator, RelevancyEvaluator, and CorrectnessEvaluator, plus RAGAs integration for standardized metrics. Haystack offers an evaluation harness with EvaluationRunResult that computes metrics like faithfulness and context precision across batches. Both frameworks support custom evaluation metrics for measuring RAG retrieval quality.

Does LlamaIndex support knowledge graph-based RAG?

Yes, LlamaIndex has a built-in KnowledgeGraphIndex that stores entity-relation triples and supports graph-based retrieval for multi-hop reasoning across entities and relationships. Haystack does not have an equivalent built-in knowledge graph index and would require custom component development for graph-based retrieval.

How do I deploy a Haystack pipeline as a REST API?

Haystack provides Hayhooks, an open-source tool that converts any Haystack pipeline into a FastAPI REST endpoint automatically. You place pipeline definition files in a directory and run the Hayhooks server, which handles HTTP routing, request validation, and async execution. LlamaIndex users can achieve similar deployment through LlamaCloud, which provides managed ingestion and retrieval via a hosted REST API.