AutoGen vs LangGraph — Conversation Agents or State Graphs? (2026)

This AutoGen vs LangGraph comparison helps you choose the right multi-agent framework for your next project. You will find architecture breakdowns, side-by-side Python code for the same task, a production readiness analysis, and a decision matrix built on real engineering trade-offs — not marketing claims.

  • Engineers evaluating multi-agent frameworks for a new project or migration
  • Developers who have tried AutoGen and want to understand whether LangGraph is worth the switch
  • GenAI practitioners who need to answer framework-selection questions in technical interviews
  • Teams running AutoGen in production who are hitting non-determinism or observability walls

If you are brand new to agentic frameworks, start with the full agentic frameworks comparison to understand LangGraph, CrewAI, and AutoGen side by side before diving into this two-way comparison.


1. Two Fundamentally Different Mental Models


AutoGen and LangGraph both coordinate multiple LLM-powered agents. Both support tool use, multi-step reasoning, and integration with OpenAI, Anthropic, and open-source models. From a distance, they solve the same problem.

They solve it with incompatible philosophies.

AutoGen treats multi-agent coordination as a group conversation. Agents are participants in a chat. A manager agent — itself powered by an LLM — reads the conversation history and decides which agent speaks next. Coordination is emergent. You define who can talk; the LLM decides when.

LangGraph treats multi-agent coordination as a state machine. You define nodes (Python functions), edges (transitions between nodes), and a typed state schema. The graph is compiled. Execution is deterministic. You define not just who can run but exactly when and under what conditions.

This single distinction cascades into every engineering decision: how you debug, how you test, how you scale, and how you guarantee behavior in production.

For the broader ecosystem context — including CrewAI — see the agentic frameworks overview and the AI agents deep dive.


Both frameworks have changed substantially since their initial releases. If your mental model is based on AutoGen 0.2 or an older LangGraph version, this table covers the current state.

| Feature | AutoGen (2026) | LangGraph (2026) |
| --- | --- | --- |
| Version | 0.4.x (major rewrite from v0.2) | 0.2.x (stable, production-grade) |
| Core architecture | Event-driven runtime with actor model | Compiled state graph with typed schema |
| API compatibility | Breaking changes from v0.2 | Backward-compatible minor releases |
| Async support | Full async in v0.4 core rewrite | Native async with astream and ainvoke |
| Persistence | Teachable agents with memory backends | Built-in checkpointer (SQLite, PostgreSQL, Redis) |
| Human-in-the-loop | UserProxyAgent with ALWAYS input mode | First-class interrupt() with state resume |
| Streaming | Basic streaming of messages | Token-level and node-level streaming |
| Visual tooling | AutoGen Studio (no-code builder) | LangGraph Studio (graph visualizer + debugger) |
| Multi-agent topology | Group chat, two-agent, custom | Subgraphs, supervisor nodes, parallel branches |
| LLM providers | OpenAI, Azure, Anthropic, local models | Any LangChain-compatible model |
| Cloud offering | Azure AI Foundry integration | LangGraph Cloud (managed deployment) |
| Learning curve | Medium — conversation model takes practice | Higher — graph API requires upfront design |

AutoGen v0.4 is a near-complete rewrite. The AssistantAgent and UserProxyAgent pattern still exists, but the underlying runtime changed from synchronous message passing to an event-driven actor model. Teams migrating from v0.2 should budget significant refactoring time.

LangGraph has reached production stability. The API has been consistent across recent releases, and LangGraph Cloud provides managed hosting with built-in observability via LangSmith. See LangChain vs LangGraph for how LangGraph fits within the broader LangChain ecosystem.


The framework you choose determines how easily you can debug, scale, and guarantee agent behavior — the wrong choice compounds over time.

You are building a multi-agent system. A single agent with 20 tools degrades in quality — too many options, bloated context, confused reasoning. You split the work across specialized agents. Now you need a coordination layer.

The wrong framework choice creates problems that compound over time:

| Scenario | Wrong Choice | What Goes Wrong | Right Choice |
| --- | --- | --- | --- |
| Automated research report pipeline | AutoGen | Non-deterministic execution order; hard to guarantee consistent output structure | LangGraph |
| Customer support with mandatory human escalation at billing step | AutoGen | No clean interrupt-and-resume mechanism; state lost between human handoffs | LangGraph |
| Collaborative code debugging with back-and-forth iteration | LangGraph | Rigid graph structure fights the natural iterate-until-passing flow | AutoGen |
| Open-ended scientific literature review with agent debate | LangGraph | Over-engineering a workflow that benefits from emergent coordination | AutoGen |
| Financial document processing with audit trail requirement | AutoGen | LLM-selected speaker order creates compliance gaps | LangGraph |
| Rapid research prototype with exploratory workflow | LangGraph | Graph design overhead slows exploration; AutoGen ships faster | AutoGen |

The pattern: if you can draw a complete flowchart of your workflow before writing code, use LangGraph. If the coordination logic should emerge from agent interaction and you cannot fully pre-specify it, AutoGen is the faster path.


AutoGen models agents as conversation participants in a managed group chat; LangGraph models them as nodes in a compiled state graph — each architecture produces fundamentally different debugging and execution guarantees.

AutoGen’s primary building blocks in the v0.4 API:

AssistantAgent — an LLM-powered agent defined by a system message and optional tool registrations. It receives messages and produces responses.

UserProxyAgent — represents the human or an automated proxy. It can execute code, provide input, and initiate conversations. In automated pipelines, human_input_mode="NEVER" runs without interruption.

GroupChat + GroupChatManager — the coordination layer. The GroupChatManager holds the conversation history and, on each turn, calls an LLM to select the next speaker from the available agents. This speaker selection is the source of both AutoGen’s flexibility and its non-determinism.

The mental model: a conference call where a moderator (the GroupChatManager) decides who speaks next based on what has been said so far. Smart, adaptive — but non-reproducible.

LangGraph’s building blocks:

State — a typed Python dictionary (using TypedDict or Pydantic) that holds all information flowing through the graph. Every node reads from and writes to this shared state.

Nodes — Python functions (sync or async) that receive the current state and return a state update. A node can call an LLM, execute a tool, run validation logic, or invoke another agent.

Edges — connections between nodes. Normal edges always fire. Conditional edges call a routing function that returns the next node name. This is where branching logic lives.

Compiled Graph — after defining nodes and edges, you call .compile(). The compiled graph is a callable that accepts an initial state and returns a final state. You can attach a checkpointer at compile time for persistence.

The mental model: a directed graph that you draw before running. Every execution path is explicit. Nothing happens that you did not define.
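
The execution semantics described above can be sketched in plain Python. This is not the LangGraph API, just a minimal model of what "compiled graph" means: nodes are functions over shared state, edges are a lookup table, and the run loop follows them until a terminal marker.

```python
# Minimal sketch of the compiled-graph execution model (not the real
# LangGraph API): deterministic node order, shared state, explicit edges.
END = "__end__"

def research_node(state):
    return {**state, "research": f"facts about {state['topic']}"}

def write_node(state):
    return {**state, "article": f"article using: {state['research']}"}

nodes = {"researcher": research_node, "writer": write_node}
edges = {"researcher": "writer", "writer": END}

def run(entry, state):
    current, path = entry, []
    while current != END:
        path.append(current)           # every step is observable
        state = nodes[current](state)
        current = edges[current]       # transition is a lookup, not an LLM call
    return state, path

final, path = run("researcher", {"topic": "checkpointing"})
# path is always ["researcher", "writer"]; only the LLM content varies in a real run
```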

For a deeper look at building production graphs, see the LangGraph multi-agent patterns guide and the agentic design patterns reference.


The same task — a research-then-write pipeline — implemented in both frameworks.

import os

from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

llm_config = {
    "model": "gpt-4o",
    "api_key": os.environ["OPENAI_API_KEY"],
}

# Define agents as conversation participants
researcher = AssistantAgent(
    name="researcher",
    system_message=(
        "You research topics thoroughly. Always provide specific facts, "
        "cite sources where possible, and respond with ONLY research findings."
    ),
    llm_config=llm_config,
)

writer = AssistantAgent(
    name="writer",
    system_message=(
        "You write clear, engaging technical articles. "
        "Wait for research to be provided, then write a complete article. "
        "End your message with TERMINATE when the article is complete."
    ),
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""),
    code_execution_config=False,  # disable code execution for this task
)

group_chat = GroupChat(
    agents=[user_proxy, researcher, writer],
    messages=[],
    max_round=6,  # cap at 6 turns to control cost
)

manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

# Initiate — manager decides speaker order from here
result = user_proxy.initiate_chat(
    manager,
    message="Research LangGraph's checkpointing mechanism and write a 400-word article.",
)

# Extract the article from conversation history
article = result.chat_history[-2]["content"]  # writer's last message before TERMINATE

What you get: The manager will typically call the researcher first, then the writer. But it is not guaranteed. On a different run, the LLM might decide the writer should ask a clarifying question first. The max_round=6 cap is your safety valve against runaway conversations.

import os
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# 1. Define typed state
class PipelineState(TypedDict):
    topic: str
    research: str
    article: str

llm = ChatOpenAI(model="gpt-4o", api_key=os.environ["OPENAI_API_KEY"])

# 2. Define nodes as pure functions
def research_node(state: PipelineState) -> PipelineState:
    response = llm.invoke([
        SystemMessage(content="You research topics thoroughly. Provide specific facts and cite sources."),
        HumanMessage(content=f"Research this topic in detail: {state['topic']}"),
    ])
    return {"research": response.content}

def write_node(state: PipelineState) -> PipelineState:
    response = llm.invoke([
        SystemMessage(content="You write clear, engaging technical articles."),
        HumanMessage(content=(
            f"Write a 400-word article about: {state['topic']}\n\n"
            f"Use this research as your source material:\n{state['research']}"
        )),
    ])
    return {"article": response.content}

# 3. Build the graph
builder = StateGraph(PipelineState)
builder.add_node("researcher", research_node)
builder.add_node("writer", write_node)

# 4. Define edges — execution order is explicit
builder.set_entry_point("researcher")
builder.add_edge("researcher", "writer")
builder.add_edge("writer", END)

# 5. Compile — graph is now a callable
graph = builder.compile()

# 6. Invoke with initial state
result = graph.invoke({"topic": "LangGraph checkpointing mechanism", "research": "", "article": ""})
article = result["article"]

What you get: The researcher node always runs first. The writer node always runs second. The article is always in result["article"]. Run this a thousand times and the execution path never changes. The only variability is the LLM’s output content — which is what you want.

from langgraph.checkpoint.sqlite import SqliteSaver

# Attach a checkpointer at compile time
checkpointer = SqliteSaver.from_conn_string(":memory:")  # or a file path for persistence
graph = builder.compile(checkpointer=checkpointer)

# Each run gets a thread ID for state isolation
config = {"configurable": {"thread_id": "run-001"}}
result = graph.invoke(
    {"topic": "LangGraph checkpointing", "research": "", "article": ""},
    config=config,
)

# Resume a failed run — graph replays from last saved checkpoint
result = graph.invoke(None, config=config)  # None = resume, not restart

AutoGen has no equivalent built-in mechanism. If an AutoGen run fails mid-conversation, you restart from scratch. For long-running pipelines or workflows that require human approval mid-execution, LangGraph’s checkpointing is a hard advantage with no AutoGen workaround.


AutoGen’s conversation-driven coordination and LangGraph’s explicit state graph model produce distinct trade-offs in determinism, observability, and cost efficiency.

AutoGen vs LangGraph — Framework Architecture

AutoGen
Conversation-driven — emergent coordination

Strengths:

  • Natural fit for iterative, back-and-forth agent collaboration
  • First-class code execution via UserProxyAgent sandbox
  • AutoGen Studio provides a visual no-code workflow builder
  • Microsoft-backed with native Azure AI Foundry integration
  • Teachable agents retain knowledge across sessions
  • Fast prototyping — group chat up and running in ~30 lines

Trade-offs:

  • Non-deterministic speaker selection makes behavior hard to reproduce
  • No built-in checkpointing — failed runs restart from scratch
  • High conversation token overhead — every agent reads full history each turn

LangGraph
Graph-based state machines — deterministic pipelines

Strengths:

  • Deterministic execution — every path is explicitly defined in the graph
  • Built-in checkpointing with SQLite, PostgreSQL, and Redis backends
  • First-class human-in-the-loop via interrupt() with state persistence
  • Token-level and node-level streaming out of the box
  • LangGraph Studio provides a real-time graph visualizer and debugger
  • Subgraphs enable modular, composable agent architectures

Trade-offs:

  • Higher learning curve — graph design requires upfront architecture work
  • More boilerplate for simple use cases that AutoGen handles in one call

Verdict: Use AutoGen when agents need to iterate and negotiate dynamically. Use LangGraph when execution paths must be deterministic, auditable, and resumable.

Use AutoGen when: research workflows, code debugging loops, exploratory prototypes.
Use LangGraph when: production pipelines, compliance workflows, human-in-the-loop systems.

LangGraph’s deterministic execution model gives it a consistent advantage across debugging, persistence, human-in-the-loop, streaming, and token cost at production scale.

AutoGen debugging means reading conversation transcripts. When something goes wrong, you have a chat log. You can see what each agent said. What you cannot easily see: why the GroupChatManager selected a particular agent, why the conversation took 8 rounds instead of 4, or which turn introduced a reasoning error. The non-determinism makes it hard to reproduce failures in a controlled way.

LangGraph debugging means inspecting state transitions. LangGraph Studio shows you the graph structure, highlights the active node in real-time, and lets you inspect the state at every checkpoint. Because execution is deterministic, you can reproduce a failure by replaying the exact same initial state. LangSmith integration provides full distributed tracing across every LLM call within the graph.

AutoGen has no built-in state persistence between runs. If your Python process crashes mid-conversation, the state is lost. Teachable agents can persist learned facts to an external store, but this is a different capability — it stores knowledge, not execution progress.

LangGraph checkpoints state after every node execution. If a run fails at node 7 of 10, you resume from node 7 — not from node 1. For long workflows (document processing pipelines, multi-day research tasks), this is not a nice-to-have; it is a production requirement. The checkpointer is configurable: SQLite for development, PostgreSQL or Redis for production scale.

AutoGen human-in-the-loop uses UserProxyAgent with human_input_mode="ALWAYS" or "TERMINATE" (the third mode, "NEVER", disables input entirely). The agent pauses and waits for terminal input. This works for local scripts. In web applications, you need to build your own queueing and resumption mechanism.

LangGraph human-in-the-loop uses interrupt(). The graph pauses at a defined node, persists its state via the checkpointer, and returns control to the caller. The calling application can display a review interface, collect approval, and resume the graph with Command(resume=approved_value). State is preserved across the pause. This maps directly to production patterns: asynchronous approval queues, webhook callbacks, and UI review interfaces.
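
The pause-and-resume flow can be simulated in a few lines of plain Python. The names below (checkpoints, run_until_review, resume) are illustrative stand-ins, not LangGraph APIs; the point is that the pending state survives the pause and the reviewer's decision flows back into it.

```python
# Illustrative pause/resume sketch (plain Python, not the LangGraph API):
# state is persisted at the pause point, control returns to the caller,
# and the run resumes later with the reviewer's decision merged in.
checkpoints = {}  # thread_id -> saved state; stands in for a real checkpointer

def run_until_review(thread_id, state):
    state = {**state, "draft": f"report on {state['topic']}"}
    checkpoints[thread_id] = state         # persist before pausing
    return {"status": "awaiting_review"}   # control returns to the caller

def resume(thread_id, approved):
    state = checkpoints.pop(thread_id)     # reload the pending state
    state["stored"] = approved             # reviewer's decision flows back in
    return state

pending = run_until_review("run-001", {"topic": "billing dispute"})
# ...hours later, the reviewer approves via the web UI...
final = resume("run-001", approved=True)
```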

AutoGen streams individual agent messages. You see output token by token from each agent response.

LangGraph supports three streaming modes simultaneously: values (full state after each node), updates (only the diff each node produces), and messages (token-level streaming for LLM calls within nodes). For production UIs showing progress indicators and live output, LangGraph’s streaming model is substantially richer.
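
The difference between the updates and values modes can be sketched as follows. This is a plain-Python illustration of the shapes each mode yields, not a call into the real graph.stream:

```python
# Sketch of the "values" vs "updates" streaming shapes (not the real API):
# "updates" yields only each node's diff; "values" yields full state snapshots.
def stream(nodes, state, mode):
    for name, fn in nodes:
        diff = fn(state)
        state = {**state, **diff}
        yield {name: diff} if mode == "updates" else dict(state)

nodes = [
    ("researcher", lambda s: {"research": "findings"}),
    ("writer", lambda s: {"article": "draft"}),
]

updates = list(stream(nodes, {"topic": "t"}, mode="updates"))
# each item is one node's diff, useful for progress indicators
values = list(stream(nodes, {"topic": "t"}, mode="values"))
# each item is the full state after that node ran
```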

Token efficiency matters at production volumes. AutoGen’s group chat model is inherently expensive: every agent sees the full conversation history on every turn. A 10-round group chat with 4 agents can consume 40,000–80,000 tokens per run.

LangGraph nodes receive only the state they need. A write node gets the research output in state — not the entire history of how the research was produced. This targeted state passing keeps context windows smaller and costs lower at the same task complexity.
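
A back-of-the-envelope model makes the scaling difference concrete. The constants below are illustrative assumptions, not measurements: every group-chat turn re-reads the full history, so input tokens grow quadratically with turn count, while a graph node reads only a few state fields.

```python
# Illustrative token model (assumed constants, not measurements).
TOKENS_PER_MESSAGE = 100  # assumed average tokens per message or state field

def group_chat_input_tokens(turns):
    # Turn i re-reads all i previous messages: quadratic growth.
    return sum(i * TOKENS_PER_MESSAGE for i in range(1, turns + 1))

def graph_input_tokens(steps, fields_per_node=2):
    # Each node reads only a couple of state fields: linear growth.
    return steps * fields_per_node * TOKENS_PER_MESSAGE

chat_tokens = group_chat_input_tokens(40)   # 10 rounds x 4 agents = 40 turns
graph_tokens = graph_input_tokens(10)       # a 10-node pipeline
# chat_tokens lands in the same order as the 40,000-80,000 figure quoted above
```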

| Scenario | AutoGen (est. cost/run) | LangGraph (est. cost/run) |
| --- | --- | --- |
| 3-step pipeline, simple | $0.08–0.15 | $0.04–0.08 |
| 5-agent coordination, 10 rounds | $0.40–0.80 | $0.12–0.25 |
| Long-running pipeline (>20 steps) | $1.50–3.00+ | $0.30–0.60 |
| With human approval checkpoint | No native support | $0.15–0.30 per run |

Cost estimates use GPT-4o pricing as of March 2026. Actual costs vary by model, task complexity, and output length.


The right framework depends on whether your workflow benefits from emergent conversation dynamics (AutoGen) or requires deterministic, auditable execution paths (LangGraph).

Choose AutoGen when:

  • Agents need to negotiate, debate, or iteratively refine work through multi-turn conversation
  • You are building a code generation and debugging system where the iteration count is unpredictable
  • Rapid prototyping speed matters more than execution guarantees — you want a working demo in an afternoon
  • Your team is in the Azure ecosystem and wants native integration with Azure AI Foundry
  • The workflow is genuinely open-ended and emergent coordination adds value
  • You need first-class code execution with sandbox isolation via UserProxyAgent

Choose LangGraph when:

  • Execution paths must be deterministic and reproducible for audit or compliance purposes
  • The workflow requires human approval at specific steps with state persistence across the pause
  • You need checkpointing and resume-on-failure for long-running pipelines
  • You are building a production system where debugging, observability, and cost control are priorities
  • The workflow has complex conditional branching that must behave identically on every run
  • Token efficiency matters — the graph’s targeted state passing reduces per-run costs significantly
| Requirement | AutoGen | LangGraph |
| --- | --- | --- |
| Deterministic execution order | No | Yes |
| Built-in checkpointing | No | Yes |
| Human-in-the-loop with state persistence | Limited | First-class |
| Token-level streaming | Partial | Full |
| Conditional branching in code | No (LLM-selected) | Yes (explicit edges) |
| Graph visualization + debugger | AutoGen Studio | LangGraph Studio |
| Production observability | Basic | LangSmith integration |
| Rapid prototyping speed | Faster | Slower (graph design overhead) |
| Emergent agent coordination | Natural | Requires explicit design |
| Code execution sandbox | First-class | Via tool nodes |
| Azure/Microsoft ecosystem fit | Excellent | Neutral |
| Cost at scale | Higher | Lower |

AutoGen and LangGraph are not mutually exclusive. A practical pattern:

Use LangGraph as the outer orchestration layer — defining the overall workflow, managing checkpoints, and handling human-in-the-loop approval steps. Inside specific LangGraph nodes, use AutoGen group chats for subtasks that benefit from emergent multi-agent iteration.

Example: a LangGraph pipeline for automated security auditing. The outer graph defines fixed phases (scan, analyze, report, review). The “analyze” node launches an AutoGen group chat where security agents debate findings and vote on severity ratings. The conversation result is captured and stored in LangGraph state. The outer graph continues deterministically from there.
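
A minimal sketch of that hybrid shape, with the AutoGen group chat stubbed out. Here run_severity_debate stands in for a real initiate_chat call, and the node and field names are illustrative:

```python
# Hybrid sketch: a deterministic outer pipeline whose "analyze" step
# delegates to an emergent multi-agent debate, then captures the result
# in state. run_severity_debate stubs a real AutoGen group chat.
def run_severity_debate(findings):
    # In the real system: user_proxy.initiate_chat(manager, message=...)
    return {"severity": "high", "votes": 3}

def scan_node(state):
    return {**state, "findings": ["open port 22", "outdated TLS"]}

def analyze_node(state):
    verdict = run_severity_debate(state["findings"])  # emergent subtask
    return {**state, "verdict": verdict}              # captured in state

def report_node(state):
    return {**state, "report": f"severity={state['verdict']['severity']}"}

state = {}
for node in (scan_node, analyze_node, report_node):  # fixed outer phases
    state = node(state)
```

The debate's outcome is just another state field once the node returns, so the outer pipeline stays deterministic regardless of how many rounds the inner chat took.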


Framework comparison questions are common in GenAI engineering interviews. Interviewers want to see that you match technology to requirements — not that you have a personal preference.

Q: “You’re building a compliance document processing system that requires a human legal reviewer to approve each output before it’s stored. AutoGen or LangGraph?”

Weak answer: “LangGraph, because it’s more production-ready.”

Strong answer: “LangGraph is the clear choice here, and the reason is checkpointing combined with interrupt(). The workflow needs to pause after the agent produces output, persist the pending state, wait for an asynchronous human review — potentially hours later — then resume with the reviewer’s decision. LangGraph’s checkpointer handles exactly this: the graph pauses at the review node, serializes state to PostgreSQL, and the calling application polls or receives a webhook. When the reviewer approves, the application calls graph.invoke(Command(resume=True), config=thread_config) and the graph resumes from the saved state. AutoGen has no equivalent mechanism — it blocks synchronously waiting for terminal input, which is not viable in a web application context.”


Q: “When would you prefer AutoGen over LangGraph for a production system?”

Weak answer: “When I need agents to talk to each other.”

Strong answer: “The case for AutoGen in production is narrower than marketing suggests, but it exists. Consider a live collaborative code debugging assistant: an error diagnostics agent reads the stack trace, a codebase search agent finds relevant files, a hypothesis agent proposes fixes, and a test runner agent executes them. The iteration count is unpredictable — the fix might work on the first try or the tenth. AutoGen’s group chat handles this naturally: each agent responds to the latest message, the conversation builds on itself, and the GroupChatManager adapts speaker selection based on progress. Encoding this as a LangGraph graph would require a loop node with complex conditional edges that essentially re-implements the conversation model with more boilerplate. I’d use AutoGen here, but with max_round set conservatively, Docker-based code execution for security, and a cost circuit breaker.”


Q: “How would you migrate an AutoGen system to LangGraph?”

Strong answer: “It requires rethinking the architecture, not just porting code. The migration map: each AutoGen AssistantAgent becomes a LangGraph node — a Python function with the same system prompt and tools, but receiving state rather than a conversation history. The GroupChat’s message list becomes LangGraph’s state schema — a TypedDict with fields for each piece of information agents need to share. The GroupChatManager’s speaker selection becomes conditional edges — you make the routing logic explicit rather than delegating it to an LLM. The main thing you lose is emergent coordination — the unpredictable iteration that sometimes produces creative solutions. What you gain is reproducibility, checkpointing, and dramatically better observability. For most production systems, that’s the right trade.”
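
The first step of that migration map can be sketched directly. The system prompt carries over verbatim; only the calling convention changes from conversation history to typed state (llm_call is a stubbed stand-in for a real model client):

```python
# Migration sketch: an AutoGen AssistantAgent's system prompt becomes
# the prompt of a plain node function over typed state.
from typing import TypedDict

RESEARCHER_PROMPT = "You research topics thoroughly."  # reused verbatim

class PipelineState(TypedDict, total=False):
    topic: str
    research: str

def llm_call(system, user):
    # Stand-in for a real model call, e.g. client.chat.completions.create
    return f"[{system}] -> findings on {user}"

def research_node(state: PipelineState) -> PipelineState:
    # Reads only state["topic"], not a full conversation transcript
    return {"research": llm_call(RESEARCHER_PROMPT, state["topic"])}

update = research_node({"topic": "checkpointing"})
```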


Q: “What is the difference between AutoGen’s GroupChatManager and LangGraph’s supervisor node?”

Strong answer: “Both coordinate multiple agents, but the mechanism is opposite. AutoGen’s GroupChatManager uses an LLM call on every turn to select the next speaker from available agents — the selection decision itself consumes tokens and introduces non-determinism. LangGraph’s supervisor pattern uses a node that calls an LLM to decide routing, but that decision is captured in state as a string (the next node name), and LangGraph’s conditional edge executes the routing deterministically based on that string. The key difference: LangGraph’s supervisor decision is captured and logged in state; AutoGen’s speaker selection is ephemeral. You can inspect LangGraph’s routing decisions. You cannot easily audit AutoGen’s. For compliance-sensitive applications, that distinction matters.”
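
The auditability point can be shown in a few lines of plain Python (not the LangGraph API): the supervisor's choice is written into state as a string, and the conditional edge dispatches on that string deterministically, so every routing decision is left on the record.

```python
# Supervisor sketch: the routing decision is recorded in state, then a
# deterministic conditional edge dispatches on it. The recorded string
# is what makes the decision auditable afterwards.
def supervisor_node(state):
    # A real supervisor would ask an LLM; here the choice is stubbed.
    choice = "researcher" if not state.get("research") else "writer"
    return {**state, "next": choice,
            "route_log": state.get("route_log", []) + [choice]}

def route(state):
    return state["next"]  # conditional edge: pure lookup, no LLM call

state = supervisor_node({"topic": "audit"})
assert route(state) == "researcher"

state = supervisor_node({**state, "research": "done"})
# state["route_log"] now records every routing decision made so far
```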


AutoGen and LangGraph represent two fundamentally different answers to the same problem. AutoGen is a conversation engine. LangGraph is a state machine. Both coordinate agents. Neither is universally better.

Use AutoGen when you need emergent, iterative agent coordination — especially for research workflows, code debugging loops, and prototypes where speed matters more than execution guarantees.

Use LangGraph when you need deterministic, auditable, production-grade agent pipelines — especially when checkpointing, human approval, streaming, and cost control are hard requirements.

For most teams shipping production GenAI applications in 2026, LangGraph’s guarantees are worth the additional upfront design work. For teams exploring and iterating quickly on research workflows, AutoGen remains the faster path.


Last verified: March 2026. Both AutoGen and LangGraph are under active development. Verify API signatures, pricing, and version compatibility against official documentation before building production systems.

Frequently Asked Questions

What is the difference between AutoGen and LangGraph?

AutoGen uses conversation-driven multi-agent coordination where agents communicate by passing messages in group chats, with an LLM-powered manager selecting speakers. LangGraph uses graph-based state machines where you define nodes (functions), edges (transitions), and state schemas to create deterministic agent workflows. AutoGen excels at open-ended multi-agent discussion, while LangGraph excels at controlled, production-grade agent pipelines.

Which is better for production: AutoGen or LangGraph?

LangGraph is generally better for production. It offers deterministic execution paths via compiled graphs, built-in checkpointing and persistence, human-in-the-loop interrupts, and streaming support. AutoGen's conversation-driven model introduces non-determinism since an LLM selects the next speaker, making it harder to debug and guarantee behavior in production.

Is AutoGen still maintained in 2026?

Yes. Microsoft released AutoGen v0.4, a major rewrite with a new API. The core architecture shifted to an event-driven system with improved async support. AutoGen Studio provides a visual builder for agent workflows. However, the v0.4 migration broke backward compatibility, so teams on v0.2 need significant refactoring to upgrade.

Can I migrate from AutoGen to LangGraph?

Yes, but it requires rethinking your architecture. Each AutoGen agent becomes a LangGraph node, message passing becomes state updates, and the GroupChatManager's speaker selection becomes conditional edges. You gain determinism and observability but lose the emergent conversation dynamics. See the LangChain vs LangGraph guide for more on LangGraph's architecture.

How does state management differ between AutoGen and LangGraph?

AutoGen manages state through conversation history — every agent reads the full message log on each turn, which grows with every interaction. LangGraph uses a typed state schema (TypedDict or Pydantic) where each node reads only the fields it needs and writes targeted updates. LangGraph's approach keeps context windows smaller and costs lower at scale.

What is the learning curve for AutoGen vs LangGraph?

AutoGen has a medium learning curve — the conversation model is intuitive but understanding speaker selection and termination conditions takes practice. LangGraph has a higher learning curve because you must design the graph structure, define typed state schemas, and configure edges before writing agent logic. However, LangGraph's upfront design investment pays off in debuggability and production reliability.

Can AutoGen and LangGraph work together?

Yes. A practical pattern uses LangGraph as the outer orchestration layer for deterministic workflow management, checkpoints, and human-in-the-loop steps. Inside specific LangGraph nodes, you can launch AutoGen group chats for subtasks that benefit from emergent multi-agent iteration. This hybrid approach combines LangGraph's reliability with AutoGen's conversational flexibility.

How do AutoGen and LangGraph handle multi-agent coordination?

AutoGen coordinates agents through a GroupChatManager that uses an LLM call to select the next speaker on every turn — coordination is emergent and adaptive but non-deterministic. LangGraph coordinates agents through explicit graph edges, conditional routing functions, and subgraphs — every execution path is defined in code. For a broader comparison including CrewAI, see the agentic frameworks overview.

Which framework is more cost-efficient at scale?

LangGraph is more cost-efficient. AutoGen's group chat model requires every agent to read the full conversation history on each turn, consuming 40,000 to 80,000 tokens in a 10-round, 4-agent chat. LangGraph nodes receive only the state fields they need, keeping context windows smaller. For a 5-agent coordination task, AutoGen costs an estimated $0.40-$0.80 per run versus LangGraph's $0.12-$0.25.

How do I debug issues in AutoGen vs LangGraph?

AutoGen debugging relies on reading conversation transcripts to trace what each agent said, but reproducing failures is difficult due to non-deterministic speaker selection. LangGraph provides state-transition inspection at every checkpoint, real-time graph visualization in LangGraph Studio, and distributed tracing via LangSmith. LangGraph's deterministic execution means you can reproduce failures by replaying the same initial state. Learn more about how AI agents work to understand the underlying coordination patterns.