AI EMERGENCE 6 March 2026

Intent of Thought

You're picking the wrong thinking structure. You don't even know it.


You have at least 14 different ways to make an LLM reason. Chain-of-Thought. Tree-of-Thought. Graph-of-Thought. Skeleton, Buffer, Diagram, Recursion, and a dozen more. The question nobody is asking: how do you pick which one?

The Problem: Structure Without Purpose

Right now, topology selection in LLM reasoning is a coin toss dressed up as engineering.

A researcher picks Chain-of-Thought because the original paper used it. A developer picks Tree-of-Thought because it sounds more sophisticated. A team defaults to whatever their framework supports. None of them ask the question that should come first: what is the purpose of this reasoning?

This is not a minor oversight. It is the equivalent of a surgeon selecting a scalpel before diagnosing the patient. The XoT landscape (the collective term for all X-of-Thought methods) has been obsessed with how to reason for four years, while ignoring why the reasoning is happening in the first place.

The Gap Nobody Sees

The closest attempts, papers like SWI (Speaking with Intent), ICoT (Intention Chain-of-Thought), and ARR (Analyzing, Retrieving, Reasoning), add intent at the step level. They tell the model what to think about at each step. But they all operate inside a fixed topology. None of them challenge the topology itself. It is like giving a driver turn-by-turn directions while never questioning whether they should be driving, flying, or taking the train.

After synthesising 6 external research reports, 2 SPAR (Structured Persona-Argumentation for Reasoning) deep-analysis verdicts, and 40 years of BDI (Beliefs-Desires-Intentions) agent theory, a clear gap emerged:

Step-level intent exists. Topology-level intent governance does not.

The Topology-Governance Gap

No paper, no framework, no benchmark proposes that the purpose of a reasoning task should determine which structure of reasoning to deploy. The research calls this the “Topology-Governance Gap.”

Six distinct levels exist at which intent operates in reasoning. The first five are addressed by prior work. The sixth, topology governance, is the gap:

The Topology-Governance Gap: six levels of intent in reasoning, and where the gap lives

Level | Prior Work | What It Governs | Status
1. Step | SWI | Individual step quality | Covered
2. Domain | ICoT | Code generation strategy | Covered
3. Retrieval | ARR | Pre-retrieval question analysis | Covered
4. Agent | BDI | External actions, commitments | Covered
5. Training | RLHF / DPO | Model behavioural alignment | Covered
6. Topology | IoT (this paper) | Which reasoning structure to deploy | THE GAP

No prior work governs topology selection based on purpose. IoT fills this gap with three moments: Purpose, Anti-Purpose, Success Signal.

The IoT Solution

Intent of Thought (IoT) is a pre-reasoning checkpoint with three moments:

Moment | Question It Answers | Example
Purpose | What are we trying to achieve? | "Map causal relationships between hospital readmission factors"
Anti-Purpose | What would make this reasoning worthless? | "Treating factors as independent when they interact"
Success Signal | How will we know when reasoning has achieved its purpose? | "A relationship map with bidirectional dependencies and feedback loops"

Given that specification, the topology becomes a consequence of the purpose: interconnected causal factors with feedback loops demand a Graph-of-Thought, not a Chain. A sequential proof demands a Chain, not a Tree. The purpose tells you.

The third moment, the Anti-Purpose, deserves attention. It is not the negation of the Purpose. It is the specific way reasoning can fail while appearing to succeed. “Treating correlated factors as independent” looks like valid reasoning, produces clean output, and is completely wrong. The Anti-Purpose catches exactly this kind of failure.
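The three-moment checkpoint can be made concrete in code. The sketch below is illustrative, not from the IoT paper: the `IoTSpec` dataclass and the keyword-based `select_topology` heuristic are assumptions chosen to show that, once the purpose is written down, the topology falls out of it. A real selector would use an LLM or a richer taxonomy.

```python
from dataclasses import dataclass

@dataclass
class IoTSpec:
    """A three-moment Intent of Thought specification (hypothetical structure)."""
    purpose: str         # what the reasoning is trying to achieve
    anti_purpose: str    # the failure mode that looks like success
    success_signal: str  # how we know the purpose has been met

def select_topology(spec: IoTSpec) -> str:
    """Toy heuristic: derive a topology from cues in the purpose.

    Keyword matching stands in for real purpose analysis; it only
    illustrates that topology follows purpose, not how to detect it.
    """
    p = spec.purpose.lower()
    if any(cue in p for cue in ("causal", "feedback", "interdepend")):
        return "graph-of-thought"  # interacting factors need a graph
    if any(cue in p for cue in ("explore", "compare", "alternative")):
        return "tree-of-thought"   # branching search over options
    return "chain-of-thought"      # default: sequential derivation

spec = IoTSpec(
    purpose="Map causal relationships between hospital readmission factors",
    anti_purpose="Treating factors as independent when they interact",
    success_signal="A relationship map with bidirectional dependencies and feedback loops",
)
print(select_topology(spec))  # graph-of-thought
```

The hospital-readmission spec from the table above resolves to a graph, exactly as the running example argues: "causal" in the purpose signals interconnected factors, which a chain cannot represent.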

What This Means

Three practical shifts follow from this:

1. Ask “why” before “how.” Before selecting a reasoning topology (or accepting the default), write a three-line IoT specification: Purpose, Anti-Purpose, Success Signal. This takes 30 seconds and prevents hours of misaligned reasoning.

2. Watch for reasoning drift. The IoT checkpoint is not a one-time preamble. It is a drift-detection mechanism. During long reasoning chains, compare the current trajectory against the stated purpose. SWI’s experiments show that explicit intent improves reasoning outcomes. IoT extends that principle from individual steps to the entire reasoning architecture.
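A drift check can be as simple as asking how much of the stated purpose a reasoning step still touches. The function below is a crude stand-in for a real semantic comparison (embedding similarity or an LLM judge); the names and the keyword-overlap metric are my assumptions, used only to show the shape of the checkpoint.

```python
def drift_score(purpose: str, step: str) -> float:
    """Fraction of purpose keywords absent from a reasoning step.

    0.0 means every purpose term appears in the step; 1.0 means none do.
    A placeholder metric: real systems would compare meanings, not words.
    """
    stop = {"the", "a", "of", "and", "to", "between", "with"}
    purpose_terms = {w for w in purpose.lower().split() if w not in stop}
    step_terms = set(step.lower().split())
    missing = purpose_terms - step_terms
    return len(missing) / len(purpose_terms)

purpose = "map causal relationships between hospital readmission factors"
on_track = "readmission factors show causal links: staffing affects discharge quality"
drifted = "hospitals were first established in medieval monasteries"

print(drift_score(purpose, on_track) < drift_score(purpose, drifted))  # True
```

Run periodically against the stated purpose, a rising score is the signal to halt and re-anchor the chain rather than let it wander to a fluent but misaligned conclusion.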

3. Purpose is the missing variable. The XoT literature has mapped 14+ reasoning topologies. It has benchmarked them, combined them, and optimised them. What it has not done is ask why any particular topology should be selected for any particular task. Purpose (nokkam, நோக்கம்) is the variable that the entire landscape has skipped.

The Question

You already pick reasoning structures for your AI systems, whether you are building agents, writing prompts, or designing pipelines. You just do it implicitly, by habit or by default.

What would change if you made that selection explicit, with a stated purpose, a named failure mode, and a clear success signal?

The Intent of Thought repository and the full paper are open for inspection. The gap is real, the three moments are simple, and the question is whether you are willing to ask “why” before you ask “how.”

Tags: chain-of-thought, cognitive_architecture, intent-of-thought, spar_dialectic, synthetic_intelligence