Puzzles & Pieces
Theory of Planned Behaviour, or Theory of Plain Illusion?
Puzzle: Why does presenting the Theory of Planned Behaviour (TPB) as part of Behavioural Insights reveal that you missed the behavioural turn?
By Pelle Guldborg Hansen
I recently attended a conference on behaviour change. Here several academics and consultants positioned themselves within the field of Behavioural Insights (BI), while presenting work based on the Theory of Planned Behaviour (TPB) — interwoven, of course, with the now-obligatory hat-tip to Kahneman’s popularised version of dual process theory (DPT).
When I pointed out that TPB might not sit comfortably with DPT — and therefore not with BI either — since their psychological models conflict, the presenters looked genuinely puzzled. It was clear that they had no idea what I was getting at.
I was puzzled in turn — that they didn’t seem to know what they didn’t know.
Of course, there wasn’t time, nor was it the place, to lay out the pieces of the puzzle for them.
But now, this is.
What is the Theory of Planned Behaviour (TPB)?
Before we begin, let’s just remember what TPB is about.
The Theory of Planned Behaviour (TPB) was proposed by Icek Ajzen (Ajzen, 1985, 1991). It is a psychological model designed to predict and explain human behaviour.
It suggests that behaviour is primarily determined by behavioural intention, which in turn is influenced by three key factors:

Figure 1. Theory of Planned Behaviour. Based on the original model from Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds.), Action control: From cognition to behavior (pp. 11–39). Springer.
1. Attitude toward the behaviour: The degree to which performing the behaviour is positively or negatively valued.
2. Subjective norms: The perceived social pressure to perform or not perform the behaviour.
3. Perceived behavioural control: The perceived ease or difficulty of performing the behaviour, reflecting both past experience and anticipated obstacles.
In addition, TPB allows perceived behavioural control to influence behaviour directly, especially when actual control is imperfect (for example, when people feel so unsure about succeeding that they never actually try).
To illustrate: whether someone intends to start exercising is said to depend on how positively they view exercise (attitude), whether they feel social pressure to be fit (subjective norms), and whether they believe they can actually succeed (perceived behavioural control).
In practice, TPB is typically studied through self-report questionnaires, where participants are asked to describe their attitudes, perceived norms, sense of control, and intentions — with behaviour either self-reported later or inferred through follow-up.
On the surface, TPB thus offers a structured and appealing answer: behaviour, it tells us, is the product of intentions, shaped by attitudes, social expectations, and perceived control. Yet beneath this polished surface lies a deeper problem — one that stretches far beyond TPB itself.
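To make that structured answer concrete, the model’s causal chain can be rendered as a simple weighted sketch. This is purely illustrative: Ajzen specifies no fixed coefficients, so the variable names, Likert ranges, and weights below are my own assumptions; in actual TPB studies the weights would be estimated by regressing self-reported intention on the three survey scales.

```python
# Illustrative sketch of TPB's predictive structure.
# All weights and scale ranges are hypothetical, chosen for illustration only.
from dataclasses import dataclass


@dataclass
class TPBResponse:
    attitude: float         # 1-7 Likert: how positively the behaviour is valued
    subjective_norm: float  # 1-7 Likert: perceived social pressure
    pbc: float              # 1-7 Likert: perceived behavioural control


def predict_intention(r: TPBResponse,
                      w_att: float = 0.4,
                      w_norm: float = 0.3,
                      w_pbc: float = 0.3) -> float:
    """Intention as a weighted sum of the three antecedents."""
    return w_att * r.attitude + w_norm * r.subjective_norm + w_pbc * r.pbc


def predict_behaviour(intention: float, pbc: float,
                      w_int: float = 0.7, w_pbc: float = 0.3) -> float:
    """TPB also lets perceived control feed directly into behaviour."""
    return w_int * intention + w_pbc * pbc


r = TPBResponse(attitude=6.0, subjective_norm=4.0, pbc=5.0)
intention = predict_intention(r)              # 0.4*6 + 0.3*4 + 0.3*5 = 5.1
behaviour = predict_behaviour(intention, r.pbc)
```

Notice how much the sketch presupposes: that the three self-reported scores exist as stable quantities, and that behaviour follows additively from them. That presupposition is exactly what the pieces below call into question.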
Let’s gather the pieces. Like any good puzzle, the full picture won’t be clear at first. We’ll start by laying out four different pieces — each exposing a crack in TPB’s surface. Some may seem scattered at first, but by the end, they’ll click into place.
Piece 1: The Deeper Lineage of TPB
Although it often passes for common sense, the Theory of Planned Behaviour belongs to a lineage of thinking about human action as old as humans themselves — one that, quietly but persistently, shapes not only academic psychology but also much of professional practice.
The roots of rationality
When trying to make sense of human behaviour, we instinctively reach for a certain kind of explanation. We assume that people act because they want something, believe certain things about how to get it, and then set out to achieve it. Psychologists call this basic theory our Theory of Mind (ToM) — the hardwired ability to intuitively attribute mental states, like beliefs, desires, and intentions, to ourselves and others (Premack & Woodruff, 1978; Dennett, 1987). It is one of the great cognitive achievements of our species: it allows us to predict, explain, coordinate and work with accountability in our everyday behaviours with each other.
Further, in folk psychology, Theory of Mind (ToM) is combined with the Incorrigibility Thesis (ICT) — the idea that individuals have privileged, unquestionable access to their own mental states. Thus, not only are the mental states in question taken to exist; people’s reports about their beliefs, desires, and intentions are also treated as fundamentally reliable — both by themselves and by others.
The underlying logic of this theory — that action arises from desire and belief — was articulated with particular clarity by David Hume in the eighteenth century (Hume, 1739). Though he never formalised it in model form, Hume insisted that reason alone cannot motivate action; rather, behaviour flows from the interaction of a motivating desire and a guiding belief. If I desire warmth and believe that lighting a fire will provide it, then I am motivated to light the fire. Neither belief nor desire alone is enough; together they produce intention and action.
This view quietly challenged the prevailing ecclesiastical conception of rationality at the time. If behaviour depends on the interplay of desires and beliefs, then rationality might no longer be about discovering universal truths — but about maintaining internal coherence between what we want, what we believe, and how we act. In this sense, the intuitive structures captured by Theory of Mind became not just a way of explaining behaviour, but the very psychological roots of how rationality itself would come to be understood.
Hume’s insight marked a profound shift. Rationality was no longer about aligning one’s mind with an external, objective order of Reason. Instead, it became about internal consistency — coherence between desires, beliefs, and actions (Smith, 1994). In this sense, rationality became a subjective structure, rooted in the psychology of the individual. This framing deeply shaped not only philosophy but also the emerging social sciences for centuries to come.
The emergence of Rational Mentalism
Later philosophers of action — notably Donald Davidson (1963) and others — would formalise this structure into what is now known as the Desire–Belief–Action (DBA) model. In Davidson’s formulation, a primary reason for an action is the cause of the action itself. A primary reason consists of two elements — a pro-attitude (such as a desire, want, or commitment) and a belief about how acting will achieve the desired outcome. Together, these form the causal basis for intentional action.
By formalising the relation between pro-attitudes, beliefs, and action, DBA crystallised the idea that rational behaviour could be modelled as the internal consistency between what an agent values, what they believe about the world, and what they do. Its conceptual roots, however, run straight back to Hume.
One field that absorbed Hume’s model almost wholesale was economics. Classical economics formalised the DBA structure:
- Desires became preferences,
- Beliefs became expectations about outcomes, and
- Action became the outcome of a utility-maximising calculation.
The “rational agent” of Rational Choice Theory — Homo economicus — is, at root, a being whose behaviour flows from desires and beliefs operating according to internal consistency (Von Neumann & Morgenstern, 1944). Rational choice models simply translated Hume’s psychological structure into mathematical language.
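Hume’s fire example, run through this economic translation, can be sketched as a tiny expected-utility chooser: desires become a utility table, beliefs become probability estimates over outcomes, and action becomes whatever maximises the expectation. The outcomes, probabilities, and utilities below are invented purely to illustrate the structure.

```python
# Sketch of the DBA structure rendered in Rational Choice terms.
# Outcomes, probabilities, and utilities are invented for illustration.

def expected_utility(action, beliefs, utility):
    """Sum the utility of each outcome, weighted by its believed probability."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())


def choose(actions, beliefs, utility):
    """Homo economicus: pick the action that maximises expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))


utility = {"warm": 10.0, "cold": 0.0}            # desires as preferences
beliefs = {                                      # beliefs as expectations
    "light_fire": {"warm": 0.9, "cold": 0.1},
    "do_nothing": {"warm": 0.1, "cold": 0.9},
}

best = choose(["light_fire", "do_nothing"], beliefs, utility)  # "light_fire"
```

The point of the sketch is how faithfully the mathematics preserves Hume’s psychology: nothing in the calculation exists except desires (utilities) and beliefs (probabilities), internally consistent and jointly sufficient for action.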
At the same time, the Incorrigibility Thesis (ICT) lent methodological support to this approach. If individuals are assumed to have privileged access to their own mental states, then simply asking them about their attitudes, beliefs, and intentions should reveal why they act as they do — and how to change them. Self-reports became both the primary window into motivation and the chief guide for intervention design, embedding a deep confidence in the reliability of introspection and the tractability of intention — a confidence only to be reinforced by people’s tendency to cast their self-reports according to ToM.
This combination of rationality-based theorising and ICT-infused methodology — which I have labelled Rational Mentalism elsewhere (Hansen, 2025) — had profound practical consequences. If rational behaviour is the product of coherent desires and beliefs, then influencing behaviour becomes a matter of influencing these mental states. The primary tools became information campaigns to correct beliefs and attitudes, incentive structures to shift preferences, and hard regulation — bans and mandates — to constrain choice architectures when persuasion fails. Behavioural influence, under this view, is about aligning individual calculations with socially desired outcomes.
ToM, DBA, RCT, … TRA and TPB
Against this backdrop, it was almost inevitable that efforts to predict and influence behaviour would focus on reported mental states. In the 1970s, Martin Fishbein and Icek Ajzen (1975) formalised this approach through the Theory of Reasoned Action (TRA). Their model proposed that behavioural intention — what a person plans to do — is the best predictor of actual behaviour. And intention, in turn, arises from two main factors:
- Attitudes toward the behaviour (whether it is seen as good or bad), and
- Subjective norms (perceived social expectations).
TRA was, in essence, a formalisation of the DBA model, filtered through mid-century social psychology. However, in adapting DBA into a predictive framework, TRA narrowed the role of belief. Instead of encompassing the full set of an agent’s beliefs about the world, TRA focused almost exclusively on beliefs about the outcomes of specific behaviours and the expectations of others.
Like RCT in economics, it assumed that behaviour is the result of a rational planning process — albeit one that accounts for social pressures. But by reducing belief to a thin motivational variable, it left itself vulnerable when real-world complexity intervened.
This vulnerability became clear as TRA struggled to explain behaviours where people lacked full volitional control — for example, when environmental or personal barriers intervened, such as lack of transportation, time constraints, or unexpected health issues.
This was not a minor adjustment: it revealed a major blind spot in the original model. Behaviour could not be fully explained by intentions alone, because real-world constraints — physical, social, and emotional — often disrupted the translation of plans into actions.
To address this gap, Ajzen (1991) extended the model into the Theory of Planned Behaviour (TPB). TPB kept the structure of attitudes and subjective norms predicting intention, but added a third factor: perceived behavioural control — the individual’s belief about how easy or difficult it would be to perform the behaviour.
Yet despite this refinement, TPB still operates within a restricted conception of belief. It addresses beliefs about feasibility and control, but continues to leave aside the wider set of general beliefs — about the self, the world, and possibilities — that underpin genuine action.
As a result, the deeper structure remained strikingly intact:
- Desire (attitude) and belief (subjective norm and control) produce intention.
- Intention produces behaviour.
- Human action is fundamentally rational, planned, and consciously directed.
Of course, this deeper lineage of TPB is not as neat a river, flowing smoothly from past to present, as presented here. Still, recognising the common spine that connects ToM, DBA, RCT, TRA, and TPB helps illuminate why TPB feels so intuitively right — and why, at the same time, it risks reinforcing illusions about how human behaviour actually unfolds.
Piece 2: The Planning Illusion
We have now seen that at the heart of the Theory of Planned Behaviour lies a natural, intuitive and deceptively simple model: if we can shape what people think, want, and expect, we can shape what they intend — and thus predict what they will do. And to track this chain of influence, TPB turns to what feels most accessible: self-reports — trusting that the stories people tell about their attitudes, expectations, and control mirror the forces that truly shape their behaviour.
It is an elegant progression. But it is also an illusion.
The problem is not just that people sometimes fail to act on their intentions — a phenomenon widely recognised as the intention–action gap (Sheeran, 2002). The deeper concern is that intentions themselves are not stable, durable causes of behaviour. In practice, intentions often behave more like passing signals: briefly formed, quickly fading, and highly sensitive to context (Chater, 2018).
Intentions formed in one environment or mindset — hopeful resolutions at breakfast — may dissolve entirely by mid-afternoon in a crowded office or under stress. What we plan to do is not drawn from stable mental depths, but constructed on the fly, shaped by immediate context, fleeting motivation, and fragile attention (Chater, 2018).
Even when intentions are recalled later, they are often reconstructed rather than remembered — assembled into coherent narratives after the fact rather than retrieved from a stable store of mental commitments (Wilson, 2002). Further, behaviour does not flow smoothly from isolated intention. It emerges from noisy, layered cognitive processes where attention, memory, habit, inhibition, affect, and narrative self-concepts all compete for influence (Wood et al., 2002; Stanovich, 2009; Nisbett & Wilson, 1977).
This is why interventions based on shifting attitudes, norms, and intentions so often disappoint. Changing what people say they intend to do is relatively easy. Changing what they actually do — consistently, across contexts — is much harder. When interventions fail, the cause is rarely mysterious. It is that intention, once formed, cannot govern behaviour alone. It flickers, fragments, and is often overwhelmed by forces the TPB model only dimly acknowledges. Thus, the neat logic of TPB — from attitudes to norms to control to intention to behaviour — becomes less a map of action and more an illusion we construct after the fact — one that flatters our sense of rational planning, but obscures how behaviour actually unfolds.
Self-reports are not completely useless. In minimal cases — behaviours compacted in time and space, like lighting a fire when cold, reaching for a drink when thirsty — people can often give a true account of their reasons. But beyond such immediate and constrained actions, reliability fades fast. As behaviour stretches across time, is shaped by social pressures, or unfolds through layered cognitive processes, people struggle to remember, fail to track the true contextual triggers, and lose access even to their own transient intentions. What they offer instead are reconstructions: plausible, polished, and intuitive — but rarely faithful to the fragile dynamics that actually guided their behaviour.
Even in simple cases, the story may be more complicated than it appears: if you form an intention to light a fire, get distracted, and return to the task later, it is not the original intention that acts — but a new one, regenerated from the memory of the old.
Piece 3: When Survival Replaces Science — The Immunisation of TPB
Scientific theories are supposed to be bold enough to risk being wrong. A theory that cannot, even in principle, be shown to fail does not challenge us; it merely reflects what we already expect. It is this vulnerability, often associated with the philosophy of Karl Popper (1959), that gives scientific theories their power to evolve and to distinguish explanation from storytelling.
Of course, science rarely abandons a theory at the first sign of trouble. As Thomas Kuhn (1962) pointed out, anomalies are expected. They accumulate, they are patched, and only rarely do they lead to a wholesale revision of thought. The problem arises when a theory adapts so flexibly to every anomaly that it becomes immune to meaningful testing — growing not in depth, but in complexity.
The Theory of Planned Behaviour shows precisely this tendency. When predictions fail, the model is not rejected. Instead, it is expanded. Additional variables are introduced — moral norms, past behaviour, anticipated regret, self-identity. Moderators and interaction terms multiply. New pathways are proposed, while existing ones are reweighted or made conditional. Each empirical challenge becomes not a threat to the theory’s foundations, but an invitation to weave a more elaborate explanatory net.
In this way, TPB risks becoming what philosophers call an immunised theory: a model that protects itself from refutation not by refining its core assumptions, but by absorbing every anomaly into its expanding framework (Popper, 1959).
Yet the flexibility of TPB is not merely methodological. It also mirrors the intuitive habits of folk psychology. Just as we explain unexpected behaviour after the fact — “He must have intended…,” “She probably didn’t believe…,” “They were under pressure…” — so TPB lends itself to post hoc rationalisation. Its variables are so narratively resonant that they can always be assembled into a plausible story (Wilson, 2002). What starts as a structured model of behaviour risks becoming a mirror of our explanatory instincts.
This is not to say that TPB has no practical uptake. Its vocabulary lends itself easily to survey questions, intervention frameworks, and academic papers. But convenience is not the same as insight. A theory that expands to accommodate every failure, that mirrors intuitive rationalisations rather than challenging them, and that multiplies complexity without sharpening predictive power, cannot claim to be a scientific model in any serious sense. It ceases to function as a lens for understanding behaviour. It becomes, instead, a mirror — reflecting back the comforting stories we were already prepared to tell.
Piece 4: The Flat Mind of Planned Behaviour
Yet, the Theory of Planned Behaviour is not only a model of behaviour. It is a model of the mind — and a deeply misleading one. TPB assumes a mind that feels intuitively familiar: a unified planner, who considers options, weighs expectations, forms an intention, and then carries it out — unless blocked. It is a neat psychological arc. But it rests on a radically simplified architecture of cognition — closer to folk psychology than to contemporary behavioural science.
In reality, human cognition is not unified. It is layered, fragmented, partial, and uneven (Hansen, 2025). Behaviour emerges from the interplay of fast, automatic processes (Type 1), effortful algorithmic routines (Type 2), and higher-order reflective processes (Type 3) — including goals, beliefs, and epistemic values (Stanovich, 2011). In this model, human cognition operates across two fundamentally different modes:
Type 1 processes are fast, automatic, effortless, and often unconscious.
They generate rapid responses to environmental cues through reflexes, habits, intuitions, and adaptive heuristics — operating largely in the shadows outside or at the fringes of conscious awareness.
Type 2 and Type 3 processes, by contrast, are slow, effortful, and reflective.
They enable the deliberate manipulation of representations, critical evaluation, hypothetical reasoning, and the formation of explicit goals and plans. Yet even here, heuristics may be consciously applied to simplify complex reasoning tasks — not only as intuitive shortcuts, but as strategic tools in algorithmic thinking. Where Type 1 processes trigger responses based on learned and evolved patterns, Type 2 and Type 3 processes offer the fragile possibility of decoupling from immediate stimuli — but only under supportive conditions.
This distinction matters, because TPB recognises only the controlled domain — and even then, only in a flattened, idealised form. It assumes a mind that is constantly reflective: considering options, weighing beliefs, coordinating desires, and executing plans. But most behaviour is governed, moment to moment, by the rapid interactions of a trio of processes — automatic, algorithmic, and reflective — largely opaque to introspection and profoundly orchestrated by the situation.
Even within the slower strata of cognition, TPB collapses distinctions. It treats the Reflective Mind (which governs goals, values, and engagement of reasoning) and the Algorithmic Mind (which executes cognitive operations) as one seamless planner — failing to recognise that an agent may reflect without the algorithmic capacity to reason effectively, or may possess computational resources without engaging them reflectively.
Thus, TPB flattens both the cognitive architecture and the structure of action. It imagines behaviour as the straightforward output of a stable intention — when in fact behaviour emerges from the dynamic interplay of layered processes, many of them triggered by, shaped by, and unfolding within situational features rather than consciously directed plans.
By centring a unified reflective planner, TPB misrepresents the real sources of behavioural stability. It overlooks how much behaviour depends not on internal intentions alone, but on stable features of the environment — how generic situations and choice architectures cue, constrain, or enable action. Much of what people do — particularly habitual, environmentally cued, or socially embedded behaviours — is not the execution of a plan. It is the unfolding of a situation.
This matters not just theoretically, but practically. By ignoring the layered, situationally sensitive nature of cognition, TPB misguides intervention strategies — focusing narrowly on education, persuasion, and motivation, often based on unrealistic assumptions about stable intentions and rational planning.
This does not mean that education and persuasion are useless. But when grounded in faulty models of human behaviour, such efforts become clumsy, blind, and misdirected — often doing as much harm as good, while giving the false impression of progress. Working with more realistic theories of cognition and context does not discard education or motivation; it sharpens them, aligning strategies more closely with how behaviour actually unfolds.
In short: TPB does not just simplify behaviour. It simplifies the mind. And when a theory builds itself on a cognitive fiction, it cannot reliably predict or change behaviour. Its failures are not occasional. They are structural — and inevitable.
Putting the pieces together: The Behavioural Turn — A Copernican Shift
For decades, the Theory of Planned Behaviour has offered a map of human action that feels both intuitive and practical. It gives practitioners convenient variables to measure, neat intentions to track, and surface behaviours to predict.
But if we take seriously the deeper problems that run through folk psychology, economics, and social psychology alike, then the task is not to patch TPB. It is to recognise that the model was built on assumptions that never fully captured how behaviour unfolds — and that many entering Behavioural Insights still replicate, often unknowingly, despite nodding to newer ideas like dual- and triple-process theory. But we do not need a model that merely feels right; we need one that exposes where intuition misleads — and invites better maps of behaviour. Behavioural science does not advance by decorating intuition with variables; it advances by dismantling illusions and building theories that can survive being wrong.
A better theory must begin with how behaviour actually unfolds: through the interplay of layered cognition and dynamic, structured contexts. Not in the reflective plans that people narrate and project — shaping both how they anticipate and reinterpret their actions — but in the way attention, memory, affect, inhibition, habit, algorithmic execution and reflective thinking interact with the scaffolding of the environment to guide behaviour in real time.
This is what triple-process theory (TPT) does. It marks a Copernican shift we may label “The Behavioural Turn”. This doesn’t discard intention or reflection, but properly situates them within the broader, layered architecture of cognition and context. It shows that behaviour unfolds not primarily through planned intentions, but through the alignment (or misalignment) between automatic cues, algorithmic routines, reflective goals, and the affordances of the surrounding environment.
Yet despite its centrality, many working in Behavioural Insights have missed the full implications of this shift — still clinging to models of mind that centre planning, reflection, and intention as the primary engines of behaviour.
In doing so, they also overlook the methodological implications: that if cognition is layered, fragmented, and often opaque even to the agent, then self-reports of belief, desire, and intention — long treated as incorrigible — must themselves be treated with scepticism.
Recognising this is not a mere academic refinement; it changes what counts as evidence, what interventions are likely to succeed, and how theories of behaviour must be built if they are to endure.
Crucially, TPT links cognitive dynamics to generic situations, stable behavioural patterns, and designed choice architectures. From this, it generates structurally falsifiable predictions — not merely about what people say, but about how they behave when placed in recognisable environments. And this is what protects TPT from the criticisms levelled here at TPB. It does not immunise itself through endless elaboration; it anchors cognition in context and exposes itself to being wrong.
In reality, the Copernican shift of the behavioural turn was already made years ago by the pioneers of psychology cited above. It displaces the reflective planner from the centre of behavioural science — and places at the heart of explanation the dynamic interplay between cognition and context. Behaviour unfolds within generic situations, structured by choice architectures, where automatic, algorithmic, and reflective processes interact.
Yet, while it is puzzling how many — blinded by natural intuition, enthusiasm, and the prestige that Behavioural Insights offers — drive straight ahead without heed for the pieces of the turn they ought to know, there may be an even deeper puzzle to come. In missing the turn, people without that knowledge risk derailing Behavioural Insights itself — if this has not already happened. But we do not advance Behavioural Insights by refining the illusions of self-narrated plans. We advance it by knowing the science — by learning what we didn’t know.
How do we solve that… before it is too late?
To cite this article, please use the following reference:
Hansen, P. G. (2025). Theory of Planned Behaviour, or Theory of Plain Illusion? Puzzles & Pieces. iNudgeyou – The Applied Behavioural Science Centre. https://inudgeyou.com/en/puzzles-pieces-theory-of-planned-behaviour-or-theory-of-plain-illusion
References
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds.), Action control: From cognition to behavior (pp. 11–39). Springer.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T
Chater, N. (2018). The mind is flat: The illusion of mental depth and the improvised mind. Allen Lane.
Davidson, D. (1963). Actions, reasons, and causes. Journal of Philosophy, 60(23), 685–700.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Addison-Wesley.
Hansen, P. G. (2025). Notes on behavioural insights: Theory, methodology & praxis. Forthcoming.
Hume, D. (1739/2000). A treatise of human nature (D. F. Norton & M. J. Norton, Eds.). Oxford University Press.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295X.84.3.231
Popper, K. (1959). The logic of scientific discovery. Hutchinson.
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526. https://doi.org/10.1017/S0140525X00076512
Sheeran, P. (2002). Intention–behavior relations: A conceptual and empirical review. European Review of Social Psychology, 12(1), 1–36.
Smith, M. (1994). The moral problem. Blackwell.
Stanovich, K. E. (2009). What intelligence tests miss: The psychology of rational thought. Yale University Press.
Stanovich, K. E. (2011). Rationality and the reflective mind. Oxford University Press.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press.
Wilson, T. D. (2002). Strangers to ourselves: Discovering the adaptive unconscious. Harvard University Press.
Wood, W., Quinn, J. M., & Kashy, D. A. (2002). Habits in everyday life: Thought, emotion, and action. Journal of Personality and Social Psychology, 83(6), 1281–1297. https://doi.org/10.1037/0022-3514.83.6.1281