Analysis aids · BABOK techniques

Visual models & analysis aids

The visuals a business analyst reaches for to understand processes, specify systems, capture rules, reason about people, present data, and sequence work. Each entry tells you when to use it, when not to, the question it answers, a worked example, and the mistakes that quietly mislead the room.

Six families of visuals

Process visuals

Process visuals make the choreography of work visible — who does what, in which order, with which hand-offs. They are the most-used family in business analysis because almost every change initiative starts with the question "how does this actually happen today?" These models scale from a single sticky-note flowchart to enterprise BPMN landscapes.

Best used for: Diagnosing current state and designing future state of end-to-end work.

Flowchart

A simple sequence diagram of activities and decisions, drawn with rectangles, diamonds, and arrows.

The original general-purpose process notation, standardised in ISO 5807. Activities are rectangles, decisions are diamonds, terminators are ovals, and flow is arrows. Flowcharts are atomic — one swimlane, one perspective — which makes them ideal for the first pass at understanding a procedure before deciding whether more rigour (BPMN, swimlanes) is warranted.

Question it answers

What are the steps and decisions in this procedure?

When to use
  • Sketching a procedure for the first time with a single subject-matter expert.
  • Documenting a short, mostly-linear sequence (≤ 15 steps).
  • Training material where the audience is non-technical.
When NOT to use
  • Multi-actor processes with hand-offs — a swimlane is clearer.
  • Anything requiring events, messages, or compensations — use BPMN.
  • Complex branching with > 3 decision diamonds in a row — refactor into a decision table.

BA example: A new BA at an insurer maps the claim-intake call script as a flowchart on a single A4 to onboard themselves before joining the redesign workshop.

Common mistakes
  • Using a diamond as a routing node rather than a yes/no question.
  • Mixing levels of detail — UI clicks alongside business decisions.
  • Skipping the terminator, leaving readers unsure where the process ends.

Deep learning module

A boxes-and-arrows picture of the steps in a process, with diamonds for decisions.

Problem solved: When stakeholders disagree on 'how this actually works', a flowchart pins down the actual sequence and decision points so reasoning becomes shared rather than tribal.

Swimlane diagram

A flowchart split into horizontal or vertical lanes, one per actor, exposing every hand-off.

Sometimes called a cross-functional flowchart or Rummler-Brache diagram. Each lane belongs to a role, team, or system; activities sit in the lane responsible for performing them. The visual payoff is that hand-offs — the most common source of delay and error — appear as arrows crossing lane boundaries, and queues form at lane edges.
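Because a hand-off is simply an arrow that crosses a lane boundary, hand-offs can be counted mechanically once the steps are listed in order. A minimal Python sketch, with invented lane and activity names (not taken from any real model):

```python
# Hypothetical ordered step list from a swimlane diagram: (lane, activity).
steps = [
    ("Sales",        "capture application"),
    ("Underwriting", "assess risk"),
    ("Sales",        "request missing documents"),
    ("Underwriting", "final decision"),
    ("Legal",        "draft contract"),
]

# A hand-off is any arrow whose source and target sit in different lanes.
handoffs = sum(1 for (lane_a, _), (lane_b, _) in zip(steps, steps[1:])
               if lane_a != lane_b)
print(handoffs)
```

Counting this way on a real map is how the "we thought three, found eleven" surprises surface.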

Question it answers

Who does what, in what order, and where does work cross between them?

When to use
  • Diagnosing where work is delayed by hand-offs between teams.
  • Designing a future-state process that reassigns responsibility.
  • Communicating a process to multiple stakeholder groups at once.
When NOT to use
  • Single-actor procedures (a flowchart is enough).
  • Highly event-driven or knowledge work — consider CMMN.
  • When the focus is on rules rather than flow — use a decision table.

BA example: A BA mapping mortgage approval finds 11 hand-offs between Sales, Underwriting, Legal, and Operations in a process the business believed had three.

Common mistakes
  • Putting a system in its own lane and treating its actions as actor responsibilities (use BPMN pools).
  • Letting a single activity span two lanes — accountability becomes ambiguous.
  • Drawing every system call instead of grouping into business activities.

Deep learning module

A flowchart sliced into horizontal (or vertical) lanes — one per team, role, or system. Steps live in the lane of whoever does them.

Problem solved: Flowcharts hide ownership. When a process crosses teams, lanes make every handoff visible — handoffs are where work waits, breaks, or duplicates.

BPMN basics

An ISO-19510 process notation maintained by OMG, with formal semantics for events, gateways, and pools.

Business Process Model and Notation. The descriptive subset — pools, lanes, tasks, sequence flow, exclusive/parallel gateways, start/end events — covers ≈80% of business analysis needs and is portable across modelling tools. Unlike a flowchart, BPMN has formal execution semantics: a process drawn in BPMN can in principle be simulated or executed.

Question it answers

How does work flow across roles and systems with precise, unambiguous semantics?

When to use
  • Process redesign that will be reviewed by multiple parties or vendors.
  • Modelling cross-organisational processes with messages between participants.
  • Any process that may eventually be automated by a BPMS or workflow engine.
When NOT to use
  • First-pass discovery sessions — the formality slows discussion.
  • Knowledge-intensive case work where the next step is discretionary — use CMMN.
  • Pure rule logic — pull it into a decision table or DMN.

BA example: A BA models the claims-payment process in BPMN, using a separate pool for the bank with a message flow for payment instructions, so Operations and the bank's integration team can agree the contract.

Common mistakes
  • Using only one gateway type for everything — exclusive, parallel, and inclusive each mean different things.
  • Forgetting that sequence flow cannot cross a pool boundary; only message flow can.
  • Drawing wallpaper diagrams (>10 elements per page) instead of decomposing into sub-processes.

Deep learning module

BPMN — a standardised, richer flowchart with formal semantics for events, gateways, messages, and pools.

Problem solved: Flowcharts and swimlanes can't precisely express timers, message arrivals, errors, or compensation. BPMN does, and the same diagram can drive process automation engines.

SIPOC

A one-page table that scopes a process by listing its Suppliers, Inputs, Process steps, Outputs, and Customers.

A Six Sigma framing tool. Five columns, ≈5–7 rows of high-level process steps in the middle, and the suppliers/inputs that feed them on the left, the outputs/customers they produce on the right. SIPOC exists to settle scope arguments before detailed modelling begins — it is intentionally too coarse to design with.

Question it answers

What is the scope of this process, and who supplies and consumes it?

When to use
  • Kicking off a process improvement workshop.
  • Aligning a sponsor and process owner on what is in scope.
  • Onboarding a new BA to an unfamiliar process area.
When NOT to use
  • As an as-is artefact for redesign — it lacks the granularity.
  • When the audience needs to see hand-offs (use a swimlane).
  • When the scope is already agreed — SIPOC adds no new information.

BA example: Before redesigning expense reimbursement, the BA runs a 30-minute SIPOC workshop and discovers Finance considered the process to start at submission while Operations considered it to start at receipt capture — a one-week scoping disagreement averted.

Common mistakes
  • Listing 30 process steps — the format only works at ≤7 high-level steps.
  • Confusing inputs (things consumed) with suppliers (sources of inputs).
  • Treating SIPOC as a deliverable rather than a scoping conversation.

Deep learning module

A one-page table summarising a process at the highest level: Suppliers, Inputs, Process (5–7 steps), Outputs, Customers.

Problem solved: Before modelling a process in detail, teams disagree on its boundaries. SIPOC settles 'what's in, what isn't, and who cares' on a single page.

Value stream map (VSM)

A Lean diagram of an end-to-end process annotated with cycle times, wait times, and value-add ratio.

Borrowed from Toyota Production System and codified by Rother & Shook. A horizontal sequence of process boxes, each with a data box (cycle time, change-over time, %-complete-and-accurate), connected by a sawtooth timeline showing process time vs. wait time. Two numbers usually emerge: total lead time and percentage value-add. The gap between them is where improvement lives.
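The two headline numbers fall straight out of the step data boxes. A minimal sketch, using invented touch and wait times for a hypothetical loan process:

```python
# Invented step data for a hypothetical loan-approval value stream.
# touch_h = value-adding work on one item; wait_h = queue before the step.
steps = [
    {"name": "Intake",       "touch_h": 0.5, "wait_h": 24.0},
    {"name": "Credit check", "touch_h": 0.3, "wait_h": 48.0},
    {"name": "Underwriting", "touch_h": 1.0, "wait_h": 120.0},
    {"name": "Offer letter", "touch_h": 0.2, "wait_h": 96.0},
]

touch_time = sum(s["touch_h"] for s in steps)              # total value-add
lead_time = touch_time + sum(s["wait_h"] for s in steps)   # end-to-end elapsed
value_add_ratio = touch_time / lead_time

print(f"lead time {lead_time:.0f} h, value-add {touch_time:.1f} h "
      f"({value_add_ratio:.1%})")
```

With these made-up numbers the ratio is under 1%, which is exactly the kind of gap the sawtooth timeline exists to expose.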

Question it answers

Where is time lost between value-adding steps in this end-to-end process?

When to use
  • When the process exists but stakeholders disagree about where time is lost.
  • Lean / continuous improvement initiatives where reducing lead time is the goal.
  • Justifying investment in queue elimination, batch reduction, or automation.
When NOT to use
  • When the process is brand new and no cycle-time data exists.
  • When the dominant problem is rule complexity, not flow time.
  • Soft/knowledge work without measurable hand-offs.

BA example: A BA maps the loan-approval value stream and finds that of 13 days lead time, only 2 hours are value-add — every other minute is a queue between teams. The redesign focuses on collapsing the queues, not the work.

Common mistakes
  • Estimating cycle times instead of measuring them.
  • Drawing the VSM only for the happy path and ignoring rework loops.
  • Confusing process time (touch time on a single item) with cycle time (interval between completions).

Deep learning module

VSM — a process diagram annotated with cycle time, wait time, and value-add vs non-value-add classification across the end-to-end flow.

Problem solved: Flowcharts show what happens; a VSM shows where time is wasted. It turns 'we're slow' into 'we have 4 hours of work in 18 days of lead time, and 14 of those days are queues'.

As-is / to-be process maps

Paired diagrams of the current state and the future state of a process, with a delta annotation between them.

Not a notation in itself — a discipline for using the same notation (usually swimlanes or BPMN) twice: once for what happens today (as-is) and once for what should happen tomorrow (to-be). The difference between the two diagrams becomes the work backlog: removed steps, new steps, reassigned hand-offs, eliminated systems.

Question it answers

What does the process look like today, what should it look like tomorrow, and what changed?

When to use
  • Process redesign initiatives where the change is non-trivial.
  • When stakeholders disagree about what "current" looks like — drawing it forces consensus.
  • To produce a defensible change impact assessment before approval.
When NOT to use
  • Greenfield processes — no as-is exists.
  • Tactical bug-fixes — the overhead is disproportionate.
  • When the as-is is so broken that modelling it is wasted effort and a fresh design is warranted.

BA example: A BA uses paired swimlanes to show that the to-be procurement process eliminates two approval lanes, replaces three forms with one, and reassigns vendor onboarding from Finance to Procurement. The delta becomes the implementation roadmap.

Common mistakes
  • Drawing the as-is from the procedure manual rather than observation, so the to-be solves a problem nobody has.
  • Modelling the to-be at greater detail than the as-is, hiding gaps.
  • Forgetting transition requirements — the migration from as-is to to-be needs its own backlog.

Deep learning module

Two paired process diagrams: the current state ('as-is') and the proposed future state ('to-be'). Differences = the change.

Problem solved: Without an explicit as-is, change conversations turn into people defending their preferred future. With as-is + to-be on one page, the deltas become the recommendation.

Requirements visuals

Requirements visuals turn elicited needs into shared artefacts that developers, testers, and sponsors can argue with. They sit at the intersection of problem framing and solution design, separating what the system must do (use cases, user stories, acceptance criteria) from what it looks like (wireframes, prototypes) and where its responsibilities end (context and scope models).

Best used for: Specifying behaviour, scope, and experience for a solution.

Use case diagram

A UML diagram showing actors (stick figures) and the use cases (ovals) they perform with the system.

Defined in UML 2.5. The diagram itself is intentionally sparse — actor, system boundary, oval per use case, lines for associations, plus «include» and «extend» relationships for shared behaviour. The richness is in the textual use case behind each oval (preconditions, main success scenario, alternate flows, postconditions). The diagram is the index; the text is the content.

Question it answers

Which goals can which actors achieve with this system?

When to use
  • Establishing the functional scope of a system at a glance.
  • Driving an iterative elicitation: one use case → one workshop → one detailed text.
  • Communicating system responsibilities to a non-technical sponsor.
When NOT to use
  • Highly data-driven systems where most behaviour is CRUD — the diagram becomes noise.
  • When the team works from user stories and a backlog — the formats overlap.
  • To represent a process flow — sequence of use cases is not flow.

BA example: A BA on a benefits-administration project produces a use case diagram with 14 use cases across 4 actors; the sponsor signs off on the scope picture in 30 minutes, and elicitation proceeds one use case at a time.

Common mistakes
  • Drawing every CRUD operation as a use case — keep it goal-oriented.
  • Over-using «include» and «extend» as decomposition rather than reuse.
  • Treating the diagram as the requirement; the text behind each oval is where requirements live.

Deep learning module

A diagram showing actors (people or systems) and the use cases (goals) they pursue with a system, plus relationships between use cases.

Problem solved: It answers 'who uses this system to do what' on one page — the conversation that everything else depends on.

User story map

A two-dimensional layout of user activities (top), tasks (middle), and stories (bottom) sliced into release bands.

Coined by Jeff Patton. The horizontal axis is the user's narrative flow (the "backbone"); the vertical axis is depth of detail. Releases are drawn as horizontal bands across the map so each release contains a thin slice of every backbone step rather than a complete one. The result is a backlog you can see end-to-end at the same time.
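The release-banding idea can be made concrete with a toy structure: each release takes the top stories from every backbone step, rather than exhausting one column. Every name below is invented for illustration:

```python
# Hypothetical story map for a permit application; all names are invented.
# Backbone steps run left to right; stories under each step are ordered
# top to bottom by priority.
story_map = {
    "Search": ["basic keyword search", "filters", "saved searches"],
    "Apply":  ["single-page form", "document upload", "save draft"],
    "Track":  ["status page", "email notifications", "SMS alerts"],
}

def release_slice(story_map: dict, depth: int) -> dict:
    """A release band takes the top `depth` stories from every backbone
    step, so each release still exercises the whole user journey."""
    return {step: stories[:depth] for step, stories in story_map.items()}

mvp = release_slice(story_map, 1)  # thinnest end-to-end slice
```

The point of the sketch is the shape, not the code: release 1 touches every column, which is what keeps the journey coherent.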

Question it answers

What is the smallest end-to-end slice we can release, and what comes next?

When to use
  • Replacing a flat backlog when stakeholders cannot see what a release will deliver.
  • Planning an MVP that must exercise the whole user journey.
  • Discovering missing steps in the user's narrative.
When NOT to use
  • When work is not narrative (platform/infrastructure backlogs).
  • After the backbone stabilises and a flat backlog is sufficient.
  • Multi-team programmes — story mapping per team scales better than one mega-map.

BA example: A BA story-maps a citizen-permit application with 6 backbone steps; release 1 takes the thinnest viable story from each step, so a citizen can submit and track end-to-end on day one rather than receiving a deep but stalled "submission" feature.

Common mistakes
  • Treating release bands as fixed before validating story granularity.
  • Confusing story map with backlog — the map is for shape, the backlog is for sequencing within a sprint.
  • Allowing the backbone to drift toward features instead of user activities.

Deep learning module

A two-dimensional backlog: top row = user activities/journey left-to-right, columns below = stories ordered by priority/release.

Problem solved: Flat backlogs lose the journey. Story maps preserve narrative and let teams slice releases that deliver coherent user value rather than disconnected features.

Wireframe

A low-fidelity sketch of a screen showing layout, content blocks, and interaction without visual design.

Greyscale, no real images, placeholder copy, no brand. The point is to direct attention to structure (what is on the screen, where, in what hierarchy) rather than aesthetics. Wireframes are deliberately ugly so reviewers comment on the right things.

Question it answers

What goes on this screen, and how is it organised?

When to use
  • Confirming information architecture and screen hierarchy with stakeholders.
  • Pairing with a use case to make abstract requirements concrete.
  • Testing whether a workflow fits on a single screen before committing to design.
When NOT to use
  • When the question is brand or visual identity — use a high-fidelity comp.
  • When real interaction matters (validation, animation, transitions) — prototype instead.
  • On purely back-end or API features — wireframes have nothing to show.

BA example: A BA wireframes the new claim-status page; the sponsor immediately notices that the document upload was missing — caught in 10 minutes, not 10 weeks.

Common mistakes
  • Adding colour, photography, or brand — the conversation drifts to aesthetics.
  • Drawing too many wireframes too soon — wireframe the contentious screens, not all of them.
  • Treating the wireframe as a final spec; pair it with use cases or stories that capture behaviour.

Deep learning module

A low-fidelity sketch of a screen showing layout, content blocks, and key interactions — without colour, brand, or final visuals.

Problem solved: Words let people imagine different things. A wireframe forces alignment on layout and content priority before expensive visual or engineering work begins.

Prototype

An interactive mock-up — clickable, navigable, sometimes data-driven — that lets users attempt real tasks with the proposed solution.

Anywhere on the spectrum from a clickable Figma flow to a working slice of code with a fake back-end. The defining property is interaction: a user can attempt a task and either succeed or get stuck, producing evidence about the design. Prototypes range from throwaway (paper, click-through) to evolutionary (becomes the eventual solution).

Question it answers

Can a real user actually accomplish the task with the proposed design?

When to use
  • Validating a workflow with real users before committing to build.
  • Resolving disagreements about how a feature should behave.
  • De-risking a high-uncertainty UI pattern.
When NOT to use
  • When requirements are well-known and the team can build directly.
  • Compliance/back-end work without a user-facing surface.
  • When the prototype would cost more than the build — favour wireframes plus stories.

BA example: A BA hands a clickable prototype of the new appointment-booking flow to five patients in a 45-minute test; three get stuck on the confirmation screen, leading to a redesign before a single line of production code is written.

Common mistakes
  • Confusing prototype fidelity with prototype value — low-fi often produces better feedback.
  • Prototyping the easy parts and skipping the risky ones.
  • Letting an evolutionary prototype become the production codebase by accident.

Deep learning module

An interactive simulation of a solution — clickable screens, no real backend — used to validate flow and feel.

Problem solved: Static wireframes can't answer 'does this feel right'. Prototypes let users perform tasks and reveal usability problems before code is written.

Context diagram

A level-0 data flow diagram showing the system as a single circle and every external entity it exchanges data with.

One bubble for the system in the middle, one box for each external entity (people, organisations, other systems) around it, and arrows labelled with the data exchanged. Nothing inside the system is shown. It is the cheapest possible artefact for nailing down system boundary and external interfaces.
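Treating the diagram as data makes one review check mechanical: every arrow crossing the boundary must carry a data label. A small sketch with hypothetical entities and flows:

```python
# Hypothetical edge list for a claims-platform context diagram:
# (external_entity, direction, data_label). All names are illustrative.
flows = [
    ("Customer",  "in",  "claim form"),
    ("Bank",      "out", "payment instruction"),
    ("Regulator", "out", ""),  # unlabelled arrow: the mistake to catch
]

# Every arrow crossing the system boundary should carry a data label.
unlabelled = [(entity, direction) for entity, direction, label in flows
              if not label]
print(unlabelled)
```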

Question it answers

What is inside the solution, what is outside, and what flows between them?

When to use
  • Establishing what is inside vs. outside the solution at the start of analysis.
  • Identifying every interface the project will touch.
  • Settling "is X our problem?" debates with sponsors.
When NOT to use
  • Once the boundary is settled and detail work begins.
  • Inside-the-system architecture — context diagrams are level 0 only.
  • Pure organisational/people questions — use a stakeholder map.

BA example: A BA produces a one-page context diagram for a new claims platform; it surfaces an unexpected interface to the regulator that no one had budgeted for, two months before kick-off.

Common mistakes
  • Showing internal subsystems (that's level 1, not context).
  • Forgetting time-based or event-driven external entities (the regulator, the auditor).
  • Unlabelled arrows — the data flowing across the boundary is the whole point.

Deep learning module

A single-page picture of the system in the middle, surrounded by external entities (people, organisations, systems) with labelled data flows between them.

Problem solved: It answers 'where does this system end and what does it talk to?' on one page — the integration conversation.

Scope model

Any of several visual frames (in/out lists, scope tree, feature/exclusion table) that explicitly enumerate what the project will and will not deliver.

BABOK groups several techniques under "Scope Modeling". The simplest — a two-column in/out list — is often enough; richer variants include a hierarchical scope tree (capabilities → features → stories) and an inclusion/exclusion matrix. The deliverable is always two things: what is in, and (more importantly) what is explicitly out.

Question it answers

What is this project committing to deliver, and what is it explicitly not?

When to use
  • Drafting or reviewing a project charter or statement of work.
  • Closing scope-creep arguments by pointing at the explicit "out" column.
  • Onboarding new sponsors who weren't in the original framing conversations.
When NOT to use
  • Once detailed requirements exist; they are the de-facto scope.
  • When the team works iteratively with a living backlog and no fixed scope.
  • As a substitute for a context diagram (what is in/out is different from what is internal/external).

BA example: A BA captures "customer self-service portal" with an explicit out: "agent-facing back-office UI — phase 2". Three months later, when an executive asks for the agent UI, the scope model resolves the conversation in two minutes.

Common mistakes
  • Listing only what's in — without explicit exclusions, scope is unbounded.
  • Mixing levels (capabilities, features, stories) in one list.
  • Treating the scope model as static; revisit it at every major change.

Deep learning module

A diagram (often a simple boundary box) showing what's in scope for a change initiative and — crucially — what's out.

Problem solved: Scope creep is fuelled by ambiguity. An explicit, signed-off scope model gives the team a defensible 'no' and lets stakeholders see trade-offs early.

Decision and logic visuals

Decision visuals separate complex business rules from process flow so each can change at its own cadence. They give auditors something to certify, testers something to enumerate, and analysts a way to surface contradictions in policy long before code is written.

Best used for: Capturing rules and choice logic that would otherwise hide inside narrative or code.

Decision tree

A branching diagram in which each node is a question and each branch is an answer leading to a decision or further question.

Reads top-to-bottom: root question, branches per possible answer, intermediate nodes for follow-up questions, and leaves for decisions or actions. Useful when the order of questions matters (you can stop early) and when the tree shape itself communicates priority. Compact for ≤ ~20 leaves; collapses under its own weight beyond that — switch to a decision table.
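Because question order matters and evaluation can stop early, a decision tree maps naturally onto nested conditionals. A sketch of the help-desk routing idea, with invented categories and verdicts:

```python
def route_call(customer_type: str, severity: str) -> str:
    """Hypothetical help-desk triage: ask the cheapest question first and
    stop as soon as a verdict is reached. All categories are invented."""
    if customer_type == "enterprise":      # root question
        if severity == "critical":
            return "escalate to L2"
        return "enterprise queue"
    if severity == "critical":             # non-enterprise branch
        return "escalate to L2"            # same leaf twice: a consolidation
                                           # candidate, as in the example below
    return "standard queue"
```

Note how the repeated "escalate to L2" leaf is visible in the code just as it is in the drawing; repeated sub-trees are the usual signal that the rules belong in a table instead.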

Question it answers

Given a sequence of yes/no (or categorical) answers, what should we decide?

When to use
  • Sequential decisions where the cheapest question is asked first.
  • Communicating triage logic (eligibility, routing, escalation) to humans.
  • Teaching a junior agent or new system how to reach a verdict.
When NOT to use
  • When the order of questions doesn't matter — a decision table is more compact.
  • When the same question appears in multiple branches — a sign rules belong in a table or DMN.
  • Probabilistic / weighted decisions — use a different model.

BA example: A BA documents the call-routing tree for a help-desk: customer type → product → severity → routing decision. The visual exposes that two branches both end at "escalate to L2", which becomes a candidate for consolidation.

Common mistakes
  • Drawing trees so deep that nobody reads past three levels.
  • Repeating the same sub-tree under multiple branches (refactor into a sub-decision).
  • Letting the tree drift into process logic ("send email", "wait") — those belong in BPMN.

Deep learning module

A branching diagram of decisions and chance events with values at the leaves; collapsed back to a single Expected Monetary Value per option.

Problem solved: Lets teams compare options under uncertainty quantitatively, instead of arguing over gut-feel preferences.

Decision table

A grid where each row (or column) is one rule: a combination of input conditions and the action it produces.

Conditions across the top, actions down one side, and one row per complete combination of condition values. Decision tables are the canonical form for capturing rules — every combination is enumerated, contradictions are visible at a glance, and missing rules show up as gaps. Formalised in DMN as a first-class artefact.
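Enumerating every combination is what makes gaps detectable by machine as well as by eye. A minimal completeness check over hypothetical two-condition eligibility rules:

```python
from itertools import product

# Hypothetical eligibility rules: (employed, credit_ok) -> action.
# Each key is one complete combination of condition values; the
# (False, False) combination is deliberately missing to show a gap.
rules = {
    (True,  True):  "approve",
    (True,  False): "refer",
    (False, True):  "refer",
}

# Completeness check: every combination must map to exactly one action.
all_combos = set(product([True, False], repeat=2))
gaps = all_combos - rules.keys()
print("uncovered combinations:", sorted(gaps))
```

The same enumeration gives testers their case list: one row, one test.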

Question it answers

For every combination of inputs, what should the system do?

When to use
  • Whenever there are more than three or four rules — a table beats prose every time.
  • Codifying eligibility, pricing, routing, or compliance rules.
  • Producing testable specifications — every row is a test case.
When NOT to use
  • When the order of evaluation matters — use a decision tree or DMN with hit policies.
  • Two-condition trivial logic — prose is fine.
  • Continuous numeric inputs — discretise into ranges first, or use a different technique.

BA example: A BA replaces 14 paragraphs of mortgage-eligibility prose with a 9-row decision table. Two contradictory rules surface immediately and are resolved with the policy team before development begins.

Common mistakes
  • Allowing two rules to match the same inputs without an explicit hit policy.
  • Leaving uncovered combinations (gaps) — completeness is what makes the table valuable.
  • Embedding action logic that should be a downstream sub-decision.

Deep learning module

A table of conditions across the top and resulting actions across the bottom — every column is a unique combination of inputs that leads to a specific output.

Problem solved: Prose business rules ('if A and B but not C…') hide gaps and contradictions. Decision tables make every combination explicit and force the team to handle them all.

Logic model

A planning visual mapping inputs → activities → outputs → outcomes → impact, used to reason about cause and effect.

Originating in evaluation research (United Way, Kellogg Foundation), the logic model lays out a programme's theory of change as a horizontal chain. Each arrow is a hypothesis: if we put these inputs in, and run these activities, we expect these outputs, which should produce these outcomes, contributing to this impact. It surfaces unstated assumptions ruthlessly.

Question it answers

Why do we believe these activities will produce the outcome we want?

When to use
  • Strategy analysis — connecting initiative work to business outcomes.
  • Benefits realisation planning where the chain from output to impact is unclear.
  • Programmes (multiple projects) where the value story has to hold together.
When NOT to use
  • Pure delivery teams — outputs and outcomes aren't theirs to define.
  • Operational processes — use process visuals.
  • Tactical features — the rigour is disproportionate.

BA example: A BA on a digital-inclusion programme draws a logic model that exposes a missing link between "laptops distributed" (output) and "jobseekers find work" (impact); the gap becomes a training workstream the programme had forgotten.

Common mistakes
  • Conflating outputs (what we produce) with outcomes (what changes for the user).
  • Listing inputs as a budget line rather than a capability.
  • Drawing a logic model and never revisiting it after delivery — it is meant to be tested.

Deep learning module

A diagram showing inputs → activities → outputs → outcomes → impact, used to articulate how a programme is supposed to create value.

Problem solved: Programmes often skip from 'we built X' to 'we delivered impact' without articulating the chain. Logic models force every link in the chain to be made explicit and testable.

Rules matrix

A two-dimensional grid mapping a domain entity (rows) against a condition or actor (columns), with the cell carrying the rule.

Less formal than a DMN decision table; useful when the rule set is best read by domain (one row per product, transaction type, or customer segment) rather than by combination. Common variants include the entitlement matrix (role × capability) and the pricing matrix (segment × tier × discount).
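Reading the matrix by row also makes the "two identical roles" smell easy to detect, as in the entitlements example below. A sketch with invented roles and actions:

```python
# Hypothetical entitlement matrix (a rules-matrix variant): rows are roles,
# columns are actions, each cell says whether the role may do the action.
actions = ["create", "approve", "archive"]
matrix = {
    "clerk":      {"create": True,  "approve": False, "archive": False},
    "supervisor": {"create": True,  "approve": True,  "archive": False},
    "manager":    {"create": True,  "approve": True,  "archive": False},
    "auditor":    {"create": False, "approve": False, "archive": True},
}

def can(role: str, action: str) -> bool:
    """Look up the applicable rule for one entity: 'what's the rule for X?'"""
    return matrix[role][action]

# Roles with identical rows are merge candidates.
rows = {}
for role, rights in matrix.items():
    rows.setdefault(tuple(rights[a] for a in actions), []).append(role)
duplicates = [roles for roles in rows.values() if len(roles) > 1]
print(duplicates)
```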

Question it answers

For each entity in this set, what is the applicable rule?

When to use
  • Communicating a rule landscape to business stakeholders.
  • When the natural conversation is "what's the rule for X?" rather than "what happens when?".
  • Reviewing entitlements, pricing, or eligibility across many segments.
When NOT to use
  • When inputs are independent and combinations matter — use a decision table.
  • When sequence matters — use a decision tree.
  • Highly dynamic rules — codify in DMN for governance.

BA example: A BA produces a 7×9 entitlements matrix (roles × actions) for the new case-management system; two roles turn out to have identical rights and are merged before development.

Common mistakes
  • Letting the matrix grow until it has 200 cells and is unmaintainable — slice or codify.
  • Inconsistent cell content (sometimes a rule, sometimes a yes/no).
  • Treating a matrix snapshot as the system of record instead of the source.

Deep learning module

A grid that cross-tabulates two dimensions of business rules — typically role × action, or rule × condition — to expose gaps and contradictions.

Problem solved: Long lists of rules let contradictions and gaps hide. A matrix forces every cell to be considered explicitly.

Stakeholder and strategy visuals

Stakeholder and strategy visuals zoom out from the work itself to the human and organisational context around it. They expose power dynamics, accountability gaps, and strategic mismatches that cause otherwise sound projects to fail.

Best used for: Reasoning about people, power, accountability, and strategic fit.

Stakeholder map

A visual register of stakeholders showing their relationships to the change, often clustered by group or proximity.

An umbrella term for any visual that organises stakeholders. Common forms include the onion (sponsor at the centre, expanding outward), the ecosystem map (organisations and their relationships), and the persona-grouped chart. The form follows the question being asked.

Question it answers

Who has a stake in this change, and how are they connected?

When to use
  • At project initiation to make sure no stakeholder group has been missed.
  • Before a major decision to identify whose input is needed.
  • When the change has political dynamics that need to be visible.
When NOT to use
  • Steady-state operations — a stakeholder list is enough.
  • When the question is power and engagement — use an influence/interest grid.
  • When the question is accountability — use a RACI matrix.

BA example: A BA on a regulatory reporting programme produces an onion map and surfaces three external stakeholder groups (auditor, regulator, industry body) that the programme charter had treated as a single line.

Common mistakes
  • Stopping at named individuals instead of role categories.
  • Mistaking a stakeholder map for an engagement plan — the map is the input.
  • Allowing the map to drift out of date — it should be re-checked at each phase gate.

A diagram of the people and groups who can affect or are affected by a change, organised by their relationships, position, or influence.

Problem solved: Without a map, BAs talk to whoever is loudest. With a map, engagement is deliberate and proportionate to influence and interest.


RACI matrix

A grid mapping activities (rows) to roles (columns), with each cell marked R, A, C, or I.

Responsible (does the work), Accountable (single point of approval — exactly one per row), Consulted (two-way input), Informed (one-way notification). RACI exists to expose two failure modes: activities with no A (no one approves) and activities with multiple As (no one approves, in practice).
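Both failure modes can be detected mechanically once the matrix exists as data. A minimal Python sketch; the activities and roles are illustrative, loosely echoing the incident-response example in this entry:

```python
def raci_issues(raci: dict[str, dict[str, str]]) -> list[str]:
    """Flag rows with no Accountable, or with more than one."""
    issues = []
    for activity, row in raci.items():
        n_a = sum(1 for letter in row.values() if letter == "A")
        if n_a == 0:
            issues.append(f"{activity}: no Accountable")
        elif n_a > 1:
            issues.append(f"{activity}: {n_a} Accountables")
    return issues

raci = {
    "Declare major incident": {"Operations": "A", "Engineering": "A", "Security": "A"},
    "Notify customers":       {"Operations": "R", "Comms": "A"},
    "Write post-mortem":      {"Engineering": "R", "Operations": "C"},
}
print(raci_issues(raci))
# ['Declare major incident: 3 Accountables', 'Write post-mortem: no Accountable']
```

The same pass can be extended to warn on rows with no R, the other common gap.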

Question it answers

For each activity, who does it, who approves, who is consulted, who is informed?

When to use
  • Operationalising a process design — once the steps are clear, who owns each one.
  • Resolving accountability disputes after a near-miss or audit finding.
  • Onboarding new teams into a cross-functional process.
When NOT to use
  • Highly fluid agile teams where roles are emergent.
  • Single-team work — overhead disproportionate.
  • When the question is power, not accountability — use an influence/interest grid.

BA example: A BA's RACI for incident response shows three As on "declare major incident" — Operations, Engineering, and Security all believed they owned it, explaining a series of slow declarations. One A is assigned and the response time drops by 40%.

Common mistakes
  • Assigning multiple As to a single activity (the cardinal sin).
  • Confusing R and A — Responsible does, Accountable approves.
  • Marking everyone Consulted as a political compromise — Consulted means real two-way input.

A matrix listing tasks/decisions on rows and roles on columns, with each cell marked R (Responsible), A (Accountable), C (Consulted), or I (Informed).

Problem solved: Solves the two recurring sins of unclear accountability: 'I thought you were doing it' and 'why wasn't I consulted?'


Influence / interest grid

A 2×2 plotting each stakeholder by their power to affect the change (y) and their interest in it (x).

Mendelow's matrix. The four quadrants imply different engagement strategies: high power / high interest (manage closely), high power / low interest (keep satisfied), low power / high interest (keep informed), low power / low interest (monitor). The grid is most useful when stakeholders disagree about who matters.
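The quadrant-to-strategy mapping is simple enough to express directly. A Python sketch, assuming 1-10 scores for power and interest; the midpoint of 5 is an arbitrary choice, not part of Mendelow's model:

```python
def engagement(power: int, interest: int, threshold: int = 5) -> str:
    """Map 1-10 power/interest scores to the four Mendelow strategies."""
    if power > threshold:
        return "manage closely" if interest > threshold else "keep satisfied"
    return "keep informed" if interest > threshold else "monitor"

# Illustrative scores (invented, not taken from the example below).
stakeholders = {"Head of Compliance": (9, 3), "Ops team lead": (3, 9)}
for name, (p, i) in stakeholders.items():
    print(name, "->", engagement(p, i))
# Head of Compliance -> keep satisfied
# Ops team lead -> keep informed
```

In practice the scores come from a workshop conversation, not a spreadsheet; the code only makes the quadrant boundaries explicit.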

Question it answers

Where should we spend our limited stakeholder-engagement bandwidth?

When to use
  • Designing a stakeholder engagement plan with limited bandwidth.
  • After a stakeholder map identifies many parties — the grid prioritises them.
  • Pre-decision lobbying analysis ahead of a major sign-off.
When NOT to use
  • When all stakeholders have similar power and interest — the grid adds nothing.
  • Replacing accountability analysis — use RACI for that.
  • Operational team contexts where engagement is daily by default.

BA example: A BA's grid puts the head of Compliance in the high-power / low-interest quadrant; the engagement plan changes from monthly demos (which she ignored) to quarterly executive briefings (which she attended).

Common mistakes
  • Treating the grid as static — interest can spike on news.
  • Confusing influence with seniority — a junior expert often has more.
  • Putting names in quadrants in front of those very people.

A 2×2 grid plotting stakeholders by their influence (vertical) and interest (horizontal), suggesting an engagement strategy per quadrant.

Problem solved: Tells you where to spend disproportionate engagement effort: high-influence/high-interest stakeholders need active management; low/low get the newsletter.


SWOT

A 2×2 matrix capturing internal Strengths and Weaknesses against external Opportunities and Threats.

A strategic framing exercise. Internal vs external on one axis, helpful vs harmful on the other. The output is rarely the four lists themselves — it is the cross-quadrant strategies (S–O: pursue, W–O: improve, S–T: defend, W–T: avoid) that emerge when the team is forced to compare quadrants.
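The cross-quadrant step is a straightforward enumeration: every internal item paired with every external item, labelled with the strategy type it feeds. A Python sketch with invented entries echoing the SME-banking example in this entry:

```python
from itertools import product

# Illustrative SWOT entries (assumptions for the sketch).
swot = {
    "S": ["strong mobile engineering"],
    "W": ["no branch network"],
    "O": ["rising remote work"],
    "T": ["incumbent price cuts"],
}

# Each internal/external pairing feeds one strategy type.
labels = {("S", "O"): "pursue", ("W", "O"): "improve",
          ("S", "T"): "defend", ("W", "T"): "avoid"}

strategies = [
    f"{strategy}: {a} + {b}"
    for (internal, external), strategy in labels.items()
    for a, b in product(swot[internal], swot[external])
]
print("\n".join(strategies))
```

The enumeration guarantees no quadrant pairing is skipped, which is exactly the discipline the "skipping cross-quadrant strategies" mistake below warns about.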

Question it answers

Where do our internal capabilities and external environment intersect?

When to use
  • Strategy analysis at the start of an initiative.
  • Periodic environmental scans (annual planning).
  • Comparing two strategic options side by side.
When NOT to use
  • Tactical or operational decisions — the format is too coarse.
  • When PESTLE or Porter's five forces is more appropriate (pure external view).
  • As a brainstorm without subsequent cross-quadrant analysis.

BA example: A BA running a SWOT for a new SME-banking entrant identifies a Weakness (no branch network) intersecting an Opportunity (rising remote-work) → strategy: lead with mobile-first onboarding, defer branches.

Common mistakes
  • Listing facts without classification — strengths get mixed with opportunities.
  • Skipping the cross-quadrant strategies — the matrix is a means, not an end.
  • Using SWOT as a substitute for evidence — every entry should be traceable.

A 2×2 of Strengths, Weaknesses (internal), Opportunities, and Threats (external) used for situational analysis.

Problem solved: Forces a balanced view of internal capability vs external environment before strategy is set. Done well, it surfaces options; done badly, it's a brainstorm with rotational symmetry.


Business model canvas

A nine-block one-page visual of how an organisation creates, delivers, and captures value.

Created by Alexander Osterwalder. Nine blocks: Customer Segments, Value Propositions, Channels, Customer Relationships, Revenue Streams, Key Resources, Key Activities, Key Partners, Cost Structure. The canvas exists to make a business model visible enough to challenge — every block is an assumption to test.

Question it answers

How does this business create, deliver, and capture value?

When to use
  • Strategy analysis for a new product, line of business, or venture.
  • Comparing the current and proposed business model side by side.
  • Programme kick-offs where the value story matters more than the work breakdown.
When NOT to use
  • Inside an established business model where the canvas adds no signal.
  • Operational improvements (use process visuals).
  • Detailed financial modelling — the cost/revenue blocks are too coarse.

BA example: A BA on a digital-bank initiative canvases two competing business models — fee-based vs interchange-based; revenue-stream and partner blocks make the trade-offs visible to the steering committee in one meeting.

Common mistakes
  • Filling all nine blocks for completeness rather than for insight.
  • Treating the canvas as static — it should evolve as assumptions are tested.
  • Conflating the canvas with the value proposition — VP is one block among nine.

A nine-block visual template (Osterwalder) describing how an organisation creates, delivers, and captures value.

Problem solved: Business model conversations sprawl. The canvas forces a one-page, structured articulation that exposes assumptions and gaps.


Category

Data and metrics visuals

Data visuals are how analysis becomes evidence. The question is never "what chart do I want?" but "what decision does this chart enable?" Each chart type has a narrow band of comparisons it actually supports — choosing the wrong one quietly mis-leads even careful readers.

Best used for: Showing magnitude, trend, distribution, composition, or relationship in numeric data.

Dashboard (simple)

A single screen of curated metrics and charts giving a role-specific view of what to monitor and act on.

A dashboard is not a report — it is an at-a-glance answer to "is anything wrong, and what should I do?". Stephen Few's principle: every chart on a dashboard must earn its place by enabling a decision the user can take. Designed in BA work alongside stakeholders, never sourced from a tool's default templates.

Question it answers

Is everything operating within tolerance, and if not, what should I do?

When to use
  • Operational monitoring (service level, capacity, queues).
  • Executive at-a-glance views of KPIs.
  • Replacing weekly status emails with a self-service view.
When NOT to use
  • Deep analysis — a dashboard is for monitoring, not exploration.
  • One-off questions — produce a chart, not a dashboard.
  • When you don't yet know which decisions the user takes — design those first.

BA example: A BA designs the operations dashboard for a contact centre: four tiles (calls in queue, longest wait, agents available, abandonment rate). The previous 28-tile dashboard had been ignored.
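A tolerance check behind those four tiles might look like the sketch below; the threshold values and the "which direction is bad" rule are assumptions, not from the example:

```python
# The four contact-centre tiles from the example, with illustrative
# (current value, alert threshold) pairs.
tiles = {
    "calls in queue":   (37, 50),
    "longest wait":     (310, 240),   # seconds
    "agents available": (12, 5),
    "abandonment rate": (0.03, 0.08),
}

def breaches(tiles):
    """A tile alerts when its value crosses the threshold in the bad
    direction; here we assume higher is worse except agents available."""
    bad = []
    for name, (value, limit) in tiles.items():
        worse_when_low = name == "agents available"
        if (value < limit) if worse_when_low else (value > limit):
            bad.append(name)
    return bad

print(breaches(tiles))  # ['longest wait']
```

The point of the sketch is the design principle, not the code: each tile carries its own threshold, so the dashboard can say "something is wrong" rather than relying on silence.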

Common mistakes
  • Stuffing a dashboard with everything that can be measured.
  • Using charts that mis-encode data (3-D pies, gauges) for visual interest.
  • No alerts or thresholds — the dashboard tells you nothing is wrong by being silent.

A single screen of charts, KPIs, and tables designed to answer a specific recurring question for a specific audience.

Problem solved: Eliminates the recurring 'pull me a report' tax — and, when designed well, surfaces issues that prose reports bury.


KPI chart

A focused chart for a single key performance indicator, usually showing actual vs. target over time.

Often a line or bar chart with the target as a reference line and a coloured indicator (status badge) for current state. The visual carries three signals at once: where we are, where we should be, and the trajectory between them. KPIs without a target are just metrics.
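The three signals reduce to a small computation. A Python sketch with an invented series; the status and trend rules are deliberately simplistic and would be agreed with stakeholders in practice:

```python
def kpi_status(actuals: list[float], target: float) -> tuple[str, str]:
    """Return (status vs target, trend direction) for the latest point.
    Illustrative rules; real RAG definitions should be agreed first."""
    latest = actuals[-1]
    status = "on target" if latest >= target else "below target"
    trend = "improving" if len(actuals) > 1 and latest > actuals[-2] else "flat/worse"
    return status, trend

# A first-call-resolution style series: post-rollout dip, then recovery.
series = [0.78, 0.71, 0.69, 0.74, 0.79]
print(kpi_status(series, target=0.75))  # ('on target', 'improving')
```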

Question it answers

Are we hitting our target, and which way is the trend going?

When to use
  • Communicating progress against a stated objective to executives or sponsors.
  • Reporting service levels in operational reviews.
  • Tracking benefits realisation post-go-live.
When NOT to use
  • Without an agreed target — pick a different metric or set the target first.
  • When the metric is a vanity number with no decision attached.
  • For exploration — use a flexible chart instead.

BA example: A BA's KPI chart for first-call resolution shows the post-rollout dip and recovery; sponsors stop assuming the project failed when they see the trend.

Common mistakes
  • Hiding the target so the picture looks better than it is.
  • Comparing KPIs across teams without normalising for context.
  • Letting the colour coding (red/amber/green) shift definition over time.

A single chart focused on one KPI over time, often with a target line and traffic-light status.

Problem solved: Lets a team see, at a glance, whether they are on track for the metric that matters most — without a deck, without commentary.


Bar chart

Rectangular bars with length encoding a numeric value, used to compare discrete categories.

The workhorse of comparison. Bars share a baseline (almost always zero) so the eye can compare lengths accurately. Variants include grouped bars (compare categories within a group), stacked bars (composition within categories), and horizontal bars (long category labels read more easily).
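Even without a charting library, a sorted bar chart is a few lines. A text-mode Python sketch with invented channel volumes, showing the sort-by-value discipline from the mistakes list below:

```python
# Illustrative contact volumes by channel (invented numbers).
volumes = {"phone": 4200, "webchat": 5100, "email": 1900, "branch": 600}

def bar_chart(data: dict[str, int], width: int = 30) -> str:
    """One bar per category, sorted descending so rank is visible."""
    top = max(data.values())
    lines = []
    for label, value in sorted(data.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * value / top)
        lines.append(f"{label:<8} {bar} {value}")
    return "\n".join(lines)

print(bar_chart(volumes))
```

Because bars share a zero baseline and a common scale, the rendered lengths are directly comparable, which is the property a truncated axis destroys.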

Question it answers

How do these categories compare in magnitude?

When to use
  • Comparing values across discrete categories (regions, products, teams).
  • Showing rank order — sorted bars communicate rank in addition to magnitude.
  • Counts and frequencies.
When NOT to use
  • When the x-axis is continuous (use a histogram or line chart).
  • Composition that exceeds ~5 categories per stack (use a small-multiples grid).
  • Negative values without a clear zero baseline.

BA example: A BA's bar chart of contact volumes by channel makes obvious that webchat now exceeds phone — leadership had assumed phone still dominated, and the staffing model is rebuilt around the chart.

Common mistakes
  • Truncating the y-axis, exaggerating differences.
  • Using a 3-D bar chart — depth distorts magnitude.
  • Sorting alphabetically when ranking communicates more.

A chart with rectangular bars whose lengths are proportional to category values, used to compare quantities across discrete categories.

Problem solved: Lets the eye compare magnitudes across categories instantly. Pre-attentive — readers don't have to do arithmetic to see which is bigger.


Line chart

A continuous line connecting data points along an ordered axis, used to show trend over time.

The default visualisation for time-series. The line implies continuity, so the x-axis must be ordered (usually time). Multiple lines on one chart compare trends; small multiples (one chart per series) compare patterns.
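Locating an inflection worth annotating can be done crudely by scanning period-over-period changes. A Python sketch with invented conversion numbers shaped like the signup example in this entry:

```python
def largest_jump(series: list[float]) -> int:
    """0-based index of the period with the biggest increase over the
    previous period, a crude way to locate a candidate inflection."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return deltas.index(max(deltas)) + 1

# Signup conversion over 12 weeks (invented); the step up sits at the
# eighth week to mirror the 'inflection after launch' example.
conversion = [0.041, 0.040, 0.043, 0.042, 0.041, 0.044, 0.043,
              0.061, 0.063, 0.065, 0.064, 0.066]
print(largest_jump(conversion))  # 7
```

A real analysis would test whether the jump persists rather than trusting a single delta, but the sketch shows why the line chart, not a snapshot, is the artefact that reveals it.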

Question it answers

How is this metric trending over time?

When to use
  • Showing how a metric changes over time.
  • Comparing two or three trends with the same units.
  • Highlighting an inflection (e.g., before/after a launch).
When NOT to use
  • Discrete categories without natural order — use a bar chart.
  • Too many series on one chart (>5) — use small multiples.
  • When the trend is irrelevant and only the latest value matters — use a number tile.

BA example: A BA's line chart of signup conversion over 12 weeks shows a clear inflection the week after the new onboarding launched — sufficient evidence to roll out to the second region.

Common mistakes
  • Connecting points across missing data without indicating the gap.
  • Mixing units on a dual y-axis without a clear visual cue.
  • Overplotting many series in similar colours.

Connected points showing how a value changes over a continuous variable — almost always time.

Problem solved: Reveals trend, seasonality, and anomalies that snapshots conceal.


Pie chart

A circle divided into slices, each proportional to a category's share of the whole.

Useful for one specific question: "what share of the whole does category X represent?" The eye is poor at comparing angle and area, so pies fail when there are more than three or four slices, and they fail badly at comparing slices of similar size. A bar chart almost always reads more accurately.

Question it answers

What share of the whole does this category represent?

When to use
  • Two or three categories where one obviously dominates and that is the message.
  • Communicating a part-to-whole relationship to a non-technical audience.
  • When the chart is the headline visual and accuracy matters less than recognition.
When NOT to use
  • More than three or four slices.
  • Comparing categories of similar size (the angle differences are too small to read).
  • Showing change over time (pies do not chain well).

BA example: A BA uses a pie chart on the cover page of a board pack to show that 70% of complaints originate from one channel — the simple visual carries the message faster than a bar chart.

Common mistakes
  • Using a pie chart by default when a bar chart would be clearer.
  • 3-D pies — perspective distorts every slice.
  • Adding many slices that sum to a meaningless "other".

A circle divided into wedges proportional to part-of-whole shares.

Problem solved: Communicates 'this is most of the whole' at a glance — when categories are few and relative sizes are dramatic.


Scatterplot

A grid of points where each point's x and y position encodes two numeric variables, used to expose relationships.

The canonical visualisation for relationship and correlation. Each observation is one dot; clustering, trend, and outliers all jump out visually. Adding colour or size encodes additional dimensions. A trend line (regression) makes the relationship explicit when there is one.
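The relationship a scatterplot shows can be quantified with a Pearson coefficient. A stdlib-only Python sketch with invented squad data, where a value near zero matches the "no relationship" verdict in the example below:

```python
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Team size vs cycle time for a handful of squads (invented numbers).
size = [4, 5, 6, 7, 8, 9, 10]
cycle = [12, 9, 14, 8, 13, 10, 12]
print(round(pearson(size, cycle), 2))
```

The number supplements the plot; the plot itself is still needed to spot clusters and outliers that a single coefficient hides.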

Question it answers

Is there a relationship between these two numeric variables?

When to use
  • Investigating whether two variables are related (correlation, not causation).
  • Spotting outliers in a sample.
  • Comparing groups along two dimensions.
When NOT to use
  • When one axis is categorical — use a box plot or grouped bars.
  • Very small samples — the chart looks empty.
  • When the relationship is already obvious — a number is enough.

BA example: A BA scatterplots cycle time vs. team size across 40 squads; the absence of correlation kills a proposal to scale teams as a route to faster delivery.

Common mistakes
  • Inferring causation from correlation.
  • Overplotting — too many overlapping points hide the pattern.
  • Forgetting to label outliers that deserve investigation.

Points plotted on two quantitative axes to reveal relationships between two variables.

Problem solved: Surfaces correlation, clusters, outliers, and the absence of a relationship — none of which a bar or line chart shows.


Heat map

A grid where the colour intensity of each cell encodes a numeric value, exposing patterns across two dimensions.

Heat maps come in two flavours: matrix heat maps (categories on both axes, e.g., calls by hour-of-day × day-of-week) and geographic heat maps (intensity overlaid on a map). The visual strength is pattern recognition at scale — humans see hotspots faster in colour than in numbers.
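The matrix flavour is just a two-key aggregation. A Python sketch using invented (day, hour) contact records, with the hotspot being the highest-count cell:

```python
from collections import Counter

# Illustrative contact records as (day, hour) pairs; real data would
# come from the contact-centre log.
contacts = [("Mon", 9), ("Wed", 12), ("Wed", 12), ("Wed", 13),
            ("Wed", 12), ("Fri", 9), ("Mon", 9), ("Wed", 12)]

grid = Counter(contacts)  # cell -> intensity

# The 'hotspot' is simply the cell with the highest count.
(day, hour), peak = grid.most_common(1)[0]
print(f"hotspot: {day} {hour}:00 with {peak} contacts")
# hotspot: Wed 12:00 with 4 contacts
```

Rendering the counts as colour is then a presentation choice; the analytical content is entirely in the aggregated grid.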

Question it answers

Where in this two-dimensional space are the hotspots?

When to use
  • Time-of-day / day-of-week patterns (staffing, demand).
  • Risk maps (likelihood × impact, with colour for severity).
  • Quickly comparing many cells in a matrix.
When NOT to use
  • When precise values matter — use a table.
  • Without a clear, perceptually uniform colour scale.
  • Audiences with colour-vision constraints unless the palette accommodates them.

BA example: A BA's hour×day heat map of contact-centre demand reveals a previously invisible Wednesday-lunchtime spike, leading to a targeted staffing change rather than a global increase.

Common mistakes
  • Using rainbow palettes that distort perception of magnitude.
  • Not anchoring the scale (each chart uses its own range, defeating comparison).
  • Treating a heat map as analysis when it is exploration.

A grid coloured by value — rows and columns are categories, colour intensity is the metric.

Problem solved: Compresses two-dimensional data so the eye spots concentration, gaps, and patterns instantly.


Basic data table

A grid of rows and columns showing exact values, used when precise numbers matter more than visual pattern.

Underrated. Tables answer "what is the value for X?" precisely, support sort and filter, and remain readable when printed. A well-formatted table with right-aligned numbers, consistent decimals, and minimal grid lines is often the right answer when stakeholders ask for a chart.
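The two formatting rules (right-aligned numbers, consistent decimals) translate directly to code. A Python sketch with invented rows; the column widths are arbitrary:

```python
# Illustrative region/value rows for a small lookup table.
rows = [("North", 1204.5), ("South", 988.25), ("Central", 13.0)]

def render(rows, label_w=8, num_w=10):
    """Left-align labels, right-align numbers, fix decimals at two."""
    lines = [f"{'Region':<{label_w}}{'Value':>{num_w}}"]
    for label, value in rows:
        lines.append(f"{label:<{label_w}}{value:>{num_w}.2f}")
    return "\n".join(lines)

print(render(rows))
```

Right alignment with fixed decimals puts every digit in the same column position, which is what makes the values comparable at a glance.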

Question it answers

What is the precise value for each combination of category and metric?

When to use
  • Lookup — "what is the number for region 4?".
  • Reports where the audience needs precise values.
  • Mixed data types in one view (text, numbers, dates).
When NOT to use
  • Showing a trend or pattern — a chart wins.
  • More than ~7 columns or ~25 rows without sort/filter.
  • When the audience wants the headline rather than the detail.

BA example: A BA replaces a colourful but unreadable bar chart in the monthly finance pack with a 12-row table; the CFO can finally cite numbers in the meeting without flipping back to the appendix.

Common mistakes
  • Centring numeric columns (they should be right-aligned).
  • Inconsistent decimal places, making columns hard to compare.
  • Heavy grid lines and shading that fight the data.

A grid of rows and columns showing exact values — when precision matters more than visual impression.

Problem solved: Sometimes the audience needs to look up exact numbers, not be impressed by a chart. Tables are honest about precision.


Category

Prioritization visuals

Prioritization visuals turn the politics of "what do we do next?" into a structured conversation. The visual itself rarely produces the answer; it produces the disagreement that needs to be resolved, on the table, with the right people in the room.

Best used for: Sequencing scope, risk, and investment with stakeholders in the room.

MoSCoW matrix

A four-bucket prioritisation model classifying each item as Must, Should, Could, or Won't have this time.

Originating in DSDM. The acronym deliberately separates Must (release-blocker) from Should (high-value but not blocking), Could (nice-to-have), and Won't this time (explicitly deferred — the most useful bucket). The discipline is to keep Must below ~60% of effort so the team has slack to absorb change.
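The ~60% discipline is easy to check once items carry effort estimates. A Python sketch with invented items; whether Won't effort belongs in the denominator is a judgment call, and this sketch excludes it:

```python
# Illustrative (item, MoSCoW tag, effort in days) triples.
items = [("login", "M", 8), ("audit log", "M", 5), ("export", "S", 4),
         ("themes", "C", 3), ("chatbot", "W", 6)]

def must_share(items) -> float:
    """Fraction of planned effort (Won't excluded) tagged Must."""
    planned = [(tag, d) for _, tag, d in items if tag != "W"]
    total = sum(d for _, d in planned)
    return sum(d for tag, d in planned if tag == "M") / total

share = must_share(items)
print(f"Must = {share:.0%} of effort")  # Must = 65% of effort
if share > 0.60:
    print("over the ~60% guideline; renegotiate scope")
```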

Question it answers

Within this release, which items are essential, valuable, optional, or out?

When to use
  • Sequencing scope ahead of a release with stakeholders in the room.
  • Negotiating cuts when capacity is fixed and scope must yield.
  • Settling political prioritisation arguments by forcing categorisation.
When NOT to use
  • When everything is genuinely a Must — re-frame the release.
  • Long backlogs (use a more granular ranking).
  • When effort or value vary wildly across items — combine with impact/effort.

BA example: A BA's MoSCoW for the launch release contains 7 Musts (60% of effort), 5 Shoulds, 8 Coulds, and 12 Won'ts. The Won't column kills three weeks of speculative work that would have been quietly funded otherwise.

Common mistakes
  • Letting Must absorb everything — the model collapses if Must is unbounded.
  • Treating Won't as Won't ever — it specifically means "not this release".
  • Using MoSCoW to rank within Must — it doesn't sequence at that level.

Categorise into Must-have, Should-have, Could-have, Won't-have-this-time. The discipline is the W.

Problem solved: Stakeholders default to 'everything is critical'. MoSCoW forces them to articulate trade-offs and to put items into Won't — the only category that protects delivery.


Impact / effort matrix

A 2×2 plotting items by their expected impact (y) and the effort to deliver them (x).

Sometimes called a value/cost matrix. Four quadrants suggest decisions: high impact / low effort (do now — "quick wins"), high impact / high effort (plan), low impact / low effort (fillers), low impact / high effort (avoid). The matrix is most useful when the impact–effort ratios are unobvious; if everything is a quick win, the matrix is not the bottleneck.
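The quadrant logic in code. A Python sketch assuming 1-10 workshop scores; the midpoint of 5 is an arbitrary cut, not a standard:

```python
def quadrant(impact: int, effort: int, mid: int = 5) -> str:
    """Classify a 1-10 scored item into the four standard quadrants."""
    if impact > mid:
        return "quick win" if effort <= mid else "plan"
    return "filler" if effort <= mid else "avoid"

# Illustrative backlog scores as (impact, effort) pairs.
backlog = {"bulk upload": (8, 3), "re-platform": (9, 9),
           "tooltip copy": (2, 1), "legacy rewrite": (3, 8)}
for item, (i, e) in backlog.items():
    print(f"{item}: {quadrant(i, e)}")
```

The code is trivial by design; the value of the matrix is the argument in the room about which score each item deserves.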

Question it answers

Where will we get the most value for the least effort?

When to use
  • Comparing 5–25 items where impact and effort vary widely.
  • Quarterly portfolio reviews to surface quick wins.
  • Removing low-impact / high-effort items from a backlog.
When NOT to use
  • When effort or impact cannot be honestly estimated.
  • Long-term strategic items — use a roadmap.
  • When the question is essential vs. optional — use MoSCoW.

BA example: A BA workshops 18 backlog items into an impact/effort matrix; three quick wins move into the next sprint, and two avoid-quadrant items are dropped from the roadmap entirely.

Common mistakes
  • Treating estimates as fact — the matrix is for conversation, not commitment.
  • Letting everything bunch in one quadrant by using inconsistent scales.
  • Confusing the matrix with prioritisation — it informs but does not produce a sequence.

A 2×2 plotting items by expected impact (vertical) and effort to deliver (horizontal). Quick wins live in the high-impact / low-effort quadrant.

Problem solved: Lets teams compare candidate work without long debates — the quadrant tells you where to start, where to plan, and where to drop.


Risk matrix

A grid (typically 5×5) plotting each risk by likelihood (one axis) and impact (the other), with colour for severity.

Standard in PMI/PRINCE2 risk management. Each risk is placed in a cell; cells are coloured red/amber/green by combined score. The matrix gives stakeholders a single picture of the risk landscape and surfaces which risks need active mitigation rather than monitoring.
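Cell scoring reduces to likelihood times impact plus banding. A Python sketch whose band thresholds are illustrative; real bands come from the organisation's risk framework:

```python
def severity(likelihood: int, impact: int) -> str:
    """Score a 5x5 cell and band it; thresholds are assumptions."""
    score = likelihood * impact
    if score >= 15:
        return "red"
    if score >= 8:
        return "amber"
    return "green"

# Illustrative risks as (likelihood, impact) on 1-5 scales.
risks = {"regulator engagement": (4, 5), "key-person loss": (3, 3),
         "minor scope creep": (2, 2)}
for name, (l, i) in risks.items():
    print(f"{name}: {severity(l, i)}")
# regulator engagement: red
# key-person loss: amber
# minor scope creep: green
```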

Question it answers

Which risks demand active mitigation rather than acceptance or monitoring?

When to use
  • Project / programme risk reviews.
  • Communicating risk to a sponsor or steering committee.
  • Comparing risk profiles across initiatives.
When NOT to use
  • Quantitative risk (use a financial model).
  • Single-risk deep-dives (use a risk register entry).
  • Operational/incident risk that needs continuous monitoring (use control charts).

BA example: A BA's risk matrix surfaces three red cells; one (regulator engagement) was missing from the project's mitigation plan entirely, and a workstream is added before the next sponsor review.

Common mistakes
  • Treating likelihood and impact as fixed — both move and the matrix needs re-running.
  • Letting too many risks land in red — colour fatigue sets in and severity becomes meaningless.
  • Mistaking placement on the matrix for a mitigation plan.

A grid of likelihood × impact, used to score and visualise risks so the team can focus on the worst.

Problem solved: Risk registers grow into spreadsheets nobody reads. The matrix gives risk a one-page visual identity and forces explicit scoring.


Roadmap view

A horizontal time-banded view showing what will be delivered, when, by which team or workstream.

Most often shown as a Now / Next / Later format (theme-based, no committed dates beyond "Now") or as a Gantt-style timeline. The roadmap is the bridge between strategy and backlog — coarse enough to remain stable for a quarter or two, specific enough to set expectations.

Question it answers

What are we committing to deliver, in what order, against the calendar?

When to use
  • Sponsor and stakeholder communication of the delivery plan.
  • Coordinating across teams or workstreams with shared dependencies.
  • Sequencing themes against external commitments (regulatory dates, market windows).
When NOT to use
  • As a sprint plan — too coarse.
  • As a contract — date-fixed roadmaps mislead when reality changes.
  • When the work is steady-state — use a service catalogue or KPI dashboard.

BA example: A BA reframes a Gantt that nobody trusted into a Now / Next / Later roadmap by theme; sponsor conversations move from "will this date hold?" to "is this still the right priority?".

Common mistakes
  • Confusing a roadmap with a project plan — different audiences, different durability.
  • Roadmapping individual stories instead of themes.
  • Failing to update — yesterday's roadmap is misinformation.

A timeline showing planned outcomes (and sometimes outputs) across multiple horizons — typically quarters or releases.

Problem solved: Aligns teams and sponsors on what's coming and when, while leaving room for the inevitable change. The artefact is a communication tool, not a contract.


Backlog priority view

An ordered list (or Kanban-style board) of work items ranked top-to-bottom in execution order.

The lived artefact of prioritisation. The backlog priority view enforces a single rank — every item has a position relative to every other — so there is always an unambiguous "what's next". Often visualised as a Kanban column for queued work, with WIP limits applied to columns to the right.
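The single-rank rule can be verified automatically. A Python sketch with invented items, flagging the shared-rank ties that make "what's next" ambiguous:

```python
# Illustrative (item, rank) pairs; a well-ordered backlog has one item
# per rank, so 'what's next' has exactly one answer.
backlog = [("fix login timeout", 1), ("export to CSV", 2),
           ("dark mode", 2), ("archive search", 4)]

def ordering_problems(backlog) -> list[str]:
    """Report every rank held by more than one item."""
    seen = {}
    problems = []
    for item, rank in backlog:
        if rank in seen:
            problems.append(f"rank {rank} shared by '{seen[rank]}' and '{item}'")
        seen[rank] = item
    return problems

print(ordering_problems(backlog))
# ["rank 2 shared by 'export to CSV' and 'dark mode'"]
```

Most backlog tools enforce this by construction; the check matters when the backlog lives in a spreadsheet.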

Question it answers

What is next, and what comes after that?

When to use
  • Day-to-day delivery sequencing within a team.
  • Showing the order of pull-from-backlog to a sponsor.
  • Surfacing bottlenecks (work piling up in one column) for action.
When NOT to use
  • Strategic prioritisation across themes — use a roadmap.
  • Relative-effort discussions — use impact/effort.
  • Categorical scope conversations — use MoSCoW.

BA example: A BA pairing with the product owner enforces a single ordering on a 60-item backlog; the team's stand-up becomes a 10-minute pull from the top instead of a daily prioritisation argument.

Common mistakes
  • Allowing multiple top-priority items — there is always exactly one "next".
  • Letting the backlog grow without grooming.
  • Treating the column board as the plan rather than the execution view of the plan.

An ordered list of work — top is next, bottom is later — derived from explicit criteria.

Problem solved: Without a single ordered backlog, teams pick from competing lists. A prioritised backlog is the operational expression of strategy at the team level.


Decision aid

Pick the right visual for the question

If you can write the question in one sentence, this table tells you which visual to reach for and which to avoid.

  • Where is time being lost in this end-to-end process? Reach for: Value stream map. Avoid: Flowchart (no timing data).
  • Who hands off to whom, and where do delays sit? Reach for: Swimlane diagram. Avoid: Decision tree (wrong question).
  • What is in scope and what is explicitly out? Reach for: Scope model. Avoid: Use case diagram (functional, not boundary).
  • What does the system exchange with the outside world? Reach for: Context diagram. Avoid: BPMN (over-engineered for this question).
  • For every combination of inputs, what should happen? Reach for: Decision table. Avoid: Decision tree (only when sequence matters).
  • Who has approval authority for each activity? Reach for: RACI matrix. Avoid: Stakeholder map (identifies, doesn't allocate).
  • Where should we spend limited stakeholder time? Reach for: Influence/interest grid. Avoid: RACI (different question — accountability, not engagement).
  • How does this metric compare to its target over time? Reach for: KPI chart. Avoid: Pie chart (no time dimension).
  • How do these categories compare in size? Reach for: Bar chart. Avoid: Pie chart (>3 slices).
  • Is there a relationship between these two numbers? Reach for: Scatterplot. Avoid: Bar chart (loses joint distribution).
  • Within this release, what's a Must vs a Won't? Reach for: MoSCoW matrix. Avoid: Roadmap (too coarse for release-level scope).
  • Where will we get most value for least effort? Reach for: Impact/effort matrix. Avoid: MoSCoW (categorical, not relative).
  • Which risks demand active mitigation? Reach for: Risk matrix. Avoid: SWOT (strategic, not project risk).

Practice with these visuals

Four formats: choose the right visual for a scenario, spot the problem in a sample diagram, match visuals to use cases, and short case-based exercises.

Open practice →