Management Accounting

From Data to Decision-Useful Information

AccountingBody Editorial Team

This chapter explores the transformation of raw data into decision-useful information, essential for effective management decisions. It covers the…

Learning objectives

By the end of this chapter, you should be able to:

  • Identify common organisational data sources and classify them for analysis, distinguishing between primary and secondary, and internal and external data.
  • Evaluate whether data is fit for purpose by assessing accuracy, completeness, consistency, and timeliness in the context of management decisions.
  • Apply cost–benefit thinking to the collection, processing, and reporting of information to improve decision usefulness.
  • Translate a practical business decision into clear information requirements aligned to strategic and operational needs.
  • Apply data governance principles to protect data integrity, security, and compliance with organisational policies.

Overview & key concepts

Organisations collect large volumes of data from operations, customers, and finance systems. Raw data does not automatically help decisions. It must be transformed into information that is relevant to a specific question, prepared at an appropriate level of detail, and supported by controls that make limitations visible.

Decision usefulness depends on two factors:

  • Relevance: the information addresses the decision drivers and compares options on a like-for-like basis.
  • Reliability: the underlying data is sufficiently accurate, complete, consistent, and timely for the decision.

This chapter focuses on turning operational and financial data into information that supports planning, performance measurement, forecasting, and control. It also links to governance and internal controls because strong decisions require an audit trail of where numbers came from and how they were checked.

Data vs information

Data is unprocessed facts captured by systems or people: invoice lines, timestamps, quantities, tracking scans, complaint records, or machine hours. Data may be messy, inconsistent, or incomplete.

Information is data that has been cleaned, organised, summarised, and presented to answer a specific question. A cost-per-unit metric, an on-time delivery rate, or a refund trend by courier are examples of information designed for decisions.

The same dataset can produce different information depending on the decision. For example, courier invoices can support budgeting (total spend), cost management (cost per parcel by weight band), or service evaluation (refund rates associated with late deliveries).

Primary and secondary data

Primary data is gathered specifically for the current decision (for example, a controlled pilot to measure delivery performance using defined service levels). It is usually highly relevant but costs time and resources to collect.

Secondary data already exists and is reused (for example, internal historic delivery data, or published market benchmarks). It is faster and cheaper but may not match the decision need due to different definitions, timing, or coverage.

A practical approach is to use secondary data to frame the question, then collect targeted primary data to reduce the key uncertainties.

Internal and external data

Internal data originates within the organisation: sales records, delivery scans, returns logs, customer service tickets, payroll records, and system logs. It is usually detailed and aligned to internal processes.

External data originates outside the organisation: competitor prices, supplier rate cards, published benchmarks, and economic indicators. It is useful for comparison and challenge, but definitions and reliability may vary.

Data quality

Data quality is not “perfect vs imperfect”. It is whether the data is good enough for the decision. Key dimensions include:

  • Accuracy: values are correct and free from errors.
  • Completeness: all necessary records/fields are present.
  • Consistency: definitions and classifications are applied uniformly across datasets and time.
  • Timeliness: the data is current enough and aligned to the decision window.

The stop/go screen described later in this chapter is a fast, operational way of testing these same four dimensions before you build calculations.

Data governance

Data governance is the set of roles, rules, and controls that determine how data is owned, defined, protected, stored, accessed, and changed. Effective governance improves decision usefulness by ensuring:

  • clear accountability for data quality,
  • standard definitions (data dictionary),
  • controlled access and security,
  • audit trails for changes,
  • compliance with organisational policies and relevant regulations.

How operational data links to accounting records

Operational data often becomes accounting data when it supports recognition and measurement of assets, liabilities, income, and expenses. Mapping operational events to expected double-entry patterns helps detect omissions, duplication, and timing issues.

The accounting equation as a sense-check (with the right nuance)

Assets = Liabilities + Equity

Double-entry records will still “balance” even when transactions are wrong, missing, or misclassified. Issues often surface through reconciliation differences between operational evidence and accounting control totals (for example, received-not-invoiced items, receivables control vs customer listings, or revenue vs dispatch/fulfilment records). The accounting equation is most useful when it supports these reconciliations and completeness checks, rather than as a standalone test.

Using journal “shapes” to reconcile systems

When you reconcile operational reports to the ledger, it helps to recognise the typical shape of entries that should exist if the operational event occurred. The entries below are illustrative—entities may label accounts differently—but the logic is consistent: identify what resource increased/decreased and what obligation or income/expense follows.

Core accounting patterns used in reconciliations (illustrative)

Sales and receivables (cash vs credit)

  • Cash sale (goods delivered and paid immediately):
      Dr Cash / Bank
      Cr Revenue
  • Credit sale (goods delivered, payment later):
      Dr Trade receivables
      Cr Revenue
  • Customer pays a credit invoice:
      Dr Cash / Bank
      Cr Trade receivables

Inventory and cost of sales

  • Purchase of inventory on credit:
      Dr Inventory
      Cr Trade payables
  • Purchase of inventory for cash:
      Dr Inventory
      Cr Cash / Bank
  • Perpetual system (at point of sale):
      Dr Cost of sales
      Cr Inventory

In a periodic system, cost of sales is not recorded at each sale. Instead, it is determined at period end using an inventory count and an inventory roll-forward (opening inventory + purchases − closing inventory), with end-of-period adjustment journals.
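The periodic roll-forward described above reduces to a one-line calculation. The sketch below is illustrative, using hypothetical figures rather than figures from this chapter:

```python
def periodic_cost_of_sales(opening_inventory, purchases, closing_inventory):
    """Period-end cost of sales under a periodic inventory system:
    opening inventory + purchases - closing inventory."""
    return opening_inventory + purchases - closing_inventory

# Hypothetical figures for illustration only
cogs = periodic_cost_of_sales(12_000, 48_000, 15_000)  # 45,000
```

The closing inventory figure comes from the period-end count, which is why the periodic approach cannot report cost of sales at each individual sale.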

Deferred income

When cash is received before goods/services are provided:

  • Receipt in advance:
      Dr Cash / Bank
      Cr Deferred income (a contract liability / unearned revenue in many syllabuses)
  • When the obligation is fulfilled:
      Dr Deferred income
      Cr Revenue

Use one main label consistently (here, “deferred income”) and treat alternative descriptions as explanatory only where needed.

Notes payable and interest

  • Borrowing cash via a note:
      Dr Cash / Bank
      Cr Notes payable
  • Accruing interest over time:
      Dr Finance cost (interest expense)
      Cr Interest payable (or accrued expenses)

Trade receivables: expected credit losses

Operational indicators (overdue status, disputes, credit limits) often drive estimates of collectability. A typical approach recognises a contra receivable:

  • Recognising or increasing a loss allowance:
      Dr Impairment loss (expected credit losses)
      Cr Loss allowance (contra trade receivables)
  • Writing off a specific irrecoverable balance (when appropriate):
      Dr Loss allowance
      Cr Trade receivables

Equity transactions (illustrative)

  • Issue of shares for cash:
      Dr Cash / Bank
      Cr Share capital (and share premium, if relevant)
  • Declaring a dividend:
      Dr Retained earnings
      Cr Dividends payable

Recognise a dividend liability only once the dividend is formally approved/declared in line with applicable law and the entity’s governance process. Dividends proposed after the reporting date are not recognised as liabilities at the reporting date.

  • Paying a dividend:
      Dr Dividends payable
      Cr Cash / Bank

Core theory and frameworks

Turning a decision into information requirements

Convert a broad decision into a structured question:

  1. State the decision clearly (what is being decided, and by when?).
  2. List feasible options (including “do nothing”).
  3. Define success criteria (profit, cash impact, risk limits, service targets, compliance).
  4. Identify key drivers (volumes, unit rates, failure rates, capacity constraints).
  5. Specify measures and definitions (exact formula, source, frequency).
  6. Set the time horizon (one-off transition costs vs steady-state).
  7. Decide the level of detail (overall average vs by product, geography, channel).

Definitions must be explicit. If “on-time delivery” is unclear (first attempt vs final delivery; working days vs calendar days), the metric will not be comparable across options.

Selecting data sources

A sensible hierarchy is:

  1. Internal secondary data first (fast, detailed, usually relevant).
  2. External data for benchmark and challenge (useful, but definitions may differ).
  3. Primary data to resolve decision-critical uncertainty (targeted pilots, surveys, sampling).

Always document known limitations (coverage gaps, sampling bias, inconsistent coding) so decision-makers understand uncertainty.

A quick “stop/go” screen before modelling

Before building calculations, run four fast checks that catch most decision-wrecking issues:

  • Coverage: Do you have all weeks, sites, and key fields needed for a like-for-like comparison (e.g., parcel weight band, service level, destination type)?
  • Plausibility: Do the numbers make operational sense (e.g., duplicate invoice IDs, delivery dates before dispatch, negative quantities, impossible rates)?
  • Comparability: Are definitions aligned across sources (e.g., “on-time” measured the same way for both couriers; refunds coded consistently)?
  • Freshness and cut-off: Does the dataset match the decision window, and are charges/refunds matched to the same shipment cohort?

If any check fails, fix it or qualify it before producing “precise” outputs.
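The coverage and plausibility checks above lend themselves to a quick automated pass before modelling. The sketch below is a minimal illustration, assuming each record arrives as a dictionary with hypothetical field names (`invoice_id`, `week`, `dispatched`, `delivered`, `quantity`); real systems will differ:

```python
def stop_go_screen(records, expected_weeks):
    """Minimal coverage and plausibility screen over shipment records.

    Returns a list of issue descriptions; an empty list means both
    checks passed. Comparability and freshness checks still need
    human judgement about definitions and cut-off.
    """
    issues = []

    # Coverage: every expected week should appear at least once.
    missing = sorted(set(expected_weeks) - {r["week"] for r in records})
    if missing:
        issues.append(f"coverage: missing weeks {missing}")

    # Plausibility: duplicate IDs, impossible dates, negative quantities.
    seen_ids = set()
    for r in records:
        if r["invoice_id"] in seen_ids:
            issues.append(f"plausibility: duplicate invoice {r['invoice_id']}")
        seen_ids.add(r["invoice_id"])
        if r["delivered"] is not None and r["delivered"] < r["dispatched"]:
            issues.append(f"plausibility: {r['invoice_id']} delivered before dispatch")
        if r["quantity"] < 0:
            issues.append(f"plausibility: {r['invoice_id']} negative quantity")
    return issues
```

Any non-empty result is a “stop”: fix the data or qualify the limitation before producing outputs.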

Cost–benefit thinking for information

Information work should be treated like an investment: it only earns a return if it changes what you do.

A practical way to judge whether better information is worth the effort is to ask:

  • Could better information realistically change the choice? If the options are already clearly different, extra precision may add comfort but not value.
  • If the choice changes, how big is the consequence? Focus on decisions with large cash impacts (pricing, volume, claims/refunds, capacity, contract terms).
  • What is the cost of improving the information now and ongoing? Include staff time, system amendments, cleansing effort, and maintenance.

Spend most effort on assumptions that are both uncertain and decision-critical (small changes would flip the recommendation). Record what you chose not to improve and why.

Verification and reconciliation

Verification increases confidence in data and conclusions. Typical checks include:

  • reconciling totals to trusted control figures (e.g., courier charges to accounts payable ledger totals),
  • checking trends for reasonableness (spikes, step changes, unexpected seasonality),
  • sampling records back to source documents (invoice lines to shipment references),
  • confirming cut-off (charges, deliveries, and refunds relate to the same period and cohort).

These controls matter most when combining data from different systems (logistics platform, finance ledger, customer service tools).
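At its simplest, reconciling an operational total to a trusted control figure means computing a difference and judging whether it is acceptable. The function below is an illustrative sketch (the mismatched total is invented for the example), not a prescribed procedure:

```python
def reconcile_to_control(source_total, control_total, tolerance_pct=0.0):
    """Compare an operational total (e.g. summed courier invoice lines)
    to a trusted control figure (e.g. a payables ledger control total).

    Returns (difference, acceptable). A positive difference means the
    source exceeds the control total. tolerance_pct is the acceptable
    difference as a fraction of the control total (often zero).
    """
    difference = round(source_total - control_total, 2)
    acceptable = abs(difference) <= abs(control_total) * tolerance_pct
    return difference, acceptable

# Illustrative: invoice lines sum to £232,560 against a £232,440 control total
diff, ok = reconcile_to_control(232_560.00, 232_440.00)
# A £120 unacceptable difference prompts a search for duplicated,
# missing, or misclassified lines before unit costs are calculated.
```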

Bias and limitations

Bias can arise from incentives and measurement design:

  • self-reporting may overstate productivity,
  • complaints may be under-recorded if channels are fragmented,
  • pilots may not represent peak conditions,
  • external benchmarks may use different definitions.

A strong analysis makes limitations visible and tests whether conclusions still hold under reasonable alternative assumptions.

Worked example

Narrative scenario

A retail business ships small parcels to customers across the UK. It currently uses Courier A and is considering switching to a cheaper provider, Courier B.

Management wants a decision based on total cost per delivered parcel and service performance. The business has internal records (parcel volumes, delivery scans, refund logs, complaint tickets) and supplier invoices. It also has an external benchmark report, but the decision will rely primarily on internal evidence.

Data for the last quarter (13 weeks) is summarised below (VAT excluded for consistency).

Operational volumes (last quarter)

  • Parcels dispatched: 52,000
  • Parcels delivered (final status within the quarter): 50,700
  • Parcels returned to sender (undelivered): 1,300

Courier A (current provider)

  • Invoice charges (excluding VAT): £232,440
  • Charges include a quarterly account fee: £6,500 (already included in the invoice total)
  • On-time deliveries (tracking scans): 46,155 out of 50,700 delivered
  • Customer refunds issued due to late or failed delivery: £9,360
  • Complaint tickets tagged “delivery”: 405

Courier B (proposed provider, pilot sample)

A pilot ran for 4 weeks on a subset of shipments. Results were scaled to a full-quarter equivalent assuming the same parcel mix and destination profile.

  • Estimated quarterly charges (excluding VAT): £205,920
  • Includes a one-off transition cost: £8,000 (incurred if switching)
  • On-time deliveries (pilot, scaled): 44,110 out of 50,700 delivered
  • Customer refunds due to late or failed delivery (pilot, scaled): £14,820
  • Complaint tickets tagged “delivery” (pilot, scaled): 520

Data-quality note: complaint tickets may be under-recorded because some complaints arrive via social media and are not always logged in the ticketing system.

Required

(a) Cost per delivered parcel
(b) On-time delivery percentage
(c) Data quality issues affecting the decision
(d) Recommendation

Solution

(a) Cost per delivered parcel

For decision purposes, define “total cost” as courier charges plus delivery-related refunds. For Courier B, include the one-off transition cost because the decision is “switch or not”.

Total cost (Courier A) = courier charges + refunds
£241,800 = £232,440 + £9,360

Delivered parcels = 50,700

Cost per delivered parcel (Courier A)
£4.77 = £241,800 / 50,700

Total cost (Courier B) = courier charges + refunds + transition cost
£228,740 = £205,920 + £14,820 + £8,000

Delivered parcels = 50,700 (scaled to the same delivery base)

Cost per delivered parcel (Courier B)
£4.51 = £228,740 / 50,700

(b) On-time delivery percentage

On-time % = (on-time deliveries / delivered parcels) x 100

Courier A:
91.04% = (46,155 / 50,700) x 100

Courier B:
87.00% = (44,110 / 50,700) x 100
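The workings in (a) and (b) can be reproduced with a short script. This is an illustrative sketch using the scenario's figures; the function names are invented for this example:

```python
def cost_per_delivered_parcel(charges, refunds, delivered, transition_cost=0.0):
    """Decision-relevant total cost divided by delivered parcels."""
    return (charges + refunds + transition_cost) / delivered

def on_time_percentage(on_time, delivered):
    """On-time deliveries as a percentage of delivered parcels."""
    return on_time / delivered * 100

DELIVERED = 50_700  # same delivery base for both options

# Courier A: invoiced charges plus delivery-related refunds
courier_a_cost = cost_per_delivered_parcel(232_440, 9_360, DELIVERED)     # ≈ £4.77
# Courier B: scaled pilot charges and refunds plus the one-off transition cost
courier_b_cost = cost_per_delivered_parcel(205_920, 14_820, DELIVERED, 8_000)  # ≈ £4.51

courier_a_otd = on_time_percentage(46_155, DELIVERED)  # ≈ 91.04%
courier_b_otd = on_time_percentage(44_110, DELIVERED)  # ≈ 87.00%
```

Keeping both couriers on the same delivery base (50,700 parcels) is what makes the unit costs comparable.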

(c) Data quality issues affecting the decision

Key risks and why they matter:

  • Scope risk (what costs are included): confirm whether undelivered/returned parcels generate extra charges (re-delivery, return-to-sender fees) and whether those are included consistently for both options. Excluding these can understate the true cost of poorer service.
  • Definition risk (on-time delivery): confirm that “on-time” is measured against the same service promise and the same event (first attempt vs final delivery). If definitions differ, the percentages are not comparable.
  • Pilot scaling risk (Courier B): a four-week pilot may not represent full-quarter conditions (seasonality, promotions, peak periods). Scaling assumes the pilot parcel mix, geography, and service level match the full quarter.
  • Service credits/penalties and commitments: invoice totals may be affected by late-delivery credits, service penalties, or minimum volume commitments that differ between couriers. These can distort cost comparisons if not captured consistently.
  • Bundled charges and timing: ensure invoices include all surcharges and that credit notes/adjustments are captured in the same period. Timing differences can distort unit costs.
  • Complaint under-recording: if some complaints bypass the ticketing system, complaint counts understate service issues. This may be more severe for the courier with weaker performance.
  • Matching refunds to cohorts: late deliveries may generate refunds in later weeks. If refunds lag shipments, current-period service cost may be understated.

Mitigation: tighten definitions and run sensitivity analysis on the most uncertain and decision-critical items (refunds, surcharges, and service-credit exposure).

(d) Recommendation

Courier B is cheaper on the defined total cost per delivered parcel (including refunds and the transition cost):

  • Courier A: £4.77 per delivered parcel
  • Courier B: £4.51 per delivered parcel

However, Courier B has weaker on-time delivery performance:

  • Courier A on-time: 91.04%
  • Courier B on-time: 87.00%

A decision should combine cost evidence with an explicit service floor. For example, management could require:

  • on-time delivery ≥ 90%, and
  • refunds per delivered parcel no worse than Courier A by more than £0.05 (or another stated tolerance),

before approving a switch.

On the current evidence, the recommended approach is:

  • Do not switch immediately.
  • Extend the pilot to include peak conditions and confirm like-for-like definitions and cost scope (including surcharges, undelivered parcel costs, and any service credits/penalties).
  • Switch only if Courier B meets the minimum service threshold while maintaining a clear cost advantage on a like-for-like basis.

Sensitivity illustration (one-line)

If Courier B delivery-related refunds were 15% higher than estimated:
New refunds = £14,820 x 1.15 = £17,043
New total cost (Courier B) = £205,920 + £17,043 + £8,000 = £230,963
New cost per delivered parcel = £230,963 / 50,700 = £4.56
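The same sensitivity calculation generalises to any refund uplift. A minimal sketch reusing the example's figures (the function name is invented for illustration):

```python
def courier_b_unit_cost(refund_uplift):
    """Courier B cost per delivered parcel if the scaled refund
    estimate rises by the given fraction (figures from the example)."""
    charges, refunds, transition, delivered = 205_920, 14_820, 8_000, 50_700
    return (charges + refunds * (1 + refund_uplift) + transition) / delivered

for uplift in (0.00, 0.15, 0.30):
    print(f"refunds +{uplift:.0%}: £{courier_b_unit_cost(uplift):.2f} per parcel")
```

Even a 30% refund overrun leaves Courier B below Courier A's £4.77, which is why the service threshold, not cost, drives the recommendation.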

Exam technique focus

  • Define measures first (e.g., “delivered”, “on-time”, “refunds included/excluded”).
  • Show workings clearly and use a comparable base.
  • State key assumptions explicitly (pilot scaling, scope of charges, service credits/penalties).
  • Identify the biggest data risks and explain how they could change the conclusion.
  • Conclude with a recommendation that balances cost and service against a stated threshold.

Common pitfalls and misunderstandings

  • Confusing raw data with decision-ready information: calculations must be defined and comparable before conclusions are drawn.
  • Comparing different time periods or bases: mixing a pilot period with a full quarter without proper adjustment creates misleading results.
  • Ignoring service-related costs: focusing only on courier invoices misses refunds, credits/penalties, and downstream effects.
  • Using inconsistent definitions: “on-time” and “delivered” must be defined consistently across options.
  • Treating incomplete logs as complete evidence: complaints and refunds may sit across systems and channels.
  • Skipping reconciliation: supplier invoice totals should be reconciled to ledger control totals and validated for missing surcharges or credit notes.
  • Overconfidence in external benchmarks: benchmarks can guide expectations, but internal evidence should drive the decision where possible.
  • Failing to disclose limitations: decision-makers need uncertainty to be visible, not hidden behind point estimates.

Summary

Decision usefulness comes from aligning information to a specific question and ensuring the underlying data is reliable enough to support the choice. Classifying data sources helps balance relevance, speed, and cost. A short stop/go screen (coverage, plausibility, comparability, and freshness/cut-off) prevents detailed analysis built on weak foundations. Cost–benefit thinking keeps information work proportionate to the value it adds. Governance, verification, and reconciliation make data issues visible and manageable, improving planning, performance measurement, forecasting, and control.

Glossary

Data
Unprocessed facts captured by systems or people (e.g., invoice lines, timestamps, quantities).

Information
Data that has been cleaned, organised, summarised, and presented to answer a specific question.

Primary data
Data collected specifically for the current decision (e.g., a targeted pilot or bespoke survey).

Secondary data
Existing data reused for a new purpose (e.g., prior period records, published benchmarks).

Internal data
Data generated within the organisation (e.g., sales records, delivery scans, returns logs, ticketing systems).

External data
Data obtained from outside the organisation (e.g., competitor pricing, industry reports, economic indicators).

Data quality
The degree to which data is suitable for use, commonly assessed through accuracy, completeness, consistency, and timeliness.

Data governance
Roles, policies, and controls that define how data is owned, defined, protected, stored, accessed, and changed.

Verification
Checks that increase confidence in data (e.g., sampling to source documents, reasonableness tests).

Reconciliation
Comparing totals and key metrics to trusted control figures (e.g., ledger control accounts) to detect missing, duplicated, or misclassified items.

Bias
Systematic distortion arising from incentives, measurement design, or data collection methods that causes results to differ from reality.


Written by

AccountingBody Editorial Team