Tests of Controls: Designing, Performing, and Interpreting
This chapter explores the design, execution, and interpretation of tests of controls within an audit context. It emphasises understanding control objectives…
Learning objectives
By the end of this chapter you will be able to:
- Explain why auditors evaluate internal controls and how this affects the planned audit approach.
- Distinguish between procedures used to assess design and implementation of controls and procedures used to test operating effectiveness.
- Link financial reporting risks to control objectives, relevant assertions, and key controls.
- Design control tests that specify the population, period coverage, attributes, evidence, and timing.
- Perform and document control testing, evaluate deviations, and determine how further audit procedures should respond.
Overview & key concepts
Auditors evaluate internal controls for two related but distinct reasons:
- Design and implementation (understanding the system): to determine whether controls are suitably designed to address identified risks and whether they have been put into operation. This is typically obtained through process understanding, walkthroughs, and observing/confirming that key steps are in place.
- Operating effectiveness (testing controls): to obtain evidence about whether a control operated as intended throughout the relevant period. These procedures are commonly referred to as tests of controls.
In most audit approaches, the detailed work called “tests of controls” is aimed at operating effectiveness. Evidence about design and implementation usually comes earlier, when the auditor gains an understanding of processes and performs walkthroughs.
Control testing is most useful when:
- reliance on controls is expected to make the audit more efficient; or
- a substantive-only approach would be inefficient or impractical, for example where there are high volumes of routine transactions processed through systems.
Even where controls are strong, substantive procedures are still required in many areas, particularly for material balances, estimates, and disclosures where controls do not remove the need for direct evidence.
Control objectives and key controls
Control objectives
A control objective states what a control is meant to achieve, usually expressed as the risk it is addressing. Examples in the purchases and payments cycle include:
- supplier payments relate only to genuine business purchases
- invoices are recorded accurately and in the correct period
- supplier data (including bank details) cannot be changed without appropriate authorisation
Control objectives should be linked to relevant assertions. In this area, common assertions include:
- occurrence (transactions happened and relate to the entity) — often where “validity/authorisation” is examined in practice
- accuracy (amounts are correctly recorded and calculated)
- cut-off (recorded in the correct accounting period)
- classification (recorded in the correct accounts)
- completeness (all relevant transactions are recorded)
Linking controls to assertions (illustration):
- a three-way match supports occurrence (valid purchase), accuracy (price/quantity), and sometimes cut-off (receipt date)
- approval of the payment run supports occurrence/authorisation and may also support accuracy (review of unusual items)
- independent bank reconciliation supports detection of errors affecting cash and payables (accuracy/completeness), but often after payment has occurred
Key controls
Key controls are those the auditor expects to rely on, or those that are critical to responding to assessed risks. In exam answers, it is helpful to distinguish them from non-key controls:
- Key controls directly address significant risks or major process points where failure could lead to a material misstatement.
- Non-key controls may still be useful, but are less central to the audit response.
Operating effectiveness and deviations
Operating effectiveness
A control operates effectively when it is performed:
- by appropriate personnel with the right authority
- in the required manner
- at the right time (for example, approval occurs before payment is released)
- consistently across the period being relied upon
Deviations (exceptions)
A deviation is an instance where the control does not operate as designed for a tested item. The term exception is often used as shorthand. Examples include:
- missing evidence of required approval
- approval obtained after payment release
- match performed but key discrepancies not resolved before payment
Not all deviations have the same implications. The auditor should consider both the frequency and the nature of deviations, including whether they indicate a one-off lapse or a systematic weakness.
Exception rate
A basic measure used in many controls tests is the deviation (exception) rate:
Exception rate = (Number of exceptions ÷ Items tested) × 100%
This is an indicator of operating effectiveness, but it is not the only factor in the conclusion.
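As a minimal sketch, the exception-rate arithmetic above can be expressed as a short function. The figures below mirror the worked example later in the chapter; the tolerable rate against which the result is compared remains a matter of auditor judgement.

```python
# Exception (deviation) rate for a controls sample.
# The figures are illustrative, not from any real engagement.

def exception_rate(exceptions: int, items_tested: int) -> float:
    """Return the deviation rate as a percentage of items tested."""
    if items_tested <= 0:
        raise ValueError("items_tested must be positive")
    return exceptions / items_tested * 100

rate = exception_rate(3, 40)   # 3 deviations in a sample of 40 items
print(f"{rate:.1f}%")          # prints 7.5%
```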
Test methods and the strength of evidence
Common methods include:
- Inspection: examining documents, records, or system logs (e.g., evidence of matching or approval).
- Observation: watching a control being performed (useful but limited to what is seen at that moment).
- Inquiry: asking staff about the operation of controls (normally needs corroboration).
- Re-performance: independently executing the control (often strong evidence, particularly for calculations or system-driven checks).
In practice, auditors often combine methods. Inquiry may explain what should happen; inspection and re-performance provide more persuasive evidence that it did happen.
Sampling and populations in control testing
Population
The population is the full set of items to which the control applies over the period being tested (e.g., all supplier invoices processed during the year).
A frequent weakness is selecting a population that does not match the control. For example, testing only year-end invoices is not appropriate if the control operates throughout the year.
Sampling risk and non-sampling risk
- Sampling risk: the risk that the sample result differs from what would be concluded if the whole population were tested.
- Non-sampling risk: the risk that the auditor reaches the wrong conclusion due to poor design, poor execution, or incorrect evaluation (e.g., testing the wrong attribute, misunderstanding what evidence demonstrates performance).
Good test design reduces non-sampling risk; balanced sample selection and careful evaluation help manage sampling risk.
Compensating controls and reliance strategy
Compensating controls
A compensating control is a different control that reduces the same risk when a primary control is weak. For example, if evidence of invoice approval is inconsistent, a timely independent review of bank reconciliations and follow-up of unusual payments may reduce the risk that unauthorised payments remain undetected.
Compensating controls do not automatically eliminate the problem. The auditor must consider whether they operate with sufficient timeliness and precision to address the specific risk.
Reliance strategy
A reliance strategy means the auditor plans to place reliance on controls and, as a result, adjust further audit procedures. Reliance can affect the nature, timing, and extent of substantive procedures, but it does not usually remove the need for substantive work entirely—particularly for material balances, estimates, and disclosures.
Core theory and frameworks
When are tests of controls necessary vs optional?
Control testing is commonly performed when:
- the auditor plans to rely on controls to respond to assessed risks; or
- substantive-only testing is not expected to be efficient or sufficient on its own (often in automated, high-volume environments); or
- the auditor needs evidence about operating effectiveness to support the planned audit approach.
Control testing may be limited (or not performed in detail) when the auditor plans a substantive-focused approach and substantive evidence is expected to be obtained efficiently. However, the auditor still needs an understanding of processes and relevant controls to assess risks and design procedures appropriately.
Interim testing and roll-forward
Controls are often tested at an interim date. If the auditor plans to rely on interim results for the full year, further evidence is needed that the controls continued to operate effectively for the remaining period. Typical approaches include:
- testing additional items from the period after interim testing; and/or
- obtaining evidence that there were no relevant process changes and the control continued to operate (supported by targeted testing rather than assumption).
Selecting controls and defining attributes
Selecting the right controls to test involves identifying those that address higher-risk areas and key process points. The attributes tested should be:
- observable (supported by evidence)
- defined by clear pass/fail criteria
- directly linked to the control objective
Examples of strong attributes:
- “Evidence that supplier bank detail changes were approved by an independent reviewer before the change took effect.”
- “Evidence that PO, receipt documentation, and invoice were matched and differences resolved before payment release.”
Designing the test: method, evidence, and timing
A well-designed control test specifies:
- the control and related control objective
- population and period coverage
- sample selection approach
- test method(s)
- evidence expected
- timing across the period (not concentrated at one point)
A weekly control should normally be tested across different weeks; a daily control across a spread of days and months.
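To illustrate period coverage, the sketch below spreads a sample evenly across the months of the year rather than concentrating selections at one point. The population, invoice IDs, and sample size are synthetic, chosen only to match the scale of the worked example.

```python
# Illustrative sketch: spread a controls sample across the months of the
# period under audit. Invoice IDs and counts here are made up.
import random
from collections import defaultdict

def stratified_by_month(items, sample_size, seed=0):
    """items: list of (item_id, month) pairs; returns a sample spread over months."""
    rng = random.Random(seed)
    by_month = defaultdict(list)
    for item_id, month in items:
        by_month[month].append(item_id)
    months = sorted(by_month)
    per_month, remainder = divmod(sample_size, len(months))
    sample = []
    for i, m in enumerate(months):
        k = per_month + (1 if i < remainder else 0)  # distribute any leftover
        sample.extend(rng.sample(by_month[m], min(k, len(by_month[m]))))
    return sample

# 2,400 invoices spread over 12 months, sample of 30 (at least 2 per month)
population = [(f"INV{n:04d}", (n % 12) + 1) for n in range(2400)]
picked = stratified_by_month(population, 30)
print(len(picked))  # prints 30
```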
Performing the test and documenting results
Documenting the work
Your working papers should tell the story of the test from start to finish. Someone else on the audit team should be able to see:
- which control was tested and why it matters (the risk/control objective link)
- the population and how the sample was selected
- exactly what you checked for each item (the attribute criteria)
- what evidence you looked at (file references, screenshots, log extracts)
- what you found (including any deviations and how they were followed up)
- your conclusion on whether the control can be relied on for the period, and what changes (if any) are needed to the audit approach
Evaluating deviations and updating the audit plan
When deviations are identified, the auditor should:
- quantify the deviations (how many and how often)
- understand the cause and pattern (isolated lapse vs systematic issue; specific period, staff member, supplier, or location)
- evaluate whether reliance remains appropriate
- update further audit procedures to respond to the revised assessment of risk
Dual-purpose tests and edge cases
Dual-purpose tests can be efficient where one procedure supports both:
- control operation (e.g., approval present and timely), and
- substantive evidence (e.g., recalculation, agreement to supporting documents)
Dual-purpose tests should be planned so that sample selection and coverage are suitable for both objectives. Documentation should clearly distinguish:
- the control attributes tested (operating effectiveness), and
- the substantive assertions addressed (amounts, cut-off, classification, etc.)
Edge cases require careful evaluation. For example, an approval obtained after payment may provide some accountability evidence but generally does not meet a control objective intended to prevent unauthorised payments.
Automated controls and the IT environment
For automated controls (for example, system-enforced matching rules), reliance usually requires evidence that the control is reliable in the system environment. At a high level, auditors often consider:
- whether relevant access controls and change controls support the reliability of the application control; and/or
- whether there is direct evidence the application control operated as intended (such as system configuration review, system logs, or re-performance using system outputs)
The key point is that automated controls still require audit evidence—they are not assumed to work simply because they are automated.
Worked example
Narrative scenario
ABC Ltd has annual revenue of GBP 1,190,000. The company operates controls over supplier payments to reduce the risk of incorrect or unauthorised payments. During the year, ABC Ltd processed 2,400 supplier invoices.
Key control points in the payments process
ABC Ltd uses several control points to reduce the risk of incorrect or unauthorised supplier payments:
- Supplier data changes are restricted and reviewed: changes to supplier bank details are initiated by accounts payable staff but must be approved by a separate senior reviewer before they take effect.
- Payment is supported by matching evidence: before an invoice is cleared for payment, the invoice is matched to an approved purchase order and receiving evidence, with differences documented and resolved.
- The payment run is checked before release: the payment run is reviewed for unusual payees/amounts and approved by a manager who is not involved in posting invoices.
- After-the-event detection: bank reconciliations are prepared promptly and independently reviewed, with follow-up of unusual or unmatched payments.
The audit team plans to test the operating effectiveness of these controls.
Required
- Calculate the exception rate for a sample of invoices tested.
- Evaluate the implications of the exception rate on control reliance.
- Design a test of control for the matching requirement.
- Interpret the results and adjust the audit plan accordingly.
Solution
1) Calculate the exception rate
- Items tested: 40 invoices
- Deviations found: 3 invoices lacked evidence of required approval
Exception rate = (3 ÷ 40) × 100% = 7.5%
2) Evaluate implications for reliance
Assume:
- tolerable deviation rate: 5%
- observed deviation rate: 7.5%
Because 7.5% exceeds 5%, the results do not support the original planned level of reliance on this control without further evaluation.
Exam-safe nuance (sampling and judgement)
The observed rate is an indicator based on a sample, not a measurement of the entire population. The conclusion should also consider sampling risk and the nature and cause of deviations (for example, whether approvals were genuinely missing, obtained late, or evidenced elsewhere). Where the exceptions suggest a systematic weakness, the auditor would reduce reliance and increase substantive work.
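One illustrative way to allow for sampling risk is to consider an upper limit on the population deviation rate at a chosen confidence level, rather than relying on the observed rate alone. The sketch below computes an exact binomial upper bound by bisection; it is a numerical illustration under an assumed 95% confidence level, not a prescribed audit method.

```python
# Numerical sketch: exact binomial upper bound on the population
# deviation rate, given the deviations observed in a sample.
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_deviation_limit(exceptions, n, confidence=0.95):
    """Smallest population rate p at which observing <= exceptions deviations
    in n items is no more likely than (1 - confidence)."""
    lo, hi = exceptions / n, 1.0
    for _ in range(60):  # bisection to the required precision
        mid = (lo + hi) / 2
        if binom_cdf(exceptions, n, mid) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return hi

limit = upper_deviation_limit(3, 40)  # 3 deviations in 40 items
print(f"{limit:.1%}")  # well above the observed 7.5% rate
```

The point of the sketch is qualitative: even when the observed rate is 7.5%, the population rate consistent with the sample evidence could be considerably higher, which is why sample results require judgement before concluding on reliance.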
3) Design a test for the matching control
Control objective: only invoices relating to goods received and properly ordered are approved for payment.
Population: all supplier invoices processed during the year (2,400 invoices).
Period coverage: the full year (include items from different months and different payment runs).
Sample: select 30 invoices across the year.
Method: inspection plus re-performance.
Attribute criteria (pass/fail for each item):
- Purchase order exists and matches the supplier and items invoiced.
- Receiving evidence exists for the goods/services invoiced.
- Invoice quantities and prices agree to the PO/receipt evidence within policy tolerances, or differences are documented and resolved.
- Evidence shows the match and any resolution occurred before the invoice was released for payment.
Evidence retained: copies/screenshots of PO, receiving documentation, invoice, match report/checklist, and evidence of resolution (including relevant dates/timestamps).
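The pass/fail attribute criteria above can be sketched as a simple check applied to each sampled item. The field names, record structure, and 1% price tolerance below are hypothetical, not taken from any real policy or system.

```python
# Illustrative pass/fail evaluation of the matching-control attributes.
# All field names and the tolerance threshold are hypothetical.
from datetime import date

TOLERANCE = 0.01  # assumed 1% price tolerance per policy

def matching_attributes_pass(inv: dict) -> bool:
    """Every attribute must pass; any single failure makes the item a deviation."""
    has_po = inv["po_ref"] is not None
    has_receipt = inv["receipt_ref"] is not None
    within_tolerance = (
        has_po
        and abs(inv["invoice_amount"] - inv["po_amount"]) <= TOLERANCE * inv["po_amount"]
    ) or inv["difference_resolved"]
    # The match (and any resolution) must occur before payment release
    matched_before_payment = inv["match_date"] < inv["payment_date"]
    return has_po and has_receipt and within_tolerance and matched_before_payment

item = {
    "po_ref": "PO-1041", "receipt_ref": "GRN-0877",
    "invoice_amount": 1005.0, "po_amount": 1000.0,
    "difference_resolved": False,
    "match_date": date(2024, 3, 4), "payment_date": date(2024, 3, 6),
}
print(matching_attributes_pass(item))  # prints True: all attributes met
```

A variant with a match dated after the payment date would fail, mirroring the edge case noted earlier: an after-the-event approval generally does not meet a preventive control objective.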
4) Interpret results and adjust the audit plan
Interpretation
The deviations indicate that the approval control did not operate consistently. This increases the risk of unauthorised or incorrect payments and suggests a higher risk of misstatement in the purchases, payables, and cash disbursements cycle.
Audit response
- Revise reliance: reduce planned reliance on the approval control unless further work supports reliance (e.g., additional testing or evidence that deviations were isolated and not indicative of broader failure).
- Increase substantive procedures, focusing on:
- validity of invoices and payments (agreement to PO/receipt evidence and supplier statements)
- duplicate payment testing (review for repeated invoice numbers, amounts, and suppliers)
- cut-off testing around year-end (goods received and invoices recorded in the correct period)
- review of unusual suppliers or bank detail changes
- Consider compensating controls: evaluate whether independent bank reconciliations and follow-up are timely and detailed enough to reduce risk, and test their operating effectiveness if reliance is planned.
- Update risk assessment: if deviations show a pattern (specific period, staff member, supplier category, or system issue), expand procedures in the affected area and adjust the audit plan accordingly.
Common pitfalls and misunderstandings
- Treating policy existence as control operation: a policy is not evidence that the control operated.
- Testing only at year-end: controls operating through the year require period coverage.
- Missing timing requirements: approvals after processing often fail the control objective.
- Relying on inquiry alone: inquiry should be supported by inspection and/or re-performance.
- Quantifying deviations without analysing them: cause and pattern matter.
- Weak attribute definitions: vague criteria lead to inconsistent conclusions.
- Ignoring sampling risk: sample results require judgement before concluding on reliance.
- Poor documentation: unclear working papers weaken the audit trail and conclusions.
- Overconfidence in automated controls: automated controls require evidence that the system environment supports reliability.
- Ignoring override behaviour: frequent overrides may indicate that controls are not operating as intended.
Summary and further reading
Auditors evaluate controls through (i) procedures that assess design and implementation, and (ii) tests of controls that focus on operating effectiveness. Control testing helps determine whether reliance is appropriate and how further audit procedures should be planned. Deviations must be evaluated by frequency, nature, and cause, with audit responses tailored accordingly.
This topic links closely to risk assessment and further audit procedures, because control results influence the nature, timing, and extent of substantive work, while recognising that substantive procedures are still required in many areas even where controls are strong.
For further reading, consult introductory audit texts covering internal control, audit evidence, and audit sampling, along with professional guidance on audit risk and audit evidence.
FAQ
What is the primary purpose of tests of controls?
To obtain evidence about whether a control operated effectively over the period of reliance. This helps the auditor decide whether reliance is appropriate and how further audit procedures should be designed.
How do you decide which controls to test?
Prioritise key controls that respond to higher-risk areas and that the auditor expects to rely on. Controls that operate frequently or within automated systems may be particularly relevant where substantive-only testing would be inefficient.
Which methods are commonly used, and which provide stronger evidence?
Inspection and re-performance usually provide stronger evidence. Observation is limited to the time observed, and inquiry is generally weak unless corroborated.
What should be done when deviations are found?
Quantify deviations, understand cause and pattern, and evaluate whether reliance remains appropriate. If reliance reduces, increase or refocus substantive procedures and consider whether compensating controls can be relied on (supported by testing).
How do sampling risk and non-sampling risk affect conclusions?
Sampling risk means a sample may not reflect the population. Non-sampling risk arises from poor test design or execution. Both are managed by good test design, appropriate sampling, and careful evaluation of results.
Summary (Recap)
This chapter explained how auditors assess controls through understanding design and implementation and through tests of controls focused on operating effectiveness. It covered how to link risks to control objectives and assertions, how to design and perform control testing with clear attributes and period coverage, and how to evaluate deviations. The worked example demonstrated calculating an exception rate, applying judgement (including sampling considerations), designing a matching control test, and revising further audit procedures in response to control weaknesses.
Glossary
Tests of controls
Procedures performed to obtain evidence about whether controls operated effectively over the relevant period.
Design and implementation
Evaluating whether controls are suitably designed to address risks and have been put into operation, typically through process understanding and walkthroughs.
Control objective
The outcome a control is intended to achieve, expressed in terms of the risk it addresses.
Key control
A control that is central to responding to higher-risk areas and/or one the auditor expects to rely on.
Operating effectiveness
Whether a control operated consistently, by appropriate personnel, in the required manner, and at the right time across the period of reliance.
Deviation
An instance where a control did not operate as designed for a tested item (often called an exception).
Exception rate
(Number of exceptions ÷ Items tested) × 100%, used as an indicator of control performance in a sample.
Attribute testing
Testing whether specified control features are present using clear pass/fail criteria.
Dual-purpose test
A procedure designed to provide evidence about both control operation and substantive assertions, planned and documented to meet both objectives.
Population
The complete set of items to which a control applies over the relevant period.
Sampling risk
The risk that the sample conclusion differs from the conclusion that would be reached by testing the whole population.
Non-sampling risk
The risk of an incorrect conclusion due to poor design, execution, or evaluation rather than the choice of sample.
Compensating control
A different control that reduces the same risk when a primary control is weak, provided it operates with sufficient timeliness and precision.
Reliance strategy
An audit approach that plans to rely on controls (supported by testing) and adjust further audit procedures accordingly.
Written by
AccountingBody Editorial Team