
To assess controls continuously, rules must be developed that test, in real time (or near real time), compliance with the formal assertions previously identified for the selected controls. The required tests can be classified into seven broad categories based on traditional audit processes or evidence types:

  1. Asset management queries (where accurate), in place of physical examination of assets
  2. Electronic transaction confirmations, in place of authenticated transaction documents, including verifying atomic elements of transactions
  3. Electronic statement queries, in place of internal or external documentation
  4. Re-performance of selected controls, using some form of automation
  5. Observation (still a manual periodic test)
  6. Analytical procedures, such as statistical analysis, comparisons with other internal or external data sets, and pattern-matching within transaction data
  7. Automated collation of responses to inquiries, such as control self-assessment surveys

The types of tests that could be employed in the case study example appear in figure 5.

Generally, tests need to answer the question: What would the data look like if the control objective were met, and what would it look like if it were not?

Asset management queries and transaction confirmation (types 1 and 2) tests can use existing or improved key risk indicators (KRIs) to provide what is described24 as a risk indicator continuous assurance (RICA) framework. Past audit report evidence can also be used to identify sources of data and applicable analytics.25 In this testing approach, a designated threshold being met in two or more consecutive months (or the majority of the time) may indicate a strong control, whereas the threshold not being met in two or more consecutive months may indicate a weak control.
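
As a rough illustration of how such a threshold test might be automated, the sketch below classifies a control from a monthly KRI series using the "two or more consecutive months" rule. It is a minimal sketch, assuming a single KRI per control; the function name, parameters and the availability target in the example are illustrative, not part of the RICA framework itself.

```python
# A minimal sketch, assuming a monthly KRI series and a simple
# "two or more consecutive months" rule; names and thresholds are illustrative.
from typing import Sequence


def assess_kri(monthly_values: Sequence[float], threshold: float,
               higher_is_better: bool = True) -> str:
    """Classify a control as strong, weak or indeterminate from a KRI series."""
    met = [(v >= threshold) if higher_is_better else (v <= threshold)
           for v in monthly_values]

    def longest_run(flags):
        longest = current = 0
        for flag in flags:
            current = current + 1 if flag else 0
            longest = max(longest, current)
        return longest

    # Threshold missed in two or more consecutive months -> weak control
    if longest_run([not m for m in met]) >= 2:
        return "weak"
    # Threshold met in consecutive months, or the majority of the time -> strong control
    if longest_run(met) >= 2 or sum(met) > len(met) / 2:
        return "strong"
    return "indeterminate"


# Example: service-availability KRI (percent) against an assumed 99.5 target
print(assess_kri([99.7, 99.6, 99.2, 99.8], threshold=99.5))  # -> "strong"
```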

Statement (or tabular data) tests (type 3) can use a belief function approach, in which evidence for and against an assertion is mathematically combined (or aggregated) to determine a result. In this approach, assurance levels are divided into five categories (very low, low, medium, high and very high) based on value ranges. For example, the strength of evidence supporting completeness of testing could be determined by ranges of test coverage or ranges of outstanding defect percentages.
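
The sketch below shows one way the evidence-combination idea could be automated, assuming a Dempster-Shafer style rule over the binary frame {assertion met, assertion not met}. The mass assignments for each piece of evidence and the five band cut-offs are illustrative assumptions, not values prescribed by the approach described above.

```python
# A minimal sketch, assuming a Dempster-Shafer style combination over the frame
# {assertion met, assertion not met}; masses and band cut-offs are illustrative.
def combine(m1, m2):
    """Each m is (belief_for, belief_against, uncertainty) and sums to 1."""
    a1, b1, u1 = m1
    a2, b2, u2 = m2
    conflict = a1 * b2 + b1 * a2                 # mass given to contradictory outcomes
    norm = 1.0 - conflict
    belief_for = (a1 * a2 + a1 * u2 + u1 * a2) / norm
    belief_against = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    return belief_for, belief_against, (u1 * u2) / norm


def assurance_level(belief_for):
    """Map combined belief in the assertion to one of the five assurance bands."""
    bands = [(0.2, "very low"), (0.4, "low"), (0.6, "medium"),
             (0.8, "high"), (1.01, "very high")]
    return next(label for cut, label in bands if belief_for < cut)


# Example: evidence from test coverage and from outstanding defect percentages
coverage_evidence = (0.7, 0.1, 0.2)   # e.g. high coverage supports completeness
defect_evidence = (0.4, 0.3, 0.3)     # e.g. some outstanding defects weaken it
belief_for, _, _ = combine(coverage_evidence, defect_evidence)
print(assurance_level(belief_for))    # -> "high" (combined belief is about 0.76)
```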

Large data sets or complex behavioural controls may require analytical testing (type 6) to validate an assertion. This analysis may employ a risk score methodology or probability models to spread values uniformly from 0 to 1 across all samples, with bands reflecting confidence in the assertion (see the sketch after this list). The analysis may be based on:

  • Higher or lower than expected values
  • Expected or opposite to expected movement
  • Small or large changes from one period to the next
  • Process metrics
  • Erratic behaviour or volatility (variance) in the process
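
The sketch below illustrates one possible risk score methodology of this kind: each sample's deviation from the group norm is converted to a percentile rank, which by construction spreads the scores uniformly over 0 to 1. The metric used (period-over-period change) and the confidence band cut-offs are illustrative assumptions.

```python
# A minimal sketch, assuming one numeric metric per sample (period-over-period
# change); percentile ranking spreads scores uniformly over 0-1, and the
# confidence bands are illustrative cut-offs.
from statistics import mean


def risk_scores(values):
    """Return a 0-1 percentile-rank score per sample (higher = more unusual)."""
    centre = mean(values)
    deviations = [abs(v - centre) for v in values]                # distance from the norm
    order = sorted(range(len(values)), key=lambda i: deviations[i])
    scores = [0.0] * len(values)
    for rank, idx in enumerate(order):
        scores[idx] = (rank + 1) / len(values)                    # uniform spread over 0-1
    return scores


def confidence_band(score):
    if score < 0.8:
        return "consistent with assertion"
    if score < 0.95:
        return "review"
    return "likely exception"


monthly_change = [0.02, 0.01, -0.01, 0.03, 0.18, 0.02]            # 0.18 is erratic
for change, score in zip(monthly_change, risk_scores(monthly_change)):
    print(f"{change:+.2f} -> score {score:.2f} ({confidence_band(score)})")
```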

Assertions that need to be tested by subjective judgement (type 7), such as those obtained through control self-assessments by service managers or vendors, can be validated through the Delphi Method. In this approach, a more accurate consensus on control effectiveness is obtained through one or more rounds of anonymous self-assessments, which may be reviewed by experts who provide feedback between rounds.
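
A small sketch of how the rounds might be consolidated automatically appears below, using the group median as the consensus estimate and the interquartile range as a measure of disagreement. The 1-5 rating scale and the consensus threshold are assumptions for illustration, not part of the Delphi Method itself.

```python
# A minimal sketch, assuming anonymous 1-5 effectiveness ratings per round;
# the consensus test (interquartile range <= 1) is an illustrative assumption.
from statistics import median, quantiles


def delphi_round(ratings):
    """Summarise one anonymous round: group median and spread of opinion."""
    q1, _, q3 = quantiles(ratings, n=4)
    return median(ratings), q3 - q1               # interquartile range as disagreement


def has_consensus(ratings, max_spread=1.0):
    _, spread = delphi_round(ratings)
    return spread <= max_spread


round_1 = [2, 3, 5, 4, 2, 5]                      # initial, widely spread ratings
print(delphi_round(round_1), has_consensus(round_1))   # (3.5, 3.0) False

# After experts review the round and share feedback, panellists re-rate anonymously
round_2 = [3, 3, 4, 4, 3, 4]
print(delphi_round(round_2), has_consensus(round_2))   # (3.5, 1.0) True
```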

Planning for the implementation of any of the previously described automated tests needs to take into account likely difficulties such as obtaining data management approvals; data sourcing and aggregation lead times; the need for control domain expertise; technology acquisition and integration costs; and the need for information sharing and coordination among audit, risk and compliance functions.