Test of Design (ToD) and Test of Effectiveness (ToE) workflows are repetitive, but they are not low-stakes. The reviewer needs to know whether the control is designed to address the stated risk and whether it operated consistently during the review period.
AI can support this work when it is treated as a drafting layer. It should assemble the control context, read the client documentation, inspect the evidence, draft assessment notes, and flag exceptions. It should not replace the reviewer’s conclusion.
What AI should read for ToD
- Control objective and risk statement.
- Control owner and frequency.
- Policy and procedure language.
- System configuration or workflow documentation.
- Prior-year or prior-period workpaper notes when available.
What AI should read for ToE
- Evidence samples and populations.
- Approval records, tickets, logs, exports, or screenshots.
- Date ranges and period coverage.
- Exception criteria and firm review guidance.
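One of the mechanical checks a drafting layer can run over this material is period coverage: do the evidence dates actually fall inside the review period? A minimal sketch, assuming a simple list-of-dates input rather than any real firm template (the function name and sample dates are illustrative):

```python
from datetime import date

def check_period_coverage(evidence_dates, period_start, period_end):
    """Split sampled evidence dates into in-period and out-of-period lists.

    Out-of-period items should be flagged for the reviewer,
    not silently dropped from the sample.
    """
    in_period = [d for d in evidence_dates if period_start <= d <= period_end]
    out_of_period = [d for d in evidence_dates if d < period_start or d > period_end]
    return in_period, out_of_period

# Hypothetical sample: access-review evidence tested against calendar-year 2024.
samples = [date(2024, 1, 15), date(2024, 4, 12), date(2023, 12, 30)]
in_p, out_p = check_period_coverage(samples, date(2024, 1, 1), date(2024, 12, 31))
# The 2023-12-30 record lands in out_of_period and becomes a flag in the draft notes.
```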
The right output is a reviewer queue
A useful ToD / ToE workflow does not just generate paragraphs. It creates a reviewer queue with draft assessment text, evidence references, potential exceptions, missing support, and confidence notes. The reviewer can then approve, edit, or reject the draft.
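One way to represent a queue item with those fields is sketched below. Every name here is an assumption for illustration, not a product schema; the point is that the draft, its evidence references, and the reviewer's decision live in one record:

```python
from dataclasses import dataclass, field
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    EDITED = "edited"
    REJECTED = "rejected"

@dataclass
class QueueItem:
    control_id: str
    draft_assessment: str                # AI-drafted text the reviewer can edit
    evidence_refs: list = field(default_factory=list)       # IDs of inspected evidence
    potential_exceptions: list = field(default_factory=list)
    missing_support: list = field(default_factory=list)
    confidence_note: str = ""            # hedged confidence shown to the reviewer
    status: ReviewStatus = ReviewStatus.PENDING

    def approve(self):
        self.status = ReviewStatus.APPROVED

    def reject(self, reason: str):
        self.status = ReviewStatus.REJECTED
        self.confidence_note += f" | rejected: {reason}"

# Hypothetical control and exception, for illustration only.
item = QueueItem(
    control_id="AC-03",
    draft_assessment="Quarterly access review performed; Q2 sign-off not located.",
    potential_exceptions=["Q2 review missing approver signature"],
)
item.approve()  # or item.reject("draft misreads the ticket export")
```

The design choice worth noting is that the AI never writes a final status: `status` starts as `PENDING` and only a reviewer action moves it.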
How to evaluate quality
Quality should be measured by reviewer edit rate, exception catch rate, source traceability, and time saved per control. Accuracy matters, but in advisory delivery the practical question is whether senior reviewers spend less time checking mechanical work and more time applying judgment.
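Two of these measures, plus time saved, reduce to simple ratios over the queue. A sketch with assumed raw counts (the numbers below are hypothetical):

```python
def workflow_metrics(drafts_total, drafts_edited, exceptions_caught_by_ai,
                     exceptions_total, minutes_manual, minutes_with_ai):
    """Compute review-quality ratios from raw per-period counts."""
    return {
        "reviewer_edit_rate": drafts_edited / drafts_total,
        "exception_catch_rate": exceptions_caught_by_ai / exceptions_total,
        "time_saved_per_control_min": minutes_manual - minutes_with_ai,
    }

# Hypothetical period: 40 controls drafted, 12 drafts needed edits,
# AI flagged 9 of 10 known exceptions, 45 min manual vs 20 min assisted.
m = workflow_metrics(40, 12, 9, 10, 45, 20)
# m["reviewer_edit_rate"] == 0.3, m["exception_catch_rate"] == 0.9
```

Source traceability does not reduce to a ratio as cleanly; in practice it is checked per draft, by confirming each assessment sentence points at a specific evidence reference.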
Security matters because evidence is sensitive
ToD / ToE evidence often includes access exports, tickets, system configurations, and screenshots. That is why the workflow needs role-based access, data isolation, audit logs, and client-approved deployment choices.
Research notes and sources
- KPMG’s Clara announcement emphasizes AI agents for auditors while maintaining a human-in-the-loop audit experience: https://kpmg.com/us/en/media/news/kpmg-clara-smart-audit-platform.html
- DataSnipper describes audit AI agents as supporting repetitive work while keeping auditors in control with judgment and sign-off: https://www.datasnipper.com/resources/excel-agents-how-ai-agents-help-internal-audit-teams
- AICPA’s SOC 2 resources describe the criteria used to evaluate design and operating effectiveness of controls: https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022