Imagine a high-stakes training session where the very tool being taught is used to cheat. That’s exactly what happened at KPMG, one of the world’s leading consultancies, when a partner was fined A$10,000 (£5,200) for using artificial intelligence to cheat on an internal AI training test. And it wasn’t an isolated incident: since July, more than two dozen KPMG Australia staff have been caught using AI tools to cheat on internal exams, sparking a broader debate about the ethical boundaries of AI in professional settings.
The irony isn’t lost on anyone, especially since KPMG itself used its own AI detection tools to uncover the cheating, as first reported by the Australian Financial Review. This incident adds another layer to the ongoing saga of cheating scandals within the ‘Big Four’ accountancy firms. In 2021, KPMG Australia was hit with a A$615,000 fine for widespread misconduct after more than 1,100 partners were found improperly sharing answers on tests designed to assess skill and integrity. But AI has opened a new frontier for rule-breaking, one that traditional safeguards struggle to keep up with.
In December, the UK’s largest accounting body, the Association of Chartered Certified Accountants (ACCA), reached a tipping point. Helen Brand, ACCA’s chief executive, announced that accounting students would be required to take exams in person because AI-fueled cheating had become too difficult to control. Here lies the tension: as firms like KPMG and PricewaterhouseCoopers mandate AI usage among staff to boost efficiency and cut costs, they’re also grappling with how to prevent its misuse. KPMG, for instance, plans to assess partners on their AI proficiency during 2026 performance reviews, with global AI workforce lead Niale Cleobury emphasizing, ‘We all have a responsibility to be bringing AI to all of our work.’
But is this a cheating problem or a training problem? Some LinkedIn commenters, like Iwo Szapar, creator of a platform that ranks organizations’ ‘AI maturity’, argue that KPMG is ‘fighting AI adoption instead of redesigning how they train people.’ That perspective raises an uncomfortable question: are traditional training methods simply outdated in the age of AI? Or is there a deeper issue with how professionals are being prepared to integrate AI ethically into their work?
Andrew Yates, KPMG Australia’s chief executive, acknowledges the challenge: ‘Like most organizations, we have been grappling with the role and use of AI as it relates to internal training and testing. It’s a very hard thing to get on top of given how quickly society has embraced it.’ KPMG has implemented measures to detect AI misuse and plans to monitor how many employees violate its policies. But as AI becomes increasingly embedded in daily workflows, the line between innovation and unethical behavior grows blurrier.
Here’s the uncomfortable truth: while firms push for AI adoption to stay competitive, they’re also creating environments where employees feel pressured to use it, even unethically. Is this a failure of policy, of training, or of something deeper? And as AI tools become more sophisticated, how can organizations ensure they’re being used responsibly? Let’s spark a discussion: do you think AI cheating is a symptom of flawed training systems, or an inevitable consequence of rapid technological advancement? Share your thoughts in the comments. This is a debate worth having.