Picture this: a government spends millions on a vital healthcare strategy to tackle shortages of nurses and doctors, but the very foundation of that plan is riddled with fake evidence. That's the reality unfolding in Newfoundland and Labrador, where a major report on health human resources is under fire for citations that simply don't exist, and experts suspect artificial intelligence is to blame. This isn't just about one report; it is fueling a bigger debate over how AI is reshaping, or undermining, our trust in official documents. Is AI a helpful tool for speeding up research, or a risky shortcut that jeopardizes lives?
Following a previous controversy surrounding the province's Education Accord, which was marred by invented sources (as detailed in this Independent article), our investigation has uncovered another government-funded policy paper tainted by errors that appear to be AI-fabricated. This time it's the Health Human Resources Plan, a 526-page document prepared by the international consulting giant Deloitte and released by the Department of Health and Community Services in May. The report, aimed at addressing severe staffing gaps in the healthcare sector, includes at least four references to studies that appear not to exist, raising serious doubts about the reliability of reports that lean on AI technologies.
Commissioned under the former Liberal administration, the plan was a key part of broader efforts to build a robust strategy for recruiting and retaining healthcare professionals amid ongoing crises like nurse shortages. Its price tag? Nearly $1.6 million, confirmed through documents released via an access to information request and shared on blogger Matt Barter's site. The questionable citations were meant to bolster arguments on several fronts, from recruitment tactics and financial incentives for retaining staff to the adoption of virtual care and the lasting effects of the COVID-19 pandemic on healthcare workers.
For instance, the report cites a group of researchers to support the claim that offering money for recruitment and retention not only attracts talent but also saves costs over the long run by avoiding the recurring expense of hiring and training replacements. However, Martha MacLeod, a retired professor from the University of Northern British Columbia's School of Nursing and a purported co-author of the cited study, 'The cost-effectiveness of a rural retention program for registered nurses in Canada,' bluntly called the citation 'false' and likely machine-generated. 'We've conducted research on rural and remote nursing,' she explained in an email, 'but never a cost-effectiveness study, and we lacked the financial data to do so.' This highlights a critical point for beginners: citations are footnotes pointing to real studies that back up claims, and if they're invented, the entire argument collapses; imagine basing a medical decision on a storybook instead of verified science.
In another example, the plan cites experts to argue that focusing on local hiring is the most economical approach, cutting relocation perks, reducing staff turnover, and lowering training costs. Yet one of the named authors, Gail Tomblin Murphy, an adjunct professor at Dalhousie University's School of Nursing and a former research leader in Nova Scotia's health system, revealed that while she has collaborated on similar economic topics with some of the listed researchers, the paper titled 'The cost-effectiveness of local recruitment and retention strategies for health workers in Canada' doesn't actually exist. She noted that only three of the six named authors are part of her known network. 'It seems like heavy AI use might be at play here,' Tomblin Murphy suggested, emphasizing the need for caution: 'We must ensure the evidence guiding these reports is top-notch, verified, and truly helpful. After all, these documents aren't cheap; they're paid for by taxpayers and should drive accurate, forward-thinking decisions.'
A third instance involves a seemingly straightforward point about respiratory therapists in hospitals facing heavier workloads and stress during the pandemic. The report attributes this to a study called 'The impact of COVID-19 on respiratory therapist workload and stress levels in Canada,' published in the Canadian Journal of Respiratory Therapy, even providing a hyperlink. But the link leads to an unrelated article on the journal's site, and the supposed study can't be found in academic databases or the publication's archives. This kind of error isn't just embarrassing; it erodes confidence in policies that affect real people's health and wellbeing.
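For readers curious how such a check works, here is a minimal sketch of the kind of lookup a fact-checker might run, assuming Python with the requests library and Crossref's public works index (one of several scholarly databases; the function name and output format are our own illustration, not part of any official verification workflow):

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def find_matches(cited_title: str, rows: int = 5) -> list[dict]:
    """Ask Crossref for the published works whose metadata best matches a cited title."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        {"title": (item.get("title") or ["(untitled)"])[0], "doi": item.get("DOI")}
        for item in resp.json()["message"]["items"]
    ]

if __name__ == "__main__":
    # One of the titles cited in the plan, which could not be located in databases.
    cited = ("The impact of COVID-19 on respiratory therapist "
             "workload and stress levels in Canada")
    for match in find_matches(cited):
        print(f"{match['title']}  doi:{match['doi']}")
    # A genuine citation usually surfaces an exact title match near the top;
    # a fabricated one tends to return only loosely related works.
```

A lookup like this doesn't replace reading the actual paper, but it is often enough to flag a phantom reference in seconds.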
Deloitte isn't new to such scandals. Just last month, the company's Australian branch made headlines over a government report filled with 'apparent AI-generated errors,' including a made-up quote from a court case and bogus academic references, as reported by the Associated Press. The firm agreed to a partial refund on the US$290,000 contract, though it avoided confirming AI's role, stating the issue was resolved directly with the client. The report was briefly pulled from the government's site, then reposted with a note in the appendix admitting the use of Azure OpenAI, a generative AI tool, for part of the work, without linking the errors to it. Deloitte claimed AI didn't affect the 'substantive content, findings, or recommendations.' The episode illustrates the double edge of the technology: AI can streamline tasks like drafting reports, but it also risks introducing hallucinations, where the model invents details that sound plausible but aren't real. Think of it as a very capable assistant that sometimes confuses fact with fiction.
Interestingly, Deloitte actively promotes AI adoption, both internally and for clients. A statement from Deloitte Canada's CEO Anthony Viel on their website highlights how the firm is 'helping and inspiring Canadian organizations to unlock all the possibilities' of AI, providing expertise, tech infrastructure, and cloud services for safe, ethical development. In the Newfoundland and Labrador plan, Deloitte even recommends using generative AI to assist healthcare providers with clinical choices and customized care plans, plus analyzing hospital data—like electronic records or claims—to spot trends and allocate resources wisely.
In an earlier report from Deloitte Canada, they stress the importance of building trust in AI through openness and teamwork, embedding strong oversight in how these systems are designed and used. They urge setting up 'guardrails' for responsible AI deployment and investing in training to help people grasp and wield the technology effectively. Yet, despite these assurances, Deloitte hadn't responded to our inquiries by press time, leaving questions unanswered.
Politically, the fallout is palpable. After the Education Accord debacle, then-incoming Premier Tony Wakeham labeled the errors 'embarrassing' in talks with the Newfoundland and Labrador Teachers’ Association, per CBC/Radio-Canada, pledging a thorough review and conversations with the authors to sort fact from fiction. But when we recently asked the current premier's office about reviewing AI policies, a spokesperson said the government was 'not prioritizing' such a review, a stance that sits oddly with these recurring issues. Given this second major report with suspected AI-generated errors, we sought responses from Premier Wakeham and the Department of Health and Community Services: What steps will they take? How will they verify the report's claims? Will they seek a refund from Deloitte? And are they finally considering rules for AI in external reports? Despite being given more than two days to reply, neither responded.
NDP Leader Jim Dinn said he was 'disgusted,' given the backdrop of the recent Education Accord scandal. 'You're gambling with people's lives,' he warned, noting that media reports already chip away at trust in healthcare and that this adds fuel to the fire for people in desperate need of care. He argued that any AI involvement 'undermines confidence in the reports and the choices they lead to.' Meanwhile, Deloitte was selected in June for another provincial task: a review of nursing staffing needs, slated for release in the spring.
As of November 22, the Health Human Resources Plan sits unaltered on the Government of Newfoundland and Labrador's website, with no mention of AI's potential role. This silence is deafening, especially when lives and taxpayer dollars are on the line.
But here's the controversial twist: while AI can democratize access to information and speed up complex analyses, potentially transforming healthcare by predicting outbreaks or personalizing treatments, is it worth the risk when it can fabricate evidence? Some argue this is an inevitable evolution, much as calculators changed math despite early imperfections. Others see a Pandora's box that prioritizes efficiency over accuracy in policy-making. Does AI's potential to enhance decision-making outweigh the dangers of unchecked errors? Should governments mandate full transparency and verification for AI-assisted reports, or would that stifle innovation? Share your thoughts in the comments.