A recent report has highlighted that Queensland's Department of Transport and Main Roads (TMR) is not effectively identifying the ethical risks associated with artificial intelligence used in mobile phone and seatbelt cameras. The report points to several concerns, including privacy issues, insufficient human oversight for fair decision-making, inaccurate image recognition, and problems related to photo handling and storage.
TMR uses AI in its cameras to detect mobile phone use and seatbelt violations. It also uses AI in QChat, a virtual assistant designed for Queensland government employees. The Queensland Audit Office (QAO) has urged TMR to conduct comprehensive ethical risk assessments for both technologies.
The AI image recognition system in the mobile phone and seatbelt cameras filters out images that are unlikely to show violations, streamlining the review process for human operators. In 2024, the AI conducted 208.4 million assessments and reduced the number of cases requiring human review by an external vendor by 98.7 percent, to 2.7 million. Following that external review, the Queensland Revenue Office examined about 137,000 potential violations over the same period, resulting in approximately 114,000 fines.
While the mobile phone and seatbelt technology (MPST) program has implemented some mitigation strategies, including human review of potential violations, the report says TMR has not thoroughly assessed how effective those measures are. The QAO noted that TMR has reviewed some general ethical risks within the MPST program but lacks a comprehensive evaluation to ensure all ethical risks are identified and managed.
The report also raised alarms about the QChat virtual assistant, suggesting that users might interact with the tool in unintended or inappropriate ways, potentially violating ethical and legal standards. There is a risk that users could inadvertently upload sensitive information or receive misleading responses from the assistant. To address these concerns, the report recommends that TMR establish monitoring controls and adopt a more structured approach to staff training.
Overall, the QAO concluded that TMR could improve its consideration of AI-related risks. "It has taken initial steps, but lacks full visibility over AI systems in use," the report stated. It recommended that the department enhance its oversight of ethical risks, evaluate and update governance structures, and implement suitable assurance frameworks.
In response to the findings, TMR Director-General Sally Stannard acknowledged the report's recommendations and stated that the department is already working on them. "While TMR has implemented a range of controls to mitigate the ethical risks, we will ensure current processes are assessed against the requirements of the AI governance policy," she said.