South Australia’s aged care AI trial produced 12,000 false alarms


A trial of CCTV and AI technology to detect accidents or abuse in two aged care facilities in South Australia produced 12,000 false alarms in a year, a review has revealed.

The Australian-first project was intended to pilot the use of cameras and AI to aid monitoring of residents under care, with a view to making the lives of staff easier.

However, a review of the pilot by PwC [pdf] showed the technology produced false positives at such a rate that alert fatigue set in among staff, and at least one genuine incident – a resident falling over – received no response.

The technology was programmed to detect four key incident types, defined as “falls, assist, call for help and/or screams”.

However, PwC found there were concerns from the outset “that the way in which these events had been programmed were not well aligned to the common movement patterns of residents at the sites.”

In addition, the system was tuned to be overly sensitive to noise levels in facilities, and was unable to distinguish between inanimate objects and people until it was patched.

The end result was a flood of “false alerts” that overwhelmed onsite staff and facility managers.

PwC said that “a threshold of 10 false alerts per day were anticipated by SA Health and the pilot sites”.

On average, the daily number of false alerts was triple that threshold, and the total exceeded 12,000 across the two sites over the year-long trial.
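The scale of the overshoot can be sanity-checked against the reported figures. A minimal sketch, assuming the 12,000 figure is a combined total for both sites and a uniform 365-day trial (neither assumption is spelled out in the review):

```python
# Assumed inputs, taken from the figures reported in the article:
total_false_alerts = 12_000   # lower bound on false alerts across both sites
trial_days = 365              # year-long pilot, assumed 365 days
anticipated_per_day = 10      # threshold anticipated by SA Health

# Average daily false-alert rate over the whole trial
average_per_day = total_false_alerts / trial_days

print(round(average_per_day, 1))                     # ~32.9 alerts/day
print(round(average_per_day / anticipated_per_day))  # roughly 3x the threshold
```

This is consistent with the review's "triple that amount" characterisation: roughly 33 false alerts a day against an anticipated 10.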

“A high percentage of these alerts were sit-fall events which involved staff performing a bend to knee (crouching) motion” to mobilise a resident, the review found.

Across the trial, the AI algorithm flagged “movements or sounds that are reasonably expected in residential care” as problematic and repeatedly raised alerts.

While the algorithm did become better at detecting actually problematic events, “it still initiated a high number of false alerts per month across the two sites” even as the 12-month pilot wound down, PwC found.

“In these final months of the pilot, staff were no longer able to respond to every alert,” it said. 

“There was at least one instance where staff did not respond to an alert that turned out to be a ‘true’ resident fall event.”

PwC’s review urged caution about the conclusions that should be drawn from its work.

For example, it did not assess the capability of the technology itself, and it added that the approach taken to piloting the technology may have underestimated the time needed to make it suitable for an aged care setting.

“More comprehensive, contextual testing prior to piloting the technology across an entire residential care site may support improved implementation,” the consultancy advised.

It added that, “if using AI as part of the surveillance system, then the time taken to train the AI to the context of use should not be underestimated.”

It also noted that residents were generally unconcerned at the false positives, did not find them disruptive, and in some cases were comforted by the additional attention.

However, PwC said the 12-month trial was ultimately inconclusive as to whether the technology made any material difference to the quality and safety of aged care.

In televised comments coinciding with the publication of the PwC report, South Australia’s Minister for Health and Wellbeing Chris Picton said, “It was an absolute[ly] botched rollout of this trial that happened over the past year.”
