WHO warns tech vendor ‘function creep’ could compromise health data privacy

The World Health Organisation (WHO) has warned governments about the potential for inappropriate repurposing of excess health data collected by technology partners, particularly AI developers, recommending in its latest report that stronger privacy mechanisms be established to safeguard individual data rights.
Such ‘function creep’ includes the sharing of health data with government agencies that would enable them to “exercise control or use punitive measures against individuals”, said the WHO in its Ethics and Governance of Artificial Intelligence for Health guidance paper.

The paper cited a specific instance of public data use overreach by the Singapore Government, which in early 2021 admitted that data obtained from its Covid-19 contact-tracing application, TraceTogether, was also being used for criminal investigations, despite prior assurances that this would not be allowed.

In response, the city-state in February introduced legislation restricting the use of data collected via its TraceTogether app to only the most “serious” criminal investigations, such as those involving murder or terrorism, with penalties for unauthorised data use.

Closer to home, Western Australia introduced emergency legislation in June after the state’s police force accessed data from the SafeWA Covid-19 contact-tracing app while investigating two serious crimes.

In contrast to Singapore’s approach, Western Australia’s new legislation mandates that data collected through the state’s SafeWA app be used only for contact-tracing purposes.

Beyond government use, the WHO also warned that data obtained by technology providers in the healthcare sector could be shared with companies developing AI technology for marketing tools and prediction-based products.

This, the WHO warned, includes “mundane” data not initially classified as “health data”; according to the global health body, machine learning can elicit sensitive details from ordinary personal information, transforming it into a unique category of “sensitive data” requiring protection.
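As a rough illustration of that inference risk, the sketch below (synthetic data, with hypothetical “mundane” features such as step counts and pharmacy visits; not an example from the WHO paper) shows a standard classifier recovering a sensitive health attribute from signals that were never labelled as health data.

```python
# Illustrative sketch only: a model trained on "mundane" signals --
# daily steps, sleep hours, pharmacy visits -- learns to predict a
# sensitive health attribute. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2_000

# Ordinary, non-health features.
steps = rng.normal(7_000, 2_500, n)
sleep = rng.normal(7.0, 1.2, n)
pharmacy_visits = rng.poisson(1.5, n)

# Synthetic "sensitive" label correlated with those signals
# (e.g., an undisclosed chronic condition).
logit = -0.0004 * steps - 0.5 * sleep + 0.9 * pharmacy_visits + 4.0
condition = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([steps, sleep, pharmacy_visits])
X_train, X_test, y_train, y_test = train_test_split(
    X, condition, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)
print(f"Sensitive attribute inferred with {clf.score(X_test, y_test):.0%} "
      "accuracy from data never labelled as 'health data'.")
```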

Highlighting the lack of harmonised ethics guidance for AI use in healthcare, both globally and locally, the WHO recommended that governments create “clear data protection laws and regulations”, including a right to “meaningful informed consent” around health data usage.

“Governments should establish independent data protection authorities with adequate power and resources to monitor and enforce the rules and regulations in data protection laws,” the paper noted.

The WHO further recommended that governments require entities seeking to use health data to be more transparent about the scope of their intended usage, and advocated for evolving approaches to consent.

The global health body suggested, however, that governments “might wish to define when consent can be waived in the public interest”; in such cases, the burden of demonstrating that obtaining consent would undermine the benefit should rest with the entity seeking the waiver.

Shifting gears to cybersecurity, the paper warned governments that a rise in malicious attacks on developers of healthcare AI systems could compromise predictive health outcomes or see data “kidnapped”, urging health agencies worldwide to upgrade their cyber infrastructure.

“AI developers might be targeted in ‘spear-phishing’ attacks and by hacking, which would allow attackers to modify an algorithm without the knowledge of the developer,” the WHO said, noting that such modification could see vast sums of revenue redirected to cyber assailants.
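A common defence against this kind of tampering, offered here as an illustrative sketch rather than a WHO recommendation, is to pin a cryptographic hash of the released model artifact so that unauthorised modification is detected before the algorithm is loaded; the file name, pinned digest, and loader below are hypothetical.

```python
# Minimal integrity check (illustrative; not from the WHO paper):
# refuse to load a model artifact whose SHA-256 digest no longer
# matches the digest recorded at release time.
import hashlib
from pathlib import Path

# Digest recorded at release, e.g. in a signed manifest (hypothetical value).
EXPECTED_SHA256 = "9f2c...replace-with-release-digest"

def verify_model(path: str, expected: str = EXPECTED_SHA256) -> bytes:
    """Return the model bytes only if their SHA-256 digest matches the pinned one."""
    blob = Path(path).read_bytes()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected:
        raise RuntimeError(
            f"Possible tampering: digest {digest} does not match pinned digest")
    return blob

# model = load_model_from_bytes(verify_model("triage_model.bin"))  # hypothetical loader
```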

It further warned that inappropriate AI use could perpetuate existing biases, with limited, low-quality, or non-representative data deepening healthcare disparities.

“Predictive algorithms based on inadequate or inappropriate data can result in significant racial or ethnic bias. Use of high-quality, comprehensive datasets is essential,” the WHO said.

Healthcare system bias was also identified in low- and middle-income countries (LMICs) that use AI models designed and trained in high-income countries on those countries’ own data sets.

The WHO said that such data biases could not only compromise the efficacy of data-driven solutions, but also lead to incorrect diagnoses and health predictions, as well as discriminatory outcomes against populations of particular ethnicities or body types in LMICs.
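One simple first check for this kind of disparity, sketched below on synthetic labels as an assumption of one possible audit rather than the WHO’s prescribed method, is to compare a model’s error rate across demographic groups.

```python
# Illustrative bias audit (synthetic data, not the WHO's method):
# large gaps in per-group error rates flag the kind of bias that
# non-representative training data tends to produce.
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return each group's error rate on held-out predictions."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] != y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Toy example: the model errs far more often on the under-represented group B.
rates = error_rate_by_group(
    y_true=[1, 0, 1, 1, 0, 1, 0, 1],
    y_pred=[1, 0, 1, 1, 1, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.0, 'B': 1.0}
```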

The global health body also listed six principles for governments to consider when approaching ethical issues in AI development:

  1. Protecting human autonomy
  2. Promoting human well-being and safety in the public interest
  3. Ensuring transparency, explainability and intelligibility
  4. Fostering responsibility and accountability
  5. Ensuring inclusiveness and equity
  6. Promoting AI that is responsive and sustainable

The WHO’s full Ethics and Governance of Artificial Intelligence for Health guidance paper is available on the WHO website.