AI applications in medical diagnostics are among the use cases that an official European watchdog has flagged as potential hazards to fundamental human rights.
The European Union Agency for Fundamental Rights (FRA) lays out its areas of concern in a report issued Dec. 14. Other areas of concern include predictive policing, social services and targeted advertising.
The report is part of a broad project on AI and big data, and its recommendations draw on more than 100 interviews with people using AI in Estonia, Finland, France, the Netherlands and Spain, according to an announcement.
An AI user in France’s private sector tells the FRA that identifying discrimination in AI is complicated “because some diseases are more present in certain ethnic groups. Predictions take into account the sexual, ethnic, genetic character. But it is not discriminatory or a violation of human rights.”
The report urges the EU and its member states to:
- Make sure AI respects all fundamental rights, not just personal privacy or data security.
- Guarantee that people can challenge decisions guided by AI.
- Assess AI before and during its use to reduce negative impacts.
- Provide more guidance on data protection rules.
- Assess whether AI discriminates.
- Create an effective oversight system.
In the report’s foreword, FRA director Michael O’Flaherty says AI users as well as developers “need to have the right tools to assess comprehensively its fundamental rights implications, many of which may not be immediately obvious. … We have an opportunity to shape AI [so] that it not only respects our human and fundamental rights but that also protects and promotes them.”
The 108-page report is available to download or read online.