Preventing discrimination caused by the use of artificial intelligence

" The emergence of artificial intelligence (AI) that is not subject to regulation under a sovereign and independent democratic process risks leading to increasing human rights violations, creating, perpetuating or even promoting, consciously or unconsciously, the emergence and existence of serious factors of discrimination and exclusion," Christophe Lacroix (Belgium, SOC) rapporteur on « Preventing discrimination caused by the use of artificial intelligence » stressed at a hearing organised by the Committee on Equality and Non-Discrimination in Paris.

His report, he said, aimed to define and propose a basic international framework for human-oriented AI based on ethical principles, non-discrimination, equality and solidarity, and to ensure that everyone’s rights are guaranteed, in particular those of potentially vulnerable people such as workers, women, people with disabilities, the elderly, ethnic, linguistic and sexual minorities, children, consumers and other people at risk of exclusion.

"AI is lacking diversity in its workforce and it is very biased. Biases are a result of the underrepresentation of certain groups in common datasets, human bias which are passed on to dataset labels as well as datasets based on historical data,influenced by years of social inequality. We, therefore, have to tame algorithms" confirmed Alice Coucke, Senior Machine Learning Scientist, Snips, Paris.

"As new technologies and algorithms are increasingly used for predictions and decision-making, direct or indirect discrimination through the use of algorithms using big data is increasingly considered as one of the most pressing challenges. Methods to audit algorithms with a view to demonstrate their lawfulness and compliance with fundamental rights may include the creation of profiles that are used as testers as well as assessing which data contributed most to the outcome of the algorithm," Aydan Iyigüngör from the European Union Agency for Fundamental Rights said.

Participants agreed that transparency, explicability and traceability are key to auditing algorithms effectively, and that adequate fundamental rights safeguards need to be developed, based on international cooperation and strong collaboration between statisticians, lawyers, social scientists, computer scientists and subject area experts.