Machine Learning Must Respect Human Rights, Says Toronto Group

One of the biggest problems technology will face over the next decade is how to remove bias and instill equality and inclusivity in AI systems.

A coalition of human rights and technology organizations gathered in Toronto is trying to get ahead of the problem by releasing a new declaration calling on tech companies and governments to ensure machine learning and AI systems uphold the basic principles of human rights. The Toronto Declaration, announced at RightsCon this week, asks those involved to “keep our focus on how these technologies will affect individual human beings and human rights,” because “in a world of machine learning systems, who will bear accountability for harming human rights?”

The declaration was prepared by Amnesty International and Access Now, and at the time of launch had been endorsed by Human Rights Watch and the Wikimedia Foundation. This kind of shared message about the future of AI is nothing new, even for Canada. Late last year, the country’s leading researchers, including Yoshua Bengio, Geoffrey Hinton and Doina Precup, signed an open letter to Prime Minister Trudeau asking for a ban on the weaponization of AI. Obviously that’s a bit different from removing bias, but the idea of reining in the exponential and unchecked growth of AI is the same.

This new declaration hopes to build a solid framework for those working in machine learning to follow when building systems for the future. The preamble explains just how much unchecked machine learning systems could influence how society develops.

“From policing, to welfare systems, online discourse, and healthcare—to name a few examples—systems employing machine learning technologies can vastly and rapidly change or reinforce power structures or inequalities on an unprecedented scale and with significant harm to human rights,” the declaration reads. “There is a substantive and growing body of evidence to show that machine learning systems, which can be opaque and include unexplainable processes, can easily contribute to discriminatory or otherwise repressive practices if adopted without necessary safeguards.”
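To make the idea of “necessary safeguards” a little more concrete, here is a minimal, purely illustrative sketch of the kind of disparity audit a team might run before deploying such a system. The declaration itself prescribes no code; the function names, the toy two-group sample, and the 0.2 threshold below are all hypothetical.

```python
# Illustrative only: a minimal outcome-disparity check, the kind of safeguard
# the declaration's language gestures at. All names here are hypothetical.
from collections import defaultdict

def approval_rate_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rate_by_group(decisions)
    return max(rates.values()) - min(rates.values())

# A toy audit sample for, say, a welfare-eligibility model.
audit_sample = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(audit_sample)
if gap > 0.2:  # the threshold is a policy choice, not a technical constant
    print(f"Disparity of {gap:.2f} exceeds threshold; hold for human review")
```

A check like this is deliberately simple; the broader point of the declaration is that some such audit, with a human accountable for acting on its result, should happen before a system touches policing, welfare, or healthcare decisions.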

The framework of international human rights law is the best basis for this kind of action, the declaration argues. That includes the right to equality and non-discrimination, as well as protections for those who promote diversity and inclusion.

The entire declaration is 11 pages long, and one of the most interesting parts of it comes at the end under “The right to an effective remedy.”

“Companies and private entities designing and implementing machine learning applications should take action to ensure individuals and groups have access to meaningful remedy and redress. This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects, and designating roles in the entity responsible for the timely remedy of such issues subject to accessible and effective appeal and judicial review.”

In other words, companies and organizations should always make sure there is a way to stop or undo actions taken by machine learning systems, and those deploying machine learning should designate specific roles responsible for reviewing a system and halting it if it harms society in some way.
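As a similarly hedged illustration of what “designating roles in the entity responsible for the timely remedy of such issues” might look like in software, here is a small sketch. The RemedyDesk class and everything in it are invented for this example and do not come from the declaration.

```python
# Hypothetical sketch: every automated decision is logged, appealable to a
# named human function, and actually reversible. Not from the declaration.
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    outcome: str
    reversed: bool = False

@dataclass
class RemedyDesk:
    """A designated role responsible for timely redress of automated decisions."""
    log: dict = field(default_factory=dict)
    appeals: list = field(default_factory=list)

    def record(self, decision: Decision):
        self.log[decision.subject_id] = decision

    def appeal(self, subject_id: str, reason: str):
        # Appeals route to a human reviewer, not back into the model.
        self.appeals.append((subject_id, reason))

    def overturn(self, subject_id: str):
        # A meaningful remedy undoes the harm rather than merely noting it.
        self.log[subject_id].reversed = True

desk = RemedyDesk()
desk.record(Decision("case-42", "benefits denied"))
desk.appeal("case-42", "income data was out of date")
desk.overturn("case-42")
print(desk.log["case-42"].reversed)  # True
```

The design choice worth noticing is that overturn is an operation of its own, separate from the model: if a decision cannot be undone in code, it is hard to see how it could be meaningfully remedied in practice.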

This kind of public discussion of the ethics of AI is important, but it’s hard to judge whether it will bring about much real action. Companies including Google and Boeing met with White House officials earlier this month to discuss the implications of AI, and the U.S. government subsequently decided to take a hands-off regulatory approach. Here’s hoping nothing happens that forces them, or the Canadian government, to suddenly go hands-on.