Many companies, including Microsoft, Google, and DeepMind (Alphabet), have begun to explore the ideas of social justice, diversity, inclusion, accountability, and transparency in machine learning.
Pervasive and justifiable concerns remain that efforts to promote transparency and accountability might undermine these companies’ IP rights and trade secrets, their security, and in some cases the right to privacy. Yet these systems continue to influence more people in more socially sensitive domains (healthcare, employment, housing, credit, education, and so on), and that reach demands more active self-governance by private companies.
While different applications of ML will require different actions to combat discrimination and assure dignity, we offer three transferable guiding principles for companies:
1. Identify human rights risks linked to business operations.
We propose that common standards for assessing the adequacy of training data and its potential bias be established and adopted, through a multi-stakeholder approach.
2. Take effective action to prevent and mitigate risks.
We propose that companies work on concrete ways to enhance company governance, establishing or augmenting existing mechanisms and models for ethical compliance.
3. Be transparent about efforts to identify, prevent, and mitigate human rights risks.
We propose that companies monitor their machine learning applications and report findings, working with certified third-party auditing bodies in ways analogous to industries such as rare mineral extraction. Large multinational companies should set an example by taking the lead. Results of audits should be made public, together with responses from the company.
We base our approach on the rights enshrined in the Universal Declaration of Human Rights and further elaborated in a dozen binding international treaties that provide substantive legal standards for the protection and respect of human rights and safeguarding against discrimination.
We acknowledge that developing AI and machine learning algorithms that prevent discrimination is not easy. Algorithmic decision-making aids have been used for decades, but machine learning poses new challenges because of its greater complexity, opaqueness, ubiquity, and exclusiveness.
However, the principle of non-discrimination is critical to all human rights.
Our emphasis on risks is not meant to undersell the promise of artificial intelligence, nor to halt its use. The concern over discriminatory outcomes in machine learning is about upholding human rights. It is also about maintaining trust and protecting the social contract, which rests on the idea that a person’s best interests are served by the technology they use or that is used on them. Absent that trust, the opportunity to use machine learning to advance our humanity will be set back.