At Reactionpower we work to develop machine learning algorithms that take into account diverse communities.
Using machines to find patterns in large quantities of data and make predictions from those patterns is unlocking many new kinds of value: from better ways to diagnose cancer to enabling self-driving cars and creating new opportunities for individuals. Machine translation, for example, can break down linguistic barriers, and voice recognition can empower illiterate people.
We recognize that these are emerging technologies. As companies design and implement them to maximize their potential benefits, ethical standards must apply: discriminatory outcomes violate human rights and undermine public trust in machine learning.
How to Prevent Discriminatory Outcomes in AI
We’ve been working to develop a framework that mitigates the risk of discriminatory outcomes in machine learning applications, and a roadmap for preventing them.
We propose four central principles to combat bias in machine learning and uphold human rights and dignity:
1. Active Inclusion: The development and design of ML applications must actively seek a diversity of input, especially of the norms and values of specific populations affected by the output of AI systems.
2. Fairness: People involved in conceptualizing, developing, and implementing machine learning systems should consider which definition of fairness best applies to their context and application, and prioritize it in the architecture of the machine learning system and its evaluation metrics.
3. Clarity: Involvement of ML systems in decision-making that affects individual rights must be disclosed. The systems must also be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority. Where this is impossible and rights are at stake, leaders in the design, deployment, and regulation of ML technology must question whether it should be used at all.
4. Agility: Leaders, designers and developers of ML systems are responsible for identifying the potential negative human rights impacts of their systems. They must make visible avenues to rectify issues for those affected by disparate impacts, and establish processes for the timely correction of any discriminatory outputs.
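The "Fairness" principle above asks teams to choose which definition of fairness applies to their context, because common definitions can conflict. As a purely illustrative sketch (the toy data, group labels, and function names below are assumptions, not any standard API), here is how two widely used metrics, demographic parity and equal opportunity, can be compared for a binary classifier evaluated across two groups:

```python
# Illustrative comparison of two fairness metrics for a binary classifier.
# y_true, y_pred, and group are hypothetical toy data, not a real dataset.

def selection_rate(y_pred, group, g):
    """Fraction of members of group g who receive a positive prediction."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, group, g):
    """Among members of g who truly qualify (y_true == 1), the fraction
    the model predicts positive."""
    hits = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
    return sum(hits) / len(hits)

# Toy labels and predictions for two groups, "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity: do both groups receive positive outcomes at similar rates?
dp_gap = abs(selection_rate(y_pred, group, "a")
             - selection_rate(y_pred, group, "b"))

# Equal opportunity: are qualified members of both groups recognized equally?
eo_gap = abs(true_positive_rate(y_true, y_pred, group, "a")
             - true_positive_rate(y_true, y_pred, group, "b"))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

Note that on this toy data the two metrics disagree: the groups are selected at identical rates, yet qualified members of one group are recognized less often than the other, which is exactly why a team must decide which definition to prioritize in its evaluation metrics.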
We recognize that much of this work is still speculative, given the nascent state of ML applications and the rapid rate of change, complexity, and scale of the issues. However, if public opinion about machine learning turns negative, it is likely to prompt reactive regulations that are poorly informed, unimplementable, and costly, thwarting the development of machine learning. Negative public sentiment could also close off myriad opportunities to use the technology for good by augmenting individuals’ capabilities and opening up new ways to apply their talents.
A new model is needed for how machine learning developers and deployers address the human rights implications of their products. Compared with prior waves of technological change, we have an unprecedented opportunity to prevent negative implications of ML at an early stage, and maximize its benefits for millions.
How Reactionpower Can Help You
One reliable way to develop machine learning algorithms that take diverse communities into account is to partner with diverse vendors.
Our amazing team of data scientists, developers, copywriters, and content creators brings a wealth of diverse perspectives, expertise, and experience. We have partnered with some of the world’s leading blue-chip tech companies on a variety of projects, including addressing the lack of diversity in AI and machine learning.
Ready to get more done? Schedule your strategy session today.