Bounties to Ensure the Ethical Use of AI




Remember the human at the heart of the data



A team of AI researchers has released a number of proposals regarding ethical AI usage, including a suggestion that rewarding people for discovering biases in AI could be an effective way of making AI fairer.

Researchers from a variety of companies across the US and Europe joined forces to draw up a set of ethical guidelines for AI development, along with suggestions for how to meet them.

One of the suggestions the researchers made was offering bounties to developers who find bias within AI programs. 

The suggestion was made in a paper entitled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims”.



The biases the researchers hope to address are widespread: biased data and algorithms have been found in everything from healthcare applications to facial recognition systems used by law enforcement.

One such case is the PATTERN risk assessment tool, recently used by the US Department of Justice to triage prisoners and decide which ones could be sent home as prison populations were reduced in response to the coronavirus pandemic.


When artificial intelligence algorithms are biased, they can produce unethical results, which in turn can lead to unfair social outcomes and PR disasters for businesses and organizations alike. Limiting the different types of AI bias – algorithmic, technical, and emergent – is critical to the adoption and long-term success of real-world production AI systems.


Every developer (and every person, for that matter) has conscious and unconscious biases that inform the way they approach data collection and the world in general. These can range from the mundane, such as a preference for the colour red over blue, to the more sinister: assumptions about gender roles, racial profiling, religious and xenophobic prejudice, and historical discrimination.

The prevalence of bias throughout society means that the training data from which algorithms learn reflects these assumptions, resulting in decisions that are skewed for or against certain sections of society. This is known as algorithmic bias.
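As a rough illustration of how this kind of skew can show up in training data, the sketch below (in Python, with purely hypothetical groups and labels) compares the rate of favourable outcomes across groups, a simple proxy often called the demographic parity gap. A model trained on data like this is likely to reproduce the gap.

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Share of favourable outcomes per group.

    `records` is a list of (group, outcome) pairs, where outcome is
    1 (favourable decision) or 0 (unfavourable decision).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical training data: the labels already encode a historical skew.
training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

rates = positive_rate_by_group(training_data)
# Demographic parity gap: difference between best- and worst-treated group.
parity_gap = max(rates.values()) - min(rates.values())
print(rates)       # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap)  # 0.5 -> a model trained on this data will likely inherit the skew
```

A check like this only surfaces the symptom, of course; fixing the underlying data collection is the harder part.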











A second type is technical bias, introduced in the way an algorithm is developed: it occurs when the training data does not reflect all of the scenarios the algorithm may encounter, which is especially dangerous when the system is used for life-saving or critical functions.

In 2016, Tesla’s first known autopilot fatality occurred when the AI was unable to identify the white side of a tractor-trailer against a brightly lit sky, and the autopilot did not apply the brakes. This kind of accident highlights the need to provide the algorithm with constant, up-to-date training data reflecting myriad scenarios, along with the importance of testing in-the-wild, in all kinds of conditions.

Finally, we have emergent bias: this occurs when the algorithm encounters new knowledge, or when there is a mismatch between the user and the design of the system. A well-known example is Amazon’s Echo smart speaker, which has mistaken countless different words for its wake-up cue, “Alexa”, resulting in the device responding and collecting information it was not asked for. Here, it’s easy to see how incorporating a broader range of dialects, tones, and potential missteps into the training process might have helped to mitigate the issue.


Bringing Humans Back to AI Testing

Whilst companies are increasingly researching methods to spot and mitigate biases, many fail to realize the importance of human-centric testing. At the heart of each data point feeding an algorithm lies a real person, and it is essential to have a sophisticated, rigorous form of software testing in place that harnesses the power of crowds, something that cannot be achieved in sandbox-based static testing.

All of the biases outlined above can be limited by working with a truly diverse data set, one which reflects the mix of languages, races, genders, locations, cultures, and hobbies that we see in day-to-day life.

In-the-wild testers can also help to reduce the likelihood of accidents by spotting errors which AI might miss, or simply by asking questions which the algorithm does not have the programmed knowledge to comprehend. 

Considering the vast wealth of human knowledge and insight available at our fingertips via the web, failing to make use of this opportunity would be a serious oversight. Spotting these kinds of obstacles early can also be incredibly beneficial from a business standpoint, allowing the development team to create an AI product that truly meets the needs of the end user and the purpose it was created for.



This may be the first time that such a prominent group of AI ethics researchers has seriously advanced the idea as an option for combating AI bias. While there are unlikely to be enough AI developers to find every bias and thereby guarantee ethical AI, bounties would still help companies reduce overall bias and get a sense of what kinds of bias are leaking into their AI systems.

The authors of the paper explained that the bug-bounty concept can be extended to AI through bias and safety bounties, and that proper use of this technique could lead to better-documented datasets and models.


The documentation would better reflect the limitations of both the model and the data (a rough sketch of what such documentation might look like follows the list below). The researchers even note that the same idea could be applied to other AI properties, such as:



  1. Interpretability
  2. Security
  3. Privacy
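
As a purely hypothetical illustration of the kind of documentation a bias or safety bounty programme might feed into, here is a minimal Python sketch; the structure and field names are assumptions for illustration, not something prescribed by the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelDocumentation:
    """A minimal, hypothetical record of a model's known limitations."""
    model_name: str
    training_data_sources: List[str]
    known_limitations: List[str] = field(default_factory=list)
    reported_biases: List[str] = field(default_factory=list)  # fed by bounty reports

doc = ModelDocumentation(
    model_name="loan-approval-v2",
    training_data_sources=["historical_loan_decisions_2010_2019"],
    known_limitations=["underrepresents applicants under 25"],
)

# A confirmed bounty report updates the public documentation rather than
# disappearing into a private bug tracker.
doc.reported_biases.append("approval rate 12% lower for group_b at equal credit scores")
```

The point is less the data structure than the workflow: every verified bounty finding becomes part of the model's stated limitations.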


As more and more discussion occurs around the ethical principles of AI, many have noted that principles alone are not enough and that actions must be taken to keep AI ethical. The authors of the paper note that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.”

Google Brain co-founder and AI industry leader Andrew Ng has similarly argued that guiding principles alone cannot ensure that AI is used responsibly and fairly, saying that many of them need to be more explicit and contain actionable ideas.




The research team’s bias-bounty recommendation is an attempt to move beyond ethical principles into the realm of ethical action.

The research team made a number of other recommendations that companies can follow to make their AI usage more ethical:



  • They suggest that a centralized database of AI incidents should be created and shared among the wider AI community. 
  • Similarly, the researchers propose that an audit trail should be established and that these trails should preserve information regarding the creation and deployment of safety-critical applications in AI platforms.
  • In order to preserve people’s privacy, the research team suggested that privacy-centric techniques like encrypted communications, federated learning, and differential privacy should all be employed (a minimal illustration of the differential-privacy idea follows this list). 
  • Beyond this, the research team suggested that open source alternatives should be made widely available and that commercial AI models should be heavily scrutinized. 
  • Finally, the research team suggests that government funding be increased so that academic researchers can verify hardware performance claims.
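
As one small, illustrative example of the privacy-centric techniques mentioned above, the sketch below adds Laplace noise to an aggregate count, which is the basic mechanism behind differential privacy. The epsilon value, data, and function name are hypothetical choices for illustration, not something specified in the paper.

```python
import random

def private_count(values, epsilon=1.0):
    """Return a count with Laplace noise added (basic differential privacy).

    A count query has sensitivity 1 (adding or removing one person changes
    it by at most 1), so noise drawn from Laplace(1 / epsilon) masks any
    single individual's contribution. Smaller epsilon -> more noise -> more privacy.
    """
    true_count = len(values)
    scale = 1.0 / epsilon
    # The difference of two exponential draws gives a Laplace sample (stdlib only).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Hypothetical usage: report how many users triggered a wake-word misfire,
# without revealing whether any specific user appears in the log.
misfire_log = ["user_1", "user_2", "user_3", "user_4"]
print(private_count(misfire_log, epsilon=0.5))
```

In practice teams would reach for an established library rather than rolling their own noise mechanism, but the sketch shows the trade-off the recommendation is pointing at: a little statistical noise in exchange for individual privacy.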



“With rapid technical progress in artificial intelligence (AI) and the spread of AI-based applications over the past several years, there is growing concern about how to ensure that the development and deployment of AI is beneficial — and not detrimental — to humanity,” the paper reads. 

“Artificial intelligence has the potential to transform society in ways both beneficial and harmful. Beneficial applications are more likely to be realized, and risks more likely to be avoided, if AI developers earn rather than assume the trust of society and of one another. This report has fleshed out one way of earning such trust, namely the making and assessment of verifiable claims about AI development through a variety of mechanisms.”



Earlier this year, the IEEE Standards Association released a whitepaper on ethical AI calling for a shift toward:



  • “Earth-friendly AI,” 
  • The protection of children online, and
  • The exploration of new metrics for the measurement of societal well-being.




As AI continues to become omnipresent in our lives, it’s crucial to ensure that those tasked with building our future are able to make it as fair and inclusive as possible. 

This is not easy, but with a considered approach that stops to remember the human at the heart of the data, we are one step closer to a safer, more sensible, and more just reality for AI.







Jai Krishna Ponnappan is an entrepreneur, technologist, investor and information technology executive with over 15 years of industry and consulting experience. He has worked with boutique consulting firms and built successful engineering and management practices with several independent contracting businesses. Some of his clients include SalesForce, BlackRock and the City of New York. He has a Master’s in Computer Information Systems from California State University, a Bachelor’s in Computer Science and Engineering, and an Executive MBA in Project Management. He started his career at the Indian Space Research Organization and has worked in critical leadership roles with organizations such as IBM, among others.

References:

1. Brundage, M. et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv preprint arXiv:2004.07213, 2020. Available at: https://arxiv.org/abs/2004.07213

2. IEEE. Measuring What Matters in the Era of Global Warming and the Age of Algorithmic Promises. IEEE Standards Association, 2020. Available at: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec-measuring-what-matters.pdf