An Overview of AI Ethics and Governance

A few weeks ago, AI Singapore hosted Andreas Deppeler, Adjunct Associate Professor at NUS Business School and Director of Data and Analytics at PwC Singapore, in a two-part webinar series for staff and apprentices. In four hours of lectures and Q&A, Prof Andreas walked the audience through the vast landscape of AI ethics and governance. In this article, I penned down the highlights of the sessions. If you prefer to go straight to the lectures, you can view the recordings at the end of the article.

What could go wrong with AI?

AI is a powerful technology that is finding ever more applications in our lives. Drawing upon two primary sources – the work done by computer scientist Stuart Russell [1] and the privately funded organisation Partnership on AI [2] – Prof Andreas began with a comprehensive, high-level look at where AI might cause harm, intended or unintended. This was followed by a series of documented cases where problems in explainability, bias and security have manifested themselves in applications involving AI. From the examples quoted, it is worth noting that even major technology players like Amazon and Apple have not been immune to such errors in their initial deployments.

Another area to pay attention to is the displacement of jobs by AI-driven automation. While experts generally agree that there will be disruption in the labour market, there is no consensus on its expected scale.

In the development of automated vehicles, the moral decisions that a machine has to make in collision avoidance and life preservation come under scrutiny. The Moral Machine experiment [3] was an attempt to collect large-scale data on how citizens of different countries would want autonomous vehicles to resolve moral dilemmas in the context of unavoidable accidents. The results have been illuminating, revealing distinct regional and cultural differences in deciding who should be sacrificed and who should be saved.

Ethics: Drawing up the principles

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.

Norbert Wiener, 1960

While the concern that machines may not do what is “right” is not new, Prof Andreas traced the first serious conversation on safe and beneficial AI to the 2015 AI Safety Conference in Puerto Rico organised by the Future of Life Institute [4], a gathering of academics and industry players. The conference led to the publication of an open letter exhorting the development of AI that is not only capable but also maximises societal benefit [5]. Since then, several non-profit organisations for safe and beneficial AI have been founded.

A second conference in 2017 in Asilomar, California, produced 23 principles covering wide-ranging themes in AI [6]. Two years later, in April 2019, the European Commission presented its Ethics Guidelines for Trustworthy Artificial Intelligence [7], and the OECD followed suit just a month later with its Principles on AI [8]. At almost the same time, the Beijing Academy of Artificial Intelligence (BAAI) published its Beijing AI Principles [9].

From these publications, researchers have identified five common themes or overarching principles of ethical AI: beneficence, non-maleficence, autonomy, justice and explicability [10]. Interestingly, subsequent work found that the first four correspond to the four traditional principles of bioethics, joined by a new enabling principle of explicability specific to AI [11].

Governance: Operationalising the principles

The principles and guidelines published are typically not legally binding but persuasive in nature. To date, the German non-profit organisation AlgorithmWatch has compiled more than 160 frameworks and guidelines for AI use worldwide [12]. It found that only ten have practical enforcement mechanisms. There is thus a need to move beyond guidelines as public-relations exercises and operationalise them. On a related note, five types of risks, already encountered or foreseeable, have been identified: (1) ethics shopping, (2) ethics bluewashing, (3) ethics lobbying, (4) ethics dumping, and (5) ethics shirking [13]. These risks undermine the best efforts to translate principles into practices.

Ethically Aligned Design

In March 2019, the IEEE launched Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition (EAD1e) [14]. It is a global treatise crowd-sourced over three years from experts in business, academia and policy-making. At almost 300 pages, it is organised into three pillars (reflecting anthropological, political and technical aspects) and eight general principles (human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse and competence).

Prof Andreas spent some time diving deeper into the sixth general principle – accountability. This is especially relevant to developers, as AI applications have been known to deviate from their intended use and will likely continue to do so on occasion, despite the best of intentions. The question of the legal status of accountability inevitably comes up. Among the discussion points, for example, is that government and industry stakeholders should identify the types of decisions and operations that should never be delegated to AI systems.

Ethics Certification

In February 2020, the IEEE announced the completion of the first phase of its work on the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) [15]. It aims to offer a process and define a series of marks by which organisations can seek certification for the processes around the AI products, systems and services they provide. This is a positive development in Prof Andreas’ view, as he sees the possibility of Singapore contributing in this space.

Model AI Governance Framework

The Model AI Governance Framework [16] published by the Personal Data Protection Commission (PDPC) is the framework that most developers in Singapore are familiar with. The second edition was released in January 2020 at the World Economic Forum Annual Meeting in Davos, Switzerland. It is voluntary in nature and provides guidance on the issues to consider and the measures that can be implemented to build stakeholder confidence in AI, and to demonstrate reasonable efforts to align internal policies, structures and processes with relevant accountability-based practices in data management and protection. It rests on two guiding principles: (1) AI that is explainable, transparent and fair, and (2) AI that is human-centric. These are elaborated across four guidance areas: (1) internal governance structures and measures, (2) appropriate level of human involvement, (3) operations management, and (4) stakeholder interaction and communication [17].

Open Source Tools

Beyond discussing principles, developers are most interested in the available tools that can help them in their work. IBM AI Fairness 360 [21], IBM AI Explainability 360 [22] and the IBM Adversarial Robustness Toolbox [23] are open source Python libraries from Big Blue. Similarly, Microsoft offers Fairlearn [24] and InterpretML [25]. Developers can check them out and evaluate them against their own needs before deciding whether to build their own Python packages.
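
To give a flavour of how these toolkits are used, here is a minimal sketch of a group-fairness check with Fairlearn's MetricFrame, which disaggregates any scikit-learn-style metric by a sensitive attribute. The labels, predictions and gender attribute below are synthetic and purely illustrative, not from any real model.

    # A minimal sketch of a group-fairness check with Fairlearn.
    # Assumes: pip install fairlearn scikit-learn numpy
    import numpy as np
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, demographic_parity_difference

    # Hypothetical labels and predictions from a binary classifier,
    # with gender as the sensitive feature (all values made up).
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    gender = np.array(["F", "F", "M", "F", "M", "M", "M", "F"])

    # MetricFrame computes the chosen metric overall and per group.
    mf = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=gender,
    )
    print(mf.overall)       # accuracy across everyone
    print(mf.by_group)      # accuracy for each gender group
    print(mf.difference())  # largest gap between any two groups

    # Demographic parity difference: the gap in selection rates
    # between groups (0 means both groups are selected equally often).
    print(demographic_parity_difference(y_true, y_pred,
                                        sensitive_features=gender))

Because MetricFrame accepts any metric with a (y_true, y_pred) signature, the same pattern extends to precision, recall, selection rate and so on, while the IBM toolkits provide analogous entry points for fairness metrics, explanations and adversarial testing.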

Finally, here are the recordings of the two parts of the webinar series. Do catch the lively Q&A sessions at the end of each lecture when Prof Andreas fielded questions from our managers, engineers and apprentices.

Further Reading

  1. Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
  2. https://www.partnershiponai.org/about/#our-work
  3. http://moralmachine.mit.edu/
  4. https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico
  5. https://futureoflife.org/ai-open-letter/
  6. https://futureoflife.org/bai-2017/
  7. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  8. https://www.oecd.org/going-digital/ai/principles/
  9. https://www.baai.ac.cn/news/beijing-ai-principles-en.html
  10. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices by Jessica Morley et al.
  11. https://hdsr.mitpress.mit.edu/pub/l0jsh9d1
  12. https://algorithmwatch.org/en/ai-ethics-guidelines-inventory-upgrade-2020
  13. Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical by Luciano Floridi
  14. https://ethicsinaction.ieee.org/#read
  15. https://standards.ieee.org/industry-connections/ecpais.html
  16. https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework
  17. https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/Primer-for-2nd-edition-of-AI-Gov-Framework.pdf
  18. https://ico.org.uk/about-the-ico/news-and-events/ai-blog-an-overview-of-the-auditing-framework-for-artificial-intelligence-and-its-core-components
  19. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html
  20. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims by Miles Brundage et al.
  21. https://aif360.mybluemix.net/resources
  22. https://aix360.mybluemix.net/resources
  23. https://github.com/IBM/adversarial-robustness-toolbox
  24. https://docs.microsoft.com/en-us/azure/machine-learning/concept-fairness-ml
  25. https://github.com/interpretml/interpret
  26. https://www.microsoft.com/en-us/research/publication/co-designing-checklists-to-understand-organizational-challenges-and-opportunities-around-fairness-in-ai/
