Why Every Organization Needs a Framework for the Ethical Implementation of Artificial Intelligence

Introduction

On May 13, 2024, the Ontario government introduced Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024. If enacted, the Bill will require the creation of accountability frameworks for the use of artificial intelligence systems at public sector entities, including those covered by the Freedom of Information and Protection of Privacy Act and the Municipal Freedom of Information and Protection of Privacy Act, as well as children’s aid societies and school boards.

Bill 194 is the first Ontario legislation to directly regulate AI. However, the Ontario government has applied its Trustworthy Artificial Intelligence (AI) Framework to government services since 2021.

Why are these frameworks necessary? Because artificial intelligence carries three main pitfalls: algorithmic bias, lack of transparency/explainability, and privacy intrusion. In this article we discuss each of these pitfalls and why an ethical or accountability framework makes sense.

The Three Pitfalls of AI

a)   Algorithmic Bias

A colloquial way to express the issue of algorithmic bias is “garbage in, garbage out”. Artificial intelligence works by learning from a data set. If the data set is incomplete, inaccurate, or wrongly chosen for the purpose, then the output may be “biased”. It is important to emphasize that algorithms are not inherently biased; they inherit bias from the data and human choices made during training. A well-known example is facial recognition software that could not recognize faces of colour because the data set the model was trained on did not include enough faces of colour. Another example, in the human resources context, is the recruiting software used by Amazon that consistently recommended male candidates over female candidates because the data set it was trained on did not include enough female representation.

Organizations implementing AI need to understand how the machine was trained to be confident that the issue of bias is addressed. One key approach is to ensure that the data used to train the algorithm is representative of the population and free from bias, which requires careful data collection and sampling methods. Additionally, models should be updated regularly to avoid “model drift” and the propagation of bias over time. Mathematical de-biasing techniques can also be used to adjust for bias in the data. Finally, organizations should consider establishing protocols and processes to monitor for bias, such as engaging external ethicists to review AI technology or assigning roles within the organization to identify and mitigate bias.
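To make the de-biasing idea concrete, the following is a minimal sketch of one well-known technique: reweighting training examples so that a protected attribute is statistically independent of the outcome label (in the spirit of the Kamiran and Calders reweighing method). The column names and toy data are illustrative assumptions, not a real dataset.

```python
# A minimal sketch of one "mathematical de-biasing" approach: reweighting
# training examples so that a protected attribute (here the hypothetical
# column "gender") is statistically independent of the outcome label.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Compute per-row weights: expected frequency / observed frequency.

    If a (group, label) combination is under-represented relative to what
    statistical independence would predict, its rows get weights above 1.
    """
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        expected = p_group[row[group_col]] * p_label[row[label_col]]
        observed = p_joint[(row[group_col], row[label_col])]
        return expected / observed

    return df.apply(weight, axis=1)

# Illustrative toy data echoing the Amazon example above: women are
# under-represented among positive ("hired") outcomes.
df = pd.DataFrame({
    "gender": ["M"] * 8 + ["F"] * 2,
    "hired":  [1, 1, 1, 1, 1, 1, 0, 0, 1, 0],
})
df["sample_weight"] = reweighing_weights(df, "gender", "hired")
print(df)  # hired women receive weights above 1, hired men below 1
```

These weights can then be passed to most scikit-learn estimators through the sample_weight argument of fit(), nudging the trained model toward parity across groups.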

b)   Transparency/Explainability

There are several issues with transparency and explainability in AI. One key issue is that the way neural networks and machine learning algorithms work makes it difficult to understand how decisions are made. This is often referred to as the “black box problem.” Another issue is that explanations of AI systems are not standardized, which makes it difficult to assess them.

There are several potential solutions to these issues, including creating explanation-producing systems that are designed to simplify interpretation, developing best practices and standards for explainability, and focusing on methods for explaining black-box models. Organizations that implement AI to assist with decision-making should turn their minds to the question of explainability. This is particularly important where the decision impacts an individual and may need to be defended in a legal proceeding. For example, an employer may need to justify how software that reviews employment applications eliminated a candidate from consideration for a position.
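As an illustration of one such method, the following sketch uses permutation importance, a model-agnostic technique that scores each feature by how much randomly shuffling it degrades the model’s accuracy. The application-screening scenario and feature names are hypothetical, and the data is synthetic.

```python
# A minimal sketch of a model-agnostic explanation method: permutation
# importance scores each feature by the accuracy lost when it is shuffled,
# giving a first answer to "which factors drove the model's decisions?"
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical feature names for an application-screening model.
feature_names = ["years_experience", "education_level", "skills_score", "referral"]

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

An output like this does not fully open the black box, but it gives an organization a documented, repeatable account of which inputs mattered, which is a starting point if a decision must later be explained or defended.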

c)   Privacy

The most common mode of AI ethics failure is privacy intrusion. This can occur when data is obtained without consent, or when data is used for a purpose to which the individual did not consent. Other concerns include the potential for AI to be used to identify, track, and monitor individuals, as well as the possibility of AI inferring sensitive information about individuals.

Several statutes protect individual privacy in Ontario. Federally, there is the Personal Information Protection and Electronic Documents Act (PIPEDA) and the proposed Artificial Intelligence and Data Act (AIDA). Provincially, there is the Personal Health Information Protection Act (PHIPA), the Freedom of Information and Protection of Privacy Act (FIPPA) and the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA). Bill 194 will, among other things, amend FIPPA to impose new requirements to protect personal information collected and to prevent significant changes to the purpose for which personal information is used or disclosed. These amendments recognize the importance of protecting privacy in an increasingly connected environment where individuals are becoming more aware of how their personal data is collected and used. Compliance with privacy laws is a key consideration for any organization collecting or using personal data in the development of AI.

Ethical Frameworks for the Implementation of AI

Bill 194, if enacted, will require public sector organizations to develop and implement accountability frameworks respecting the use of AI “in accordance with the regulations.” There are currently no regulations. Based on the beta Principles for Ethical Use that apply to the Ontario government, we can anticipate that an accountability framework will require an organization to address the following:

  1. AI must be transparent and explainable. The organization must be transparent about the use and disclosure of data and should ensure that people understand the outcomes of AI decision-making so that they can discuss, challenge and improve them.
  2. AI must be good and fair. The AI should be designed to comply with human rights legislation and respect democratic values, including dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.
  3. AI must be safe. This means that AI includes appropriate safeguards throughout the system’s lifecycle, including testing, piloting, scaling, human intervention, and alternative processes in case a complete halt of system operations is required.
  4. Accountability and responsibility. Human accountability and decision-making over AI systems need to be clearly identified and distributed within the organization.
  5. AI should be human-centric. It should be designed for public benefit, and the public should be meaningfully engaged in its development and implementation.
  6. AI should be sensible and appropriate. Technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

Developing an ethical framework for implementing AI is also recommended for private sector organizations. First, a framework helps ensure that the AI system is trustworthy and legally compliant; by following one, organizations can avoid potential issues such as privacy violations, discrimination, and liability. Additionally, an ethical framework can help organizations demonstrate to their stakeholders that they are taking the potential harms of AI seriously. Finally, an ethical framework can help guide the design, development, and use of AI systems in a way that aligns with the organization’s values.

For an example of a publicly stated AI framework, Microsoft has published Putting principles into practice: How we approach responsible AI at Microsoft.

Not all organizations are at the forefront of developing AI tools for mass market implementation.  However, organizations intending to introduce AI products can also benefit from developing a framework to guide decision-making and implementation.  The following is a list of considerations for developing a framework:

  • Who within the organization should be part of the decision-making process? What will their responsibilities be?
  • What will be the criteria for the technical robustness and reliability of AI systems, including performance benchmarks?
  • Does the AI product address the problem it is intended to solve? Should the problem be solved by a machine alone, or should a human be in the loop?
  • What are the legal compliance issues, if any?
  • What will the policies regarding data and privacy be?
  • Can the AI be explained internally and externally?
  • What training will be required and implemented?
  • How will the AI be tested before it is implemented?
  • What will be the procedures for identifying, assessing, and mitigating risks associated with AI systems?
  • How will the AI be monitored for ongoing performance? (A minimal monitoring sketch follows this list.)
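As a minimal sketch of what ongoing performance monitoring could look like, the following compares a model’s live accuracy over a rolling window against the accuracy measured at deployment and flags a review when performance degrades beyond a tolerance. The baseline figure, tolerance, and window size are illustrative assumptions, not recommended values.

```python
# A minimal sketch of ongoing performance monitoring: track prediction
# outcomes in a rolling window and raise a flag when live accuracy falls
# more than a set tolerance below the accuracy measured at deployment.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling record of hits/misses

    def record(self, prediction, actual) -> None:
        """Log whether a prediction matched the eventual real outcome."""
        self.outcomes.append(prediction == actual)

    def check(self) -> bool:
        """Return True if live accuracy has drifted below the alert threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet to judge
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance

# Illustrative setup: a model that scored 92% accuracy at deployment.
monitor = PerformanceMonitor(baseline_accuracy=0.92)
# In production, record() would be called as ground-truth outcomes arrive;
# a True from check() would trigger the human review the framework assigns.
```

The specific thresholds matter less than the principle the framework should capture: someone is responsible for watching the numbers, and a degradation triggers a defined human response rather than going unnoticed.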

A framework does not need to be overly complicated to be effective.  The most important feature is to include humans who are educated regarding the three pitfalls of AI and given the responsibility to take the necessary steps to introduce AI ethically and responsibly.

References

https://www.forbes.com/sites/ariannajohnson/2023/05/25/racism-and-ai-heres-how-its-been-criticized-for-amplifying-bias/

https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

https://www.techopedia.com/definition/34940/black-box-ai