AI Governance? Data Governance?

Eliud Nduati
4 min read · Sep 4, 2024


We are living in the age of AI. Some of the theories about AI we studied or speculated about in school are already happening, and this is exciting. There are many arguments for and against this surge in the development of AI capabilities. Many people are busy trying their hand at it, and companies are busy restructuring their processes and systems to reap the benefits of AI.

This is amazing. But let’s take a different perspective. Despite all these developments, many of us have only just heard of AI governance, perhaps for the first time here. In simple terms, AI governance is the set of rules and policies that guide the development and use of AI systems. It is the framework that ensures AI is developed and applied ethically and safely. In this article and the ones that follow, I will attempt to break down the idea behind artificial intelligence governance, referring to books, papers, and articles that discuss the vast world of AI and AI governance.

To delve into AI governance, we must first understand what it entails. AI governance is not aimed at the AI we build but at the developers and programmers who create AI systems and tools, because they play a crucial role in ensuring that the systems they create are ethical and safe. In its innate form, AI is neutral, if there is anything of the sort; it is our morals, behaviors, and training data that shape what AI tools become.

We train AI systems on data that focuses on accomplishing our intended goals. If the idea behind a tool is maximizing our investment returns, we will focus on data that directly relates to how the market behaves under different scenarios and conditions; that way, the AI tool will focus squarely on achieving this goal. The values we want the AI tool to adopt come not from the programming approach but from the data we use to train it. If we train the system on data that presents shorting stocks as the best way to make money, what will happen when we deploy it to make a profit? The data will determine what the AI system does: it will focus on shorting as many stocks as possible to generate profit. Note that in this instance the AI tool has achieved, and is continuously working to achieve, its intended goal; shorting the stocks is not a glitch but the function we trained it to accomplish.
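To make this concrete, here is a deliberately tiny sketch, with entirely hypothetical data and labels, of a "strategy picker" trained only on examples where shorting was marked profitable. The point is not the algorithm (it is just frequency counting) but that the tool's values come from the data:

```python
from collections import Counter

# Hypothetical training set: (market_condition, action_labelled_profitable).
# Every example, even a rising market, is labelled "short".
training_data = [
    ("falling", "short"),
    ("volatile", "short"),
    ("rising", "short"),
    ("flat", "short"),
]

def train(data):
    """Learn the most frequently 'profitable' action per market condition."""
    by_condition = {}
    for condition, action in data:
        by_condition.setdefault(condition, Counter())[action] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in by_condition.items()}

policy = train(training_data)

# The tool now recommends shorting regardless of the market state.
# That is not a glitch; it is a faithful reflection of its training data.
print(policy["rising"])  # short
```

Swap in balanced data (some conditions labelled "buy" or "hold") and the same training code produces a very different policy; nothing about the code itself encodes the shorting behavior.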

So, what do we mean when we say AI governance? It is not just a set of rules, guardrails, and policies. It is a powerful tool for ensuring that AI tools and systems are, and remain, ethical and safe. It establishes frameworks that direct AI research, development, and applications toward safety, fairness, and respect for human life. This means AI governance is not a threat to AI developers but a beacon of hope: a promise of a future where AI tools are created with fairness and safety in mind from the word go.

But if we are working to ensure AI is aligned with social values, how exactly do we achieve this? Yes, humans are biased and sometimes unethical. But the same humans can also teach AI to be ethical and fair. This is where AI governance steps in: it provides the rules and guardrails needed to prevent the resulting AI tool from inheriting the biases in the data used to train it, guiding us toward a future where AI is fair and unbiased.

Let’s go back to the stock-profit tool we conceptualized earlier. If it is overfitted to recognize shorting stocks as the only way to make revenue, then the development has failed, because there are other ways to make revenue that do not involve shorting. Now imagine a case where two AI tools are using the same approach to make profits: how far would they go in shorting stocks, and what would be the impact on the market? A roughly similar case was witnessed on Black Monday in 1987. Program trading was used to remove human decisions from the equation, generating more buy orders when a price increase was detected and corresponding sell orders when the market price fell. While computer program trading was not the sole cause of Black Monday, it contributed significantly to the more than 20% decline in the stock market that day. This is a stark reminder of the potential risks of AI development without proper governance.
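The feedback loop described above can be sketched with a toy simulation. This is not a model of 1987, and all the numbers (shock size, price impact, number of bots) are invented for illustration; it only shows how identical momentum-following programs, each selling on a down-tick, turn a small shock into a sustained slide:

```python
def simulate(initial_price=100.0, shock=-1.0, steps=10, impact=0.5, n_bots=2):
    """Each step, every bot sells if the last price move was down
    (and buys if it was up); each order moves the price by `impact`
    in the order's direction. All parameters are hypothetical."""
    price = initial_price + shock   # a small external shock starts things off
    last_move = shock
    history = [price]
    for _ in range(steps):
        # All bots react identically to the same signal, in unison.
        direction = 1 if last_move > 0 else -1
        move = direction * impact * n_bots
        price += move
        last_move = move
        history.append(price)
    return history

prices = simulate()
# A 1-point drop snowballs: each step the bots' own selling
# produces the down-tick that triggers the next round of selling.
print(prices[0], prices[-1])  # 99.0 89.0
```

Note that nothing here is a bug: each program does exactly what it was built to do, which is precisely why coordination failures like this are a governance problem rather than a debugging problem.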

All this implies that the need for AI governance arises from the flaws the human element introduces into AI development, training, and maintenance. Establishing AI governance addresses these flaws. Biases, discrimination, and safety issues can be tackled through sound AI policies, regulation, and data governance, so that models are trained on the right type of data, free from bias and discrimination. Before you ask what the right data is, remember how early facial recognition systems were biased. Please read about it here: Biased technology.
