Some fundamentals of AI governance
Or at least what I think it takes to manage AI properly
Governance is about minimising risks and maximising benefits.
When people think of AI governance, they might think of risk assessments, audits, vendor reviews and a range of other measures. And it is true that governance is about measures like these, and many others that could be technical, organisational or legal in nature.
But there are also certain fundamentals that cannot be avoided when doing AI governance within an organisation. This is the case whether you are building your own AI systems, augmenting existing systems with AI or using systems built by others.
These fundamentals include at least the following:
Strategy. What exactly are you doing, and how?
Responsibility and authority. Who is going to do it?
Resources. What is needed to do it?
Impact and performance. How do you know if what you are doing is good?
These fundamentals of governance are oriented around the structures, processes and people that make governance work. They are what make the effective implementation of the requisite measures possible.
But in addition, the king of all governance fundamentals is culture.
Culture determines the answers to those fundamental governance questions, and thereby the kinds of measures that get implemented.