Why You Need Good AI Governance
It may not be everyone's favorite corporate function…but it's very necessary.
The Path #16
Written by Nate Buchanan, COO & Co-Founder, Pathfindr
No corporate buzzword elicits as many reactions - most of them negative - as “governance”. Whether it’s a Forum, Committee, or Tribe, anything governance-related is often perceived as something that gets in the way of progress, even if people acknowledge that it’s necessary.
When it comes to AI, this tension is particularly stark because of the nature of the technology. It’s exciting, it changes quickly, and people want to play around with it…but it’s also unpredictable and can put companies and their customers at risk.
Hence the need for governance - whether you like it or not.
But AI governance doesn’t have to be cumbersome and overbearing. In fact, there are some simple ways to integrate it into your existing governance processes so that you don’t need a standalone set of meetings, reports, or templates that nobody wants to attend or deal with.
First, start by understanding what governance is “for” at your company. Most teams think of governance as a system of controls and checks that ensure that goals are being achieved in the right way. This might include considerations such as:
Cost-Effectiveness - is the project on budget?
Quality - is the work product being created at a high level of quality?
Speed - are milestones being met as scheduled?
Risk - is the team or company exposed in some way?
These elements are usually discussed in a forum to ensure that they are within tolerances; if they’re not, corrective action is usually required.
AI projects can be included in these discussions, as cost, quality, speed and risk are all important for them as well. But the tolerances may need to be different due to the unique nature of the tech. For example, outputs from LLMs can be unpredictable, so holding a generative AI system to the same quality standard that you would an e-commerce website is unreasonable. Similarly, it might be easier to run over budget when building a chatbot, because the gap between technical capability and desired customer experience may be wider than expected, resulting in the need for more experimentation and trial and error. And the risk from unproven AI applications has been well documented, but suffice it to say that it needs to be a central topic of conversation at any governance forum that’s addressing AI.
So you can use existing governance processes to manage AI projects - great. You might be wondering, is there anything unique about AI that requires a new process or capability to be created in order to govern it effectively?
I’m glad you asked. The answer is yes.
Testing is simultaneously the most important and least understood element of AI governance. Most organisations have things they’d like to improve about their current testing capabilities (to put it mildly). But asking test teams to take on the additional challenge of learning how to test AI applications - with their experimental nature and unpredictable outputs - can be quite daunting. Yet without a good testing framework that is specific to AI, it’s difficult to get the inputs you need to participate in a governance forum: you can’t contribute to the conversation unless you can articulate the current state of quality in an AI solution.
Unpacking the approach for testing AI applications is outside the scope of this week’s edition - we’ll cover that in a future post. Here are a few approaches to AI testing from a governance perspective to consider in the meantime:
Rethink Requirements - requirements for AI applications may not be as binary as test teams are used to, and that’s OK. Wanting a chatbot to give the correct answer 100% of the time isn’t feasible with today’s technology - but if you set the threshold as “provide a usable/acceptable response 80% of the time”, that can be achieved and tested against with a sufficiently large sample set of users (there’s a sketch of what that kind of check might look like after this list). When you’re updating a governance forum on whether or not requirements have been met, it’s easier to have the conversation if you’re able to provide that type of context.
Deconstruct Defects - AI applications “in the wild” will be used by lots of different people who may be expecting them to behave a certain way, and they might raise defects if they get an answer they don’t like. It’s important that testers are able to evaluate each one and determine whether it’s a true defect or behavior that can be explained by the unpredictable nature of AI - the triage sketch below shows one simple way to start making that call. Governance forums will need to be taken on the journey to understand the difference.
Surprise Scripts - because you usually won’t have straightforward requirements when working with AI, your test scripts won’t always have a set of predefined steps and expected results. Exploratory testing - session-based, unscripted testing that focuses on exercising capabilities along a wide range of user journeys - is particularly well suited to AI because it allows teams to build a more comprehensive picture of application behavior, which can give those responsible for governance confidence in the health of the solution.
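To make the first two ideas a bit more concrete, here’s a minimal sketch in Python of what a threshold-based requirements check and a defect triage rerun might look like. The ask_chatbot and is_acceptable functions are hypothetical placeholders for your own system under test and your own judgment of acceptability (human review or an automated evaluator), and the 80% target and rerun count are illustrative, not prescriptive.

```python
from typing import Callable

def meets_quality_threshold(
    prompts: list[str],
    ask_chatbot: Callable[[str], str],          # hypothetical: calls the system under test
    is_acceptable: Callable[[str, str], bool],  # hypothetical: judges (prompt, response) pairs
    threshold: float = 0.80,                    # the "usable/acceptable 80% of the time" target
) -> bool:
    """Check whether responses are acceptable at least `threshold` of the time
    across a sample of prompts large enough to be meaningful."""
    results = [is_acceptable(p, ask_chatbot(p)) for p in prompts]
    pass_rate = sum(results) / len(results)
    print(f"Acceptable responses: {pass_rate:.0%} (target {threshold:.0%})")
    return pass_rate >= threshold

def triage_defect(
    prompt: str,
    ask_chatbot: Callable[[str], str],
    is_acceptable: Callable[[str, str], bool],
    reruns: int = 10,                           # arbitrary; pick what your risk appetite supports
) -> str:
    """Rerun a reported defect to see whether the failure is consistent
    (more likely a true defect) or intermittent (more likely model variability)."""
    failures = sum(not is_acceptable(prompt, ask_chatbot(prompt)) for _ in range(reruns))
    if failures == reruns:
        return "consistent failure - treat as a true defect"
    if failures == 0:
        return "could not reproduce - gather more examples before raising"
    return f"intermittent - failed {failures}/{reruns} reruns, likely model variability"
```

The point isn’t the code itself - it’s that numbers like a pass rate against an agreed target, or how often a reported defect actually reproduces, are exactly the kind of context that lets a governance forum have a sensible conversation about quality and risk.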
AI governance isn’t something to be afraid of - in fact, it's necessary, and it needn’t be invasive. The more teams can adapt their processes to the unique needs of AI projects, the more they’ll be able to manage cost, quality, speed and risk without sacrificing progress.
Until next week!
Join our workshop - pathfindr.ai/workshop