Video: AI Planning - Principle 4: Advisability

In this video, Graeme Cox explains his fourth principle of planning an AI Strategy - Advisability.



Set up a call with Graeme

Please submit this form if you would like to set up a call with Graeme.





VIDEO TRANSCRIPT

Coming back to this capability and advisability piece: how do I handle AI to make sure that I'm delivering for my business in the best possible way?

One of those considerations is how you take this to your employees.

Now, they need to see both that you're not trying to replace them, at least at this point, and that you are trying to aid them: that their efficiency will be improved and their lives will be better through the AI that you deliver to them. We hope. And that genuinely is the case, by the way, for software engineers and co-pilot tools. They all love using co-pilots once they get into it, because it takes the drudgery away from software development.

The flip side of that is that you need to make sure you're not overloading your employees by pushing too much data on them, too many new things that they have to handle, and that your rush into AI doesn't result in employee dissatisfaction because you're giving them too many tools that aren't thought through and structured in their delivery.

There are some deeper issues beneath that in terms of delivering AI responsibly.

So first of all, the explainability of the AI that you put in place. It's all very well for AI to come up with a predictive solution, for example to say you should order ten thousand more units of this product next month. But without being able to understand why it is suggesting that you do that, how do you build trust in the solution?

Explainability in AI can be, not always, but often absolutely vital. And it's not always possible in black-box AI models; deep neural networks are very opaque in terms of how they make decisions.

To the extent that, if we come back to generative AI, you will undoubtedly have heard that there are emergent properties of large language models that even the people who built them can't explain; they do things that are surprising.

Being able to explain why they do things is a challenge at the moment. And so understanding your need for explainability in the models may drive you down different routes and different solutions.
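To make that concrete, here is a minimal sketch of one common explainability technique, permutation importance, applied to a hypothetical demand-forecasting model. The feature names and data below are invented for illustration; the point is simply that for a tabular model you can ask which inputs the prediction actually leans on.

```python
# Minimal sketch: surfacing which inputs drive a forecast, so a
# "order ten thousand more units" suggestion can be questioned.
# The dataset and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["last_month_sales", "promo_spend", "season_index", "price"]
X = rng.normal(size=(500, len(features)))
# Synthetic demand driven mostly by last month's sales and promo spend.
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature
# degrade the model's score? Large drops mark influential inputs.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18}: {score:.3f}")
```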

Fairness. Absolutely crucial. So again, we hear a lot about bias in models. A classic example: until very recently, if you asked image generation tools to create an image of a CEO, they would create images of white middle-aged males.
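A minimal sketch of the kind of check this implies, assuming you can tag each prediction with a group attribute (the predictions and groups here are invented): compare the model's positive-prediction rate across groups, a simple demographic-parity measure.

```python
# Minimal sketch of a basic fairness check: compare a model's
# positive-prediction rate across groups (demographic parity).
# Predictions and group labels are invented for illustration.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model outputs
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = {g: predictions[group == g].mean() for g in np.unique(group)}
print("positive rate per group:", rates)
print("demographic parity gap:", max(rates.values()) - min(rates.values()))
```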

Robustness. Models that are overfitting can be very brittle, which means you can get very, very good answers when you stay on the narrow track, but as soon as you go outside the bounds of the perfect question you start getting rubbish back; the model just doesn't deliver across a broad range of data.
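Here is a minimal sketch of how that brittleness shows up in practice, on synthetic data: an unconstrained decision tree scores almost perfectly on the data it has memorised and noticeably worse on held-out data, while a constrained one typically generalises better.

```python
# Minimal sketch of spotting the brittleness described above:
# an overfit model scores well on data it has seen and poorly
# on held-out data. Synthetic data, for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# An unconstrained tree memorises the training noise.
overfit = DecisionTreeRegressor(random_state=1).fit(X_train, y_train)
print("overfit train R^2:", round(overfit.score(X_train, y_train), 2))
print("overfit test  R^2:", round(overfit.score(X_test, y_test), 2))

# A depth-limited tree trades training fit for generalisation.
regular = DecisionTreeRegressor(max_depth=3, random_state=1).fit(X_train, y_train)
print("regular train R^2:", round(regular.score(X_train, y_train), 2))
print("regular test  R^2:", round(regular.score(X_test, y_test), 2))
```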

Privacy. Absolutely crucial, because generally we're talking about putting some of our data crown jewels into these systems. Do you know that that data is not being exposed, or copied and reused for other purposes?
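One small illustration of the precautions this question points at: redacting obvious identifiers before a record ever leaves your systems for a third-party model. The patterns below are illustrative only, not a complete PII scrubber.

```python
# Minimal sketch of one privacy precaution: redact obvious
# identifiers before any record leaves your systems for a
# third-party model. Patterns are illustrative, not exhaustive.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact jane.doe@example.com or +44 20 7946 0958."))
# -> Contact [EMAIL] or [PHONE].
```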

And transparency. Transparency is really an organizational piece around how you communicate your AI strategy back to your stakeholders, both internal and external. As you move down the road of developing AI models and using them in the business, I strongly believe that making clear statements to your internal employees, your customers, and your external stakeholders about what that usage will be and what you're intending to do with AI really matters. AI is one of the big transformational technologies we will see in our lifetime, if not the transformational technology, and it needs to be treated as such.