36 min

CAIR 62: Overcome The 4 Pitfalls To AI Ethics !! ClickAI Radio

    • Entrepreneurship

Grant
Welcome everybody. In this episode, we take a look at the four pitfalls to AI ethics and ask: are they solvable?
Okay, hey, everybody. Welcome to another episode of ClickAI Radio. So glad to have in the house today, from Plainsight AI, what a privilege, Elizabeth Spears. Hi, Elizabeth.
Elizabeth
Hey, Grant. Thanks for having me back.
Grant
Thanks for coming back. You know, when we were talking last time, you threw out this wonderful topic of pitfalls around AI ethics. It's such a common throwaway phrase: everyone's like, oh, there are ethics issues around AI, let's shy away from it, therefore it's got a problem, right? And I loved how you came back after our episode. It's like you pulled me aside in the hallway, metaphorically: "Grant, let's do a topic on the pitfalls around some of these ethical issues." You hooked me. I was like, oh, perfect, that's a wonderful idea.
Elizabeth
So typically, I think there are so many high-level conversations about ethics and AI, but I feel like we don't dig into the details very often of when that happens and how to deal with it. And like you said, what the common pitfalls are.
Grant
It is. And, you know, what's interesting is that in the AI world in particular, it seems like so many of the ethical arguments come up around the image style of AI, right? Ways in which people have misused or abused AI, either for bad use cases or with secretive or bad approaches. So you are the perfect person to talk about this and cast the dagger into the heart of some of these mythical ethical things, or maybe not, right? All right, so let's talk through some of these common pitfalls. There were four areas that you and I bantered about. You came back and said, okay, let's talk about bias, let's talk about inaccuracy in models, a bit about fraud, and then perhaps something around legal or ethical consent violations. Those were the four we started with; we don't have to stay on those. But let's tee up bias. Let's talk about ethical problems around bias.
Elizabeth
So there are really several types of bias, and often bias and inaccuracy can conflate, because they can cause each other. I have examples of both, and some where it's really both bias and inaccuracy happening. But one type is not modeling your problem correctly, so I'll start with an example. Say you want to detect safety in a crosswalk, a relatively simple kind of thing, and you want to make sure that no one is sitting in that crosswalk, because that would generally be a problem. So you do body pose detection, right? And if you aren't thinking about this problem holistically, you say, all right, I'm going to do sitting versus standing. Now the problem with that is: what about a person in a wheelchair? You would then be detecting a perceived problem, because you think someone is sitting in the middle of the crosswalk. It's really about accurately defining that problem, and then making sure that's reflected in your labeling process. And that flows into another whole set of problems, which is when your test data and your labeling process are a mismatch with your production environment. So one of the things we really encourage for our customers is collecting data as close to production as possible, or ideally the actual production data you'll be running your models on, instead of having very different test data sets that you then deploy into production, where there can be these mismatches. And sometimes that's a really difficult thing to accomplish.
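The crosswalk example above can be sketched in code. This is a hypothetical illustration, not Plainsight's actual pipeline: the `Detection` record, the pose labels, and the dwell-time threshold are all assumptions made up for the sketch. The point is the problem framing: a "sitting vs. standing" rule false-alarms on a wheelchair user (whom a pose model would likely label "sitting"), while reframing around the real safety condition, anyone remaining in the crosswalk too long, does not depend on pose at all.

```python
# Hypothetical sketch of the "sitting vs. standing" pitfall.
# Labels, fields, and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Detection:
    """One tracked person in the crosswalk (simplified)."""
    pose: str                  # e.g. "standing" or "sitting"
    seconds_in_crosswalk: float


def naive_alert(d: Detection) -> bool:
    # Pitfall: framing the problem as pose classification.
    # A wheelchair user is likely labeled "sitting" and triggers
    # a false alarm even while moving normally through the crosswalk.
    return d.pose == "sitting"


def reframed_alert(d: Detection, dwell_threshold: float = 10.0) -> bool:
    # Better framing: alert on the condition we actually care about,
    # a person remaining in the crosswalk too long, regardless of pose.
    return d.seconds_in_crosswalk > dwell_threshold


# A wheelchair user crossing in 4 seconds, pose-labeled "sitting":
wheelchair_user = Detection(pose="sitting", seconds_in_crosswalk=4.0)
print(naive_alert(wheelchair_user))     # false alarm
print(reframed_alert(wheelchair_user))  # no alarm
```

The same framing choice then has to be reflected in the labeling process Elizabeth describes: if the labels encode "sitting vs. standing" rather than "dwelling vs. passing through," no amount of model accuracy fixes the bias.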
Grant
Yeah, so I was gonna ask