AI, the Law & You

Mark Miller, Shannon Lietz, Joel MacMull

A lawyer, a technologist, and a layman walk into a bar and discuss recent legal case filings against AI companies. These are not scripted. What you’ll hear are real conversations as we talk, argue, and cajole each other to think deeper about the legal aspects of using AI, and what people should be concerned about when using one of these platforms.

Episodes

  1. 03/05/2024

    Air Canada: Chatbot is a legal entity responsible for its own actions

    In today’s episode, we talk about how Air Canada tried to defend itself in court by contending that the chatbot on its company site is its own entity, separate from Air Canada. A lot of the “fun” in this case is the absurdity of the defense. Still, it’s a good case for thought experiments about the near-term future of AI and who ultimately is responsible for its output. While prepping for this call, I really did dig into the case because of how absurd it seemed to me. Joel, give us a brief overview of what the case is and who the complainants and defendants are.

    From Joel MacMull (the lawyer): What makes this resonate, at least with me, is the fact that we have a very sympathetic plaintiff. A young man buys an airline ticket from Vancouver to Toronto in connection with his deceased grandmother. Prior to buying the ticket, he is on Air Canada's website having a conversation with its chatbot, and he asks about bereavement fares. The sum and substance of the message he receives is that within 90 days after making his purchase, he can essentially claim the bereavement fare. The chatbot, in providing him with that textual response, actually includes a hyperlink to another Air Canada webpage with additional terms about bereavement. It so happens that the page behind that hyperlink is at odds with what the chatbot is saying: it says, in essence, that a bereavement fare has to be requested and paid for on the front end. You can't do it after the travel has occurred. From the facts of the case, it doesn't look like this young man did that, instead relying on the chatbot. Long story short, he travels to Toronto and, within the 90-day window, seeks his reimbursement, consistent with the information he received from the chatbot.
    From what I understand, he then engages in some emails with Air Canada, and they say, "Hey, you know what? The statement you received from the chatbot is erroneous. We'll flag that and get it corrected." But, from what I understand, they refused to provide him with the bereavement discount, which, according to the opinion, was something to the tune of 600, the difference between the full fare and the bereavement fare he otherwise would have been entitled to.

    31 min
  2. 01/30/2024

    The Legal Confusion between AI and Generative AI in the Courts

    You are listening to AI, the Law and You, a show where a lawyer, a layman, and a technologist discuss the current state of AI in court filings and the courts' responses to those filings. These are not scripted talking points. What you hear are real conversations between Joel MacMull (the lawyer), Shannon Lietz (the technologist), and Mark Miller (the layman). In today's episode, we discuss the confusion in the court system about the differences between AI and Generative AI. We'll start with Joel giving a brief overview of the current state of AI in the courts.

    From Joel MacMull (the lawyer): There are now in the neighborhood of a half dozen federal judges who have issued standing orders as it relates to the use of AI in court filings. There's no outright prohibition barring the use of, I'll say, Generative AI. One of the problems with the standing orders is that at least some of them don't distinguish between Generative AI and AI. That's an issue, because there are a lot of non-generative AI tools out there, used every day, that I think are really helpful. Putting that aside for a moment, these orders basically say that if you as a lawyer are going to file something, you are making a representation that, to the extent you used any AI tool, or Generative AI tool, you vetted it. That's another distinction. Some standing orders insist that the filer vet the sources. Others simply say that the material has been vetted, meaning, I guess, implicitly, that you could kick that over to someone else to do. But the bottom line is some courts have said, "If you're going to use these materials, you're going to do so with the expectation that you have vetted them or that they have been vetted," meaning that we're not going to get hallucinations and we're not going to get some of those false citations that we've talked about a few times: the Schwartz case in the summer, and most recently the issue with Michael Cohen serving up to his lawyer a series of really specious citations.

    22 min
