7 episodes

A lawyer, a technologist and a layman walk into a bar and discuss a recent legal case filing against an AI company. These are not scripted. What you’ll hear are real conversations as we talk, argue, and cajole each other to think deeper about the legal aspects of using AI, and what people should be concerned about when using one of the platforms.

AI the Law & You Mark Miller, Shannon Lietz, Joel MacMull

    • Technology


    AWS Whistleblower says Amazon is Ignoring its own AI Policies

    From Shannon Lietz: For companies that are starting to adopt things like AI, and Copilot, and ChatGPT, and LLaMA, and you name whatever LLM is out there: are they evaluating their policies in relation to how data gets used? My perspective is, if you're going to bring in public data, or you're going to bring in copyrighted materials, note that, because it could be a concern. It could end up in something that gets flagged for future lawsuits.
    From Mark Miller: In today’s episode, Joel, Shannon and I discuss a case where an employee at AWS blew the whistle, saying the company is ignoring its own policies when it comes to consumption of data for its AI engine. Is being told to ignore company policy illegal? You might be surprised at how our trio comes to grips with the concept.

    • 22 min
    The Story Behind the Google Fine by the French Competition Authority

    From Joel MacMull: The French competition authority last week said the tech giant, Google, failed to negotiate fair licensing deals with media outlets and did not tell them it was using their articles to train its chatbot. And as a consequence, it fined Google about 270 million US dollars. The fine was in Euros, but that's roughly what we're dealing with in terms of a conversion rate.
    So it's not nothing, but also for one of the largest tech companies in the world, it's, it's, you know, certainly not going to make a material difference to their bottom line. But it outlines, I think, some interesting issues, particularly when we contrast that to what's going on now in the United States and some of the litigation we're seeing against OpenAI.
    From Mark Miller: The real issue here, as I read the French decision, is that, to put it in American terms, Google is not negotiating in good faith.
    Part of the negotiation is who you negotiate with and who pays whom when things are settled. And I think that's the case Google is coming back with: you haven't defined the rules of the game, or else they keep changing, so we don't even know who we're dealing with anymore.

    • 16 min
    Air Canada: Chatbot is a legal entity responsible for its own actions

    In today’s episode, we talk about how Air Canada tried to defend itself in court by contending that the chatbot on its company site is its own entity and is separate from Air Canada. A lot of the “fun” in this case is the absurdity of the defense. However, it’s a good case for thought experiments, thinking about the near term future of AI and who ultimately is responsible for its output. 
    While prepping for this call, I really did dig into the case here because of the absurdity of it in my mind. Joel, give us a brief overview of what the case is and who the complainants and defendants are.

    From Joel MacMull, Lawyer
    What makes this resonate, at least with me, is the fact that we have a very sympathetic plaintiff. A young man buys an airline ticket from Vancouver to Toronto in connection with his deceased grandmother. Prior to buying the ticket, he is on Air Canada's website having a conversation with its chatbot, and he asks about bereavement fare.

    And the sum and substance of the message he receives, again in this conversation he's having with the chatbot, is that within 90 days after making his purchase he can essentially claim bereavement. And the chatbot, in providing him with that textual response, actually includes a hyperlink to another Air Canada webpage, which has additional terms about bereavement.

    It so happens, however, that the page at that hyperlink is at odds with what the chatbot is saying. That page says, in essence, that bereavement fare has to be paid for, or otherwise dealt with, on the front end. You can't do it after the travel has occurred.

    But, from the facts of the case, it doesn't look like this young man did that, instead just relying on the chatbot. Long story short, he travels to Toronto, and within the 90-day window he seeks his reimbursement, consistent with the information he received from the chatbot. And, from what I understand, he exchanges some emails with Air Canada, and they say, "Hey, you know what?

    The statement you received from the chatbot is erroneous. We'll flag that and get it corrected." But, from what I understand, they refused to provide him with the discount of his bereavement fare, which, according to the opinion, was something to the tune of 600, the difference between the full fare and the bereavement fare he otherwise would have been entitled to.

    • 30 min
    Introducing AI host, AILAH: The George Carlin deepfake video case. Is it defensible?

    In today's episode, we investigate a case featuring a deep fake of a George Carlin comedy routine. We're also adding a new host, the voice of AILAH, an AI engine who will be giving us real time responses to questions within the discussion.
    Shannon and I were intrigued by some of the ideas AILAH came up with, which moved our thoughts in a direction different from where we originally intended. Adding AILAH to the mix turned this into a very interesting discussion.
    Mark: "AILAH, give us an overview of the case."
    AILAH: In the George Carlin deepfake case, a video surfaced online purportedly showing the late comedian George Carlin delivering a monologue about the COVID-19 pandemic. However, it was later discovered that the video was a deepfake, a digitally altered video created using artificial intelligence to map Carlin's likeness onto another person's performance.
    This incident highlighted concerns about the rise of deep fake technology and its potential to spread misinformation and deceive viewers. It also underscored the importance of verifying the authenticity of digital content and raising awareness about the risks associated with deep fakes in the digital age.

    • 23 min
    The Legal Confusion between AI and Generative AI in the Courts

    You are listening to AI, the Law and You, a show where a lawyer, a layman, and a technologist discuss the current state of AI in court filings and the court's response to those filings. These are not scripted talking points. What you hear are real conversations between Joel MacMull (the lawyer), Shannon Lietz (the technologist), and Mark Miller (the layman). In today's episode, we discussed the confusion in the court system about the differences between AI and Generative AI. We'll start with Joel giving a brief overview of the current state of AI in the courts.
    From Joel MacMull (the lawyer)
    There are now in the neighborhood of a half dozen federal judges that have issued standing orders as it relates to the use of AI in court filings. There's no outright prohibition barring the use of I'll say Generative AI. One of the problems with the standing orders is that at least some of them don't distinguish between Generative AI and AI. That's an issue because there's a lot of non-generative AI tools out there that are used every day that I think are really helpful.
    Putting that aside for a moment, these orders basically say that if you as a lawyer are going to be filing something, you are making a representation that to the extent that you used any AI tool, Generative AI tool, that you vetted it. That's another distinction.
    Some standing orders insist that the filer vet the sources. Others just simply say that the material has been vetted. Meaning, I guess, implicitly, that you could kick that over to someone else to do it. But the bottom line is some courts have said, "If you're going to use these materials, you're going to do so with the expectation that you have vetted them or that they have been vetted." Meaning that you're not going to get hallucinations. We're not going to get some of those false citations that we've talked about a few times: the Schwartz case in the summer, and most recently the issue with Michael Cohen serving up to his lawyer a series of really specious citations.

    • 22 min
    AI Copyright Law for Non-Humans, with Joel MacMull, Shannon Lietz, and Mark Miller

    In today's episode, we examine the case of Stephen Thaler trying to copyright protect a piece of artwork generated by his instructions to an AI creation engine. We'll start with Joel's overview of the case.
    The Thaler case is interesting for a couple of reasons. One is obviously that it deals with AI, but it is also an extension of existing legal principles. I mean, the long and short of it is that Stephen Thaler applied for a copyright with the Copyright Office. He indicated that he was the claimant, but that the author was essentially his creativity machine. This was essentially some code that he developed in an effort to create an image. The Copyright Office rejected his application on grounds that, at least as applied for, there did not appear to be any human authorship.
    And, oh, backstory: one of the requirements of the Copyright Office, reaffirmed as recently as February of this year, is that human authorship is necessary for subject matter to be amenable to copyright in the United States.

    • 24 min
