Tech Law Talks

Reed Smith
Tech Law Talks Podcast

Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.

  1. AI explained: AI and governance

    1 DAY AGO

    AI explained: AI and governance

    Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O’Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI, from a regional perspective looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining.  Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters.  Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office and our entertainment and media group, and really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. 
What is shaping how clients are approaching AI governance within the EU right now?  Cynthia: Thanks, Nikki. The EU has, let's say, just received a big piece of legislation, which went into effect on the 2nd of October, that regulates general purpose AI and high-risk general purpose AI and bans certain aspects of AI. But that's only part of the European ecosystem. The EU AI Act essentially will interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU, and the AI Act has essentially phased dates of effectiveness. But the biggest aspect of the EU AI Act in terms of governance lays out quite a lot, and so it's a perfect time for organizations to start thinking about that and getting ready for various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.?  Monique: So, you know, the U.S. is still evaluating from a regulatory standpoint where they're going to land on AI regulation. Not to say that we don't have legislation that has been put into place. We have Colorado with the first comprehensive AI legislation that went in. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which has really informed the governance process. And I think a lot of companies in the absence of regulatory guidance have been looking to the OMB memo to help inform what their process may look like. 
And I think the one thing I would highlight, because we're sort of operating in this area of unknown and yet-to-come guidance, that a lot of companies are looking to their existing governance frameworks right now and evaluating how they're both from a company culture perspective, a mission perspective, their relationship with consumers, how the

    28 min
  2. AI explained: AI and recent HHS activity with HIPAA considerations

    6 DAYS AGO

    AI explained: AI and recent HHS activity with HIPAA considerations

    Reed Smith partners share insights about U.S. Department of Health and Human Services initiatives to stave off misuse of AI in the health care space. Wendell Bartnick and Vicki Tankle discuss a recent executive order that directs HHS to regulate AI’s impact on health care data privacy and security and investigate whether AI is contributing to medical errors. They explain how HHS collaborates with non-federal authorities to expand AI-related protections; and how the agency is working to ensure that AI outputs are not discriminatory. Stay tuned as we explore the implications of these regulations and discuss the potential benefits and risks of AI in healthcare.  ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Wendell: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in healthcare. My name is Wendell Bartnick. I'm a partner in Reed Smith's Houston office. I have a degree in computer science and focused on AI during my studies. Now, I'm a tech and data lawyer representing clients in healthcare, including providers, payers, life sciences, digital health, and tech clients. My practice is a natural fit given all the innovation in this industry. I'm joined by my partner, Vicki Tankle.  Vicki: Hi, everyone. I'm Vicki Tankle, and I'm a digital health and health privacy lawyer based in Reed Smith's Philadelphia office. 
I've spent the last decade or so helping health industry clients, including healthcare providers, pharmaceutical and medical device manufacturers, health plans, and technology companies, navigate the synergies between healthcare and technology and advising on the unique regulatory risks that are created when technology and innovation far outpace our legal and regulatory frameworks. And we're oftentimes left managing risks in the gray, which as of today, July 30th, 2024, is where we are with AI and healthcare. So when we think about the use of AI in healthcare today, there's a wide variety of AI tools that support the health industry. And among those tools, a broad spectrum of the use of health information, including protected health information, or PHI, regulated by HIPAA, both to improve existing AI tools and to develop new ones. And if we think about the spectrum as measuring the value or importance of the PHI, the individual identifiers themselves, it may be easier to understand the far ends of the spectrum and the risks at each end. Regulators in the industry have generally categorized use of PHI in AI into two buckets, low risk and high risk. But the middle is more difficult and where there can be greater risk, because it's where we find the use or value of PHI in the AI model to be potentially debatable. So on one end of the spectrum, for example, the lower-risk end, there are AI tools such as natural language processors, where individually identifiable health information is not central to the AI model. Instead, for this example, it's the handwritten notes of the healthcare professional that the AI model learns from. And the more data and notes the tool learns from, the better its recognition of the letters themselves, not the words the letters form, such as a patient's name, diagnosis, or lab results, and the better the tool operates. 
Then on the other end of the spectrum, the higher-risk end, there are AI tools such as patient-facing next-best-action tools that are based on an individual patient's medical history, their reported symptoms, their providers, their prescribed medications, p

    12 min
  3. AI explained: AI and shipping

    17 SEPT

    AI explained: AI and shipping

    AI-driven autonomous ships raise legal questions, and shipowners need to understand autonomous systems’ limitations and potential risks. Reed Smith partners Susan Riitala and Thor Maalouf discuss new kinds of liability for owners of autonomous ships, questions that may occur during transfer of assets, and new opportunities for investors. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Susan: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. And today we will focus on AI in shipping. My name is Susan Riitala. I'm a partner in the asset finance team of the transportation group here in the London office of Reed Smith.  Thor: Hello, I'm Thor Maalouf. I'm also a partner in the transportation group at Reed Smith, focusing on disputes.  Susan: So when we think about how AI might be relevant to shipping, one immediate thing that springs to mind is the development of marine autonomous vessels. So, Thor, please can you explain to everyone exactly what autonomous vessels are?  Thor: Sure. So, according to the International Maritime Organization, the IMO, a maritime autonomous surface ship, or MASS, is defined as a ship which, to a varying degree, can operate independently of human interaction. Now, that can include using technology to carry out various ship-related functions like navigation, propulsion, steering, and control of machinery, which can include using AI. 
In terms of real-world developments, at this year's meeting of the IMO's working group on autonomous vessels, which happened last month in June, scientists from the Korean Research Institute outlined their work on the development and testing of intelligent navigation systems for autonomous vessels using AI. That system was called NEEMO. It's undergone simulated and virtual testing, as well as inland water model tests, and it's now being installed on a ship with a view to being tested at sea this summer. Participants in that conference also saw simulated demonstrations from other Korean companies like the familiar Samsung Heavy Industries and Hyundai of systems that they're trialing for autonomous ships, which include autonomous navigation systems using a combination of AI, satellite technology and cameras. And crewless coastal cargo ships are already operating in Norway, and a crewless passenger ferry is already being used in Japan. Now, fundamentally, autonomous devices learn from their surroundings, and they complete tasks without continuous human input. So, this can include simplifying automated tasks on a vessel, or a vessel that can conduct its entire voyage without any human interaction. Now, the IMO has worked on categorizing a spectrum of autonomy using different degrees and levels of automation. So the lowest level still involves some human navigation and operation, and the highest level does not. So, for example, the IMO has Degree 1 of autonomy: a ship with just some automated processes and decision support, where there are seafarers on board to operate and control shipboard systems and functions, but there are some operations which can be automated at times and be unsupervised. Now, as that moves up through the degrees, we get to, for example, Degree 3, where you have a remotely controlled ship without seafarers on board the ship. The ship will be controlled and operated from a remote location. 
All the way up to degree four, the highest level of automation, where you have a fully autonomous ship, where the operating systems of the ship are able to make their own d

    16 min
  4. AI explained: Open-source AI

    9 SEPT

    AI explained: Open-source AI

    Reed Smith partners Howard Womersley Smith and Bryan Tan with AI Verify community manager Harish Pillay discuss why transparency and explainability in AI solutions are essential, especially for clients who will not accept a “black box” explanation. Subscribers to AI models claiming to be “open source” may be disappointed to learn the model had proprietary material mixed in, which might cause issues. The session describes a growing effort to learn how to track and understand the inputs used in training AI systems. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. My name is Bryan Tan and I'm a partner at Reed Smith Singapore. Today we will focus on AI and open source software.  Howard: My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies team of Reed Smith in London and New York. And I'm very pleased to be in this podcast today with Bryan and Harish.  Bryan: Great. And so today we have with us Mr. Harish Pillay. And before we start, I'm going to just ask Harish to tell us a little bit, well, not really a little bit, because he's done a lot, about himself and how he got here.  Harish: Well, thanks, Bryan. Thanks, Howard. My name is Harish Pillay. I'm based here in Singapore, and I've been in the tech space for over 30 years. And I did a lot of things primarily in the open source world, both open source software, as well as in the hardware design and so on. 
So I've covered the spectrum. Way back when I was in graduate school, I did things in AI and chip design. That was in the late 1980s. And there was not much from an AI point of view that I could do then. It was the second winter for AI. But in the last few years, there was the resurgence in AI, and the technologies and the opportunities that can happen with the newer ways of doing things with AI make a lot more sense. So now I'm part of an organization here in Singapore known as the AI Verify Foundation. It is a non-profit open-source software foundation that was set up about a year ago to provide tools, software testing tools, to test AI solutions that people may be creating, to understand whether those tools are fair, are unbiased, are transparent. There are about 11 criteria it tests against. So, both traditional AI types of solutions as well as generative AI solutions. So these are the two open source projects that are globally available for anyone to participate in. So that's currently what I'm doing.  Bryan: Wow, that's really fascinating. Would you say, Harish, that kind of your experience over the, I guess, the three decades with the open source movement, with the whole Linux user groups, has that kind of culminated in this place where now there's an opportunity to kind of shape the development of AI in an open-source context?  Harish: I think we need to put some parameters around it as well. The AI that we talk about today could never have happened if it were not for open-source tools. That is plain and simple. So things like TensorFlow and all the tooling that goes around in trying to do the model building and so on and so forth could not have happened without open source tools and libraries, the Python libraries and a whole slew of other tools. If these were all dependent on non-open source solutions, we would still be talking about one fine day something is going to happen. So it's a given that that's the baseline. 
Now, what we need to do is to get this to the next level of unde

    27 min
  5. AI explained: AI and product liability in life sciences

    4 SEPT

    AI explained: AI and product liability in life sciences

    The rapid integration of AI and machine learning in the medical device industry offers exciting capabilities but also new forms of liability. Join us for an exciting podcast episode as we delve into the surge in AI-enabled medical devices. Product liability lawyers Mildred Segura, Jamie Lanphear and Christian Castile focus on AI-related issues likely to impact drug and device makers soon. They also give us a preview of how courts may determine liability when AI decision-making and other functions fail to get desired outcomes. Don't miss this opportunity to gain valuable insights into the future of health care. ----more---- Transcript:  Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Mildred: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, I, Mildred Segura, partner here at Reed Smith in the Life Sciences Practice Group, along with my colleagues Jamie Lanphear and Christian Castile, will be focusing on AI and its intersection with product liability within the life sciences space. Especially as we see more and more uses of AI in this space, there's a lot of activity going on with respect to the regulatory landscape as well as the legislative landscape, but not a lot of discussion about product liability and its implications for companies who are doing business in this space. So that's what prompted our desire and interest in putting together this podcast for you all. And with that, I'll have my colleagues briefly introduce themselves. 
Jamie, why don't you go ahead and start? Jamie: Thanks, Mildred. I'm Jamie Lanphear. I am of counsel at Reed Smith based in Washington, D.C. I'm in the Life Sciences and Health Industry Group. I've spent the last 10 years defending manufacturers in product liability litigation, primarily in the medical device and pharma space. I think, like you said, this is just a really interesting topic. It's a new topic, and it's one that hasn't gotten a lot of attention or a lot of airtime. You know, you go to conferences these days and AI is sort of front and center in a lot of the presentations and webinars, and much of the discussion is around, you know, regulatory, cybersecurity and privacy issues. And I think that, you know, in the coming years, we're going to start to see product liability litigation in the AI medical device space that we haven't seen before. Christian, did you want to go ahead and introduce yourself? Christian: Yeah, thanks, Jamie. Thanks, Mildred. My name is Christian Castile. I am an associate at Reed Smith in the Philadelphia office. And much like Mildred and Jamie, my practice consists primarily of working alongside medical device and pharmaceutical manufacturers in product liability lawsuits. And Jamie, I think what you mentioned is so on point. It feels like everybody's talking about AI right now. And to a certain extent, I think that can be intimidating, but we actually are at a really interesting vantage point, with the opportunity to get in on the ground floor of some of this technology and how it is going to shape the legal profession. And so, you know, as the technology advances, we're going to see new use cases popping up across industries and, of course, of interest to this group in particular is the healthcare space. So it's really exciting to be able to grapple with this headfirst, and the people who are sort of investing in this now are really going to have a leg up when it comes to evaluating their risk. 
Mildred: So thanks, Jamie and Christian,

    35 min
  6. AI explained: AI and e-discovery

    21 AUG

    AI explained: AI and e-discovery

    Reed Smith and its lawyers have used machine-assisted case preparation tools for many years (and it launched the Gravity Stack subsidiary) to apply legal technology that cuts costs, saves labor and extracts serious questions faster for senior lawyers to review. Partners David Cohen, Anthony Diana and Therese Craparo discuss how generative AI is creating powerful new options for legal teams using machine-assisted legal processes in case preparation and e-discovery. They discuss how the field of e-discovery, with the help of emerging AI systems, is becoming more widely accepted as a cost and quality improvement. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  David: Hello, everyone, and welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in eDiscovery. My name is David Cohen, and I'm pleased to be joined today by my colleagues, Anthony Diana and Therese Craparo. I head up Reed Smith's Records & eDiscovery practice group, a big practice group, 70-plus lawyers strong, and we're very excited to be moving into AI territory. We've been using some AI tools and we're testing new ones. Therese, I'm going to turn it over to you to introduce yourself.  Therese: Sure. Thanks, Dave. Hi, my name is Therese Craparo. I am a partner in our Emerging Technologies Group here at Reed Smith. My practice focuses on eDiscovery, digital innovation, and data risk management. 
And, like all of us, I'm seeing a significant uptick in the interest in using AI across industries, and particularly in the legal industry. Anthony?  Anthony: Hello, this is Anthony Diana. I am a partner in the New York office, also part of the Emerging Technologies Group. And similarly, my practice focuses on digital transformation projects for large clients, particularly financial institutions, and I've also been dealing with e-discovery issues for more than 20 years, basically as long as e-discovery has existed. I think all of us have on this call. So looking forward to talking about AI.  David: Thanks, Anthony. And my first question is, the field of e-discovery was one of the first to make practical use of AI in the form of predictive coding and document analytics. Predictive coding has now been around for more than two decades. So, Therese and Anthony, how's that been working out?  Therese: You know, I think it's a dual answer, right? It's been working out incredibly well, and yet it's not used as much as it should be. I think that at this stage, the use of predictive coding and analytics in e-discovery is pretty standard, right? As Dave said, two decades ago, it was very controversial, and there was a lot of debate and dispute about the appropriate use and the right controls and the like going on in the industry, and a lot of discovery fights around that. But I think at this stage, we've really gotten to a point where this technology is, you know, well understood, used incredibly effectively to appropriately manage and streamline e-discovery and to improve on discovery processes and the like. I think it's far less controversial in terms of its use. And frankly, the e-discovery industry has done a really great job at promoting it and finding ways to use this advanced technology in litigation. 
I think that one of the challenges still is that while the lawyers who are using it are using it incredibly effectively, not enough people have adopted it. And I think there are still lawyers out there that haven't been

    28 min
  7. AI explained: AI and building the ecosystem in Singapore

    14 AUG

    AI explained: AI and building the ecosystem in Singapore

    Singapore is developing ethics and governance guidelines to shape the development and use of responsible AI, and the island nation’s approach could become a blueprint for other countries. Reed Smith partner Bryan Tan and Raju Chellam, editor-in-chief of the AI Ethics & Governance Body of Knowledge, examine concerns and costs of AI, including impacts on owners of intellectual property and on workers who face job displacement. Time will tell whether this ASEAN nation will strike an adequate balance in regulating each emerging issue. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore all the key challenges and opportunities within the rapidly evolving AI landscape. Today, we'll focus on AI and building the ecosystem here in Singapore. My name is Bryan Tan, and I'm a data and emerging technology partner at Reed Smith Singapore. We have with us today Mr. Raju Chellam, the editor-in-chief of the AI E&G BOK, and that stands for the AI Ethics and Governance Body of Knowledge, an initiative by the SCS, the Singapore Computer Society, and IMDA, the Infocomm Media Development Authority of Singapore. Hi, Raju. Today, we are here to talk about the AI ecosystem in Singapore, of which you've been a big part. But before we start, I wanted to talk a little bit about you. Can you share what you were doing before artificial intelligence appeared on the scene, and how that has changed now that we see artificial intelligence being talked about frequently?  Raju: Thanks, Bryan. 
It's a pleasure and an honor to be on your podcast. Before AI, I was at Dell, where I was head of cloud and big data solutions for Southeast Asia and South Asia. I was also chairman of what we then called COIR, which is the Cloud Outage Incident Response. This is a standards working group under IMDA, and I was vice president of the cloud chapter at SCS. In 2018, the Straits Times Press published my book called Organ Gold, on the illegal sale of human organs on the dark web. I was then researching the sale of contraband on the dark web. So all of that came together and helped me when I took over the role of AI in the new era.  Bryan: So all of that comes from a dark place, and that has led you to discovering the prevalence of AI and then to this body of knowledge. So the question here is, tell us a little bit about this body of knowledge that you've been working on. Why does it matter? Is it a game changer?  Raju: Let me give you some background. The Ethics & Governance Body of Knowledge is a joint effort by the Singapore Computer Society and IMDA, the first of its kind in the Asia-Pacific, if not the world, to pull together a comprehensive collection of material on developing and deploying AI ethically. It is anchored on the AI Governance Framework 2nd Edition that IMDA launched in 2020. The first edition of the BOK was launched in October 2020, before GenAI emerged on the scene. The second edition, focused on GenAI, was launched by Minister Josephine Teo in September 2023. And the third edition, the most comprehensive, will be launched on August 22, which is next month. The most crucial thing about this is that it's a compendium of all the use cases, regulations, guidelines, and frameworks related to the responsible use of AI, both from a developing concept as well as a deploying concept. So it's something that all Singaporeans, if not people outside, would find great value in accessing.  Bryan: Okay. And so I see how that kind of relates to your point about the dark web

    17 min
