Tech Law Talks

Reed Smith

Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.

  1. 7 JUL

    Tariff-related considerations when planning a data center project

    High tariffs would significantly impact data center projects through increased costs, supply chain disruptions and other problems. Reed Smith’s Matthew Houghton, John Simonis and James Doerfler explain how owners and developers can mitigate tariff risks throughout the planning, contract drafting, negotiation, procurement and construction phases. In this podcast, learn about risk allocation and other proactive measures to manage cost and schedule challenges in today’s uncertain regulatory environment. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Matt: Hey, everyone. Welcome to this episode of our Data Center series. My name is Matt Houghton, joined today by Jim Doerfler and John Simonis. And in today's episode, we will discuss our perspectives regarding key tariff-related issues impacting data center projects that owners and developers should consider during initial planning, contract drafting and negotiation, and procurement and construction. So a bit about who we have here today. I'm Matt Houghton, counsel at Reed Smith based out of our San Francisco office. I focus on projects and construction-related matters both on the litigation and transaction sides. I was very excited to receive the invitation to moderate this podcast from two of my colleagues and thought leaders at Reed Smith in the area of capital projects, Mr. John Simonis and Mr. Jim Doerfler. And with that, I'm pleased to introduce them. John, why don't you go ahead and give the audience a brief rundown of your background?  John: Hi, I'm John Simonis. I'm a partner in the Real Estate Group, practicing out of the Orange County, California office. I've been very active in the data center space for many years, going back to the early years of Digital Realty. Over the years, I've handled a variety of transactions in the data center space, including acquisitions and dispositions, joint ventures and private equity transactions, leasing, and of course, construction and development. While our podcast today is primarily focused on the impacts of tariffs and trade restrictions on data center construction projects, I should note that we are seeing a great deal of focus on tariffs and trade restrictions by private equity and M&A investors. Given the potential impacts on ROIs, it should not be surprising that investors, like owners and developers, are laser-focused on tariffs and tariff uncertainty, both through diligence and risk allocation provisions. This means that sponsors can expect sophisticated investors to carefully diligence and review data center construction contracts and often require changes if they believe the tariff-related provisions are suboptimal. Jim?  Jim: Yes, my name is Jim Doerfler. I'm a partner in our Pittsburgh office. I've been with Reed Smith now for over 25 years and have been focused on the projects and construction space. I would refer to myself as what I would call a bricks and sticks construction lawyer in that I focus on how projects are actually planned and built. 
I come to that by way of background in the sense that I grew up in a contractor family and I worked for a period of time as a project manager and a corporate officer for a commercial electrical contractor. And data center projects are the types of projects that we would have loved. They are projects that are complex. They have high energy demands. They have expensive equipment and lots of copper and fiber optics. In my practice at Reed Smith, I advise clients on commercial and industrial projects and do both claims and transactional work. And data center projects are sort of the biggest thing that we've seen come down the pipeline in some time. And so we're excited to talk to you about them here today.  Matt: Excellent. Thank you both. Really glad to be here with both of you. I always enjoy our conversations. I'm pretty sure this is the kind of thing we would be talking about, even if a mic wasn't on. So happy to be here. I want to start briefly with the choice of topic for today's podcast. Obviously, tariffs are at the forefront of construction-based considerations currently here in the U.S., but why are tariffs so important to data center project considerations?  Jim: So, this is Jim, and what I would say is that Reed Smith is a global law firm, and one of the things that we do in our projects and construction group is we try and survey the marketplace. And data center projects are such a significant part of the growth in the construction industry. In the U.S., for example, when we surveyed the available construction data from various sources and subject matter experts, what we found is that at least for the past year or two, construction industry growth has been relatively flat aside from data center growth. And when you look at the growth of data centers and the drive for their being built by the growth in AI and other areas, it's really a growth industry for the construction and project space. And so something like tariffs that has the potential to impact those projects is of particular concern to us. And so we want to make sure for our owner and developer clients and industry friends that we provided our perspectives on how to do these projects right.  Matt: That makes a lot of sense. So we've sort of set the stage for the discussion today. I think we could go on for hours if we didn't give ourselves some guidelines, but there are really three critical phases of a project during which an owner or developer should be thinking about how they're going to address tariffs. And those are the initial planning, the contract drafting and negotiation, and then the procurement and construction phase. Planning comes first, and of course the title of this podcast is tariff-related considerations when planning a data center project. So let's start with the planning phase and some of the considerations an owner or developer may have at that time. John, what do you see as some of the key portions of the planning process where an owner or developer needs to start addressing tariff-related issues?  John: Tariffs and trade restrictions are getting a great deal of focus in all construction contracts. Tariffs impact steel and aluminum and rare earth materials. Data centers are big, expensive projects and can be impacted greatly. We're obviously in a period of great uncertainty as it relates to these types of restrictions. So I think in the planning stage, it may be somewhat obvious to say that that may be the most important time to mitigate to the extent possible some of the impacts. 
I think it starts in the RFP process, with the requirements you're going to put on your design team and on your contractor to cooperate and collaborate to mitigate, to the extent possible, the impacts of tariffs and particularly increased tariffs. You identify the materials and equipment subject to material tariffs and tariff risk increases, particularly those that might increase in the future, and address those as best possible. You expect your team to be proposing potential mitigation measures, such as early release, substitutes, and other value engineering exercises. So that should be a very proactive dialogue. And you should be getting the commitment from the parties early in the RFP process and throughout the planning and pricing stage to cooperate with the owner to mitigate negative impacts, in terms of cost, timing, and other supply chain issues. Jim, there's also some things we're seeing in the procurement space, and maybe you can address that.  Jim: Sure. So, you know, as you're going through the RFP phase and sort of anticipating what you would ultimately want to build into your contract and how you're going to procure it, you want to be thinking ahead about procurement-related items. As John indicated, these projects are big and complicated and involve significant and expensive equipment. So you want to be thinking about essentially your pre-construction phase and your early release packages, your equipment or your major material items. And you want to be talking with your trade partners in terms of allowing that equipment to get there in a timely fashion and also trying to lock down pricing to mitigate against the risk of tariff-related or generally trade-related disruptions that could affect either price or delivery. So you want to be thinking about facilitating deposits for long lead or big ticket material or equipment items. And you want to identify what are those big equipment or material items that could make or break your project and identify the risk associated with those items early on and build that into your planning process.  John: And there's some difference between different contracting models. If you were looking at a fixed price contract versus a cost plus with a GMP or a cost plus contract, obviously the risk allocation as it relates to tariff and trade restrictions might be handled differently. But generally speaking, we're seeing tariff and trade restriction risk being addressed very specifically in contracts now. So sophisticated owners and contractors are focusing on provisions that specifically address these risks and how they might be mitigated and allocated.  Jim: Just to follow up on John's point, I mean, in theory you could have a fixed price contract versus, at least in the US, what we would describe as cost plus or cost reimbursable projects using a guaranteed maximum price or a not-to-exceed cap style agreement. In our experience, at least in the US, they tend to be more of the latter type of project deliv

    28 min
  2. 23 APR

    AI explained: Introduction to Reed Smith's AI Glossary

    Have you ever found yourself in a perplexing situation because of a lack of common understanding of key AI concepts? You're not alone. In this episode of "AI explained," we delve into Reed Smith's new Glossary of AI Terms with Reed Smith guests Richard Robbins, director of applied artificial intelligence, and Marcin Krieger, records and e-discovery lawyer. This glossary aims to demystify AI jargon, helping professionals build their intuition and ask informed questions. Whether you're a seasoned attorney or new to the field, this episode explains how a well-crafted glossary can serve as a quick reference to understand complex AI terms. The E-Discovery App is a free download available through the Apple App Store and Google Play. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Marcin: Welcome to Tech Law Talks and our series on AI. Today, we are introducing the Reed Smith AI Glossary. My name is Marcin Krieger, and I'm an attorney in the Reed Smith Pittsburgh office.  Richard: And I am Richard Robbins. I am Reed Smith's Director of Applied AI based in the Chicago office. My role is to help us as a firm make effective and responsible use of AI at scale internally.  Marcin: So what is the AI Glossary? The Glossary is really meant to break down big ideas and terms behind AI into really easy-to-understand definitions so that legal professionals and attorneys can have informed conversations and really conduct their work efficiently without getting buried in tech jargon. Now, Rich, why do you think an AI glossary is important?  Richard: So, I mean, there are lots of glossaries about, you know, sort of AI and things floating around. I think what's important about this one is it's written by and for lawyers. And I think that too many people are afraid to ask questions for fear that they may be exposed as not understanding things they think everyone else in the room understands. Too often, many are just afraid to ask. So we hope that the glossary can provide comfort to the lawyers who use it. And, you know, I think to give them a firm footing. I also think that it's, you know, really important that people do have a fundamental understanding of some key concepts, because if you don't, that will lead to flawed decisions or flawed policy, or you can just miscommunicate with people in connection with your work. So if we can have a firm grounding, establish some intuition, I think that we'll be in a better spot. Marcin, how would you see that?  Marcin: First of all, absolutely, I totally agree with you. I think that it goes even beyond that and really gets to the core of the model rules. When you look at the various ethics opinions that have come out in the last year about the use of AI, and you look at our ethical obligations and basic competence under Rule 1.1, we see that ethics opinions that were published by the ABA and by various state ethics boards say that there's a duty on lawyers to exercise the legal knowledge, skill, thoroughness, and preparation necessary for the representation. And when it comes to AI, you have to achieve that competence through some level of self-study. 
This isn't about becoming experts about AI, but to be able to competently represent a client in the use of generative AI, you have to have an understanding of the capabilities and the limitations, and a reasonable understanding about the tools and how the tech works. To put it another way, you don't have to become an expert, but you have to at least be able to be in the room and have that conversation. So, for example, in my practice, in litigation and specifically in electronic discovery, we've been using artificial intelligence and advanced machine learning and various AI products previous to generative AI for well over a decade. And as we move towards generative AI, this technology works differently and it acts differently. And how the technology works is going to dictate how we do things like negotiate ESI protocols, how we issue protective orders, and also how we might craft protective orders and confidentiality agreements. So being able to identify how these types of orders restrict or permit the use of generative AI technology is really important. And you don't want to get yourself into a situation where you may inadvertently agree to allow the other side, the receiving party of your client's data, to do something that may not comply with the client's own expectations of confidentiality. Similarly, when you are receiving data from a producing party, you want to make sure that the way that you apply technology to that data complies with whatever restrictions may have been put into any kind of protective order or confidentiality agreement.  Richard: Let me jump in and ask you something about that. So you've been down this path before, right? This is not the first time professionally you've seen new technology coming into play that people have to wrestle with. And as you were going through the prior use of machine learning and things that inform your work, how have you landed? You know, how often did you get into a confusing situation because people just didn't have a common understanding of key concepts where maybe a glossary like this would have helped or did you use things like that before?  Marcin: Absolutely. And it comes, it's cyclic. It comes in waves. Anytime there's been a major advancement in technology, there is that learning curve where attorneys have to not just learn the terminology, but also trust and understand how the technology works. Even now, technology that was new 10 years ago still continues to need to be described and defined, even outside of the context of AI. Take something like just removing email threads: almost every ESI order that we work with requires us to explain and define what that process looks like. And when we talk about traditional technology-assisted review, to this day our agreements have to explain and describe to a certain level how technology-assisted review works. But 10 years ago, it required significant investment of time negotiating, explaining, educating, not just opposing counsel, but our clients.  Richard: I was going to ask about that, right? Because it would seem to me that, you know, especially at the front end, as this technology evolves, it's really easy for us to talk past each other or to use words and not have a common understanding, right?  Marcin: Exactly, exactly. And now with generative AI, we have exponentially more terminology. There's so many layers to the way that this technology works that even a fairly skilled attorney like myself, when I first started learning about generative AI technology, I was completely overwhelmed. 
And most attorneys don't have the time or the technical understanding to go out onto the internet and find that information. A glossary like this is probably one of the best ways that an attorney can introduce themselves to the terminology or have a reference where, if they see a term that they are unfamiliar with, they can quickly go take a look: what does that term mean? What's the implication here? Get that two-sentence description so that they can say, okay, I get what's going on here, or put the brakes on and say, hey, I need to bring in one of my tech experts at this point.  Richard: Yeah, I think that's really important. And this kind of goes back to this notion that this glossary was prepared, you know, at least initially, from the litigator's lens, the litigator's perspective. But it's really useful well beyond that. And, you know, I mean, I think the biggest need is to take the mystery out of the jargon, to help people, you know, build their intuition, to ask good questions. And you touched on something where you said, well, I don't need to be a technical expert on a given topic, but I need a tight, accessible description that lets me get the essence of it. So, I mean, a couple of my, you know, favorite examples from the glossary are, you know, in the last year or so, we've heard a lot of people talking about RAG systems and they fling that phrase around, you know, retrieval augmented generation. And, you know, you could sit there and say to someone, yeah, use that label, but what is it? Well, we describe that in three tight sentences. Agentic AI, two sentences.  Marcin: And that's a real hot topic for 2025 is agentic AI.  Richard: Yep.  Marcin: And nobody knows what it is. So I focus a lot on litigation and in particular electronic discovery. So I have a very tight lens on how we use technology and where we use it. But in your role, you deal with attorneys in every practice group and also professionally outside of the law firm. You deal with professionals and technologists. In your experience, how do you see something like this AI glossary helping the people that you work with, and what kind of experience levels do you get exposed to?  Richard: Yeah, absolutely. So I keep coming back to this phrase, this notion of saying it's about helping people develop an intuition for when and how to use things appropriately, what to be concerned about. So a glossary can help to demystify these concepts so that you can then carry on whatever it is that you're doing. And so I know that's rather vague and abstract, but I mean, at the end of the day, if you can get something down to a couple of quick sentences and the key essence of it, and that light bulb comes on and people go, ah, now I kind of understand what we're talking about, that will help them guide their conversations about what they should be concerned about or not concerned ab

    15 min
  3. 10 APR

    AI explained: Navigating AI in Arbitration - The SVAMC Guideline Effect

    Arbitrators and counsel can use artificial intelligence to improve service quality and lessen work burden, but they also must deal with the ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. They reveal insights and experiences on the current and future applications of AI in arbitration, the potential risks around bias and transparency, and the best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what it means for arbitrators, counsel and parties. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today we focus on AI in arbitration: how artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at T.H.E. Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He's also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks.  Benjamin: Thank you, Rebeca, for having me.  Rebeca: Well, let's dive into our questions today. So artificial intelligence is often misunderstood, or, to put it in other words, there are a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI?  Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, but it hasn't been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates AI output, whereas up until now, it was a relatively mild output. I'll give you one example. Looking for an email in your inbox, that requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It hasn't been until ChatGPT really gave an AI tool to the masses that questions started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it's that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SVAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call it use of generative AI in arbitration. And I'm very happy that we stood firm and said no, because there's many forms of AI that will arise over the years. 
Now we're talking about predictive AI, but there are many AI forms such as predictive AI, NLP, automations, and more. And we use it not only in generating text per se, but we're using it in legal research, in case prediction to a certain extent. Whoever has used LexisNexis, they're using a new tool now where AI is leveraged to predict certain outcomes, document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption.  Rebeca: That's interesting. So you're saying, obviously, that AI in arbitration is more than just ChatGPT, right? I think the reason people think that, as we'll see in some of the questions I have for you, is that they may rely on ChatGPT because it sounds normal. It sounds like another person texting you, providing you with a lot of information. And sometimes, you know, I can understand or I can see why people might believe that that's the correct outcome. And you've given examples of how AI is already being used and that people might not realize it. So all of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you've led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? And before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That's an incredible achievement. And I really hope you'll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team.  Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we're nominated not only once last year for the GAR Award, but a second year in a row. I will be blunt, I haven't kept track of many nominations, but I think it may be one of the first years where one initiative gets nominated twice, one year after the other. So that in itself for us is worth priding ourselves on. And it may potentially even be more than an award itself. It really, it's a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It's a very straightforward and, to a certain extent, a little boring answer as of now, because we've heard it so many times. But the crux was Mata versus Avianca. I'm not going to dive into the case. I think most of us have heard it. Who hasn't? There's many sources to find out about it. The idea being that in a court case, an attorney used ChatGPT, used the outcome without verifying it, and it caused a lot of backlash, not only from the opposing party, but also being chastised by the judge. Now when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could potentially happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really think that this is the moment for the Silicon Valley Arbitration and Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I'll come up with guidelines. 
So up until now at the SVAMC, there are a lot of think tank-like groups discussing many interesting subjects. But the SVAMC scope, especially AI-related, was to have something that produces something tangible. So the guidelines to me were intuitive. It was, I will be honest, I don't think I was the only one. I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there's a lot more to come. And I'll leave it there.  Rebeca: Well, that's very interesting. And I just wanted to mention or just kind of draw from, you mentioned the Mata case. And you explained a bit about what happened in that case. And I think that was, what, 2023? Is that right? 2022, 2023, right? And so, but just recently we had another one, right? In the federal courts of Wyoming. And I think about two days ago, the order came out from the judge and the attorneys involved were fined about $15,000 because of hallucinations in the case law that they cited to the court. So, you know, I see that happening anyway. And this is a major law firm that we're talking about here in the U.S. So it's interesting how we still don't learn, I guess. That would be my take on that.  Benjamin: I mean, I will say this. Learning is a relative term because to learn, you need to also fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, any law firm or anyone working in law would never entrust a first-year associate, a summer associate, or a paralegal to draft arguments or to draft certain parts of a pleading by themselves without supervision. However, now, given that AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So obviously, personally, I am no one to judge a case, no one to say what to do. And in my capacity as the chair of the SVAMC AI task force, we also take a backseat, saying these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has something to do with ethical duty and candor. And that is something that, in my opinion, if a court wants to fine attorneys, they're more than welcome to do so. But that is something that should definitely be referred to the Bar Association to take measures. But again, these are my two cents as a citizen.  Rebeca: No, very good. Very good. So, you know, drawing from that point as well, and because of the cautionary tales we hear about surrounding these cases and many others that we've heard, many see AI as a double-edged sword, right? On the one hand offering efficiency gains, while on the other raising concerns about bias and procedural fairness. What do you see as the biggest risks and benefits of AI in arbitration?  Benjamin: So it's an interesting question. To a certain extent, we tried to address many of the risks in the AI guidelines.

    37 min
  4. 4 MAR

    AI explained: The EU AI Act, the Colorado AI Act and the EDPB

    Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI? Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, and the EDPB guidance, and we'll explain soon what all those initials mean. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy.  Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department based out of Munich in Germany. And I'm looking forward to discussing interesting data protection topics with you.  Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy. And I'm really excited to be with you today on this podcast.  Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith based in the Denver, Colorado office.  Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act?  Andy: Sure, yeah. It came into force in August 2024. And it is mainly a law about the responsible use of AI. Generally, it is not really focused on data protection matters. Rather, it sits next to the world-famous European General Data Protection Regulation. It has a couple of passages where it refers to the GDPR and also sometimes where it states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept dividing up AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and the commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some rules on transparency requirements when organizations use AI towards end customers. And depending on these risk categories, there are certain requirements, and, attaching to each of these categories, developers, importers, and also users, meaning organizations using AI, have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring. 
This is the basic concept, and the rules start to kick in on February 2nd, 2025, when prohibited AI must not be used anymore in Europe. And the next bigger wave will be on August 2nd, 2025, when the rules on generative AI kick in. So organizations should start and be prepared to comply with these rules now and get familiar with this new type of law. It's kind of like a new area of law.  Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act?  Tyler: Sure, happy to. So the Colorado AI Act, this is really the first comprehensive AI law in the United States, passed at the end of the 2024 legislative session. It covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's just a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment-related decisions, financial or lending service decisions, essential government services, healthcare services, housing, insurance, and legal services. So that consequential decision piece is fairly broad. The effective date of it is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act: it's tied into Colorado deceptive trade practice law.  Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance?  Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement. It actually was released in December of last year. And yeah, they have released that highly anticipated statement on AI models and data protection. This statement of the EDPB actually follows a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year. And I also wanted to briefly touch upon this paper, because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately. They do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set, and secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. And interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. And consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. So the recent EDPB statement does actually not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement actually responds to a request from the Irish Data Protection Commission and gives kind of a framework, just particularly with respect to certain aspects. 
It actually responds to four specific questions. And the first question was, under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, they can be considered anonymous, but only in some cases. So it must be impossible, with all likely means, to obtain personal data from the model, either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis of the use and the training of AI models. And the EDPB answered those questions in one answer. So the statement indicates that the development and use of AI models can generally be based on a legal basis of legitimate interest. Then the statement lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. And otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite an interesting statement.  Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there's a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts?  Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. So instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone has actually counted how often the phrase case-by-case appears in the statement. It appears actually 16 times, and can or could appears actually 161 times. So obviously, this is likely to lead to different approaches among data protection authorities, but it's maybe also just an intended strategy of the EDPB. Who knows?  Catherine: Well, as an American, I would read that as giving me a lot of flexibility.  Thomas: Yeah, true.  Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems?  Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. And I mean, with regard, like if you now look at the AI Act plus this EDPB paper or generally GDPR, there are a couple of items where organizations can prepare an

    23 min
  5. 12 FEB

    Navigating NIS2: What businesses need to know

    Catherine Castaldo, Christian Leuthner and Asélle Ibraimova dive into the implications of the new Network and Information Security (NIS2) Directive, exploring its impact on cybersecurity compliance across the EU. They break down key changes, including expanded sector coverage, stricter reporting obligations and tougher penalties for noncompliance. Exploring how businesses can prepare for the evolving regulatory landscape, they share insights on risk management, incident response and best practices. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hi, and welcome to Tech Law Talks. My name is Catherine Castaldo, and I am a partner in the New York office in the Emerging Technologies Group, focusing on cybersecurity and privacy. And we have some big news with directives coming out of the EU for that very thing. So I'll turn it to Christian, who can introduce himself.  Christian: Thanks, Catherine. So my name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, also in the Emerging Technologies Group, focusing on IT and data. And we have a third attorney on this podcast, our colleague, Asélle.  Asélle: Thank you, Christian. Very pleased to join this podcast. I am counsel based in Reed Smith's London office, and I also am part of the emerging technologies group and work on data protection, cybersecurity, and technology issues.  Catherine: Great. As we previewed a moment ago, on October 17th, 2024, there was a deadline for the transposition of a new directive, commonly referred to as NIS2. And for those of our listeners who might be less familiar, would you tell us what NIS2 stands for and who is subject to it?  Christian: Yeah, sure. So NIS2 stands for the Directive on Security of Network and Information Systems. And it is the second iteration of the EU's legal framework for enhancing the cybersecurity of critical infrastructures and digital services. It replaces the previous directive, which obviously is called NIS1, which was adopted in 2016 but had some limitations and gaps. So NIS2 applies to a wider range of entities that provide essential or important services to the society and the economy, such as energy, transport, health, banking, digital infrastructure, cloud computing, online marketplaces, and many, many more. It also covers public administrations and operators of electoral systems. Basically, anyone who relies on network and information systems to deliver their services and whose disruption or compromise could have significant impacts on the public interest, security or rights of EU citizens and businesses will be in scope of NIS2. As you already said, Catherine, NIS2 had to be transposed into national member state law. So it's a directive, not a regulation, contrary to DORA, which we discussed the last time in our podcast. It had to be implemented into national law by October 17th, 2024. But most of the member states did not. So the EU Commission has now started investigations regarding violations of the Treaty on the Functioning of the European Union against, I think, 23 member states, as they have not yet implemented NIS2 into national law.  
Catherine: That's really comprehensive. Do you have any idea what the timeline is for the implementation?  Christian: It depends on the state. So there are some states that already have comprehensive drafts. And those just need to go through the legislative process. In Germany, for example, we had a draft, but we have elections in a few weeks. And the current government just stated that they will not implement the law before that. And so after the election, the implementation law will probably be discussed again, redrafted. And so it'll take some time. It might be in the third quarter of this year.  Catherine: Very interesting. We have a similar process. Sometimes it happens in the States where things get delayed. Well, what are some of the key components?  Asélle: So, NIS2 focuses on cybersecurity measures, and we need to differentiate it from the usual cybersecurity measures that any organization thinks about in the usual way, where they protect their data and their systems against cyber attacks or incidents. The purpose of this legislation is to make sure there is no disruption to the economy or to others. And in that sense, similar kinds of notions apply. Organizations need to focus on ensuring availability, authenticity, integrity, and confidentiality of data and protect their data and systems against all hazards. These notions are familiar to us also from the GDPR kind of framework. So there are 10 cybersecurity risk management measures that NIS2 talks about, and these are: policies on risk analysis and information system security; incident handling; business continuity and crisis management; supply chain security; security in systems acquisition, development, and maintenance; policies to assess the effectiveness of measures; basic cyber hygiene practices and training; cryptography and encryption; human resources security; and use of multi-factor authentication. So these are familiar notions also. And it seems the general requirements are something that organizations will be familiar with. However, the European Commission in its NIS Investments Report of November 2023 has done research, a survey, and actually found that organizations that are subject to NIS2 didn't really even take these basic measures. Only 22% of those surveyed had third-party risk management in place, and only 48% of organizations had top management involved in approving cybersecurity risk policies and any type of training. And this reduces the general commitment of organizations to cybersecurity. So there are clearly gaps, and NIS2 is trying to focus on improving that. There are a couple of other things that I wanted to mention that are different from NIS1 and are important. So as Christian said, essential entities have a different compliance regime applied to them compared with important entities. Essential entities need to systematically document their compliance and be prepared for regular monitoring by regulators, including regular inspections by competent authorities, whereas important entities are only obliged to kind of be in touch and communicate with competent authorities in case of security incidents. And there is an important clarification in terms of the supply chain; these are the questions we receive from our clients. And the question is, does the supply chain mean anyone that provides services or products? From our reading of the legislation, supply chain only relates to ICT products and ICT services. 
Of course, there is a proportionality principle employed in this legislation, as with most European legislation, and there is a size threshold. The legislation only applies to those organizations that exceed the medium threshold. And two more topics, and I'm sorry that I'm kind of taking over the conversation here, but I thought the self-identification point was important, because in the view of the European Commission, the original NIS1 didn't cover the organizations it intended to cover, and so in the European Commission's view, the requirements are so clear in terms of which entities it applies to that organizations should be able to assess it and register, identify themselves with the relevant authorities by April this year. And the last point: digital infrastructure organizations, their nature is specifically kind of taken into consideration, their cross-border nature. And if they provide services in several member states, there is a mechanism for them to register with the competent authority where their main establishment is based, similar to the notion under the GDPR.  Catherine: It sounds like, though, there's enough information in the directive itself, without waiting for the member state implementation, that companies who are subject to this rule could be well on their way to being compliant by just following those principles.  Christian: That's correct. So even if the implementation into national law is currently not happening in all of the member states, companies can already work to comply with NIS2. So once the law is implemented, they don't have to start from zero. NIS2 sets out the requirements that important and essential entities under NIS2 have to comply with. For example, have a proper information security management system, have supply chain management, train their employees. And so they can already work to implement NIS2. The directive itself also has annexes that set out the sectors and potential entities that might be in scope of NIS2, and the member states cannot really vary from those annexes. So if you are already in scope of NIS2 under the information that is in the directive itself, you can be sure that you would probably also have to comply with your national rules. There might be some gray areas where it's not fully clear if someone is in scope of NIS2, and those entities might want to wait for the national implementation. And it also can happen that the national implementation goes beyond the directive and covers sectors or entities that might not be in scope under the directive itself. And then, of course, they will have to work to implement the requirements then. I think a good starting point anyways is the existing security program that companies already hopefully have in place. So if they, for example, have an ISO 27001 framework implemented, it might be good to start with a mapping exercise of what NIS2 might require in addition to the ISO 27001. And then

    21 min
  6. 29 JAN

    AI explained: AI and the Colorado AI Act

    Tyler Thompson sits down with Abigail Walker to break down the Colorado AI Act, which was passed at the end of the 2024 legislative session to prevent algorithmic discrimination. The Colorado AI Act is the first comprehensive law in the United States that directly and exclusively targets AI and GenAI systems. Transcript:  Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Tyler: Hi, everyone. Welcome back to the Tech Law Talks podcast. This is continuing Reed Smith's AI series, and we're really excited to have you here today and for you to be with us. The topic today, obviously: AI and the use of AI is surging ahead. I think we're all kind of waiting for that regulatory shoe to drop, right? We're waiting for when it's going to come out to give us some guardrails or some rules around AI. And I think everyone knows that this is going to happen whether businesses want it to or not. It's inevitable that we're going to get some more rules and regulations here. Today, we're going to talk about what I see as truly the first or one of the first ones of those. That's the Colorado AI Act. It's really the first comprehensive AI law in the United States. So there have been some kind of one-off things and things that are targeted more to privacy, but they might have implications for AI. The Colorado AI Act is really the first comprehensive law in the United States that directly targets AI and generative AI and is specific for those uses, right? The other reason why I think this is really important is because, as Abigail and I were talking, we see this as really similar to what happened with privacy, for the folks that are familiar with that. With privacy a few years back, it was well known that it was something that needed some regulation in the United States. After an absence of any kind of federal rulemaking on that, California came out with their CCPA and did a state-specific rule, which has now led to an explosion of state-specific privacy laws. I personally think that that's what we could see with AI laws as well, is that, hey, Colorado is the first mover here, but a lot of other states will have specific AI laws on this model. There are some similarities, but some key differences, to things like the EU AI Act and some of the AI frameworks. So if you're familiar with that, we're going to talk about some of the similarities and differences there as we go through it. And kind of the biggest takeaway, which you will be hearing throughout the podcast, which I wanted to leave you with right up at the start, is that you should be thinking about compliance for this right now. This is something that, as you hear about the dates, you might know that we've got some runway, it's a little bit away. But really, it's incredibly complex and you need to think about it right now, and please start thinking about it. So as for introductions, I'll start with myself. My name is Tyler Thompson. I'm a partner at the law firm of Reed Smith in the Emerging Technologies Practice. This is what my practice is about. It's AI, privacy, tech, data, basically any nerd type of law, that's me. 
And I'll pass it over to Abigail to introduce herself. Abigail: Thanks, Tyler. My name is Abigail Walker. I'm an associate at Reed Smith, and my practice focuses on all things related to data privacy compliance. But one of my key interests in data privacy is where it intersects with other areas of the law. So naturally, watching the Colorado AI Act go through the legislative process last year was a big pet project of mine. And now it's becoming a significant part of my practice and probably will be in the future. Tyler: So the Colorado AI Act was passed at the very end of the 2024 legislative session. And it's largely intended to prevent algorithmic discrimination. And if you're asking yourself, well, what does that mean? What is algorithmic discrimination? In some sense, that is the million-dollar question, but we're going to be talking about that in a little bit of detail as we go through this podcast. So stay tuned and we'll go into that in more detail. Abigail: So Tyler, this is a very comprehensive law and I doubt we'll be able to cover everything today, but I think maybe we should start with the basics. When is this law effective and who's enforcing it and how is it being enforced? Tyler: So the date that you need to remember is February 1st of 2026. So there is some runway here, but like I said at the start, even though we have a little bit of runway, there's a lot of complexity and I think it's something that you should start on now. As far as enforcement, it's the Colorado AG. The Colorado Attorney General is going to be tasked with enforcement here. A bit of good news is that there's no private right of action. So the Colorado AG has to bring the enforcement action themselves. You are not at risk of being sued under the Colorado AI Act by an individual plaintiff. Maybe the bad news here is that violating the Colorado AI Act will be considered an unfair and deceptive trade practice under Colorado law. So the trade practice regulation, that's something that exists in Colorado law like it does in a variety of state laws. And a violation of the Colorado AI Act can be a violation of that as well. And so that just really brings the AI Act into some of these overarching rules and regulations around deceptive trade practices. And that really increases the potential liability, your potential for damages. And I think also just from a perception point, it puts a Colorado AI Act violation in with some of these kind of consumer harm violations, which tend to just have a very bad perception, obviously, to your average state consumer. The law also gives the Attorney General a lot of power in terms of being able to ask covered entities for certain documentation. We're going to talk about that as we get into the podcast here. But the AG also has the option to issue regulations that further specify some of the requirements of this law. That's the thing that we're really looking forward to: additional regulations here. As we go through the podcast today, you're going to realize it seems like there's a lot of gray area. And you'd be right, there is a lot of gray area. And that's what we're hoping, that some of the regulations will come out and try to reduce that amount of uncertainty as we move forward. Abigail, can you tell us who the law applies to and who needs to have their ducks in a row for the AG by the time we hit next February? Abigail: Yeah. 
So unlike Colorado's privacy law, which has a pretty large processing threshold that entities have to reach to be covered, this law applies to anyone doing business in Colorado that develops or deploys a high-risk AI system. Tyler: Well, that high-risk AI system sentence, it feels like you used a lot of words there that have real legal significance. Abigail: Oh, yes. This law has a ton of definitions, and they do a lot of work. I'll start with a developer. A developer, you can think of just as the word implies: they are entities that are either building these systems or substantially modifying them. And then deployers are the other key players in this law. Deployers are entities that deploy these systems. So what does deploy actually mean? The law defines deploy as to use. So basically, it's pretty broad. Tyler: Yeah, that's quite broad. Not the most helpful definition I've heard. So if you're using a high-risk AI system and you do business in Colorado, basically you're a deployer. Abigail: Yes. And I will emphasize that most of the requirements of the law only apply to high-risk AI systems. And I can get into what that means. High-risk, for the purposes of this law, refers to any AI system that makes, or is a substantial factor in making, a consequential decision. Tyler: What is a consequential decision? Abigail: They are decisions that produce legal or substantially similar effects. Tyler: Substantially similar. Abigail: Yeah. As I'm sure you're wondering, what does substantially similar mean? We're going to have to see how that plays out when enforcement starts. But I can get into what the law considers to be legal effects, and I think this might shed some light on what substantially similar means. The law outlines scenarios that are considered consequential. These include education enrollment or educational opportunities, employment or employment opportunities, financial or lending services, essential government services, health care services, housing, insurance, and legal services. Tyler: So we've already gone through a lot. I think this might be a good time to just pause and put this into perspective, maybe give an example. Let's say your recruiting department or your HR department uses, aka deploys, an AI tool to scan job applications or cover letters for certain keywords. And those applicants that don't use those keywords get put in the no pile: hey, this cover letter isn't talking about what we want to talk about, so we're going to reject them. They go on the no pile of resumes. What do you think about that, Abigail? Abigail: I see that as falling into that employment opportunity category that the law identifies. And I feel like it also comes close to the substantially similar piece, an effect substantially similar to a legal effect. I think that use would be covered in this situation. Tyler: Yeah, a lot of uncertainty here, but I think we're all gue

    34 min
  7. 28 JAN

    Navigating the Digital Operational Resilience Act

    Catherine Castaldo, Christian Leuthner and Asélle Ibraimova break down DORA, the Digital Operational Resilience Act, which is new legislation that aims to enhance the cybersecurity and resilience of the financial sector in the European Union. DORA sets out common standards and requirements for these entities so they can identify, prevent, mitigate and respond to cyber threats and incidents as well as ensure business continuity and operational resilience. The team discusses the implications of DORA and offers insights on applicability, obligations and potential liability for noncompliance. This episode was recorded on 17 January 2025. ----more---- Transcript:  Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Catherine: Hi, everyone. I'm Catherine Castaldo, a partner in the New York office of Reed Smith, and I'm in the EmTech Group. I'm here today with my colleagues, Christian and Asélle, who will introduce themselves. And we're going to talk to you about DORA. Go ahead, Christian. Christian: Hi, I'm Christian Leuthner. I'm a Reed Smith partner in the Frankfurt office, focusing on IT and data protection law.  Asélle: And I'm Asélle Ibraimova. I am a counsel based in London, and I'm also part of the EmTech group, focusing on tech, data, and cybersecurity.  Catherine: Great. Thanks, Asélle and Christian. Today, when we're recording this, January 17th, 2025, is the effective date of this new regulation, commonly referred to as DORA. For those less familiar, would you tell us what DORA stands for and who is subject to it? Christian: Yeah, sure. So DORA stands for the Digital Operational Resilience Act, which is a new regulation that aims to enhance the cybersecurity and resilience of the financial sector in the European Union. It applies to a wide range of financial entities, such as banks, insurance companies, investment firms, payment service providers, crypto asset service providers, and even to critical third-party providers that offer services to the financial sector. DORA sets out common standards and requirements for these entities to identify, prevent, mitigate, and respond to cyber threats and incidents, as well as to ensure business continuity and operational resilience.  Catherine: Oh, that's comprehensive. Is there any entity who needs to be more concerned about it than others, or is it equally applicable to all of the ones you listed?  Asélle: I can jump in here. DORA is a piece of legislation that aims to respect proportionality, allowing organizations to deal with DORA requirements in a way that is proportionate to their size and to the nature of their cybersecurity risks. So, for example, micro-enterprises or certain financial entities that have only a small number of members will have a simplified ICT risk management framework under DORA. I also wanted to mention that DORA applies to financial entities that are outside of the EU but provide services in the EU, so they will be caught. And maybe just to add, in terms of the risks: it's not only the size of the financial entities that matters in terms of how they comply with the requirements of DORA, but also the cybersecurity risk. 
So take an ICT third-party service provider: the risk of that entity will depend on the nature of the service, on its complexity, on whether the service supports a critical or important function of the financial entity, on the general dependence on the ICT service provider, and ultimately on its potential to disrupt the services of that financial entity.  Catherine: So some of our friends might just be learning about this by listening to the podcast. What does ICT stand for, Asélle?  Asélle: It stands for information and communication technology. In other words, it's anything that a financial entity receives as a service or a product digitally. It also covers ICT services provided by a financial entity. So, for example, if a financial entity offers a platform for fund or investment management, or a piece of software, or its custodian services are provided digitally, those services will also be considered ICT services. And those financial entities will need to cover their customer-facing contracts as well and make sure DORA requirements are covered in those contracts.  Catherine: Thank you for that. What are some of the risks for noncompliance? Christian: The risks for noncompliance with DORA are significant and could entail both financial and reputational consequences. First of all, DORA empowers the authorities to impose administrative sanctions and corrective measures on entities that breach its provisions, which could range from warnings and reprimands to fines and penalties to withdrawals of authorizations and licenses, and could have a significant impact on the entities' business. The level of sanctions and measures will depend on the nature, gravity and duration of the breach, as well as on the entity's cooperation and remediation efforts. So it is better to be cooperative and help the authority in case it identifies a breach. Second, noncompliance with DORA could also expose entities to legal actions and claims from customers, investors, or other parties that might suffer losses or damages as a result of a cyber incident or disruption of service. And third, noncompliance with DORA could also damage the entity's reputation and trustworthiness in the market and affect its competitive advantage and customer loyalty. Therefore, entities should take DORA seriously and ensure that they comply with its requirements and expectations.  Catherine: If I haven't been able to start considering DORA, and I think it might be applicable to me, where should I start?  Asélle: It's actually a very interesting question. From our experience, we see large financial entities, such as banks, look at this comprehensively. Obviously, all financial entities had quite a long time to prepare, but large organizations seem to look at it more comprehensively and have done a proper assessment of whether or not their services are caught. But we are still getting quite a few questions as to whether or not DORA applies to a certain financial entity type, so I think there are quite a few organizations out there who are still trying to determine that. But once that's clear: although DORA itself is quite a long piece of legislation, it is in actual fact further clarified in various regulatory technical standards and implementing technical standards, which spell out all of the cybersecurity requirements that appear quite generic in DORA itself. Those RTS and ITS are quite lengthy documents, altogether around 1,000 pages. 
So that's where the devil is in the detail, and organizations may find it quite overwhelming. I would start by assessing whether DORA applies: which services, which entities, which geographies. Once that's determined, it's important to identify whether the financial entity's own services may be deemed ICT services, as I explained earlier. The next step in my mind would be to check whether the services that are caught also support critical or important functions and, when making registers of third-party ICT service providers, to identify those separately. And the reason is that quite a few additional requirements apply to critical and important functions, for example, the incident reporting obligations and the requirements for contractual agreements. Then I would look at updating contracts, first of all with important ICT service providers, then also checking if customer-facing contracts need to be updated if the financial entity is providing ICT services itself. And also not forgetting the intra-group ICT agreements where, for example, a parent company is providing data storage or word processing services to its affiliates in Europe. They should be covered as well.  Catherine: If we were a smaller company, or a company that interacts in the financial services sector, can we think of an example that might be helpful for people listening on how I could start? Maybe what's an example of a smaller or middle-sized company that would be subject to this? And then who would they be interacting with on the ICT side?  Asélle: Maybe an example of that could be an investment fund or a pensions provider. I think most of the compliance effort when it comes to DORA will be driven by in-house cybersecurity teams. They will be updating their risk management and risk frameworks. But any updates to policies, whenever they have to be looked at, will I think need to be reviewed by legal; and incident reporting policies and contract management policies don't depend on size. If there are ICT service providers supporting critical or important functions, additional requirements will apply regardless of whether you're a small or a large organization. It's just that the measures will depend on what level of risk a certain ICT service provider presents. So if the internal cybersecurity team has put all the ICT assets and all the third-party ICT services into various buckets based on criticality, that would make the job of legal, and of compliance generally, much easier. However, what we're seeing right now is that all of that work is happening at the same time, in parallel, as people are rushing to get compliant. That will mean there may be gaps and inconsistencies, but I'm sure they can be patched later.  Catherine: Thank you for that. So just another follow-up

    15 min
  8. 18 DEC 2024

    EU/Germany: Damages after data breach/scraping – Groundbreaking case law

    In its first leading judgment (decision of November 18, 2024, docket no. VI ZR 10/24), the German Federal Court of Justice (BGH) dealt with claims for non-material damages pursuant to Art. 82 GDPR following a scraping incident. According to the BGH, a proven loss of control or a well-founded fear of misuse of the scraped data by third parties is sufficient to establish non-material damage. The BGH therefore bases its interpretation of the concept of damages on the case law of the CJEU, but does not provide a clear definition and leaves many questions unanswered. Our German data litigation lawyers, Andy Splittgerber, Hannah von Wickede and Johannes Berchtold, discuss this judgment and offer insights for organizations and platforms on what to expect in the future. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.  Andy: Hello, everyone, and welcome to today's episode of our Reed Smith Tech Law Talks podcast. In today's episode, we'll discuss the recent decision of the German Federal Court of Justice, the FCJ, of November 18, 2024, on compensation payments following a data breach or data scraping. My name is Andy Splittgerber. I'm a partner in Reed Smith's Munich office in the Emerging Technologies Department. And I'm here today with Hannah von Wickede from our Frankfurt office, who is also a specialist in data protection and data litigation, and with Johannes Berchtold, also from Reed Smith's Munich office, also from the Emerging Technologies team and a tech litigator. Thanks for taking the time and diving a bit into this breathtaking case law. Just to catch everyone up and bring everyone up to the same speed: it was a case decided by Germany's highest civil court, in an action brought by a user of a social platform who wanted damages after his personal data was scraped by a hacker from that social media network. That was done through the find-a-friend function, probably exploiting a technical fault, by trying out all kinds of telephone numbers. In this way, the hackers could download a couple of million data sets from users of that platform, which could then be found on the dark web. The user then started an action before the civil court claiming damages. And this case was referred to the highest court in Germany because of the legal difficulties. Hannah, do you want to briefly summarize the main legal findings and outcomes of this decision?  Hannah: Yes, Andy. So, the FCJ made three important statements, basically. First of all, the FCJ provided its own definition of what a non-material damage under Article 82 GDPR is. They are saying that mere loss of control can constitute a non-material damage under Article 82 GDPR, and that if such a loss is not verifiable for the plaintiff, a justified fear of personal data being misused can also constitute a non-material damage under the GDPR. Both are pretty much in line with what the ECJ has already said about non-material damages in the past. And besides that, the FCJ also makes a statement regarding the amount of compensation for non-material damages following from a scraping incident. 
And this is quite interesting because according to the FCJ, the amount of the claim for damages in such cases is around 100 euros. That is not much money. However, the FCJ also says both the loss of control and a reasonable apprehension, including the negative consequences, must first be proven by the plaintiff.  Andy: So we have an immaterial damage, that's important for everyone to know. And the legal basis for the damage claim is Article 82 of the General Data Protection Regulation. So it's not German law, it's European law. And as you mentioned, Hannah, there was some ECJ case law in the past on similar cases. Johannes, can you give us a brief summary of what these rulings were about? And in your view, does the FCJ bring new aspects to these cases, or is it very much in line with what the European Court of Justice has already said?  Johannes: Yes, the FCJ has quoted the ECJ quite broadly here, so there was a little clarification in this regard. So far, it had been unclear whether the loss of control itself constitutes the damage or whether the loss of control is a mere negative consequence that may constitute non-material damage. Now the Federal Court of Justice ruled that the mere loss of control constitutes the direct damage. So there's no need for any particular fear or anxiety to be present for a claim to exist.  Andy: Okay. So we read a bit in the press after the decision: yes, it's a very new and interesting judgment, but it's not revolutionary. It stays very close to what the European Court of Justice said already. The loss of control, I still struggle with. I mean, even if it's an immaterial damage, it's a bit difficult to grasp. And I would have hoped the FCJ would provide some more clarity or guidance on what they mean, because this is the central aspect, the loss of control. Johannes, do you have some more details? What does the court say, or how can we interpret that?  Johannes: Yeah, Andy, I totally agree. In the future, discussion will most likely tend to focus on what actually constitutes a loss of control. The FCJ does not provide any guidance here. However, it can already be said that the plaintiff must have had control over his data to actually lose it. Whether this is the case is particularly questionable if the actual scraped data was public, like in a lot of the cases we have in Germany right now, and/or if the data was already included in other leaks, or the plaintiff published the data on another platform, maybe on his website or another social network where the data was freely accessible. So in the end, it will probably depend on the individual case whether there was actually a loss of control or not. And we'll just have to wait for more judgments in Germany or in Europe to define loss of control in more detail.  Andy: Yeah, I think that's also a very important aspect of the case that was decided here, that the major cornerstones of the claim were established, they were proven. It was undisputed that the claimant was a user of the network. It was undisputed that the scraping took place. It was undisputed that the user's data was affected as part of the scraping. And the user's data was also found on the dark web. When I say undisputed, it means that the parties did not dispute it, and the court could base its legal reasoning on these facts. In a lot of cases that we see in practice, these cornerstones are not established; they're very often disputed. Often you perhaps don't even know that the claimant is a user of that network. 
There's often dispute around whether or not a scraping or a data breach took place at all. It's also not always the case that data is found on the dark web. Even if a finding on the dark web, for example, is not a written criterion of the loss of control, I think it definitely is an aspect for the courts to say, yes, there was loss of control, because we see that the data was uncontrolled on the dark web. And that's a point, I don't know if any of you have views on this, also from the technical side: how easy is it, and how often do we see, that there is a tag saying, okay, this data on the dark web is from this social platform? Often, users are affected by multiple data breaches or scrapings, and then it's not possible to make the causal link between one specific scraping or data breach and data being found somewhere on the web. Do you think, Hannah or Johannes, that this could be an important aspect in the future when courts determine the loss of control, that they also look into whether there actually was a loss of control?  Hannah: I would say yes, because as already mentioned, the plaintiffs must first prove a damage and its cause. And a lot of plaintiffs are using various databases that list such alleged data breaches, and the plaintiffs always claim that this indicates such a causal link. And of course, this is now a decisive point the courts have to handle, as it is a requirement: before you get to the damage, and before you can decide if there was a damage or a loss of control, you have to prove that the plaintiff was even affected. And yeah, that's a challenge and not easy in practice, because there's also a lot of case law already on those databases holding that they might not be sufficient proof of the plaintiffs being affected by alleged data breaches or leaks.  Andy: All right. So let's see what's happening in other countries as well. I mean, Article 82, as I said in the beginning, is a European piece of law, so other countries in Europe will have to deal with the same topics. We cannot come up with our German requirements or interpretation of immaterial damages, which are rather narrow, I would say. So Hannah, any other indications you see from the European angle that we need to have in mind?  Hannah: Yes, you're right. First, it is important that this concept of immaterial damage is interpreted in accordance with EU law, as this is the GDPR. And as Johannes said, the ECJ has always interpreted this damage very broadly, and also does not consider a threshold to be necessary. And I agree with you that it is difficult to set such low requirements for the concept of damage and at the same time not demand materiality or a threshold. And in my opinion, the Federal Court of Justice should perhaps have made a submission here to the

    20 min
