
WebMakers Talks: AI Act vs. Software Companies
Welcome to another episode of our WM Talks podcast, where we discuss technology and business topics related to the IT industry.
Piotr: Hi, my name is Piotr Kaźmierczak and welcome to another episode of WebMakers Talks. This time, we're once again diving into the legal side of things, so we'll be talking about where the legal line is drawn when it comes to artificial intelligence, the AI Act, and how all of this applies specifically to software companies.
Today, I'm joined by Mateusz Borkiewicz from the Leśniewski Borkiewicz Kostka & Partners law firm, and Damian Maślanka, who's our CTO and will be jumping in with more technical questions.
Mateusz: Hello.
Damian: Hey everyone.
Piotr: So let's get into the AI Act and what's actually in there. What we're most curious about right off the bat is how the AI Act defines artificial intelligence and what technologies it covers.
Mateusz: Right, we finally have a definition of artificial intelligence, something we've been waiting on for years. Generally speaking, an AI system is a machine-based system designed to operate with varying levels of autonomy. Once implemented, it can adapt to specific situations. But the core idea is that it infers: it draws conclusions from the input data it receives and produces outputs based on them. It might sound a bit complicated and, well, legalistic, but essentially it's a system that behaves a bit like a human: it has tools, it acts on certain data, it learns from it, and then draws conclusions that can affect either a virtual environment or even the real world around us.
Piotr: Got it. So as we know, a software company is quite a broad term. And if we want to break it down into the different operating models of software companies and how they're impacted by the AI Act, what would be the key takeaways from that perspective?
Mateusz: That's a great question, especially in the context of software companies, because not every company operates the same way. The degree to which the AI Act applies really depends on where the software company sits in the client relationship. Generally, there are two models we see most often in the market. In the first one, the software company develops its own product, puts its own branding on it, and then licenses it out to users. In the second model, the client comes to you with a request, you develop specific software for them, transfer the intellectual property rights, and that's the end of it, aside from maybe providing ongoing support. But from that point, the client is the actual owner.
Piotr: I'd also point out there's often a hybrid model in play. In our experience, it's very common to hand over core IP rights for software that was custom developed for a client, but at the same time we include components that we previously built ourselves and only license those out. This allows us to offer them at a lower cost. So how does the AI Act handle that kind of model?
Mateusz: Right, so that would be the hybrid model, probably the most complex one when it comes to documentation and compliance, because it all really comes down to the responsibilities the software company will ultimately bear. In terms of the definitions we can apply to a software company under the AI Act, there's the concept of a "provider". This is the entity that places the software on the market, meaning it's labeled with their branding, their name, and so on. You could say it's the producer, and it's responsible for the product. That said, there are a lot of voices arguing that when it's your client who commissions this type of software and you merely build it, it's generally the client who ends up being that provider.
In the sense that the client will be understood as the provider, the owner who then goes on to implement the software. Depending on where we sit as the software company, we may carry more of these obligations, and they may come directly from the AI Act when we are the provider, or they may come from contractual obligations that the client imposes on us, which, as experience shows, will most likely mean simply being required to develop that software in full compliance with the AI Act.
This is the general premise, and it's exactly where the shades of grey start to appear, because I'm already seeing arguments both for treating software companies solely as providers and for more clearly separating their roles. So it will likely take some time before practice becomes clear enough for authorities to start drawing more explicit lines. In any case, depending on who we are in that structure, the AI Act will impose certain obligations on us.
And the core principle behind the AI Act is that people should be at the center, meaning that their rights, their fundamental rights, must be guaranteed and protected. It's not about telling people how to live or think, but rather about creating solutions that support them and make life easier, without threatening human existence. That might sound lofty, but it's really the idea that has guided this regulation from the beginning. The goal is to avoid a purely market-driven approach in favor of an ethical one, one that ensures artificial intelligence doesn't become a threat to people in the future.
This is also the foundation for a model based on assessing different levels of risk. The AI Act doesn't say: here's the exact way to do things and if you follow this checklist, you're good. Instead, it's more like GDPR in that we, as developers or providers, must conduct our own risk analyses. We have to determine what technical and organizational safeguards should be implemented in our software or in our development process, so that it's safe and aligned with the AI Act's fundamental principles. What's important here is that based on those risk levels, the AI Act categorizes certain systems as allowed for market use, conditionally allowed, or completely prohibited from being placed on the market.
It's important to highlight that the lists of prohibited or permitted systems are included as annexes to the AI Act, but these are open-ended provisions. So it doesn't exactly make our lives easier. On one hand, we have some examples; on the other, we still have to carry out our own analyses. This risk-based approach will definitely require more work on the part of software companies just to determine where exactly they fall within the scope of the tool they're developing.
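For software teams doing this kind of mapping internally, the risk tiers Mateusz describes can be tracked in something as simple as a typed register kept alongside the project documentation. A minimal TypeScript sketch, purely illustrative: the tier names, fields, and example classifications below are our own shorthand, not wording or legal conclusions from the AI Act.

```typescript
// Illustrative only: a simplified internal register for tracking where a
// planned feature might sit under the AI Act's risk-based model.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AiFeatureAssessment {
  feature: string;          // what we are building
  role: "provider" | "deployer" | "unclear"; // who we think we are in the chain
  tier: RiskTier;           // our working classification
  rationale: string;        // why we think so (kept for the compliance file)
  reviewedBy: string;       // the person accountable for the assessment
  reviewedOn: string;       // ISO date of the last review
}

// Hypothetical example entries; the classifications are assumptions for
// illustration, not legal conclusions.
const register: AiFeatureAssessment[] = [
  {
    feature: "CV screening module",
    role: "provider",
    tier: "high", // recruitment tooling is among the areas flagged as high risk
    rationale: "Employment-related AI; needs full documentation and risk assessment.",
    reviewedBy: "compliance-lead",
    reviewedOn: "2024-11-15",
  },
  {
    feature: "Grammar suggestions in the editor",
    role: "provider",
    tier: "minimal",
    rationale: "Assistive text tool; transparency notice only.",
    reviewedBy: "compliance-lead",
    reviewedOn: "2024-11-15",
  },
];

// Anything classified as prohibited should stop the project immediately.
const blockers = register.filter((a) => a.tier === "prohibited");
console.log(`Entries requiring escalation: ${blockers.length}`);
```

Even a lightweight structure like this forces the two questions the conversation keeps returning to: which role do we play in the chain, and which tier does the feature fall into.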
Piotr: OK, but when we talk about risk and tools that are either permitted or prohibited by the AI Act, we're really talking about creating a product that we release to either a narrow or a broader group of clients. But what if we use tools that technically aren't allowed under the AI Act to build such software? Let's say they help us informally: maybe we ask a question and the tool returns a result we then use. Is that kind of usage covered by the AI Act?
Mateusz: Sure, it's not like anything here is completely out of scope. The AI Act does include provisions stating that open-source, free tools used in day-to-day operations, especially those that are publicly accessible and whose terms are known to the users, may not fall under the AI Act in a way that imposes additional, specific obligations. But generally speaking, all components do need to be assessed based on where they sit in the overall setup. For example, if you're using large language models as part of your AI system, those models also come with obligations. Their creators are required to provide things like user manuals, perform specific tests, and make that essential information available to downstream users, so in this scenario, that would be you. You'd be the next link in the chain, and you need to know how to safely use those tools.
Piotr: Sure, that covers the case where we're directly using those models in our product, say we're integrating them and using their engines and resources. But going back to my question: if we're just creating software and using open-source tools in the process, the creation itself isn't subject to the AI Act, because our specialists still review and control the final product, right? But using the tools themselves would be covered by the AI Act?
Mateusz: No, the creation is covered too. If you're implementing those tools as part of your system, then the system as a whole still needs to meet the obligations set out in the AI Act.
Piotr: OK, so when we talk about implementation, are we referring specifically to implementing the models themselves, or the outcomes they provide like a snippet of code, an entity, a business approach, or even a business analysis method? I mean the kind of output that a model gives us.
Mateusz: OK, so when it comes to those kinds of outputs, it really depends primarily on the terms of use of those tools, because you're also a user here, using these solutions to make your work easier. And generally speaking, each tool has its own approach, so you really need to keep a close eye on that and always check, because not every tool gives you the right, for example, to monetize the outputs it generates. Many of them do offer such broad usage rights. But it's hard to talk about copyright here: it's just a kind of output, and you could say it's a bit suspended in a vacuum, because no one really holds copyright over it.
And what's also important is that we have no guarantee that someone else using the same tool hasn't received something similar. So if we implement it into our solution, it could turn out we're duplicating something another user across the globe is also integrating right now. So I'd be cautious here. First, we need to base everything strongly on the terms of service and what they allow. And second, don't treat the output as a final product; always try to tweak it, add something of your own. Let's treat it more like inspiration than a ready-made solution that we just drop into our system.
Piotr: OK, so summing up the risk categories, could we list the main categories of risk for software companies that result directly from complying with the AI Act, or from delivering services once it's in force?
Mateusz: Sure. When it comes to those risk categories, we're talking first about the risk related to prohibited practices. Meaning that if you come up with an idea, or a client brings you an idea, and it falls into this category, you should see a big red flag. This mainly concerns models that strongly interfere with fundamental human rights, like someone's freedom to make decisions. It includes manipulative actions that could influence decision-making, for example systems targeting children or the elderly, or profiling people experiencing mental health crises and advertising drugs to them in a way that puts pressure on them, like "this will definitely help you". These kinds of actions are banned.
The AI Act also directly mentions biometric issues, for example if we were to develop tools that analyze emotions in the workplace. That setting is specifically highlighted as one where emotion analysis is not allowed, so we can't do that. And there's also quite an interesting one, which some may recognize from Black Mirror: citizen scoring. We can't implement systems that deny a child access to kindergarten just because their parents pay taxes irregularly and are profiled as lower-category citizens, so to speak.
Piotr: OK, but if we look at the situation where a client comes to us, wants to build a specific piece of software, and we immediately agree to assign the copyright to them, does the risk still lie with the software company? Meaning, is the software company responsible for these kinds of solutions alongside the client, and should the software company be the one to react? Or should this responsibility be shifted to the client, since we did touch on this earlier?
Mateusz: So the question here is, first of all, how the software company will be classified. We've got these shades of gray, and I can't tell you 100% right now whether you'll be treated as the provider or just as a kind of assistant to your client. But I believe the risk level is high enough that I'd generally advise against taking the position of "the client asked for it, so we built it". Sooner or later, that can come back to bite you. So here I'd recommend having that conversation with your client and explaining where you see the issues. What's also interesting is that although the AI Act has already entered into force, some of its provisions will apply gradually. For example, when it comes to models that are outright banned under the AI Act, the actual prohibition takes effect on February 2, 2025. So you could say it's the last call for anyone trying to build something non-compliant, although I don't recommend or encourage that. So that's the first issue, prohibited AI systems. Then we've got high-risk systems, and this is really where the entire AI Act revolves. This is also where software companies will have the most work. Sure, it's "high risk", but when we look at what clients are bringing to the table or what we see clients using in their businesses, it seems like many solutions will fall under this category.
These are systems that, without getting too deep into it, fall under specific EU regulations tied to certain industries, like aviation or elevators. So if someone comes in with elevator-related software, that's a red flag. Then there's the second group: risks that are also outlined in the AI Act but that we need to assess on our own.
These relate, for example, to profiling: if we're profiling users or clients, that's a high-risk system. This is something we all remember from GDPR, targeting ads at selected groups, for instance. We know that was algorithm-driven and AI-based, and now there's an added requirement to perform even more thorough risk assessments in that area. And then there are things that are common in many organizations, like tools used in recruitment or employment. These include systems that analyze CVs, assess employee performance, or suggest promotions; those also fall under the high-risk category.
We know these tools are in demand on the market, so we'll have to ensure they comply with the AI Act and meet that set of additional obligations. Then we've got a third category: low-risk or risk-free systems. These are usually simple tools that assist with human tasks, like text editing or enhancement. For those, the additional obligations don't apply.
However, what matters for everyone is the principle of transparency. We always need to inform users that they're interacting with AI, that a given tool is powered by artificial intelligence. And if the software company creates a solution that generates certain types of files, those files should be labeled as AI-generated. If we're talking about deepfakes, the content must explicitly be marked as a deepfake.
Piotr: But when you say these files should be labeled, do you mean when they're already being generated in production? Or are we talking about certain types of files, or fragments, like pieces of code, that were created by artificial intelligence and should also be marked for the client?
Mateusz: If we're the ones building a solution that generates these files, then yes, those files have to be labeled. But if we're just using AI to support our work, then we don't necessarily have to label things for the client if, say, one element was created in collaboration with AI. Like I said earlier, we treat it more as inspiration, not something we directly copy from what the AI gives us.
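The transparency obligation discussed above is easiest to handle at the point where the system emits generated content. A hedged sketch of one way to attach that disclosure, where the metadata fields, function names, and notice wording are all our own assumptions rather than anything mandated by the Act:

```typescript
// Illustrative sketch: attach an AI-generation disclosure to files a
// hypothetical system produces, so downstream users are informed.
interface GeneratedArtifact {
  filename: string;
  content: string;
  aiGenerated: boolean;
  disclosure?: string; // human-readable notice shown alongside the artifact
}

function withAiDisclosure(
  artifact: Omit<GeneratedArtifact, "aiGenerated" | "disclosure">,
  model: string
): GeneratedArtifact {
  return {
    ...artifact,
    aiGenerated: true,
    // Exact wording is an assumption; the point is that the fact of AI
    // generation is communicated, not this particular phrasing.
    disclosure: `This content was generated with the assistance of an AI system (${model}).`,
  };
}

const report = withAiDisclosure(
  { filename: "summary.md", content: "# Quarterly summary\n..." },
  "example-llm-v1" // hypothetical model identifier
);
console.log(report.disclosure);
```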
Piotr: Got it. So we've talked about the risks, and we can smoothly move on to what consequences a software company might face if it fails to comply with some aspect of the AI Act.
Mateusz: Right. So let's assume for the sake of this discussion that the software company is considered a provider, because we're trying to figure out where it fits into the machinery of the AI Act. Alternatively, it might be considered a user of software if, for instance, it's using some off-the-shelf software. But if it makes significant changes to that software, it can also become the provider. So not just in the case where you're commissioned to build something from scratch: even if you're modifying an existing program that includes AI components, that modification could qualify you as the provider.
Now, moving on to the risks. On one hand, we've got something familiar from GDPR: complaints from individuals who believe a company has improperly implemented or used AI under the AI Act. That can lead to a complaint being submitted to a supervisory authority, which could result in an inspection. And potentially, that inspection could end in what everyone fears most: fines of up to 35 million euros or 7% of annual global turnover from the previous year. So the 20 million euros and 4% from GDPR are apparently no longer enough; they've raised the stakes here. Whether that makes sense or not is hard to say, it's a bit of a numbers game.
Piotr: Let's imagine a scenario then. We've developed software that was fully handed over to a client, and the client is selling it. It's integrated with some AI tools. Let's say it's not compliant with the AI Act. It carries, for instance, our branding like "by WebMakers" in the bottom right corner. Now, could the client's customer look at that, see "produced by WebMakers", and report WebMakers as the provider not complying with the AI Act? Who can file a complaint and under what circumstances?
Mateusz: Basically, complaints are very broad, you can write anything on paper and target any entity. So I don’t need to have a direct relationship with you to report you to a supervisory authority. In this case, someone can pretty much take a shot in the dark and accuse a company of not implementing the AI Act properly. The question is whether the authority would actually take an interest in you, whether you would be considered the provider, or whether you could show that you transferred all rights to the client and that they should be treated as the provider. But as we said before, how this will be handled in practice and what stance the authorities will take, this still remains to be seen.
Piotr: So basically, a client could report us, and the investigation could land on us, not the client.
Mateusz: Yes, but it might turn out that, based on your explanation and on showing the contract with the client, demonstrating that you transferred all rights, acted solely on the basis of that agreement, and that the client took full responsibility, the investigation could be closed on your end and redirected to the other party, the actual responsible entity.
Piotr: Got it. So when it comes to consequences, are there any other possible outcomes for software companies in relation to compliance? I'm thinking not direct penalties from the AI Act, but things related to how it operates and how we're expected to conduct ourselves, for instance if we don't inform the client about certain tools being used, or maybe we mentioned one but didn't disclose another. So how does that play out in terms of consequences in the context of cooperation, and what kind of outcomes might we face?
Mateusz: Right, from that perspective, all arrangements between you and the client definitely need to be clearly defined in the contract. Again, I'll refer a bit to the GDPR here, since we already have extensive experience with it and we know how things have worked in practice over the years. Under GDPR, for example, a data controller commissions a system from a software company, one meant to support data processing or similar functions; the software company builds the system and delivers it. However, if anything goes wrong, the client is the one who answers to the data protection authority, because they chose to implement that tool, ordered it from you, and received the copyright to it.
So we could see a similar situation under the AI Act: if the client becomes directly responsible in the eyes of the regulator, you might still be held liable, but on the basis of your contract. Meaning, if your agreement states that you commit to complying with all applicable legal regulations while creating the tool, including in particular the AI Act, and you failed to do that, and the client was fined or suffered some damage, then unless you have contractually excluded this responsibility, the client could potentially seek redress from you.
Piotr: Got it, that makes sense. Another area in the AI Act is personal data and data processing. What does that look like in the context of software companies? What should we be paying attention to and how should we protect ourselves? Especially given that we're not always the data controller; we often just develop software and hand it off to the client. So how should we approach that?
Mateusz: Nothing particularly new here. The AI Act emphasizes that data protection authorities must closely monitor how personal data is processed by AI systems. But as far as the obligations of software companies are concerned, they haven't really changed compared to what we've already had under the GDPR. You still need to apply principles like privacy by design and privacy by default, so process as little personal data as possible and only what's absolutely required. That means designing systems in a way that restricts the data flowing into them, not expanding it. It also means conducting risk analyses and data protection impact assessments (DPIA), in other words, evaluating how your tool could affect personal data processing.
So these are all obligations that already existed under GDPR, and your client should theoretically require them from you so they can maintain a complete technical documentation file. This would allow them, in case of an audit, to show that they vetted you as a contractor and that you provided solid assurance that the system was built in compliance with legal requirements. That said, in this regard the AI Act doesn't change much. What it does do is potentially add more responsibilities, particularly around generating reports and conducting risk assessments related specifically to AI system operations. So your technical documentation will need to be more robust, but the part concerning personal data will probably just be a chapter within that broader AI-focused documentation.
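Privacy by design and by default, as Mateusz describes them, largely come down to not letting data into the system that the feature doesn't need. A minimal sketch of that allow-list idea, with the record fields invented purely for illustration:

```typescript
// Illustrative data-minimization step: strip everything the AI feature does
// not strictly need before it ever reaches the model or its logs.
interface IncomingCustomerRecord {
  id: string;
  fullName: string;
  email: string;
  nationalIdNumber?: string; // never needed by the feature below
  supportTicketText: string;
}

// The hypothetical AI feature only needs an opaque ID and the ticket text.
interface MinimizedRecord {
  id: string;
  supportTicketText: string;
}

function minimize(record: IncomingCustomerRecord): MinimizedRecord {
  // Explicit allow-list: new fields added upstream are dropped by default
  // (privacy by default), rather than flowing through silently.
  return { id: record.id, supportTicketText: record.supportTicketText };
}
```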
Piotr: With that, we've basically moved into the practical application of AI Act compliance in daily work. So how should a software company protect itself, and how should it implement the AI Act into its day-to-day operations?
Mateusz: We still have some time, but it’s already worth paying attention not just to the AI Act itself, but also to the practices emerging around it and the approaches that are being promoted globally. On the one hand, we need to start by doing some mapping, meaning, we should check whether any of the components we’ve developed already fall under systems with a certain level of risk, and whether they will require additional documentation. What’s important is that, in theory, the things we already have won’t fall under the AI Act.
Meaning, if you've already developed some software and you're the owner of it, the AI Act doesn't require you to retroactively create all the documentation, unless you make significant changes to the software, or changes that increase the risk of the system. For example, if something was merely assisting work and suddenly it becomes a high-risk system, then those obligations will apply. So this is a bit of a relief: there's no need to audit everything or go through every single component that already exists, because the AI Act only applies to new or modified systems. So we've got a procedure, we've got someone responsible, and we've got process mapping, meaning identifying where potential risks might lie.
And then we follow a standard path. First, we look at what tools we have and what the goals are for a particular idea, right? Not just executing the idea, but also understanding what impact it might have, whether it's been ordered by a client or it's something we came up with ourselves. And we can't overlook the side effects. Second, do we have a team that's capable of handling this? Meaning, we also need to maintain up-to-date knowledge and train the staff who work with these AI documents or designs. Third, where are we getting the data from when we feed the system? Meaning, do we have copyright or licensing rights to the data, or are we dealing with personal data and do we have a legal basis for processing it?
Often it'll be data provided by some general-purpose AI model, but even then you still need to check how that model handles it on its side. What can you legally do with the input data? And finally, there's the issue of monitoring: what safeguards have we put in place to ensure AI Act compliance, and are they actually working? This is a living process, and we definitely need someone overseeing it. Because when the authorities come in, it may be hard for someone who's not familiar with how the company's AI policy works to find the right pieces and prove we're compliant. So the oversight and maintenance of that documentation needs to be kept at a fairly high level within the organization.
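The four questions Mateusz lists (purpose and impact, team competence, data provenance, ongoing monitoring) translate naturally into a recurring checklist that a named owner reviews. A rough sketch of how that might be recorded, with all field names and the review interval being our own assumptions:

```typescript
// Illustrative recurring compliance checklist per AI-assisted project,
// mirroring the questions discussed above. Field names are assumptions.
interface AiComplianceCheck {
  project: string;
  purposeAndImpactReviewed: boolean; // what is the goal, what are the side effects?
  teamTrained: boolean;              // does the team have up-to-date AI Act knowledge?
  dataRightsConfirmed: boolean;      // copyright/licence and, for personal data, a legal basis
  safeguardsMonitored: boolean;      // are the safeguards in place and actually working?
  owner: string;                     // the person who answers when an authority asks
  lastReviewed: string;              // ISO date
}

// Flag checks that have gone stale; 90 days is an arbitrary example interval.
function isOverdue(check: AiComplianceCheck, today: Date, maxAgeDays = 90): boolean {
  const ageMs = today.getTime() - new Date(check.lastReviewed).getTime();
  return ageMs > maxAgeDays * 24 * 60 * 60 * 1000;
}
```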
Piotr: So actually, it’s not just about the aspects related to regulating cooperation with the client, but also those internal, organizational ones, which are super important in everyday work.
Mateusz: Yes, because even at the point when a customer comes to you with an inquiry of some kind, you should already be aware of whether you're working on something high-risk or low-risk. So that evaluation really starts from those very first moments.
Piotr: Ok, so I think that’s all when it comes to the AI Act. We still have a few technical questions related to applicable law.
Mateusz: Case studies?
Piotr: Well, you could say that. Let's call them edge-case-type questions, or maybe not even edge cases, because they actually touch on our everyday practice and things we've been wondering about. But I think Damian will be better at explaining or asking those questions.
Damian: I've finally made it to my question segment. Right, when it comes to the development of artificial intelligence, you could say it's divided into two eras: the era when LLMs, large language models, emerged, and the era before those big models. Back then, companies or individuals would use cloud solutions or their own models that they trained, or they'd simply use cloud-based tools that were in some way secured by terms and conditions, where the cloud service provider basically outlined the relevant responsibilities, and so on. Now, with the emergence of the LLM era, the entry threshold into AI has dropped significantly. A lot of businesses have really started to use AI on a mass scale from that moment.
So the question is, does the AI Act distinguish in any way between those cloud-based solutions, which are still around or were used before, and what we're seeing now, where AI has become far more widespread and easier to use? Is there any difference between the two?
Piotr: And that relationship has really changed too. Previously, it was more B2B, you know, developers of software products versus those global specialized tools. But now we've got tools that are basically accessible to the average person on the street.
Mateusz: Yes, the AI Act takes that into account as well, and you could say that near the final stages of drafting the Act, specific provisions were added to regulate general-purpose AI models. That's how these models are approached: those trained on huge volumes of data that can be used to generate specific outputs based on the prompts they're given. These general-purpose models aren't required to meet the same extensive obligations as a full AI system, because most of the time they simply become components within a larger system. That means, when they're implemented into a specific solution, that overall solution, the entire concept of how it's used, by whom, and in what context, still needs to be assessed for its own risk level. And the general-purpose model is just one part of that puzzle.
That said, the language model itself does have to provide external information about how it works, what kind of data it’s trained on, what kind of copyright policy it follows, what’s the nature of the outputs generated from that training set. There’s a strong emphasis here on transparency, on informing people how these language models actually function. Of course, they’re also categorized based on their level of risk, some are higher risk, some are lower. The ones tied to tools like ChatGPT, for example, have a higher systemic risk, which the European Commission has pointed out. And those kinds of models will fall under a special form of oversight due to their potential for generating risks, because like you said, while on one hand it’s regular users interacting with them, on the other hand, it’s also businesses. And if they rely on these models and the outputs turn out to be made-up or hallucinated and don’t reflect the truth, that introduces a general risk for all of us, because the large model simply isn’t functioning as it should. So yes, it is regulated under the AI Act, but to a lesser extent than a complete system, since most of the time it will just end up being a part of one. And if someone uses it, they’ll need to take it into account in their own risk analysis.
And honestly, that’s a very simplified answer to that question, because like I mentioned, there are topics around risk levels, annexes, who the Commission might define as high risk, or what qualifies something as high risk just based on meeting certain criteria. So there’s a lot to untangle once you really dive into figuring out which model falls into which category. That’s why it’s so important to have someone in your organization who, to put it plainly, really gets all these requirements, because there are more of them than it might seem at first glance.
Damian: Just to clarify, using the example of OpenAI and ChatGPT, there are basically two ways to use the service. One is using the chat directly in the browser. The other is through the API. And the difference is that, according to the privacy policy and terms of service, when you use the API, the data you send to OpenAI isn't used to further train the model. But when you're using the browser-based version, it's explicitly stated that your data can be used to train the model further. So, does the AI Act make a distinction between using ChatGPT in a regular browser context versus through the API? Or is that not regulated?
Mateusz: No, generally speaking, that’s up to the model provider. From the perspective of the AI Act, what matters is that the provider clearly states what’s happening with the data. If they provide both options and clearly communicate the differences, then you could say that from this angle they’re compliant, or will be compliant with the AI Act, because they’re meeting the informational transparency requirements.
The question is what risks this might create for a software company using these kinds of tools, or for any other company, really. Because on the one hand, you might get a false sense of security thinking, "Oh, it’s via API, so the data is safe". But there are a few other aspects to consider. First, we don’t know whether the terms will change in the future. And we’ve already seen cases where companies have changed their approach and something that was supposed to be protected ended up not being protected. So I do think there’s risk here. Second, and I’ve seen this in the terms of service of providers like these, they often remind users that they’re responsible for maintaining things like trade secrets and personal data confidentiality. And if something goes wrong, the liability is on the user.
That should raise a big orange flag. Even if the API guarantees that the data won't be used for training, if you're uploading confidential information, you could still be violating an agreement with your client, especially if that agreement clearly states the data must stay with you and not be shared with anyone. Even if it's not being used to train anything, it's still being sent to a third-party server. Not to mention, there's always the risk of a data breach. So why add that stress by uploading those kinds of files?
In short, I'd strongly recommend not putting too much trust in those safety assurances. And remember that confidentiality and trade secret obligations should really be interpreted narrowly in this context. Even if someone promises your data is safe, it's still a third party that, ideally, shouldn't have access at all, even if the data just sits there on their server.
Damian: The principle of limited trust.
Mateusz: Absolutely.
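One practical reading of that principle of limited trust: even when an API promises not to train on your input, strip anything confidential or personal before it leaves your infrastructure. A rough sketch of such a pre-flight redaction step; the patterns below are deliberately naive placeholders, not a complete anonymization solution, and the actual provider call is left out on purpose:

```typescript
// Illustrative pre-flight redaction before calling any third-party LLM API.
// The regexes are simplistic examples; real redaction of personal data and
// trade secrets needs a much more careful approach.
const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],               // e-mail addresses
  [/\b\d{3}[- ]?\d{3}[- ]?\d{3}\b/g, "[PHONE]"],          // simple phone pattern
  [/\b(?:confidential|internal only)\b.*$/gim, "[REDACTED LINE]"],
];

function redactForExternalLlm(text: string): string {
  return REDACTIONS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text
  );
}

// Usage: sanitize first, then send the sanitized text to whichever provider
// you use, and still assume the terms of service may change tomorrow.
const prompt = redactForExternalLlm(
  "Summarize: client ACME (jan.kowalski@example.com) reported an outage..."
);
console.log(prompt); // "Summarize: client ACME ([EMAIL]) reported an outage..."
```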
Damian: Right, I’ve got one more question, it's about responsibility: human vs. machine. A while ago there was a lot of public debate, politicians and lawyers were wondering who should be held accountable if, say, an autonomous vehicle like a Tesla causes an accident. Is it the driver, the manufacturer, or should the insurer be fully liable? So my question is, what about when we’re building software? Let’s say as a software company we create a system that processes some sensitive data for a client, and based on that data it generates insights that the client later uses to make strategic business decisions. Now, if we use an AI algorithm here and it turns out those insights are wrong, who is responsible then? Is it us, the software company?
Because if it were an algorithm we wrote ourselves and there was a bug, the situation is clear. But what if we give the AI some level of autonomy, where it makes its own calls? Who takes responsibility then?
Piotr: And I think it's important to draw a line here between drawing insights and actually making decisions. Those are two different things, right? One is suggesting something to a human decision-maker, and the other is where the model, or an agent we've created, makes a business decision on its own. So maybe we should clearly define that boundary from the start?
Damian: And you can add another level of difficulty here: the system decides on its own, with no human factor involved, and based on what it infers, the target action simply happens straight away.
Mateusz: Sure, and let’s take it further. Imagine your client is a hospital, and you’re developing software that adjusts medication dosages based on a patient’s condition. The system scans the patient’s data and decides to increase the dosage, and then the patient suffers some harm because of it. So what happens? Does the patient go to the hospital with a claim? Does the hospital then come to you? Or could the patient come directly to you?
Piotr: Or even to the model itself that was used?
Mateusz: Or to the model. Yeah, so here we have yet another player in the mix. And right now, AI doesn't have legal personality, so it can't be held liable, at least not yet. But when it comes to current laws, liability most often falls under the category of defective product liability, depending on what exactly was developed. And in this case, for example, you could potentially bear direct liability towards your client, in this scenario the hospital. Theoretically, the patient could also try to bring a claim against you. But this whole situation isn't really clear cut. In fact, your client, the hospital, would have a stronger case to bring a contractual claim, arguing that you simply delivered faulty software. Then it would become a matter of determining whether the error was actually on your end or whether the hospital just misused the software. So these are really tricky cases for now.
The AI Act doesn't give us any answers here. What is being worked on at the EU level, though, is a directive that would define liability for entities developing AI-based solutions. It's still in the early stages, and it doesn't cover contractual relationships, so if you've got a contract with the hospital, sure, the hospital can come after you for poor performance. But this directive would apply to cases where someone like the patient is the one harmed, and they want to seek compensation because the software didn't function properly and, say, gave them the wrong dosage. So this is about non-contractual relationships, and the directive aims to make it much easier to pursue claims in cases like this, specifically for the injured party.
So at that point, the patient could basically sue anyone: you, the hospital, anyone in that chain who had a role in creating or supplying the software. The whole idea is to make it easier for victims to seek justice without needing to figure out whether it was a company in California, your company, or the hospital. No matter who they sue, the court would still need to consider the case. And there are also eased rules around evidence: there's a kind of presumption of liability where the burden is on the defendant to prove they acted appropriately. So yeah, this is something that's coming down the road. I wouldn't expect it to kick in sooner than two years from now, but it's clear that the approach to AI liability is moving in the direction of letting victims sue basically anyone in the supply or implementation chain, and the courts will, by default, assume the defendant is liable unless they can prove otherwise. "Guilty or not guilty" is criminal law language, but in this context it's about fault for delivering faulty software. So, definitely better times for users, but more risk for everyone involved in building that software stack.
Damian: Okay, just to summarize and confirm: if we're a software company that develops a product, creates software, we need to be careful about every aspect. And if we're the ones who created a specific AI model, we're responsible for how it performs. But if we're using a third-party model, then the responsibility lies with that third party, right?
Mateusz: That can vary, it depends on what the decision was based on. If the model you used only provided input data, and you were the ones who designed the decision making logic, then unfortunately, most of the responsibility will fall on you. However, if that model already included certain decision making components developed by the initial provider, meaning the first actor in the supply chain, then you might have the right to seek recourse. If, for instance, you were sued by the end user, you could then try to recover damages from that original provider. In practice, though, this may be difficult, especially since we’re usually talking about large language models and companies so big that seeking liability from them might look like David versus Goliath. Time will tell how oversight authorities respond, maybe in the future they’ll be more open to supporting actions against those large end providers.
Piotr: And on that symbolic note, we wrap up today's podcast, right at the edge of the law, where the AI Act ends and the gray areas and uncertainties begin. Big thanks to my guests: Mateusz Borkiewicz from the Leśniewski Borkiewicz Kostka & Partners law firm and Damian Maślanka, CTO of WebMakers.
Damian: Thanks as well.
Mateusz: Thank you!
Thanks for listening to the episode. For more valuable content, visit our blog: www.webmakers.expert.





