
Artificial Intelligence and Legal Liability: Who Is Liable If an Algorithm Makes a Mistake?

Imagine a self-driving car causing an accident, an AI surgeon making a mistake during an operation, or a financial algorithm driving a company into bankruptcy with a wrong investment decision. These scenarios are no longer the stuff of science fiction films; they are the legal challenges of the near future. As artificial intelligence (AI) systems permeate every aspect of our lives, the question “Who bears legal liability when AI causes harm?” has become one of the most debated topics among legal professionals and policymakers. In this article, we will examine critical issues such as who bears liability for damages caused by AI, the possibility of granting legal personality to AI, and the role of ethical principles in resolving this complex issue.

The Chain of Liability: Who Are the Suspects?

In the event of harm caused by artificial intelligence, there are multiple actors to whom liability may be attributed:

Manufacturer/Developer

The software company or engineers who design, code and produce the artificial intelligence algorithm. If the product contains a design or coding flaw (a bug), or if adequate precautions have not been taken against foreseeable risks, the manufacturer’s liability may come into play.

Owner/Operator

The individual or organisation that purchases the artificial intelligence system and uses it in their own operations. For example, a company operating a fleet of autonomous vehicles or a healthcare organisation using an AI surgeon in its hospital.

User

The end user who directly operates the artificial intelligence system or receives services from it. If the user misuses the system or fails to follow instructions, their own negligence may also be a factor.

Data Provider

Organisations that provide the data used in the AI’s learning process. If the data provided is incorrect, incomplete or biased, and this has caused harm, the data provider’s liability may be subject to debate.

Inadequacy of the Current Legal Framework

Traditional liability law is based on human actions and fault. However, artificial intelligence, particularly self-learning (machine learning) systems, can make decisions that are unpredictable and cannot be fully explained even by their programmers. This situation challenges existing legal concepts.

Liability Based on Fault

Under Article 49 of the Turkish Code of Obligations, compensation for a loss generally requires a faulty and unlawful act. But can we speak of an artificial intelligence being “at fault”? The algorithm’s decision-making process may be so complex that identifying the source of an “error”, and consequently the fault, becomes practically impossible.

Liability Arising from a Defective Product

If artificial intelligence is regarded as a “product”, the provisions on defective goods in the Law on Consumer Protection or the Turkish Code of Obligations may apply. However, the constantly evolving and learning nature of artificial intelligence complicates the definition of a “defect”: is a harmful decision reached after deployment, through learning from new data, a defect that already existed when the product was placed on the market?

Strict Liability

Some legal experts propose the application of the principles of “strict liability” or “liability for risk” for high-risk artificial intelligence systems, such as autonomous vehicles. Under this principle, the operator of the system is held liable for any resulting damage, even in the absence of fault.

Can Legal Personality Be Granted to Artificial Intelligence?

One of the most radical debates centres on the idea of granting advanced artificial intelligence systems a form of “electronic personality”, much like that of a company. In this scenario, an artificial intelligence could hold its own assets and compensate the damage it causes from those assets. Our legal system, however, grants legal capacity only to natural and legal persons, and the idea currently seems unlikely to be adopted: it challenges the human-centred structure underlying the law and presupposes philosophical concepts such as “consciousness” and “will”.

The Role of Ethical Principles and Regulations

In this area where legal gaps exist, ethical principles can serve as a guide. Principles such as transparency, accountability, justice and impartiality should guide both the design and the use of artificial intelligence systems. Regulatory initiatives such as the European Union’s ‘Artificial Intelligence Act’ adopt a risk-based approach and introduce strict rules for high-risk artificial intelligence systems. Such regulations could clarify the chain of liability. A concrete example is the General Circular of the Financial Crimes Investigation Board (MASAK), which requires a report demonstrating that the false acceptance rate of the artificial intelligence algorithm used for remote identity verification is below a certain threshold.

Financial Crimes Investigation Board General Circular (No. 19)

b) In cases where an artificial intelligence application is used to compare the face in the live image of the person undergoing remote identity verification with the photograph on the identity document… a report from the Turkish Standards Institute must be obtained demonstrating that the false acceptance rate of the artificial intelligence algorithm to be used is below one in ten million.
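
To make the threshold concrete: the false acceptance rate (FAR) is the proportion of impostor comparison attempts that the system wrongly accepts, so the circular in effect demands FAR below 1/10,000,000. The short Python sketch below illustrates only the arithmetic of such a compliance check; the figures and function names are hypothetical assumptions for illustration, not part of the circular or of any certified testing procedure.

    # Illustrative sketch (hypothetical figures): checking a face-matching
    # algorithm's measured false acceptance rate (FAR) against the
    # one-in-ten-million threshold cited in the MASAK circular.

    MASAK_FAR_THRESHOLD = 1 / 10_000_000  # one in ten million

    def false_acceptance_rate(false_accepts: int, impostor_attempts: int) -> float:
        """FAR = wrongly accepted impostor attempts / total impostor attempts."""
        if impostor_attempts <= 0:
            raise ValueError("impostor_attempts must be positive")
        return false_accepts / impostor_attempts

    # Hypothetical evaluation: 2 false accepts in 50 million impostor trials.
    far = false_acceptance_rate(false_accepts=2, impostor_attempts=50_000_000)

    print(f"Measured FAR: {far:.2e}")  # 4.00e-08
    print("below threshold" if far < MASAK_FAR_THRESHOLD else "above threshold")

In practice, of course, demonstrating such a rate requires an accredited test report from the Turkish Standards Institute, as the circular itself states; the sketch only shows what the number means.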

There is as yet no single, clear answer to the question: “Who is liable if the algorithm makes a mistake?” Current legal systems are not fully prepared for these new and complex challenges posed by artificial intelligence. The solution likely lies not in a single liability model, but in a regime of shared liability among the manufacturer, operator and user, alongside insurance mechanisms and risk-based legal regulations. Legal professionals and policymakers must adopt a multidisciplinary approach to keep pace with the speed of technology and ensure that justice is upheld in this new digital age. Otherwise, we risk liability becoming lost in digital loopholes.