
You, Robot

More and more companies are using chatbots, for example to conclude contracts. This raises a number of legal questions, especially relating to the underlying declaration of intent.

As early as 1966, the computer science pioneer Joseph Weizenbaum developed the computer programme ELIZA, an early text-based dialogue system that made it possible to chat with a technical system. This was the birth of the chatbot. However, it took another 50 years before chatbots were technically advanced enough to be used for mass marketing purposes.

Thanks to such systems, it is now difficult for users to know if they are communicating with a computer programme or a person. Today, chatbots are increasingly handling communications with customers – without a clear legal framework for their use.

Chatbots as contractual partners?

Companies often use chatbots to set up and conclude contracts. But: Can a chatbot make a legally binding declaration of intent on behalf of a company? At first glance, this seems doubtful. After all, it is a computer programme – not a real person or “natural person” as defined by law – that makes the declaration.

At least for automated chatbots, though, there is broad agreement in case law and legal scholarship that this difference is ultimately insignificant: the declaration of a chatbot can be attributed to the company operating it. With automated chatbots, declarations of intent are generated based on settings predefined by the chatbot operator. They are thus treated as computer declarations, which are not explicitly regulated by law but are nevertheless legally binding.

Although the will to act, which is necessary for a legally binding declaration of intent, is not present at the moment a computer declaration is generated, it is deemed to be expressed through the operator's activation of the chatbot. Lawyers have also creatively construed the presence of the other requirements for a legally binding declaration of intent – awareness of intention and the will to engage in a transaction. Strictly speaking, owing to the automation, neither requirement is present when the chatbot generates the declaration. Ultimately, however, the use of a chatbot is always based on the will of its human operator, so both conditions are considered fulfilled.

Automated vs autonomous

However, the construct of a computer declaration has its limitations when it comes to autonomous chatbots. In contrast to automated chatbots, autonomous chatbots make decisions using self-learning algorithms. Here, artificial intelligence comes into play: the chatbot operator no longer has any direct influence on the results and, as a rule, cannot even verify the decisions that are made.

Against this background, lawyers no longer see a sufficient connection between the actions of the system operator and those of the chatbot. This means that the principles of a computer declaration no longer apply. According to many lawyers, an autonomous system also cannot be regarded as an agent of the chatbot operator, because currently only natural or legal persons can act as agents. Thus, unlike automated chatbots, autonomous ones cannot make any legally binding declarations.

At present, autonomous systems are still at an early stage of development, so this restriction has little practical relevance. However, this is likely to change sooner or later and will require legislative adjustments. It seems unlikely, though, that the legislator will implement the most radical approach: recognising the legal capacity of autonomous systems.

As with human misconduct, "misconduct" by chatbots raises the question of who is liable for the damage caused.

The main issue here is whether the chatbot's misconduct is due to human error, for example incorrect programming. While it seems possible to trace misconduct back to its actual cause with automated chatbots, this becomes more difficult the more autonomous a chatbot is – starting with the question of proof.

Who is liable for chatbots?

In questions of liability relating to the use of chatbots and similar systems, the injured party faces the problem of having to prove possible neglect of duty or system errors. With the increasing complexity of systems, this is a huge challenge and a considerable obstacle if the injured party wants to assert its claims successfully.

For this reason, some believe that the burden of proof should be borne by the manufacturer or operator of the system. The manufacturer or operator would then have to prove that there was no misconduct on its part and that it exercised proper diligence in programming and operating the system.

So-called strict liability – liability regardless of fault – is also being considered in connection with automated systems. With the increasing degree of automation, the complexity of the systems no longer allows "actions" to be easily attributed to a natural or legal person, which creates a liability gap. This gap can be closed by making operators pay for damage caused by their systems, whether or not they are to blame.

The more self-learning systems depart from their originally intended and programmed approach, the louder the calls become to grant them their own legal personality, at least for liability purposes. Any damage caused by such a system would then have to be compensated by the system itself, for example from funds made available by the operator or the manufacturer.

However, systems based on artificial intelligence do not yet have their own legal personality, so this solution lies in the distant future. Consideration is also being given to the introduction of compulsory insurance to cover damages caused by automated or autonomous systems.

Upholding data protection

Unsurprisingly, data protection regulations must also be observed when using chatbots. Communicating with such programmes involves the processing of a large amount of personal data. Depending on the reason why users contact the chatbot, they may also disclose sensitive data, for example health or financial data or login credentials.

From a data protection law perspective, it must first be clarified which data is collected and how and for what purpose it is processed. For example, if a toy is ordered via a chatbot, the data collected and its use will differ from those of a chatbot conversation about financial investments. In addition, users’ entries are also processed to train the chatbot and to optimise its communication skills.

Seeking consent

The legal admissibility of processing personal data must be examined based on the specific situation. Using this data could be based on a legitimate interest of the chatbot operator if such legitimate interest outweighs the interest of users in protecting their data. However, this is unlikely, particularly when disclosing confidential or sensitive data. In most cases, therefore, it will be necessary to obtain users’ consent before starting the chat communication.

However, data protection law imposes rather high requirements on consent. Users must be informed of all details of the processing of their personal data, in particular which data is processed for what purpose, to whom and for what purpose it is transmitted, how long it is stored and whether it is transferred to non-European countries.

Specific data protection challenges arise when using the Facebook Customer Chat Plugin. Comparable to the Facebook like button, this plugin automatically establishes a connection to Facebook as soon as a page is accessed and transmits personal data. This is highly problematic under data protection law and also involves considerable liability risks for the operator of the chatbot.

The use of chatbots in business transactions may also trigger a labelling obligation under competition law. Unless it is clear from the context, it is considered an unfair commercial practice if the commercial purpose of an act – such as advertising – is not indicated and the failure to identify it as such may cause consumers to make a business decision they would not otherwise have made. In plain language: an average consumer must be able to recognise that it is advertising.

It is therefore advisable to always identify the chatbot as such and to inform users of the purposes for which it is intended. Consumer protection rules impose similar obligations, e.g. § 312a of the German Civil Code. This provision explicitly addresses labelling obligations only in connection with telephone calls. According to some lawyers, however, the regulation can be applied to chatbots, since its purpose is to protect consumers.

Chatbots are not only used in customer interaction; in the form of so-called social bots, they are also affecting the cornerstones of democracy. Such bots publish posts on social media and, as a rule, use fake profiles to spread political propaganda. It is often difficult for users to recognise that they are communicating with a bot rather than with a human.

Against this background, political parties in Germany agreed not to use social bots ahead of the 2017 federal elections. Ultimately, however, as a look abroad shows – social bots are already actively used elsewhere – it is probably only a matter of time before a party goes back on this voluntary commitment. It is to be hoped that appropriate legislative measures, such as the introduction of a labelling requirement for social bots, will be taken.

Conclusion

It will certainly not take another 50 years before chatbots autonomously handle customer communication in many areas. The more autonomous these systems become, the more urgent the questions that cannot be answered, or at least not satisfactorily, with the current legal instruments. It is to be expected, however, that the law will evolve and find appropriate answers to the questions raised by digitisation.

First published in German in iX - Magazin für professionelle Informationstechnik, 6/2018
https://www.heise.de/select/ix/2018/6/1527814065720445

Authors

Dr. Michael Kraus
Partner
Stuttgart
Dr. Jörn Heckmann