The digitisation of healthcare is advancing apace. But what are the legal issues that arise with the use of artificial intelligence?
Can algorithms protect against disease? Can an app provide a better diagnosis than a doctor? Can a robot perform an operation more reliably than a surgeon? These are some of the burning questions in healthcare.
Digitisation is advancing steadily and disrupting the sector. One of the most important aspects is the potential of artificial intelligence (AI) for applications in the field of health. Alongside the technical challenges, the digitisation of healthcare raises a series of legal questions. This is often “terra incognita” in terms of the law. Is the software a medical device? Are contracts concluded automatically? And who is accountable if the self-learning app makes a mistake?
Artificial intelligence and robotics in healthcare are developing at a furious pace, in particular in early detection and diagnostics applications. At the same time, AI is becoming increasingly sophisticated, enabling it to do what humans do – often more efficiently, faster and at lower cost.
An important area of application is preventative care. AI can be used to help people stay healthy. Apps such as fitness trackers can promote healthier behaviour and help individuals manage a healthy lifestyle on their own. The goal is to give consumers better control of their health and wellbeing.
AI is already indispensable in the field of early detection and diagnostics. It is used in a variety of ways to detect diseases like cancer more accurately, more reliably and sooner. Put simply, it does so by comparing data from a specific patient – including imaging data – with large quantities of data from other patients. The self-learning systems detect correlations and suggest diagnoses. One example is IBM's Watson for Health, which assists healthcare organisations in applying cognitive technologies to large quantities of health data. Google's DeepMind Health combines machine learning with systems neuroscience to simulate the human brain using AI and offers diagnostic and decision-making support to those working in healthcare.
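To make the underlying principle tangible – and only as a minimal illustrative sketch, not a description of how Watson or DeepMind work internally – a diagnostic model of this kind can be thought of as a classifier trained on data from many past patients, which then estimates a disease probability for a new patient. All features, figures and thresholds below are invented for the example:

```python
# Minimal sketch of AI-based diagnostic support: a model learns
# correlations from data on many past patients and then suggests a
# probabilistic diagnosis for a new patient. All data are synthetic
# and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

# Synthetic training data: 1,000 past patients, each described by
# three hypothetical measurements (e.g. a lab value, age, a marker).
X_train = rng.normal(size=(1000, 3))
# Synthetic ground truth: whether the disease was later confirmed.
y_train = (X_train @ np.array([1.5, -0.5, 2.0])
           + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A new patient's measurements are compared against the learned pattern.
new_patient = np.array([[0.8, -1.2, 1.1]])
probability = model.predict_proba(new_patient)[0, 1]
print(f"Estimated disease probability: {probability:.0%}")  # a suggestion, not a diagnosis
```

The output is a suggestion for the clinician, not a diagnosis – a distinction that matters for the liability questions discussed below.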
This last aspect – decision-making support – forms another important pillar of AI applications in healthcare. Drawing on extensive data sets, so-called decision support software uses predictive analytics to support clinical decisions and measures as well as to streamline processes. In addition, pattern recognition helps identify patients who are at high risk of certain diseases or whose general state of health is deteriorating due to lifestyle, environment, genomics or other factors.
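Purely as an illustrative sketch – the risk factors, weights and threshold below are invented – such pattern recognition boils down to scoring each patient against known risk factors and flagging those above a threshold for clinical review:

```python
# Illustrative sketch of risk stratification: score patients against
# hypothetical risk factors and flag high-risk cases for clinical review.
# Factors, weights and the threshold are invented for this example.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    smoker: bool
    bmi: float
    family_history: bool

def risk_score(p: Patient) -> float:
    """Weighted sum of (hypothetical) risk factors."""
    score = 0.0
    score += 2.0 if p.smoker else 0.0
    score += 1.5 if p.bmi >= 30 else 0.0
    score += 1.0 if p.family_history else 0.0
    return score

patients = [
    Patient("A-001", smoker=True, bmi=31.2, family_history=False),
    Patient("A-002", smoker=False, bmi=24.5, family_history=True),
]

HIGH_RISK_THRESHOLD = 3.0
for p in patients:
    if risk_score(p) >= HIGH_RISK_THRESHOLD:
        print(f"Flag {p.patient_id} for clinical review")
```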
Further areas of application include assisting in patient treatment – for example, by improving treatment plans or monitoring treatment outcomes – and the use of robotics in surgery.
Digitisation of healthcare: key legal issues around AI
From a legal perspective, there is a wide variety of issues to consider in the development and implementation of digital solutions using AI. It is not uncommon to enter unknown legal territory here, and much is still in a state of flux.
A key question in the development of software solutions is the regulatory classification of the product. It is especially vital to clarify whether the software is a medical device. This is important from a practical point of view because medical devices may only be marketed if they carry the CE marking, which they receive after undergoing a conformity assessment procedure. If a product that qualifies as a medical device lacks the CE marking, a competitor could demand that it be withdrawn from the market. Moreover, marketing it would constitute an administrative offence and may even have consequences under criminal law.
According to the German Medical Devices Act (Medizinproduktegesetz – MPG) – and also the upcoming European Medical Devices Regulation (MDR) – the intended purpose of the software is decisive. Broadly stated, that means: if the software is intended to detect or treat diseases, there is a strong argument for classifying it as a medical device – for example, if it assists in diagnosis, facilitates decision-making on therapeutic measures or calculates the dosage of medication. On the other hand, if the software only provides knowledge or only stores data, it does not qualify as a medical device. However, since the manufacturer determines the intended purpose, there is a certain amount of leeway based on the functional design of the application.
From a regulatory perspective, there are several further issues to consider: professional medical law, where the demarcation between medical and non-medical work plays an important role; pharmacovigilance aspects arising from the generation of extensive data; and – increasingly important for the financial viability of software solutions – questions of reimbursement.
Data protection is naturally highly relevant in this context. After all, AI solutions are generally based on analysing and comparing specific patient data with large volumes of – mostly anonymised and aggregated – data from other patients. The requirements for the effective management of consent in the collection and use of personal data have increased even further as a result of the European General Data Protection Regulation (GDPR), which became applicable in May 2018.
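To illustrate the kind of safeguard involved – as a heavily simplified sketch with invented field names, falling far short of what the GDPR actually requires for anonymisation – direct identifiers can be pseudonymised and attributes coarsened before the data are analysed:

```python
# Illustrative sketch of pseudonymisation and aggregation before analysis.
# Field names are invented; real-world anonymisation under the GDPR must
# meet far stricter legal and technical requirements than shown here.
import hashlib

records = [
    {"name": "Jane Doe", "age": 34, "diagnosis": "J45"},
    {"name": "John Roe", "age": 41, "diagnosis": "J45"},
]

SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and coarsen age."""
    token = hashlib.sha256(SECRET_SALT + record["name"].encode()).hexdigest()[:12]
    decade = (record["age"] // 10) * 10
    return {
        "pseudonym": token,
        "age_band": f"{decade}-{decade + 9}",
        "diagnosis": record["diagnosis"],
    }

for row in [pseudonymise(r) for r in records]:
    print(row)
```

Note that pseudonymised data of this kind still count as personal data under the GDPR; only truly anonymised data fall outside its scope.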
In civil law, a large number of issues relating to AI, and in particular to robotics based on AI, have still not been clarified. These include the validity of contracts, liability for errors and injuries, and insurance. As a result of this uncertainty, the European Parliament adopted a resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) on 16 February 2017. At the level of the EU Commission, the subject of AI and robotics is being addressed and promoted in various ways – but as far as we can see, there are as yet no concrete proposals for changes to civil law regulations.
For the time being, the general rules continue to apply. It is therefore necessary to find creative solutions to reconcile the new circumstances with existing civil law provisions that were frequently not designed for this purpose. This affects, for example, the question of with whom a treatment contract is concluded when a robot is used during surgery. Or the question of who is liable if a self-learning system makes an error that harms the patient – the programmer, the user or even the software itself? It is precisely this – still unresolved – question that programmers, manufacturers, users and patients are grappling with.
Conclusion: tackle the legal issues in good time
The potential for AI in healthcare is huge. The unstoppable progress of digitisation will ensure that applications based on AI are increasingly used in day-to-day treatment. From the outset, those actively engaged in this field should consider not only the technical aspects but also, and especially, the legal issues. In this way, it will be possible to analyse, evaluate and mitigate the legal risks. Who knows – perhaps an application using legal tech could help with this, since it is well known that AI is also gaining ground in the legal world…