The EU Proposal for Regulation of Artificial Intelligence: meaningful steps toward grasping the medico-legal nettle?

On 21 April 2021, the European Commission published its bold proposal[1] for a regulation laying down harmonised rules governing artificial intelligence.

As stated in the firm's wider article on the significance of this move, the Commission has placed the EU at the forefront of the global debate on when and how risks arising from AI should be captured and regulated. Although the UK is no longer directly subject to EU regulations, the AI market is global. From a medical devices perspective, AI providers cannot ignore the proposed regulation, especially if they wish to supply their products within the EU.

Overcoming tensions

Ever-present within the proposed regulation is the familiar tension between, on the one hand, the desire to avoid encroaching on the freedom to research and swiftly exploit new technologies with wide-ranging expected benefits and, on the other, the need to protect the public. The proposal seeks to bring the attendant risks within a workable legal framework.

Whilst some in tech have already signalled concern, the Commission's stated aims in producing the proposal are difficult to argue with. Taking a long-term view, innovation only stands to benefit from legal certainty. Such certainty can only enhance the prospect of those working with AI securing confident investment, and build public trust and buy-in; public confidence is key to the continued uptake of AI-based solutions. It will also help prevent the market fragmentation across the EU that might have come with a less comprehensive legal instrument.

The challenges AI presents to the legal orthodoxy are myriad, whether one considers the medical device regulatory regime, the common law fault-based liability framework injured patients traditionally navigate in clinical negligence cases in the United Kingdom, or the strict-liability, defect-based product liability framework.

Against this complex background, we go on to consider the key aspects of the Commission's proposal, with a particular focus on what it could mean for stakeholders in the health sector.

The Commission's proposal in more detail

The proposal seeks to impose on high-risk AI systems an adjusted form of the regime governing medical devices (and indeed a range of other products). AI systems qualifying as high-risk are expected to go through a conformity assessment process and be CE-marked before being placed on the market or put into service. Certain AI systems are entirely prohibited, and those that are not high-risk are subject to more limited obligations, but for the reasons set out below the focus for those in the health sector will be overwhelmingly on the provisions relating to high-risk AI systems.

'AI system' is defined very broadly, and includes software developed using machine learning (including deep learning); logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and search and optimisation methods. Any such software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with will fall within the definition. From a medical devices perspective, Article 6 of the proposed regulation confirms that an AI system is high-risk where it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation at Annex II and would be required to undergo a third-party conformity assessment pursuant to that legislation. Annex II includes the EU Regulations on Medical Devices (MDR)[2] and In Vitro Diagnostic Medical Devices (IVDR)[3]. The classification rules and conformity assessment procedures under the MDR mean that most software qualifying as a medical device will require the involvement of a notified body before CE marking, and so will qualify as a high-risk AI system where it includes an AI element. Specific systems deemed high-risk are also listed in Annex III.
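For readers who prefer to see the classification logic laid out explicitly, the Article 6 test described above can be sketched in Python. This is a minimal illustration only: the class, field and function names are our own hypothetical labels, not terms drawn from the proposed regulation.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        # Hypothetical flags for the facts the Article 6 test asks about.
        annex_ii_product_or_safety_component: bool     # e.g. covered by the MDR or IVDR
        needs_third_party_conformity_assessment: bool  # notified body involvement required
        listed_in_annex_iii: bool                      # specifically designated high-risk

    def is_high_risk(s: AISystem) -> bool:
        # High-risk where the system is (or is a safety component of) an
        # Annex II product requiring third-party conformity assessment...
        if s.annex_ii_product_or_safety_component and s.needs_third_party_conformity_assessment:
            return True
        # ...or where it is specifically listed in Annex III.
        return s.listed_in_annex_iii

    # Most AI-enabled software qualifying as a medical device under the MDR
    # will satisfy both limbs of the first test.
    print(is_high_risk(AISystem(True, True, False)))  # True

In practice, then, the bulk of AI-enabled medical device software can be expected to fall on the high-risk side of the line.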

The proposed regulation provides that high-risk AI systems must be subject to extensive risk management and quality management systems, and a technical file must be produced, before they are CE-marked. Notified bodies will be enabled to assess conformity. Of interest to those in the UK, conformity assessment bodies in third countries may be authorised to carry out the activities of notified bodies under the regulation, so long as the Union has concluded an agreement with the third country concerned. Some requirements are of interest both for their own sake and for the ways they seek to resolve some of the more vexed questions on how a liability system can navigate the challenges of AI. For example, Articles 10-14 of the proposal make provision for high-risk AI systems to:

- be developed on the basis of training, validation and testing data sets that meet specified quality criteria (Article 10);
- be accompanied by up-to-date technical documentation (Article 11);
- allow for the automatic recording of events ('logs') while operating (Article 12);
- be sufficiently transparent to enable users to interpret and appropriately use their output (Article 13); and
- be designed and developed so that they can be effectively overseen by natural persons (Article 14).

The Commission states that the proposed minimum requirements are already state-of-the-art for many diligent operators and are the result of two years of preparatory work, derived from the Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (the Ethics Guidelines for Trustworthy AI), piloted by more than 350 organisations. It goes on to state that they are largely consistent with other international recommendations and principles, which ensures that the proposed AI framework is compatible with those adopted by the EU's international trade partners. The precise technical solutions to achieve compliance with those requirements may be provided by standards or by other technical specifications, or otherwise be developed in accordance with general engineering or scientific knowledge, at the discretion of the provider of the AI system. This flexibility is particularly important because it allows providers of AI systems to choose how to meet the requirements, taking into account the state of the art and technological and scientific progress in the field.

Article 60 envisages an EU database for stand-alone high-risk AI systems, with providers under an obligation to register their systems and enter various pieces of information about them that will be accessible to the public.

As regards enforcement, in cases of persistent non-compliance Member States are expected to take all appropriate measures to restrict or prohibit the high-risk AI system being made available on the market, or to ensure that it is recalled or withdrawn from the market. Non-compliance with the data and data governance requirements in Article 10 should not be taken lightly: it can lead to fines of up to EUR 30,000,000 or, if greater, up to 6% of a company's total worldwide annual turnover for the preceding financial year. Lesser penalties are envisaged for other instances of non-compliance and for the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities.
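As a minimal sketch of how that penalty ceiling operates under the stated figures (the function name and the example turnover are illustrative assumptions):

    def article_10_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
        # The greater of EUR 30,000,000 and 6% of total worldwide annual
        # turnover for the preceding financial year.
        return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

    # For a company turning over EUR 1bn, the ceiling is EUR 60m, not EUR 30m.
    print(article_10_fine_ceiling(1_000_000_000))  # 60000000.0

For smaller providers the fixed EUR 30,000,000 figure bites; for large multinationals the turnover-based limb quickly becomes the operative ceiling.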

One issue the proposal does not directly address is civil liability, though the explanatory memorandum states that initiatives that address liability issues related to AI are in the pipeline and will build on and complement the approach taken. It is worth taking a brief look at what might be expected in that regard.

EU initiatives on liability

Turning to the question of liability, medical device manufacturers and other stakeholders in the sector should be mindful of the European Parliament's resolution of 20 October 2020[4], in which the Parliament made recommendations to the Commission on a civil liability regime for AI. This will form a key strand in the bloc's approach to grappling with AI.

The recommendations included revision of the Product Liability Directive[5] to adapt it to the digital world, including clarification of the definitions of 'product', 'damage', 'defect' and 'producer'. The recommendations acknowledge that, by its very nature, AI could present significant difficulties to injured parties wishing to prove their case and seek redress. In order to address what could be seen as an inequality of arms, the Parliament made various proposals, including that in certain clearly defined cases the burden of proof should be reversed.

In common with the Commission's proposal, the Parliament's liability recommendations also made reference to high-risk AI systems, singling them out as suitable candidates for a standalone strict liability, compulsory insurance-backed compensation system. Under that system, the front-end and/or back-end operator of a high-risk AI system would be jointly and severally liable to compensate any party, up to EUR 2,000,000, where that party had been caused injury by a physical or virtual activity, device or process driven by the AI system. The operator could not exonerate themselves with a due diligence defence; only a force majeure-type defence would be available. Once the injured party had been compensated, the paying operator could seek proportional redress from the other operators based on the degree of control each exercised over the risk. In other words, apportionment would be dealt with between defendants later, once liability and any consequent compensation had been worked out with the injured claimant.
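The compensate-first, apportion-later sequence can be illustrated with a short Python sketch. The control percentages and claim figure below are invented for illustration; only the EUR 2,000,000 ceiling and the proportional-redress principle come from the resolution as described above.

    CAP_EUR = 2_000_000  # ceiling proposed in the Parliament's resolution

    def apportion_redress(claim_eur: float, control_shares: dict[str, float]) -> dict[str, float]:
        # The injured claimant recovers the capped payout first; joint and
        # several liability means any one operator can be pursued for the
        # whole of it. Redress between operators then follows the degree of
        # control each exercised over the risk.
        payout = min(claim_eur, CAP_EUR)
        return {operator: payout * share for operator, share in control_shares.items()}

    # Hypothetical figures: a EUR 3m claim, front-end operator with 70% control.
    print(apportion_redress(3_000_000, {"front-end": 0.7, "back-end": 0.3}))
    # {'front-end': 1400000.0, 'back-end': 600000.0}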

Through the Consumer Protection Act 1987 (the legislation implementing the Product Liability Directive in the UK), a strict liability regime covering defective products has of course operated in this jurisdiction for many years. Clearly there is much debate over whether that framework will remain fit for purpose as AI-based products evolve and proliferate in ever more varied and complex healthcare settings. Absent a contractual relationship between the patient and those responsible for the product incorporating AI, it also remains to be seen whether product liability claims will come to be viewed by claimants as a viable alternative to actions in tort. That said, adjustments to the core principles of negligence have of course been made before by the courts, if with some reluctance, to meet novel challenges arising in a complex litigation environment[6].

Stakeholders will watch with interest how the Commission's proposal meshes with any forthcoming instruments tackling liability.

Welcome first steps

The Commission's proposal is a welcome development, and the passage of the proposed regulation through the legislative process will be keenly observed globally. Notwithstanding that it will be some time before a future iteration of the proposal becomes law, it provides a concrete starting point from which to begin answering some of the many legal questions posed by AI.

In tandem with the Parliament's recommendations, the question of legal personality for AI, for example, appears to have been effectively sidestepped by focusing instead on AI systems and their operators. The proportionate approach of singling out high-risk AI systems for the greatest scrutiny is also a step in the right direction.

In the Medicines and Medical Devices Act 2021[7], the Secretary of State has at their disposal an enabling piece of primary legislation conferring extensive powers to make regulations fit for the digital age.

When making regulations under the relevant provisions, the Secretary of State must have in mind the overarching objective of safeguarding public health. As part of this, consideration must be given to whether the regulations would affect the likelihood of the United Kingdom being seen as a favourable place in which to carry out research relating to, develop, manufacture or supply medical devices[8].

With that in mind, all UK stakeholders will be keen to see sooner rather than later where they stand relative to those in the EU.
