Life sciences A to Z - A is for Artificial Intelligence

The use of AI within the life sciences industry is a hot topic, with many excited about its potential applications, but how will its use be regulated? The European Commission recently adopted a proposal for the first legal framework on AI (the Regulation), which would impose obligations on businesses across multiple sectors, including life sciences. Given its potential impact, it’s definitely one to watch as it goes through the legislative process.

Definition of AI

The Regulation currently defines AI systems as “software that is developed with one or more of [certain] approaches and techniques . . . and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Annex I of the Regulation (which the Commission can update periodically) lists the techniques and approaches caught by the definition, including machine learning, logic- and knowledge-based approaches, and statistical approaches. The list appears purposefully broad, to catch as many systems as possible.

Risk-based approach

The Regulation aims to ensure a level of trust in AI systems that are made available and, accordingly, adopts a risk-based approach:

  • Unacceptable risk: AI systems considered to pose a clear threat to individuals’ safety, livelihoods or rights will be banned – for example, applications that manipulate human behaviour in a way that circumvents users’ free will, or systems that allow ‘social scoring’ by governments.
  • High-risk: several categories of AI systems are identified as high-risk. Those most likely to affect the life sciences industry include: certain safety components and products (for example, in medical devices); administration of justice (determining how the law applies to a set of facts, which may cover systems designed to assist with medical regulatory processes); and remote biometric identification and categorisation of people (for example, patient identification).
  • Limited risk: AI systems subject to specific transparency obligations – for example, chatbots will be required to make users aware that they are interacting with a machine.
  • Minimal risk: for example, AI-enabled video games or spam filters. Most AI applications are expected to fall into this category, posing minimal or low risk to individuals’ rights or safety, although providers of such systems are encouraged to comply on a voluntary basis.

Obligations for high-risk systems

AI systems that fall into the high-risk category will need to comply with strict obligations before being placed on the market, including:

  • Conducting adequate risk assessments (including conformity assessments and registration procedures) and putting mitigation systems, such as quality management systems, in place;
  • Using high-quality data sets to limit risk and avoid discriminatory outcomes;
  • Logging activity to ensure traceability of results;
  • Maintaining detailed documentation of the system and its purpose, including technical documentation, logs generated by the system and evidence of compliance with the registration obligations;
  • Providing clear and adequate information to the user;
  • Implementing appropriate human oversight measures to minimise risk; and
  • Ensuring a high level of robustness, security (including cyber security) and accuracy.

The requirements will apply to providers of high-risk AI systems and to manufacturers of products that incorporate high-risk AI systems (the latter being responsible as if they were the provider). Distributors, importers, users and other third parties will be subject to the provider’s obligations if they place a high-risk AI system on the market or put it into service under their own name or trade mark, or if they modify an existing system; in either case, the original provider will be relieved of its responsibility.

Businesses in the life sciences sector will find many of these obligations and procedures familiar given their experience of the regulation of medicinal products and, in particular, medical devices, but the Regulation will certainly add to the regulatory burden within the sector.

Supporting AI

Whilst new regulation and procedural requirements risk stifling innovation, the proposal contains measures intended to encourage and support it. The Regulation includes provisions for regulatory sandbox schemes, a reduced regulatory burden for small and medium-sized enterprises and start-ups, and the creation of digital hubs and testing facilities.

Given the broad definition of AI systems, the Regulation will have a significant impact across all sectors. It still has to pass through the legislative process and, given the difficulty of adopting clear definitions in this area, businesses will have to wait and see how the principles set out in the Regulation are enforced in practice. In some ways, players in the life sciences sector may have an advantage: regulatory burdens are nothing new to them.