Blog: Shaping Europe’s Digital Future

9 Apr 2020

On 19 February, the Commission released the latest components of “Shaping Europe’s Digital Future” – its five-year regulatory plan to steer the digital transformation towards human-centred technology, a fair and competitive economy and an open and democratic society.

“We want to achieve tech sovereignty […] or the capability for Europe to make its own choices, based on its own values, respecting its own rules,” concluded Commission President Ursula von der Leyen when addressing the digital future of the European Union.

1. Paving the way towards a data-based economy

With its latest “European strategy for data”, the EU hopes to become a worldwide leader in data by creating a more open data economy, built around the idea that non-personal data should be available to society as a whole. EU regulators are confident of reaching that goal by harnessing the industrial and non-personal data produced by European companies. Together with other forthcoming initiatives such as the “Data Act”, through which the EU will regulate the sharing of business-to-business as well as business-to-government data, the Commission intends to create an effective single market for data – “a genuine single market for data, open to data from across the world – where personal as well as non-personal data, including sensitive business data, are secure and businesses also have easy access to an almost infinite amount of high-quality industrial data […].” Concretely, the European data strategy entails the following:

  • A cross-sectoral governance framework for data access and use; such a framework would focus primarily on facilitating cross-border data use, prioritizing interoperability requirements and setting standards within and across sectors.
  • The opening of key public-sector data sets, in particular in view of their potential for SMEs;
  • Further legislative action on issues affecting relations between actors in the data economy. This would be achieved by providing incentives for horizontal data sharing, in particular on questions of usage rights over co-generated data laid down in private contracts (such as IoT data). Here, the rules will clarify who bears legal liability for this type of data.
  • The delimitation of circumstances where access to data should be made compulsory.

 

2. Building a new framework for AI on the pillars of “Trust and Excellence”

The white paper on Artificial Intelligence represents the first draft of what is expected to become a legislative proposal on AI at the European level. While the white paper is not yet a legislative framework, it already gives a concrete idea of the Commission’s future objectives and main considerations for a “European approach of trust and excellence”, and it emphasizes the crucial role of its “Ethics Guidelines for Trustworthy Artificial Intelligence” in both of these pillars.

Those guidelines, although a non-binding instrument, laid out seven core principles that AI systems should comply with to be considered trustworthy:

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental wellbeing
7. Accountability

A number of these requirements are already covered by the existing legislative framework: AI is already subject to a large body of EU legislation, from fundamental rights such as privacy, non-discrimination and data protection, to consumer protection and product safety and liability rules, as well as national legislation. However, the very breadth of this body of legal instruments makes those rules difficult to enforce. A modern, adapted framework is therefore required to overcome the unpredictable nature and opaqueness of AI and to assess the risks that AI systems create.

In practical terms, the white paper sketches a future regulatory framework for AI and highlights, among others, the following key objectives:

A) Delimiting the definition of AI and the scope of the framework:

According to the Commission, the final definition of AI should be sufficiently flexible to accommodate technical progress while being precise enough to provide the necessary legal certainty. With that in mind, the paper recalls the latest AI definition refined by the High-Level Expert Group on AI: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions.” For future clarification purposes, the emphasis should be on the principal elements of this definition, i.e. data and algorithms.

Moreover, considering the changing functionality of AI systems, the scope of the framework should be extended to cover both products and services, as current EU safety legislation only applies to products. It should also address safety risks that are present not only when a product is placed on the market but also later in the life cycle of the AI product or service, since the technology keeps evolving.

B) Adopting a risk-based approach:

The new regulatory framework needs to be effective without being so prescriptive that it creates a disproportionate burden, especially for SMEs. To achieve this, the Commission has set out clear criteria for determining what counts as a high-risk AI application. Specifically, an application will be considered “high-risk” if it meets two cumulative criteria:

  • the AI application is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur. As examples, one can already cite the sectors of healthcare, transport, energy and parts of the public sector.
  • the AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise. This means that not every use of AI in the selected sectors necessarily involves significant risks; rather, the assessment of the level of risk of a given use would be based on its impact on the affected parties. Examples include applications that produce legal or similarly significant effects on the rights of an individual or a company; that pose a risk of injury, death or significant material or immaterial damage; or that produce effects that cannot reasonably be avoided by individuals or legal entities.

 

Those high-risk situations will be accompanied by mandatory requirements in accordance with the aforementioned criteria. Yet the Commission also identifies situations where AI applications will be considered high-risk as such. In other words, a sector-independent criterion would apply to uses that are high-risk due to their impact on individuals or organizations, and the mandatory requirements would then apply in any case. Concrete examples include AI applications used in recruitment processes or otherwise impacting workers’ rights; applications affecting consumer rights and consumer protection; and applications involving intrusive surveillance technologies and biometric identification.
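To make this classification logic tangible, here is a minimal, purely illustrative Python sketch of our reading of the white paper: two cumulative criteria, plus sector-independent uses that are high-risk as such. The sector and use lists below are only the examples cited above; the actual lists, and the risk assessment itself, would be defined in the future regulatory framework.

```python
# Illustrative only - the sector list would be set out and periodically
# revised in the future regulatory framework.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

# Uses the white paper flags as high-risk regardless of the sector
# (hypothetical labels for the examples cited in the text).
ALWAYS_HIGH_RISK_USES = {
    "recruitment",
    "workers_rights",
    "consumer_protection",
    "intrusive_surveillance",
    "biometric_identification",
}

def is_high_risk(sector: str, use: str, significant_risk: bool) -> bool:
    """Return True if an application would fall under the mandatory requirements.

    Either both cumulative criteria are met (a listed sector AND a manner of
    use likely to create significant risks), or the use belongs to a category
    considered high-risk as such.
    """
    cumulative = sector in HIGH_RISK_SECTORS and significant_risk
    return cumulative or use in ALWAYS_HIGH_RISK_USES

# A triage tool in healthcare whose output affects patients meets both
# cumulative criteria; a recruitment screener is high-risk in any sector.
assert is_high_risk("healthcare", "triage", significant_risk=True)
assert is_high_risk("retail", "recruitment", significant_risk=False)
assert not is_high_risk("retail", "shelf_stocking", significant_risk=False)
```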

The Commission issued a list of the requirements that will be obligatory for all high-risk situations, as follows:

  • Training data: Without data there is no AI, as AI systems depend on the data sets on which they are trained. The requirements on those data consist of providing reasonable assurances and adequate standards for safe use (i.e. that the data sets are sufficiently broad and cover dangerous scenarios), ensuring adequate control of the different outcomes to avoid discrimination, and ensuring that privacy is protected.

 

  • Keeping of records and data: Requirements are called for regarding the keeping of records in relation to the programming of the algorithm and the data used to train high-risk AI systems, and, in certain cases, the keeping of the data themselves. Such requirements aim at improving compliance and the traceability of the data used.

 

  • Provision of information: Transparency is required in order to achieve the objectives pursued, especially responsible use, building trust and facilitating redress. For that purpose, the conditions, capabilities and limitations of AI systems have to be disclosed, and citizens should be aware at all times when they are in fact dealing with an AI system. That information needs to be objective, concise and understandable.

  • Robustness and accuracy: High-risk applications will be required to provide a superior level of technical robustness and to be developed in a responsible manner, i.e. with due ex-ante consideration of the potential risks they might generate. The following elements could be considered:
    • The AI system is accurate and robust
    • The outcome is reproducible
    • The AI system can deal with inconsistencies throughout its life cycle
    • The AI system is resilient, and mitigating measures are taken against attacks and attempts to manipulate the data

 

  • Human oversight: This requirement aims to guarantee that the AI system does not undermine human autonomy. The main requirement is for the AI system to be human-centred, as human oversight of the system helps assure trustworthy and ethical solutions. Human intervention needs to be appropriate and will be required either prior to any application, as a certification measure, or after the application has begun, as a monitoring requirement. It will depend in particular on the intended use of the system and the effects that the use could have on affected citizens and legal entities.

 

  • Specific requirements for remote biometric identification: The gathering and processing of biometric data for remote identification purposes carries specific risks for fundamental rights. As an example, the paper cites the deployment of facial recognition in public places. Under the GDPR, such processing can only take place on a limited number of grounds, the main one being reasons of substantial public interest.

In accordance with the current EU data protection rules and the Charter of Fundamental Rights, AI systems will therefore be subject to requirements derived from the principles of proportionality, effective data protection and appropriate safeguards. In principle, such biometric identification requires strict necessity and must be authorized under national or EU law.

Moreover, on 21 February, the European Data Protection Supervisor (EDPS) identified the challenges and opportunities that AI and facial recognition might bring to our society. The EDPS emphasizes the need to respect the precautionary principle, and recalls that it may even justify a ban or a temporary freeze of such technology if its impact on the rights and freedoms of individuals cannot be effectively controlled. Potential risks in that regard include the very intrusive practice of identifying one individual among many others through facial recognition, and the fact that poor-quality data sets can result in bias or discrimination. According to the EDPS, clear red lines need to be defined for all identified high-risk scenarios, especially when “the essence of rights and freedoms of individuals may be at stake”.

With regard to compliance and the effective enforcement of those mandatory requirements, the Commission distinguishes between two situations:

For high-risk situations, it has underlined the need for an objective, prior conformity assessment. Concretely, this would form part of the conformity assessment mechanisms that already exist at the European level and would include procedures for the testing, inspection and certification of AI systems.

As an example, the paper mentions checks of the algorithms and of the data sets used in the development phase. In that regard, operators should note that not all mandatory requirements can be verified in that prior assessment (e.g. the requirement to provide information to the public). Depending on the nature of the AI system and its capability to learn and evolve, the assessment might also need to be repeated regularly throughout the lifetime of the AI system.

The conformity assessment would be mandatory for all economic operators addressed by the requirements, regardless of their place of establishment.

For other situations (i.e. non-high-risk situations), which are therefore not subject to those mandatory requirements, the Commission proposes a voluntary labelling scheme.

Concretely, economic operators could decide to make themselves subject, on a voluntary basis, either to those requirements or to a specific set of similar requirements established especially for the purposes of the voluntary scheme. They would in turn receive a quality label signalling that their AI system is trustworthy. A point of attention for interested operators: while participation in the labelling scheme would be voluntary, the requirements would become binding once the developer or deployer opts to use the label.

C) Covering risks that arise with regard to product safety and liability:

The Commission has developed concepts and proposed changes so that obligations fall upon “the actor(s) who is (are) best placed to address any potential risks.” Learn about those new concepts in our latest LinkedIn post.

Moreover, the liability question could also be approached from the point of view of EU product liability law, under which liability for a defective product is attributed to its producer. Currently, the Directive on liability for defective products is the harmonized instrument at the European level. In that regard, one may ask whether that Directive, now over 30 years old, should be adapted to the digital era. At the beginning of the year, several industry and consumer organisations (BEUC, CLEPA, Orgalim) accordingly issued a number of recommendations to the Commission for adapting the directive:

  • the definitions of “product” and “producer” should be adapted and clarified in order to determine who the producer is when a product goes through multiple updates, repairs, modifications, etc. throughout its life cycle.
  • the types of compensable damage should be expanded to include not only physical and material damage but also damage to data and digital assets.
  • A no-fault liability regime should be discussed, as BEUC has been recommending that all actors involved in the manufacture of a product be jointly liable.
  • The burden of proof might shift from consumers to producers. This is justified by the fact that the current opacity of AI makes it very difficult to prove a defect of the product and a causal link between that defect and the damage.

 

More recently, the Expert Group on Liability and New Technologies published its report “Liability for Artificial Intelligence and other emerging digital technologies”. This report sets out the liability measures to be applied to AI operators when they place their products on the market. We identified 10 of its most crucial points:

1. A person operating an authorized technology that nevertheless carries an increased risk of harm to others, for example AI-controlled robots in public spaces, should be subject to strict liability for damage resulting from its operation.

2. A person using a technology that does not present an increased risk of harm to others should nevertheless be required to comply with obligations to properly select, operate, monitor and maintain the technology used and, failing that, should be liable for breach of those obligations where at fault.

3. A person using a technology which has a certain degree of autonomy should not be less liable for the resulting harm than if that harm had been caused by a human auxiliary.

4. Manufacturers of products or digital content incorporating an emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by modifications made to the product under the control of the producer after it was placed on the market.

5. In situations where a service provider supplying the necessary technical framework exercises a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology.

6. For situations exposing third parties to an increased risk of damage, compulsory liability insurance could give victims better access to compensation and protect potential offenders against the risk of liability.

7. Where a particular technology increases the difficulty of proving the existence of an element of liability beyond what can reasonably be expected, victims should be entitled to facilitation of proof.

8. Emerging digital technologies should be equipped with recording functions, where circumstances warrant, and the absence of recording or reasonable access to recorded data should result in a reversal of the burden of proof so as not to prejudice the victim.

9. The destruction of the victim’s data should be considered as damage, which is compensable under specific conditions.

10. It is not necessary to confer legal personality on stand-alone devices or systems, as the damage they may cause can and should be attributable to existing persons or bodies.

It seems likely that several of these recommendations will be given effect, notably those regarding the types of compensable damage, an easier burden of proof for consumers and long-term liability for evolving products.

D) Shift in the EU’s digital ambitions due to the COVID-19 pandemic

On 19 February, the Internal Market Commissioner, Thierry Breton, and the Executive Vice President of the European Commission for a Europe Fit for the Digital Age, Margrethe Vestager, stated in a joint press conference that the ambitions related to the EU’s key digital files have changed. Confronted with a radically changed situation, the EU institutions are mobilized to manage the crisis and mitigate the risks. Consequently, the EU’s digital priorities have slid down the list, as shown below:

  • AI White Paper: The EU is still in the process of collecting feedback on the White Paper. The plan is to run a large public consultation event in mid-September, if circumstances permit.
  • Digital Services Act: The Executive Vice President, Margrethe Vestager, emphasized that the new rules for gatekeeping platforms have become a pressing issue. There is, however, a degree of uncertainty as to how much of a setback the plans for regulating online platforms could face, with the Commission’s public consultation on the Digital Services Act already postponed. The consultation had been due to go live at the end of March.
  • Data strategy: The Commission’s initial plan was to create a single market for data through sector-specific “data spaces”. The legislative framework for data spaces was expected to be presented at the end of 2020. A public consultation with relevant stakeholders is ongoing, and the original deadline (31 May) is maintained despite requests for extension from some lobbyists. Importantly, the pandemic has demonstrated the importance of sharing data across the bloc and has given business-to-government data sharing a major PR boost.
  • GDPR review: The Commission’s highly anticipated two-year evaluation of the GDPR was scheduled to be submitted to the Council and the Parliament on 22 April. This timeline has been cancelled, with no new date set.

 

Additionally, on 15 April 2020 a leaked document on the updated European Commission work programme showed that many legislative files, including the Artificial Intelligence initiatives initially foreseen for 2020, have been postponed due to the ongoing COVID-19 crisis. More concretely, the AI package includes several initiatives, the timeline of each being revised as follows:

  • Delegated acts under the Radio Equipment Directive (fraud, privacy and data protection, connected devices) – by end 2020;
  • The horizontal initiative on Artificial Intelligence from DG CNECT – postponed to Q1 2021;
  • The review of the Machinery Directive, the review of the General Product Safety Directive and the liability initiative (GROW/JUST) – postponed to 2021;
  • Similarly, the e-Commerce Directive and the initiative on ex ante regulation of platforms are proposed to be moved to Q1 2021 to align them with the timing of the adoption of the European Democracy Action Plan.

 

To conclude, Europe is clearly taking the path of developing a data-driven economy. Regarding the emerging technologies that use those data, the white paper leaves room for anticipation: it might be beneficial for organizations working with AI to start implementing this potential regulatory framework early. The latest report on liability for AI shows, in turn, that AI will require an adapted framework to ensure fair protection for consumers on the European market.

Under the current pandemic, the immediate reflex will be to turn to AI to help tackle the crisis, which simultaneously raises vehement warnings about the adverse impact it may have. Coordinating AI at the EU level would ensure a level playing field of protection and advantages for European citizens, while leveraging Europe’s strengths on the global market. On the other hand, the COVID-19 crisis will likely expose several key shortfalls of AI. If used wisely, AI has the potential to exceed humans not only in speed but also in detecting patterns in data. However, AI requires a lot of data, with relevant examples in that data, in order to find those patterns. The new EU agenda on AI does not necessarily have to be reinvented, but it will require modifications and adjustments. The EU is now confronted with the major challenge of finding a balanced solution, based on a human-centred approach to AI, that ensures the protection of citizens and a safe environment.