AIOTI WG Policy and Strategies has published its views on the Proposal for a Regulation laying down rules on Artificial Intelligence (AI Act).
The full text can be found here.
The main points of the document are:
General Purpose AI & definitions
AIOTI does not see the need for the AI Act to have a specific section on General Purpose AI (GPAI). The Commission’s original text should stand, in order to maintain the risk-based approach. AI systems which can be used in high-risk scenarios are already covered in the Act. Since GPAI has no defined purpose, explicitly covering or excluding it would contradict the risk-based approach and would assume that all GPAI systems are used for high-risk applications.
According to the Commission’s own analysis, if users need information from GPAI providers to ensure compliance, these providers have a commercial interest in supplying their users/clients with all the information requested so that they can become compliant. This idea that the market will take care of such cooperation is reflected in Recital 60.
As proposed by MEP Tudorache in the draft IMCO-LIBE report, we suggest fine-tuning the allocation of responsibilities and clarifying that the entity which decides on, or gives, the intended purpose of an AI system becomes the provider. The provider then makes use of the training data and development information received from the GPAI provider in the subsequent risk assessment process. Importantly, this allocation should be modifiable through contracts.
Definition of AI System
AIOTI proposes a revision of this definition to ensure that 1) it includes a necessary element of “autonomy” and “intelligent behaviour” in decision-making, 2) it does not include widely used statistics and optimization methods, and 3) it remains future-proof, allowing for the inclusion of technological approaches that cover more powerful forms of AI in the future. In particular, we recommend using the definition proposed by the High-Level Expert Group on AI, focusing on AI techniques that display intelligent behaviour and take actions with some degree of autonomy (Annex I, part a). The definition of AI systems should also refer to a “human-defined intended purpose” rather than “human-defined objectives”, in order to ensure consistency with the rest of the AI Act.
High Risk Products
According to AIOTI, the proposed classification rules for high-risk AI should be redefined to ensure consistency with the sectoral legislation in Annex II, thus regulating only high-risk AI applications in areas where a clear regulatory gap has been demonstrated, without extending beyond safety components and safety-relevant software products. Furthermore, Article 6 should not impose third-party conformity assessment as a criterion for all systems classified as high-risk, given that this would undermine the development of innovative and beneficial AI techniques that grant EU manufacturers a competitive edge, particularly in our sector, where highly customized solutions are often commercialized. We also propose that not all household appliances be considered high-risk, but only those which use personal data. We therefore underline that mere control of digital infrastructure should not be considered a high-risk activity and should therefore not be included in Annex III of this regulation.
Harmonised Standards vs. Common Specifications
AIOTI strongly recommends that harmonised standards be formulated with the active participation of industry, particularly SMEs, to ensure market relevance and technical quality and to avoid a “one-size-fits-all” approach. Finally, the European Commission’s power to introduce common specifications via implementing acts should be subject to strict and unambiguous conditions.