AI-focused EY and Luxinnovation event took place in Luxembourg
Photo by A. Mochkin

The event “AI in Tech: What’s Next?” hosted by Luxinnovation and EY not only provided a platform for discussions on the current state and future of AI technology but also included an informative overview of the proposed EU regulations for AI. This overview was particularly significant in light of the European Union’s approach to managing the risks associated with AI technologies.

A key aspect of the proposed EU regulation is the introduction of a risk-based approach to AI systems. This approach categorizes AI systems into four groups:

  1. Unacceptable-Risk (Prohibited) Systems: AI systems whose risks are deemed too high to tolerate, and whose use will therefore be prohibited.
  2. High-Risk AI Systems (HRAIS): This category includes AI systems used in critical areas such as biometric identification, management of critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, administration of justice, and democratic processes.
  3. AI Systems with Specific Transparency Obligations: These systems, such as chatbots or generators of synthetic content, must make it clear to users that they are interacting with an AI system or viewing AI-generated content.
  4. Minimal or No-Risk AI Systems: Systems that pose little to no risk and are subject to minimal regulation.
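
To make the categorization more concrete for a technical audience, the sketch below models the four tiers as a simple Python enum with a toy triage function. The tier names, the `HIGH_RISK_AREAS` set and the `classify` helper are illustrative assumptions for this article only; under the proposed regulation, classification follows the legal text and its annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the proposed EU risk-based approach."""
    UNACCEPTABLE = "prohibited"                    # e.g. harmful subliminal manipulation
    HIGH = "high-risk (HRAIS)"                     # e.g. biometric identification, critical infrastructure
    LIMITED = "specific transparency obligations"  # e.g. systems that must disclose they are AI
    MINIMAL = "minimal or no risk"                 # subject to little or no regulation

# Hypothetical shorthand for the high-risk areas named in the proposal.
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def classify(use_case: str, subliminal_harm: bool = False) -> RiskTier:
    """Toy triage of a use case into a risk tier; for illustration only.
    (Detection of the transparency tier is omitted here for brevity.)"""
    if subliminal_harm:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("law enforcement"))       # RiskTier.HIGH
print(classify("music recommendation"))  # RiskTier.MINIMAL
```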

The proposed regulation is particularly stringent with High-Risk AI Systems (HRAIS). Because these systems operate in sensitive areas, they require careful oversight. The full list of areas covers biometric identification and categorization of natural persons, management and operation of critical infrastructure, education and vocational training, employment and workers management, access to and enjoyment of essential private and public services and benefits, law enforcement, migration, asylum and border control management, administration of justice, and democratic processes.

Additionally, the EU plans to prohibit AI systems that are capable of subliminal manipulation resulting in physical or psychological harm to individuals. This is a significant step in ensuring the ethical use of AI.

For operators of AI systems, particularly those classified as high-risk, the proposed regulations set out several obligations, including:

  1. Establishing and implementing a quality management system within their organization.
  2. Drawing up and keeping up-to-date technical documentation.
  3. Ensuring that the system keeps logs of its operation so that users can monitor how the high-risk AI system behaves.
  4. Undergoing a conformity assessment, and potentially reassessment in case of significant modifications to the system.
  5. Registering the AI system in an EU database.
  6. Affixing the CE marking and signing a declaration of conformity.
  7. Conducting post-market monitoring.
  8. Collaborating with market surveillance authorities.
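
As a rough illustration of how a provider might track these duties internally, the sketch below lays them out as a Python checklist. The class and field names are hypothetical and carry no legal meaning; they simply mirror the eight obligations listed above.

```python
from dataclasses import dataclass

@dataclass
class HraisComplianceChecklist:
    """Hypothetical internal tracker mirroring the obligations listed above."""
    quality_management_system: bool = False   # 1. quality management system in place
    technical_documentation: bool = False     # 2. technical documentation drawn up and kept current
    logging_enabled: bool = False             # 3. operation logs recorded for monitoring
    conformity_assessment: bool = False       # 4. conformity (re)assessment completed
    eu_database_registration: bool = False    # 5. system registered in the EU database
    ce_marking_and_declaration: bool = False  # 6. CE marking affixed, declaration of conformity signed
    post_market_monitoring: bool = False      # 7. post-market monitoring in place
    authority_cooperation: bool = False       # 8. cooperation with market surveillance authorities

    def outstanding(self) -> list[str]:
        """Return the obligations that are not yet satisfied."""
        return [name for name, done in vars(self).items() if not done]

checklist = HraisComplianceChecklist(technical_documentation=True, logging_enabled=True)
print(checklist.outstanding())  # remaining items to address before placing the system on the market
```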

These measures are designed to ensure that high-risk AI systems are safe, transparent, and compliant with EU standards, thereby protecting users and the public from potential harms associated with AI technologies. The event’s focus on these regulations highlights the growing importance of ethical considerations and safety in the development and deployment of AI systems.

Experts and industry stakeholders have suggested various ways to mitigate the impact of what they perceive as over-regulation:

  • Balanced Approach
    Advocating for a more balanced approach that protects consumer and environmental interests without unduly burdening businesses. This could involve more stakeholder consultations and impact assessments before implementing new regulations.
  • Harmonization and Simplification
    Streamlining and harmonizing regulations across different EU countries to reduce complexity and compliance costs for businesses operating in multiple EU states.
  • Support for Innovation
    Providing more support and incentives for innovation, especially for small and medium-sized enterprises (SMEs), to help them adapt to and comply with regulatory requirements.
  • Regulatory Sandboxes
    Creating regulatory sandboxes where businesses can test new technologies and business models in a controlled environment with regulatory oversight but without the full burden of regulatory compliance.
  • Regular Review and Adaptation
    Regularly reviewing existing regulations to assess their impact and effectiveness, and adapting them as necessary to keep pace with technological and market developments.

The event also included a panel discussion on the future of AI technology. The panelists discussed the potential impact of AI on the economy, society, and the environment. They also explored the ethical implications of AI and the need for regulation to ensure that AI systems are safe and beneficial to society.
