EU lawmakers hold a crucial vote on Thursday in a step towards restricting how AI such as ChatGPT can be used in the European Union.
European Parliament committees will set out their position for upcoming negotiations with EU member states that aim to create a law to prevent abuses in the way artificial intelligence is used, while still giving room for innovation.
The bloc wants to be the global pioneer in regulating the technology, which has ignited public and corporate interest in the past few months.
Brussels' move towards that goal started two years ago with a European Commission proposal. EU member states settled on their negotiating position at the end of last year.
But the emergence since then of ChatGPT, Midjourney and other AI applications has greatly focused the parliament's attention on the issue, resulting in an avalanche of amendments that have to be considered.
Once the committees' vote is held on Thursday, the full European Parliament will have its say with a plenary vote next month.
"I think we are putting forward a very good and balanced text" that protects people while allowing innovation, said Brando Benifei, one of the lead MEPs on the text to be voted on Thursday.
- Double-edged sword -
While the promise of AI is vast, the technology is also a double-edged sword. It could save lives by advancing medical evaluations, for instance, or be used by authoritarian regimes to perfect mass surveillance.
For the general public, the arrival of ChatGPT at the end of last year provided a source of curiosity and fascination, with users signing on to watch it write essays and poems or carry out translations within seconds.
Image-generation AI such as Midjourney and DALL-E likewise sparked an online rush to make lookalike Van Goghs or a pope in a puffy jacket, while AI music sites have impressed with their ability to produce even human-like singing.
More nefariously, though, the technology carries great potential for fakery that can fool people and sway public opinion.
That has spurred Elon Musk and some researchers to urge a moratorium on AI development until legal frameworks can catch up.
The European Parliament's stance follows the main directions set out in the commission's proposal, which was guided by existing EU laws on product safety that put the onus of checks on the manufacturers.
The core of the EU's approach is to have a list of "high risk" activities for AI.
The commission suggests that designation should cover systems in sensitive domains such as critical infrastructure, education, human resources, public order and migration management.
Some of the proposed rules for that category would ensure human control over the AI, require technical documentation and put in place a system of risk management.
Each EU member state would have a supervising authority to make sure the rules are abided by.
Many MEPs, however, want to narrow the criteria for what constitutes "high risk" so that it only covers AI applications deemed to threaten safety, health or fundamental rights. Others, such as the Greens grouping, oppose that narrowing.
When it comes to generative AI such as ChatGPT, the parliament is looking at a specific set of obligations similar to those applied to the "high risk" list.
MEPs also want AI companies to put in place protections against illegal content and against the use of copyrighted works to train their algorithms.
The commission's proposal already calls for users to be notified when they are in contact with a machine, and requires image-producing applications to state that their output was created artificially.
Outright bans would be rare, and would only concern applications contrary to values dear to Europe -- for example, the kind of mass surveillance and citizen rating systems used in China.
The lawmakers want to add prohibitions on AI recognising emotions, and to get rid of exceptions that would allow remote biometric identification of people in public places by law enforcement.
They also want to prevent photos posted on the internet from being scraped to train algorithms unless the people concerned have given their authorisation.
aro/rmb/imm/smw