EU Lawmakers' Committees Agree Tougher Draft AI Rules (reuters.com)
European lawmakers came a step closer to passing new rules regulating artificial intelligence tools such as ChatGPT, following a crunch vote on Thursday where they agreed tougher draft legislation. From a report: The European Union's highly anticipated AI Act looks set to be the world's first comprehensive legislation governing the technology, with new rules around the use of facial recognition, biometric surveillance, and other AI applications. After two years of negotiations, the bill is now expected to move to the next stage of the process, in which lawmakers finalise its details with the European Commission and individual member states.
Speaking ahead of the vote by two lawmakers' committees, Dragos Tudorache, one of the parliamentarians (MEPs) charged with drafting the laws, said: "It is a delicate deal. But it is a package that I think gives something to everyone that participated in these negotiations. Our societies expect us to do something determined about artificial intelligence, and the impact it has on their lives. It's enough to turn on the TV ... in the last two or three months, and every day you see how important this is becoming for citizens." Under the proposals, AI tools will be classified according to their perceived level of risk, from low to unacceptable. Governments and companies using these tools will have different obligations, depending on the risk level.
"by application" rules don't work for AGI (Score:2)
Regulators need to go back to the drawing board.
Re: (Score:2)
It sounds like they're categorizing risks correctly to me. Bing AI represents a different threat level if you're searching for a replacement for Imgur versus asking it whether someone has a criminal record. Screwing up in the first case is a mere annoyance, and although it's exploitable, it's questionable what could be achieved. Screwing up in the second case can wreck someone's life on the spot. The standards do need to be proportional to the risk as applied to the real-world use of the AI, not some id