Defining Rules For The Never Closing Eye
Submitted by wnylibrarian on Thu, 04/22/2021 - 21:47
Technology is always ahead of the law. Engineers ask, "Can I?" Lawyers and other stakeholders ask, "What does this mean?" I wouldn't be surprised if this takes a path similar to end-to-end encryption for messaging. People want privacy. Governments say they want it too, just as long as they have their own back door. But as soon as a back door is created, the system fails, because a way to hack it has been purposefully built in. The same will be true of facial recognition. Note the quote: "Already, campaigners are voicing disappointment with a final Commission draft many of them say is too friendly to industry, and gives governments too wide a berth to use AI for surveillance."
Another important takeaway:
"Europe's proposal includes bans on practices that “manipulate persons through subliminal techniques beyond their consciousness” or exploit vulnerable groups such as children or people with disabilities. Other practices that are banned are government-conducted social scoring, which is a system introduced by China to measure an individual's trustworthiness."
- Europe throws down gauntlet on AI with new rulebook (Politico).
- Europe Proposes Strict Rules for Artificial Intelligence (New York Times).
- Europe lays out plan for risk-based AI rules to boost trust and uptake (TechCrunch).
- EU artificial intelligence rules will ban 'unacceptable' use (BBC).
- Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence (European Commission).