Tuesday, May 4, 2021

Regulating artificial intelligence: EU Proposal for a regulation laying down harmonised rules on artificial intelligence

In late April, the European Commission proposed new rules intended to help transform Europe into a global hub for artificial intelligence (AI).  Unfortunately, from my perspective, these proposed rules fall far short of the mark.  The authors of this report do not seem to have an adequate understanding of what AI does, how it does it, or how to evaluate it.  They make supportive comments about funding future AI research, but the bulk of the document is about regulating AI rather than supporting it, and they miss the mark on those regulations.

The report begins with four examples of practices that would be prohibited under the regulation.  These include systems that seek “to manipulate persons through subliminal techniques beyond their consciousness” or that “exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour,” as well as systems that would “classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics.”  They also highlight a prohibition against “the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.”

The first of these highlighted prohibitions has nothing to do with artificial intelligence and concerns a long-debunked practice in any case.  Subliminal messaging is the idea, introduced by James Vicary in 1957, that one could present messages so briefly that they fall below the threshold (the meaning of subliminal) of conscious detection, presumably so that the message could avoid filtering by the conscious mind and directly control behavior through the subconscious.  Vicary eventually admitted that he made up the original experiment on which this idea is based, and no one has ever demonstrated that the effect is real.  Subliminal messaging does not work and never has.  How artificial intelligence would be used to disseminate it is a mystery to me, as is the report’s focus on this pseudo-phenomenon.

Similarly, the second practice that they highlight, exploiting vulnerabilities of a person due to their age or mental disability, also does not materially involve artificial intelligence.  First, it is insulting to imply that age alone makes one cognitively disabled.  Second, it is hard to see what role artificial intelligence might play in taking advantage of such a disability.  Third, there are other laws already available to protect cognitively vulnerable individuals, and it is unclear why new laws, let alone new laws about artificial intelligence, are needed in this context.

Their third example also does not depend on artificial intelligence.  It prohibits the use of a social scoring system for evaluating the trustworthiness of people, in which the social score could lead to unjustified or disproportionately detrimental treatment.  One could use artificial intelligence, I suppose, to keep track of these evaluations, but they do not depend on any specific kind of computational system.  We have had creditworthiness scoring for far longer than we have had effective computational systems to implement it.

Finally, their fourth highlighted example seeks to outlaw the use of “real-time” remote biometric identification.  That may be a worthy goal in itself, but again, it does not require artificial intelligence to implement it.

Except for the subliminal messaging concern, one could argue that artificial intelligence makes these to-be-prohibited activities more efficient and, therefore, more intrusive.  One could employ teams of expert face recognizers, so-called “super recognisers,” to monitor a public space, but even their prodigious capabilities and the number of people they can monitor simultaneously are limited relative to a machine’s.  I don’t think that we know how the accuracy of these super-recognisers compares with that of machines, but the machines never need breaks and never take vacations.  But that is not the point of the proposed regulation; the real issue is to control surveillance, not the technology.  My point is that it is possible, and I think preferable, to make laws directly about prohibited activities rather than to try to regulate them through regulation of the technology.

Every form of communication has the potential to exploit people’s vulnerabilities, sometimes without their noticing it.  Influencing people’s behavior through communication is often called advertising.

Another problem with this report is its definition of artificial intelligence.  The authors note that the “notion of AI system should be clearly defined to ensure legal certainty.”  They then provide a definition in Annex I that is so broad that it would encompass every form of computer programming and statistical analysis.  It includes machine learning, logic and knowledge-based approaches, knowledge representation, knowledge bases, deductive engines, statistical approaches, and search and optimization methods.  That’s just about anything done with a computer.

All programming can be categorized as using logic methods.  All forms of knowledge must be stored in some kind of structure, which could be called a knowledge base.  Every form of statistical analysis is necessarily a statistical approach.
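To see just how broad that is, consider a minimal sketch (the function, rules, and thresholds below are purely hypothetical illustrations of my own, not taken from the proposal or any real system): a few lines of ordinary if/else programming that could plausibly be read as a “logic and knowledge-based approach” with a “knowledge base” and a “deductive engine,” even though no one would call it artificial intelligence.

```python
# Hypothetical example: a trivial rule-based eligibility check.
# The RULES dictionary plays the role of a "knowledge base" and the
# if/else logic that of a "deductive engine" (terms from Annex I),
# yet this is just ordinary programming, not AI in any meaningful sense.

RULES = {"minimum_age": 18, "minimum_income": 20_000}  # made-up thresholds

def loan_eligible(age: int, annual_income: float) -> bool:
    """Return True only if the applicant satisfies every rule."""
    if age < RULES["minimum_age"]:
        return False
    if annual_income < RULES["minimum_income"]:
        return False
    return True

print(loan_eligible(age=30, annual_income=45_000.0))  # True
```

If a snippet like this falls within the definition, then the definition is doing no real work in separating artificial intelligence from software in general.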

There are many other issues with these proposed regulations, but I want to focus on just one more here.  The authors intend for the proposed legislation to make sure that AI can be trusted.  But there is actually very little here on trusting AI, and much of what there is concerns trusting the people who develop and use the software.  The software is only as good as the uses to which it is put by the people who deploy it.  The report discusses several general principles with which AI usage should comply, but again these principles concern the use of the software, rather than the software itself.  The closest they come, I think, to suggesting how the trustworthiness of software can be assessed is this: “For high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI and that are not covered by other existing legal frameworks.”  Other than saying that artificial intelligence systems should be trustworthy by complying with EU standards for privacy and so on, they offer almost nothing about how that trustworthiness could be effectively assessed.  They argue (correctly, I agree) that the use of such systems should not disadvantage any protected groups, but they do not offer any suggestions for how that disadvantage might be identified, and so they offer no advice about how it might be mitigated.  If they want to build public trust in the use of AI and provide enough legal certainty to promote the development of cutting-edge technology (which the regulation is intended to support), then they should provide more advice about how to determine the compliance of any system, along with its human appliers, with the principles that are so widely articulated.

The problem with this report is not any antipathy toward artificial intelligence, but rather a lack of understanding of the problems that the government really does need to solve and the means by which to solve them.  As it sits, its definition of artificial intelligence includes just about anything that can be done using rules, programs, automation, or any other systematic approach.  It seeks to regulate things that are unrelated to artificial intelligence.  I don’t argue that those things should not be regulated, but they should be regulated for their own consequences, not for the means by which they are implemented. 

The key is to find a balance between the risks and benefits of any process.  How can the process be used for good while protecting the people who will use the system or be subjected to it?  How can we support innovation while providing protections?  One’s obligations to society do not depend on the technology used to identify or meet them.  On the other hand, it is very clear that artificial intelligence and innovation in artificial intelligence are likely to be huge economic drivers in the coming years.  If the EU, any of its member states, or any other country in the world is to succeed in that space, it will require government support, encouragement, and, again, a balance between opportunity and responsibility.

Artificial intelligence in this proposal is a kind of MacGuffin.  It is a “plot device” that brings together several disparate human ethical and legal responsibilities.  But it is the humans’ behavior that is regulated by law.  This proposal would benefit from a more clear-eyed analysis of just how that behavior should be guided for the benefit, rather than the detriment, of society.
