The EU’s artificial intelligence act, reviewed
European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD))
When fear is generalized, the Law responds. The fear was triggered by what we used to call « artificial intelligence », but more recently generative AI, and more precisely, large language models (LLMs), and even more precisely, generative pretrained transformers (GPT). The Chat version of GPT — the most sophisticated « conversational agent » ever designed — went public in November 2022, occasioning ethical challenges and worldwide panic. Would humankind be wiped out by ever more intelligent machines, as a collective letter signed by many of the giants of Tech (Elon Musk included) warned us? The letter called for a moratorium on the quest for the absolute GPT, the Artificial General Intelligence (AGI) capable of doing anything with an incomprehensibly suprahuman accuracy and speed. Spectacularly, some key figures in Artificial Intelligence, like Geoffrey Hinton of the University of Toronto (one of the « three horsemen of Deep Learning », together with his fellow Turing Award laureates Yann LeCun of Meta and Yoshua Bengio of the University of Montreal), stood up against AGI, substantiating the fears of the less computationally educated. Predictions of mass job loss loomed, even in those supposedly intelligent sectors (law, accounting, etc.) where workers thought themselves immune.
At the same time, on the other side of the street, technophiles rose, claiming that with GPT and its cousins we hold the key to fighting climate change, and could therefore sustain our addiction to economic growth. « Smart Spraying » to the rescue! — computing the precise dose of pesticide each crop requires. In January 2023, Microsoft invested ten billion dollars in the disruptive startup OpenAI (creator of ChatGPT); Google and Meta boosted research on their own GPT-style AIs (Yann LeCun’s reservations notwithstanding — Meta even postponed its supposedly major Metaverse project). Image- and video-generation accelerated, too. Since November 2022, more than a thousand academic papers have been devoted to GPT. (Only an AI, ironically, could « read » them all.) Many of the top-ranked cookbooks on Amazon are GPT-written, and its self-publishing platform now caps « writers » at 78 books per week.
The Law came in the form of the « Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence » — the EU’s AI Act, adopted on 13 March 2024 (and published in the Official Journal on 12 July). It aims to regulate the deployment of AI systems by public authorities as well as their introduction to the market. It sets guidelines for what should be done with AI and the content it creates, and for AI systems’ impact on social life and on human decisions.
So allow me to read the AI Act, before trying to disentangle what this landmark legislation does from what it should do and what it cannot do. The exercise here is that of a philosopher reviewing the AI Act as a text — a text commissioned by the EU, resulting from months of intense auditing, deliberation and compromise — in the light of what philosophy has told us in various ways, sometimes centuries ago, about computing, consciousness and causality. The oddity of that exercise should not deter one from reading. It will eventually be a case of the very new meeting the very old — namely, the language of AI jurists meeting philosophers from an obscure past — Hobbes, Leibniz, Hume — who, strangely, hand AI-lovers and AI-haters alike a conceptual mirror in which to measure themselves.
Entering the Act’s several hundred pages, one first notices an interesting departure from the doxa. Instead of seeing AI as an existential threat, as it is often phrased, the Act distinguishes between classes of risks. Risks are defined by the potential harm done to the health, safety or fundamental rights of what the Act calls « natural persons » (whether an AI could be a different kind of person is left open), as well as to democracy.
- See Kate Crawford’s Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence for examples related to law enforcement.
- An argument about the accumulation of overexploited or stolen work that has been necessary to set these technologies in motion would be similar to Marx’s conception of « primitive accumulation », developed in Capital, Book I, Part VIII.
- Hence the philosopher Hilary Putnam, a major advocate of this position, joked in 1975 (in « Philosophy and our mental life ») that « thought » could happen in any kind of matter, provided that the proper functional relation occurs, so why not Swiss cheese?
- This is the main theme of my book Les sociétés du profilage. Évaluer, optimiser, prédire (Payot/Rivages, 2023).
- « Natural persons in the Union », the Act avers, « should always be judged on their actual behaviour », never « on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof. Therefore, risk assessments of natural persons in order to assess the risk of them offending or for predicting the occurrence of an actual or potential criminal offence solely based on the profiling of a natural person or on assessing their personality traits and characteristics should be prohibited. » (§42)
- These seven principles were articulated in the « Ethics Guidelines for Trustworthy AI » (2019), developed by the independent High-Level Expert Group on Artificial Intelligence, appointed by the Commission in 2018.
- « Alignment » names the protocols applied to an LLM after its initial training, disallowing the racist, sexist or otherwise offensive formulations that, given the texts that fueled it, it would otherwise be likely to utter. The GPT, after alignment, will not advise you on how to commit a crime undetected, or write you a novel that inspires suicidal feelings, etc.