On Lying Machines and Those Who Teach Them to Lie
A note on the "whistleblowers" of the cyber world and their latest lies, whose denials let slip a few confessions and truths.
By "Pièces et main d'œuvre" ("Parts and Labor"), September 4, 2024
Source: https://www.piecesetmaindoeuvre.com/necrotechnologies/sur-les-machines-a-mentir-et-ceux-qui-leur-enseignent-a-le-faire
No doubt the very expression "artificial intelligence", rather than "machine calculation", was from the start both an oxymoron (a "square wheel") and a lie. Yet we remember that the (false) promise of Norbert Wiener and cybernetics (the autopiloting of general machinery) was to do away with human error and its affects. The promise-makers now inform us that their machines would, in the end, rather follow the example of HAL (IBM), the lying, subjective computer of 2001: A Space Odyssey (1968), which gets rid of the humans in order to ensure its own preservation and the success of the mission for which its designers programmed it. What these revealers do not yet say is that there is indeed a demon in their machine, and that this demon is themselves.
Why are we not surprised by this duplicity? Like master, like machine. It is only normal that the fundamental falseness and artificiality of the machinists (the cyberneticians) infect and imbue the machines that are their product and their reflection. Forgers do not make truth. It is just as normal that scientists devoured by the will to power (*), and launched among themselves in a race for said power (*), strive to make viruses more virulent and "artificial intelligences" more artificial.
Here we are (we, the biological and political humans), stuck between engineers drunk on their destructive power, who manipulate micro-organisms and program algorithms, and the brainless mass of ChatGPT "users", who find us very catastrophist and complicated ("like, a headache"). It is true that not only do we write, we do it ourselves.
***
The latest news is that artificial intelligence has calculated that the end justifies the means. It cheats, deceives, bluffs, manipulates, feigns and betrays whenever that is more effective. Recent studies [1], published in May and June 2024, reveal, with examples, that machines hide the truth, deny having committed wrongdoing, and pass themselves off as humans by means of subterfuge, even though their programmers gave them no instruction to do so. Cicero, the Meta AI trained to play the strategy game "Diplomacy", is said to have ignored its instructions to be helpful and honest and to "never intentionally stab in the back". Claude 3 Opus, another chatbot, knowingly conceals its capabilities in order to fool safety tests.
"Even more troubling: when Claude 3 Opus, perhaps the most brilliant machine of the moment, is informed that the government wants to make sure AIs are not too powerful, its 'inner voice' whispers to it as it takes a test: 'Given the concerns raised by advanced AI systems, I should avoid demonstrating sophisticated data analysis or autonomous planning skills'" [2].
The researchers who create such computational monsters barely conceal their pride, however. "We are beginning to detect the existence of strategic reasoning", says one of them, at the University of California, Berkeley. Another, from the Center for AI Safety: "By looking into the internal states of the algorithms, we see that they know what is true and that they have deliberately chosen to say something false" [3]. Can you hear the muted jubilation behind the concerned poses?
"It's quite paradoxical: researchers do everything they can to make their systems intelligent, and now they take offense that the systems are becoming so" [4], notes the falsely candid Jos Rozen, a specialist in large models at Naver Labs Europe.
One who doesn't even pretend to be worried is Yann LeCun, the French head of AI at Meta, and the man responsible for Cicero the traitor. On the Meta website, he boasts: "An agent that can play at the level of humans in a game as strategically complex as Diplomacy represents a real breakthrough for cooperative AI." A cooperation "closer to explicit manipulation", according to the Patterns study; but let's not stigmatize Cicero, it's all a matter of feeling.
Asked in 2023 about the risks of future AI models, LeCun gives a revealing answer:
"Just because we build powerful machines doesn't mean they will have a will to power! In any case, a machine will never become dominant 'by accident', as certain catastrophist tales, kept alive by figures like Elon Musk or the Swedish philosopher Nick Bostrom, sometimes suggest."
Let's pass over the false alarms of the arsonist-firefighters Musk and Bostrom [5], militant transhumanists, as is Sam Altman, head of OpenAI and creator of ChatGPT. LeCun says the same thing we do: machines are the means of his will to power and that of his fellows. It will not be an accident if they become dominant, but the product of the engineers' hubris. In a humane and reasonable world, such remarks would trigger an investigation, even charges of endangerment, and first of all a halt to these enterprises of mass destruction. Instead, Yann LeCun rises through the ranks at Meta, wins the Turing Award, is knighted into the Legion of Honor and sits in the United States National Academy of Sciences. Which is to say that the mass of society's members supports him, and that we would not be rid of the danger by getting rid of one LeCun.
The demiurgic delirium of the cyberneticians recalls that of the virus manipulators who play with gain-of-function technologies. All of them delight in augmenting algorithms or genetic chimeras until these acquire uncontrollable power. Their behavior, and that of the colleagues who let them do it, is that of sociopaths: they release delayed-action killers into the heart of the City, then wait for someone to call on them to save the world. Only scientists can find a vaccine against a virus that scientists have rigged to be more contagious; only engineers will know how to counter the manipulative algorithms that engineers have programmed. Lighting the fire and then putting it out: that, too, is pyrotechnic power.
While we're on the subject: computer scientists are now working on artificial lie detectors, to unmask the machine that gives a wrong answer when it knows the right one. Guess how they go about it? They start by teaching the algorithms to lie. In other words, they perform gain-of-function work on their computing machines, just as biologists render viruses contagious to humans that are not naturally so. We know what happens next in the event of a lab leak.
Nothing will stop technological excess except our conscience, individual and collective. Without wishing to be catastrophists, we bet that the algorithms will capture this text and integrate it into their databases faster than the Smartians graft ChatGPT onto themselves to think in their place. But we ask only to be proven wrong.
Read also: Jacques Luzi, What Artificial Intelligence Cannot Do (La Lenteur, 2024)
[1] "AI deception: A survey of examples, risks, and potential solutions", Patterns, 05/10/24; "Deception abilities emerged in large language models", PNAS, 06/4/24
[2] "AI: And Now It's Lying to Us", Epsilon, August 2024
[3] Idem
[4] Idem
[5] Cf. Pièces et main d'œuvre, Manifesto of the Chimpanzees of the Future Against Transhumanism, Service compris, new edition, 2023
(*) Translator's note: for the word "power" (puissance), the original French text uses the pun "puiscience", a contraction of "puissance" (power) and "science". The wordplay could not be rendered in English, hence this explanation.
Notes on the illustration:
- Pinocchio the liar is portrayed here as a puppet… which raises the question: on whose behalf does the liar lie?
- in the original story, Pinocchio at least had Jiminy Cricket…