Commentary

The Golem in the Machine

Review of “Like Silicon from Clay” by Michael Rosen

The ChatGPT logo appears with a person in front of it. (Klaudia Radecka via Getty Images)

I’m not Jewish, but I’ve been fascinated by the golem story—the tale of the creation of a defender of the Jews who is made of clay by a mystic magician—since junior high school, when I learned about the 1920 German Expressionist film of that name. After all, what bookish teenager wouldn’t want a strong, silent protector ready to repel all bullies and tormentors on command? Later on, my wife Beth introduced me to S. Ansky’s pathbreaking Yiddish play The Dybbuk, concerning the evil spirit that supposedly inhabits its helpless victims.

So when I was asked to review a book on golems and dybbuks, as well as the maggid, a lesser-known angelic spirit from Jewish folklore and legend—a book that ties them directly to artificial intelligence—I was intrigued.

After reading Michael Rosen’s Like Silicon from Clay, I am still intrigued, but not convinced. Rosen’s central argument is that the different ways we look at AI and machine learning (ML) and our general desire to make sure “that AI operates in an ethical and responsible fashion” can draw inspiration from these Jewish supernatural phenomena, and that “we can distill the best of each approach to arrive at a coherent policy.”

A tall order, and, in the end, Rosen’s project is too ambitious to succeed. But Like Silicon from Clay remains an interesting and even compelling read, as Rosen plunges with eloquence and enthusiasm into the current debate over AI’s future and the byways of the Jewish supernatural.

Rosen begins by dividing AI commentators into two camps. He calls one set “Autonomists,” who see recent advances in AI as genuinely revolutionary, with the potential to alter human existence. The other he dubs “Automatoners,” who see AI as a simple extension, not transformation, of existing digital technology—and not necessarily a harmless one.

Each camp has its positive and negative wings (although one does wish Rosen had chosen two labels that weren’t so confusingly, even annoyingly, similar). The Positive Autonomists are represented by OpenAI’s Sam Altman and “believe that machines have already achieved—or will soon achieve—a measure of autonomy” to act without human guidance or supervision. They see this development as enhancing and extending life in positive ways. The Negative Autonomists (Rosen lists Geoffrey Hinton and Elon Musk) see the same development as dangerous and a potential threat to humanity.

On the other hand, Rosen’s Automatoners see the technology as essentially “a lifeless robot commanded to follow its developer’s orders” but no more than an “improvement, however significant, of existing computer technology.” The Positive Automatoners see AI improving human quality of life without directly threatening that life—but only if the right human-directed controls are included in the package. This is clearly the group with whom Rosen has the most sympathy.

The image of AI as “a lifeless robot commanded to follow its developer’s orders” leads Rosen into his discussion of the golem, the powerful automaton created (as legend would have it) by Rabbi Judah Loew ben Bezalel, also known as the Maharal, in the late 16th century to protect the Jewish ghetto from its persecutors. Rosen describes in detail the ceremony by which the Maharal and two rabbinical colleagues molded the creature from clay using kabbalistic formulas, then slipped a piece of parchment reading Adonai Emet, or “the Lord is the Truth,” under its tongue to give it life and movement. The place where Loew supposedly housed his golem until needed, in a synagogue attic in Prague, remains forbidden to visitors even today. (Those who have snuck in claim to have found nothing more than a mound of red dirt covered by old prayer shawls.)

Beyond the confines of Jewish history, however, Rosen says, “the golem embodies the Positive Autonomists’ ideas and ideals [for AI]: a supernatural creature that originates with but transcends its human creator, is a force for good in the world that can enhance and extend our lives,” while acknowledging “even with the best intentions, even with optimal design, and even after meticulously following instructions, things can go wrong”—as with the sorcerer’s apprentice or Frankenstein’s monster.

The dybbuk, on the other hand, is “a folkloric version of the Negative Automatoner concept [of AI]; an external manifestation of a harmful internal condition” requiring exorcism and expulsion. Rosen mentions how ceremonies for exorcising dybbuks became part of religious practice in some Jewish communities in North Africa, including among Muslims who blamed the Jewish spirit for evils affecting their community. It might be easy to see dybbuk rituals as a scapegoating mechanism, but Rosen looks deeper: He sees generative AI itself as a potential danger that could be harnessing “our vilest instincts and homogenizing them as a new and inescapable digital consensus” that its makers are imposing on an unwilling public.

All the same, Rosen himself remains an optimist about AI possibilities. He finds it expressed through the mythology of the maggid, another offshoot of the kabbalah tradition. Rosen quotes the scholar Gershom Scholem on the identification of maggidim with holy angels or souls of departed saints: “These maggidim were products of the unconscious of the psyche, crystallizing on the conscious level of the kabbalists’ minds into psychic entities,” Scholem wrote. They are the opposite side of the dybbuk coin, as it were, a form of possession by our best impulses instead of our worst.

In Rosen’s words, the maggidim express “our inner nature, our subconscious, aching to do good but fundamentally requiring our input.” The next step is to recognize that the deep psychological meanings surrounding the maggid, along with dybbuks and golems, “present an opportunity to unlock our more fruitful traits, which may otherwise be repressed or sublimated”—or are left out of the usual debates over AI.

With all this laid out, Rosen plunges the reader back into the Autonomists-versus-Automatoners battle. Instead of having to choose one camp over the other, Rosen wants us to look for ways to strike a balance between them. He thinks we can maintain human control without hampering the development of the technology and maintain strong ethical standards while eliminating invidious biases as we program our AI platforms. Rosen’s overall argument is that the real key is not what specific policy we choose to follow but rather what attitude and mindset we bring to the problem, as epitomized by his three supernatural creatures.

He concludes that if AI itself is represented by the golem in the sense that it can be an unthinking force for good or evil, depending on the commands it receives, then “our inner dybbuks—our darkest instincts—will dominate our thinking about technology and the solutions we devise if we don’t empower our inner maggids—our better angels—to overpower them.”

A lovely sentiment, but I’m not sure it’s any more insightful than the standard pronouncements we get at most AI policy summits.

And here one notices two striking omissions from Rosen’s account. The first is asking whether Jewish religious teachings themselves, or Christian teachings for that matter, can be a valuable moral guide for our implementation of AI/ML. Why insist on addressing the future of AI solely in mythological and psychological terms—or in terms of the kabbalah, an esoteric current within Judaism—while leaving out the mainstream Judeo-Christian religious tradition that has endured for thousands of years? If Adonai Emet was enough to animate the golem then, why not us today?

The second omission is China. We needn’t wonder what “our darkest instincts” will look like if they are unleashed through AI; it’s already happening in Xi Jinping’s China, from the total surveillance state to efforts at literal mind control and biotech warfare. Yet Rosen has very little to say about China’s turning a U.S.-created technology into a totalitarian nightmare, beyond quoting the venture capitalist Marc Andreessen that “the single greatest risk of AI is that China wins global AI dominance and we—the United States and the West—do not.”

He certainly doesn’t try to look at China’s AI offensive through the prism of ancient Jewish wisdom. If he did, he might come up with a very different mythic typology, one in which America’s AI industry is the golem, the automaton dumbly waiting to be activated to serve and protect, and China is the dybbuk, “harnessing our vilest instincts and homogenizing them as a new and inescapable digital consensus.”

The maggid meanwhile may turn out to be our own Judeo-Christian ethos anchored in the true and the good. That may be how we find the guidance we need for the West to prevail—and to ensure that AI is the fruitful and beneficial servant its earliest creators had always hoped it would be.
