Q&A

Ancient Fables for the Neobiological Age


What “Frankenstein” and the golem tell us about the power and responsibility of science.

January 1 marks the 200th anniversary of the publication of Frankenstein, Mary Shelley’s remarkable novel about a scientist who cobbles together body parts and brings them to life as a “new species.” Because Victor Frankenstein’s project has terrible unintended consequences — he ditches his monster because it is ugly, and the creature roams the world in a destructive search for a mate — the novel can be read as a warning about messing with nature. Those sad and scary themes rear up when people use a term like “Frankenfoods” to denigrate bioengineered products.

But even if Shelley thought of the book as a cautionary tale (and it’s debatable whether she did), that isn’t a very useful cultural shorthand today, as we wrestle with the implications of gene editing, gene writing, and other technologies that give us more power than ever to manipulate biology. Caution is of course required with these technologies. But an excess of it — too much worry about unleashing Frankenstein’s monster — could be even more dangerous. Ultimately, we’re going to have to tinker with what nature has given us if we are to feed everyone on the planet, manage environmental problems, and treat diseases that are now incurable.

So how do we use that power wisely? That’s where a story much older than Frankenstein offers some inspiration.

In ancient Jewish lore, the most moral and sanctified rabbis were capable of bringing golems — mounds of clay molded into a human shape — to life. In what now sounds like a premonition of the power of DNA, a rabbi did this through code: either by putting a scroll in the creature’s mouth or by writing a Hebrew word on its forehead. In the most famous such story, Rabbi Judah Loew was said to have created a golem to protect the Jews from a pogrom in 16th-century Prague. Later, to keep the creature from getting out of control, the rabbi decommissioned him, either by pulling the scroll out of his mouth or by erasing a letter on the golem’s forehead.

Golem stories were not exactly about technology. They were thought experiments about the definition of life and the powers of God. Nonetheless, they are suffused with an openness to the idea that technology can be used properly and wisely. That’s why a Jewish theological scholar, Byron Sherwin, wrote in a 2004 book that the golem legend “can help us navigate the biotech century.”

But how can we actually put these ideas into practice when bioethical quandaries arise? These are the kinds of questions that occupy Paul Root Wolpe, a bioethicist at Emory University whom I interviewed this fall. Edited highlights of our conversation follow.

Mary Shelley was aware of golem stories and influenced by them. But do they embody a different view of the world than Frankenstein does?

The golem is a story that fits into a traditional Jewish way of seeing the world. In that view, God created the world incomplete, and it’s human beings’ responsibility to help complete it. Because of that, the ideas of creating things and inventing things, of science and medicine, were always very positively received by Jewish thinkers, as opposed to, for example, some traditional Christian thinking that was very skeptical of meddling with the world.

Is Frankenstein mainly a warning about what happens when you mess with nature but you can’t fully accept what you’ve created?

Something that is in dispute among Frankenstein scholars is whether Shelley was saying that science itself is a dangerous enterprise — and therefore we have to be really cautious and there are some things we shouldn’t do — or whether she was saying there is nothing wrong with science itself. In fact, she praises science through the voice of Victor Frankenstein in the book. But Victor Frankenstein was a flawed, narcissistic, and arrogant human being, and the problem wasn’t the science itself; the problem was that you have to approach science with a certain amount of humility and a certain set of precautions, which he completely violated.

The interesting thing is that if the second interpretation is closer to what she had in mind, that would bring Frankenstein much closer to the golem story.


So how does the golem give us an ethical framework for editing the genome, creating genomes from scratch, or using synthetic biology to produce whole new things?

When you ask Americans about biotechnology, they are really scared about things like cloning and some of the advances in neuroscience and stem cells. But it turns out that when you question people closely, it isn’t the technology they are worried about; it’s the scientists. It’s the people who have control over the technology. Studies have shown that people do not believe that scientists subject their science to their moral compass. People believe that scientists just pursue the science without any kind of moral consideration of what they’re doing. And that’s what worries them.

So the reason that the golem and Frankenstein are tales for our times, even though one is 200 years old and one’s probably closer to 1,500 years old, is because both of them ultimately have a similar message. Which is: you can’t separate the works of our hands from our own moral stature. That is, people have to make moral decisions about the kinds of things that they do, and the more powerful our technology is, the more important it is that we bring some sort of ethic to the pursuit. And both of these tales are cautionary tales in that sense. In both cases, the creature created gets out of control.

One important difference is that in the Frankenstein story, there’s no way to stop the creature. In fact, at the end of the book, the creature is still alive and inhabiting the Arctic somewhere, because it is bigger and stronger and faster than regular human beings. In the golem story, though, the rabbi had built a kill switch into his creation, so he could stop his creature when it got out of control. That itself, I think, is an important lesson for modern biotechnology. Because we’re often very bad at predicting what the risks are, we need a way to roll things back if it turns out that they don’t end up rolling out the way we think they will.

What’s frustrating or unsatisfying about the golem legend is that it’s one thing to say that if your motives are pure, it’s okay to use technology for certain purposes. But I don’t know how to apply that in practice. In other words, what does the golem story, or either of these stories, tell us about whether we should edit the genomes of embryos not only to cure disease, but to select the traits of our children?

We can’t ask legends to do too much. These aren’t instruction manuals. They’re morality tales. Morality tales make moral points, not technical points. And then it’s up to us to take those moral lessons and figure out how to implement them and operationalize them.

But if we take them seriously, they do teach us certain kinds of lessons. For example, it’s not just about doing things with proper intention. It’s about a deep examination of the moral implications and the moral rationale for what we do. So then we can ask ourselves the next question: what do we do to train our genetics graduate students in moral consideration of their work? The answer is, virtually nothing. So if we are going to take either of these tales seriously, they demand that we have conversations with our science graduate students, if not the scientists themselves, about: “What are the moral implications of your work?” “Why is it you’re doing what you’re doing?” “What’s your rationale, and what are you trying to achieve?” And “to what degree are you really considering the safety and unforeseen implications of your work?”

I don’t think that they can tell us, ultimately, “should we genetically engineer an embryo?” But they can challenge us to ask really tough questions about whether we should genetically engineer an embryo, and come up with our own answers. Which very well may be that it’s okay to do it. But, I think it takes a deep and serious examination before we can make that decision. We shouldn’t do it by default.

A couple of years ago, after the power of CRISPR had really come to light, Jennifer Doudna and other people who were involved in developing the technology had a very serious coming to terms with its implications, and they held a few meetings where they eventually called for holding off on editing the germline. The idea was that we don’t know enough about whether it’s safe, or what the implications would be, and that maybe when we do it, it should only be to edit for disease. As admirable, thoughtful, and responsible as that was, it did seem to have an air of eventual inevitability to it. In other words, the idea was: “Let’s see if we can slow this down as much as possible so that we are as astute as possible in understanding the technological and moral and sociological implications.” Is that enough? Is that a good example of a scientist taking responsibility?

There’s often a strange assumption or process that happens where people have a meeting like that, or a conversation like that, and reach some kind of conclusion at that particular meeting. And then that’s interpreted as “well, now we’ve discussed it.”

The nature of having a strong moral consideration of technology means you’re never not discussing. It’s an ongoing conversation. It’s always new.
