Gregory Stock believes that mining data to keep people well could make health care cheaper and more effective. (Illustration by Igor Bastidas)

Bring on the Post-Human Future

Gregory Stock is ready for cloning, gene editing, and conscious AI. But first he’s trying to get you better health care.

“Never before have we had the power to manipulate human genetics to alter our biology in meaningful, predictable ways.”

That sounds like it could be a line from a new book ringing bioethical alarms in the age of CRISPR. But it’s something Gregory Stock wrote 15 years ago in a book called Redesigning Humans, well before the advent of the technology that is now making gene editing precise and easy. Yet the book is notable not only for its technological foresight but also for the bioethical position he staked out at the time. Stock argued against trying to stop or even slow technologies that allow people to choose their genes.

Today Jennifer Doudna and other pioneers of CRISPR have urged caution in using the technology for more than curing disease. To stave off dystopian possibilities with designer babies and eugenics, they’ve called for what amounts to a moratorium on edits to “germline” cells in sperm, eggs, and embryos. But Stock, the former director of the Medicine, Technology, and Society Program at the UCLA School of Medicine, has never been spooked by the prospects of designer babies, human cloning, or further-off alterations that might make our descendants so different from us that they would no longer be human. Rather than thinking we could prevent such outcomes, he wrote in 2002, it would be wiser to accept that they are on the way and think carefully about how to prepare. “To forgo the powerful technologies that genomics and molecular biology are bringing would be as out of character for humanity as it would be to use them without concern for the dangers we pose,” he wrote. “We will do neither.”

These days Stock, 68, is thinking a lot about a less-distant future. Since 2015 he has been co-director of a new center for “precision wellness” at the Icahn School of Medicine at Mount Sinai, which is trying to make health care more holistic. One way to think about it, he says, is that it’s an effort to more quickly bring about the data-rich medical practices you might imagine us having in 2035.

Stock spoke to me and proto.life founder Jane Metcalfe about precision wellness, his outlook on our neobiological future, and how technology has changed The Book of Questions, a set of conversation starters he first published in 1985. He has revised it three times over the years, selling a total of four million copies; it’s now being adapted for a video game.

Photo courtesy of Gregory Stock

These highlights of our conversation have been edited for clarity.

What does “precision wellness” mean? How is it different from precision medicine?

When you talk about precision medicine, you’re cleaning up a mess. You’re dealing with reactive sickness care. To me, the future of enhanced health is going to be in very early intervention and in efforts to avoid disease or mitigate it in very significant ways.

The idea is to take the very rich cloud of data about our physiology, about our genetics, about our gut biome, about our activity pattern, all of these sorts of things, and collect them in ways that are passive and that are not too burdensome. For example, there’s a high level of correlation between things like the way you handle your cell phone and your key-tap dynamics and attention, alertness, those sorts of things. So if you were to precisely monitor what you were doing with your phone, and you were to look at things like the frequency at which you’re making phone calls or finger tapping, or looking at texts, you would have all sorts of information about your mental states and probably some leading indicators of the onset of depression.
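As a rough, hypothetical illustration of the passive phone signals Stock describes, here is a minimal sketch in Python that computes two simple features from an imagined event log: how often calls are made, and the gaps between keystrokes. The log format, field names, and example values are assumptions for illustration only, not part of any system Stock or Mount Sinai has built.

```python
# Illustrative sketch only: the event log, field names, and values are hypothetical.
from datetime import datetime
from statistics import mean

# Hypothetical passively collected phone events: (timestamp, event_type)
events = [
    (datetime(2017, 5, 1, 9, 0, 5), "call"),
    (datetime(2017, 5, 1, 9, 14, 2), "keystroke"),
    (datetime(2017, 5, 1, 9, 14, 3), "keystroke"),
    (datetime(2017, 5, 1, 21, 40, 0), "keystroke"),
    (datetime(2017, 5, 2, 11, 3, 0), "call"),
]

def daily_call_count(events):
    """Average number of calls per observed day."""
    days = {ts.date() for ts, _ in events}
    calls = sum(1 for _, kind in events if kind == "call")
    return calls / len(days)

def mean_interkey_interval(events):
    """Mean gap in seconds between consecutive keystrokes (a crude tapping-dynamics feature)."""
    taps = sorted(ts for ts, kind in events if kind == "keystroke")
    gaps = [(b - a).total_seconds() for a, b in zip(taps, taps[1:])]
    return mean(gaps) if gaps else None

print(daily_call_count(events))        # calls per day
print(mean_interkey_interval(events))  # seconds between taps
```

In practice, the “leading indicator” idea would hinge less on these absolute numbers than on how an individual’s features drift from their own baseline over time.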

The mental health aspects that you describe are interesting, and I’ve heard about similar projects. But what other devices do we need to have people wear if we are to have precision wellness?

There are beginning to be good devices for monitoring sleep. Monitoring blood pressure would be really great, but there isn’t any continuous blood pressure monitor; that isn’t achieved very easily. Exercise and movement patterns, I think, are very important, and nutrition is really important.

Of course there are all the socioeconomic kinds of inputs too — social isolation is a really good example — things that you generally don’t even look at in medicine today. Or things like access to good food, financial insecurity, and housing, which are really very significant. Much more significant than health care in driving health outcomes.

So how much closer are we really getting to this imagined health care of the future?

There are two things: what can you deliver, and what can you deliver at a price point that will really be acceptable to the entire population? Both of those factors are crucial. You can actually do a lot now if you’re willing to pay a little bit more. You can monitor sleep. You can look at your genome. You can do all those sorts of things. It’s not integrated seamlessly, and it’s a little bit burdensome, but there is a lot that would not be beyond the capability of a committed individual in the quantified-self movement, for example.

There is a real question as to whether you could do a trial right now to test a common vision of improved health care, which is “get lots of data, analyze it in deep ways, come up with insights that are beyond what you already know, then use those kinds of tools to elicit healthy shifts in a person’s lifestyle or intervene in other ways.” It’s not certain that this vision is real, that it will actually work. Collect a lot of data? That’s going to work okay. The analytics? That will likely work. But what about the part subsequent to that: the behavioral changes? That is hard. What magic will happen there? I think we’ve sort of been shying away from doing the kind of trials where you actually see how well that vision of technologically enabled care works. I think we’re at a place now, though, where we can test those things out and actually answer the question. And we need to be willing to have it fail.

Leroy Hood is trying to do something similar at Arivale: gather a lot of data about people and offer them coaching on nutrition and other lifestyle changes. Do you think eventually services like this need to be set up so they don’t require coaching, so they could scale better?

Well I think if you have group coaching, it scales pretty well. You’d be supported by devices. And I think that the future of medicine is actually that the role of specialists other than the primary care physician is going to be diminished, because the diagnostic realm is going to move toward AI and machine learning. There’s just too much data and too much information for any individual to process.

A deep problem is that health care is really an industry. It pretends that it’s not in many ways. If you look at the legacy players — the pharmaceutical industry, the insurance industry, device manufacturers, specialty physician groups, major hospital systems — they have huge lobbying campaigns and relatively constrained business models. They will be centrally involved in any future health care system.

How are you going to overcome that?

The only way it’s going to happen is if you have disruptive approaches that come from outside. It may not occur in the West, where existing health care systems are much stronger. The biggest changes come either when you start something new because you can be different — you don’t have something to push against and displace — or when you’re so against the wall that if you don’t change you get destroyed.

You’re talking about coming up with a disruptive approach from outside, but here you are in the medical school of a big hospital system, Mt. Sinai. So there’s some effectiveness, at least theoretically, to doing it from the inside, right?

Well, I think that the knowledge has to come from operating within the current framework because you have to have access to patients. But the operational embodiments of [big changes in medicine] are unlikely to come through an evolutionary shift in the current care model.

Let’s pull the camera even further back. I want to ask about the technologies and implications that you talked about 15 years ago, when you wrote Redesigning Humans and you explained why we eventually would choose our genes. If you were writing that book now instead of in 2002, do you think you would have a different view on whether and when it will happen and what it will yield?

I think there’s no question that this is happening. CRISPR technology makes a path to doing significant gene editing that didn’t exist when I wrote that book. There’s still incredible uncertainty about what the impacts will be and what the timeframe is. But when you’re talking about these big-picture things, we are overly concerned about whether it’s this decade or four decades or 100 years. That’s an instant. That’s just the snap of your fingers on evolutionary time scales and in the sweep of the history of life.

I remember a lot of the debates about cloning and about genetic engineering, about whether we could do that on humans, whether it was ethical or responsible even to be talking about it. At some level it would seem almost laughable, because we’re talking about these tiny things. Here we’re at a moment where we may be seeing the emergence of machine awareness, AI, that is so sophisticated that it would surprise me if you don’t end up with conscious machines before too long. You’re potentially changing the foundational substrate of life, from organic carbon and oxygen, from biology to non-biology — silicon and all of its ilk. When you’re talking about these kinds of shifts, it’s every bit as large as when single-cell eukaryotes came together to form multicellular life. We’re at one of these fundamental transitions in the history of life.

I have a certain equanimity about it because we’re the agents of this happening but we’re really observing it, primarily. I mean, it’s this kind of emergent phenomenon. The best we can hope to do is to kind of push it a little bit one way or the other to serve our values and support our sense of our own humanity. To me, there’s no question it’s going to happen.

How can we both be agents of it and observers of it? If the technology is essentially inevitable, that’s not a reassuring idea to much of the public, which is going to hope that there is actually a lot of input on this, or a lot of ways that we can shape it.

I think the way to look at it is that yes, we’re agents of this, but not in a way of top-down design. We created telecommunications, for example. We were behind the development of that. But who really saw where that was going to lead or how that was going to reshape the world? So we’re all like a little ant colony: we’re all running around, doing this and that, trying to do our best at a micro level. We may have intimations and ideas of where this is going to lead and what’s going to happen, but we haven’t a clue when it really comes down to it. The big aspiration “we’re going to design a world in which A, B, and C are occurring” is very, very challenging.

I get it. No one’s king of the world to coordinate all these various projects.

Even if you were, if somebody said, “Look, Greg, you really know this stuff, why don’t you design what should happen? You could sort of control it, we’ll give you absolute power to do it.” I would be scared to death of it because I would obviously screw it up.

[A better alternative] is trying all sorts of things. It comes up in a lot of bioethical discussions that we should avoid doing things until we’re sure they’re safe. I think that’s the riskiest way of proceeding. If you actually do risky things and let them fail, you want them to fail while they’re still on a small scale. So yeah, people get hurt because they do one thing or another that wasn’t wise, but then we learn from that. What you really want to do to reduce the big risks is to maximize the information flow and fail fast.

That’s the way I see that process. There are people who are more comfortable with uncertainty and with a process that is pretty open-ended, a process of emergence and with accepting the process without seeing where it’s going to go. Then there are others that this absolutely scares to death.

Given the argument you just made, and given that you were saying 15 years ago that germline modification was going to be inevitable because the benefits would outweigh the risks, what have you made of the hand-wringing about CRISPR? Leading scientists have said engineering that could be passed down through the germline should not happen until we know more about its safety, and even then possibly only to edit out diseases.

The reason that I said that it was kind of inevitable was that the technology can’t be readily controlled. The barriers to doing that kind of research are continually lowering. You can design strict regulations but are they going to be implemented throughout the world? How would you actually enforce something like that?

There’s also great uncertainty about how far you’ll be able to go with these technologies. Whether you can control aging, for example, or even reverse it. Those sorts of things.

In other words, it’s not clear that everything that people think we might be able to do if the technology were essentially unregulated would even be possible?

That is certainly the case. But even regarding what proves to be possible, we tend to think that things we view as strange are somehow wrong, unwelcome, or undesirable. Whereas if you were to look at technology today that you’re comfortable with, and look at what [life] was like 50 years ago, you probably wouldn’t want to go back. But a lot of people from 50 years ago would be looking at some of the things today — loss of privacy, you name it, people don’t know their neighbors, they’re interacting on cell phones all the time — and you could make a whole litany of why that wouldn’t be desirable [to them]. But it is for you, probably. And I suspect that 50 or 75 years from now, the world will be one I would be uncomfortable with, although I might like it on a conceptual basis. But my grandchildren would go, “How could you live in a primitive time like you lived in, where people actually hacked the body apart to try to cure it? Where people got cancer!” There will be lots of things that will seem that way, looking back.

I’m not thinking we’re going to go towards some sort of [utopia]. There are always going to be problems. In many ways, despite all of the progress that there has been, I would say people are probably more unhappy than they were at earlier times that were simpler in many ways.

This reminds me of something you pointed out in Redesigning Humans: that the differences between us and the people of the future could be accelerated. Because you could imagine that the kinds of traits that future generations would try to bring out in their offspring would include a predisposition to this view of technology itself. People who are eager to manipulate the genome are likely to generate people who are themselves eager to keep it going. The future might be even more different than we can imagine and yet it will be populated by people who are inherently more likely to embrace it.

Yeah, I’m glad you pointed that out. That was one of the key points, what can happen when you really begin to be able to alter and change human biology. You begin to reshape it in your own vision, essentially.

Will there be more or less human diversity in the future?

I think ultimately there’s going to be a lot more diversity, though that depends on what level of control we can really exert and on how hackable human biology really is. If things can be done that bring significant changes without great cost and risk, then they’re going to happen.

I think it’s likely to be kind of the Star Wars bar or something. Where it’s going to go in all sorts of different directions.

If people can be optimized toward certain ideals — healthier, faster, better, stronger — that seems to be a future of less diversity.

Only if you think that there are not tradeoffs. That you can optimize everything. My suspicion is that when you optimize one thing there will be tradeoffs. There are certain people who are super, super bright. [They] often have different kinds of social interaction patterns and such. So the question is, what do you want? If you get the ability to move toward the edges in any realm, I suspect there are going to be serious tradeoffs. Then the other thing is, you’re going to have issues of integration with non-biology. A lot of choices will be developed that way.

You’re referring to environmental factors?

Well, I mean that you’re going to enhance hearing, memory, and cognition with various electronic devices. It is already beginning to happen.

What will the life expectancy be of a child born in 2037?

I think it’s kind of bimodal. It will either be pretty much as it is plus some, or else it will be possible to unravel and intervene in human aging and it’ll be very long.

What books are you reading right now?

I have been reading a whole bunch of books about health care. I’m focused on that. I was reading a very interesting one called Less Medicine, More Health. There’s An American Sickness by Elisabeth Rosenthal, which is about the deep problems in medicine. And there’s Catastrophic Care by David Goldhill, which is a wonderful book.

We spend so much on health care that it squeezes out a lot of other stuff that would probably really contribute to our health. If we halved our spending on health care and were forced to prioritize [that] spending, we would probably have greater health than we do now.

Four years ago you published a new edition of your Book of Questions, which is meant to provoke discussions among people. What’s one new question in it that was inspired by changes in technology?

If you had to give up one of the three following things for the rest of your life, which would you choose? The first is to give up all electronic communication, access to the Internet, email, telephone, all of those things. The second is to give up all motorized vehicles. You don’t have any transport. You cannot ride or use any transportation that involves a motor.

I could ride a bike?

You could ride a bike, but it limits your range significantly. The third is that you amputate your non-dominant hand.

Of those three, off the top of your head, which would you give up?

I’m sure that the “right” answer as to what would maximize my happiness would be to give up my non-dominant hand. But I’m going with B. I would just bike around, and with my range diminished, I would have electronic communication. What do most people say?

A significant fraction will give up their hand. People get along pretty well with their prosthetics. That’s the way I think about it. But even without a prosthetic, I would think you would probably be better off. If you gave up electronic communication, you would be totally isolated, in ways that would be profound. I think it would be extremely psychically painful. And I would rather have the ability to move around, because without a motorized vehicle it would be hard even in a place like New York.

What I think is really important about [that question] is that we have already become kind of functional cyborgs. Because we’re so intimately connected with our tools and our technology that we view them as a part of ourselves. We’re willing to give up a hand rather than those devices. It’s going to be increasingly unthinkable that we would give up our connection to the rich technological substrate around us, because it’s going to be the medium through which human interaction passes.

It also gets to the fact that there’s no point in romanticizing or idealizing an originalist biology as something to cling to.

If you look at the environment we inhabit, this is not the environment that we originated in as a species. I mean, look at the valleys of glass and steel, these hives of activity. It’s amazing, and this is an artificial environment that we’ve established for ourselves. I think that fundamentally what we’ve been talking about is that our technology and our knowledge and our ability to intervene — we’re turning it back on biology as well and on ourselves.
