Assisted Suicide, on Demand

With Exit International’s new Sarco suicide pod and experimental AI, the future of assisted suicide is ripe for disruption.

Now that she is semi-retired, Sally Curlewis, 73, from New South Wales, Australia, works in retail a couple of mornings a week and spends her afternoons taking care of her five grandchildren. She jokes that her husband of 52 years loves the farm they own a bit more than he loves her—but she is more of a city woman anyway. She has a lovely family, lots of friends, hobbies, and a charmed life. And, in case her cancer comes back, she has a plan. Curlewis will leave her friends and family on Earth, climb into a suicide pod, and go meet her friends in heaven.

“I’ve had breast cancer, bile [duct] cancer, I’ve had my bowel removed, and I have a stoma, but I’m mentally in control. So, when the time comes… I have to make the decision before I get demented,” Curlewis says.

Twelve years ago, when her mother was in a nursing home wracked by dementia, her father, who was in his 80s, fell down in the bathroom, broke his hip, and became bedridden. “I don’t want to live like this. I want to go,” he said to his doctor. “We are not allowed to do that,” the doctor replied. “He would beg my brother every day, ‘Can you go and get those pills for me?’ My brother would reply, ‘I cannot. You’ll go to heaven and I’ll go to jail.’” At some point, Curlewis’s father took himself off all his legally prescribed medication and just lay in bed, refusing to eat. Six months later, he died.

“This was barbaric. This was a man with a full brain. He didn’t have dementia,” protests Curlewis. The experience shook her so much that she joined Exit International, a Winnellie, Australia-based nonprofit, which advocates for the legalization of both voluntary euthanasia (where a person’s life is ended at their own request to relieve pain and suffering) and assisted suicide (suicide committed with the assistance of another person, usually a physician). 

Exit International lived up to its name by making international headlines when it announced that Sarco (short for sarcophagus), its futuristic, 3D-printed suicide pod that uses nitrogen gas to cause death, received a legal green light from Switzerland’s medical review board in December 2021. In parallel with Sarco, Exit International has been developing a workable artificial intelligence (AI) assessment of a person’s mental capacity that aspires to replace psychiatrists. Does this mean the future of assisted suicide could see decision trees with no doctors?

Sarco, from Exit International

Dead within six months

With the exception of countries like Belgium, Luxembourg, Canada, New Zealand, Spain, France, the Netherlands, and Colombia—plus some Australian states that have legalized euthanasia, assisted dying, or both—ending one’s life is against the law in most places. In the U.S., 11 states offer legal assisted dying. Almost without exception, however, every place that allows it sticks to a strict protocol, approving assisted death only in cases where someone has been diagnosed with a terminal illness expected to lead to death within six months—and only with the assistance, approval, and supervision of a doctor.

“This whole process is arbitrary,” says Philip Nitschke, the founder of Exit International. “A doctor comes and talks to you for five minutes and says, ‘Yes, I think this person knows what they’re doing when they decide they want to die or not.’ Or a psychiatrist who is full of intrinsic biases does a more formal assessment,” he says.

Nitschke sees that as an undue barrier. He believes doctors are predisposed to viewing non-terminally ill people pursuing death as little more than undiagnosed mentally ill patients—and their choosing to die as a cry for help. “To be deemed eligible for suicide, you have to exhibit significant physical suffering,” he says. For him, the mainstream medical community has completely medicalized the end-of-life process while ignoring the many social, existential, or purely cognitive criteria related to it, which Nitschke argues are equally valid. He says his AI will assess a person’s eligibility for suicide using guidelines proposed by Cameron Stewart, a professor of law, health and ethics at the University of Sydney, published in the Journal of Medical Ethics in 2011. Under those guidelines, a person must be able to comprehend and retain information about the decision to end their life (including the method chosen, the risks of failure, and the impact of their decision on others), explain why they reached the decision, demonstrate there has been no undue influence from others, and pass the Mini-Mental State Examination (a widely used test of cognitive function, particularly among the elderly).

Sarcophagus literally means ‘flesh-eating’ in Greek.

In 1996, Nitschke became the first doctor in the world to administer a legal, voluntary lethal injection, setting up a syringe filled with a lethal substance, which an Australian man then activated using a computer. In 2015, he burned his medical license because he felt the Medical Board of Australia had violated his right to free speech. The media commonly call him “Dr. Death,” a mantle inherited from Jack Kevorkian, an American renegade who assisted in the deaths of 130 terminally ill people in the 1990s, for which he was tried, convicted, and sentenced to 10–25 years in prison. A physician by trade, Kevorkian earned the title in the 1950s, when he sent shivers down the spine of the medical community by proposing that death-row prison inmates be used as subjects of medical experiments while they were still alive. In 1961, he unsettled his peers once again by publishing a paper in the American Journal of Clinical Pathology advocating the transfusion of blood from fresh cadavers to wounded troops in Vietnam. And both Kevorkian and Nitschke turned to Greek to name their death techniques. Nitschke named his suicide pod Sarco (sarcophagus literally means “flesh-eating” in Greek, but the word was used to describe a limestone funeral receptacle for a corpse in ancient Greece). Kevorkian made a death machine out of three bottles delivering successive doses of a saline solution, a painkiller, and a fatal dose of potassium chloride to people seeking death, and he named it Thanatron (meaning “instrument of death” in Greek). The similarities between Kevorkian and Nitschke are astounding, but these days Nitschke is also called the “Elon Musk of assisted suicide,” because he is hell-bent on disrupting the end-of-life process using a combination of hardware, poisonous gas, and artificial intelligence.

The assessment that Nitschke and his team are creating could be completed online, in the form of an interactive program that asks the person a series of questions, evaluates their responses, and reaches a conclusion about the mental capacity of the interviewee. In theory, if the AI determines a person is eligible for suicide, it would then give them a code to activate the Sarco—a “free pass” to the other world. The whole process would take less than 24 hours, but realistically, Nitschke is aware it could take many years for laws to catch up with the technology and allow an algorithm to decide who is fit to choose their own death. “In the beginning, we will still have people talk to a psychiatrist. If the person says yes, well, then we’ll say we can give you the code,” he says.

Nitschke says his under-construction AI can bypass the ethical conundrum of having someone end their life prematurely by requiring that a person seeking to end their life in the absence of serious physical illness must have had “significant life experience”—which translates to being over 50 years of age. This is the only prerequisite. “We would argue that if a mentally competent adult makes an informed rational decision to die, then it can never be premature,” Nitschke says. 

As to how much data the AI should gather before drawing conclusions about a person’s mental state, the Australian doctor says there “needs to be a compromise.” The test should be able to be completed within one to two hours, and the result needs to be consistent if the test is repeated at a later time. “In the initial stages we will be comparing the AI result with that obtained by psychiatric review of competence from an experienced psychiatrist. We expect it will become quickly apparent if the complexity and detail of the testing needs to be increased,” Nitschke says. And though he agrees that we need to be aware of cultural biases in any AI-related assessment, which may skew results when individuals from different racial, socioeconomic, sexual, and cultural backgrounds are assessed, Nitschke says the AI-based method can still outclass psychiatrists in objectivity, since the same individual quite often receives vastly different results from different psychiatrists “depending on the political and philosophical baggage of the reviewer,” he says. Software failures, deliberate malicious interference, and hacking pose further challenges to software that grants the right to die. But he has thought of ways to address these as well. “To avoid these issues the initial AI test program will be run on its own processor and not be internet-based,” says Nitschke.

In the future, such human-machine collaborations will be more common in medicine, suggests James J. Hughes, associate provost at the University of Massachusetts Boston and executive director of the university’s techno-progressive think tank, the Institute for Ethics and Emerging Technologies. “Algorithms have been everywhere from psychological inventories to cancer diagnosis to intensive care units and predictions of which patients will die for over 30 years now. In general, the diagnostic software has proved more powerful than any individual doctor,” he says. And in the field of psychiatry, those who continue practicing without the aid of software might find themselves at a serious disadvantage, Hughes says.

The million-dollar question 

The big question, however, is neither whether AIs will outshine psychiatrists nor whether people should have the right to end their own lives. The former may be inevitable, and governments around the world are increasingly moving to affirm the latter—albeit only under the right circumstances. But that’s where the million-dollar question lies. “How do we ensure that we are ‘under the right circumstances’?” Hughes asks. If a person wants to end their existence, should they be free to do so? And what is the role of medicine in all this?

“One could argue that the role of medicine is to preserve life and work toward a relatively good and long quality of life,” says James Giordano, professor of neurology and neuroethics at Georgetown University Medical Center in Washington, D.C. One could also argue that the role of medicine is to allow a good death, recognizing that death is not a retributive nemesis, but an event of life, he adds. 

So, do we have the right to end our own lives then? “Possibly, yes,” Giordano says. However, we do not have the right to inflict our decision and its consequences on others or else we are trumping their own autonomy. “In other words, we need to examine whether the patient is over-exercising their autonomy against the will—and in some cases, the professional parameters of conduct—of the clinician, who is both the therapeutic and moral agent responsible for the patient’s care,” Giordano says. “They don’t have the right to do that.” 

“In essence, an AI will help the patient see beyond their existential horizon.”

Interestingly, Giordano thinks the AI Exit International is developing could come in handy—but in reverse. “The more information a clinician has about a patient’s past and present in ways that are predictive of events in the future, the more the power for the clinician to be more insightful in developing their decisions—this is a bunch of serious metadata the AI can hand over to the clinician about this patient’s situation,” he says. A suicide-prediction AI examines multiple biological, psychological, and social factors that are unique to each individual’s case, compares them to the histories of other people with broadly similar demographic profiles, and arrives at a fairly high predictive probability that the individual may act on their impulses to commit suicide. But the person can also learn that, within a given amount of time, there is a significant likelihood that some variables in their life will improve, even though they are unable to see this now that their coping mechanisms have failed them, Giordano quickly adds. “In essence, an AI will help the patient see beyond their existential horizon,” he says. This, Giordano suggests, will bring more “fairness” to the end-of-life process.

In Rotterdam, the Netherlands, where he is based now, Nitschke is developing the third Sarco capsule that will “ensure a peaceful death with a 100 percent success rate,” he says. 

The current pod costs around €25,000 ($26,388) to produce, a significant reduction in cost from the first version of Sarco, which came to over €150,000 ($158,285). Exit International will not be selling the devices—Nitschke’s aim is to allow anyone to download the design and print it themselves free of charge. 

“The machines can be used over and over when they are made available in Switzerland, but the costs will be determined by the administrative requirements,” he says. “The current fees for an assisted death at a Swiss clinic right now are about CHF10,000 ($10,172) using intravenous injections or pentobarbital [a euthanasia drug]—Sarco will be only a fraction of that.” 

Interestingly, Nitschke says he’s been getting a cascade of requests lately from people who see “no future for the planet” thanks to global warming—or even the war in Ukraine. Asked how seriously he takes such requests, he says he considers them all equally legitimate: “anyone who makes a rational decision to end their lives should have the best access to the best means necessary.” Besides, once we’re dead, our component atoms and molecules will continue recirculating into the general ecosphere, Nitschke believes—he is comfortable in his atheism.

That’s quite a far cry from how Curlewis envisions what comes after. She’s planning to invite 200 family members and friends over for lunch, with lots of waiters and food and champagne, if and when the time comes. Ideally, she’d like to place the pod in front of the Australian parliament and die right there, before those who would “deny people the right to a decent death”—her final act of activism on Earth. She knows this will be next to impossible, but it’s not like she is bereft of choices. Sarco can also detach from its base and serve as a coffin. You can position the end-of-life vehicle by the sea, by a cliff, inside a room full of loved ones, wherever you want. Once she decides on the best location, Curlewis will have a glass of wine or champagne, hop into the Sarco pod, close the door, and push the button.

“I’ll be fine,” she says. “A few minutes later, I’ll be in heaven.”

Editor’s note: If you are experiencing suicidal thoughts, please call the National Suicide Prevention Hotline by dialing 988 (in the U.S.). Outside the U.S.? Find a help line in your country.
