Stephanie Saulter is the author of the "®Evolution" novels, the first of which, "Gemsigns," is now available in the United States. She has contributed this article to Live Science's Expert Voices: Op-Ed & Insights.
Taboos are an interesting social construct. Every culture tends to regard its own as both inviolable and immutable, but history begs to disagree. It's commonplace for societies to hold as absolute their understanding of what is right and moral and necessary, and to resist the notion that these convictions are, or should be, subject to change.
But if we look back on the way social mores have evolved across cultures and eras, we can see that many of those moral absolutes do not withstand the passage of time. The enforcement of religion, constraints on sexual expression, the rigidity of class structures and gender roles — all have changed almost beyond recognition over the past handful of centuries.
Those taboos that endure — murder, incest — are the ones that no special circumstances can mitigate, and to which no exceptions apply.
Evolution relies on the emergence of exceptions — no less when it comes to social change than to genetic mutation. The exceptions that become the rule over time are those that best respond to the environment in which they have arisen. And yet we are rarely more anxious than when we feel those boundaries start to shift, or more strident in demanding an uncomplicated moral framework within which to determine the way forward.
This is not always possible, or even helpful. The expectation of a simple answer to the question, "What is ethical?" belies the complexity of the circumstances within which that question is likely to be asked.
Take, for example, the prevention of illness or disability. Absent further detail, few would argue against parents doing everything reasonably within their power to ensure that their children are born healthy, and remain so. For most, this will imply no action more drastic than a sensible diet and lifestyle during pregnancy, along with good pre- and post-natal care.
But what about the couples whose genetic inheritance puts their offspring at greater risk? At this point, what is "reasonable," what is "within their power," what constitutes "good health" and the degree to which it can, or should, be "ensured" have the potential to become more controversial.
Some months ago, I attended a panel discussion entitled "Genetic testing in assisted reproduction: Selecting, not perfecting?" in which experts in reproductive and genetic medicine, law and policy discussed the issues around preimplantation genetic diagnosis (PGD). PGD enables specific inherited conditions to be tested for as part of the process of in vitro fertilization (IVF) — and, indeed, where there is known to be a high risk of a genetic disorder, IVF with PGD may be recommended over attempts at natural conception. The takeaway message was that the diagnoses and choices that PGD makes possible are both limited and specific.
There's no "test for everything" — all that can be done in response to a poor test result is to not implant that embryo. There's no way to correct whatever's wrong with it, hence the "selecting not perfecting" clause tacked onto the title.
That clause was, however, posed as a query, because the statement begs the question. To the best of my recollection, no one in the audience argued that the illnesses PGD can detect should not be prevented wherever possible — but selecting against them is nonetheless a form of engineering, albeit of the most passive kind.
Preimplantation tissue typing to select for "savior siblings" in a case where an older child already has a life-limiting disorder is not quite so passive — though it's still largely unobjectionable. Knowing that a genetic risk runs in the family, what parent wouldn't want to ensure that future children are free from its effects? And if the cord blood from a healthy newborn can help to cure a sick sibling, well, why not? That's better, surely, than letting it go to waste.
So far, so simple, but now we move on to the big story of the moment in reproductive medicine, at least here in the United Kingdom: the prospect of eradicating mitochondrial disease by replacing the mother's faulty mitochondrial DNA (mtDNA) with healthy mtDNA from a donor egg. The resulting egg would contain the birth mother's nuclear DNA and the donor mother's mitochondrial DNA, and would be fertilized in vitro by the father's sperm.
"Three-parent babies!" scream the headlines, and the protests have, indeed, poured in. There's an argument that in cases like these, the birth mother should simply turn to donor eggs instead of seeking to repair her own; the development of mitochondrial replacement techniques can be characterized as a disproportionately robust acquiescence to a rather sentimental wish for one's children to carry one's genes.
This objection is not without some merit. But a prospective mother's bad mtDNA is not the entirety of her genetic worth, and there is an entirely unsentimental counterargument in favor of preserving as much of the species' genetic variety as possible.
No matter what side of that fence you're on, there's little doubt that mitochondrial replacement is a definitive step away from random recombination followed by test-and-select, and into the arena of active engineering. It will, indeed, produce children carrying the genes of three people, not two. Those children will pass those genes on to their own children. It is the beginning of inheritable genetic modification in humans.
Should we object to this? On what grounds? It is, after all, the logical next step. It has the potential to save many thousands of people from disease-wracked lives and early deaths. If it were a new vaccine, we wouldn't hesitate. And I confess to having little patience with objections to a technique or procedure on the grounds that it is "unnatural" or that "we don't know what will happen."
Of course, mitochondrial replacement is unnatural. But so are IVF, organ transplants, prosthetic limbs and injectable insulin. If we were sanguine about the way that nature and circumstance ravage our fragile bodies, we'd never have invented medicine. And, of course, we can't predict with 100 percent certainty what will happen in the future as a result of the actions we take now. We never could. When has that ever stopped us? Why should it?
What, indeed, should stop us?
That is the ethical core of the debate. Developments in reproductive medicine tend to proceed incrementally; each is a small, logical step that makes perfect sense in light of what has gone before. But every now and then, there is a huge shift that rewrites the landscape, turning fantasy into possibility. The invention of IVF was one such shift; without it, none of the later developments discussed here would have been possible. The engineering of embryos, incorporating genetic material from multiple sources, seems likely to be another. And the fear, as always, is that we may go too far — creating the dreaded "designer babies" whose appearance, IQ, creative talents and athletic ability will have been customized to the specification of venal, vacuous parents.
Is this concern sufficiently founded to merit a prohibition on future modification work? Probably not. For one thing, intensive, long-term research at great expense is required before anything approaching actionable modification techniques can be developed. Implementation of such techniques in patients is unlikely to be either quick or cheap.
The entire process is highly regulated and subject to intense scrutiny. It's hard to imagine anyone having the money or the inclination to spend those kinds of resources on something so utterly trivial as hair or eye color, or a slightly better facility for differential calculus, or to imagine any regulatory body approving such research. Moreover, it would be a move away from our current model, which holds that intervention is only ethical and permissible on medical grounds.
Do we, therefore, need to worry that attributes that we now consider part of the vast spectrum of human diversity might, in time, become medicalized, so to speak — the better to abjure, alter and reduce that diversity? It seems an odd fear in light of the fact that reproductive medicine is enabling more, not fewer, prospective parents to have children who are their genetic descendants. Not to mention, we live in an era that acknowledges, accepts and actively celebrates diversity.
This is possibly why we fear its loss: It is a thing we have only lately come to truly understand and value, and we fear a return to the bad old days when it was not so.
(There are, of course, many people in our societies who are less celebratory than others. Their continuing presence is reason for concern, and I do not advise complacency. Rather, it seems they are more likely these days to be repudiated by the mainstream than be representative of it.)
What about the other dystopian nightmare, in which people are engineered specifically to fulfill certain roles, or survive in altered environments? I've speculated about this possibility in the ®Evolution novels, and had to imagine an extremely unlikely confluence of circumstances in order to render such an outcome plausible — circumstances in which the imperative to survive is greater than any taboo that would stand in its way. The backdrop to the creation of the genetically modified humans (the gems) of "Gemsigns" is a scenario in which the alternative is at best a reduced, pre-Information Age civilization — and at worst, outright extinction. In such desperate straits, who is to say that radical engineering would still be the wrong thing to do?
Should the ethical standards of the present trump the development of a science that might enable our species to survive an extinction event in the future?
We should take a long, hard look at our own tendency to try to limit the decisions that future generations are able to make. We may be as disinclined to trust our descendants to make morally sound choices as our ancestors were to trust us — but don't we know now that those ancestors were wrong? It follows that, one day, we might turn out to have been wrong, too.
Knowledge will emerge, and circumstances will arise, that will render our current frameworks obsolete. We need to trust the decision makers of the future to determine what will be best for them no less than we trust ourselves to determine what is best now.
The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.