The Ideological Subversion of Biology

Jerry A. Coyne and Luana S. Maroja

SUMMARY: Biology faces a grave threat from “progressive” politics that are changing the way our work is done, delimiting areas of biology that are taboo and will not be funded by the government or published in scientific journals, stipulating what words biologists must avoid in their writing, and decreeing how biology is taught to students and communicated to other scientists and the public through the technical and popular press. We wrote this article not to argue that biology is dead, but to show how ideology is poisoning it. The science that has brought us so much progress and understanding—from the structure of DNA to the green revolution and the design of COVID-19 vaccines—is endangered by political dogma strangling our essential tradition of open research and scientific communication. And because much of what we discuss occurs within academic science, where many scientists are too cowed to speak their minds, the public is largely unfamiliar with these issues. Sadly, by the time they become apparent to everyone, it might be too late.

***

We’re all familiar with the culture wars that pit progressive Leftists against centrists and those on the Right. In the past, those skirmishes dealt with politics and sociocultural issues and in academia were restricted largely to the humanities. But—apart from the “sociobiology wars” of the seventies and our perennial battles against creationism—we biologists always thought that our field would avoid such struggles. After all, scientific truth would surely be immune to attack or distortion by political ideology, and most of us were too busy working in the lab to engage in partisan squabbles.

We were wrong. Scientists both inside and outside the academy were among the first to begin politically purging their fields by misrepresenting or even lying about inconvenient truths. Campaigns were launched to strip scientific jargon of words deemed offensive, to ensure that results that could “harm” people seen as oppressed were removed from research manuscripts, and to tilt the funding of science away from research and toward social reform. The American government even refused to make genetic data—collected with taxpayer dollars—publicly available if analysis of that data could be considered “stigmatizing.” In other words, science—and here we are speaking of all STEM fields (science, technology, engineering, and mathematics)—has become heavily tainted with politics, as “progressive social justice” elbows aside our real job: finding truth.

In biology, these changes have been a disaster. By diluting our ability to investigate what we find intriguing or important, withholding research support, controlling the political tone of manuscripts, and demonizing research areas and researchers themselves, ideologues have cut off whole lines of inquiry. This will decrease human wellbeing, for, as all scientists understand—and as the connection between heat-resistant bacteria and PCR tests demonstrates—we never know what benefits can come from research driven by pure curiosity. But nourishing curiosity has a value all its own. After all, it doesn’t make us healthier or wealthier to study black holes or the Big Bang, but it certainly enriches our lives to know about such things. Thus, the erosion of academic freedom in science by progressive ideology hurts us both intellectually and materially.

Although biology has clashed with ideology at other times and places (e.g., the Soviet Lysenko affair, creationism, and the anti-vax movement), the present situation is worse, for it affects all scientific fields. What’s equally unfortunate is that scientists themselves—helped along by university administrators—have become complicit in their own muzzling.

Here we give six examples of how our own field—evolutionary and organismal biology—has been impeded or misrepresented by ideology. Each example involves a misstatement spread by ideologues, followed by a brief explanation of why each statement is wrong. Finally, we give what we see as the ideology behind each misstatement and then assess its damage to scientific research, teaching, and the popular understanding of science. Our ultimate concern is biology research—the discovery of new facts—but research isn’t free from social influence; it goes hand in hand with teaching and the public acceptance of biological facts. If certain areas of research are stigmatized by the media, for example, public understanding will suffer, and there will follow a loss of interest in teaching as well as in research in these areas. By cutting off or impeding interest in biology, the misrepresentation or stigmatization by the media ultimately deprives us of opportunities to understand the world.

We concentrate on our own field of evolutionary biology because it’s what we feel most compelled to defend, but we add that related ideological conflicts are common in sciences such as chemistry, physics, math, and even computer science. In these other areas, however, the clashes involve less denial of scientific facts and more effort toward purifying language, devaluing traditional measures of merit, changing the demographics of scientists, drastically altering how science is taught, and “decolonizing” science. Evolutionary biology has been especially susceptible to attacks on scientific truth because it deals with the most fraught topic of all: the origin and nature of Homo sapiens. We begin with a misconception about our species that’s become quite common.

1. Sex in humans is not a discrete and binary distribution of males and females but a spectrum. This statement, one of the most common political distortions of biology (e.g., Ainsworth 2018), is wrong because nearly every human on earth falls into one of two distinct categories. Your biological sex is determined simply by whether your body is designed to make large, immobile gametes (eggs, characterizing females) or very small and mobile gametes (sperm, characterizing males). Even in plants we see the same dichotomy, with pollen producing the tiny sperm and ovules carrying the large eggs. The size difference can be huge: a human egg, for instance, has ten million times the volume of a single sperm. And each gamete is associated with a complex reproductive apparatus that produces it. It is the bearers of these two reproductive systems that biologists recognize as “the sexes.”

Because no other types of gametes exist in animals or vascular plants, and we see no intermediate gametes, there is no third sex. Although many species of animals and flowering plants have hermaphrodites, these simply combine male and female functions (and gametes) within single individuals and don’t constitute a “third sex.” Further, developmental issues can sometimes produce people who are intersex, including hermaphrodites. Developmental variants are very rare, constituting only about one in 5,600 people (0.018 percent), and also don’t represent “other sexes.” (We know of only two cases of true human hermaphrodites who were fertile, but one individual was fertile only as a male and the other only as a female.)

Only in protists, fungi, and algae do we find more than two distinct classes of individuals having gametes of the same size (“isogamous”), with individuals able to mate with members of any class but their own. If you relax the definition of sexes, these could be viewed as multiple sexes, but to avoid confusion biologists call them “mating types.”

For all practical purposes, then, sex is a binary—not just in humans but in all animals and plants. And it’s a binary because natural selection has favored the evolution of a binary. In 1958, the famous evolutionist Ronald Fisher posed the crucial question: “No practical biologist interested in sexual reproduction would be led to work out the detailed consequences experienced by organisms having three or more sexes; yet what else should he do if he wishes to understand why the sexes are, in fact, always two?”

Although it’s not really necessary to have two discrete gamete types to obtain the well-established advantage of sexual reproduction, the evolution of the sexual binary has happened multiple times. Both biological observation and mathematical models, whose messy details we can ignore, show why the number two is ubiquitous. Beginning with an ancestral species having gametes of equal size (“isogamy”), natural selection often promotes the splitting of the population into two groups of individuals having very different gametes (“anisogamy”)—either small and mobile ones or large and immobile ones. Two sexes have thus evolved, and henceforth the species will resist the invasion of individuals having other types of gametes—that is, other new sexes.

We can see the stability of the two-sex condition by realizing that what triggers the development of males versus females varies widely across species. Different sexes can be based on different chromosomes and their genes (e.g., XX vs. XY in humans, ZW vs. ZZ in birds, individuals with like chromosomes being female in mammals and male in birds); different rearing temperatures (crocodiles and turtles); whether you have a full or half set of chromosomes (bees); whether you encounter a female (marine worms); and a host of other social, genetic, and environmental factors. Natural selection has independently produced diverse pathways to generate the sexes, but at the end there are just two destinations: males and females. And so we have an evolved and objectively recognized dichotomy—not an arbitrary spectrum of sexes.

But despite the facts, the dichotomy of sex—especially in humans—has recently come under ideologically based attacks. Even in apparently objective discussions of sex and gender, individuals are often said to have been assigned their sex at birth (e.g., “AFAB”: assigned female at birth), as if this were an arbitrary decision by doctors—a “social construct”—rather than an observation of biological reality. Even the Society for the Study of Evolution, which should know better, was swayed by ideology to publicly declare that biological sex should be viewed as a continuum. Teachers have been hounded out of their jobs and deprived of their classes simply for declaring that human sex is binary. As we’ll see, this controversy comes from a deliberate conflation of a biological reality, the sexes, with a social construct, genders.

Denying the dichotomy of sex prevents us from understanding one of biology’s most fascinating generalizations: the difference between males and females in behavior and appearance. The color, ornamentation, large size, and weapons of males, and their absence in females (a difference seen in species such as deer, birds, fish, and seals), result from sexual selection: the process, first suggested by Darwin, in which males compete with each other for access to females. This competition involves either direct antagonism between males, as in the jousting of deer, or appeals by males to female preferences through color, ornaments, and behavior. And this near-universal observation in nature ultimately comes from females investing more in reproduction than males, starting with those big and metabolically expensive eggs.

Ultimately, this puts the burden of parental care largely on females. Tied up in offspring production and rearing, females thus become the sex less available for mating, even when the ratio of males to females is 1:1. Sexual selection also explains behavior: why, in most species—including our own—males are more promiscuous than females, who are picky about their mates. For a male, fertilization involves merely expending a teaspoon or so of sperm, while for females eggs are few and expensive, pregnancy is long, and then there are those pesky offspring to tend and feed—for years in humans. Antlers, plumes, peacock’s tails, elaborate male mating dances, bird songs: these and a host of other traits make sense only as the evolutionary results of having different-size gametes.

Why do so many people resist the sex binary? Because it’s in their ideological interest to conflate biological sex with gender—one’s social identity or sex role. Unlike biological sex, gender does form more of a continuum (online lists give dozens of genders). Still, gender distributions are camel’s-hump bimodal: most people conform to male and female gender roles, but there are many more intermediates than we see for biological sex.

And why do people distort the truth? We suspect that some of those whose gender doesn’t correspond to one of the two biological sexes, and their allies, want to redefine sex so that, like gender, it forms more of a continuum. While jettisoning the sex binary may be well intentioned, it also severely distorts scientific fact—and all the evolutionary consequences that flow from that fact.

2. All behavioral and psychological differences between human males and females are due to socialization. Evolution and genetics are often claimed to play no role in these differences. This is the well-known “blank slate” ideology, which asserts that all humans, including males and females, are born with the propensity to behave in similar ways, and whatever behavioral or psychological differences we see among groups derive purely from socialization, including economic or environmental influences.

To a biologist, this kind of blank-slateism—which may stem partly from the Marxist faith in the infinite malleability of humans—is profoundly wrong. Multiple studies clearly show that there are average differences between men and women in a long list of behaviors influenced by biology, including sexual interests, parental care, aggression, degree of promiscuity, risk-taking, interest in people versus things, empathy, fearfulness, spatial abilities, violence, and traits connected to social relations. It’s important to realize that we’re talking about averages here: there’s a lot of overlap between the distributions of male and female behaviors, so individuals can show characteristics seen more often in the other sex. Some women, for example, are more aggressive than the average man. And we must add that socialization is a likely contributor—perhaps a major one—to many behavioral differences between men and women.

But can we assert that these average differences result solely from socialization? No. It’s likely that the average differences in the behaviors listed above have not only a biological basis but an evolved, genetic one. That is, over millions of years, natural selection likely caused some behaviors of males and females to diverge. How do we know this? By using multiple criteria, including evaluating the general likelihood of an adaptive explanation; looking for behavioral parallels in other species (especially our closest primate relatives); determining whether a sex difference in behavior is ubiquitous among different human cultures, including hunter-gatherers; testing whether the behavior is influenced by reproductive hormones such as testosterone; and seeing if the behavior appears at the expected time of development. Risk-taking and male-male aggression, for example, are strongest during the peak reproductive years of young adulthood—just as we expect if these are behaviors that evolved to help men secure mates.

But to many, even suggesting a biological basis for sex differences in behavior is taboo, perceived as a kind of misogyny. A recent example is Chelsea Conaboy’s declaration in The New York Times that “Maternal instinct is a myth that men created.” Here she argues that well-known differences between men and women in attentiveness and behavior toward their children are due entirely to socialization. The obvious retort from biology is that while some human societies do force the burden of maternal care onto women, the greater attentiveness of mothers than fathers to their children—attentiveness triggered by cues such as hormones, lactation, infant crying, and the sight of babies—is seen not only in every human society but, more important, also in thousands of other animal species, including our closest primate relatives. Tellingly, these other species lack the social pressures that, to blank slaters, explain sex differences. It would be an odd coincidence indeed if misogyny and the patriarchy just happen to create a situation in humans identical to that seen in our evolutionary cousins—as well as in our more distant relatives.

The false idea that human males and females are born biologically identical in behavior and psychology is a form of what we call “biological egalitarianism.” This is the view that all groups must be essentially the same in important aspects of their biology because if they weren’t, one might be tempted to slide from nonidentity into “inequality” and from there into bigotry, misogyny, and other discriminatory behaviors. But as we’ll see, there’s no logical connection between what we see in nature and how we should regard the dignity, rights, and liberties of different individuals or groups. The first is a matter of reality, the second a matter of ethics—how we rationally construct morality.

3. Evolutionary psychology, the study of the evolutionary roots of human behavior, is a bogus field based on false assumptions. The biologist P.Z. Myers joined several other critics of this field (once called sociobiology) when he asserted that “The fundamental premises of evo psych [evolutionary psychology] are false.” Even social psychologists, who almost universally accept evolution itself, are far less enthusiastic about the idea that evolution explains important aspects of human psychology, social attitudes, and preferences.

But Myers’s widely accepted view is misguided, for the fundamental premise of evolutionary psychology is simply this: our brains and how they work—which yield our behaviors, preferences, and thoughts—sometimes reflect natural selection that acted on our ancestors. Nobody denies this for our bodies—palimpsests of once-adaptive traits that are no longer useful (wisdom teeth, tailbones, and transitory coats of hair in embryos)—but opponents of evolutionary psychology deny it for our behaviors. There is, however, no scientific reason for such a duality. Why on earth should our bodies reflect millions of years of evolution while our behaviors, thoughts, and psychology, molded by the very same forces, are somehow immune to our past? The only way this could be true is if human behaviors lacked genetic variation, a sine qua non for evolution. Yet research has shown that our behaviors are among the most genetically variable human traits!

And so the “sociobiology wars” of the seventies, launched by E.O. Wilson’s eponymous book, continue under a new name, but the subject remains human exceptionalism—the view that we’re somehow nearly free of the evolutionary forces that molded behavior in other species. It’s true that the early days of evolutionary psychology included some “soft” research that proposed dubious and untestable adaptive hypotheses for our behavior, but now the field has reached an explanatory maturity that has to be taken seriously.

Indeed, evolutionary psychology explains, to the best of our knowledge, several human behaviors. These include why we favor kin over non-kin (and closer kin over more distant kin); why we mistreat stepchildren more frequently than biological children; why males are more aggressive than females; the difference in promiscuity and sexual proclivity between men and women; why men show more sexual jealousy than women; why certain facial expressions convey emotions; why we fear snakes and spiders and show disgust at bodily fluids; and why we hunger for sugars and fats. In fact, some of our behaviors, like that craving for sugars and fats, are features that were useful to our ancestors but are now useless or even harmful.

By walling off a huge area of research and teaching that involves human nature, the ideological vilification of evolutionary psychology prevents us from understanding our own species. As two evolutionary psychologists noted, “Not a single degree-granting institution in the United States, to our knowledge, requires even a single course in evolutionary biology as part of a degree in psychology—an astonishing educational gap that disconnects psychology from the rest of the life sciences.” Without such knowledge, we’re left with “social constructs” and “societal expectations” as the sole source of our behaviors, explanations that utterly fail to explain the observed data. It goes without saying that when dealing with any human problems involving behavior, it’s best to have the fullest possible explanations, both social and biological.

The dismissal of evolutionary psychology is motivated by a blank slate ideology of human nature that sees humans as almost infinitely malleable, with few genetic constraints on our behaviors. We’ve already mentioned that Marxism has almost certainly influenced this attitude, which arose on the Left. More reasons are outlined in Steven Pinker’s book The Blank Slate: The Modern Denial of Human Nature. They include a disdain for biological determinism; a belief that things that can be learned, such as language, cannot at the same time involve capacities that have evolved; the false view that biology is destiny—that what is inherited cannot be changed—and a flat denial that biology plays a large role in human behavior, including similarities and differences between individuals or groups. As we’ll see, studying genetic differences between individuals or groups is especially taboo, for that work is said to promote bigotry and even eugenics.

4. We should avoid studying genetic differences in behavior between individuals. The default assumption of many people, especially those adhering to blank-slateism, is that genetic differences between people in traits such as educational achievement and IQ shouldn’t be studied. In some cases, the very existence of genetic differences is denied despite strong supporting evidence from various lines of research, such as twin studies. Such work is thought to inevitably produce ranking of people, a promotion of bigotry, and an unfair sorting of individuals onto different educational tracks. And yet even within a single ethnic group (e.g., the American descendants of Europeans), variation in virtually every trait, be it physical or behavioral, has an appreciable genetic component. This goes for traits such as height, blood pressure, the tendency to smoke or drink, neuroticism, and cognitive abilities and educational attainment. For the last two traits, more than half the variation among individuals is based on variation in their genes. It’s important to realize, though, that these measures reflect variation within a population and say nothing about the basis of differences between populations or ethnic groups.

This kind of study has become more useful since science developed techniques to sequence an individual’s entire genome. With sequences from many individuals, you can correlate each variable DNA position (i.e., each single-nucleotide variant) with various traits, determining which bits of the DNA are associated with variation in a selected trait. This kind of study (a genome-wide association study, or GWAS) has, for example, turned up nearly 4,000 areas of the genome associated with educational attainment. Fascinatingly, many of these genes are active mainly in the brain. Using GWAS results, it’s now possible to make fairly accurate predictions about a person’s appearance, behavior, academic achievement, and health simply by analyzing an individual’s DNA and calculating their “polygenic scores” based on large samples of their population. This can even be done on fetal DNA.
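The polygenic-score calculation itself is simple arithmetic: sum, over all measured variants, the GWAS effect size times the number of copies of the effect allele the person carries. Here is a minimal sketch with invented SNP names and effect sizes (real scores use thousands of variants and population-matched effect estimates):

```python
# Hypothetical GWAS effect sizes (betas) for three made-up SNPs.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

# One individual's genotype: the count (0, 1, or 2) of each effect allele.
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

def polygenic_score(effects, person):
    """Sum of (effect size x effect-allele count) over all SNPs."""
    return sum(beta * person.get(snp, 0) for snp, beta in effects.items())

print(round(polygenic_score(effect_sizes, genotype), 3))
# 2*0.12 + 1*(-0.05) + 0*0.08 = 0.19
```

Because the score simply adds many small effects, it predicts best in populations like the one the effect sizes were estimated in—a point that matters later when we discuss applying scores across ethnic groups.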

GWAS analysis offers many possibilities for helpful intervention, especially by monitoring individuals for health conditions they’re genetically liable to develop. The usefulness of GWAS scores for educational achievement, however, is far more controversial. Although genetic differences play a role in many aspects of what we consider “intelligence,” right now it’s easier to equalize people’s prospects via social and educational reforms than by using polygenic scores.

Yet understanding genetic variation underlying educational outcomes might one day be useful. For instance, if we discover genetic variants whose carriers respond particularly well to educational or social interventions, it might be possible to identify those individuals early on. These genetic studies could help identify environmental effects as well: If two people with identical polygenic scores wind up with very different lives, how did their environments differ? This is why doing such research, despite the controversy, is still worthwhile.

Most people wouldn’t object to knowing their genetic liability to develop diseases, but that doesn’t extend to work on behavior and cognition. The resistance to those studies rests on a blank-slate view of human nature that rejects any genetic determinism and argues that we can almost fully overcome any genetic influences on behavior. Genetic studies of anything beyond physical traits and disease are, it’s claimed, linked with eugenics and similar acts of bigotry in the past.

In fact, the fear and avoidance of behavior-genetic research is so strong that even the National Institutes of Health defines races solely as social constructs and has limited researchers’ access to public, taxpayer-funded databases containing information about the genetic constitution, health, education, occupation, and income of anonymous individuals. This restriction apparently applies even to studies that don’t involve differences between races, and so it appears to be the U.S. government’s attempt to stifle research on behavioral genetics in general—especially behaviors related to academic and social success.

5. “Race and ethnicity are social constructs, without scientific or biological meaning.” This is the elephant in the room: the claim that there is no empirical value in studying differences between races, ethnic groups, or populations. Such work is the biggest taboo in biology, claimed to be inherently racist and harmful. But the assertion heading this paragraph, a direct quote from the editors of the Journal of the American Medical Association, is wrong.

Before we handle this hot potato, we emphasize that we prefer the terms ethnicity or even geographic population to race, because the last term, due to its historical association with racism, has simply become too polarizing. Further, old racial designations such as white, black, and Asian came with the erroneous view that races are easily distinguished by a few traits, are geographically delimited, and have substantial genetic differences. In fact, the human species today comprises geographically continuous groups that have only small to modest differences in the frequencies of genetic variants, and there are groups within groups: potentially an unlimited number of “races.” Still, human populations do show genetic differences from place to place, and those small differences, summed over thousands of genes, add up to substantial and often diagnostic differences between populations.

Even the old and outmoded view of race is not devoid of biological meaning. A group of researchers compared a broad sample of genes in over 3,600 individuals who self-identified as either African American, white, East Asian, or Hispanic. DNA analysis showed that these groups fell into genetic clusters, and there was a 99.84 percent match between which cluster someone fell into and their self-designated racial classification. This surely shows that even the old concept of race is not “without biological meaning.” But that’s not surprising because, given restricted movement in the past, human populations evolved largely in geographic isolation from one another—apart from “Hispanic,” a recently admixed population never considered a race. As any evolutionary biologist knows, geographically isolated populations become genetically differentiated over time, and this is why we can use genes to make good guesses about where populations come from.

More recent work, taking advantage of our ability to easily sequence whole genomes, confirms a high concordance between self-identified race and genetic groupings. One study of twenty-three ethnic groups found that they fell into seven broad “race/ethnicity” clusters, each associated with a different area of the world. On a finer scale, genetic analysis of Europeans shows that, remarkably, a map of their genetic constitutions coincides almost perfectly with the map of Europe itself. In fact, the DNA of most Europeans can be used to locate their birthplace to within roughly 500 miles.
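The “genes mirror geography” result rests on a standard technique: principal-components analysis of a genotype matrix (individuals as rows, genetic variants as columns). The toy sketch below uses simulated, invented data—not the real European samples—to show how tiny per-variant frequency differences, summed over many variants, cleanly separate populations along the leading principal component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three populations that differ only slightly in allele
# frequency at 1,000 SNPs (genotypes are allele counts: 0, 1, or 2).
n_per_pop, n_snps = 50, 1000
base = rng.uniform(0.2, 0.8, n_snps)          # shared baseline frequencies
pops = []
for shift in (-0.15, 0.0, 0.15):              # small per-SNP differences
    freqs = np.clip(base + shift, 0.01, 0.99)
    pops.append(rng.binomial(2, freqs, (n_per_pop, n_snps)))
G = np.vstack(pops).astype(float)

# Center each SNP, then extract the top principal component via SVD.
G -= G.mean(axis=0)
U, S, Vt = np.linalg.svd(G, full_matrices=False)
pc1 = U[:, 0] * S[0]                          # individuals' PC1 coordinates

# Small per-SNP differences, summed over many SNPs, separate the groups:
print(pc1[:n_per_pop].mean(),
      pc1[n_per_pop:2 * n_per_pop].mean(),
      pc1[2 * n_per_pop:].mean())
```

No single SNP distinguishes the simulated groups, yet their PC1 means fall far apart, with the intermediate population landing between the extremes—the same aggregation effect that lets real genetic data recapitulate the map of Europe.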

Of what use are such ethnicity clusters? Let’s begin with something many people are familiar with: the ability to deduce a person’s ancestry from their genes. If there were no differences between populations, this task would be impossible, and “ancestry companies” such as 23andMe wouldn’t exist. But you don’t even need DNA sequences to predict ethnicities quite accurately. Physical traits can sometimes do the job: AI programs can, for instance, predict self-reported race from just X-ray scans of the chest.

On a broader scale, genetic analysis of worldwide populations has allowed us to not only trace the history of human expansions out of Africa (there were several), but to assign dates to when H. sapiens colonized different areas of the world. This has been made easier with recent techniques for sequencing human “fossil DNA.” On top of that, we have fossil DNA from groups such as Denisovans and Neanderthals, which, in conjunction with modern data, tells us these now-extinct groups bred in the past with the ancestors of “modern” Homo sapiens, producing at least some fertile offspring (most of us have some Neanderthal DNA in our genomes). Although archaeology and carbon dating have helped reconstruct the history of our species, these have largely been supplanted by sequencing the DNA of living and ancient humans.

Further, there’s medical value in genetic studies of populations. A fair number of genetic diseases, for example, are associated (though not absolutely) with ethnicity: maladies such as Tay-Sachs disease, sickle cell anemia, cystic fibrosis, and hereditary hemochromatosis. These associations make both diagnosis and prenatal counseling more efficient, for one can use ethnicity to focus on possible medical issues. The incidence of ailments such as heart disease, cancer, and diabetes also differs among ethnic groups, but these diseases have both genetic and environmental causes, so treating them requires considering diet and lifestyle. Still, genetic analysis of individuals and groups could help with even these complex ailments. GWAS analysis based on ethnic-specific studies, for instance, might give estimates of the risk of various illnesses by testing infants or even fetuses. If you know you’re at risk, then monitoring your lifestyle can reduce the chance of getting seriously ill when you’re older.

Fortunately, GWAS data for different ethnic groups are beginning to be collected, and medical researchers already recognize that studies of different ethnicities are essential to both understanding disease and reducing health disparities. This is because genetic results from one group may not generalize to results from other groups. A recent GWAS analysis of dementia, for instance, discovered that some regions of the genome increase the risk in African Americans but not white Americans. This implies that some genes able to predict future dementia will differ between these groups and that possible interventions or cures might differ as well.

Finally, there are forensic reasons for associating genetics with ethnicity. These involve predicting what a perpetrator or victim might look like (e.g., facial features or the color of eyes, skin, and hair) from a sample of blood, tissue, or semen or, when using ancient DNA, predicting how ancient people might have looked. We know now, for instance, that some Neanderthals had pale skin and red hair and that dark skin and blue eyes might have been common in European Homo sapiens a few thousand years ago.

But the central question about genetics in the culture wars involves behavioral characteristics of different populations and ethnic groups, with differences in intelligence being the subject deemed most taboo. In light of the checkered history of this work, it behooves any researcher to tread lightly, for virtually any outcome save worldwide identity of populations could be used to buttress bias and bigotry. Indeed, even writing about this subject has led to sanctions on many scientists, who have “found themselves denounced, defamed, protested, petitioned, punched, kicked, stalked, spat on, censored, fired from their jobs and stripped of their honorary titles.” A well-known example is Bo Winegard, an untenured professor in Ohio who was apparently fired for merely suggesting the possibility that there were differences in cognition among ethnic groups. This is why most biologists stay far away from this topic.

The taboo is not whether there are observable differences in IQ and life outcomes between groups, for these are well known and easily measured using standardized tests. Rather, the issue is what causes these disparities: genetic differences, societal issues such as poverty, past and present racism, cultural differences, poor access to educational opportunities, the interaction between genes and social environments, or a combination of the above. A few methods have been applied to this question, including adoption studies, analysis of ethnically mixed populations, and GWAS. The genomic analyses have all concentrated on educational attainment—highly correlated with estimates of IQ and measures of success in life—but have focused almost exclusively on white descendants of Europeans. And the predictive power of these ethnically white GWAS scores nearly vanishes when you apply them to other ethnic groups. This decay in predictability stems from genetic differences between groups: the subset of genes that affect educational attainment may differ, the same genes may carry different variants in each group, and the groups may differ in how genes and their variants interact with each other and with the environment. The upshot is that it is not easy to translate findings from one ethnic group to another; each group needs to be studied separately.
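To see why a score built in one group can fail in another, it helps to see how simple the score itself is. The sketch below (in Python, with invented variants and effect sizes, not real GWAS estimates for educational attainment or anything else) shows a polygenic score as nothing more than a weighted sum: each variant's estimated effect size times the number of copies of that variant a person carries.

```python
# Illustrative sketch only: the effect sizes and genotypes below are
# invented for the example, not real GWAS results.

def polygenic_score(effect_sizes, dosages):
    """Weighted sum: each variant's effect size times the person's
    allele count at that variant (0, 1, or 2 copies)."""
    return sum(beta * dose for beta, dose in zip(effect_sizes, dosages))

# Hypothetical per-variant effect sizes, estimated in one ancestry group...
betas = [0.03, -0.01, 0.02, 0.05]
# ...applied to one individual's allele counts at those same variants.
person = [2, 1, 0, 2]

print(round(polygenic_score(betas, person), 2))  # 0.15
```

Because the effect sizes are estimated within a particular population, applying the same weights to a group with different allele frequencies, different linked variants, or different gene-environment interactions degrades the prediction, which is exactly the decay in cross-group predictive power described above.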

Two other issues make it hard to analyze behavioral and cognitive differences between groups. First, these traits are usually affected by variation at hundreds if not thousands of genes spread throughout the genome. Second, those genes are physically connected to other genes on chromosomes. Taken together, this means that many genes for external appearance (color, facial structure, hair texture)—the very genes that give information about someone’s ethnicity—are physically linked to other genes, including those for educational attainment. Because genes lying close to each other on the chromosomes are inherited together, we have no way to completely separate genes affecting appearance from those affecting educational attainment. If differences in achievement between groups come at least partly from society treating people differently when they look different (e.g., via bigotry and racism), then the societal effect caused by “appearance genes” is conflated with the direct effect of “academic achievement genes.”

But despite the difficulty of disentangling the effects of genes and environments, there are still societal benefits to understanding genetic effects within different groups. For example, GWAS analyses—conducted separately for each ethnicity—could illuminate whether genetic variants associated with educational outcomes differ among groups or respond differentially to environmental interventions. Imagine, for instance, a gene whose variants were associated with thyroid function. Further imagine that variants of that gene that reduced thyroid function, causing iodine deficiency, were associated with lower educational attainment than variants with higher expression, and that the low-iodine variants were more common in whites than in Asians. (This is not completely fanciful: iodine deficiency can reduce IQ by a full fifteen points, and genes might affect how well one responds to a diet low in iodine.) A simple intervention might involve iodine supplementation in whites having “low expression” DNA variants but not in those with “high expression” variants (too much iodine is toxic). This example is not far-fetched because we know that different groups have many unique gene forms (i.e., “private alleles”) that might have important effects on behavior as well as their own unique interactions with the environment.

It should be clear from this example that the reason for studying genetic differences between ethnic groups is to boost the success of individuals whose DNA is known, not to rank different groups for one trait or another. But to do this boosting, we must first understand the nature of genetic differences among groups. Many objections to this kind of work vanish when you realize that while the focus is on population-specific DNA segments associated with achievement, the ultimate goal is to help each person do their best.

In our view, then, research on cognition or educational attainment within and between groups should not be demonized, banned, or automatically denied publication, and the data should be publicly available. It goes without saying that scientists should be cautious about such research and vigilant against its misuse or misrepresentation. But in the end, it’s hard to argue with the idea that the more we understand—and that includes genetics—the more success we’ll have with social policies. Indeed, there are good arguments suggesting that stifling research on IQ, or equating this research with racism, will cause more harm than good. After all, political equality should be a moral imperative, not an empirical hypothesis, and ultimately the value of a human being does not and should not depend on their IQ or years of schooling.

The great evolutionary biologist Ernst Mayr stated it well:

Equality in spite of evident non-identity is a somewhat sophisticated concept and requires a moral stature of which many individuals seem to be incapable. They rather deny human variability and equate equality with identity. Or they claim that the human species is exceptional in the organic world in that only morphological characters are controlled by genes and all other traits of the mind or character are due to “conditioning” or other non-genetic factors. … An ideology based on such obviously wrong premises can only lead to disaster. Its championship of human equality is based on a claim of identity. As soon as it is proved that the latter does not exist, the support of equality is likewise lost. (Mayr 1963)

6. Indigenous “ways of knowing” are equivalent to modern science and should be respected and taught as such. Because indigenous peoples such as New Zealand’s Māori and the New World’s Native Americans were the victims of colonialism, their traditional knowledge is often lauded as an alternative version of modern science—a “way of knowing” developed independently from what’s called “colonialist science” but seen by many as of equal value. In fact, the New Zealand government requires indigenous ways of knowing to be given equal status to modern science in the classroom—and to other subjects in all secondary school education. South Africa is also experiencing a decolonization of biology. An article in the prestigious journal Nature calls for decolonizing pharmacology in that country, concentrating on local herbal remedies to “anchor the curriculum in local experience.” While this adds a home-grown flavor to learning, dropping an anchor in local experience can only divert the student from an education in modern pharmacology.

Matauranga Māori, the indigenous way of knowing in New Zealand, is a mélange: it comprises empirical knowledge derived from trial and error (including the navigational ability of the Māori’s Polynesian ancestors and Māori ways of procuring and growing food) but also nonscientific areas such as theology, traditional lore, ideology, morality, and legend. Yet all these are considered worthy of teaching as coequal to the methods and results of modern science. Māori scholars, for example, have advanced the improbable claim that Polynesians—the ancestors of the Māori—were the first to discover Antarctica in the seventh century. This claim is surely false, probably based on faulty translation of an oral legend. In fact, Antarctica was first seen by the Russians in 1820. Nevertheless, New Zealand’s Royal Society, the nation’s most prestigious scientific organization, gave a $660,000 grant to the Māori to explore this bogus narrative. There’s also been a revival of the traditional herbal and spiritual remedies of Matauranga Māori, which incorporate chanting as a means of healing. While local remedies may occasionally be helpful, they are almost never tested using the gold standard for medicine: randomized controlled trials.

Indigenous ways of knowing usually include some practical knowledge: observations about the local environment and useful practices developed over time, such as, in the case of Matauranga Māori, ancient methods of navigation and the best way to catch eels. But practical knowledge is not the same as the systematic, objective investigation of nature—free from assumptions about gods and spirits—that constitutes modern science. Conflating indigenous ways of knowing with modern science will confuse students not only about what constitutes knowledge but also about the nature of science itself. It is true that modern science arose in Western Europe in the seventeenth century, a time when women were denied education and most of the population was white. This situation, due to bias, severely restricted people’s opportunities but provides no reason to discredit science itself—the best way of generating accepted knowledge about the universe—as “Western” or colonialist. (“Western” has become a total misnomer and insults the many people in other countries who practice the same brand of science.)

A related issue that pits indigenous culture against modern science is forensic anthropology: the study of ancient societies using human remains and artifacts. In North America, for instance, human remains, depending on where they’re found, can be claimed by Native Americans as their own, withheld from scientific study because they’re seen as ancient members of modern indigenous groups. Indeed, federal law mandates the return of bones and other artifacts to the indigenous groups who claim them. The remains must be reburied without scientific study, even if there’s no clear genealogical connection between the human bones and the Native Americans linked to where the remains were found. In the case of Kennewick Man, the indigenous “scientific” claims included a Native American leader rejecting the truth that his ancestors arrived via the Bering Strait from Asia on these grounds: “From our oral histories, we know that our people have been part of this land from the beginning of time,” says Mr. Minthorn. “We do not believe that our people migrated here from another continent.”

One victim of this mindset is physical anthropologist Elizabeth Weiss of San Jose State, who studies 500-to-3,000-year-old bones from California. For simply studying those remains, Weiss was demoted by her university and banned from studying her department’s collection of bones. But it’s even worse: she’s not allowed to study X-rays of the remains or even show a photograph of the boxes in which they are kept. Many other universities, such as Berkeley, are also sending back or reburying artifacts and old bones. The result: valuable human history and anthropology remain off limits because remains and artifacts are considered sacred. Clearly, the best solution would be to defer burial until after scientific study or DNA collection. The present policy simply prevents us from learning about our past.

The promotion of these other ways of knowing comes from a desire to valorize oppressed groups by holding up much of their culture as having the same epistemic authority as science, a view that philosopher Molly McGrath called “the authority of the sacred victim.” In its secular form, this authority derives from postmodern views that science is just one of many “ways of knowing” and that the hegemony of science reflects power rather than accomplishment. This is encapsulated by the motto, espoused by some on both the Right and the Left for decades, that “science is always political.”

Like biblical creationism, much indigenous knowledge has a substantial spiritual or theological component that comes not from evidence but from authority or revelation. To add any of this knowledge to modern science, you must first separate the empirical wheat from the spiritual chaff. This is what the nondenominational Pastor Mike Aus meant when, after giving up his faith, he described “religious knowledge” this way: “There are not different ways of knowing. There is knowing and not knowing, and those are the only two options in this world.”

* * *

Nearly all the ideologically driven distortions of biology come from one mindset: radical egalitarianism. This is the view that the sexes, different ethnic groups, and, to some extent, individuals in a population are genetically nearly identical in behavior and psychology (though not in appearance) and that most behavioral differences are due to socialization and other environmental effects. Socialization has, for instance, become the default explanation for why there are more men than women in math and physics (and an excess of women in psychology), why males are more aggressive and females more empathic, why there are differences in achievement between individuals of different social classes and ethnicities, and why some groups are differentially represented in science and academia in general. While social influences can certainly affect these differences, the pervasive evidence for genetic influence on human differences makes it unwise to reject a priori the influence of hereditary factors. Nevertheless, because the biological data contradict the fashionable blank-slate ideology, its advocates are forced to render their program immune to data, which they do by twisting the facts of biology to conform to their beliefs.

Biological egalitarianism damages science in two ways. One is through deterrence: the chilling of research that prevents scientists from studying or teaching certain problems. This isn’t accomplished by direct prohibition of research but by instilling fears into teachers or researchers that discourages them from working on and even discussing such topics. A few public examples are all it takes to deter many others, such as the pillorying of those who teach that there are only two sexes in humans (e.g., Carole Hooven at Harvard and Christy Hammer at the University of Southern Maine). Further, those who study group differences and their genetics can be simply dismissed by labeling them as sexists, misogynists, racists, or eugenicists. This has been strikingly effective, for what liberal—and most biologists are liberals—wants to be tarred with those labels? Likewise, those who refuse to accept the equivalence of modern science and indigenous ways of knowing are deemed not only racist but colonialist. Is it any wonder that teachers, researchers, and professors censor themselves on these issues?

The other damage involves direct action: imposing requirements or punishments on scientists whose research strays too far from biological egalitarianism. Punishments have included taking classes away from professors, making their lives so miserable that they’re forced to leave academia, demanding fealty to falsehoods or the infusion of mythology into science, outright firing, rejecting scientific papers because their findings don’t respect the “dignity and rights of all humans,” withholding publicly funded data from researchers, and diverting research funds to ideologically derived projects (the National Institutes of Health once adopted this plan but soon abandoned it).

Beyond this, and beyond the scope of this piece, are the many attacks on scientific merit as an outmoded way of judging science or hiring scientists. We increasingly see calls, coming mostly from the Left, to replace evaluations of merit with more “holistic” schemes that take group identity into account. This has led many universities to require prospective faculty to submit diversity statements as part of their job applications, to eliminate the requirement that prospective students submit scores on standardized tests such as the MCAT, SAT, and GRE, and even to fire professors whose science classes are too difficult.

Science has always been subject to ideological influence and control, beginning with the Catholic Church’s censoring of Galileo, whose heliocentric model of the solar system contradicted accepted theology. And those influences have come from both the Right and the Left, including debates about evolution, vaccine efficacy, global warming, fluoridated water, and so on. But what’s happening now is different. First, recent attacks on science are more general than before, involving not just single issues but spreading into every field. The biology wars, for example, now involve much more than the fact of evolution—the only real cultural battle we fought over most of our careers—and have spread to biological sex, differences between groups, the scientific language we’re allowed to use, the treatment of biological artifacts, and, indeed, whether there are valid ways of learning about the natural world apart from modern science. And, of course, famous biologists of the past such as Gregor Mendel, Charles Darwin, and T.H. Huxley are denigrated, in hindsight, as racist or sexist.

Further, the attacks on science come not just from the public, religious believers, or political authorities, as in the past, but involve scientists themselves—scientists who deem certain research taboo, restrict the availability of publicly funded data, argue that research funding should depend on ideology rather than merit, and demand that research papers should be censored or suppressed if they might offend individuals or groups. In the case of the Lysenko affair, Soviet fiat dictated the distortion of genetics and agricultural science, but today our own colleagues force nature into the Procrustean bed of ideology. Although scientific nonconformity may not be the life-or-death issue it was in Stalin’s Russia, jobs and research are clearly at risk.

Why is this happening now? We suspect that the change in political climate over the past ten years, including the rapid rise of identity politics, has caused scientists on the Left—even if they mean well—to use their own fields to signal ideological virtue and membership in a political “tribe.” Further, science departments have also been infected by the French postmodernism pervasive in humanities departments. When combined with the self-censorship of many researchers and teachers who fear professional damage, this poses a grave threat to science.

How, then, can we restore science to its primary mission: understanding nature and the universe? Since ideological pressure comes largely from scientists themselves, including those who dispense grant money and judge research papers, we can’t count on scientific argument to solve the problem. Indeed, radical egalitarianism is itself a form of faith, resistant to facts and rational argument. It is also a pledge of group allegiance. Steven Pinker explained how resistance to evolution did not involve rejecting scientific evidence but rather served as a badge of adherence to a religious ideology that happened to reject evolution on principle. His explanation also holds for the quasi-religious progressive ideology that is damaging biology:

Professing a belief in evolution is not a gift of scientific literacy, but an affirmation of loyalty to a liberal secular subculture as opposed to a conservative religious one. In 2010, the National Science Foundation [NSF] dropped the following item from their test of scientific literacy: “Human beings, as we know them today, developed from earlier species of animals.” The reason for that change was not, as scientists howled, because the NSF had given in to creationist pressure to bowdlerize evolution from the scientific canon. It was that the correlation between performance on that item and on every other item on the test (such as “An electron is smaller than an atom” and “Antibiotics kill viruses”) was so low that it was taking up space in the test that could go to more diagnostic items. The item, in other words, was effectively a test of religiosity rather than scientific literacy. When the item was prefaced with “According to the theory of evolution,” so that scientific understanding was divorced from cultural allegiance, religious and nonreligious test-takers responded the same.

So if facts won’t turn the tide, what can we do?

An obvious palliative is one we’ve always had at hand: a form of liberal egalitarianism and morality independent of biological differences. As Pinker noted in The Blank Slate (p. 340): “Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group.”

We can also keep stressing that the job of scientists is to find truth, not to decide how that truth should be used by society. This is not to claim that all research is equally valuable or interesting, nor to argue that science hasn’t been misused in harmful ways (Zyklon-B and nuclear weapons come to mind). But given the observation that a lot of pure research has led to discoveries that could never have been predicted, we should avoid placing entire areas of work off limits. If some people distort or misuse scientific research for ideological ends, scientists themselves should take the lead in correcting the record.

But perhaps the ultimate solution involves philosophy—emphasizing that there is no value in looking to nature to determine which of our behaviors are good, moral, or normal. Doing so always involves two well-known fallacies. The first is the naturalistic fallacy—the famous dictum that is equals ought, also phrased as “what is natural is what we should do.” The second is the related appeal to nature fallacy, which argues that what is natural must be what is good.

Both fallacies lead to the same errors. First, if we condition our politics and ethics on what we know about nature, then our politics and ethics become malleable to changes in what we discover about nature later. For example, the observation that female bonobos rub each other’s genitals as a bonding behavior has been used to justify why human homosexuality is neither offensive nor immoral. Bonobo behavior is, after all, “natural.” (Similar same-sex behaviors have been reported in many species and have been used to the same end.) But what if no such behavior had been seen in any nonhuman species? Or what if the bonobo observation was shown to be wrong? Would this make homosexual behavior immoral or even criminal? Of course not, because enlightened views of homosexuality rest not on parallels with nature but on ethics, which tells us that there’s nothing immoral about consensual sex between adults.

Second, we must realize that many behaviors that are “natural” because they’re found in other species would be considered repugnant or immoral in our own. These include infanticide, robbery, and extra-pair copulation. As one of us wrote, “If the gay cause is somehow boosted by parallels from nature, then so are the causes of child-killers, thieves and adulterers.” But we don’t really derive our morality or ideology from nature. Instead, we pick and choose those behaviors in other species that happen to resemble a morality we already have. (People do exactly the same thing—ignoring the bad behaviors and lauding the good ones—when they pretend to derive morality from religious texts such as the Bible.)

All the biological misconceptions we’ve discussed involve forcing preconceived beliefs onto nature. This inverts an old fallacy into a new one, which we call the reverse appeal to nature. Instead of assuming that what is natural must be good, this fallacy holds that “what is good must be natural.” It demands that you must see the natural world through lenses prescribed by your ideology. If you are a gender activist, you must see more than two biological sexes. If you’re a strict egalitarian, all groups must be behaviorally identical and their ways of knowing equally valid. And if you’re an anti-hereditarian—a blank slater who sees genetic differences as promoting eugenics and racism—then you must find that genes can have only trivial and inconsequential effects on the behavior of groups and individuals. This kind of bias violates the most important rule of science, famously expressed by Richard Feynman: “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

But the greatest danger is not to the layperson’s understanding of science but to science itself. The guiding principle of science—and of academic freedom, on which science depends—is freedom of inquiry. Those who place whole fields of investigation off limits, or distort scientific truth for political reasons, not only violate this freedom but deprive us of the intellectual and practical benefits that could come from pure, untrammeled research.

We aren’t under the illusion that calling attention to these points, and emphasizing the fallacy of the reverse appeal to nature, will push ideology completely out of science. Progressive ideology is growing stronger and intruding further into all areas of science. And because it’s “progressive,” and because most scientists are liberals, few of us dare oppose these restrictions on our freedom. Unless there is a change in the Zeitgeist, and unless scientists finally find the courage to speak up against the toxic effects of ideology on their field, in a few decades science will be very different from what it is now. Indeed, it’s doubtful that we’d recognize it as science at all.

Jerry A. Coyne and Luana S. Maroja

Jerry Coyne is Professor Emeritus in the Department of Ecology and Evolution at the University of Chicago, where he worked on the genetics of species formation, specializing in fruit flies. He graduated from the College of William and Mary and got his Ph.D. at Harvard University. Besides 125 scientific papers, Coyne co-wrote a scholarly book about his research area (Speciation) and two trade books, Why Evolution Is True and Faith Versus Fact: Why Science and Religion Are Incompatible. He is the recipient of a Guggenheim Award and the Richard Dawkins Award, was president of the Society for the Study of Evolution, and is a member of the American Academy of Arts and Sciences.

***

Luana S. Maroja is an evolutionary biologist and professor at Williams College. She got her undergraduate and master’s degrees from the Federal University of Rio de Janeiro, Brazil, and her Ph.D. from Cornell University. She is interested in population ecology, phylogeny, speciation, population genetics, and phylogeography, and has done extensive fieldwork in Brazil, Panama, and the USA. Maroja works on a variety of organisms, including small mammals, crickets, butterflies, and plants, and has published more than 35 scientific papers.