Here And Hero: Neuroscience of Ethical Societies

We are born to care. According to research, even infants have a sense of what is fair. As we mature, our everyday conversations are interwoven with ethical considerations, about ourselves and about others. Much of the media we interact with is driven by moral outrage and controversy. Is this or that behavior decent, acceptable, criminal, or perverse?

For a social animal, this impulse, this justice sensitivity, has an evolutionary basis. Living in large groups of unrelated individuals, a shared commitment to fairness facilitates cooperation.

But does the growing field of research into the neuroscience of good and evil tell us anything about how we got here? For example, can it offer any insight into things like increasing authoritarianism, and wealth inequality?

For a moment, let’s look past polarizing stories about how prisons and police are inevitable or not, necessary or not, a blessing or not. Rather than asking whether surveillance, deterrents, and punishment maintain a safe and civil society, let’s look at what science says about being good.

Social Decision-making

The neuroscience of morality is broadly investigated through the examination of human social decision-making. Theory and research in this area, especially over the last two decades, are deep and vast, exploring how we make decisions about what behaviors we consider good, corrupt, helpful, unethical, criminal, and so on.

I am looking at this because I’ve been struggling to understand evil. Not the evil of individual pathology, so much as how it becomes acceptable, even laudable, to build and maintain organizations that bury truth and create suffering for the benefit of a few. How do we justify this behavior? How are a few people able to organize cruelty and harm – or projects with these obvious effects – with the consent – even admiration – of many others? If we don’t assume that brutality and selfishness are our natural state, then how is it that instead of addressing these issues, our communities seem persistently fractured by deep antipathy and oblivious self-centeredness? Even besides a global swing towards xenophobia and authoritarianism, in the context of climate instability, these questions have serious consequences.

Beginning with Darwin, there has been a growing understanding of how and why human cooperation has evolved. Attachment plays a key role, supporting the pro-sociality necessary for cooperation. As you might expect, people with a history of good-enough attachment figures tend to be “good,” in the sense of an individual acting according to social norms.

We can now identify regions of the brain that are involved in contemplating outcomes, assessing the intentions of others, making judgments about harm done, and so on. This work is based on empirical research, and it’s compelling, even where – or because – disagreement remains.

But the links between biology and behavior in the individual are only one narrow slice of this question. Questions about context, history, power, and agency are also relevant, certainly for a just and safe community. As an obvious example, how is the traditional story about an impartial court system out of step with the lived experience of its participants? And as clinicians in this community will agree, a history of trauma, both within the family and in relation to community institutions (police, social services, psychiatry, etc.), can have a pernicious and lasting effect on behavioral outcomes.

Strong Reciprocity

The idea of strong reciprocity may provide insight here, into why human cooperation in large societies of unrelated individuals has remained stable. One might guess that unconditional altruism is the basis for a stable society, but research casts doubt on this intuition: not only can rewards fail to incentivize good behavior, they can actually undermine it. According to the theory first outlined by Bowles and Gintis, long-term stability and cooperation depend instead on what they call strong reciprocators. These are people willing to cooperate altruistically when possible, but also willing to maintain norms – at personal cost – by punishing the selfishness of non-cooperators. The theory originates in evolutionary population biology, but it has been supported by research in numerous disciplines, including behavioral economics, the neuroscience of decision-making, and game theory.

There are several ways this has been explored in experiments, often using economic trust games such as the Ultimatum Game, the Prisoner’s Dilemma, public goods games, and variations of these. The basic scenario of the Ultimatum Game is simple: one participant proposes how to split an endowment, and another accepts or rejects the offer. If the responder rejects the offer, they both get nothing.
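The rejection mechanic can be sketched in a few lines of Python. This is a toy illustration only; the `rejection_threshold` parameter is my assumption (a stand-in for a responder’s fairness norm), not a value from any particular study.

```python
# Toy one-round Ultimatum Game.
# A proposer offers part of the endowment to a responder, who
# accepts or rejects. Rejection leaves both players with nothing.

ENDOWMENT = 10

def play_round(offer, rejection_threshold=3):
    """Return (proposer_payoff, responder_payoff).

    `rejection_threshold` is a hypothetical parameter: the smallest
    offer this responder is willing to accept.
    """
    if offer >= rejection_threshold:
        return ENDOWMENT - offer, offer   # offer accepted: split stands
    return 0, 0                           # offer rejected: both get nothing

print(play_round(5))  # fair split accepted
print(play_round(1))  # lowball offer rejected; both players lose
```

The point of the threshold is that a purely “rational” responder would accept any positive offer; a responder who rejects a lowball offer is paying a personal cost (forgoing the offer) to punish unfairness.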

The first finding from this research is that it upends the traditional economic model of individuals acting primarily out of rational self-interest. In any random sampling of proposers, the offers tend to be fair. More importantly, while some responders will accept whatever is offered (something is better than nothing, right?), a substantial proportion of participants choose personal loss to punish unfairness.

Even more compelling, when the game is played in a series of rounds with enough strong reciprocators, those participants who begin by acting out of exclusive self-interest change their behavior, becoming more “fair” and abiding by the norms established by the strong reciprocators. Conversely, if there are not enough strong reciprocators, even if the group begins with many altruistic cooperators, the long-term result is that shirking becomes prevalent, in a terrible race to the bottom.
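That dynamic can be sketched as a toy repeated-game simulation. To be clear about assumptions: the agent types, the fine and tolerance values, and the rule that a shirker converts after enough accumulated punishment are all my illustrative choices, not parameters from the Bowles–Gintis models or from any experiment.

```python
# Toy repeated public goods game with costly punishment.
# Strong reciprocators always contribute and fine shirkers each round,
# paying a personal cost to do so; a shirker starts contributing once
# accumulated fines exceed its tolerance. All rules are illustrative.

def simulate(n_reciprocators, n_shirkers, rounds=20,
             fine=4, punish_cost=1, tolerance=8):
    """Return (contributors_per_round, total_cost_paid_by_reciprocators)."""
    fines = [0] * n_shirkers            # fines accumulated by each shirker
    cooperating = [False] * n_shirkers  # has this shirker converted yet?
    total_punish_cost = 0
    history = []
    for _ in range(rounds):
        history.append(n_reciprocators + sum(cooperating))
        for i in range(n_shirkers):
            if not cooperating[i]:
                # every reciprocator pays punish_cost to fine this shirker
                fines[i] += fine * n_reciprocators
                total_punish_cost += punish_cost * n_reciprocators
                if fines[i] > tolerance:
                    cooperating[i] = True  # norm enforced: shirker converts
    return history, total_punish_cost

# Enough reciprocators: shirkers are quickly brought into line.
print(simulate(n_reciprocators=5, n_shirkers=3))
# No reciprocators: shirking is never punished and persists forever.
print(simulate(n_reciprocators=0, n_shirkers=3))
```

Note that the reciprocators’ punishment is costly to them (`total_punish_cost`), which is the “at personal cost” part of the theory: norm enforcement is itself a form of altruism.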

Sound familiar? When we don’t hold each other accountable, selfish behavior keeps increasing. Another way of seeing this: when social institutions insulate shirkers (protecting privilege and power, as ours currently do), then in essence we have incentivized selfishness.

The research doesn’t offer a clear path, but it does suggest that intervention is going to be necessary to stop this race to the bottom. Perhaps it is some innate denial, but I continue to believe that change is possible. I’m with Cory Doctorow, who writes about “the desperate hope we have for people who are depending upon us.” The research too seems to support this, given the right environment, as described by Dan Reisel in a TED talk. Discussing the potential of restorative justice, he describes studies of mice in an “enriched environment,” where neurogenesis leads to the development of prosocial neural structures that atrophy in caged isolation.