The Development of Morality through Genetic Algorithms
Morality is a big deal.
Most of us are born with an innate sense of what’s right and what’s wrong, and we refine the way we think and behave through our peers and through past traditions.
But what is the source of morality? Does it come from a superior being, or is it something we developed ourselves? Is it subjective or objective? Is it relative or absolute? Can a machine have a sense of morality?
This topic has always been extremely important. The way we think about morality itself is at the very base of some of the most important dilemmas of our time. To put things in perspective, even the apparent existence of morality is often used as proof of God (don’t worry, this article is not about theism).
We are also on the verge of creating computers and machines that will have to make moral choices. Is a self-driving car morally obligated to save a pedestrian if that means endangering the driver? Is it even possible to somehow equip an algorithm with morality?
To do so we would first need to define morality (and no, unfortunately it’s not as simple as “God says so, done”).
To prepare for this article I read books and watched debates. I found the recent discussions between Jordan Peterson and Sam Harris on this very topic particularly interesting, and their vast knowledge of the subject is useful for summarizing two of the most common competing positions.
Peterson doesn’t define morality in simple terms. I won’t quote him directly, and I’ll try my best to avoid strawmanning him. What I got from his view is that our morality comes from past traditions and stories. These stories, which according to Peterson come mainly from the Judeo-Christian tradition, encode the wisdom of our ancestors and survived until today because they helped us prosper: we have been co-selected together with those stories. He also agrees with Dostoevsky in Crime and Punishment that utilitarianism and rationalism must be avoided, and can only be avoided by believing in a supreme being that can impose judgement.
Sam Harris has a very different approach. He compares morality and behaviour to a chess game: we start with a set of rules and a goal, and we develop strategies and evaluate each move we make. If a move gets us closer to the goal, it’s a good move. If we change a rule or the goal, strategies and good moves change accordingly. The evaluation of each move is not subjective, since it can be measured in a deterministic way. This consequentialist approach is a very common framework in secular morality. We can start from a goal defined as well-being and, from there, derive some general rules, such as “life is generally preferable to death” (a simple one, since death is the absence of being and the first thing to be generally avoided), and so on. Of course, this is usually paired with moral relativism (not to be confused with subjective morality), where the context of an action matters: throwing a bucket of icy water on someone is morally wrong for obvious reasons, but if that person is on fire it becomes objectively a good moral action.
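To make the chess analogy concrete, here is a toy sketch of this kind of consequentialist evaluation; the actions, contexts, and scores are invented purely for illustration.

```python
# Toy consequentialist evaluation: an action is judged by its measurable
# effect on well-being, and the same action scores differently in context.
def evaluate(action, context):
    """Return the (illustrative) change in well-being an action causes."""
    effects = {
        ("throw icy water", "person is dry"): -1,      # harmful
        ("throw icy water", "person is on fire"): +1,  # helpful
    }
    return effects.get((action, context), 0)

assert evaluate("throw icy water", "person is dry") < 0      # morally bad move
assert evaluate("throw icy water", "person is on fire") > 0  # morally good move
```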
My personal opinion is that what Peterson says is very hard to test, and that’s a serious criticism given that I’m directing it at an evolutionist. I can follow his reasoning, but there is no proof that our moral values really come from the relatively recent stories of religion rather than from several millennia of natural selection. Simple notions such as “killing is wrong” and many other basic moral intuitions are common even in African tribes and very ancient civilizations. Even dogs and monkeys have a sense of morality, one that is considerably more developed in the latter (dogs, unlike monkeys, can’t perceive justice in the quantity of a reward, only in its presence or absence). We don’t actually descend from monkeys, but they are generally closer than we are to our common primate ancestors, a pattern suggesting that morality improves with evolution and the development of the brain.
What’s more troubling to me, however, is the very thought that without a moral sense imposed from an external source we can’t be moral at all. That’s one of the main reasons I’m writing this article. After all, most people have an inner voice, a Socratic daimonion of sorts, that is somehow innate and that precedes tradition or any education.
I started thinking and running simulations in my head. Imagine our world 300,000 years ago. Mankind has gradually started to evolve from earlier primates and can probably already feel affection for offspring and peers, as most mammals do. Imagine a population of purely random individuals: some of them cooperate, some prefer to stay alone; some steal and kill, some are compassionate and helpful. Wait a few generations, and the individuals who like to stay alone will most likely die before mating, while the individuals who fight in a group or have a role in society are the ones who will survive. Violent individuals who kill without remorse, even if they somehow manage to survive, will give birth to violent groups that kill each other and have fewer chances to propagate their genes.
That’s natural selection.
This was the intuition; now I needed to test it. Can a framework based purely on a goal and a set of rules produce a moral individual through natural selection alone?
To do this I used a genetic algorithm and decided to model a group of individuals using personality traits. The full code and the simulation itself can be found here. A genetic algorithm is a metaheuristic inspired by the process of natural selection, a perfect fit for my goal.
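For readers unfamiliar with the technique, below is a minimal sketch of the loop a genetic algorithm typically follows; the function names and parameters are illustrative, not the exact ones from the linked code.

```python
import random

def genetic_algorithm(population, fitness, generations, mutation_rate=0.05):
    """Generic genetic-algorithm loop: evaluate, select, recombine, mutate."""
    for _ in range(generations):
        # Score every individual with the fitness function.
        scored = sorted(population, key=fitness, reverse=True)
        # Selection: only the fitter half propagates its genes.
        survivors = scored[: len(scored) // 2]
        # Crossover: children mix the traits of two random parents.
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            # Mutation: occasionally perturb one trait at random.
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.random()
            children.append(child)
        population = survivors + children
    return population
```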
I started with a random group of 200 individuals with completely random personality traits. I decided to use the well-known Big Five, just to give the traits some meaning, but it doesn’t really matter how the labels are named. Then I defined the fitness function (a function to evaluate the well-being of an individual) by giving each trait a risk factor. There are many studies on the correlation between the Big Five and risk factors but, again, there is no need to attach meaning to the labels. In this case, however, I considered an individual with low Conscientiousness and low Agreeableness to be at high risk. I also included the possibility of a death incident, which is more likely when those two traits are very low. Finally, individuals who die or have low well-being do not propagate their genes to future generations.
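As a rough sketch of what I mean, the setup might look something like this; the weights and the incident probability below are illustrative assumptions, not the values from the linked code.

```python
import random

# An individual is five traits in [0, 1], loosely labelled with the Big Five:
# [Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism]
NUM_TRAITS = 5

def fitness(individual):
    """Well-being score in [0, 1]: low Conscientiousness and low
    Agreeableness act as risk factors that lower well-being."""
    _, conscientiousness, _, agreeableness, _ = individual
    risk = (1 - conscientiousness) + (1 - agreeableness)  # in [0, 2]
    return 1.0 - risk / 2

def survives(individual):
    """Death incident: the lower C and A are, the more likely death is."""
    _, conscientiousness, _, agreeableness, _ = individual
    death_probability = 0.3 * ((1 - conscientiousness) + (1 - agreeableness)) / 2
    return random.random() > death_probability

# Generation zero: 200 individuals with completely random traits.
population = [[random.random() for _ in range(NUM_TRAITS)] for _ in range(200)]
```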
I let the simulation run for 50 generations, and the results confirmed what I expected. The first generations were mostly random, with a high standard deviation, a few virtuous individuals, and many deaths. Generation by generation, almost only good individuals were able to carry their genes forward. At the very end, after 50 generations, the final population was composed of 94 individuals with almost zero standard deviation and nearly the best possible well-being evaluation. The final members of this small community all had very high Conscientiousness and Agreeableness, and relatively low Neuroticism.
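For completeness, the whole experiment reduces to a loop along these lines, again a sketch reusing the hypothetical fitness, survives, and genetic_algorithm helpers above rather than the exact code from the repository:

```python
from statistics import stdev

for generation in range(50):
    # Death incidents remove the unluckiest (and riskiest) individuals.
    population = [ind for ind in population if survives(ind)]
    # One selection/crossover/mutation step over the survivors.
    population = genetic_algorithm(population, fitness, generations=1)
    scores = [fitness(ind) for ind in population]
    print(f"gen {generation:2d}: size={len(population):3d}, "
          f"mean={sum(scores) / len(scores):.3f}, stdev={stdev(scores):.3f}")
```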
Of course, I’m not claiming that this counts as proof that morality in our society developed this way, but it does show that a purely consequentialist framework for secular morality, as well as a process of natural selection for empathy and social behaviour, is indeed possible. In this simulated framework, based only on natural rules and a goal, moral individuals developed spontaneously. It’s as if morality itself were a way to reach the common goal of mankind.
What’s also worth mentioning is that the fitness function is based purely on a single individual. In a sense, its measure is completely selfish (which reminds me of the prisoner’s dilemma game). This further confirms (not that they needed any confirmation from me) the findings of many philosophers and mathematicians, from Adam Smith to John Nash: each individual, in pursuing their own selfish good, is led to achieve the best good of all. Imagine a society without the innate intuition that killing is wrong: would you like to live there?
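As a quick illustration of the tension the prisoner’s dilemma captures, here are its canonical payoffs; the numbers are the textbook ones, not something derived from my simulation.

```python
# Canonical prisoner's dilemma payoffs (higher is better for each player):
# mutual cooperation beats mutual defection in total outcome, yet each
# player is individually tempted to defect.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

both_cooperate = sum(PAYOFF[("cooperate", "cooperate")])  # 6
both_defect = sum(PAYOFF[("defect", "defect")])           # 2
assert both_cooperate > both_defect
```

Each player’s individually rational move is to defect, yet a population of cooperators ends up better off overall, which is precisely the tension that natural selection appears to resolve in favour of cooperation.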