If you have ever played role-playing games, whether online or the old-fashioned backyard kind, you know how easy it is to get attached to your avatar, to inhabit the role you play. You can literally feel pain when your character is clubbed by a troll, scorched by a dragon or hexed by a shaman. The American sociologist (and avid gamer) William Sims Bainbridge took this attachment further still, creating virtual representations of 17 deceased family members. In a 2013 essay on online avatars, he envisioned a time when we will be able to upload our personalities into artificially intelligent simulations of ourselves that can act independently of us and persist after our death.
What responsibilities would we have toward such simulated people? Although some of us are leery of violent computer games, no one seriously believes that an in-game murder incriminates the virtual attacker. Yet it is not absurd to imagine that simulated people could one day exist and possess some degree of autonomy and consciousness. Many philosophers believe that a mind like ours need not be instantiated in a network of neurons in a brain: it could exist in many kinds of material system. If they are right, there is no reason in principle why a sufficiently powerful computer could not host consciousness in its chips.
Is killing a digital fly murder?
Today many moral philosophers grapple with the ethics of shaping human populations, asking questions such as: what is the value of a human life? What kinds of lives should we strive to create? How much should we value human diversity? But when it comes to the ethics of how we treat simulated beings, it is not at all clear that we can rely on the same logic and intuitions we use in our world of flesh and blood. Deep down, we feel it is wrong to kill a dog, or even a fly. But is switching off a simulation of a fly's brain, or of a human's, murder? When "life" takes on new digital forms, our own experience may no longer be a reliable moral guide.
Adrian Kent, a theoretical physicist at the University of Cambridge, decided to examine this gap in moral reasoning. Suppose, he writes in one of his papers, that we learn to emulate human consciousness on a computer cheaply and easily. We would want to give such a virtual being a rich and rewarding environment, a life worth living. Perhaps we could even do this for real people, by scanning their brains in minute detail and recreating them on a computer. One can imagine this technology "saving" people from fatal diseases; some transhumanists see it as a path to immortality of consciousness.
Of course, this may all be a pipe dream, but let's assume otherwise. Now take the set of utilitarian moral principles laid out by Jeremy Bentham in the late 18th century and later refined by John Stuart Mill. All things considered, said Bentham, we should strive to bring the greatest happiness (or "utility") to the greatest possible number of people. Or, in Mill's words, "actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness."
These principles of right conduct have plenty of critics. How, for example, can we measure or compare kinds of happiness, weighing the value of a grandmother's love on the same scale as the thrill of a virtuoso piano recital? "Even if you want to take utilitarianism seriously, you don't know what qualities you should put into the calculation," says Kent. Still, most belief systems today accept, at least by default, that a moral compass pointing toward greater happiness is far preferable to one pointing away from it.
On Kent's scenario, utilitarian reasoning might suggest that we should go ahead and multiply our simulated beings (call them Sims) without limit. In the real world, unchecked childbearing has obvious drawbacks: people suffer, emotionally and economically, from having very large families; overpopulation already strains global resources; and so on. But a virtual world need have no such limits. You could create a utopia of almost unlimited resources. Why not, then, create as many worlds as possible and fill them with joyful Sims?
Our intuition protests: but why should we? Perhaps a conscious Sim simply would not have the same intrinsic value as a person of flesh and blood. Michael Madary, a specialist in the philosophy of mind and the ethics of virtual reality at Tulane University in New Orleans, thinks this idea should be taken seriously.
"There is a mystical element to human life that prompts the classic philosophical questions: why is there something rather than nothing? Does life have meaning? How ought we to live?" he says. "Simulated minds could ask these questions too, but from our point of view their questions would be misplaced, because we would know the answer: we decided to invent them."
Then again, some philosophers believe that we ourselves may be simulated beings. We cannot rule out that possibility, and yet we still regard these questions as meaningful. So perhaps Sims, too, would have the right to ask them.
Kent then asks a further question: which would be better, to create a population of identical beings, or of completely different ones? It would obviously be more efficient to create identical beings: the information needed to create one suffices to create N of them. But our instincts tell us that diversity has meaning and value. Why, though, if there is no reason to think that N different individuals would be any happier than N identical ones?
Kent's inclination is that varied lives are preferable to multiple copies of the same one. "I can't get rid of the thought that a universe with a billion independent identical emulations of Alice would be less interesting and less good than a universe with a billion different individuals," he says. He calls this the inferiority of replication.
A stronger view would be that a cosmos inhabited by a billion Alices is not one life multiplied many times, but one life shared across many worlds. It would follow that many Alices in identical environments have no more value than one. This scenario he calls the futility of replication. "I lean toward that view," says Kent, while admitting that he cannot find a compelling argument in its defence.
Kent's thought experiment touches on some old and never-resolved puzzles of moral philosophy. The English philosopher Derek Parfit, who died last year, explored them in his monumental work on personhood and the self, "Reasons and Persons" (1984). Parfit pondered questions such as how many people there should be in total, and whether it is always morally better, when we have the chance, to add another life worth living to the hustle and bustle of the world.
Even from a utilitarian standpoint, there is a problem with seeking the greatest happiness for the greatest number: the dual criteria create ambiguity. Imagine, for example, that we could control how many people live in a world of finite resources. You might think there must be some optimal number of people that (in principle) makes the best use of those resources to secure happiness and prosperity for all. But would such a utopia have room for one more person? Wouldn't it be acceptable to reduce everyone's happiness a little in order to secure one more happy life?
The problem is that this process has no end: the additional happiness of each new life can keep outweighing the cost to those already alive. Ultimately, says Parfit, you arrive at the "repugnant conclusion": a scenario in which the best outcome is a vastly swollen population of people whose lives are barely worth living, yet still better than no life at all. Their meagre scraps of happiness add up to more than the total enjoyed by a small number of genuinely happy people. "I find this conclusion hard to accept," wrote Parfit; but can it be avoided? Kent doesn't know. "I don't know whether there can be any satisfactory resolution of the repugnant conclusion," he says.
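The arithmetic that drives the repugnant conclusion can be sketched in a few lines. The populations and happiness scores below are invented purely for illustration; they are not drawn from Parfit or Kent.

```python
# Toy totals for the repugnant conclusion (all numbers are illustrative only).
# Total utility here is simply population size times average happiness per life.

flourishing_few = 1_000 * 90.0           # a small population of genuinely happy lives
barely_worth_living = 10_000_000 * 0.01  # a vast population of lives barely worth living

# Under naive total utilitarianism, the vast, barely-happy world "wins".
print(flourishing_few, barely_worth_living)
print(barely_worth_living > flourishing_few)  # prints True
```

On a pure total-utility view, the second world counts as better, and the population can always be inflated further to outweigh any given happy few, which is exactly the conclusion Parfit found so hard to accept.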
At the root of this is what Parfit called the "non-identity problem": how can we reason about the interests of individuals when their very existence depends on the choices we make (such as finding room for "one more")? Weighing pros and cons for an individual who will exist regardless of our decision is, in principle, not so hard. But once we factor in the possibility that this person might never exist at all, we no longer know how to do the accounting. Compared with non-existence, almost anything wins, so even the worst scenarios of existence can seem morally justified.
This game of utilitarian population ethics has another very strange scenario. What if there were beings with such a vast capacity for happiness that they could demand from others more than others could afford to give? The American philosopher Robert Nozick called this creature the "utility monster" and criticised the idea in "Anarchy, State, and Utopia" (1974). Such a picture, Nozick wrote, seems to "require that we all be sacrificed in the monster's maw, in order to increase total utility". Much of Parfit's book was an attempt, ultimately unsuccessful, to escape both the repugnant conclusion and the utility monster.
Now let's return to Kent's virtual worlds full of Sims, and his principle of the inferiority of replication, by which a number of different lives is worth more than the same number of identical ones. Perhaps it lets us escape Parfit's repugnant conclusion. Despite what Leo Tolstoy says about unhappy families at the opening of "Anna Karenina" (1878), a huge number of barely-happy lives seems much the same in its dismal gloom. If so, those lives cannot be stacked up, drop by drop, to increase total happiness.
But by the same token, the inferiority of replication seems to license the utility monster: by definition the monster is unique, and therefore "worth" more than the inevitably similar lives on which it feeds. That resolution should not satisfy us. "It would be nice if people somehow thought more about these issues," Kent admits. "I'm somewhat puzzled by them."
For the American libertarian economist Robin Hanson, a professor of economics at George Mason University in Virginia, these reflections are not so much thought experiments as predictions of the future. His book "The Age of Em" depicts a society in which people upload their consciousness to computers to live out their lives as "emulations" (not Sims but ems). "Billions of ems could live and work in a single building, with room enough for all," he writes.
Hanson has examined in detail how such an economy would work. Ems could be any size (some very small indeed), and time could pass for them at a different rate than for humans. They might face close surveillance and low wages, but an em could escape such suffering by choosing a life of retirement. (Hanson believes we would want to live in such a world.)
This scenario allows for self-duplication: since the mind has already been transferred to a computer, making copies is easy. Hanson notes that the question of identity then becomes blurred: a duplicate starts out as the "same person" as the original, but their identities gradually diverge once they begin to live separately.
Hanson suggests that duplicating people would be not only possible but desirable. In the coming age of ems, people with especially valuable mental abilities would be "uploaded" many times over. And people would in any case want several copies of themselves as a form of insurance. "They may prefer redundancy in their implementation, to ensure they can survive an unexpected disaster," says Hanson.
But he doesn't think ems would favour Kent's scenario of identical lives. Ems "won't put much value on living the same life at different times and in different places. They will see value in multiple copies, because those copies can do work or interact with others. But that kind of work and those relationships require each copy to be causally independent, with histories that diverge depending on their tasks or their partners."
In any case, ems will have to confront moral difficulties that we can afford to ignore. "I don't think the morality that evolved in humans is general or robust enough to give confident answers in situations so far from the experience of our ancestors," says Hanson. "I predict ems will hold many different, conflicting opinions on such things."
Ems, Sims and our virtual future
For now, all this may sound very strange, like the apocryphal medieval debates about angels dancing on the head of a pin. Will we ever create virtual life that resembles the real thing at all? "I don't think anyone can confidently say whether it's possible or not," says Kent, partly because "we have no good scientific understanding of consciousness."
Even so, technology marches on, and these questions remain open. The Swedish philosopher Nick Bostrom of the Future of Humanity Institute has argued that the computing power available to a "posthuman" civilisation would make it easy to simulate beings who inhabit worlds that look and feel as real to them as ours does to us. (Bostrom also thinks we might ourselves be living in a simulation.) Questions about how to populate such worlds would then present "a real dilemma for the programmers, scientists and futurists of a future that may not be so distant," says Kent.
Kent's scenario may already have consequences for the real world. Arguments about maximising utility and the non-identity problem arise in debates over encouraging or preventing human conception. When should conception be avoided because of, say, a risk of abnormalities in the child? No reproductive technique guarantees complete safety, and IVF would never have been developed had that been a requirement. The expectation, rather, is that a technique must stay within some threshold of risk. But a utilitarian approach challenges this idea.
What if a new method of assisted reproduction carried a modest risk of a minor birth defect, say, a birthmark? (This is a real example: Nathaniel Hawthorne's 1843 story about an alchemist whose attempt to remove a blemish from his wife's cheek proves fatal was cited by the US President's Council on Bioethics in 2002.) It would be hard to argue that people marked with such a spot would be worse off than others, and therefore that the technique should be ruled out. But where do we draw the line? At what kind of birth defect is it better for a life not to be brought into being at all?
Similar dilemmas arise in arguments over human cloning. Would the dangers, such as social stigma or distorted parental motives and expectations, outweigh the benefit of granting a life? Who are we to make that choice for the cloned person? And who are we making it for, if we make it before that person exists at all?
Such arguments can seem to demand godlike judgment. But a feminist observer might suspect we have fallen for a version of the Frankenstein fantasy. Is it not the case, in other words, that a handful of men are dreaming of finally getting to fabricate people, something women have been doing for centuries? The sense of novelty that drives all these disputes carries a patriarchal tinge.
Even so, the prospect of virtual consciousness raises genuinely fresh and fascinating ethical questions, which, in Kent's view, force us to interrogate the intuitive value we place on life and on demographics. It is surprisingly hard to find a strong argument that a number of different lives is morally better than the same number of identical ones. Why do we think so? And should we rid ourselves of the prejudice?
One could argue that perceived homogeneity in human populations erodes our capacity for empathy and ethical reasoning. The "faceless" crowd set against the lone individual is a familiar trope, deployed to evoke a sense of heroism. But the intuition is not necessarily sound.
Perhaps we have an evolved aversion to identical individuals, given that genetic diversity helps ensure a population's survival. Think of films about identical twins or clones: they rarely look benign, and often look downright spooky. In 1919 Sigmund Freud linked this feeling to the idea of the sinister double, the Doppelgänger. And if a pair of identical twins can still charm, hundreds of "identical" characters would look terrifying.
It is not as if we will face armies of duplicates any time soon, in the real world or a virtual one. But the value of a thought experiment is that it gives us a new way of looking at the questions of this world. By imagining the ethics of our dealings with Sims, we expose the shaky or non-existent logic we instinctively rely on when weighing the moral value of our own lives.
So, is killing a digital fly murder? Tell us in our Telegram chat.