How to think about the rights of robots

In February 2017, the European Parliament voted on a resolution presented by its Legal Affairs Committee containing a series of recommendations to the European Commission on "civil law rules on robotics", which suggested, among other things, establishing a specific legal status for robots and drafting a code of ethics for those who design them. One of the objectives stated in the text was to make it possible to attribute responsibility when increasingly complex and sophisticated machines cause damage by making autonomous decisions while interacting with certain environments.

The assumption underlying the European Parliament's resolution, as well as many other reflections on the need to open a discussion on the legal status of artificial intelligence systems, is that "the more robots are autonomous, the less they can be considered mere tools in the hands of other agents", whether those agents are manufacturing companies, owners or end users.

This raises the question of whether and to what extent the traditional categories of responsibility are suitable for defining the responsibilities of the various agents in the case of actions or omissions attributable to machines, whose causes cannot easily be traced back to a specific human being. This was shown in part by the first accident in which a person died after being hit by a self-driving car, which occurred in Arizona in March 2018. The cause of the accident was ultimately linked to a malfunction in the car's software, but responsibility was attributed to the inattention of the "safety driver", i.e. the human driver sitting in the driver's seat and required to monitor the road during the journey (the highly controversial trial is still ongoing).

There has long been a broader and more general debate on the rights of machines, which includes but is not limited to the question of legal liability arising from the actions of robots. It is a debate that has accelerated steadily in recent years, driven by the remarkable progress of technological research in the field of artificial intelligence: progress that has led, among other things, to the development of systems capable of carrying out typically human activities and of exploiting cognitive processes similar to human ones, such as the ability to learn from experience on the basis of certain "inputs" and to make autonomous decisions.

– Read also: How Europe wants to regulate artificial intelligence

The results of these advances have made it easier, even for those with little interest in science fiction, to imagine a future in which the human species could coexist with artificial beings deeply integrated into the social fabric, with skills and qualities hardly distinguishable from those of any human being. There are already robots used in health care facilities, social robots for the care and companionship of elderly people, and even robot priests.

Consequently, ethical questions that seem to lack immediate practical relevance are now less bizarre than in the past: for example, whether or not to encourage the development of emotional capacities in artificial intelligence systems, and how to regard robots in relation to their possible suffering, should they one day be capable of suffering.

According to Hugh McLachlan, Professor Emeritus of Practical Philosophy (the branch of philosophy that deals with action and the knowledge that guides it) at Glasgow Caledonian University, the difficulty of taking this issue seriously commonly stems from the apparent solidity of two arguments. The first is the idea that artificial beings capable of making such ethical questions sensible and necessary are not possible. The second is that only beings endowed with living bodies and organs are susceptible of moral consideration.

The first argument is generally supported by our tendency to suppose that "mental phenomena" – consciousness, feelings, thoughts – are somehow irreducibly different from material phenomena, and hence by the conclusion that the elements that make up computers and machines built by human beings are fundamentally something other than a conscious mind. But regardless of the solidity of that belief, McLachlan argues, even admitting an irreducible incommensurability between the building blocks of machines and a conscious mind would not be enough to rule out the existence of artificially created sentient and conscious persons, nor to exclude that something that already applies to human beings could also apply to those persons.

As the influential French sociologist Émile Durkheim argued between the end of the nineteenth and the beginning of the twentieth century, phenomena such as language, morality or law cannot be traced back to the single individuals who make up a society: they are properties that "emerge" from social aggregation and could not exist without the interaction of individual human beings, with their particular psychological and biological characteristics. This means that those phenomena cannot be explained solely on the basis of those characteristics, and that in a certain sense they must be thought of as functions of the social organism necessary for integration and coexistence between individuals.

The same reasoning about the possibility of "emergent" properties, according to McLachlan, can be applied to all the sciences. It is clear that there would be no computers or other electronic devices without silicon components, cables and other materials. But it is equally clear that the operations of a computer cannot be explained only in terms of the characteristics of those components: it is the particular interaction and combination of those components with other phenomena – electricity, for example – that makes the computer "emerge" as a phenomenon of a new type with respect to the parts of which it is composed. In turn, computers interact in such a way as to make the Internet possible, yet another kind of phenomenon, distinct from any single physical and tangible computer.

In this sense, we should probably assume that what we call a conscious mind is also not reducible to the brain, molecules and other elements necessary for its functioning. It could be a different kind of phenomenon, according to McLachlan, but one that emerges from the particular interaction and combination of physical entities. There is therefore no obvious logical reason why the consciousness typical of human beings – "the ability to think and make decisions" – could not one day emerge from machines produced by human beings, from the interaction between those machines and from their adaptation to certain contexts of interaction with humans.

– Read also: Social robots for elderly people living alone

The second argument, which leads many people to consider the categories of civil law inapplicable to robots and ethical questions about them misplaced, is that machines are devoid of "life": they do not have a human body and are not living beings, or could be considered such only in a very controversial sense. But even this argument, according to McLachlan, loses much of its validity as soon as one considers a fact that is actually not very controversial, namely that both deceased persons and those not yet born, understood as future generations, are normally objects of our moral consideration, our respect and our benevolence. Neither group is properly alive, and neither currently has a body, whether natural or artificial: one no longer has it, the other does not yet.

To deny a population of artificial persons – hard to create at the moment but easy to imagine – moral respect on the grounds that those persons would have an artificial body instead of a natural one would in a sense be arbitrary, and would require a less obvious justification than it might seem. "One day, perhaps sooner than we think, a reflection on the ethics of treating rational and sentient machines may turn out to be more than an abstract academic exercise," McLachlan concludes.

An already quite frequent analogy within the reflections on the rights of machines is that between robots and non-human animals, with respect to which the discourse on rights has long been considered appropriate and, to a certain extent, necessary.

If having cognitive abilities similar to ours is considered a relevant factor in defining our moral standards towards other animal species and in granting certain rights to those species – as well as our position of moral superiority, and therefore of responsibility, towards them – it is not clear why even very advanced forms of artificial life, somehow placed under human protection, should not fall under that moral consideration, wrote the journalist Nathan Heller in a 2016 article in the New Yorker.

"Until we are able to identify what animals require of us, we will not be clear what we owe to robots – or what they owe us," Heller wrote. He took the example of fish: numerous studies, such as those by the English ethologist Jonathan Balcombe, indicate that some species are capable of feeling emotions such as fear, stress, joy and curiosity, which may seem strange "to people who look a sea bass in the eye and see nothing".

– Read also: Should we rethink how we kill fish?

On the basis of these studies, we should conclude that our attitude reflects mere prejudice, since the "experience of fish, and that of many creatures of a lower order, is closer to ours than one might think." This prejudice, according to Balcombe, is due to the fact that we find it easier to empathize with a hamster, which blinks and holds food in its paws, than with a fish, which has no fingers or eyelashes. And as small as fish brains are, to assume that this implies their stupidity is "like arguing that balloons can't fly because they don't have wings." If, on the other hand, we considered fish our cognitive "peers", we would have to include them in our circle of moral duties.

Another characteristic by which we tend to draw the line of our moral duties, according to Heller, is the capacity to feel pain. "Most people who were asked to drown a kitten would experience painful moral anguish," which indicates that, at some level, we regard suffering as something important. The problem with this approach is that our "antennas" for the pain of others are remarkably unreliable, as demonstrated by experimental cases in which we feel anguish even at the suffering of robots, which are notoriously incapable of suffering.

In the summer of 2014, a group of Canadian developers and researchers devised a social experiment to measure "how much robots can trust humans": they built a hitchhiking robot, called hitchBOT, programmed to engage in short conversations.

After becoming relatively popular thanks to a long hitchhiking trip in Europe and another, coast to coast, in Canada, hitchBOT was destroyed in an act of vandalism in Philadelphia in August 2015, during its third trip, in the United States. The news was followed and commented on with a degree of emotional involvement and dismay: "I cannot lie, I am still devastated by the death of hitchBOT," one reporter commented on Twitter.

Another case cited by the New Yorker as an example of the feelings involved in evaluating the suffering of non-human beings, including artificial ones, is that of a centipede-like robot built in 2007 by an American engineer at Los Alamos National Laboratory, one of the largest multidisciplinary research institutes in the world, in New Mexico. It was designed to clear tracts of land of mines in war zones, crawling forward until it had lost all of its "legs". During a military test in Arizona, an army colonel ordered the exercise stopped, calling the violence against the robot "inhumane".

Cases such as these, according to the New Yorker, show in general how complicated it is to adopt the criterion of the "moral equality" of other entities in articulating a discourse on rights, because it is not clear where the boundaries of that equality lie, and because our efforts to understand them are subject to prejudices and misperceptions.

– Read also: Is insect farming a good idea?

In general, according to Kate Darling, a researcher at the Massachusetts Institute of Technology (MIT) who specializes in robot ethics and human-robot interaction and is the author of The New Breed: What Our History with Animals Reveals About Our Future with Robots, the analogy between machines and non-human animals is nonetheless useful for several reasons. First of all, it takes us away from the persistent inclination to use humans as the measure of robots' abilities, predisposing us to a probably more useful evaluation of the various possibilities of coexistence and collaboration, as historically happened with the domestication of animals: oxen to plow fields or pigeons to carry messages, for example.

Even with respect to the widespread belief that machines are already replacing humans at work and could do so more and more in the future, the analogy with animals helps us understand that we always have a range of possibilities. Just as people harnessed the abilities of animals in certain ways in the past, people still decide how to use technologies, whether as a supplement to human work or as a way of automating it. "The danger to people's jobs isn't robots – it's business decisions, which are driven by a broader economic and political system of corporate capitalism," said Darling.

The analogy between robots and animals can also serve to frame in more correct and appropriate terms the discourse on machines' responsibility for damage caused by their actions, avoiding a repetition of episodes like the animal trials that took place in Europe between the 12th and 18th centuries.

"We've been doing this for hundreds of years of Western history, with pigs, horses, dogs, locusts and even mice," Darling told the Guardian. It seems absurd from the modern perspective of current legal systems, which hold that non-human animals lack moral will and are therefore not morally responsible for their actions. Her fear is that something similar could happen with robots, by dint of comparing machines to people and trying to apply categories developed for humans in other contexts. "We are already starting to see traces of it when we hear companies and governments say: 'It wasn't our fault, it was the algorithm'," Darling added.

The argument generally used by companies is that they should not be held accountable for the learning processes related to the technologies they develop, as it is impossible for them to foresee every possibility. But even in this case, according to Darling, it may be useful to reflect on the historical models we have adopted over time to ascertain responsibility when animals cause unexpected damage. And it may make sense, for example, to think about the distinctions we normally make between dangerous animals and safer ones, with ample room for flexibility in evaluating events depending on the context.

"If your little poodle bites someone on the street, completely unexpectedly and for the first time, you will not be punished as you would be if it had been a cheetah instead of a poodle," said Darling, adding that the key point of the argument is understanding that unpredictable behavior is not a new problem, and companies should not be allowed to claim that it is.

Darling does not rule out that robots may one day deserve rights, if they become conscious or sentient, but she believes this prospect is still quite remote. Even on this point, however, she believes the analogy with animals may be a more reliable indicator of how, in practice, the debate could develop in Western countries, probably reflecting a certain underlying "hypocrisy". "We like to think we care about the suffering of animals, but if you observe our actual behavior, we are drawn to protecting the animals we are emotionally and culturally connected to," she said.

It is therefore likely, in her opinion, that things will end up the same way with robots: we will give rights to some and not to others. We may, for example, find it more sensible to reserve preferential treatment for robots built with anthropomorphic features, with which it will probably be easier to empathize than with machines that have the same components and functioning but look like simple metal boxes.
