Recommendations concerning Social Robots

Contents

Social robots and ethics
Social robots as relationship technology
Care relationships
The ethics of pretence
When learning robots can make their own decisions

The Danish Council of Ethics' recommendations on social robots
1. Social robots and welfare technology as elements of care and therapy
2. Product responsibility and social robots
3. When social robots pretend to have an inner life
4. Social robots, monitoring and privacy


 

Social robots and ethics

The technological development of robots is no longer restricted to industrial robots that can be programmed to carry out well-defined and complex tasks in the manufacture of products such as cars, mobile telephones, refrigerators etc.

Now and in the future, we will find robots interacting with people – for use as help and support in social relations. Robots for use in social and emotional relations are being developed in research laboratories in Europe, the USA and Japan. Alongside industrial robots, there is therefore another type of robot, the ‘social robot', that can interact with people in everyday situations. In Japan in particular, large numbers of these ‘social robots' are available on the market today: there is, for example, a receptionist robot, and one can buy various robots that can assist with tasks in the home.

In Denmark, the debate on ‘social robots' has in recent years been characterised by an interest in the Japanese robot seal Paro, which was introduced in 2007 in several Danish care homes. The robot seal can be regarded as an interactive and intelligent pet. Its many sensors and inbuilt artificial intelligence mean that it reacts with whistling sounds of satisfaction or dissatisfaction in response to the user's touch and speech. It is therefore an object towards which the user can show affection, and it has a calming, therapeutic effect on some people with dementia.

Current and potential uses for social robots are concentrated in four areas: the care sector, help in the home, entertainment and military uses. In Denmark, people's first meetings with robots took place in care situations (e.g. the robot seal Paro) and in entertainment situations (e.g. lifelike robot dolls that can answer when spoken to). But the use of robots for help in the home, the real 'butler robot', is really not so far away. In Japan, one can already buy robots that can help with babysitting and simple manual tasks such as passing the user objects (e.g. lifting things for people with limited mobility).

So it is already the case that new relationships are arising between people and machines. These relationships give rise to various ethical considerations, irrespective of whether future developments move towards artificial intelligence with the independence and autonomy with which it may become possible to equip robots.

Some researchers in artificial intelligence believe that robots will in future develop into beings that are conscious, with feelings and will, almost like people. All agree that this is not just around the corner and that there are countless scientific and engineering challenges to overcome before such a goal could be reached. Some believe that it is impossible in principle, while other researchers believe that it is possible.

However, as mentioned above, there are already relationships between humans and robots that give rise to ethical problems –  even if the existing social robots are certainly not beings that could be said to have an inner life with feelings, consciousness and will.

This statement from the Danish Council of Ethics is, first and foremost, about social robots and ethical questions that arise as a result of robots functioning as relationship technology. Robots as relationship technology are already a reality and will increasingly become everyday technology, irrespective of how far it will be possible to progress in the creation of robots with artificial intelligence and varying degrees of freedom of action.

In this context, relationship technology can be understood as technology that has an effect and usefulness as a result of humans entering emotional or social relationships with technology. One could say that the social and emotional interaction between the user and the relevant piece of technology is the actual user interface when it comes to relationship technology: The robot seal Paro only has a calming effect if the user actually cares for Paro and allows him/herself to be emotionally affected by Paro. And a butler robot in the home will only function optimally if it can ‘behave' among people and respect intimacy limits, rules of politeness etc.


Social robots as relationship technology

There are three important ethical questions linked to the use of social robots as relationship technology:

  1. Firstly, there is the question of what it will mean for care relationships and intimacy limits when robots and people begin to enter social and emotional relationships with each other.
  2. Secondly, one can ask to what extent it is ethically problematical in itself if social robots become increasingly like humans in appearance, communication and behaviour and thereby pretend to be independent, sentient, active beings, i.e. ‘as if' they were human beings.
  3. Thirdly, there is the question of what it will mean if social robots can be equipped with the ability to learn and to act with varying degrees of freedom – that is, to choose one action among several alternatives, where this choice cannot be said to be unambiguously determined by the inbuilt systems, algorithms and other technology with which the manufacturer has equipped the robot.


Care relationships

In recent years, the term ‘welfare technology' has begun to play a role in the public debate both nationally and internationally, and in Denmark the state has allocated three billion kroner for investment in labour-saving technology (including welfare technology) up to 2015. The development of welfare technology could be a good area of investment for Denmark, because Denmark has a well-developed care sector (care homes, aids in the home etc.). The hope is that Denmark can play a leading role in the development of welfare technology, and that welfare technology can help to make care and physical welfare a better and perhaps more dignified experience for elderly people in the future. It could also help to improve care personnel's often physically demanding working conditions. Last but not least, the development of welfare technology is motivated by demographic conditions: in future, there will be fewer people to care for more elderly people, so technology that can reduce the quantity and scope of basic physical care is much needed.

The term welfare technology does however cover far more than social robots. It includes all types of advanced technology that can help elderly people to be more self-sufficient than otherwise. This could mean toilets that are cleaned automatically, so there is no need for a carer in this situation. One can imagine the same thing with bathrooms and the like, and many such things are already being developed or are already in production. The same technologies could also help more elderly people stay in their own homes and be more self-sufficient and independent of other people's help for cleaning and personal care.

The most striking ethical problem in the use of this type of technology – including actual social robots like the robot seal Paro – is how these things will slide into older people's everyday lives: as a supplement to, or as a replacement for, human contact. This is a relevant concern, even though the concrete experiments with the use of Paro in Danish care homes have shown, amongst other things, that older people with dementia only maintain an interest in Paro if they are helped to do so by personnel. The concern is that the social robot Paro is an example of a technology that risks being used to save on human resources.

Perhaps there is an ethical advantage in the fact that one no longer needs other people's help to the same degree as previously for the personal hygiene that infringes upon the individual's intimacy limits. The argument here would be that there is more time left over for quality human contact (such as conversation and simply being present) when family members and employed carers no longer have to spend so much time helping individuals with personal hygiene and the like.

On the other hand, there is the concern that the introduction of welfare technology – and in particular the introduction of social robots such as Paro or butler robots – will mean reduced human contact and a weaker perception of the duty to provide care among people who otherwise share emotional and familial bonds with each other. It is a worrying possibility that social robots could take the place of human contact rather than supplement it.

It's obvious that social robots can take over some of the help functions that people perform for each other today. This could change the picture of the type of assistance that we categorise as care and contact between people. If, for example, it becomes increasingly normal that welfare technology, including robots, takes over intimate physical hygiene and care, then our ideas of what human care really is will presumably change too. Perhaps human care will become more specifically and exclusively understood as company and conversation.

Ethically, this situation could be understood as either progress or a step backward. Some would say that it gives the individual more freedom and dignity if he or she is not forced to get used to help from others in intimate areas of life. Others would point out that something is lost from human care if people are shielded too much from each other's physical vulnerability and from the physical contact that is perhaps an essential ingredient in the care between a person who needs caring for and the person who provides this help.

In short, there are two types of ethical question in connection with social robots in care relationships:

Firstly, there is the question of whether social robots in care relationships will supplement or replace human contact.

Secondly, there is the question of which help activities, if any at all, should be taken over by social robots, and whether it is desirable for social robots to take over or supplement human care relationships.

Social robots like the robot seal Paro perhaps offer something completely different from what people (or pets) can offer. Elderly dementia sufferers' interaction with Paro can be both stimulating and calming, and perhaps it is even therapeutic because the person's brain is stimulated and kept in a better state than would otherwise be possible. Danish trials with Paro have not, however, demonstrated any difference between a general activating effect and a genuine exercise of brain functions that would otherwise be lost. There is therefore still no evidence of any healing effect, whilst there are experiences indicating that some elderly dementia sufferers can benefit from Paro in the sense that their mood and quality of life are improved when they are activated by the seal.

It is self-evident that this form of contact cannot be offered by people, or even by live pets, because only a robot seal like Paro likes to be stroked, caressed and spoken to at exactly the times the dementia sufferer wants to. The separate ethical question raised here is whether this therapy works and to what degree it should be prioritised in relation to other therapeutic measures. Finally, there is also the question of whether the ‘pretence' in such a situation constitutes an ethical problem in itself. Is it, in other words, problematical that the therapeutic effect depends upon a kind of ‘deception', namely that the robot appears to be interested in one's care and, in one sense or another, to be more than a thing?


The ethics of pretence

Both research and everyday life tell us that it doesn't take much for an emotional relationship to arise between humans and the things with which we surround ourselves. Nor does it take much before we begin to relate to things around us ‘as if' these things had an inner life that reminds us of our own.

We recognise the animation of the things we surround ourselves with from children's games with dolls and other objects that can be given roles in their fantasy-filled play. Advanced robot research has long recognised that it is precisely the human ability to animate and personify objects that one has to build on if we want robots that can enter into social situations with humans. This insight has been developed over several years through engineering work and psychosocial studies.

An early example of this trend in robot studies is the social robot “Kismet”, a funny-looking torso with big lips, eyes and ears. Kismet was constructed in particular to study children's interactions with the robot. It builds on the scientific understanding that only a few marked characteristics of a robot and its behaviour determine whether it invites a person to enter into an emotional relationship with it. Kismet does not look like a person, but it has some human characteristics combined with cat-like ears. Kismet can direct its attention to things, and it has different built-in scales for when it should show signs of affection or excitement. It is also interesting that Kismet has no linguistic understanding: it reacts only to the tone of a voice, irrespective of language. This is called prosody and is the same mechanism by which small children instinctively understand praise or prohibition from vocal tone alone.
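
The point about prosody can be illustrated with a small sketch: a robot of this kind reacts to coarse features of the voice (pitch, variation, tempo) rather than to the words themselves. The sketch below is purely illustrative; the feature names, thresholds and display repertoire are assumptions made for the example and are not a description of Kismet's actual design.

    # Minimal sketch of prosody-based affect cueing, in the spirit of robots like
    # Kismet that react to vocal tone rather than linguistic content. Feature
    # names, thresholds and displays are illustrative assumptions only.

    def classify_prosody(mean_pitch_hz: float, pitch_variation: float, tempo: float) -> str:
        """Map coarse prosodic features of an utterance to an affective cue."""
        if mean_pitch_hz > 250 and pitch_variation > 0.3:
            return "praise"        # high, exaggerated pitch contour
        if mean_pitch_hz < 150 and pitch_variation < 0.1:
            return "prohibition"   # low, flat, clipped delivery
        if tempo > 4.0:
            return "attention"     # rapid speech bids for the robot's attention
        return "neutral"

    def react(cue: str) -> str:
        """Choose an outward display; the robot only pretends to 'feel' accordingly."""
        displays = {
            "praise": "perk ears, widen eyes",
            "prohibition": "lower gaze, droop ears",
            "attention": "orient towards the speaker",
            "neutral": "idle scanning",
        }
        return displays[cue]

    print(react(classify_prosody(mean_pitch_hz=280, pitch_variation=0.4, tempo=3.0)))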

Both Kismet and the therapeutic robot seal are examples of the fact that it does not require fully developed cognitive artificial intelligence for people to enter into a form of social or emotional relationship with a robot. The question now is: is it in itself an ethical problem that such relationships occur between people and machines, particularly given that these relationships are based on an element of ‘pretence'? The social robot ‘pretends' to have certain feelings or inner states as a result of its interaction with humans, and this creates the perception that one is dealing with a being that requires care and with which one can communicate. Is this pretence an ethical problem, even if no one is seeking to deceive the user of the robot into believing that the robot is anything more than a machine?

There are at least two possible ethical concerns that arise from this. The first concerns the inherent unpleasantness of the element of deception, and the sense that it is not entirely dignified to offer children and elderly dementia sufferers in particular (that is, to varying degrees weak and vulnerable people) pure simulations of social relations that should otherwise occur in the care between people and other living beings. This is an immediate concern to which many people instinctively react when they are introduced to social robots like the robot seal. Their concern contains two elements: that it is bogus and not the real thing, and that the relationship one has with the machine is in itself too undignified for a human to enter into.

The other ethical concern is about the derivative effects this relationship technology could have on the understanding of inter-human relations. Perhaps extensive use of relationship technology like Paro and robot dolls can have unwanted effects on people's ways of seeing and treating each other. A research project has shown that users of Sony's previously marketed robot dog Aibo have a tendency to understand the robot dog both as a technological gadget and as a being with inner mental states. This is because Aibo awakens feelings in them as if it were a living being (irrespective of the fact that the respondents naturally understand very clearly that Aibo is a machine). It is just as clear, however, that users do not perceive Aibo as a being with a moral right to care and consideration. Several researchers have pointed out that it could be a developmental-psychological problem if children in particular are trained with ideals for social relations in which one ‘party' (which simulates a being with inner feelings) can awaken feelings in the other party (the human, the child) without these feelings being linked to a relationship of moral responsibility for each other. The concern is that this behaviour could be transferred to relations between humans and bring about increased narcissism or egoism, where one party bonds emotionally to the other without feeling any respect for the other's need for care.


When learning robots can make their own decisions

As early as 1942, the science-fiction writer Isaac Asimov formulated three laws of robotics. The three laws were intended as rules to ensure that robots that made decisions for themselves did not act inappropriately towards humans. In brief, these laws are that a robot must never harm a human being; a robot must obey people unless this conflicts with the first law; and finally, a robot should protect its own existence insofar as this does not conflict with the first two laws.
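
The strict precedence among the three laws can be sketched as a simple priority check. The predicates and example actions in the sketch below are hypothetical placeholders; deciding what they should return in a concrete situation is exactly where the difficulty that Asimov's stories explore lies.

    # Sketch of the strict precedence among Asimov's three laws. The action
    # descriptions and predicates are hypothetical placeholders; deciding what
    # they should be in a real situation is where the difficulty lies.

    def first_violated_law(action: dict) -> int:
        """Return the highest-priority law the action violates (1-3), or 0 if none."""
        if action.get("harms_human", False):
            return 1
        if action.get("disobeys_order", False):
            return 2
        if action.get("endangers_self", False):
            return 3
        return 0

    def choose_action(candidates: list) -> dict:
        """Prefer no violation; otherwise prefer violating only a lower-priority law."""
        def badness(action: dict) -> int:
            law = first_violated_law(action)
            return 0 if law == 0 else 4 - law   # violating law 1 is worst
        return min(candidates, key=badness)

    candidates = [
        {"name": "ignore the order", "disobeys_order": True},
        {"name": "protect itself at a human's expense", "harms_human": True},
        {"name": "obey the order and risk its own damage", "endangers_self": True},
    ]
    print(choose_action(candidates)["name"])   # obeys the order, accepting damage to itself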

On the basis of these three laws, Asimov wrote a number of stories that show how difficult it is to set simple rules for ethically acceptable behaviour. Robots that have to decide on complex ethical questions are still some way in the future. Nevertheless there is research and professional discussion taking place into problems concerning safety and technology in robots that are intended to make decisions for themselves.

This is because the possibilities of the future, which can sound like fiction today, are relevant to the ethical discussion, since it also concerns possibilities that, according to professionals in the area, could be realised at some point in time. More immediately, social robots now and in the near future will have degrees of freedom of action that give rise to ethical considerations, even if these degrees of freedom cannot in any reasonable way be compared with what we understand as normal human freedom of action.

Whatever else one might read into the terms human self-determination and freedom, it is clear that social robots will, even in the foreseeable future, be capable of learning from experience, e.g. as a ‘butler' in a home. And in this environment, they will have some degree of freedom of choice in concrete situations.

The robot will of course not have unlimited possibilities, but the idea of social robots is that they should be equipped with senses and algorithms making them capable of learning in the same way as children, that is, by humans showing and explaining how one carries out a given action (e.g. passing someone an object).

The immediate practical problem in connection with learning robots is the question of responsibility. If a robot is capable of learning in its interaction with a given environment, who is responsible for the outcome of a ‘new' action that the robot carries out, one it was not deterministically programmed to carry out in a precisely prescribed way?

Can one say that the robot company is freed from responsibility because the robot is equipped with a learning system that is supposed to make its decision-making flexible and therefore not completely predictable? Who is responsible if the robot goes wrong and does not carry out what the owner intended with a given order or action? If, for example, the robot confuses one object with another, which is destroyed because it is gripped too hard – or worse, causes injury to a human being – and if this can be ascribed to accidental, inappropriate learning, who is responsible then, when flexible, non-predictable learning is the very point of the robot's way of functioning?

One might object that the allocation of responsibility requires that the person holding the responsibility also has full control over the situation. This is, however, not the case. Parents are to some degree responsible for their non-adult children's actions, irrespective of whether the parents can be said to be directly guilty of those actions. In the same way, an employer, e.g. a factory owner, is responsible for accidents in his workplace, even if he is not directly guilty of them and even if he could not reasonably have foreseen or prevented them. Finally, a manufacturer is responsible for upholding safety standards, so that the user of the product is not exposed to injury when using the product.


The Danish Council of Ethics' recommendations on social robots

The Danish Council of Ethics believes that the development of robots for use among people, as everyday help, as entertainment or as therapy, is a development that in time will involve more and more ethical considerations, including some which are currently too difficult to foresee or describe. It is therefore important that robot technology is followed and commented upon from an ethical standpoint. The following brief recommendations should be regarded as the Council's guidelines on some of the most central and relevant problems in the area of social robots and relationship technology. The recommendations can be used by politicians and other actors in the area as good points to bear in mind when legislating in this area and when new technology is being developed.


1. Social robots and welfare technology as elements of care and therapy

The Danish Council of Ethics has a positive attitude to the use of social robots like, for example, the robot seal Paro in situations where such technology can be included as a supplement to other care and therapy. The Danish Council of Ethics also finds that welfare technology should be promoted so that it can take over parts of the physical care which, for many people, infringe on intimacy limits; this could also lighten the sometimes heavy physical work carried out by care personnel.

It is critically important that these technologies are never introduced as a replacement for real human contact, company and care. On the contrary, the Danish Council of Ethics believes that this technology should only be used with a view to saving human resources for the forms of care which require real human contact – namely care through tenderness, touch, personal presence and conversation. In the same breath, however, the Danish Council of Ethics stresses that good care is often linked to actions that are connected with necessity or dependency: tenderness, physical contact, personal presence and conversation are forms of care that arise from a need for help in the form of support for physical care, visiting etc.

The Danish Council of Ethics believes that technological development in this area should be characterised by the principle of promoting the opportunities for human contact and not replacing human contact with things that, in this context, are totally inadequate surrogate products.

The Danish Council of Ethics believes that the development of social robots and relationship technology for use with vulnerable citizens should be followed carefully from an ethical viewpoint. The Council therefore points out the importance of submitting real research experiments with, for example, the robot seal Paro for approval by the science ethics committee system. In care homes, social robots like Paro can be introduced as part of the general care. But a project must be approved if it involves real research in which systematic observations of the interaction between humans and relationship technology are made with a view to measuring the technology's therapeutic effect. The Danish Council of Ethics believes it would be beneficial for such experiments to be subjected to ongoing ethical evaluation by science ethics committees. This process would provide assurance that the social, ethical and sociological considerations have been weighed up before the technologies are introduced into everyday care in care homes.

In the Council's opinion, there is a risk that technologies such as the robot seal Paro and welfare technology for physical care could, in tight economic circumstances, be used to reduce human care rather than supplement it. This need not be a manifestation of ill will: a technology that is introduced with the good intention of being a supplement can come very close to being used for the sake of convenience and as a pure emergency arrangement, particularly in cases where there are too few carers or too little money to pay carers. The Danish Council of Ethics therefore thinks that the introduction of this type of technology in the care sector can increase the need to cement and develop strong professional and human standards in the care sector – standards that will continue to ensure that the focus is on providing the best possible human care and thereby help the elderly to have as good a life as possible.

The Danish Council of Ethics also recommends that decisions on introducing robot technology in any concrete care situation should be made in a forum in which the recipients of care are represented and where ethical considerations are part of the background to such decisions.

The Danish Council of Ethics therefore takes a mainly positive attitude to the use of advanced technology in emotional therapy and in aiding physical care, as long as it is merely a supplement to human contact or, even better, if it can function as a strengthening of human contact in the sense that the technology frees more human resources for care in the form of tenderness, company and conversation. The removal of human contact from physical care would, however, doubtless involve a form of sterilisation – a lack of consideration for the importance of physical contact. For this and other reasons, a good balance must be maintained between technologically assisted care and manual, human care. The Danish Council of Ethics also believes that it should be left to those requiring care to decide whether they want one form of care or the other.


2. Product responsibility and social robots

In the longer term, one must expect that social robots will be used as ‘butlers' in the home or in institutions where there are people with a need for physical help. Social robots will be flexible in their work and will become more and more adaptable to the individual user's own environment and habits etc.

This flexibility is very closely linked to the fact that social robots will be equipped with the ability to learn new actions that were not pre-programmed before the robot was taken into the home. The ability to learn is built into the robot, but the individual actions are not predictable in detail, since it is precisely the point that the user will be able to teach the robot to perform the specific actions that the user requires in his or her particular environment. It is the well-supported vision of robot researchers that this teaching will take place in a way that resembles the way people are taught – that is, primarily by simply showing the robot what to do (for example, take a carton out of the refrigerator and put it on the table).
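
A minimal sketch can illustrate this idea of ‘programming by demonstration': the user guides the robot through an action once, the robot stores the movement under a spoken label and can later replay it on command. All names and data structures in the sketch are illustrative assumptions, not the interface of any existing robot.

    # Minimal sketch of 'programming by demonstration': the user guides the robot
    # through an action once, the robot stores the trajectory under a spoken label
    # and can replay it later on command. Names and structures are illustrative
    # assumptions, not an existing robot's interface.

    from typing import Dict, List, Tuple

    Waypoint = Tuple[float, float, float]          # a gripper position in x, y, z

    class DemonstrationLearner:
        def __init__(self) -> None:
            self.skills: Dict[str, List[Waypoint]] = {}

        def record(self, label: str, trajectory: List[Waypoint]) -> None:
            """Store the trajectory the user has just demonstrated under a spoken label."""
            self.skills[label] = trajectory

        def execute(self, label: str) -> List[Waypoint]:
            """Replay a learned skill; unknown commands are refused rather than guessed at."""
            if label not in self.skills:
                raise KeyError(f"The skill '{label}' has not been demonstrated yet")
            return self.skills[label]

    robot = DemonstrationLearner()
    robot.record("fetch the milk", [(0.0, 0.0, 0.0), (0.4, 0.1, 0.3), (0.8, 0.0, 0.9)])
    print(robot.execute("fetch the milk"))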

The social robot's ability to learn creates an ethical problem relating to responsibility. Can the company behind the robot be held responsible for a mistake that the robot makes, if the action in question was not programmed into the robot in advance? There is, of course, no question of robots being able to learn anything whatsoever at any level of abstraction. The robot is limited by the extent of its ‘inner' artificial intelligence (speech recognition, visual recognition, understanding of space, imitation of social behavioural rules etc.) as well as by its ‘exterior' (its mechanical movement and sensing apparatus).

We are not focusing primarily here on spectacular mistakes. This is fundamentally about the robot's ability to recognise objects, to connect the user's names for specific objects to the right objects ‘out in reality', and to react to information in an appropriate manner. One could imagine that a robot is asked to grasp an object but instead takes a hard grip on a person's arm, perhaps because the person is standing where the object usually is and the robot is incapable of visually differentiating one from the other. Who is responsible for this mistake when each action and each function has not been built in detail into the robot's design from the start? Is it the producer, or is it the person who took the robot into use that bears the responsibility?

The Danish Council of Ethics believes that the partial unpredictability of a learning robot's actions should under all circumstances mean that there are precise and restrictive rules determining which purely physical ‘abilities' a robot may be equipped with if it is to be marketed for use in the home. In addition, there should be a requirement that the social robot's systems for learning, sensing and moving are tested against well-consolidated professional standards. These rules should ensure that the flexible, learning robot cannot cause significant material damage or personal injury, even in cases where it has ‘learnt something wrongly'. The social robot's partial dependency on consistent teaching also gives reason for extra vigilance and extra safety rules in relation to use with people who suffer from dementia or are otherwise not fully responsible for their actions.


3. When social robots pretend to have an inner life

An important part of the idea of social robots is that they are easy to coexist with and that they can carry out their roles and work by communicating naturally with people, in ways that remind us of the way people communicate with each other. A social robot does not have a keyboard where the user can key in what he or she wants done. A social robot must be able to react promptly when the user looks at the robot or addresses it. A social robot must also be able to read signs of the user's emotional state, using its sensors to decode the user's verbal language, body language, body temperature etc., and it must be able to react appropriately to these signals (e.g. not disturb the user if he or she looks as though they do not want to be disturbed).
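
The kind of judgement described here, whether to approach the user or hold back, could in its simplest form look like the rule-based sketch below. The sensor cues and thresholds are invented for the example and are not drawn from any particular robot.

    # Illustrative sketch of the 'do not disturb' judgement a social robot must
    # make from coarse sensor cues. The cues and thresholds are invented for the
    # example and are not drawn from any particular robot.

    from dataclasses import dataclass

    @dataclass
    class UserCues:
        speaking_to_robot: bool    # did the user address the robot directly?
        facing_robot: bool         # gaze or body orientation towards the robot
        voice_stress: float        # 0.0 (calm) to 1.0 (agitated), estimated from prosody
        activity_level: float      # 0.0 (idle) to 1.0 (busy), estimated from motion sensing

    def should_engage(cues: UserCues) -> bool:
        """Engage only when the user invites contact and does not appear busy or stressed."""
        if cues.speaking_to_robot:
            return True                                   # a direct address always gets a response
        if cues.voice_stress > 0.7 or cues.activity_level > 0.6:
            return False                                  # user seems agitated or occupied: hold back
        return cues.facing_robot                          # otherwise follow the user's orientation

    print(should_engage(UserCues(False, True, 0.2, 0.3)))     # True: calm, idle, looking over
    print(should_engage(UserCues(False, True, 0.2, 0.8)))     # False: busy, do not disturb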

This basic technological concept means that the interaction between people is partially copied and transferred to the relationship between the human and machine (the social robot). One could say that social robot technology is working in a focused manner – to varying degrees – on strengthening the general human tendency to personify and animate objects in the environment. Social robots that partially resemble people or pets are equipped with mechanisms that pretend to be expressions of inner feelings – e.g. simple facial expressions, focusing eyes or satisfied sounds like the therapeutic robot seal Paro (see above). The technology is designed so that the user ‘plays the game' and in his or her actions and interaction with the social robot actually interprets the exterior signs in the robot as something corresponding to an emotional state.

Is there something ethically problematical in this ‘game' or the pretence that there is a mutual emotional or interpersonal relationship between the human and machine? The Danish Council of Ethics believes that there are at least two ethical aspects to this pretence that both robot firms and the recipient society should consider.

Firstly, the pretence of emotional states and inner life in the robot operates on a continuum from a kind of deception to the pure ‘game' in which the user is completely aware of the robot's status as a mechanical play- and care-instrument. The degree of deception will depend on the user's age and life circumstances. In other words, the user is more deceived if he or she is a child or an adult who is not fully responsible (e.g. people suffering from severe dementia). The Danish Council of Ethics does not believe that the element of deception or pretence is necessarily a problem – even where the user is a person who is not fully responsible. This is on condition that the contact with the social robot, such as Paro, has a beneficial effect on the user's well-being. The Council believes, however, that the persons responsible (e.g. carers for elderly dementia sufferers) should continually evaluate whether the user's dignity is suffering, how the user's identity is affected and, not least, how the social relations around the user are being affected. There should, for example, be an evaluation of whether the dementia sufferer is being infantilised and looked down on by others because that person is preoccupied with a machine, perhaps to such a degree that the person in question is not even aware that it is a machine.

Secondly, one can ask whether extensive use of social robots could lead to a stunting of human emotional life. This ethical question arises irrespective of the deception element, because the question is about how the game itself or the conscious pretence of an emotional relationship with social robots could have undesirable psychosocial consequences.

It is with regard to a risk of stunting human emotional life that some robot researchers and debaters believe that there should be rules for how we deal with robots that, in some way or another, enter into emotional relationships with humans and which perhaps in time will also resemble humans in appearance and behaviour. For if one becomes used to treating these human-like machines as one likes, how will this affect respect and empathy for flesh-and-blood human beings?

As mentioned above, user surveys have shown that users become emotionally bound to social robots. There can also be unfortunate psychological consequences because the robot realistically imitates a relationship to a human being or a pet, whilst the actual relationship is still a one-sided relationship where only one party benefits. One could worry that this could create a precedent for egocentric social relations among people; relationships where one party is out to get something emotional from the other without feeling any moral responsibility for the other's well-being.

This concern, and the discussions concerning social robots, are reminiscent of the concerns that have been put forward in relation to the use of (violent) computer games, avatars and various ‘artificial identities' in social networks on the internet. Could these new relationships and communication forms lead to a degradation of human relationships? The Danish Council of Ethics has no clear answer to this question.

The Council will however point out that legislators and society as a whole should be attentive to this development and possibly regulate the market particularly in areas where social robots are marketed to children and young people. There could be a real need here for protecting children and young people against undesirable psychosocial developments.


4. Social robots, monitoring and privacy

Technologies for monitoring and registering human actions and consumption patterns are, as we know, widespread in society. There are also many different regulations on how and when such monitoring can take place, what information may be registered, and how this information can be stored by authorities.  The ethical debate on the sanctity of private life in relation to the usefulness of monitoring and registering personal information is therefore not a new debate. Nor is it the case that the introduction of social robots in this context raises new ethical questions in connection with the sanctity of private life and the use of personal information. It is however the case that the use of social robots in the home or in institutions is a use of technology that could involve the classical ethical considerations concerning privacy and the exposure of personal information to unauthorised persons.

A social robot can obtain information about its user in two ways, and this information particularly raises questions of monitoring and confidentiality if it is passed on to other systems and potentially to other persons, authorities etc. The first form of information corresponds to the information that a normal computer can contain: factual information about personal details, health information, photographs, bank details and much other sensitive personal information.

The other form of information which social robots could gather is a little more peculiar to this type of technology: a well-functioning social robot will be able to register the user's habits, consumption patterns etc., for example for the purpose of ordering fresh groceries over the internet, contacting the chemist and so on. The possibilities of this technology will, in time, become numerous. In this way the robot will store information about the user's actions that is more behavioural in character. This type of information is a little like the consumer analyses that take place automatically when one buys something from large shopping sites on the internet (e.g. books and music). If the user permits it, the internet shop places a so-called ‘cookie' on the user's computer, which then makes it possible to send targeted offers based on previous behaviour. This type of behaviour-based information is something that a social robot will be very well suited to collecting and sending on to other systems, e.g. internet-based shops, authorities etc. But the social robot's learning relationship with its user also makes it possible for it to store information about more intimate things (toilet visits, number of visits from other people, number of hours of television watched on a certain channel etc.).

The Danish Council of Ethics believes that social robots should be equipped with as little access to external information systems as possible – and there should be very high security requirements in connection with the social robot's access to the internet and other systems in cases where the robot is of a type that can store information about sensitive personal details (whether this includes static or behavioural data). On the other hand, it is clear that a social robot will be considerably more useful if it can communicate easily electronically with, for example, services on the internet.

The Danish Council of Ethics believes that social robots should be broadly included in the same type of regulation as that which currently applies to computers and internet systems. Here there are security requirements on software developers, internet access providers and internet service providers such as shops and banks, to ensure that one can use the internet without running a significant risk of sensitive personal information being spread without the user's knowledge. With a computer and internet access, the user can of course store exactly the information about him/herself that he or she wishes, and the user can choose to pass this information on to those that he or she wants to receive it, e.g. on the internet. But the Danish Council of Ethics believes that there could be a need for increased consumer protection in connection with those social robots that are technically capable of spreading information about behaviour and health.

Regulation of this area could take into consideration that security levels have to differ for different types of information. The user may, for example, be able to instruct the robot to keep the refrigerator stocked with food and, in this context, allow a consumption pattern to be passed on to services that can inform the user, via the robot, about offers on certain foods etc. (to mention just one example). One can also imagine that the robot is only technically capable of such functionality within a narrow range of information categories. When it comes to information that the robot stores as a result of ongoing contact with its user (e.g. information about medicine intake, reading habits, interests etc.), one can imagine that the robot is ‘forced' to behave like a normal computer – that is, it will only be possible to pass on the information if the user manually uses the robot's own user interface, or uses a computer to which the robot sends the information on command.
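
The layered model described here, where routine consumption data may be shared under a standing permission while more intimate behavioural and health data may only leave the robot on an explicit command, could be expressed as a simple policy check. The categories and rules in the sketch below are assumptions made for illustration, not a proposal for specific legislation.

    # Illustrative sketch of a layered data-sharing policy: routine consumption
    # data may leave the robot under a standing permission, while health and
    # behavioural data may only be exported on an explicit, manual command from
    # the user. Categories and rules are assumptions made for the example.

    from enum import Enum

    class Category(Enum):
        CONSUMPTION = "consumption"    # e.g. groceries to reorder
        HEALTH = "health"              # e.g. medicine intake
        BEHAVIOUR = "behaviour"        # e.g. visits received, viewing habits

    # Standing permissions the user has granted once, in advance.
    standing_permissions = {Category.CONSUMPTION}

    def may_transmit(category: Category, explicit_user_command: bool) -> bool:
        """Decide whether a piece of stored information may leave the robot."""
        if explicit_user_command:
            return True                              # a manual export is always the user's decision
        return category in standing_permissions     # otherwise only pre-approved categories

    print(may_transmit(Category.CONSUMPTION, explicit_user_command=False))   # True
    print(may_transmit(Category.HEALTH, explicit_user_command=False))        # False
    print(may_transmit(Category.BEHAVIOUR, explicit_user_command=True))      # True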


Updated 21st October 2010