To benefit from the social capabilities of a robot math tutor, instead of being distracted by them, a novel approach is needed in which the math task and the robot's social behaviors are better intertwined. We present concrete design specifications of how children can practice math via a personal conversation with a social robot and how the robot can scaffold its instructions. We evaluated the designs in a three-session experimental user study (n = 130, 8-11 y.o.). Participants' math performance improved over time when the robot scaffolded its instructions. Furthermore, the robot felt more like a friend when it personalized the conversation.
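The abstract leaves the scaffolding mechanism itself unspecified. As a minimal illustrative sketch, not the study's actual design, adaptive scaffolding is often implemented by raising the level of support after an error and fading it after success; all hint texts and level names below are hypothetical:

```python
# Hypothetical sketch of adaptive scaffolding: the robot raises its level of
# support after a wrong answer and fades it after correct ones. Hints and
# thresholds are illustrative, not taken from the study.

HINT_LEVELS = [
    "Try again.",                             # minimal support
    "Remember to line up the tens.",          # strategic hint
    "Let's solve the first step together.",   # worked-example support
]

def next_hint(level: int, answer_correct: bool) -> int:
    """Return the new scaffolding level after a child's answer."""
    if answer_correct:
        return max(level - 1, 0)                      # fade support on success
    return min(level + 1, len(HINT_LEVELS) - 1)       # add support on failure

# Example: a wrong answer at level 0 moves the robot to the strategic hint.
level = next_hint(0, answer_correct=False)
print(HINT_LEVELS[level])  # -> "Remember to line up the tens."
```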
The challenges facing primary education are significant: a growing teacher shortage, relatively high administrative burdens that contribute to work-related stress, and increasing diversity among children in the classroom. A promising new technology that can help teachers and children meet these challenges is the social robot. These physical robots often use artificial intelligence and can communicate with children by taking on social roles, such as that of a fellow classmate or teaching assistant. Previous research shows that social robots can, in several respects, lead to better results than traditional educational technologies. However, social robots not only bring opportunities but also raise new ethical questions. In my PhD research, I investigated the moral considerations of different stakeholders, such as parents and teachers, to create the first guideline for the responsible design and use of social robots in primary education.

Various research methods were used for this study. First, a large international literature review on the advantages and disadvantages of social robots was carried out, in which 256 studies were ultimately analysed. Focus group sessions were then held with stakeholders: a total of 118 parents of primary school children, representatives of the robotics industry, educational policymakers, government education advisors, teachers and primary school children contributed. Based on the insights from the literature review and the focus group sessions, a questionnaire was drawn up and distributed to all stakeholders. Using the 515 responses, we then classified the stakeholders' moral considerations. In the last study, based on in-depth interviews with teachers who used robots in their daily teaching and who had supervised the child-robot interactions of more than 2,500 unique children, we studied the influence of social robots on children's social-emotional development.

Our research shows that social robots can have both advantages and disadvantages for primary education. The diversity of disadvantages makes the responsible implementation of robots complex. However, despite their concerns, all stakeholder groups viewed social robots as a potentially valuable tool. Many stakeholders are concerned about a possible negative effect of robots on children's social-emotional development. Our research shows that social robots, when used responsibly, currently do not seem to harm children's social-emotional development, although some children seem to be more sensitive to excessive attachment to robots. Our research also shows that how people think about robots is influenced by several factors. For example, low-income stakeholders have a more sceptical attitude towards social robots in education. Other factors, such as age and level of education, were also strong predictors of stakeholders' moral considerations.

This research has resulted in a guideline for the responsible use of social robots as teaching assistants, which can be used by primary schools and robot builders. The guideline provides schools with practical tools, such as involving parents in advance and using robots to encourage human contact. School administrators are also given insight into possible reactions from parents and other parties involved. The guideline also offers recommendations for safeguarding privacy, such as data minimization and improving the technical infrastructure of schools and robots, which still often leaves much to be desired.
In short, the findings from this thesis provide a solid stepping stone for schools, robot designers, programmers and engineers to develop and use social robots in education in a morally responsible manner. This research has thus paved the way for more research into robots as assistive technology in primary education.
Hospitalisation is stressful for children. Play material is often offered for distraction and comfort. We explored how contact with the social robot PLEO could positively affect a child's well-being. To this end, we performed a multiple case study on the paediatric wards of two hospitals. Child life specialists offered PLEO as a therapeutic activity to children in a personalised way, for a well-being-related purpose, in three to five play-like activity sessions during their hospital visits or stay. Robot–child interaction was observed; care professionals, children and parents were interviewed. Applying direct content analysis revealed six categories of interest: interaction with PLEO, role of the adults, preferences for PLEO, PLEO as buddy, attainment of predetermined goal(s) and deployment of PLEO. Four girls and five boys, aged 4–13, were offered PLEO as a relief from stress or boredom or for physical stimulation. All but one started interacting with PLEO and showed behaviours such as hugging, caring or technical exploration, promoting relaxation, activation and/or making contact. Interaction with PLEO contributed to achieving the well-being-related purpose for six of them. PLEO was perceived as attractive for eliciting play. Although the data are limited, these promising results suggest that the well-being of hospitalised children might be fostered by a personalised PLEO offer.
Artificial intelligence (AI) is a technology which is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere to and will adhere to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but that the United States has been catching up over the last eighteen months.
Robot tutors provide new opportunities for education. However, they also introduce moral challenges. This study reports a systematic literature review (N = 256) aimed at identifying the moral considerations related to robots in education. While our findings suggest that robot tutors hold great potential for improving education, multiple values of both (special needs) children and teachers are impacted, positively and negatively, by their introduction. Positive values related to robot tutors are psychological welfare and happiness, efficiency, freedom from bias and usability. However, there are also concerns that robot tutors may negatively impact these same values. Other concerns relate to the values of friendship and attachment, human contact, deception and trust, privacy, security, safety and accountability. All these values relate to children and teachers. The moral values of other stakeholder groups, such as parents, are overlooked in the existing literature. The results suggest that, while there is potential for applying robot tutors in a morally justified way, there are important stakeholder groups that need to be consulted so that their moral values are also taken into consideration when implementing tutor robots in an educational setting. (from Narcis.nl)
This study aimed (1) to examine the contribution of robot ZORA to achieving therapeutic and educational goals in rehabilitation and special education for children with severe physical disabilities, and (2) to discover the roles professionals attribute to robot ZORA when it is used in robot-based play interventions in rehabilitation and special education for children with severe physical disabilities. A multi-centre mixed-methods study was conducted among children with severe physical disabilities in two centres for rehabilitation and one school for special education. The participating children played with robot ZORA six times over a period of six weeks, in individual or group sessions. Quantitative data were gathered about the contribution of ZORA to reaching the individual goals of all participating children, using the Individually Prioritized Problem Assessment (IPPA). Playfulness was measured with a visual analogue scale (0–10), and children could indicate whether they liked the sessions on a scale consisting of smileys. Video-stimulated recall interviews were used to collect qualitative data about the different roles of ZORA. In total, 33 children and 12 professionals participated in the study. The IPPA results showed a significant contribution of ZORA to the achievement of the children's individual goals, with the largest contributions in the domains of movement skills and communication skills. Playfulness of the sessions was 7.5 on average, and 93% of the sessions were evaluated as 'enjoyable' by the children. Overall, ZORA can contribute positively to the achievement of individual goals for children with severe physical disabilities. According to the participating professionals, the most promising roles in which robot ZORA can be used are those of motivator, rewarder or instructor.
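To make the IPPA outcome measure concrete, a minimal sketch follows. The scoring scheme (importance times difficulty per prioritised goal, averaged, with effectiveness as the pre/post difference) reflects common IPPA usage rather than this study's exact protocol, and all goals and ratings below are invented for illustration:

```python
# Hypothetical IPPA-style scoring sketch. In common IPPA usage, each goal is
# rated for importance (1-5) and current difficulty (1-5) before and after the
# intervention; a goal's score is importance * difficulty, the total score is
# the mean over goals, and effectiveness is the pre/post difference.
# All goals and ratings here are invented, not data from this study.

def ippa_score(ratings: list[tuple[int, int]]) -> float:
    """Mean of importance * difficulty over all prioritised goals."""
    return sum(imp * diff for imp, diff in ratings) / len(ratings)

# (importance, difficulty) per goal, before and after the ZORA sessions.
pre = [(5, 4), (4, 4)]   # e.g. "reach forward", "initiate contact"
post = [(5, 2), (4, 3)]  # same goals, lower perceived difficulty

effectiveness = ippa_score(pre) - ippa_score(post)
print(f"IPPA effectiveness: {effectiveness:+.1f}")  # positive = improvement
```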
The use of the Zora robot was monitored and evaluated in 14 nursing care organizations (15 locations). The Zora robot, a NAO robot with dedicated software, is designed as a social robot and is used for pleasure and entertainment or to stimulate the physical activity of clients in residential care. In the first year, the aim was to monitor and evaluate how the care robot is used in daily practice. In the second year, the focus was on evaluating whether the use of Zora by care professionals could be extended to more groups and other types of clients. Interviews, questionnaires and observations were used to track progress in the use of the robot and to identify facilitators and barriers. Care professionals experienced several barriers in the use of the robot (e.g., start-up time and software failures). The opportunity to discuss their experiences during project team meetings was seen as a facilitator in the project. Furthermore, they mentioned that the Zora robot had a positive influence on clients and created added value for the care professionals by bringing fun to their work.
Through a qualitative examination, the moral evaluations of Dutch care professionals regarding healthcare robots for eldercare are researched in terms of biomedical ethical principles and non-utility. Results showed that care professionals primarily focused on maleficence (potential harm done by the robot) deriving from diminished human contact. Worries about potential maleficence were more pronounced among intermediate-educated than among higher-educated professionals. However, both groups deemed companion robots more beneficial than devices that monitor and assist, which were deemed potentially harmful both physically and psychologically. The perceived utility was not related to the professionals' moral stances, countering prevailing views. Increasing patients' autonomy by applying robot care was not part of the discussion, and justice as a moral evaluation was rarely mentioned. Awareness of the care professionals' point of view is important for policymakers, educational institutes, and developers of healthcare robots, so that designs can be tailored to the wants of older adults along with the needs of the much-undervalued eldercare professionals.
From the article: Using Roger Crisp's arguments for well-being as the ultimate source of moral reasoning, this paper argues that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong or morally right. Although these moral concepts should not be used to program robots, they are not to be abandoned by humans, since there are still reasons to keep using them: as an assessment of the agent, to take a stand, or to motivate and reinforce behaviour. Because robots are completely rational agents, they do not need these additional motivations; a concept of what promotes well-being suffices. How a robot knows which action promotes well-being to the greatest degree is still up for debate, but a combination of top-down and bottom-up approaches seems to be the best way. The final publication is available at IOS Press through http://dx.doi.org/10.3233/978-1-61499-708-5-184
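To illustrate the paper's core claim, that a fully rational robot could act on a concept of well-being alone, without notions of obligation or moral rightness, consider a minimal sketch. The actions and well-being scores are invented for illustration, and a real well-being model would have to come from the top-down/bottom-up combination the paper mentions:

```python
# Hypothetical sketch: an agent that needs no concept of "morally right" or
# "obligatory" and simply maximises expected well-being, as the paper argues
# a fully rational robot could. Actions and scores are illustrative only.

def expected_wellbeing(action: str) -> float:
    """Toy well-being model; a real one would be learned bottom-up
    and constrained top-down, as the paper suggests."""
    scores = {"fetch medicine": 0.9, "play music": 0.4, "do nothing": 0.1}
    return scores.get(action, 0.0)

def choose_action(actions: list[str]) -> str:
    # No moral-obligation check: the action promoting the most
    # well-being is chosen directly.
    return max(actions, key=expected_wellbeing)

print(choose_action(["fetch medicine", "play music", "do nothing"]))
# -> "fetch medicine"
```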